Sample records for kernel particle preparation

  1. Preparation of uranium fuel kernels with silicon carbide nanoparticles using the internal gelation process

    NASA Astrophysics Data System (ADS)

    Hunt, R. D.; Silva, G. W. C. M.; Lindemer, T. B.; Anderson, K. K.; Collins, J. L.

    2012-08-01

    The US Department of Energy continues to use the internal gelation process in its preparation of tristructural isotropic coated fuel particles. The focus of this work is to develop uranium fuel kernels with adequately dispersed silicon carbide (SiC) nanoparticles, high crush strengths, uniform particle diameter, and good sphericity. During irradiation to high burnup, the SiC in the uranium kernels will serve as getters for excess oxygen and help control the oxygen potential in order to minimize the potential for kernel migration. The hardness of SiC required modifications to the gelation system that was used to make uranium kernels. Suitable processing conditions and potential equipment changes were identified so that the SiC could be homogeneously dispersed in gel spheres. Finally, dilute hydrogen rather than argon should be used to sinter the uranium kernels with SiC.

  2. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  3. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  4. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  5. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  6. A Kernel-based Lagrangian method for imperfectly-mixed chemical reactions

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael J.; Pankavich, Stephen; Benson, David A.

    2017-05-01

    Current Lagrangian (particle-tracking) algorithms used to simulate diffusion-reaction equations must employ a certain number of particles to properly emulate the system dynamics, particularly for imperfectly mixed systems. The number of particles is tied to the statistics of the initial concentration fields of the system at hand. Systems with shorter-range correlation and/or smaller concentration variance require more particles, potentially limiting the computational feasibility of the method. For the well-known problem of bimolecular reaction, we show that using kernel-based, rather than Dirac delta, particles can significantly reduce the required number of particles. We derive the fixed width of a Gaussian kernel for a given reduced number of particles that analytically eliminates the error between kernel and Dirac solutions at any specified time. We also show how to solve for the fixed kernel size by minimizing the squared differences between solutions over any given time interval. Numerical results show that the width of the kernel should be kept below about 12% of the domain size, and that the analytic equations used to derive kernel width suffer significantly from the neglect of higher-order moments. The simulations with a kernel width given by least squares minimization perform better than those made to match at one specific time. A heuristic time-variable kernel size, based on the previous results, performs on par with the least squares fixed kernel size.
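
    To make the kernel-particle idea concrete, the sketch below compares the pairwise reaction probability of Dirac-delta particles with that of finite-width Gaussian kernel particles for a bimolecular reaction. It is a minimal 1-D sketch under assumed forms: equal diffusion coefficients and a co-location density whose variance combines the diffusive spread (4DΔt) with the two kernel bandwidths (2h²). The constants and names (colocation_density, reaction_probability) are illustrative, not the authors' implementation.

    ```python
    import numpy as np

    def colocation_density(r, D, dt, h):
        """1-D co-location (encounter) density of two particles separated by r.

        Assumes both particles diffuse with coefficient D over the step dt and carry
        Gaussian kernels of standard deviation h (h = 0 recovers Dirac particles).
        The combined variance below is a modeling assumption, not the paper's exact form.
        """
        var = 4.0 * D * dt + 2.0 * h**2
        return np.exp(-r**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

    def reaction_probability(r, kf, mass, D, dt, h=0.0):
        """Probability that an A-B particle pair separated by r reacts during dt."""
        return kf * mass * dt * colocation_density(r, D, dt, h)

    # Compare Dirac-delta particles with kernel particles for one pair (toy numbers).
    D, dt, kf, mass, r = 1e-3, 0.1, 10.0, 1e-2, 0.05
    p_dirac  = reaction_probability(r, kf, mass, D, dt, h=0.0)
    p_kernel = reaction_probability(r, kf, mass, D, dt, h=0.02)
    print(f"P(react), Dirac: {p_dirac:.4e}  kernel: {p_kernel:.4e}")
    ```

    The wider kernel spreads each particle's mass over space, which is what allows a reduced particle count to emulate the same reaction statistics.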

  7. Effect of surfactant concentration and solidification temperature on the characteristics and stability of nanostructured lipid carrier (NLC) prepared from rambutan (Nephelium lappaceum L.) kernel fat.

    PubMed

    Witayaudom, Pimchanok; Klinkesorn, Utai

    2017-11-01

    Nanostructured lipid carrier (NLC) was fabricated from rambutan (Nephelium lappaceum L.) kernel fat stabilized with Tween 80 in the present work. The influence of the Tween 80 concentration (0.025, 0.05, 0.1, 0.2, 0.5, and 1.0 wt%) and solidification temperature (5 and 25°C) on the characteristics and stability of the NLC was investigated. The results showed that an increase in the Tween 80 concentration decreased the zeta-potential (ζ-potential) and particle size (Z-average), with no significant effect on the polydispersity index (PDI). Lipid particles in the NLC at all Tween 80 concentrations had a tendency to grow, and the PDI tended to increase due to Ostwald ripening upon storage over 28 days. A Tween 80 concentration of at least 0.2 wt% could be used to stabilize 1 wt% rambutan NLC. The solidification temperature affected the microstructure, melting behavior, and stability of the rambutan NLC. Pre-solidification at 5°C could create stable NLC with monodisperse, spherical lipid particles. Consequently, these stable NLC particles produced from rambutan kernel fat may serve as useful carriers for the delivery of bioactive lipophilic nutraceuticals.

  8. Relationship between processing score and kernel-fraction particle size in whole-plant corn silage.

    PubMed

    Dias Junior, G S; Ferraretto, L F; Salvati, G G S; de Resende, L C; Hoffman, P C; Pereira, M N; Shaver, R D

    2016-04-01

    Kernel processing increases starch digestibility in whole-plant corn silage (WPCS). Corn silage processing score (CSPS), the percentage of starch passing through a 4.75-mm sieve, is widely used to assess the degree of kernel breakage in WPCS. However, the geometric mean particle size (GMPS) of the kernel fraction that passes through the 4.75-mm sieve has not been well described. Therefore, the objectives of this study were (1) to evaluate the particle size distribution and digestibility of kernels cut to varied particle sizes; (2) to propose a method to measure GMPS in WPCS kernels; and (3) to evaluate the relationship between CSPS and GMPS of the kernel fraction in WPCS. Composite samples of unfermented, dried kernels from 110 corn hybrids commonly used for silage production were kept whole (WH) or manually cut into 2, 4, 8, 16, 32, or 64 pieces (2P, 4P, 8P, 16P, 32P, and 64P, respectively). Dry sieving to determine GMPS, surface area, and particle size distribution using 9 sieves with nominal square apertures of 9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, and 0.59 mm and pan, as well as ruminal in situ dry matter (DM) digestibilities, were performed for each kernel particle number treatment. Incubation times were 0, 3, 6, 12, and 24 h. The ruminal in situ DM disappearance of unfermented kernels increased with the reduction in particle size of corn kernels. Kernels kept whole had the lowest ruminal DM disappearance at all time points, with a maximum DM disappearance of 6.9% at 24 h; the greatest disappearance was observed for 64P, followed by 32P and 16P. Samples of WPCS (n=80) from 3 studies representing varied theoretical length of cut settings and processor types and settings were also evaluated. Each WPCS sample was divided in two and then dried at 60°C for 48 h. The CSPS was determined in duplicate on 1 of the split samples, whereas on the other split sample the kernel and stover fractions were separated using a hydrodynamic separation procedure. After separation, the kernel fraction was redried at 60°C for 48 h in a forced-air oven and dry sieved to determine GMPS and surface area. Linear relationships between CSPS from WPCS (n=80) and kernel fraction GMPS, surface area, and proportion passing through the 4.75-mm screen were poor. Strong quadratic relationships between the proportion of kernel fraction passing through the 4.75-mm screen and kernel fraction GMPS and surface area were observed. These findings suggest that hydrodynamic separation and dry sieving of the kernel fraction may provide a better assessment of kernel breakage in WPCS than CSPS.
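
    As a rough illustration of how a geometric mean particle size can be computed from dry-sieving data, the sketch below follows the usual log-normal (ASABE S319-style) calculation, using the sieve apertures listed in the abstract and hypothetical mass fractions. It is not the authors' analysis code, and the nominal size assigned to the pan fraction is an assumption.

    ```python
    import numpy as np

    # Nominal square apertures (mm) from the abstract, largest to smallest.
    apertures = np.array([9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, 0.59])
    pan = 0.30  # assumed nominal "aperture" for the pan fraction (hypothetical)

    # Hypothetical mass fractions retained on each sieve and on the pan (sum to 1).
    mass_frac = np.array([0.02, 0.05, 0.10, 0.18, 0.22, 0.18, 0.12, 0.08, 0.05])

    # Mean size of material retained on a sieve: geometric mean of that sieve's
    # aperture and the next larger aperture (top sieve paired with itself here).
    upper = np.concatenate(([apertures[0]], apertures))
    lower = np.concatenate((apertures, [pan]))
    d_mid = np.sqrt(upper * lower)

    # Log-normal geometric mean particle size and geometric standard deviation.
    log_d = np.log(d_mid)
    gmps = np.exp(np.sum(mass_frac * log_d) / np.sum(mass_frac))
    gsd = np.exp(np.sqrt(np.sum(mass_frac * (log_d - np.log(gmps))**2) / np.sum(mass_frac)))
    print(f"GMPS ~ {gmps:.2f} mm, geometric SD ~ {gsd:.2f}")
    ```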

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jolly, Brian C.; Helmreich, Grant; Cooley, Kevin M.

    In support of fully ceramic microencapsulated (FCM) fuel development, coating development work is ongoing at Oak Ridge National Laboratory (ORNL) to produce tri-structural isotropic (TRISO) coated fuel particles with both UN kernels and surrogate (uranium-free) kernels. The nitride kernels are used to increase fissile density in these SiC-matrix fuel pellets, with details described elsewhere. The surrogate TRISO particles are necessary for separate effects testing and for utilization in the consolidation process development. This report focuses on the fabrication and characterization of surrogate TRISO particles which use 800-μm-diameter ZrO2 microspheres as the kernel.

  10. Preparation of Simulated LBL Defects for Round Robin Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerczak, Tyler J.; Baldwin, Charles A.; Hunn, John D.

    2016-01-01

    A critical characteristic of the TRISO fuel design is its ability to retain fission products. During reactor operation, the TRISO layers act as barriers to release of fission products not stabilized in the kernel. Each component of the TRISO particle and compact construction plays a unique role in retaining select fission products, and layer performance is often interrelated. The IPyC, SiC, and OPyC layers are barriers to the release of fission product gases such as Kr and Xe. The SiC layer provides the primary barrier to release of metallic fission products not retained in the kernel, as transport across the SiC layer is rate limiting due to the greater permeability of the IPyC and OPyC layers to many metallic fission products. These attributes allow intact TRISO coatings to successfully retain most fission products released from the kernel, with the majority of released fission products during operation being due to defective, damaged, or failed coatings. This dominant release of fission products from compromised particles contributes to the overall source term in the reactor, causing safety and maintenance concerns and limiting the lifetime of the fuel. Under these considerations, an understanding of the nature and frequency of compromised particles is an important part of predicting the expected fission product release and ensuring safe and efficient operation.

  11. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...

  12. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  13. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  14. Application of stochastic weighted algorithms to a multidimensional silica particle model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menz, William J.; Patterson, Robert I.A.; Wagner, Wolfgang

    2013-09-01

    Highlights: •Stochastic weighted algorithms (SWAs) are developed for a detailed silica model. •An implementation of SWAs with the transition kernel is presented. •The SWAs’ solutions converge to the direct simulation algorithm’s (DSA) solution. •The efficiency of SWAs is evaluated for this multidimensional particle model. •It is shown that SWAs can be used for coagulation problems in industrial systems. -- Abstract: This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83–98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using a large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high dimensional particle models to simulate real-world systems.

  15. SU-F-SPS-09: Parallel MC Kernel Calculations for VMAT Plan Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamberlain, S; Roswell Park Cancer Institute, Buffalo, NY; French, S

    Purpose: Adding kernels (small perturbations in leaf positions) to the existing apertures of VMAT control points may improve plan quality. We investigate the calculation of kernel doses using a parallelized Monte Carlo (MC) method. Methods: A clinical prostate VMAT DICOM plan was exported from Eclipse. An arbitrary control point and leaf were chosen, and a modified MLC file was created, corresponding to the leaf position offset by 0.5 cm. The additional dose produced by this 0.5 cm × 0.5 cm kernel was calculated using the DOSXYZnrc component module of BEAMnrc. A range of particle history counts were run (varying from 3 × 10⁶ to 3 × 10⁷); each job was split among 1, 10, or 100 parallel processes. A particle count of 3 × 10⁶ was established as the lower range because it provided the minimal accuracy level. Results: As expected, an increase in particle counts linearly increases run time. For the lowest particle count, the time varied from 30 hours for the single-processor run to 0.30 hours for the 100-processor run. Conclusion: Parallel processing of MC calculations in the EGS framework significantly decreases the time necessary for each kernel dose calculation. Particle counts lower than 1 × 10⁶ have too large of an error to output accurate dose for a Monte Carlo kernel calculation. Future work will investigate increasing the number of parallel processes and optimizing run times for multiple kernel calculations.

  16. Occurrence of 'super soft' wheat kernel texture in hexaploid and tetraploid wheats

    USDA-ARS?s Scientific Manuscript database

    Wheat kernel texture is a key trait that governs milling performance, flour starch damage, flour particle size, flour hydration properties, and baking quality. Kernel texture is commonly measured using the Perten Single Kernel Characterization System (SKCS). The SKCS returns texture values (Hardness...

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlou, A. T.; Betzler, B. R.; Burke, T. P.

    Uncertainties in the composition and fabrication of fuel compacts for the Fort St. Vrain (FSV) high temperature gas reactor have been studied by performing eigenvalue sensitivity studies that represent the key uncertainties for the FSV neutronic analysis. The uncertainties for the TRISO fuel kernels were addressed by developing a suite of models for an 'average' FSV fuel compact that models the fuel as (1) a mixture of two different TRISO fuel particles representing fissile and fertile kernels, (2) a mixture of four different TRISO fuel particles representing small and large fissile kernels and small and large fertile kernels and (3) a stochastic mixture of the four types of fuel particles where every kernel has its diameter sampled from a continuous probability density function. All of the discrete diameter and continuous diameter fuel models were constrained to have the same fuel loadings and packing fractions. For the non-stochastic discrete diameter cases, the MCNP compact model arranged the TRISO fuel particles on a hexagonal honeycomb lattice. This lattice-based fuel compact was compared to a stochastic compact where the locations (and kernel diameters for the continuous diameter cases) of the fuel particles were randomly sampled. Partial core configurations were modeled by stacking compacts into fuel columns containing graphite. The differences in eigenvalues between the lattice-based and stochastic models were small but the runtime of the lattice-based fuel model was roughly 20 times shorter than with the stochastic-based fuel model. (authors)
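
    The contrast between the discrete-diameter and continuous-diameter kernel models can be sketched as below: diameters are drawn either from a two-point mixture or from a continuous distribution, and the continuous population is truncated so that both carry the same total kernel volume (a stand-in for equal fuel loading). All numbers and distribution choices are hypothetical, not taken from the FSV analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def kernel_volume(d_um):
        """Volume of spherical kernels (µm^3) given diameters in µm."""
        return np.pi / 6.0 * d_um**3

    # Discrete model: a mixture of two fixed kernel diameters (hypothetical values).
    n_discrete = 5000
    diam_discrete = rng.choice([350.0, 500.0], size=n_discrete, p=[0.6, 0.4])
    v_target = kernel_volume(diam_discrete).sum()

    # Continuous model: every kernel diameter sampled from a continuous PDF
    # (a normal distribution clipped to a physical range; purely illustrative).
    diam_continuous = np.clip(rng.normal(410.0, 60.0, size=20000), 250.0, 600.0)
    v_cum = np.cumsum(kernel_volume(diam_continuous))
    n_continuous = int(np.searchsorted(v_cum, v_target)) + 1

    print(f"discrete kernels: {n_discrete}, continuous kernels needed for "
          f"the same fuel volume: {n_continuous}")
    ```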

  18. Do we really need a large number of particles to simulate bimolecular reactive transport with random walk methods? A kernel density estimation approach

    NASA Astrophysics Data System (ADS)

    Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-12-01

    Random walk particle tracking methods are a computationally efficient family of methods to solve reactive transport problems. While the number of particles in most realistic applications is on the order of 10⁶-10⁹, the number of reactive molecules even in diluted systems might be on the order of fractions of the Avogadro number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in a loss of accuracy, but may also lead to an improper reproduction of the mixing process, limited by diffusion. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that allows getting the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the location of molecules represented by that given particle, rigorously represented by a kernel density function. The probability of reaction can be obtained by combining the kernels associated with two potentially reactive particles. We demonstrate that the observed deviation in the reaction vs time curves in numerical experiments reported in the literature could be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process. Our results show that KDEs are powerful tools to improve computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in diluted systems should be modeled based on alternative mechanistic models and not on a limited number of particles.
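
    The central point, that a kernel density estimate of particle positions recovers a smooth concentration field that a fixed-support (binned) reconstruction cannot with few particles, can be shown in a few lines of SciPy. This is a generic illustration rather than the authors' implementation; bandwidth selection and the bimolecular reaction step are omitted.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)

    # A small number of particles sampled from a "true" Gaussian plume.
    true_std, n_particles = 1.0, 200
    positions = rng.normal(0.0, true_std, n_particles)

    x = np.linspace(-4, 4, 201)
    true_conc = np.exp(-0.5 * (x / true_std) ** 2) / (true_std * np.sqrt(2 * np.pi))

    # Histogram (fixed particle support) versus kernel density estimate.
    hist, edges = np.histogram(positions, bins=20, range=(-4, 4), density=True)
    kde = gaussian_kde(positions)          # Gaussian kernels, automatic bandwidth
    kde_conc = kde(x)

    rmse_kde = np.sqrt(np.mean((kde_conc - true_conc) ** 2))
    print(f"KDE bandwidth factor: {kde.factor:.3f}, RMSE vs true profile: {rmse_kde:.4f}")
    ```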

  19. Preparation of UC0.07-0.10N0.90-0.93 spheres for TRISO coated fuel particles

    NASA Astrophysics Data System (ADS)

    Hunt, R. D.; Silva, C. M.; Lindemer, T. B.; Johnson, J. A.; Collins, J. L.

    2014-05-01

    The US Department of Energy is considering a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with dense uranium nitride (UN) kernels with diameters of 650 or 800 μm. The objectives of this effort are to make uranium oxide microspheres with adequately dispersed carbon nanoparticles and to convert these microspheres into UN spheres, which could then be sintered into kernels. Recent improvements to the internal gelation process were successfully applied to the production of uranium gel spheres with different concentrations of carbon black. After the spheres were washed and dried, a simple two-step heat profile was used to produce porous microspheres with a chemical composition of UC0.07-0.10N0.90-0.93. The first step involved heating the microspheres to 2023 K in a vacuum, and in the second step, the microspheres were held at 1873 K for 6 h in flowing nitrogen.

  20. Influence of wheat kernel physical properties on the pulverizing process.

    PubMed

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with similar protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ·kg⁻¹ to 159 kJ·kg⁻¹. Many significant correlations (p < 0.05) were found between wheat kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.
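
    A multiple linear regression of the kind proposed in the abstract, predicting the average particle size of pulverized kernel from kernel physical properties, can be sketched with ordinary least squares. The predictor names (hardness index, vitreousness, ash content) follow the abstract, but the data below are synthetic and the fitted coefficients are not the published model.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic data for 19 cultivars: [hardness index, vitreousness (%), ash (%)].
    n = 19
    X = np.column_stack([
        rng.uniform(20, 90, n),    # SKCS hardness index
        rng.uniform(30, 95, n),    # vitreousness
        rng.uniform(1.4, 2.0, n),  # ash content
    ])
    # Synthetic response: average particle size (mm), assumed linear trend plus noise.
    y = 0.20 + 0.002 * X[:, 0] + 0.001 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.01, n)

    # Ordinary least squares with an intercept column.
    A = np.column_stack([np.ones(n), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    y_hat = A @ coef
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    print("intercept and coefficients:", np.round(coef, 4), " R^2 =", round(r2, 3))
    ```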

  1. Calculation of plasma dielectric response in inhomogeneous magnetic field near electron cyclotron resonance

    NASA Astrophysics Data System (ADS)

    Evstatiev, Evstati; Svidzinski, Vladimir; Spencer, Andy; Galkin, Sergei

    2014-10-01

    Full wave 3-D modeling of RF fields in hot magnetized nonuniform plasma requires calculation of the nonlocal conductivity kernel describing the dielectric response of such plasma to the RF field. In many cases, the conductivity kernel is a localized function near the test point, which significantly simplifies numerical solution of the full wave 3-D problem. Preliminary results of a feasibility analysis of numerical calculation of the conductivity kernel in a 3-D hot nonuniform magnetized plasma in the electron cyclotron frequency range will be reported. This case is relevant to modeling of ECRH in ITER. The kernel is calculated by integrating the linearized Vlasov equation along the unperturbed particle orbits. Particle orbits in the nonuniform equilibrium magnetic field are calculated numerically by one of the Runge-Kutta methods. The RF electric field is interpolated on a specified grid on which the conductivity kernel is discretized. The resulting integrals over the particle's initial velocity and time are then calculated numerically. Different optimization approaches to the integration are tested in this feasibility analysis. Work is supported by the U.S. DOE SBIR program.

  2. Ceramography of Irradiated tristructural isotropic (TRISO) Fuel from the AGR-2 Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rice, Francine Joyce; Stempien, John Dennis

    2016-09-01

    Ceramography was performed on cross sections from four tristructural isotropic (TRISO) coated particle fuel compacts taken from the AGR-2 experiment, which was irradiated between June 2010 and October 2013 in the Advanced Test Reactor (ATR). The fuel compacts examined in this study contained TRISO-coated particles with either uranium oxide (UO2) kernels or uranium oxide/uranium carbide (UCO) kernels that were irradiated to final burnup values between 9.0 and 11.1% FIMA. These examinations are intended to explore kernel and coating morphology evolution during irradiation. This includes kernel porosity, swelling, and migration, and irradiation-induced coating fracture and separation. Variations in behavior within a specific cross section, which could be related to temperature or burnup gradients within the fuel compact, are also explored. The criteria for categorizing post-irradiation particle morphologies developed for the AGR-1 ceramographic exams were applied to the particles examined in the AGR-2 compacts. Results are compared with similar investigations performed as part of the earlier AGR-1 irradiation experiment. This paper presents the results of the AGR-2 examinations and discusses the key implications for fuel irradiation performance.

  3. Spatial Variability of Organic Carbon in a Fractured Mudstone and Its Effect on the Retention and Release of Trichloroethene (TCE)

    NASA Astrophysics Data System (ADS)

    Sole-Mari, G.; Fernandez-Garcia, D.

    2016-12-01

    Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles, which is a key element to simulate nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and therefore cannot be used at each time step of the simulation; (2) it does not take advantage of the prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics and the particle history and maintains accuracy over time. The method allows particles to efficiently split and merge when necessary as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.

  4. Locally adaptive methods for KDE-based random walk models of reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Sole-Mari, G.; Fernandez-Garcia, D.

    2017-12-01

    Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles, which is a key element to simulate nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and therefore cannot be used at each time step of the simulation; (2) it does not take advantage of the prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics and the particle history and maintains accuracy over time. The method allows particles to efficiently split and merge when necessary as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.
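
    A minimal sketch of the splitting-and-merging idea follows: particles in under-sampled regions are split into two half-mass children, and particle pairs that crowd the same region are merged into a single mass-weighted particle. The density criteria, offsets, and the dummy neighbour pairs are all hypothetical; the published method additionally adapts each particle's local kernel shape using the physics and the particle history.

    ```python
    import numpy as np

    def split_particles(x, m, low_density_mask, offset=0.05):
        """Split flagged particles into two half-mass children offset symmetrically."""
        keep_x, keep_m = x[~low_density_mask], m[~low_density_mask]
        sx, sm = x[low_density_mask], m[low_density_mask]
        children_x = np.concatenate([sx - offset, sx + offset])
        children_m = np.concatenate([sm / 2, sm / 2])
        return np.concatenate([keep_x, children_x]), np.concatenate([keep_m, children_m])

    def merge_pairs(x, m, pair_index):
        """Merge particle pairs (i, j) into single mass-weighted particles."""
        i, j = pair_index[:, 0], pair_index[:, 1]
        merged_x = (m[i] * x[i] + m[j] * x[j]) / (m[i] + m[j])
        merged_m = m[i] + m[j]
        keep = np.ones(len(x), dtype=bool)
        keep[np.concatenate([i, j])] = False
        return (np.concatenate([x[keep], merged_x]),
                np.concatenate([m[keep], merged_m]))

    # Tiny demonstration on a 1-D plume.
    rng = np.random.default_rng(3)
    x = rng.normal(0.0, 1.0, 50)
    m = np.full(50, 1.0)
    x, m = split_particles(x, m, low_density_mask=np.abs(x) > 2.0)   # dilute tails
    x, m = merge_pairs(x, m, pair_index=np.array([[0, 1], [2, 3]]))  # crowded core (dummy pairs)
    print(f"particles: {len(x)}, total mass conserved: {m.sum():.1f}")
    ```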

  5. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing, manufacturing, packing, processing, preparing, treating...

  6. Thermochemical Assessment of Oxygen Gettering by SiC or ZrC in PuO2-x TRISO Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besmann, Theodore M

    2010-01-01

    Particulate nuclear fuel in a modular helium reactor is being considered for the consumption of excess plutonium and related transuranics. In particular, efforts to largely consume transuranics in a single pass will require the fuel to undergo very high burnup. This deep burn concept will thus make the proposed plutonia TRISO fuel particularly likely to suffer kernel migration, where carbon in the buffer layer and inner pyrolytic carbon layer is transported from the high temperature side of the particle to the low temperature side. This phenomenon is observed to cause particle failure and therefore must be mitigated. The addition of SiC or ZrC in the oxide kernel or in a layer in communication with the kernel will lower the oxygen potential and therefore prevent kernel migration, and this has been demonstrated with SiC. In this work a thermochemical analysis was performed to predict oxygen potential behavior in the plutonia TRISO fuel to burnups of 50% FIMA with and without the presence of oxygen-gettering SiC and ZrC. Kernel migration is believed to be controlled by CO gas transporting carbon from the hot side to the cool side, and CO pressure is governed by the oxygen potential in the presence of carbon. The gettering phases significantly reduce the oxygen potential and thus the CO pressure in an otherwise PuO2-x kernel, and prevent kernel migration by limiting CO gas diffusion through the buffer layer. The reduction in CO pressure can also reduce the peak pressure within the particles by ~50%, thus reducing the likelihood of pressure-induced particle failure. A model for kernel migration was used to semi-quantitatively assess the effect of controlling oxygen potential with SiC or ZrC and demonstrated the dramatic effect of the addition of these phases on carbon transport.

  7. Jdpd: an open java simulation kernel for molecular fragment dissipative particle dynamics.

    PubMed

    van den Broek, Karina; Kuhn, Hubert; Zielesny, Achim

    2018-05-21

    Jdpd is an open Java simulation kernel for Molecular Fragment Dissipative Particle Dynamics with parallelizable force calculation, efficient caching options and fast property calculations. It is characterized by an interface and factory-pattern driven design for simple code changes and may help to avoid problems of polyglot programming. Detailed input/output communication, parallelization and process control as well as internal logging capabilities for debugging purposes are supported. The new kernel may be utilized in different simulation environments ranging from flexible scripting solutions up to fully integrated "all-in-one" simulation systems.

  8. Roofline Analysis in the Intel® Advisor to Deliver Optimized Performance for applications on Intel® Xeon Phi™ Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, Tuomas S.; Lobet, Mathieu; Deslippe, Jack

    In this session we show, in two case studies, how the roofline feature of Intel Advisor has been utilized to optimize the performance of kernels of the XGC1 and PICSAR codes in preparation for the Intel Knights Landing architecture. The impact of the implemented optimizations and the benefits of using the automatic roofline feature of Intel Advisor to study performance of large applications will be presented. This demonstrates an effective optimization strategy that has enabled these science applications to achieve up to 4.6 times speed-up and prepare for future exascale architectures. # Goal/Relevance of Session The roofline model [1,2] is a powerful tool for analyzing the performance of applications with respect to the theoretical peak achievable on a given computer architecture. It allows one to graphically represent the performance of an application in terms of operational intensity, i.e. the ratio of flops performed and bytes moved from memory, in order to guide optimization efforts. Given the scale and complexity of modern science applications, it can often be a tedious task for the user to perform the analysis on the level of functions or loops to identify where performance gains can be made. With new Intel tools, it is now possible to automate this task, as well as base the estimates of peak performance on measurements rather than vendor specifications. The goal of this session is to demonstrate how the roofline feature of Intel Advisor can be used to balance memory- vs. computation-related optimization efforts and effectively identify performance bottlenecks. A series of typical optimization techniques (cache blocking, structure refactoring, data alignment, and vectorization), illustrated by the kernel cases, will be addressed. # Description of the codes ## XGC1 The XGC1 code [3] is a magnetic fusion Particle-In-Cell code that uses an unstructured mesh for its Poisson solver, which allows it to accurately resolve the edge plasma of a magnetic fusion device. After recent optimizations to its collision kernel [4], most of the computing time is spent in the electron push (pushe) kernel, where these optimization efforts have been focused. The kernel code scaled well with MPI+OpenMP but had almost no automatic compiler vectorization, in part due to indirect memory addresses and in part due to low trip counts of low-level loops that would be candidates for vectorization. Particle blocking and sorting have been implemented to increase trip counts of low-level loops and improve memory locality, and OpenMP directives have been added to vectorize compute-intensive loops that were identified by Advisor. The optimizations have improved the performance of the pushe kernel 2x on Haswell processors and 1.7x on KNL. The KNL node-for-node performance has been brought to within 30% of a NERSC Cori Phase I Haswell node and we expect to bridge this gap by reducing the memory footprint of compute-intensive routines to improve cache reuse. ## PICSAR PICSAR is a Fortran/Python high-performance Particle-In-Cell library targeting MIC architectures, first designed to be coupled with the PIC code WARP for the simulation of laser-matter interaction and particle accelerators. PICSAR also contains a FORTRAN stand-alone kernel for performance studies and benchmarks. An MPI domain decomposition is used between NUMA domains, and a tile decomposition (cache blocking) handled by OpenMP has been added for shared-memory parallelism and better cache management. The so-called current deposition and field gathering steps that compose the PIC time loop constitute major hotspots that have been rewritten to enable more efficient vectorization. Particle communications between tiles and MPI domains have been merged and parallelized. All considered, these improvements provide speedups of 3.1 for order 1 and 4.6 for order 3 interpolation shape factors on KNL configured in SNC4 quadrant flat mode. Performance is similar between a node of Cori Phase 1 and KNL at order 1 and better on KNL by a factor of 1.6 at order 3 with the considered test case (homogeneous thermal plasma).
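
    The roofline model described in the abstract reduces to a single comparison: a kernel's attainable performance is bounded by min(peak FLOP rate, operational intensity × peak memory bandwidth). The sketch below evaluates that bound for a hypothetical kernel; the peak figures are placeholders, not measured Knights Landing or Haswell numbers.

    ```python
    def roofline_bound(flops, bytes_moved, peak_gflops, peak_bw_gbs):
        """Return (operational intensity, attainable GFLOP/s, limiting resource)."""
        oi = flops / bytes_moved                      # FLOPs per byte moved from memory
        attainable = min(peak_gflops, oi * peak_bw_gbs)
        limit = "compute-bound" if oi * peak_bw_gbs >= peak_gflops else "memory-bound"
        return oi, attainable, limit

    # Hypothetical kernel: 2e9 FLOPs while moving 4e9 bytes, on a machine with
    # placeholder peaks of 2000 GFLOP/s and 400 GB/s.
    oi, perf, limit = roofline_bound(flops=2e9, bytes_moved=4e9,
                                     peak_gflops=2000.0, peak_bw_gbs=400.0)
    print(f"OI = {oi:.2f} FLOP/byte -> attainable {perf:.0f} GFLOP/s ({limit})")
    ```

    Optimizations such as cache blocking raise the operational intensity (fewer bytes moved per FLOP), while vectorization and data alignment push a compute-bound kernel closer to the peak FLOP roof.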

  9. Safety Testing of AGR-2 UCO Compacts 5-2-2, 2-2-2, and 5-4-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunn, John D.; Morris, Robert Noel; Baldwin, Charles A.

    2016-08-01

    Post-irradiation examination (PIE) is being performed on tristructural-isotropic (TRISO) coated-particle fuel compacts from the Advanced Gas Reactor (AGR) Fuel Development and Qualification Program second irradiation experiment (AGR-2). This effort builds upon the understanding acquired throughout the AGR-1 PIE campaign, and is establishing a database for the different AGR-2 fuel designs. The AGR-2 irradiation experiment included TRISO fuel particles coated at BWX Technologies (BWXT) with a 150-mm-diameter engineering-scale coater. Two coating batches were tested in the AGR-2 irradiation experiment. Batch 93085 had 508-μm-diameter uranium dioxide (UO2) kernels. Batch 93073 had 427-μm-diameter UCO kernels, which is a kernel design where some of the uranium oxide is converted to uranium carbide during fabrication to provide a getter for oxygen liberated during fission and limit CO production. Fabrication and property data for the AGR-2 coating batches have been compiled and compared to those for AGR-1. The AGR-2 TRISO coatings were most like the AGR-1 Variant 3 TRISO deposited in the 50-mm-diameter ORNL lab-scale coater. In both cases, argon dilution of the hydrogen and methyltrichlorosilane coating gas mixture employed to deposit the SiC was used to produce a finer-grain, more equiaxed SiC microstructure. In addition to the fact that AGR-1 fuel had smaller, 350-μm-diameter UCO kernels, notable differences in the TRISO particle properties included the pyrocarbon anisotropy, which was slightly higher in the particles coated in the engineering-scale coater, and the exposed kernel defect fraction, which was higher for AGR-2 fuel due to the detected presence of particles with impact damage introduced during TRISO particle handling.

  10. Aflatoxin and nutrient contents of peanut collected from local market and their processed foods

    NASA Astrophysics Data System (ADS)

    Ginting, E.; Rahmianna, A. A.; Yusnawan, E.

    2018-01-01

    Peanut is susceptible to aflatoxin contamination, and the source of the peanuts as well as the processing method considerably affect the aflatoxin content of the products. Therefore, a study on the aflatoxin and nutrient contents of peanut collected from a local market and their processed foods was performed. Good kernels of peanut were prepared into fried peanut, pressed-fried peanut, peanut sauce, peanut press cake, fermented peanut press cake (tempe), and fried tempe, while blended kernels (good and poor kernels) were processed into peanut sauce and tempe, and poor kernels were only processed into tempe. The results showed that good and blended kernels, which had a high number of sound/intact kernels (82.46% and 62.09%), contained 9.8-9.9 ppb of aflatoxin B1, while a slightly higher level was seen in poor kernels (12.1 ppb). However, the moisture, ash, protein, and fat contents of the kernels were similar, as were those of the products. Peanut tempe and fried tempe showed the highest increase in protein content, while decreased fat contents were seen in all products. The increase in aflatoxin B1 in peanut tempe followed the order poor kernels > blended kernels > good kernels. However, it decreased by an average of 61.2% after deep-frying. Excluding peanut tempe and fried tempe, aflatoxin B1 levels in all products derived from good kernels were below the permitted level (15 ppb). This suggests that sorting peanut kernels before use as ingredients, followed by heat processing, would decrease the aflatoxin content of the products.

  11. A deformable particle-in-cell method for advective transport in geodynamic modeling

    NASA Astrophysics Data System (ADS)

    Samuel, Henri

    2018-06-01

    This paper presents an improvement of the particle-in-cell method commonly used in geodynamic modeling for solving pure advection of sharply varying fields. Standard particle-in-cell approaches use particle kernels to transfer the information carried by the Lagrangian particles to/from the Eulerian grid. These kernels are generally one-dimensional and non-evolutive, which leads to the development of under- and over-sampling of the spatial domain by the particles. This reduces the accuracy of the solution, and may require the use of a prohibitive number of particles in order to maintain the solution accuracy at an acceptable level. The new proposed approach relies on the use of deformable kernels that account for the strain history in the vicinity of particles. This results in a significant improvement of the spatial sampling by the particles, leading to a much higher accuracy of the numerical solution for a reasonable extra computational cost. Various 2D tests were conducted to compare the performance of the deformable particle-in-cell method with the standard particle-in-cell approach. These consistently show that, at comparable accuracy, the deformable particle-in-cell method is four to six times more efficient than standard particle-in-cell approaches. The method could be adapted to 3D space and generalized to cases including motionless transport.
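
    The essence of a deformable kernel, evaluating an isotropic particle kernel in coordinates pulled back through the accumulated local deformation gradient, can be sketched as below. The Gaussian shape, the 2-D shear example, and the det(F) normalization are illustrative assumptions, not the specific kernel or strain-tracking scheme of the paper.

    ```python
    import numpy as np

    def isotropic_kernel(r, h):
        """Isotropic 2-D Gaussian kernel of bandwidth h evaluated at radius r."""
        return np.exp(-0.5 * (r / h) ** 2) / (2.0 * np.pi * h**2)

    def deformed_kernel(dx, F, h):
        """Evaluate the kernel at offset dx after pulling the offset back through the
        accumulated deformation gradient F; dividing by |det F| preserves the kernel
        integral (an assumption of this sketch)."""
        dx_ref = np.linalg.solve(F, dx)          # map offset back to the reference frame
        return isotropic_kernel(np.linalg.norm(dx_ref), h) / abs(np.linalg.det(F))

    # A particle that has experienced simple shear (hypothetical strain history).
    F = np.array([[1.0, 1.5],
                  [0.0, 1.0]])
    dx = np.array([0.3, 0.1])                    # grid node offset from the particle
    w_iso = isotropic_kernel(np.linalg.norm(dx), h=0.2)
    w_def = deformed_kernel(dx, F, h=0.2)
    print(f"isotropic weight: {w_iso:.4f}, deformed weight: {w_def:.4f}")
    ```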

  12. Nature and composition of fat bloom from palm kernel stearin and hydrogenated palm kernel stearin compound chocolates.

    PubMed

    Smith, Kevin W; Cain, Fred W; Talbot, Geoff

    2004-08-25

    Palm kernel stearin and hydrogenated palm kernel stearin can be used to prepare compound chocolate bars or coatings. The objective of this study was to characterize the chemical composition, polymorphism, and melting behavior of the bloom that develops on bars of compound chocolate prepared using these fats. Bars were stored for 1 year at 15, 20, or 25 degrees C. At 15 and 20 degrees C the bloom was enriched in cocoa butter triacylglycerols, with respect to the main fat phase, whereas at 25 degrees C the enrichment was with palm kernel triacylglycerols. The bloom consisted principally of solid fat and was sharper melting than was the fat in the chocolate. Polymorphic transitions from the initial beta' phase to the beta phase accompanied the formation of bloom at all temperatures.

  13. Production of LEU Fully Ceramic Microencapsulated Fuel for Irradiation Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terrani, Kurt A; Kiggans Jr, James O; McMurray, Jake W

    2016-01-01

    Fully Ceramic Microencapsulated (FCM) fuel consists of tristructural isotropic (TRISO) fuel particles embedded inside a SiC matrix. This fuel inherently possesses multiple barriers to fission product release, namely the various coating layers in the TRISO fuel particle as well as the dense SiC matrix that hosts these particles. This, coupled with the excellent oxidation resistance of the SiC matrix and the SiC coating layer in the TRISO particle, designates this concept as an accident tolerant fuel (ATF). The FCM fuel takes advantage of uranium nitride kernels instead of the oxide or oxide-carbide kernels used in high temperature gas reactors to enhance heavy metal loading in the highly moderated LWRs. Production of these kernels with appropriate density, coating layer development to produce UN TRISO particles, and consolidation of these particles inside a SiC matrix have been codified thanks to significant R&D supported by the US DOE Fuel Cycle R&D program. Also, surrogate FCM pellets (pellets with zirconia instead of uranium-bearing kernels) have been neutron irradiated, and the stability of the matrix and coating layer under LWR irradiation conditions has been established. Currently the focus is on production of LEU (7.3% U-235 enrichment) FCM pellets to be utilized for irradiation testing. The irradiation is planned at INL's Advanced Test Reactor (ATR). This is a critical step in the development of this fuel concept to establish the ability of this fuel to retain fission products under prototypical irradiation conditions.

  14. A method of smoothed particle hydrodynamics using spheroidal kernels

    NASA Technical Reports Server (NTRS)

    Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.

    1995-01-01

    We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
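
    A spheroidal kernel with an independent smoothing length along the deformation axis can be written, for instance, as an anisotropic Gaussian, as in the sketch below. The Gaussian form is only one admissible SPH kernel and may differ from the kernel used by Fulbright et al.; the names and numbers are illustrative.

    ```python
    import numpy as np

    def spheroidal_gaussian_kernel(dx, h_axis, h_perp, axis=np.array([0.0, 0.0, 1.0])):
        """3-D Gaussian SPH-style kernel with smoothing length h_axis along `axis`
        and h_perp in the perpendicular plane (normalized to unit integral)."""
        axis = axis / np.linalg.norm(axis)
        r_par = np.dot(dx, axis)                       # component along the preferred axis
        r_perp2 = np.dot(dx, dx) - r_par**2            # squared perpendicular distance
        norm = 1.0 / ((2.0 * np.pi) ** 1.5 * h_axis * h_perp**2)
        return norm * np.exp(-0.5 * (r_par**2 / h_axis**2 + r_perp2 / h_perp**2))

    # Collapse along z: shrink the axial smoothing length while keeping the planar one.
    dx = np.array([0.1, 0.0, 0.1])
    w_spherical = spheroidal_gaussian_kernel(dx, h_axis=0.2, h_perp=0.2)
    w_spheroidal = spheroidal_gaussian_kernel(dx, h_axis=0.05, h_perp=0.2)
    print(f"spherical kernel: {w_spherical:.3f}, spheroidal kernel: {w_spheroidal:.3f}")
    ```

    Letting h_axis shrink with the collapse while h_perp stays fixed is what concentrates resolution along the deformation direction.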

  15. TURBULENCE-INDUCED RELATIVE VELOCITY OF DUST PARTICLES. IV. THE COLLISION KERNEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Liubin; Padoan, Paolo, E-mail: lpan@cfa.harvard.edu, E-mail: ppadoan@icc.ub.edu

    Motivated by its importance for modeling dust particle growth in protoplanetary disks, we study turbulence-induced collision statistics of inertial particles as a function of the particle friction time, τ_p. We show that turbulent clustering significantly enhances the collision rate for particles of similar sizes with τ_p corresponding to the inertial range of the flow. If the friction time, τ_p,h, of the larger particle is in the inertial range, the collision kernel per unit cross section increases with increasing friction time, τ_p,l, of the smaller particle and reaches the maximum at τ_p,l = τ_p,h, where the clustering effect peaks. This feature is not captured by the commonly used kernel formula, which neglects the effect of clustering. We argue that turbulent clustering helps alleviate the bouncing barrier problem for planetesimal formation. We also investigate the collision velocity statistics using a collision-rate weighting factor to account for higher collision frequency for particle pairs with larger relative velocity. For τ_p,h in the inertial range, the rms relative velocity with collision-rate weighting is found to be invariant with τ_p,l and scales with τ_p,h roughly as ∝ τ_p,h^(1/2). The weighting factor favors collisions with larger relative velocity, and including it leads to more destructive and less sticking collisions. We compare two collision kernel formulations based on spherical and cylindrical geometries. The two formulations give consistent results for the collision rate and the collision-rate weighted statistics, except that the spherical formulation predicts more head-on collisions than the cylindrical formulation.
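
    A small numerical illustration of the collision-rate weighting idea is given below: each sampled pair relative velocity is weighted by its magnitude, since faster pairs collide more often. The Gaussian sample is arbitrary, and weighting by |w| is the assumed form of the weighting factor, not necessarily the exact statistic used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Sampled radial relative velocities of particle pairs (arbitrary distribution).
    w = rng.normal(0.0, 1.0, size=100_000)

    rms_unweighted = np.sqrt(np.mean(w**2))
    # Collision-rate weighting: pairs with larger |w| collide proportionally more often.
    weights = np.abs(w)
    rms_weighted = np.sqrt(np.average(w**2, weights=weights))

    print(f"rms relative velocity: unweighted {rms_unweighted:.3f}, "
          f"collision-rate weighted {rms_weighted:.3f}")
    ```

    The weighted value comes out larger than the unweighted one, reflecting the abstract's point that the weighting favors collisions with larger relative velocity.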

  16. Microscopic analysis of irradiated AGR-1 coated particle fuel compacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott A. Ploger; Paul A. Demkowicz; John D. Hunn

    The AGR-1 experiment involved irradiation of 72 TRISO-coated particle fuel compacts to a peak compact-average burnup of 19.5% FIMA with no in-pile failures observed out of 3 × 10⁵ total particles. Irradiated AGR-1 fuel compacts have been cross-sectioned and analyzed with optical microscopy to characterize kernel, buffer, and coating behavior. Six compacts have been examined, spanning a range of irradiation conditions (burnup, fast fluence, and irradiation temperature) and including all four TRISO coating variations irradiated in the AGR-1 experiment. The cylindrical specimens were sectioned both transversely and longitudinally, then polished to expose from 36 to 79 individual particles near midplane on each mount. The analysis focused primarily on kernel swelling and porosity, buffer densification and fracturing, buffer–IPyC debonding, and fractures in the IPyC and SiC layers. Characteristic morphologies have been identified, 981 particles have been classified, and spatial distributions of particle types have been mapped. No significant spatial patterns were discovered in these cross sections. However, some trends were found between morphological types and certain behavioral aspects. Buffer fractures were found in 23% of the particles, and these fractures often resulted in unconstrained kernel protrusion into the open cavities. Fractured buffers and buffers that stayed bonded to IPyC layers appear related to larger pore size in kernels. Buffer–IPyC interface integrity evidently factored into initiation of rare IPyC fractures. Fractures through part of the SiC layer were found in only four classified particles, all in conjunction with IPyC–SiC debonding. Compiled results suggest that the deliberate coating fabrication variations influenced the frequencies of IPyC fractures and IPyC–SiC debonds.

  17. A fully-automated multiscale kernel graph cuts based particle localization scheme for temporal focusing two-photon microscopy

    NASA Astrophysics Data System (ADS)

    Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei

    2017-03-01

    The temporal focusing two-photon microscope (TFM) is developed to perform depth-resolved wide field fluorescence imaging by capturing frames sequentially. However, due to strong, non-negligible noise and diffraction rings surrounding particles, further research is extremely difficult without a precise particle localization technique. In this paper, we developed a fully automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale kernel graph cuts global and local segmentation algorithm, and a kinematic estimation based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with removal of unrelated pixels. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.

  18. Conceptual design of quadriso particles with europium burnable absorber in HTRS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamo, A.; Nuclear Engineering Division

    2010-05-18

    In High Temperature Reactors, burnable absorbers are utilized to manage the excess reactivity at the early stage of the fuel cycle. In this study QUADRISO particles are proposed to manage the initial excess reactivity of High Temperature Reactors. The QUADRISO concept synergistically couples the decrease of the burnable poison with the decrease of the fissile materials at the fuel particle level. This mechanism is set up by introducing a burnable poison layer around the fuel kernel in ordinary TRISO particles or by mixing the burnable poison with any of the TRISO coated layers. At the beginning of life, the initial excess reactivity is small because some neutrons are absorbed in the burnable poison and they are prevented from entering the fuel kernel. At the end of life, when the absorber is almost depleted, more neutrons stream into the fuel kernel of QUADRISO particles causing fission reactions. The mechanism has been applied to a prismatic High Temperature Reactor with europium or erbium burnable absorbers, showing a significant reduction in the initial excess reactivity of the core.

  19. A novel concept of QUADRISO particles. Part II: Utilization for excess reactivity control.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamo, A.

    2010-07-01

    In high temperature reactors, burnable absorbers are utilized to manage the excess reactivity at the early stage of the fuel cycle. In this paper QUADRISO particles are proposed to manage the initial excess reactivity of high temperature reactors. The QUADRISO concept synergistically couples the decrease of the burnable poison with the decrease of the fissile materials at the fuel particle level. This mechanism is set up by introducing a burnable poison layer around the fuel kernel in ordinary TRISO particles or by mixing the burnable poison with any of the TRISO coated layers. At the beginning of life, the initial excess reactivity is small because some neutrons are absorbed in the burnable poison and they are prevented from entering the fuel kernel. At the end of life, when the absorber is almost depleted, more neutrons stream into the fuel kernel of QUADRISO particles causing fission reactions. The mechanism has been applied to a prismatic high temperature reactor with europium or erbium burnable absorbers, showing a significant reduction in the initial excess reactivity of the core.

  20. A novel concept of QUADRISO particles : Part II Utilization for excess reactivity control.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamo, A.

    2011-01-01

    In high temperature reactors, burnable absorbers are utilized to manage the excess reactivity at the early stage of the fuel cycle. In this paper QUADRISO particles are proposed to manage the initial excess reactivity of high temperature reactors. The QUADRISO concept synergistically couples the decrease of the burnable poison with the decrease of the fissile materials at the fuel particle level. This mechanism is set up by introducing a burnable poison layer around the fuel kernel in ordinary TRISO particles or by mixing the burnable poison with any of the TRISO coated layers. At the beginning of life, the initial excess reactivity is small because some neutrons are absorbed in the burnable poison and they are prevented from entering the fuel kernel. At the end of life, when the absorber is almost depleted, more neutrons stream into the fuel kernel of QUADRISO particles causing fission reactions. The mechanism has been applied to a prismatic high temperature reactor with europium or erbium burnable absorbers, showing a significant reduction in the initial excess reactivity of the core.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaise Collin

    The Idaho National Laboratory (INL) PARFUME (particle fuel model) code was used to assess the overall fuel performance of uranium nitride (UN) tristructural isotropic (TRISO) ceramic fuel under irradiation conditions typical of a Light Water Reactor (LWR). The dimensional changes of the fuel particle layers and kernel were calculated, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burnup. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burnup. These material properties have large uncertainties at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, a large experimental effort would be needed to establish material properties, including kernel and PyC swelling rates, under these conditions before definitive conclusions can be drawn on the behavior of UN TRISO fuel in LWRs.

  2. Combining neural networks and signed particles to simulate quantum systems more efficiently

    NASA Astrophysics Data System (ADS)

    Sellier, Jean Michel

    2018-04-01

    Recently a new formulation of quantum mechanics has been suggested which describes systems by means of ensembles of classical particles provided with a sign. This novel approach mainly consists of two steps: the computation of the Wigner kernel, a multi-dimensional function describing the effects of the potential over the system, and the field-less evolution of the particles, which eventually create new signed particles in the process. Although this method has proved to be extremely advantageous in terms of computational resources (it is able to simulate many-body systems in a time-dependent fashion on relatively small machines), the Wigner kernel can represent the bottleneck of simulations of certain systems. Moreover, storing the kernel can be another issue, as the amount of memory needed is cursed by the dimensionality of the system. In this work, we introduce a new technique which drastically reduces the computation time and memory requirement to simulate time-dependent quantum systems, based on the use of an appropriately tailored neural network combined with the signed particle formalism. In particular, the suggested neural network is able to compute the Wigner kernel efficiently and reliably without any training, as its entire set of weights and biases is specified by analytical formulas. As a consequence, the amount of memory for quantum simulations drops radically since the kernel no longer needs to be stored: it is now computed by the neural network itself, only on the cells of the (discretized) phase-space which are occupied by particles. As is clearly shown in the final part of this paper, this novel approach not only drastically reduces the computational time, it also remains accurate. The author believes this work opens the way towards effective design of quantum devices, with incredible practical implications.

  3. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    PubMed

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
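
    As a rough illustration of the weighted multiple kernel idea, the sketch below combines two base kernels with fixed weights and solves the standard kernel ELM output weights, beta = (K + I/C)^{-1} T. The QPSO tuning of the weights, kernel parameters, and regularization constant is not reproduced; all values here are placeholders.

    ```python
    import numpy as np

    def gaussian_kernel(X, Y, gamma):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def poly_kernel(X, Y, degree=2, c0=1.0):
        return (X @ Y.T + c0) ** degree

    def composite_kernel(X, Y, weights, gamma=0.5, degree=2):
        # Weighted sum of base kernels; in the paper the weights and kernel
        # parameters are tuned by QPSO, here they are simply fixed.
        return weights[0] * gaussian_kernel(X, Y, gamma) + weights[1] * poly_kernel(X, Y, degree)

    def kelm_train(X, T, weights, C=10.0):
        """Kernel ELM output weights: beta = (K + I/C)^-1 T."""
        K = composite_kernel(X, X, weights)
        return np.linalg.solve(K + np.eye(len(X)) / C, T)

    def kelm_predict(Xtest, Xtrain, beta, weights):
        return composite_kernel(Xtest, Xtrain, weights) @ beta

    # Toy two-class example with one-hot targets.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 3))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    T = np.eye(2)[y]
    beta = kelm_train(X, T, weights=(0.7, 0.3))
    pred = kelm_predict(X, X, beta, weights=(0.7, 0.3)).argmax(1)
    print("training accuracy:", (pred == y).mean())
    ```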

  4. 7 CFR 810.206 - Grades and grade requirements for barley.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... particles of unknown foreign substance(s) or commonly recognized harmful or toxic substance(s), 8 or more... injured-by-mold kernels are not considered damaged kernels. [61 FR 18492, Apr. 26, 1996] Special Grades...

  5. 7 CFR 810.206 - Grades and grade requirements for barley.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... particles of unknown foreign substance(s) or commonly recognized harmful or toxic substance(s), 8 or more... injured-by-mold kernels are not considered damaged kernels. [61 FR 18492, Apr. 26, 1996] Special Grades...

  6. 7 CFR 810.206 - Grades and grade requirements for barley.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... particles of unknown foreign substance(s) or commonly recognized harmful or toxic substance(s), 8 or more... injured-by-mold kernels are not considered damaged kernels. [61 FR 18492, Apr. 26, 1996] Special Grades...

  7. 7 CFR 810.206 - Grades and grade requirements for barley.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... particles of unknown foreign substance(s) or commonly recognized harmful or toxic substance(s), 8 or more... injured-by-mold kernels are not considered damaged kernels. [61 FR 18492, Apr. 26, 1996] Special Grades...

  8. Modeling and Analysis of FCM UN TRISO Fuel Using the PARFUME Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaise Collin

    2013-09-01

    The PARFUME (PARticle Fuel ModEl) modeling code was used to assess the overall fuel performance of uranium nitride (UN) tri-structural isotropic (TRISO) ceramic fuel in the frame of the design and development of Fully Ceramic Matrix (FCM) fuel. A specific model of a TRISO particle with a UN kernel was developed with PARFUME, and its behavior was assessed under irradiation conditions typical of a Light Water Reactor (LWR). The calculations were used to evaluate the dimensional changes of the fuel particle layers and kernel, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burn-up. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the pyrolytic carbon (PyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burn-up. These material properties are unknown at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, more effort is needed to determine them before definitive conclusions can be drawn on the applicability of FCM fuel to LWRs.

  9. Irradiation performance of HTGR fuel rods in HFIR experiments HRB-7 and -8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valentine, K.H.; Homan, F.J.; Long, E.L. Jr.

    1977-05-01

    The HRB-7 and -8 experiments were designed as a comprehensive test of mixed thorium-uranium oxide fissile particles with Th:U ratios from 0 to 8 for HTGR recycle application. In addition, fissile particles derived from Weak-Acid Resin (WAR) were tested as a potential backup type of fissile particle for HTGR recycle. These experiments were conducted at two temperatures (1250 and 1500 °C) to determine the influence of operating temperature on the performance parameters studied. The minor objectives were comparison of advanced coating designs where ZrC replaced SiC in the Triso design, testing of fuel coated in laboratory-scale equipment against fuel coated in production-scale coaters, comparison of the performance of 233U-bearing particles with that of 235U-bearing particles, comparison of the performance of Biso coatings with Triso coatings for particles containing the same type of kernel, and testing of multijunction tungsten-rhenium thermocouples. All objectives were accomplished. As a result of these experiments the mixed thorium-uranium oxide fissile kernel was replaced by a WAR-derived particle in the reference recycle design. A tentative decision to make this change had been reached before the HRB-7 and -8 capsules were examined, and the results of the examination confirmed the accuracy of the previous decision. Even maximum dilution (Th/U approximately equal to 8) of the mixed thorium-uranium oxide kernel was insufficient to prevent amoeba migration of the kernels at rates that are unacceptable in a large HTGR. Other results showed the performance of 233U-bearing particles to be identical to that of 235U-bearing particles, the performance of fuel coated in production-scale equipment to be at least as good as that of fuel coated in laboratory-scale coaters, the performance of ZrC coatings to be very promising, and Biso coatings to be inferior to Triso coatings with respect to fission product retention.

  10. Fission Product Release and Survivability of UN-Kernel LWR TRISO Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besmann, Theodore M; Ferber, Mattison K; Lin, Hua-Tay

    2014-01-01

    A thermomechanical assessment of the LWR application of TRISO fuel with UN kernels was performed. Fission product release under operational and transient temperature conditions was determined by extrapolation from range calculations and limited data from irradiated UN pellets. Both fission recoil and diffusive release were considered, and internal particle pressures were computed for both 650 and 800 μm diameter kernels as a function of buffer layer thickness. These pressures were used in conjunction with a finite element program to compute the radial and tangential stresses generated within a TRISO particle as a function of fluence. Creep and swelling of the inner and outer pyrolytic carbon layers were included in the analyses. A measure of reliability of the TRISO particle was obtained by computing the probability of survival of the SiC barrier layer and the maximum tensile stress generated in the pyrolytic carbon layers as a function of fluence. These reliability estimates were obtained as functions of the kernel diameter, buffer layer thickness, and pyrolytic carbon layer thickness. The value of the probability of survival at the end of irradiation was inversely proportional to the maximum pressure.

  11. Fission product release and survivability of UN-kernel LWR TRISO fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    T. M. Besmann; M. K. Ferber; H.-T. Lin

    2014-05-01

    A thermomechanical assessment of the LWR application of TRISO fuel with UN kernels was performed. Fission product release under operational and transient temperature conditions was determined by extrapolation from fission product recoil calculations and limited data from irradiated UN pellets. Both fission recoil and diffusive release were considered, and internal particle pressures were computed for both 650 and 800 μm diameter kernels as a function of buffer layer thickness. These pressures were used in conjunction with a finite element program to compute the radial and tangential stresses generated within a TRISO particle undergoing burnup. Creep and swelling of the inner and outer pyrolytic carbon layers were included in the analyses. A measure of reliability of the TRISO particle was obtained by computing the probability of survival of the SiC barrier layer and the maximum tensile stress generated in the pyrolytic carbon layers from internal pressure and thermomechanics of the layers. These reliability estimates were obtained as functions of the kernel diameter, buffer layer thickness, and pyrolytic carbon layer thickness. The value of the probability of survival at the end of irradiation was inversely proportional to the maximum pressure.
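
    A common simplified way to turn a computed internal pressure into a survival estimate for the SiC layer is to combine a thin-shell hoop stress with Weibull failure statistics. The sketch below does exactly that; the Weibull parameters, pressures, and dimensions are illustrative placeholders, not values from these studies, and the full finite element treatment described above is not reproduced.

    ```python
    import numpy as np

    def sic_hoop_stress(pressure_mpa, inner_radius_um, thickness_um):
        """Thin-shell tangential (hoop) stress in a spherical SiC layer,
        sigma = p*r/(2*t); a simplification of a full finite element stress."""
        return pressure_mpa * inner_radius_um / (2.0 * thickness_um)

    def weibull_failure_probability(sigma_mpa, sigma0_mpa=770.0, modulus=6.0):
        """Weibull failure probability Pf = 1 - exp(-(sigma/sigma0)^m).
        sigma0 and m are illustrative placeholders, not evaluated data."""
        return 1.0 - np.exp(-(np.maximum(sigma_mpa, 0.0) / sigma0_mpa) ** modulus)

    # Toy comparison of two hypothetical kernel/buffer designs (placeholder numbers).
    for label, p, r, t in [("higher pressure", 60.0, 425.0, 35.0),
                           ("lower pressure", 35.0, 425.0, 35.0)]:
        s = sic_hoop_stress(p, r, t)
        print(f"{label}: stress = {s:.0f} MPa, Pf = {weibull_failure_probability(s):.2e}")
    ```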

  12. Diffuse correlation tomography in the transport regime: A theoretical study of the sensitivity to Brownian motion.

    PubMed

    Tricoli, Ugo; Macdonald, Callum M; Durduran, Turgut; Da Silva, Anabela; Markel, Vadim A

    2018-02-01

    Diffuse correlation tomography (DCT) uses the electric-field temporal autocorrelation function to measure the mean-square displacement of light-scattering particles in a turbid medium over a given exposure time. The movement of blood particles is here estimated through a Brownian-motion-like model in contrast to ordered motion as in blood flow. The sensitivity kernel relating the measurable field correlation function to the mean-square displacement of the particles can be derived by applying a perturbative analysis to the correlation transport equation (CTE). We derive an analytical expression for the CTE sensitivity kernel in terms of the Green's function of the radiative transport equation, which describes the propagation of the intensity. We then evaluate the kernel numerically. The simulations demonstrate that, in the transport regime, the sensitivity kernel provides sharper spatial information about the medium as compared with the correlation diffusion approximation. Also, the use of the CTE allows one to explore some additional degrees of freedom in the data such as the collimation direction of sources and detectors. Our results can be used to improve the spatial resolution of DCT, in particular, with applications to blood flow imaging in regions where the Brownian motion is dominant.

  13. Diffuse correlation tomography in the transport regime: A theoretical study of the sensitivity to Brownian motion

    NASA Astrophysics Data System (ADS)

    Tricoli, Ugo; Macdonald, Callum M.; Durduran, Turgut; Da Silva, Anabela; Markel, Vadim A.

    2018-02-01

    Diffuse correlation tomography (DCT) uses the electric-field temporal autocorrelation function to measure the mean-square displacement of light-scattering particles in a turbid medium over a given exposure time. The movement of blood particles is here estimated through a Brownian-motion-like model in contrast to ordered motion as in blood flow. The sensitivity kernel relating the measurable field correlation function to the mean-square displacement of the particles can be derived by applying a perturbative analysis to the correlation transport equation (CTE). We derive an analytical expression for the CTE sensitivity kernel in terms of the Green's function of the radiative transport equation, which describes the propagation of the intensity. We then evaluate the kernel numerically. The simulations demonstrate that, in the transport regime, the sensitivity kernel provides sharper spatial information about the medium as compared with the correlation diffusion approximation. Also, the use of the CTE allows one to explore some additional degrees of freedom in the data such as the collimation direction of sources and detectors. Our results can be used to improve the spatial resolution of DCT, in particular, with applications to blood flow imaging in regions where the Brownian motion is dominant.

  14. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach

    PubMed Central

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-01-01

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202

  15. Modeling and analysis of UN TRISO fuel for LWR application using the PARFUME code

    NASA Astrophysics Data System (ADS)

    Collin, Blaise P.

    2014-08-01

    The Idaho National Laboratory (INL) PARFUME (PARticle FUel ModEl) code was used to assess the overall fuel performance of uranium nitride (UN) tristructural isotropic (TRISO) ceramic fuel under irradiation conditions typical of a Light Water Reactor (LWR). The dimensional changes of the fuel particle layers and kernel were calculated, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burn-up. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burn-up. These material properties have large uncertainties at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, a large experimental effort would be needed to establish material properties, including kernel and PyC swelling rates, under these conditions before definitive conclusions can be drawn on the behavior of UN TRISO fuel in LWRs.

  16. Generation of a novel phase-space-based cylindrical dose kernel for IMRT optimization.

    PubMed

    Zhong, Hualiang; Chetty, Indrin J

    2012-05-01

    Improving dose calculation accuracy is crucial in intensity-modulated radiation therapy (IMRT). We have developed a method for generating a phase-space-based dose kernel for IMRT planning of lung cancer patients. Particle transport in the linear accelerator treatment head of a 21EX, 6 MV photon beam (Varian Medical Systems, Palo Alto, CA) was simulated using the EGSnrc/BEAMnrc code system. The phase space information was recorded under the secondary jaws. Each particle in the phase space file was associated with a beamlet whose index was calculated and saved in the particle's LATCH variable. The DOSXYZnrc code was modified to accumulate the energy deposited by each particle based on its beamlet index. Furthermore, the central axis of each beamlet was calculated from the orientation of all the particles in this beamlet. A cylinder was then defined around the central axis so that only the energy deposited within the cylinder was counted. A look-up table was established for each cylinder during the tallying process. The efficiency and accuracy of the cylindrical beamlet energy deposition approach was evaluated using a treatment plan developed on a simulated lung phantom. Profile and percentage depth doses computed in a water phantom for an open, square field size were within 1.5% of measurements. Dose optimized with the cylindrical dose kernel was found to be within 0.6% of that computed with the nontruncated 3D kernel. The cylindrical truncation reduced optimization time by approximately 80%. A method for generating a phase-space-based dose kernel, using a truncated cylinder for scoring dose, in beamlet-based optimization of lung treatment planning was developed and found to be in good agreement with the standard, nontruncated scoring approach. Compared to previous techniques, our method significantly reduces computational time and memory requirements, which may be useful for Monte-Carlo-based 4D IMRT or IMAT treatment planning.
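
    The cylindrical scoring idea above (tally a particle's energy deposition only if it falls within a cylinder around its beamlet's central axis) can be sketched as follows; the beamlet axes, radius, and data are synthetic stand-ins, not the modified DOSXYZnrc implementation.

    ```python
    import numpy as np

    def distance_to_axis(points, axis_origin, axis_dir):
        """Perpendicular distance of points (N, 3) from a line through
        axis_origin with unit direction axis_dir."""
        d = points - axis_origin
        along = d @ axis_dir
        return np.linalg.norm(d - np.outer(along, axis_dir), axis=1)

    def tally_beamlet_energy(points, energies, beamlet_ids, axes, radius):
        """Accumulate energy per beamlet, keeping only depositions that fall
        inside a cylinder of the given radius around that beamlet's axis."""
        totals = {}
        for b, (origin, direction) in axes.items():
            inside = distance_to_axis(points, origin, direction) <= radius
            totals[b] = energies[(beamlet_ids == b) & inside].sum()
        return totals

    # Toy example: two beamlets with axes parallel to z.
    rng = np.random.default_rng(1)
    pts = rng.uniform(-2, 2, size=(1000, 3))
    en = rng.uniform(0, 1, size=1000)
    ids = rng.integers(0, 2, size=1000)
    axes = {0: (np.array([-1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])),
            1: (np.array([+1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))}
    print(tally_beamlet_energy(pts, en, ids, axes, radius=0.8))
    ```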

  17. Utilization of wild apricot kernel press cake for extraction of protein isolate.

    PubMed

    Sharma, P C; Tilakratne, B M K S; Gupta, Anil

    2010-12-01

    The kernels of apricot (Prunus armeniaca) stones are utilized for extraction of oil. The press cake left after extraction of oil was evaluated for preparation of a protein isolate for use in food supplementation. The apricot kernels contained 45-50% oil, 23.6-26.2% protein, 4.2% ash, 5.42% crude fibre, 8.2% carbohydrates and 90 mg HCN/100 g kernels, while the press cake obtained after oil extraction contained 34.5% crude protein, which can be utilized for preparation of protein isolates. The method standardized for extraction of the protein isolate broadly consisted of boiling the press cake with water in a 1:20 (w/v) ratio for 1 h, raising the pH to 8 and stirring for a few minutes, followed by filtration, coagulation at pH 4, sieving, pressing of the coagulant overnight, and drying and grinding, which resulted in extraction of about 71.3% of the protein contained in the press cake. The protein isolate contained 68.8% protein, 6.4% crude fat, 0.8% ash, 2.2% crude fibre and 12.7% carbohydrates. Thus the apricot kernel press cake can be utilized for preparation of a protein isolate to improve the nutritional status of many food formulations.
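
    For a quick mass-balance check of the figures reported above, the arithmetic below estimates the isolate yield per 100 g of press cake; treating all percentages as being on the same as-is basis is an assumption.

    ```python
    # Rough mass balance per 100 g of apricot kernel press cake, using the figures
    # quoted in the abstract (all assumed to be on the same as-is basis).
    press_cake_mass = 100.0                  # g
    protein_fraction_cake = 0.345            # 34.5% crude protein in press cake
    recovery = 0.713                         # 71.3% of cake protein ends up in the isolate
    protein_fraction_isolate = 0.688         # 68.8% protein in the isolate

    protein_recovered = press_cake_mass * protein_fraction_cake * recovery
    isolate_yield = protein_recovered / protein_fraction_isolate
    print(f"protein recovered: {protein_recovered:.1f} g, "
          f"isolate yield: {isolate_yield:.1f} g per 100 g cake")
    ```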

  18. Triso coating development progress for uranium nitride kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jolly, Brian C.; Lindemer, Terrence; Terrani, Kurt A.

    2015-08-01

    In support of fully ceramic matrix (FCM) fuel development [1-2], coating development work is ongoing at the Oak Ridge National Laboratory (ORNL) to produce tri-structural isotropic (TRISO) coated fuel particles with UN kernels [3]. The nitride kernels are used to increase fissile density in these SiC-matrix fuel pellets, with details described elsewhere [4]. The advanced gas reactor (AGR) program at ORNL used fluidized bed chemical vapor deposition (FBCVD) techniques for TRISO coating of UCO (two-phase mixture of UO2 and UCx) kernels [5]. Similar techniques were employed for coating of the UN kernels; however, significant changes in processing conditions were required to maintain acceptable coating properties due to physical property and dimensional differences between the UCO and UN kernels (Table 1).

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jolly, Brian C.; Lindemer, Terrence; Terrani, Kurt A.

    In support of fully ceramic matrix (FCM) fuel development, coating development work has begun at the Oak Ridge National Laboratory (ORNL) to produce tri-isotropic (TRISO) coated fuel particles with UN kernels. The nitride kernels are used to increase heavy metal density in these SiC-matrix fuel pellets, with details described elsewhere. The advanced gas reactor (AGR) program at ORNL used fluidized bed chemical vapor deposition (FBCVD) techniques for TRISO coating of UCO (two-phase mixture of UO2 and UCx) kernels. Similar techniques were employed for coating of the UN kernels; however, significant changes in processing conditions were required to maintain acceptable coating properties due to physical property and dimensional differences between the UCO and UN kernels.

  20. Balancing Particle and Mesh Computation in a Particle-In-Cell Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Worley, Patrick H; D'Azevedo, Eduardo; Hager, Robert

    2016-01-01

    The XGC1 plasma microturbulence particle-in-cell simulation code has both particle-based and mesh-based computational kernels that dominate performance. Both of these are subject to load imbalances that can degrade performance and that evolve during a simulation. Each can be addressed adequately on its own, but optimizing for just one can introduce significant load imbalance in the other, degrading overall performance. A technique based on Golden Section Search has been developed that minimizes wallclock time given prior information on wallclock time and on the current particle distribution and mesh cost per cell, and that also adapts to the evolution of load imbalance in both particle and mesh work. In problems of interest this doubled the performance of full system runs on the XK7 at the Oak Ridge Leadership Computing Facility compared to load balancing only one of the kernels.
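
    The core numerical tool named above is a one-dimensional golden-section search over a balance parameter. The sketch below applies a textbook golden-section search to a hypothetical wallclock model in which a single parameter trades particle-work imbalance against mesh-work imbalance; the cost model is an assumption, not the XGC1 one.

    ```python
    import math

    def golden_section_minimize(f, a, b, tol=1e-4):
        """Standard golden-section search for a unimodal function on [a, b]."""
        invphi = (math.sqrt(5.0) - 1.0) / 2.0
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        while abs(b - a) > tol:
            if f(c) < f(d):
                b, d = d, c
                c = b - invphi * (b - a)
            else:
                a, c = c, d
                d = a + invphi * (b - a)
        return 0.5 * (a + b)

    def wallclock_model(split):
        """Hypothetical cost model: split in [0, 1] weights the partitioner toward
        balancing particle work (split -> 1) or mesh work (split -> 0)."""
        particle_imbalance = 1.0 + 4.0 * (1.0 - split) ** 2
        mesh_imbalance = 1.0 + 2.5 * split ** 2
        return max(particle_imbalance, mesh_imbalance)  # the slowest rank sets the pace

    best = golden_section_minimize(wallclock_model, 0.0, 1.0)
    print(f"best split = {best:.3f}, modeled wallclock = {wallclock_model(best):.3f}")
    ```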

  1. 7 CFR 51.2126 - Particles and dust.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Particles and dust. 51.2126 Section 51.2126 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... § 51.2126 Particles and dust. Particles and dust means fragments of almond kernels or other material...

  2. 7 CFR 51.2126 - Particles and dust.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Particles and dust. 51.2126 Section 51.2126 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... § 51.2126 Particles and dust. Particles and dust means fragments of almond kernels or other material...

  3. Discrete element method as an approach to model the wheat milling process

    USDA-ARS?s Scientific Manuscript database

    It is a well-known phenomenon that break-release, particle size, and size distribution of wheat milling are functions of machine operational parameters and grain properties. Due to the non-uniformity of characteristics and properties of wheat kernels, the kernel physical and mechanical properties af...

  4. 7 CFR 51.2126 - Particles and dust.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Particles and dust. 51.2126 Section 51.2126... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2126 Particles and dust. Particles and dust means fragments of almond kernels or other material which will pass through a round...

  5. 7 CFR 51.2126 - Particles and dust.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Particles and dust. 51.2126 Section 51.2126... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2126 Particles and dust. Particles and dust means fragments of almond kernels or other material which will pass through a round...

  6. 7 CFR 51.2126 - Particles and dust.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Particles and dust. 51.2126 Section 51.2126... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2126 Particles and dust. Particles and dust means fragments of almond kernels or other material which will pass through a round...

  7. Data Compilation for AGR-3/4 Designed-to-Fail (DTF) Fuel Particle Batch LEU04-02DTF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunn, John D; Miller, James Henry

    2008-10-01

    This document is a compilation of coating and characterization data for the AGR-3/4 designed-to-fail (DTF) particles. The DTF coating is a high density, high anisotropy pyrocarbon coating of nominal 20 μm thickness that is deposited directly on the kernel. The purpose of this coating is to fail early in the irradiation, resulting in a controlled release of fission products which can be analyzed to provide data on fission product transport. A small number of DTF particles will be included with standard TRISO driver fuel particles in the AGR-3 and AGR-4 compacts. The ORNL Coated Particle Fuel Development Laboratory 50-mm diameter fluidized bed coater was used to coat the DTF particles. The coatings were produced using procedures and process parameters that were developed in an earlier phase of the project as documented in 'Summary Report on the Development of Procedures for the Fabrication of AGR-3/4 Design-to-Fail Particles', ORNL/TM-2008/161. Two coating runs were conducted using the approved coating parameters. NUCO425-06DTF was a final process qualification batch using natural enrichment uranium carbide/uranium oxide (UCO) kernels. After the qualification run, LEU04-02DTF was produced using low enriched UCO kernels. Both runs were inspected and determined to meet the specifications for DTF particles in section 5 of the AGR-3 & 4 Fuel Product Specification (EDF-6638, Rev.1). Table 1 provides a summary of key properties of the DTF layer. For comparison purposes, an archive sample of DTF particles produced by General Atomics was characterized using identical methods. This data is also summarized in Table 1.

  8. 7 CFR 51.2105 - U.S. Fancy.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... decay, rancidity, insect injury, foreign material, doubles, split or broken kernels, particles and dust... percentage shall be allowed for glass and metal; (e) For particles and dust. One-tenth of 1 percent (0.10...

  9. ELECTRON PROBE MICROANALYSIS OF IRRADIATED AND 1600°C SAFETY-TESTED AGR-1 TRISO FUEL PARTICLES WITH LOW AND HIGH RETAINED 110MAG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, Karen E.; van Rooyen, Isabella J.

    2016-11-01

    AGR-1 fuel Compact 4-3-3 achieved 18.63% FIMA and was subsequently exposed to a safety test at 1600°C. Two particles, AGR1-433-003 and AGR1-433-007, with measured-to-calculated 110mAg inventories of <22% and 100%, respectively, were selected for comparative electron microprobe analysis to determine whether the distribution or abundance of fission products differed proximally and distally from the deformed kernel in AGR1-433-003, and how this compared to the fission product distribution in AGR1-433-007. On the deformed side of AGR1-433-003, Xe, Cs, I, Eu, Sr, and Te concentrations at the kernel-buffer interface near the protruded kernel were up to six times higher than on the opposite, non-deformed side. At the SiC-inner pyrolytic carbon (IPyC) interface proximal to the deformed kernel, Pd and Ag concentrations were 1.2 wt% and 0.04 wt%, respectively, whereas at the SiC-IPyC interface distal from the kernel deformation those elements measured 0.4 and 0.01 wt%, respectively. Palladium and Ag concentrations at the SiC-IPyC interface of AGR1-433-007 were 2.05 and 0.05 wt%, respectively. Rare earth element concentrations at the SiC-IPyC interface of AGR1-433-007 were a factor of ten higher than at the SiC-IPyC interfaces measured in particle AGR1-433-003. Palladium permeated the SiC layer of AGR1-433-007 and the non-deformed SiC layer of AGR1-433-003.

  10. Combining kernel matrix optimization and regularization to improve particle size distribution retrieval

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei

    2018-05-01

    A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
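
    A minimal sketch of the combination described above (a regularized least-squares retrieval with a second-order differential matrix as the regularization operator) is given below. The kernel matrix is a synthetic smoothing kernel standing in for the instrument's scattering kernel, and the regularization parameter is fixed by hand rather than selected as in the paper.

    ```python
    import numpy as np

    def second_order_difference_matrix(n):
        """Regularization operator L with rows [1, -2, 1]: penalizes curvature of the PSD."""
        L = np.zeros((n - 2, n))
        for i in range(n - 2):
            L[i, i:i + 3] = [1.0, -2.0, 1.0]
        return L

    def tikhonov_retrieve(K, g, lam):
        """f = argmin ||K f - g||^2 + lam ||L f||^2 (non-negativity not enforced here)."""
        L = second_order_difference_matrix(K.shape[1])
        return np.linalg.solve(K.T @ K + lam * L.T @ L, K.T @ g)

    # Synthetic stand-in for a multi-wavelength scattering kernel (not a Mie kernel).
    rng = np.random.default_rng(0)
    n_meas, n_size = 30, 40
    grid_m = np.linspace(0, 1, n_meas)[:, None]
    grid_s = np.linspace(0, 1, n_size)[None, :]
    K = np.exp(-((grid_m - grid_s) ** 2) / 0.02)
    sizes = np.linspace(0, 1, n_size)
    f_true = np.exp(-((sizes - 0.5) ** 2) / 0.01)       # single-mode PSD
    g = K @ f_true + 0.01 * rng.normal(size=n_meas)     # noisy measurement
    f_hat = tikhonov_retrieve(K, g, lam=1e-3)
    print("relative retrieval error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
    ```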

  11. Product demand forecasts using wavelet kernel support vector machine and particle swarm optimization in manufacture system

    NASA Astrophysics Data System (ADS)

    Wu, Qi

    2010-03-01

    Demand forecasts play a crucial role in supply chain management. The future demand for a certain product is the basis for the respective replenishment systems. For demand series with small samples, seasonal character, nonlinearity, randomness and fuzziness, existing support vector kernels do not approximate the random curve of the sales time series well in the quadratic continuous integral space. In this paper, we present a hybrid intelligent system combining the wavelet kernel support vector machine and particle swarm optimization for demand forecasting. The results of application to car sales series forecasting show that the forecasting approach based on the hybrid PSOWv-SVM model is effective and feasible. A comparison between the proposed method and other approaches is also given, which shows that, for the discussed example, this method outperforms hybrid PSOv-SVM and other traditional methods.
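
    The wavelet kernel referred to above is commonly written as a product of Morlet-type factors. The sketch below builds such a kernel and plugs it into a precomputed-kernel support vector regressor on a toy seasonal series; the kernel width, SVR settings, and sliding-window setup are assumptions standing in for the PSO-tuned values.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    def wavelet_kernel(X, Y, a=1.0):
        """Morlet-type wavelet kernel:
        K(x, y) = prod_i cos(1.75*(x_i - y_i)/a) * exp(-(x_i - y_i)^2 / (2 a^2))."""
        diff = X[:, None, :] - Y[None, :, :]
        return np.prod(np.cos(1.75 * diff / a) * np.exp(-diff ** 2 / (2.0 * a ** 2)), axis=-1)

    # Toy seasonal-looking demand series turned into a sliding-window regression task.
    t = np.arange(120, dtype=float)
    demand = 10.0 + 2.0 * np.sin(2 * np.pi * t / 12) \
             + 0.3 * np.random.default_rng(0).normal(size=t.size)
    window = 6
    X = np.array([demand[i:i + window] for i in range(len(demand) - window)])
    y = demand[window:]

    K_train = wavelet_kernel(X, X, a=2.0)     # kernel width a would be PSO-tuned in the paper
    model = SVR(kernel="precomputed", C=10.0, epsilon=0.1).fit(K_train, y)
    print("in-sample R^2:", model.score(K_train, y))
    ```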

  12. On the Asymptotic Behavior of the Kernel Function in the Generalized Langevin Equation: A One-Dimensional Lattice Model

    NASA Astrophysics Data System (ADS)

    Chu, Weiqi; Li, Xiantao

    2018-01-01

    We present some estimates for the memory kernel function in the generalized Langevin equation, derived using the Mori-Zwanzig formalism from a one-dimensional lattice model in which the particles interact through nearest and second-nearest neighbors. The kernel function can be explicitly expressed in matrix form. The analysis focuses on the decay properties, both spatially and temporally, revealing a power-law behavior in both cases. The dependence on the level of coarse-graining is also studied.

  13. Generalized Langevin equation with tempered memory kernel

    NASA Astrophysics Data System (ADS)

    Liemert, André; Sandev, Trifce; Kantz, Holger

    2017-01-01

    We study a generalized Langevin equation for a free particle in the presence of a truncated power-law and Mittag-Leffler memory kernel. It is shown that, in the presence of truncation, the particle crosses over from subdiffusive behavior in the short time limit to normal diffusion in the long time limit. The case of the harmonic oscillator is considered as well, and the relaxation functions and the normalized displacement correlation function are given in exact form. By considering an external time-dependent periodic force we obtain resonant behavior even in the case of a free particle, due to the influence of the environment on the particle movement. Additionally, the double-peak phenomenon in the imaginary part of the complex susceptibility is observed. The truncation parameter is found to have a strong influence on the behavior of these quantities, and it is shown how it changes the critical frequencies. The normalized displacement correlation function for a fractional generalized Langevin equation is investigated as well. All the results are exact and given in terms of the three-parameter Mittag-Leffler function and the Prabhakar generalized integral operator, whose kernel contains a three-parameter Mittag-Leffler function. This kind of truncated Langevin equation can be highly relevant for the description of lateral diffusion of lipids and proteins in cell membranes.

  14. 7 CFR 51.2105 - U.S. Fancy.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., split or broken kernels, particles and dust, and free from injury caused by chipped and scratched... and metal; (e) For particles and dust. One-tenth of 1 percent (0.10 percent); and, (f) For other...

  15. 7 CFR 51.2107 - U.S. No. 1.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., split or broken kernels, particles and dust, and free from damage caused by chipped and scratched... percent). No part of this percentage shall be allowed for glass and metal; (e) For particles and dust. One...

  16. 7 CFR 51.2105 - U.S. Fancy.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., split or broken kernels, particles and dust, and free from injury caused by chipped and scratched... and metal; (e) For particles and dust. One-tenth of 1 percent (0.10 percent); and, (f) For other...

  17. 7 CFR 51.2107 - U.S. No. 1.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., split or broken kernels, particles and dust, and free from damage caused by chipped and scratched... percent). No part of this percentage shall be allowed for glass and metal; (e) For particles and dust. One...

  18. 7 CFR 51.2107 - U.S. No. 1.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... decay, rancidity, insect injury, foreign material, doubles, split or broken kernels, particles and dust... shall be allowed for glass and metal; (e) For particles and dust. One-tenth of 1 percent (0.10 percent...

  19. [Spectral scatter correction of coal samples based on quasi-linear local weighted method].

    PubMed

    Lei, Meng; Li, Ming; Ma, Xiao-Ping; Miao, Yan-Zi; Wang, Jian-Sheng

    2014-07-01

    The present paper puts forth a new spectral correction method based on quasi-linear expressions and a local weighted function. The first stage of the method is to examine three quasi-linear expressions to replace the original linear expression in the MSC method: quadratic, cubic, and growth-curve expressions. The local weighted function is then constructed by introducing four candidate kernel functions: the Gaussian, Epanechnikov, Biweight, and Triweight kernels. After adding this function to the basic estimation equation, the dependency between the original and ideal spectra is described more accurately and in more detail at each wavelength point. Furthermore, two analytical models were established, based respectively on PLS and on a PCA-BP neural network, which can be used to estimate the accuracy of the corrected spectra. Finally, the optimal correction mode was determined from the analytical results for different combinations of quasi-linear expression and local weighted function. Spectra of the same coal sample have different noise ratios when the sample is prepared at different particle sizes. To validate the effectiveness of the method, the experiment analyzed the correction results of three spectral data sets with particle sizes of 0.2, 1 and 3 mm. The results show that the proposed method can eliminate the scattering influence and also enhance the information in the spectral peaks. This work provides a more efficient way to significantly enhance the correlation between corrected spectra and coal qualities, and to substantially improve the accuracy and stability of the analytical model.
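
    As a rough illustration of the idea (a quasi-linear fit of each spectrum against a reference, weighted locally at each wavelength by a kernel), the sketch below implements an Epanechnikov-weighted quadratic variant of MSC. The correction formula, window width, and toy spectra are assumptions, not the paper's exact formulation.

    ```python
    import numpy as np

    def epanechnikov_weights(n, center, bandwidth):
        u = (np.arange(n) - center) / bandwidth
        return np.maximum(0.0, 1.0 - u ** 2)

    def local_weighted_quadratic_msc(spectrum, reference, bandwidth=25):
        """At each wavelength, fit spectrum ~ a + b*ref + c*ref^2 by Epanechnikov-weighted
        least squares over a local window, then correct with
        x_corr = (x - a - c*ref^2) / b (one plausible quasi-linear analogue of MSC)."""
        n = len(spectrum)
        corrected = np.empty(n)
        design = np.column_stack([np.ones(n), reference, reference ** 2])
        for k in range(n):
            w = epanechnikov_weights(n, k, bandwidth)
            W = design * w[:, None]
            a, b, c = np.linalg.lstsq(W.T @ design, W.T @ spectrum, rcond=None)[0]
            corrected[k] = (spectrum[k] - a - c * reference[k] ** 2) / b
        return corrected

    # Toy example: a reference spectrum and a scatter-distorted copy of it.
    wl = np.linspace(0, 1, 200)
    reference = 0.3 + 0.5 * wl + np.exp(-((wl - 0.4) ** 2) / 0.01) \
                + 0.5 * np.exp(-((wl - 0.7) ** 2) / 0.005)
    distorted = 0.2 + 1.4 * reference + 0.1 * reference ** 2
    corrected = local_weighted_quadratic_msc(distorted, reference)
    print("RMS before:", np.sqrt(np.mean((distorted - reference) ** 2)))
    print("RMS after: ", np.sqrt(np.mean((corrected - reference) ** 2)))
    ```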

  20. 7 CFR 51.2109 - U.S. Standard Sheller Run.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., doubles, split or broken kernels, particles and dust, and free from damage caused by chipped and scratched... percent (0.20 percent). No part of this percentage shall be allowed for glass and metal; (e) For particles...

  1. 7 CFR 51.2109 - U.S. Standard Sheller Run.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., insect injury, foreign material, doubles, split or broken kernels, particles and dust, and free from... glass and metal; (e) For particles and dust. One-tenth of 1 percent (0.10 percent); and, (f) For other...

  2. 7 CFR 51.2106 - U.S. Extra No. 1.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., doubles, split or broken kernels, particles and dust, and free from damage caused by chipped and scratched... percent). No part of this percentage shall be allowed for glass and metal; (e) For particles and dust. One...

  3. 7 CFR 51.2106 - U.S. Extra No. 1.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., doubles, split or broken kernels, particles and dust, and free from damage caused by chipped and scratched... percent). No part of this percentage shall be allowed for glass and metal; (e) For particles and dust. One...

  4. 7 CFR 51.2109 - U.S. Standard Sheller Run.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., insect injury, foreign material, doubles, split or broken kernels, particles and dust, and free from... glass and metal; (e) For particles and dust. One-tenth of 1 percent (0.10 percent); and, (f) For other...

  5. 7 CFR 51.2106 - U.S. Extra No. 1.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., particles and dust, and free from damage caused by chipped and scratched kernels, mold, gum, shriveling... percentage shall be allowed for glass and metal; (e) For particles and dust. One-tenth of 1 percent (0.10...

  6. HBTprogs Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, D; Danielewicz, P

    2002-03-15

    This is the manual for a collection of programs that can be used to invert angle-averaged (i.e., one-dimensional) two-particle correlation functions. This package consists of several programs that generate kernel matrices (basically the relative wavefunction of the pair, squared), programs that generate test correlation functions from test sources of various types, and the program that actually inverts the data using the kernel matrix.
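
    The inversion such a package performs can be sketched, in the angle-averaged (Koonin-Pratt) form C(q) - 1 = 4*pi * integral dr r^2 K(q, r) S(r), as a regularized linear least-squares problem. The code below uses the free, non-interacting identical-boson kernel K = sin(2qr)/(2qr) as a stand-in for the interaction-aware kernels the package generates, and a hand-picked regularization strength.

    ```python
    import numpy as np

    def free_pair_kernel(q, r):
        """Angle-averaged kernel |phi|^2 - 1 for non-interacting identical bosons,
        K(q, r) = sin(2qr)/(2qr); q in MeV/c, r in fm (hbar*c = 197.327 MeV fm)."""
        x = 2.0 * q * r / 197.327
        return np.sinc(x / np.pi)            # np.sinc(y) = sin(pi*y)/(pi*y)

    def build_kernel_matrix(q_vals, r_vals):
        dr = r_vals[1] - r_vals[0]
        return 4.0 * np.pi * dr * r_vals ** 2 * free_pair_kernel(q_vals[:, None], r_vals[None, :])

    def invert_correlation(q_vals, corr_minus_one, r_vals, lam=1e-3):
        """Regularized least-squares inversion of C(q) - 1 = K S for the source S(r)."""
        K = build_kernel_matrix(q_vals, r_vals)
        return np.linalg.solve(K.T @ K + lam * np.eye(len(r_vals)), K.T @ corr_minus_one)

    # Toy test: a Gaussian source of radius R = 5 fm recovered from its own correlation.
    r = np.linspace(0.1, 30.0, 80)
    q = np.linspace(1.0, 100.0, 120)
    R = 5.0
    S_true = np.exp(-r ** 2 / (4 * R ** 2)) / (4 * np.pi * R ** 2) ** 1.5
    c_minus_1 = build_kernel_matrix(q, r) @ S_true
    S_rec = invert_correlation(q, c_minus_1, r)
    print("relative error:", np.linalg.norm(S_rec - S_true) / np.linalg.norm(S_true))
    ```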

  7. On The Cloud Processing of Aerosol Particles: An Entraining Air Parcel Model With Two-dimensional Spectral Cloud Microphysics and A New Formulation of The Collection Kernel

    NASA Astrophysics Data System (ADS)

    Bott, Andreas; Kerkweg, Astrid; Wurzler, Sabine

    A study has been made of the modification of aerosol spectra due to cloud processes and the impact of the modified aerosols on the microphysical structure of future clouds. For this purpose an entraining air parcel model with two-dimensional spectral cloud microphysics has been used. In order to treat collision/coalescence processes in the two-dimensional microphysical module, a new realistic and continuous formulation of the collection kernel has been developed. Based on experimental data, the kernel covers the entire investigated size range of aerosols, cloud and rain drops; that is, the kernel combines all important coalescence processes, such as the collision of cloud drops as well as the impaction scavenging of small aerosols by big raindrops. Since chemical reactions in the gas phase and in cloud drops have an important impact on the physico-chemical properties of aerosol particles, the parcel model has been extended by a chemical module describing gas phase and aqueous phase chemical reactions. However, it will be shown that in the numerical case studies presented in this paper the modification of aerosols by chemical reactions has a minor influence on the microphysical structure of future clouds. The major process yielding an enhanced formation of rain in a second cloud event is the production of large aerosol particles by collision/coalescence processes in the first cloud.
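
    For orientation, the classical geometric-sweepout form of a collection kernel, K = pi*(r1 + r2)^2 * |v1 - v2| * E(r1, r2), is sketched below with Stokes terminal velocities and a constant collision efficiency; the continuous kernel developed in the paper, spanning aerosols through raindrops, is considerably more elaborate, and all numbers here are placeholders.

    ```python
    import numpy as np

    RHO_WATER = 1000.0      # kg m^-3
    RHO_AIR = 1.2           # kg m^-3
    MU_AIR = 1.8e-5         # Pa s
    G = 9.81                # m s^-2

    def stokes_terminal_velocity(radius):
        """Stokes settling speed; only valid for small drops, used here for illustration."""
        return 2.0 * (RHO_WATER - RHO_AIR) * G * radius ** 2 / (9.0 * MU_AIR)

    def collection_kernel(r1, r2, efficiency=0.8):
        """Geometric-sweepout kernel K = pi (r1+r2)^2 |v1 - v2| E.
        The constant collision efficiency is a placeholder; in practice E depends
        strongly on both radii."""
        dv = abs(stokes_terminal_velocity(r1) - stokes_terminal_velocity(r2))
        return np.pi * (r1 + r2) ** 2 * dv * efficiency

    # A 20-micron cloud droplet collected by a 50-micron drop.
    print(f"K = {collection_kernel(20e-6, 50e-6):.3e} m^3 s^-1")
    ```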

  8. Performance of fly ash based geopolymer incorporating palm kernel shell for lightweight concrete

    NASA Astrophysics Data System (ADS)

    Razak, Rafiza Abd; Abdullah, Mohd Mustafa Al Bakri; Yahya, Zarina; Jian, Ang Zhi; Nasri, Armia

    2017-09-01

    A concrete in which the cement is totally replaced by a source material such as fly ash activated by highly alkaline solutions is known as geopolymer concrete. Fly ash is the most common source material for geopolymers because it is a by-product material and can be obtained easily all around the world. An investigation has been carried out to select the most suitable ingredients of geopolymer concrete so that it can achieve the desired compressive strength. Samples were prepared to determine the suitable percentage of palm kernel shell in geopolymer concrete and cured for 7 days in an oven. After that, further samples were prepared using this percentage of palm kernel shell and cured for 3, 14, 21 and 28 days in an oven. A control sample consisting of ordinary Portland cement and palm kernel shell, cured for 28 days, was prepared as well. The NaOH concentration of 12 M, ratio of Na2SiO3 to NaOH of 2.5, ratio of fly ash to alkaline activator solution of 2.0 and ratio of water to geopolymer of 0.35 were fixed throughout the research. The samples had a density of 1.78 kg/m3, a water absorption of 20.41% and a compressive strength of 14.20 MPa. The compressive strength of the geopolymer concrete is still acceptable for lightweight concrete although it is lower than that of OPC concrete. Therefore, the proposed method of using fly ash mixed with 10% palm kernel shell can be used to design geopolymer concrete.
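
    For reference, the batch quantities implied by the quoted mix ratios work out as below, per 1 kg of fly ash; the water:geopolymer ratio is left out because its exact basis is not stated in the abstract.

    ```python
    # Batch quantities implied by the mix ratios quoted in the abstract, per 1 kg of fly ash.
    fly_ash = 1.0                                   # kg
    activator = fly_ash / 2.0                       # fly ash : activator = 2.0
    naoh = activator / (1.0 + 2.5)                  # Na2SiO3 : NaOH = 2.5
    na2sio3 = activator - naoh
    print(f"activator: {activator:.3f} kg  ->  NaOH (12 M): {naoh:.3f} kg, Na2SiO3: {na2sio3:.3f} kg")
    ```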

  9. Suspended liquid particle disturbance on laser-induced blast wave and low density distribution

    NASA Astrophysics Data System (ADS)

    Ukai, Takahiro; Zare-Behtash, Hossein; Kontis, Konstantinos

    2017-12-01

    The impurity effect of suspended liquid particles on laser-induced gas breakdown was experimentally investigated in quiescent gas. The focus of this study is the influence of the impurities on the shock wave structure as well as on the low density distribution. A 532 nm Nd:YAG laser beam with an energy of 188 mJ/pulse was focused in a chamber filled with suspended liquid particles 0.9 ± 0.63 μm in diameter. Several shock waves are generated by multiple gas breakdowns along the beam path in the breakdown with particles. Four types of shock wave structures can be observed: (1) dual blast waves with similar shock radii, (2) dual blast waves with a larger shock radius at the lower breakdown, (3) dual blast waves with a larger shock radius at the upper breakdown, and (4) triple blast waves. The independent blast waves interact with each other and enhance the shock strength behind the shock front in the lateral direction. The triple blast waves lead to the strongest shock wave in all cases. The shock wave fronts that propagate toward the opposite laser focal spots impinge on one another, and thereafter a transmitted shock wave (TSW) appears. The TSW interacts with the low density core called a kernel; the kernel then expands quickly in the longitudinal direction due to a Richtmyer-Meshkov-like instability. The laser-particle interaction causes an increase in the kernel volume which is approximately five times as large as that in the gas breakdown without particles. In addition, the laser-particle interaction can improve the laser energy efficiency.

  10. Particle-in-cell simulations on graphic processing units

    NASA Astrophysics Data System (ADS)

    Ren, C.; Zhou, X.; Li, J.; Huang, M. C.; Zhao, Y.

    2014-10-01

    We will show our recent progress in using GPUs to accelerate the PIC code OSIRIS [Fonseca et al. LNCS 2331, 342 (2002)]. The OSIRIS parallel structure is retained and the computation-intensive kernels are shipped to GPUs. Algorithms for the kernels are adapted for the GPU, including high-order charge-conserving current deposition schemes with little branching and parallel particle sorting [Kong et al., JCP 230, 1676 (2011)]. These algorithms make efficient use of the GPU shared memory. This work was supported by U.S. Department of Energy under Grant No. DE-FC02-04ER54789 and by NSF under Grant No. PHY-1314734.
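
    The particle-sorting step mentioned above is what makes the deposition kernels shared-memory friendly: particles are grouped by cell so that one cell's particles can be processed with contiguous accesses. A numpy stand-in for that sorting step (not the CUDA implementation) is sketched below.

    ```python
    import numpy as np

    def sort_particles_by_cell(x, y, nx, ny, dx, dy):
        """Return a permutation that groups particles by their (ix, iy) cell so a
        deposition kernel can process each cell's particles with contiguous memory
        accesses (a CPU/numpy stand-in for a GPU sorting kernel)."""
        ix = np.clip((x / dx).astype(int), 0, nx - 1)
        iy = np.clip((y / dy).astype(int), 0, ny - 1)
        cell = iy * nx + ix
        order = np.argsort(cell, kind="stable")
        return order, cell[order]

    rng = np.random.default_rng(0)
    x, y = rng.uniform(0, 1, 10000), rng.uniform(0, 1, 10000)
    order, sorted_cells = sort_particles_by_cell(x, y, nx=32, ny=32, dx=1 / 32, dy=1 / 32)
    counts = np.bincount(sorted_cells, minlength=32 * 32)   # particles per cell
    print("max / min particles per cell:", counts.max(), counts.min())
    ```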

  11. Enriched reproducing kernel particle method for fractional advection-diffusion equation

    NASA Astrophysics Data System (ADS)

    Ying, Yuping; Lian, Yanping; Tang, Shaoqiang; Liu, Wing Kam

    2018-06-01

    The reproducing kernel particle method (RKPM) has been efficiently applied to problems with large deformations, high gradients and high modal density. In this paper, it is extended to solve a nonlocal problem modeled by a fractional advection-diffusion equation (FADE), which exhibits a boundary layer with low regularity. We formulate this method based on a moving least-squares approach. Via the enrichment of fractional-order power functions to the traditional integer-order basis for RKPM, leading terms of the solution to the FADE can be exactly reproduced, which guarantees a good approximation to the boundary layer. Numerical tests are performed to verify the proposed approach.

  12. A Kernel-Free Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 4

    NASA Technical Reports Server (NTRS)

    Park, Young-Keun; Fahrenthold, Eric P.

    2004-01-01

    An improved hybrid particle-finite element method has been developed for the simulation of hypervelocity impact problems. Unlike alternative methods, the revised formulation computes the density without reference to any kernel or interpolation functions, for either the density or the rate of dilatation. This simplifies the state space model and leads to a significant reduction in computational cost. The improved method introduces internal energy variables as generalized coordinates in a new formulation of the thermomechanical Lagrange equations. Example problems show good agreement with exact solutions in one dimension and good agreement with experimental data in a three dimensional simulation.

  13. Analysis and Implementation of Particle-to-Particle (P2P) Graphics Processor Unit (GPU) Kernel for Black-Box Adaptive Fast Multipole Method

    DTIC Science & Technology

    2015-06-01

    5110P and 16 dx360M4 nodes each with one NVIDIA Kepler K20M/K40M GPU. Each node contained dual Intel Xeon E5-2670 (Sandy Bridge) central processing...kernel and as such does not employ multiple processors. This work makes use of a single processing core and a single NVIDIA Kepler K40 GK110...bandwidth (2 × 16 slot), 7.877 GFloat/s; Kepler K40 peak, 4,290 × 1 billion floating-point operations (GFLOPs), and 288 GB/s Kepler K40 memory

  14. Kernel and divergence techniques in high energy physics separations

    NASA Astrophysics Data System (ADS)

    Bouř, Petr; Kůs, Václav; Franc, Jiří

    2017-10-01

    Binary decision trees under the Bayesian decision technique are used for supervised classification of high-dimensional data. We present the great potential of adaptive kernel density estimation as the nested separation method of the supervised binary divergence decision tree. We also provide a proof of an alternative computing approach for kernel estimates utilizing the Fourier transform. Further, we apply our method to a Monte Carlo data set from the particle accelerator Tevatron at the DØ experiment at Fermilab and provide final top-antitop signal separation results. We have achieved up to 82% AUC while using the restricted feature selection entering the signal separation procedure.
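
    A minimal sketch of kernel-density-based signal/background separation is given below: class-conditional densities are estimated with a Gaussian KDE (scipy's fixed-bandwidth estimator, used here as a simple stand-in for the adaptive estimator in the paper), a likelihood-ratio score is formed, and the AUC is computed by ranking. The toy features and sample sizes are assumptions.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def auc_from_scores(scores_signal, scores_background):
        """Mann-Whitney estimate of the area under the ROC curve."""
        all_scores = np.concatenate([scores_signal, scores_background])
        ranks = np.argsort(np.argsort(all_scores)) + 1
        n_s, n_b = len(scores_signal), len(scores_background)
        rank_sum = ranks[:n_s].sum()
        return (rank_sum - n_s * (n_s + 1) / 2) / (n_s * n_b)

    rng = np.random.default_rng(42)
    signal = rng.normal([1.0, 0.5], 1.0, size=(2000, 2))       # toy 2-feature events
    background = rng.normal([0.0, 0.0], 1.0, size=(2000, 2))

    # Fixed-bandwidth Gaussian KDE as a stand-in for the adaptive estimator.
    kde_s = gaussian_kde(signal.T)
    kde_b = gaussian_kde(background.T)

    test_s = rng.normal([1.0, 0.5], 1.0, size=(500, 2))
    test_b = rng.normal([0.0, 0.0], 1.0, size=(500, 2))
    score = lambda X: np.log(kde_s(X.T) + 1e-300) - np.log(kde_b(X.T) + 1e-300)
    print("AUC:", auc_from_scores(score(test_s), score(test_b)))
    ```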

  15. Green synthesis of Pd nanoparticles at Apricot kernel shell substrate using Salvia hydrangea extract: Catalytic activity for reduction of organic dyes.

    PubMed

    Khodadadi, Bahar; Bordbar, Maryam; Nasrollahzadeh, Mahmoud

    2017-03-15

    For the first time, the extract of the plant Salvia hydrangea was used for the green synthesis of Pd nanoparticles (NPs) supported on apricot kernel shell as an environmentally benign support. The Pd NPs/apricot kernel shell, an effective catalyst, was prepared through reduction of Pd2+ ions using Salvia hydrangea extract as the reducing and capping agent and immobilization of the Pd NPs on the apricot kernel shell surface in the absence of any stabilizer or surfactant. According to FT-IR analysis, the hydroxyl groups of phenolics in the Salvia hydrangea extract, acting as bioreductant agents, are directly responsible for the reduction of Pd2+ ions and the formation of Pd NPs. The as-prepared catalyst was characterized by Fourier transform infrared (FT-IR) and UV-Vis spectroscopy, field emission scanning electron microscopy (FESEM) equipped with energy dispersive X-ray spectroscopy (EDS), elemental mapping, X-ray diffraction analysis (XRD) and transmission electron microscopy (TEM). The synthesized catalyst was used in the reduction of 4-nitrophenol (4-NP), Methyl Orange (MO), Methylene Blue (MB), Rhodamine B (RhB), and Congo Red (CR) at room temperature. The Pd NPs/apricot kernel shell showed excellent catalytic activity in the reduction of these organic dyes. In addition, it was found that the Pd NPs/apricot kernel shell catalyst can be recovered and reused several times without significant loss of catalytic activity. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Hollow microspheres with a tungsten carbide kernel for PEMFC application.

    PubMed

    d'Arbigny, Julien Bernard; Taillades, Gilles; Marrony, Mathieu; Jones, Deborah J; Rozière, Jacques

    2011-07-28

    Tungsten carbide microspheres comprising an outer shell and a compact kernel prepared by a simple hydrothermal method exhibit very high surface area promoting a high dispersion of platinum nanoparticles, and an exceptionally high electrochemically active surface area (EAS) stability compared to the usual Pt/C electrocatalysts used for PEMFC application.

  17. 7 CFR 51.1437 - Size classifications for halves.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... weight of half-kernels after all pieces, particles and dust, shell, center wall, and foreign material..., particles, and dust. In order to allow for variations incident to proper sizing and handling, not more than 15 percent, by weight, of any lot may consist of pieces, particles, and dust: Provided, That not more...

  18. 7 CFR 51.1437 - Size classifications for halves.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... weight of half-kernels after all pieces, particles and dust, shell, center wall, and foreign material..., particles, and dust. In order to allow for variations incident to proper sizing and handling, not more than 15 percent, by weight, of any lot may consist of pieces, particles, and dust: Provided, That not more...

  19. 7 CFR 51.1437 - Size classifications for halves.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... halves per pound shall be based upon the weight of half-kernels after all pieces, particles and dust... specified range. (d) Tolerances for pieces, particles, and dust. In order to allow for variations incident..., particles, and dust: Provided, That not more than one-third of this amount, or 5 percent, shall be allowed...

  20. 7 CFR 51.1437 - Size classifications for halves.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... weight of half-kernels after all pieces, particles and dust, shell, center wall, and foreign material..., particles, and dust. In order to allow for variations incident to proper sizing and handling, not more than 15 percent, by weight, of any lot may consist of pieces, particles, and dust: Provided, That not more...

  1. 7 CFR 51.1437 - Size classifications for halves.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... halves per pound shall be based upon the weight of half-kernels after all pieces, particles and dust... specified range. (d) Tolerances for pieces, particles, and dust. In order to allow for variations incident..., particles, and dust: Provided, That not more than one-third of this amount, or 5 percent, shall be allowed...

  2. Indetermination of particle sizing by laser diffraction in the anomalous size ranges

    NASA Astrophysics Data System (ADS)

    Pan, Linchao; Ge, Baozhen; Zhang, Fugen

    2017-09-01

    The laser diffraction method is widely used to measure particle size distributions. It is generally accepted that, with increasing particle size, the scattering angle becomes smaller and the main peak of the scattered-energy distribution recorded by laser diffraction instruments shifts to smaller angles. This principle forms the foundation of the laser diffraction method. However, it is not entirely correct for non-absorbing particles in certain size ranges, which are called anomalous size ranges. Here, we derive analytical formulae for the bounds of the anomalous size ranges and discuss the influence of the width of the size segments on the signature of the Mie scattering kernel. This anomalous signature of the Mie scattering kernel results in an indetermination of the particle size distribution when it is measured by laser diffraction instruments in the anomalous size ranges. Using the singular-value decomposition method, we interpret the mechanism of this indetermination in detail and then validate its existence with inversion simulations.

  3. High-Performance Reactive Particle Tracking with Adaptive Representation

    NASA Astrophysics Data System (ADS)

    Schmidt, M.; Benson, D. A.; Pankavich, S.

    2017-12-01

    Lagrangian particle tracking algorithms have been shown to be effective tools for modeling chemical reactions in imperfectly-mixed media. One disadvantage of these algorithms is the possible need to employ large numbers of particles in simulations, depending on the concentration covariance structure, and these large particle numbers can lead to long computation times. Two distinct approaches have recently arisen to overcome this. One method employs spatial kernels that are related to a specified, reduced particle number; however, over-wide kernels, dictated by a very low particle number, lead to an excess of reaction calculations and cause a reduction in performance. Another formulation involves hybrid particles that carry multiple species of reactant, wherein each particle is treated as its own well-mixed volume, obviating the need for large numbers of particles for each species but still requiring a fixed number of hybrid particles. Here, we combine these two approaches and demonstrate an improved method for simulating a given system in a computationally efficient manner. Additionally, the independent nature of transport and reaction calculations in this approach allows for significant gains via parallelization in an MPI or OpenMP context. For benchmarking, we choose a CO2 injection simulation with dissolution and precipitation of calcite and dolomite, allowing us to derive the proper treatment of interaction between solid and aqueous phases.
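
    A stripped-down version of the kernel-based interaction step underlying such simulations is sketched below: hybrid particles exchange reactant mass pairwise with weights from a Gaussian co-location kernel, in a symmetric form that conserves total mass. Transport, the CO2/carbonate chemistry, and the parallel decomposition discussed above are all omitted, and the kernel scaling is a simplifying assumption.

    ```python
    import numpy as np

    def kernel_weighted_transfer(positions, masses, D, dt):
        """One simplified mass-transfer step between hybrid particles: pairwise
        exchange weighted by a Gaussian co-location kernel exp(-d^2 / (4 D dt)).
        The symmetric pairwise form conserves total mass exactly; it is a
        stripped-down stand-in for full mass-transfer schemes, with transport
        and chemistry omitted."""
        d2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (4.0 * D * dt))
        np.fill_diagonal(w, 0.0)
        w /= max(w.sum(axis=1).max(), 1.0)          # keep the explicit update stable
        return masses + 0.5 * (w * (masses[None, :] - masses[:, None])).sum(axis=1)

    rng = np.random.default_rng(3)
    pos = rng.uniform(0.0, 1.0, size=(500, 2))
    mass = np.where(pos[:, 0] < 0.5, 1.0, 0.0)       # reactant only on the left half
    for _ in range(20):
        mass = kernel_weighted_transfer(pos, mass, D=5e-3, dt=0.1)
    print("total mass (conserved):", mass.sum(),
          " spread into right half:", mass[pos[:, 0] >= 0.5].sum())
    ```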

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, William R.; Lee, John C.; Baxter, Alan

    Information and measured data from the initial Fort St. Vrain (FSV) high temperature gas reactor core are used to develop a benchmark configuration to validate computational methods for analysis of a full-core, commercial HTR configuration. Large uncertainties in the geometry and composition data for the FSV fuel and core are identified, including: (1) the relative numbers of fuel particles for the four particle types, (2) the distribution of fuel kernel diameters for the four particle types, (3) the Th:U ratio in the initial FSV core, and (4) the buffer thickness for the fissile and fertile particles. Sensitivity studies were performed to assess each of these uncertainties. A number of methods were developed to assist in these studies, including: (1) the automation of MCNP5 input files for FSV using Python scripts, (2) a simple method to verify isotopic loadings in MCNP5 input files, (3) an automated procedure to conduct a coupled MCNP5-RELAP5 analysis for a full-core FSV configuration with thermal-hydraulic feedback, and (4) a methodology for sampling kernel diameters from arbitrary power-law and Gaussian PDFs that preserved fuel loading and packing factor constraints. A reference FSV fuel configuration was developed based on a single kernel diameter for each of the four particle types, preserving the known uranium and thorium loadings and packing factor (58%). Three fuel models were developed, representing the fuel as a mixture of kernels with two diameters, four diameters, or a continuous range of diameters. The fuel particles were put into a fuel compact using either a lattice-based approach or a stochastic packing methodology from RPI, and simulated with MCNP5. The results of the sensitivity studies indicated that the uncertainties in the relative numbers and sizes of fissile and fertile kernels were not important, nor were the distributions of kernel diameters within their diameter ranges. The uncertainty in the Th:U ratio in the initial FSV core was found to be important in a crude study. The uncertainty in the TRISO buffer thickness was estimated to be unimportant, but the study was not conclusive. FSV fuel compacts and a regular FSV fuel element were analyzed with MCNP5 and compared with predictions using a modified version of HELIOS that is capable of analyzing TRISO fuel configurations. The HELIOS analyses were performed by SSP. The eigenvalue discrepancies between HELIOS and MCNP5 are currently on the order of 1% but are still being evaluated. Full-core FSV configurations were developed for two initial critical configurations: a cold, clean critical loading and a critical configuration at 70% power. MCNP5 predictions are compared to experimental data and the results are mixed. Analyses were also done for the pulsed neutron experiments that were conducted by GA for the initial FSV core. MCNP5 was used to model these experiments and reasonable agreement with measured results has been observed.
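
    One of the methodology items listed above, sampling kernel diameters from a prescribed distribution while preserving loading and packing-factor constraints, can be sketched as follows. The truncated-Gaussian parameters, compact dimensions, and the "fill to a target volume fraction" rule are illustrative assumptions, not the report's actual procedure or FSV data (only the 58% packing factor is taken from the abstract).

    ```python
    import numpy as np

    def sample_truncated_gaussian(mean, std, lo, hi, n, rng):
        """Rejection-sample n diameters from a Gaussian truncated to [lo, hi]."""
        out = []
        while len(out) < n:
            d = rng.normal(mean, std, size=n)
            out.extend(d[(d >= lo) & (d <= hi)])
        return np.array(out[:n])

    def particles_for_packing_fraction(diameters, compact_volume, packing_fraction):
        """Number of particles needed so the sampled-diameter population fills the
        target fraction of the compact volume."""
        mean_particle_volume = np.mean(np.pi / 6.0 * diameters ** 3)
        return int(round(packing_fraction * compact_volume / mean_particle_volume))

    rng = np.random.default_rng(7)
    # Placeholder values: 350 um mean diameter, 25 um spread, 58% packing fraction.
    d = sample_truncated_gaussian(350e-6, 25e-6, 300e-6, 400e-6, n=5000, rng=rng)
    compact_volume = np.pi / 4.0 * (12.45e-3) ** 2 * 49.3e-3   # a cylindrical compact (placeholder size)
    print("particles required:", particles_for_packing_fraction(d, compact_volume, 0.58))
    ```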

  5. Discontinuous functional for linear-response time-dependent density-functional theory: The exact-exchange kernel and approximate forms

    NASA Astrophysics Data System (ADS)

    Hellgren, Maria; Gross, E. K. U.

    2013-11-01

    We present a detailed study of the exact-exchange (EXX) kernel of time-dependent density-functional theory with an emphasis on its discontinuity at integer particle numbers. It was recently found that this exact property leads to sharp peaks and step features in the kernel that diverge in the dissociation limit of diatomic systems [Hellgren and Gross, Phys. Rev. A 85, 022514 (2012)]. To further analyze the discontinuity of the kernel, we here make use of two different approximations to the EXX kernel: the Petersilka-Gossmann-Gross (PGG) approximation and a common energy denominator approximation (CEDA). It is demonstrated that whereas the PGG approximation neglects the discontinuity, the CEDA includes it explicitly. By studying model molecular systems it is shown that the so-called field-counteracting effect in the density-functional description of molecular chains can be viewed in terms of the discontinuity of the static kernel. The role of the frequency dependence is also investigated, highlighting its importance for long-range charge-transfer excitations as well as inner-shell excitations.

  6. Carbothermic synthesis of 820 μm uranium nitride kernels: Literature review, thermodynamics, analysis, and related experiments

    NASA Astrophysics Data System (ADS)

    Lindemer, T. B.; Voit, S. L.; Silva, C. M.; Besmann, T. M.; Hunt, R. D.

    2014-05-01

    The US Department of Energy is developing a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with uranium nitride (UN) kernels with diameters near 825 μm. This effort explores factors involved in the conversion of uranium oxide-carbon microspheres into UN kernels. An analysis of previous studies with sufficient experimental details is provided. Thermodynamic calculations were made to predict pressures of carbon monoxide and other relevant gases for several reactions that can be involved in the conversion of uranium oxides and carbides into UN. Uranium oxide-carbon microspheres were heated in a microbalance with an attached mass spectrometer to determine details of calcining and carbothermic conversion in argon, nitrogen, and vacuum. A model was derived from experiments on the vacuum conversion to uranium oxide-carbide kernels. UN-containing kernels were fabricated using this vacuum conversion as part of the overall process. Carbonitride kernels of ∼89% of theoretical density were produced along with several observations concerning the different stages of the process.

  7. Discrete bivariate population balance modelling of heteroaggregation processes.

    PubMed

    Rollié, Sascha; Briesen, Heiko; Sundmacher, Kai

    2009-08-15

    Heteroaggregation in binary particle mixtures was simulated with a discrete population balance model in terms of two internal coordinates describing the particle properties. The considered particle species are of different size and zeta-potential. Property space is reduced with a semi-heuristic approach to enable an efficient solution. Aggregation rates are based on deterministic models for Brownian motion and stability, under consideration of DLVO interaction potentials. A charge-balance kernel is presented, relating the electrostatic surface potential to the property space by a simple charge balance. Parameter sensitivity with respect to the fractal dimension, aggregate size, hydrodynamic correction, ionic strength and absolute particle concentration was assessed. Results were compared to simulations with the literature kernel based on geometric coverage effects for clusters with heterogeneous surface properties. In both cases electrostatic phenomena, which dominate the aggregation process, show identical trends: impeded cluster-cluster aggregation at low particle mixing ratio (1:1), restabilisation at high mixing ratios (100:1) and formation of complex clusters for intermediate ratios (10:1). The particle mixing ratio controls the surface coverage extent of the larger particle species. Simulation results are compared to experimental flow cytometric data and show very satisfactory agreement.

  8. Evaluation of the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels using particle and heavy ion transport code system: PHITS.

    PubMed

    Shiiba, Takuro; Kuga, Naoya; Kuroiwa, Yasuyoshi; Sato, Tatsuhiko

    2017-10-01

    We assessed the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels (DPKs) calculated using the particle and heavy ion transport code system (PHITS) for patient-specific dosimetry in targeted radionuclide therapy (TRT) and compared our data with published data. All mono-energetic and beta-emitting isotope DPKs calculated using PHITS, both in water and in compact bone, were in good agreement with those reported in the literature using other MC codes. PHITS provided reliable mono-energetic electron and beta-emitting isotope scaled DPKs for patient-specific dosimetry. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Development of a Radial Deconsolidation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmreich, Grant W.; Montgomery, Fred C.; Hunn, John D.

    2015-12-01

    A series of experiments has been initiated to determine the retention or mobility of fission products in AGR fuel compacts [Petti, et al. 2010]. This information is needed to refine fission product transport models. The AGR-3/4 irradiation test involved half-inch-long compacts that each contained twenty designed-to-fail (DTF) particles, with 20-μm-thick carbon-coated kernels whose coatings were deliberately fabricated such that they would crack under irradiation, providing a known source of post-irradiation isotopes. The DTF particles in these compacts were axially distributed along the compact centerline so that the diffusion of fission products released from the DTF kernels would be radially symmetric [Hunn, et al. 2012; Hunn et al. 2011; Kercher, et al. 2011; Hunn, et al. 2007]. Compacts containing DTF particles were irradiated at Idaho National Laboratory (INL) at the Advanced Test Reactor (ATR) [Collin, 2015]. Analysis of the diffusion of these various post-irradiation isotopes through the compact requires a method to radially deconsolidate the compacts so that nested-annular volumes may be analyzed for post-irradiation isotope inventory in the compact matrix, TRISO outer pyrolytic carbon (OPyC), and DTF kernels. An effective radial deconsolidation method and apparatus appropriate to this application has been developed and parametrically characterized.

  10. TIME-DOMAIN METHODS FOR DIFFUSIVE TRANSPORT IN SOFT MATTER

    PubMed Central

    Fricks, John; Yao, Lingxing; Elston, Timothy C.; Forest, M. Gregory

    2015-01-01

    Passive microrheology [12] utilizes measurements of noisy, entropic fluctuations (i.e., diffusive properties) of micron-scale spheres in soft matter to infer bulk frequency-dependent loss and storage moduli. Here, we are concerned exclusively with diffusion of Brownian particles in viscoelastic media, for which the Mason-Weitz theoretical-experimental protocol is ideal, and the more challenging inference of bulk viscoelastic moduli is decoupled. The diffusive theory begins with a generalized Langevin equation (GLE) with a memory drag law specified by a kernel [7, 16, 22, 23]. We start with a discrete formulation of the GLE as an autoregressive stochastic process governing microbead paths measured by particle tracking. For the inverse problem (recovery of the memory kernel from experimental data) we apply time series analysis (maximum likelihood estimators via the Kalman filter) directly to bead position data, an alternative to formulas based on mean-squared displacement statistics in frequency space. For direct modeling, we present statistically exact GLE algorithms for individual particle paths as well as statistical correlations for displacement and velocity. Our time-domain methods rest upon a generalization of well-known results for a single-mode exponential kernel [1, 7, 22, 23] to an arbitrary M-mode exponential series, for which the GLE is transformed to a vector Ornstein-Uhlenbeck process. PMID:26412904
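    The reduction to a vector Ornstein-Uhlenbeck process mentioned above can be written compactly; the display below uses generic notation chosen here for illustration (it is the standard Markovian embedding of an M-mode exponential memory kernel, not necessarily the paper's exact formulation).

    \[
    m\,\dot v(t) = -\int_0^{t} K(t-s)\,v(s)\,ds + F(t), \qquad
    K(t) = \sum_{k=1}^{M} c_k\, e^{-t/\tau_k},
    \]
    \[
    z_k(t) \equiv \int_0^{t} c_k\, e^{-(t-s)/\tau_k}\, v(s)\,ds
    \;\;\Longrightarrow\;\;
    \dot z_k = -\frac{z_k}{\tau_k} + c_k\, v, \qquad
    m\,\dot v = -\sum_{k=1}^{M} z_k + F(t),
    \]

    so that, once the colored noise F(t) associated with the exponential modes is itself generated by auxiliary Ornstein-Uhlenbeck variables, the augmented state (x, v, z_1, ..., z_M), with dx/dt = v, evolves as a linear (vector Ornstein-Uhlenbeck) system.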

  11. Progress in understanding fission-product behaviour in coated uranium-dioxide fuel particles

    NASA Astrophysics Data System (ADS)

    Barrachin, M.; Dubourg, R.; Kissane, M. P.; Ozrin, V.

    2009-03-01

    Supported by results of calculations performed with two analytical tools (MFPR, which takes account of physical and chemical mechanisms in calculating the chemical forms and physical locations of fission products in UO2, and MEPHISTA, a thermodynamic database), this paper presents an investigation of some important aspects of the fuel microstructure and chemical evolutions of irradiated TRISO particles. The following main conclusions can be identified with respect to irradiated TRISO fuel: first, the relatively low oxygen potential within the fuel particles with respect to PWR fuel leads to chemical speciation that is not typical of PWR fuels, e.g., the relatively volatile behaviour of barium; secondly, the safety-critical fission-product caesium is released from the urania kernel but the buffer and pyrolytic-carbon coatings could form an important chemical barrier to further migration (i.e., formation of carbides). Finally, significant releases of fission gases from the urania kernel are expected even in nominal conditions.

  12. Fusion PIC code performance analysis on the Cori KNL system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, Tuomas S.; Deslippe, Jack; Friesen, Brian

    We study the attainable performance of Particle-In-Cell codes on the Cori KNL system by analyzing a miniature particle push application based on the fusion PIC code XGC1. We start from the most basic building blocks of a PIC code and build up the complexity to identify the kernels that cost the most in performance and focus optimization efforts there. Particle push kernels operate at high arithmetic intensity and are not likely to be memory bandwidth or even cache bandwidth bound on KNL. Therefore, we see only minor benefits from the high bandwidth memory available on KNL, and achieving good vectorization is shown to be the most beneficial optimization path, with a theoretical yield of up to 8x speedup on KNL. In practice we are able to obtain up to a 4x gain from vectorization due to limitations set by the data layout and memory latency.

  13. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zuwei; Zhao, Haibo, E-mail: klinsmannzhb@163.com; Zheng, Chuguang

    2015-01-15

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and, as far as possible, to reduce the number of time loops. Here three coagulation rules are highlighted and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance-rejection processes by single-looping over all particles, and meanwhile the mean time-step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by multiple cores on a GPU that can execute massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches to PBMC are demonstrated in a physically realistic Brownian coagulation case. The computational accuracy is validated against the benchmark solution of a discrete-sectional method. The simulation results show that the comprehensive approach can attain a very favorable improvement in cost without sacrificing computational accuracy.
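    To make the acceptance-rejection (majorant) idea concrete, the sketch below is a minimal equal-weight Monte Carlo coagulation loop in Python: a separable upper bound on a Brownian-type kernel sets the candidate event rate, and candidate pairs are thinned by the ratio of the true kernel to the bound. The kernel form, the particular majorant, and the omission of volume normalization and differential weighting are simplifying assumptions, not the paper's scheme.

        import numpy as np

        rng = np.random.default_rng(1)

        def kernel(v1, v2):
            # Free-molecule-like Brownian coagulation kernel (prefactors dropped).
            return (v1**(1/3) + v2**(1/3))**2 * np.sqrt(1.0/v1 + 1.0/v2)

        def majorant(v1, v2):
            # Separable bound: (a+b)^2 <= 2(a^2+b^2) and sqrt(x+y) <= sqrt(x)+sqrt(y),
            # so majorant(v1, v2) >= kernel(v1, v2) for all positive volumes.
            return 2.0 * (v1**(2/3) + v2**(2/3)) * (np.sqrt(1.0/v1) + np.sqrt(1.0/v2))

        def mc_coagulation(volumes, t_end):
            v = list(volumes)
            t = 0.0
            while t < t_end and len(v) > 1:
                n = len(v)
                kmax = max(majorant(a, b) for a in v for b in v)  # O(n^2), fine for a sketch
                t += rng.exponential(1.0 / (0.5 * n * (n - 1) * kmax))  # candidate waiting time
                i, j = rng.choice(n, size=2, replace=False)
                if rng.random() < kernel(v[i], v[j]) / kmax:  # thinning (rejection) step
                    v[i] += v[j]
                    del v[j]
            return v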

  14. Imaging and automated detection of Sitophilus oryzae (Coleoptera: Curculionidae) pupae in hard red winter wheat.

    PubMed

    Toews, Michael D; Pearson, Tom C; Campbell, James F

    2006-04-01

    Computed tomography, an imaging technique commonly used for diagnosing internal human health ailments, uses multiple x-rays and sophisticated software to recreate a cross-sectional representation of a subject. The use of this technique to image hard red winter wheat, Triticum aestivum L., samples infested with pupae of Sitophilus oryzae (L.) was investigated. A software program was developed to rapidly recognize and quantify the infested kernels. Samples were imaged in a 7.6-cm (o.d.) plastic tube containing 0, 50, or 100 infested kernels per kg of wheat. Interkernel spaces were filled with corn oil so as to increase the contrast between voids inside kernels and voids among kernels. Automated image processing, using a custom C language software program, was conducted separately on each 100-g portion of the prepared samples. The average detection accuracy in the five infested kernels per 100-g samples was 94.4 +/- 7.3% (mean +/- SD, n = 10), whereas the average detection accuracy in the 10 infested kernels per 100-g samples was 87.3 +/- 7.9% (n = 10). Detection accuracy in the 10 infested kernels per 100-g samples was slightly less than in the five infested kernels per 100-g samples because some infested kernels overlapped with each other or with air bubbles in the oil. A mean of 1.2 +/- 0.9 (n = 10) bubbles per tube was incorrectly classed as infested kernels in replicates containing no infested kernels. In light of these positive results, future studies should be conducted using additional grains, insect species, and life stages.

  15. Particle models for discrete element modeling of bulk grain properties of wheat kernels

    USDA-ARS?s Scientific Manuscript database

    Recent research has shown the potential of discrete element method (DEM) in simulating grain flow in bulk handling systems. Research has also revealed that simulation of grain flow with DEM requires establishment of appropriate particle models for each grain type. This research completes the three-p...

  16. Bands selection and classification of hyperspectral images based on hybrid kernels SVM by evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Yan-Yan; Li, Dong-Sheng

    2016-01-01

    Hyperspectral images (HSI) consist of many closely spaced bands that carry most of the object information. However, their high dimensionality and data volume make it hard to obtain satisfactory classification performance. In order to reduce the HSI data dimensionality in preparation for high classification accuracy, it is proposed to combine a band selection method based on artificial immune systems (AIS) with a hybrid-kernel support vector machine (SVM-HK) algorithm. After comparing different kernels for hyperspectral analysis, the approach mixes the radial basis function kernel (RBF-K) with the sigmoid kernel (Sig-K) and applies the optimized hybrid kernels in SVM classifiers. The SVM-HK algorithm is then used to guide the band selection of an improved version of the AIS, which is composed of clonal selection and elite antibody mutation and includes an evaluation process with an optional index factor (OIF). Classification experiments were performed on the HRS dataset acquired by AVIRIS over San Diego Naval Base; the results show that the method is able to efficiently remove band redundancy while outperforming the traditional SVM classifier.
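    A hybrid RBF-sigmoid kernel of the kind described can be prototyped with a custom Gram-matrix function; the sketch below (Python/scikit-learn) uses an assumed convex-combination weight w and illustrative kernel parameters rather than the evolutionary tuning of the paper.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics.pairwise import rbf_kernel, sigmoid_kernel

        def hybrid_kernel(X, Y, gamma_rbf=0.5, gamma_sig=0.01, coef0=1.0, w=0.7):
            # Weighted sum of an RBF Gram matrix and a sigmoid Gram matrix; w, the gammas and
            # coef0 are illustrative values, not those selected by the AIS/OIF procedure.
            return (w * rbf_kernel(X, Y, gamma=gamma_rbf)
                    + (1.0 - w) * sigmoid_kernel(X, Y, gamma=gamma_sig, coef0=coef0))

        # X_bands: (n_pixels, n_selected_bands) band-selected spectra; y: class labels.
        clf = SVC(kernel=hybrid_kernel)   # SVC accepts a callable returning a Gram matrix
        # clf.fit(X_bands, y); clf.predict(X_test)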

  17. Carbothermic Synthesis of 820 μm UN Kernels: Literature Review, Thermodynamics, Analysis, and Related Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindemer, Terrence; Voit, Stewart L; Silva, Chinthaka M

    2014-01-01

    The U.S. Department of Energy is considering a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with large, dense uranium nitride (UN) kernels. This effort explores many factors involved in using gel-derived uranium oxide-carbon microspheres to make large UN kernels. An analysis of recent studies with sufficient experimental details is provided. Extensive thermodynamic calculations are used to predict carbon monoxide and other pressures for several different reactions that may be involved in the conversion of uranium oxides and carbides to UN. Experimentally, the method for making the gel-derived microspheres is described. These were used in a microbalance with an attached mass spectrometer to determine details of carbothermic conversion in argon, nitrogen, or vacuum. A quantitative model is derived from experiments for vacuum conversion to a uranium oxide-carbide kernel.

  18. Three-Dimensional Sensitivity Kernels of Z/H Amplitude Ratios of Surface and Body Waves

    NASA Astrophysics Data System (ADS)

    Bao, X.; Shen, Y.

    2017-12-01

    The ellipticity of Rayleigh wave particle motion, or Z/H amplitude ratio, has received increasing attention in inversion for shallow Earth structures. Previous studies of the Z/H ratio assumed one-dimensional (1D) velocity structures beneath the receiver, ignoring the effects of three-dimensional (3D) heterogeneities on wave amplitudes. This simplification may introduce bias in the resulting models. Here we present 3D sensitivity kernels of the Z/H ratio to Vs, Vp, and density perturbations, based on finite-difference modeling of wave propagation in 3D structures and the scattering-integral method. Our full-wave approach overcomes two main issues in previous studies of Rayleigh wave ellipticity: (1) the finite-frequency effects of wave propagation in 3D Earth structures, and (2) isolation of the fundamental mode Rayleigh waves from Rayleigh wave overtones and converted Love waves. In contrast to the 1D depth sensitivity kernels in previous studies, our 3D sensitivity kernels exhibit patterns that vary with azimuths and distances to the receiver. The laterally-summed 3D sensitivity kernels and 1D depth sensitivity kernels, based on the same homogeneous reference model, are nearly identical with small differences that are attributable to the single period of the 1D kernels and a finite period range of the 3D kernels. We further verify the 3D sensitivity kernels by comparing the predictions from the kernels with the measurements from numerical simulations of wave propagation for models with various small-scale perturbations. We also calculate and verify the amplitude kernels for P waves. This study shows that both Rayleigh and body wave Z/H ratios provide vertical and lateral constraints on the structure near the receiver. With seismic arrays, the 3D kernels afford a powerful tool to use the Z/H ratios to obtain accurate and high-resolution Earth models.

  19. Multistage adsorption of diffusing macromolecules and viruses

    NASA Astrophysics Data System (ADS)

    Chou, Tom; D'Orsogna, Maria R.

    2007-09-01

    We derive the equations that describe adsorption of diffusing particles onto a surface followed by additional surface kinetic steps before being transported across the interface. Multistage surface kinetics occurs during membrane protein insertion, cell signaling, and the infection of cells by virus particles. For example, viral entry into healthy cells is possible only after a series of receptor and coreceptor binding events occurs at the cellular surface. We couple the diffusion of particles in the bulk phase with the multistage surface kinetics and derive an effective, integrodifferential boundary condition that contains a memory kernel embodying the delay induced by the surface reactions. This boundary condition takes the form of a singular perturbation problem in the limit where particle-surface interactions are short ranged. Moreover, depending on the surface kinetics, the delay kernel induces a nonmonotonic, transient replenishment of the bulk particle concentration near the interface. The approach generalizes that of Ward and Tordai [J. Chem. Phys. 14, 453 (1946)] and Diamant and Andelman [Colloids Surf. A 183-185, 259 (2001)] to include surface kinetics, giving rise to qualitatively new behaviors. Our analysis also suggests a simple scheme by which stochastic surface reactions may be coupled to deterministic bulk diffusion.

  20. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

    NASA Astrophysics Data System (ADS)

    Alfonso, Lester; Zamora, Jose; Cruz, Pedro

    2015-04-01

    The stochastic approach to coagulation considers the coalescence process occurring in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial condition is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with excellent correspondence between the analytical and numerical solutions. In order to speed up the algorithm, the code was parallelized with the OpenMP standard, and an implementation was developed to take advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
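    For reference, the multivariate master equation for finite-volume coalescence can be written in one common notation (with n_i the number of i-mers, V the system volume, and the superscript-plus states denoting one more i-mer, one more j-mer, and one fewer (i+j)-mer than the state n); the exact bookkeeping in the cited work may differ in detail:

    \[
    \frac{dP(\mathbf n,t)}{dt}
    =\frac{1}{V}\sum_{i<j}K(i,j)\Big[(n_i+1)(n_j+1)\,P(\mathbf n^{+}_{ij},t)-n_i\,n_j\,P(\mathbf n,t)\Big]
    +\frac{1}{2V}\sum_{i}K(i,i)\Big[(n_i+2)(n_i+1)\,P(\mathbf n^{+}_{ii},t)-n_i(n_i-1)\,P(\mathbf n,t)\Big].
    \]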

  1. Extraction process of palm kernel cake as a source of mannan for feed additive on poultry diet

    NASA Astrophysics Data System (ADS)

    Tafsin, M.; Hanafi, N. D.; Yusraini, E.

    2017-05-01

    Palm Kernel Cake (PKC) is a by-product of palm kernel oil extraction and is found in large quantities in Indonesia. The inclusion of PKC in poultry diets is limited due to nutritional problems such as its anti-nutritional properties (mannan). On the other hand, mannan-containing polysaccharides play various biological roles, particularly in enhancing the immune response and controlling pathogens in poultry. The research objective was to establish an extraction process for PKC; the work was conducted at the Animal Nutrition and Feed Science Laboratory, Agricultural Faculty, University of Sumatera Utara. Various extraction methods were used in this experiment, including fraction analysis using seven sieve sizes, followed by water and acetic acid extraction. The results indicated that PKC had different particle sizes according to sieve size and was dominated by the 850 μm particle size. The analysis of sugar content indicated that each particle size responded differently to hot water extraction. The 180-850 μm particle sizes had a higher sugar content than coarse PKC (2000-3000 μm). The total sugar recovered varied between 0.9 and 3.2% of the PKC extracted. Grinding followed by hot water extraction (100-120 °C, 1 h) increased the total sugar content relative to the previous treatments, reaching 8% of the PKC extracted. The use of acetic acid decreased the total amount of sugar extracted from PKC. It is concluded that treatment at high temperature (110-120 °C) for 1 h gives the highest yield of sugar extracted from PKC.

  2. The utilization of endopower β in commercial feed which contains palm kernel cake on performance of broiler chicken

    NASA Astrophysics Data System (ADS)

    Purba, S. S. A.; Tafsin, M.; Ginting, S. P.; Khairani, Y.

    2018-02-01

    Palm kernel cake is an agricultural waste that can be used as a raw material in the preparation of poultry rations. The design used was a Completely Randomized Design (CRD) with 5 treatments and 4 replications. The endopower β levels used were 0% (R0), 0.02% (R1), 0.04% (R2) and 0.06% (R3). The results showed that R0a and R0b were significantly different from R3 in terms of diet consumption, body weight gain and the conversion ratio. The utilization of endopower β in commercial diets containing palm kernel cake in broilers can increase body weight gain and feed consumption and improve feed and energy use efficiency. It is concluded that the utilization of endopower β improves the performance of broiler chickens fed diets containing palm kernel cake.

  3. A particle swarm optimized kernel-based clustering method for crop mapping from multi-temporal polarimetric L-band SAR observations

    NASA Astrophysics Data System (ADS)

    Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza

    2017-06-01

    Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to their specific characteristics such as high resolution and weather and daylight independence, have become a valuable source of information for environment monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agricultural crop mapping from multi-temporal PolSAR data. Firstly, several polarimetric features are extracted from preprocessed data. These features are the linear polarization intensities and several statistical and physically based decompositions such as the Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, kernelized versions of the hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. The kernel function, unlike conventional partitioning clustering algorithms, maps non-spherical and non-linearly separable data structures so that they can be clustered easily. In addition, in order to enhance the results, the Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters and cluster centers and to optimize feature selection. The efficiency of this method was evaluated using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July 2012. The results demonstrate more accurate crop maps using the proposed method when compared to the classical approaches (e.g., a 12% improvement in general). In addition, when the optimization technique is used, greater improvement is observed in crop classification, e.g., 5% overall. Furthermore, a strong relationship is observed between the Freeman-Durden volume scattering component, which is related to canopy structure, and phenological growth stages.
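    The kernelized C-means assignment relies on the standard feature-space distance identity, shown here in generic form (the Gaussian kernel width would be among the parameters the PSO tunes):

    \[
    \bigl\lVert \phi(\mathbf x)-\mathbf m_k \bigr\rVert^{2}
    = K(\mathbf x,\mathbf x)
    - \frac{2}{|C_k|}\sum_{\mathbf x_j \in C_k} K(\mathbf x,\mathbf x_j)
    + \frac{1}{|C_k|^{2}}\sum_{\mathbf x_i \in C_k}\sum_{\mathbf x_j \in C_k} K(\mathbf x_i,\mathbf x_j),
    \]

    where \(\mathbf m_k\) is the implicit centroid of cluster \(C_k\) in the kernel-induced feature space, so cluster assignments can be updated without ever forming \(\phi\) explicitly.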

  4. Production of near-full density uranium nitride microspheres with a hot isostatic press

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMurray, Jacob W.; Kiggans, Jr., Jim O.; Helmreich, Grant W.

    Depleted uranium nitride (UN) kernels with diameters ranging from 420 to 858 microns and theoretical densities (TD) between 87 and 91 percent were postprocessed using a hot isostatic press (HIP) in an argon gas medium. This treatment was shown to increase the TD to above 97%. Uranium nitride is highly reactive with oxygen. Therefore, a novel crucible design was implemented to remove impurities in the argon gas via in situ gettering to avoid oxidation of the UN kernels. The density before and after each HIP procedure was calculated from the average weight, volume, and ellipticity determined with established characterization techniques for particles. Furthermore, micrographs confirmed the nearly full densification of the particles using the gettering approach and HIP processing parameters investigated in this work.

  5. Spectral Kernel Approach to Study Radiative Response of Climate Variables and Interannual Variability of Reflected Solar Spectrum

    NASA Technical Reports Server (NTRS)

    Jin, Zhonghai; Wielicki, Bruce A.; Loukachine, Constantin; Charlock, Thomas P.; Young, David; Noeel, Stefan

    2011-01-01

    The radiative kernel approach provides a simple way to separate the radiative response to different climate parameters and to decompose the feedback into radiative and climate response components. Using CERES/MODIS/Geostationary data, we calculated and analyzed the solar spectral reflectance kernels for various climate parameters on zonal, regional, and global spatial scales. The kernel linearity is tested. Errors in the kernel due to nonlinearity can vary strongly depending on climate parameter, wavelength, surface, and solar elevation; they are large in some absorption bands for some parameters but are negligible in most conditions. The spectral kernels are used to calculate the radiative responses to different climate parameter changes in different latitudes. The results show that the radiative response in high latitudes is sensitive to the coverage of snow and sea ice. The radiative response in low latitudes is contributed mainly by cloud property changes, especially cloud fraction and optical depth. The large cloud height effect is confined to absorption bands, while the cloud particle size effect is found mainly in the near infrared. The kernel approach, which is based on calculations using CERES retrievals, is then tested by direct comparison with spectral measurements from the Scanning Imaging Absorption Spectrometer for Atmospheric Cartography (SCIAMACHY) (a different instrument on a different spacecraft). The monthly mean interannual variability of spectral reflectance based on the kernel technique is consistent with satellite observations over the ocean, but not over land, where both model and data have large uncertainty. RMS errors in kernel-derived monthly global mean reflectance over the ocean compared to observations are about 0.001, and the sampling error is likely a major component.

  6. Carbon monoxide formation in UO2 kerneled HTR fuel particles containing oxygen getters

    NASA Astrophysics Data System (ADS)

    Proksch, E.; Strigl, A.; Nabielek, H.

    1986-06-01

    Mass spectrometric measurements of CO in irradiated UO2 kerneled HTR fuel particles containing various oxygen getters are summarized and evaluated. Uranium carbide addition in the 3 to 15% range reduces the CO release by factors between 25 and 80, up to burn-up levels as high as 70% FIMA. Unintentional gettering by SiC in TRISO coated particles with failed inner pyrocarbon layers results in CO reduction factors between 15 and 110. For ZrC, only somewhat ambiguous results have been obtained; most likely, ZrC results in CO reduction by a factor of about 40. Ce2O3 and La2O3 seem to be somewhat less effective than the three carbides; for Ce2O3, reduction factors between 3 and 15 have been found. However, these results are possibly incorrect due to premature oxidation of the getter already during fabrication. Addition of SiO2 + Al2O3 has no influence on CO release at all.

  7. Assessment of Possible Cycle Lengths for Fully-Ceramic Micro-Encapsulated Fuel-Based Light Water Reactor Concepts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Sonat Sen; Michael A. Pope; Abderrafi M. Ougouag

    2012-04-01

    The tri-isotropic (TRISO) fuel developed for high temperature reactors is known for its extraordinary fission product retention capabilities [1]. Recently, the possibility of extending the use of TRISO particle fuel to Light Water Reactor (LWR) technology, and perhaps other reactor concepts, has received significant attention [2]. The Deep Burn project [3] currently focuses on once-through burning of transuranic fissile and fissionable isotopes (TRU) in LWRs. The fuel form for this purpose is called Fully-Ceramic Micro-encapsulated (FCM) fuel, a concept that borrows the TRISO fuel particle design from high temperature reactor technology, but uses SiC as a matrix material rather than graphite. In addition, FCM fuel may also use a cladding made of a variety of possible materials, again including SiC as an admissible choice. The FCM fuel used in the Deep Burn (DB) project showed promising results in terms of fission product retention at high burnup values and during high-temperature transients. In the case of DB applications, the fuel loading within a TRISO particle is constituted entirely of fissile or fissionable isotopes. Consequently, the fuel was shown to be capable of achieving reasonable burnup levels and cycle lengths, especially in the case of mixed cores (with coexisting DB and regular LWR UO2 fuels). In contrast, as shown below, the use of UO2-only FCM fuel in a LWR results in considerably shorter cycle length when compared to current-generation ordinary LWR designs. Indeed, the constraint of limited space availability for heavy metal loading within the TRISO particles of FCM fuel and the constraint of low (i.e., below 20 w/o) 235U enrichment combine to result in shorter cycle lengths compared to ordinary LWRs if typical LWR power densities are also assumed and if typical TRISO particle dimensions and UO2 kernels are specified. The primary focus of this summary is on using TRISO particles with up to 20 w/o enriched uranium kernels loaded in Pressurized Water Reactor (PWR) assemblies. In addition to consideration of this 'naive' use of TRISO fuel in LWRs, several refined options are briefly examined and others are identified for further consideration, including the use of advanced, high density fuel forms and larger kernel diameters and TRISO packing fractions. The combination of 800 μm diameter kernels of 20% enriched UN and a 50% TRISO packing fraction yielded reactivity sufficient to achieve burnup comparable to present-day PWR fuel.

  8. a Gsa-Svm Hybrid System for Classification of Binary Problems

    NASA Astrophysics Data System (ADS)

    Sarafrazi, Soroor; Nezamabadi-pour, Hossein; Barahman, Mojgan

    2011-06-01

    This paper hybridizes the gravitational search algorithm (GSA) with the support vector machine (SVM) to make a novel GSA-SVM hybrid system that improves classification accuracy on binary problems. GSA is an optimization heuristic used to optimize the value of the SVM kernel parameter (in this paper, the radial basis function (RBF) is chosen as the kernel function). The experimental results show that this new approach can achieve high classification accuracy and is comparable to or better than the particle swarm optimization (PSO)-SVM and genetic algorithm (GA)-SVM, which are two hybrid systems for classification.

  9. A 3D Ginibre Point Field

    NASA Astrophysics Data System (ADS)

    Kargin, Vladislav

    2018-06-01

    We introduce a family of three-dimensional random point fields using the concept of the quaternion determinant. The kernel of each field is an n-dimensional orthogonal projection on a linear space of quaternionic polynomials. We find explicit formulas for the basis of the orthogonal quaternion polynomials and for the kernel of the projection. As the number of particles n → ∞, we calculate the scaling limits of the point field in the bulk and at the center of coordinates. We compare our construction with the previously introduced Fermi-sphere point field process.

  10. Exact Doppler broadening of tabulated cross sections. [SIGMA 1 kernel broadening method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cullen, D.E.; Weisbin, C.R.

    1976-07-01

    The SIGMA1 kernel broadening method is presented to Doppler broaden to any required accuracy a cross section that is described by a table of values and linear-linear interpolation in energy-cross section between tabulated values. The method is demonstrated to have no temperature or energy limitations and to be equally applicable to neutron or charged-particle cross sections. The method is qualitatively and quantitatively compared to contemporary approximate methods of Doppler broadening with particular emphasis on the effect of each approximation introduced.
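    The kernel referred to here is the exact Doppler-broadening integral, written below in the conventional velocity form; SIGMA1's contribution is to evaluate it piecewise-analytically over each linear-linear tabulated segment, which is what makes the method exact to within the tabulation.

    \[
    \bar\sigma(v,T)=\frac{1}{v^{2}}\sqrt{\frac{\alpha}{\pi}}
    \int_{0}^{\infty} v'^{\,2}\,\sigma(v')
    \Big[e^{-\alpha\,(v'-v)^{2}}-e^{-\alpha\,(v'+v)^{2}}\Big]\,dv',
    \qquad \alpha=\frac{M}{2kT},
    \]

    where M is the target nuclide mass and T the temperature; as T → 0 the bracket collapses to a delta function and \(\bar\sigma \to \sigma\).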

  11. CRKSPH: A new meshfree hydrodynamics method with applications to astrophysics

    NASA Astrophysics Data System (ADS)

    Owen, John Michael; Raskin, Cody; Frontiere, Nicholas

    2018-01-01

    The study of astrophysical phenomena such as supernovae, accretion disks, galaxy formation, and large-scale structure formation requires computational modeling of, at a minimum, hydrodynamics and gravity. Developing numerical methods appropriate for these kinds of problems requires a number of properties: shock-capturing hydrodynamics benefits from rigorous conservation of invariants such as total energy, linear momentum, and mass; lack of obvious symmetries or a simplified spatial geometry to exploit necessitate 3D methods that ideally are Galilean invariant; the dynamic range of mass and spatial scales that need to be resolved can span many orders of magnitude, requiring methods that are highly adaptable in their space and time resolution. We have developed a new Lagrangian meshfree hydrodynamics method called Conservative Reproducing Kernel Smoothed Particle Hydrodynamics, or CRKSPH, in order to meet these goals. CRKSPH is a conservative generalization of the meshfree reproducing kernel method, combining the high-order accuracy of reproducing kernels with the explicit conservation of mass, linear momentum, and energy necessary to study shock-driven hydrodynamics in compressible fluids. CRKSPH's Lagrangian, particle-like nature makes it simple to combine with well-known N-body methods for modeling gravitation, similar to the older Smoothed Particle Hydrodynamics (SPH) method. Indeed, CRKSPH can be substituted for SPH in existing SPH codes due to these similarities. In comparison to SPH, CRKSPH is able to achieve substantially higher accuracy for a given number of points due to the explicitly consistent (and higher-order) interpolation theory of reproducing kernels, while maintaining the same conservation principles (and therefore applicability) as SPH. There are currently two coded implementations of CRKSPH available: one in the open-source research code Spheral, and the other in the high-performance cosmological code HACC. Using these codes we have applied CRKSPH to a number of astrophysical scenarios, such as rotating gaseous disks, supernova remnants, and large-scale cosmological structure formation. In this poster we present an overview of CRKSPH and show examples of these astrophysical applications.

  12. Revisiting the use of the immersed-boundary lattice-Boltzmann method for simulations of suspended particles

    NASA Astrophysics Data System (ADS)

    Mountrakis, L.; Lorenz, E.; Hoekstra, A. G.

    2017-07-01

    The immersed-boundary lattice-Boltzmann method (IB-LBM) is increasingly being used in simulations of dense suspensions. These systems are computationally very expensive and can strongly benefit from lower resolutions that still maintain the desired accuracy for the quantities of interest. IB-LBM has a number of free parameters that have to be defined, often without exact knowledge of the tradeoffs, since their behavior at low resolutions is not well understood. Such parameters are the lattice constant Δx, the number of vertices Nv, the interpolation kernel ϕ, and the LBM relaxation time τ. We investigate the effect of these IB-LBM parameters on a number of straightforward but challenging benchmarks. The systems considered are (a) the flow of a single sphere in shear flow, (b) the collision of two spheres in shear flow, and (c) the lubrication interaction of two spheres. All benchmarks are performed in three dimensions. The first two systems are used for determining two effective radii: the hydrodynamic radius rhyd and the particle interaction radius rinter. The last system is used to establish the numerical robustness of the lubrication forces, used to probe the hydrodynamic interactions in the limit of small gaps. Our results show that lower spatial resolutions result in larger hydrodynamic and interaction radii, while surface densities should be chosen above two vertices per LU2 to prevent fluid penetration in underresolved meshes. Underresolved meshes also failed to produce the migration of particles toward the center of the domain due to lift forces in Couette flow, mostly noticeable for the IB kernel ϕ2. Kernel ϕ4, despite being more robust to mesh resolution, produces a notable membrane thickness, leading to the breakdown of the lubrication forces at larger gaps, and its use in dense suspensions where the mean particle distances are small can result in undesired behavior. rhyd is measured to be different from rinter, suggesting that there is no consistent measure to recalibrate the radius of the suspended particle.
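    For context, the interpolation kernels ϕ2 and ϕ4 referred to above are usually the standard Peskin-type forms (the exact definitions used in the paper may differ slightly); in 1D, with r the particle-node distance in lattice units,

    \[
    \phi_2(r)=\begin{cases}1-|r|, & |r|\le 1,\\ 0, & \text{otherwise,}\end{cases}
    \qquad
    \phi_4(r)=\begin{cases}\tfrac{1}{8}\bigl(3-2|r|+\sqrt{1+4|r|-4r^{2}}\bigr), & |r|\le 1,\\[2pt]
    \tfrac{1}{8}\bigl(5-2|r|-\sqrt{-7+12|r|-4r^{2}}\bigr), & 1<|r|\le 2,\\[2pt]
    0, & \text{otherwise,}\end{cases}
    \]

    and the 3D weight of a lattice node is the product \(\phi(x/\Delta x)\,\phi(y/\Delta x)\,\phi(z/\Delta x)\); the wider support of ϕ4 is what produces the effective membrane thickness discussed above.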

  13. Dispersal kernel estimation: A comparison of empirical and modelled particle dispersion in a coastal marine system

    NASA Astrophysics Data System (ADS)

    Hrycik, Janelle M.; Chassé, Joël; Ruddick, Barry R.; Taggart, Christopher T.

    2013-11-01

    Early life-stage dispersal influences recruitment and is of significance in explaining the distribution and connectivity of marine species. Motivations for quantifying dispersal range from biodiversity conservation to the design of marine reserves and the mitigation of species invasions. Here we compare estimates of real particle dispersion in a coastal marine environment with similar estimates provided by hydrodynamic modelling. We do so by using a system of magnetically attractive particles (MAPs) and a magnetic-collector array that provides measures of Lagrangian dispersion based on the time-integration of MAPs dispersing through the array. MAPs released as a point source in a coastal marine location dispersed through the collector array over a 5-7 d period. A virtual release and observed (real-time) environmental conditions were used in a high-resolution three-dimensional hydrodynamic model to estimate the dispersal of virtual particles (VPs). The number of MAPs captured throughout the collector array and the number of VPs that passed through each corresponding model location were enumerated and compared. Although VP dispersal reflected several aspects of the observed MAP dispersal, the comparisons demonstrated model sensitivity to the small-scale (random-walk) particle diffusivity parameter (Kp). The one-dimensional dispersal kernel for the MAPs had an e-folding scale estimate in the range of 5.19-11.44 km, while those from the model simulations were comparable at 1.89-6.52 km, and also demonstrated sensitivity to Kp. Variations among comparisons are related to the value of Kp used in modelling and are postulated to be related to MAP losses from the water column and (or) shear dispersion acting on the MAPs; a process that is constrained in the model. Our demonstration indicates a promising new way of 1) quantitatively and empirically estimating the dispersal kernel in aquatic systems, and 2) quantitatively assessing and (or) improving regional hydrodynamic models.
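    If the one-dimensional dispersal kernel is summarized by an exponential form (a common parameterization; the study's fitted functional form may differ), the e-folding scale λ quoted above is simply the distance over which the kernel decays by a factor of e:

    \[
    K(x) \;\propto\; \exp\!\left(-\frac{|x|}{\lambda}\right),
    \qquad \lambda \approx 5.19\text{-}11.44\ \text{km (MAPs)},\quad
    \lambda \approx 1.89\text{-}6.52\ \text{km (modelled VPs)}.
    \]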

  14. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    PubMed Central

    Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.

    2013-01-01

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
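    The underlying dose model is the usual convolution/superposition integral; in the spatially variant formulation studied here the kernel is allowed to depend on the interaction site (depth, field size, off-axis position) rather than being a single fixed polyenergetic kernel. The notation below is generic:

    \[
    D(\mathbf r)=\int T(\mathbf r')\,K\!\left(\mathbf r',\,\mathbf r-\mathbf r'\right)\,d^{3}r',
    \]

    where T is the TERMA from primary photons and K gives the fraction of that energy deposited at displacement \(\mathbf r-\mathbf r'\); replacing \(K(\mathbf r',\cdot)\) by a single \(K(\cdot)\) recovers the conventional single-kernel implementation.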

  15. The influence of sub-grid scale motions on particle collision in homogeneous isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Xiong, Yan; Li, Jing; Liu, Zhaohui; Zheng, Chuguang

    2018-02-01

    The absence of sub-grid scale (SGS) motions leads to severe errors in particle pair dynamics, which represents a great challenge to the large eddy simulation of particle-laden turbulent flow. In order to address this issue, data from direct numerical simulation (DNS) of homogeneous isotropic turbulence coupled with Lagrangian particle tracking are used as a benchmark to evaluate the corresponding results of filtered DNS (FDNS). It is found that the filtering process in FDNS leads to a non-monotonic variation of the particle collision statistics, including the radial distribution function, the radial relative velocity, and the collision kernel. The peak of the radial distribution function shifts to the large-inertia region due to the lack of SGS motions, and the analysis of the local flow-structure characteristic variable at particle positions indicates that the most effective interaction scale between particles and fluid eddies is increased in FDNS. Moreover, this scale shifting has an obvious effect on the odd-order moments of the probability density function of the radial relative velocity, i.e., the skewness, which exhibits a strong correlation with the variance of the radial distribution function in FDNS. As a whole, the radial distribution function, together with the radial relative velocity, can compensate for the SGS effects on the collision kernel in FDNS when the Stokes number based on the Kolmogorov time scale is greater than 3.0. However, considerable errors remain for St_k < 3.0.

  16. Handling Density Conversion in TPS.

    PubMed

    Isobe, Tomonori; Mori, Yutaro; Takei, Hideyuki; Sato, Eisuke; Tadano, Kiichi; Kobayashi, Daisuke; Tomita, Tetsuya; Sakae, Takeji

    2016-01-01

    Conversion from CT value to density is essential to a radiation treatment planning system. Generally, the CT value is converted to electron density in photon therapy. In the energy range of therapeutic photons, interactions between photons and materials are dominated by Compton scattering, whose cross-section depends on the electron density. The dose distribution is obtained by calculating TERMA and the kernel using the electron density, where TERMA is the total energy released per unit mass by primary photons and the kernel describes the volume over which that energy is spread by the resulting electrons. Recently, a new method was introduced that uses the physical density. This method is expected to be faster and more accurate than the one using the electron density. For particle therapy, dose can be calculated with a CT-to-stopping-power conversion, since the stopping power depends on the electron density. The CT-to-stopping-power conversion table, also called the CT-to-water-equivalent-range table, is an essential concept for particle therapy.
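    In practice such conversions are implemented as piecewise-linear lookup tables; the sketch below (Python) shows the CT-number-to-mass-density case with placeholder calibration points, since a clinical table must be measured on the specific scanner with a density phantom.

        import numpy as np

        # Placeholder calibration points (HU, g/cm^3); real values come from phantom scans.
        HU_POINTS      = np.array([-1000.0, -100.0,  0.0, 100.0, 1000.0, 3000.0])
        DENSITY_POINTS = np.array([  0.001,   0.93, 1.00,  1.07,   1.55,   2.80])

        def ct_to_density(hu):
            # Piecewise-linear interpolation, clipped to the end points of the table.
            return np.interp(hu, HU_POINTS, DENSITY_POINTS)

        # Example: convert a small CT patch before computing TERMA and applying the kernel.
        patch_hu = np.array([[-1000.0, 40.0], [200.0, 1500.0]])
        patch_density = ct_to_density(patch_hu)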

  17. Production of Low Enriched Uranium Nitride Kernels for TRISO Particle Irradiation Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMurray, J. W.; Silva, C. M.; Helmreich, G. W.

    2016-06-01

    A large batch of UN microspheres to be used as kernels for TRISO particle fuel was produced using carbothermic reduction and nitriding of a sol-gel feedstock bearing tailored amounts of low-enriched uranium (LEU) oxide and carbon. The process parameters, established in a previous study, produced phase-pure NaCl-structure UN with dissolved C on the N sublattice. The composition, calculated by refinement of the lattice parameter from X-ray diffraction, was determined to be UC0.27N0.73. The final accepted product weighed 197.4 g. The microspheres had an average diameter of 797±1.35 μm and a composite mean theoretical density of 89.9±0.5% for a solid solution of UC and UN with the same atomic ratio; both values are reported with their corresponding calculated standard errors.

  18. Utilization of spectral-spatial characteristics in shortwave infrared hyperspectral images to classify and identify fungi-contaminated peanuts.

    PubMed

    Qiao, Xiaojun; Jiang, Jinbao; Qi, Xiaotong; Guo, Haiqiang; Yuan, Deshuai

    2017-04-01

    It is well known that fungi-contaminated peanuts contain potent carcinogens. Efficiently identifying and separating contaminated kernels can help prevent aflatoxin from entering the food chain. In this study, shortwave infrared (SWIR) hyperspectral images were used to identify prepared contaminated kernels. The feature selection method of analysis of variance (ANOVA) and the feature extraction method of nonparametric weighted feature extraction (NWFE) were used to concentrate spectral information into a subspace in which contaminated and healthy peanuts have favorable separability. Peanut pixels were then classified using SVM. Moreover, the region-growing image segmentation method was applied to segment the image into kernel-scale patches and to number the kernels. The results show that pixel-wise classification accuracies are 99.13% for breed A, 96.72% for B and 99.73% for C in learning images, and 96.32%, 94.2% and 97.51% in validation images. Contaminated peanuts were correctly marked as aberrant kernels in both learning and validation images. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. [Study on application of SVM in prediction of coronary heart disease].

    PubMed

    Zhu, Yue; Wu, Jianghua; Fang, Ying

    2013-12-01

    Based on blood pressure, plasma lipid, glucose (Glu) and uric acid (UA) data from physical examinations, a Support Vector Machine (SVM) was applied to distinguish coronary heart disease (CHD) patients from non-CHD individuals in a south China population, to guide further prevention and treatment of the disease. Firstly, SVM classifiers were built using a radial basis function kernel, a linear kernel and a polynomial kernel, respectively. Secondly, the SVM penalty factor C and kernel parameter sigma were optimized by particle swarm optimization (PSO) and then employed to diagnose and predict CHD. Compared with an artificial neural network using the back propagation (BP) model, linear discriminant analysis, logistic regression and a non-optimized SVM, the optimized RBF-SVM model showed superior classification performance, with accuracy, sensitivity and specificity of 94.51%, 92.31% and 96.67%, respectively. It is concluded that SVM can be used as a valid method for assisting in the diagnosis of CHD.
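    A minimal version of PSO-tuned RBF-SVM hyperparameter selection is sketched below (Python/scikit-learn) on synthetic stand-in data; the swarm size, inertia and acceleration constants, and search ranges are conventional illustrative choices, not those of the cited study.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score
        from sklearn.datasets import make_classification

        # Synthetic stand-in for the clinical features (blood pressure, lipids, Glu, UA).
        X, y = make_classification(n_samples=300, n_features=4, random_state=0)
        rng = np.random.default_rng(0)

        def fitness(p):
            # p = (log10 C, log10 gamma); score = mean 5-fold CV accuracy of an RBF-SVM.
            C, gamma = 10.0 ** p
            return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

        # Plain particle swarm over (log10 C, log10 gamma).
        low, high = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
        n_part, n_iter = 20, 30
        pos = rng.uniform(low, high, size=(n_part, 2))
        vel = np.zeros_like(pos)
        pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_f.argmax()].copy()

        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_part, 2))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, low, high)
            f = np.array([fitness(p) for p in pos])
            better = f > pbest_f
            pbest[better], pbest_f[better] = pos[better], f[better]
            gbest = pbest[pbest_f.argmax()].copy()

        best_C, best_gamma = 10.0 ** gbest   # parameters for the final SVC fit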

  20. Pixel-based meshfree modelling of skeletal muscles.

    PubMed

    Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu

    2016-01-01

    This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows construction of a simulation model directly from pixel data obtained from medical images. The material properties and muscle fiber direction obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transitions. A multiphase, multichannel, level-set-based segmentation framework is adopted for individual muscle segmentation using Magnetic Resonance Images (MRI) and DTI. The application of the proposed methods to modeling the human lower leg is demonstrated.
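    In standard RKPM notation (a generic statement of the construction, not the paper's specific implementation), the approximation built from the pixel points \(\mathbf x_I\) reads

    \[
    u^{h}(\mathbf x)=\sum_{I}\Psi_I(\mathbf x)\,d_I,\qquad
    \Psi_I(\mathbf x)=\mathbf H^{\mathsf T}(\mathbf 0)\,\mathbf M^{-1}(\mathbf x)\,
    \mathbf H(\mathbf x-\mathbf x_I)\,\phi_a(\mathbf x-\mathbf x_I),
    \]
    \[
    \mathbf M(\mathbf x)=\sum_{I}\mathbf H(\mathbf x-\mathbf x_I)\,
    \mathbf H^{\mathsf T}(\mathbf x-\mathbf x_I)\,\phi_a(\mathbf x-\mathbf x_I),
    \]

    where \(\mathbf H\) collects the monomial basis up to the chosen order, \(\phi_a\) is a compactly supported kernel of support size a, and the moment-matrix correction \(\mathbf M^{-1}\) enforces the polynomial reproducing conditions that give the smooth representation of material heterogeneity described above.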

  1. Acceptance Test Data for BWXT Coated Particle Batch 93164A Defective IPyC Fraction and Pyrocarbon Anisotropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmreich, Grant W.; Hunn, John D.; Skitt, Darren J.

    2017-02-01

    Coated particle fuel batch J52O-16-93164 was produced by Babcock and Wilcox Technologies (BWXT) for possible selection as fuel for the Advanced Gas Reactor Fuel Development and Qualification (AGR) Program’s AGR-5/6/7 irradiation test in the Idaho National Laboratory (INL) Advanced Test Reactor (ATR), or may be used as demonstration production-scale coated particle fuel for other experiments. The tristructural-isotropic (TRISO) coatings were deposited in a 150-mm-diameter production-scale fluidized-bed chemical vapor deposition (CVD) furnace onto 425-μm-nominal-diameter spherical kernels from BWXT lot J52L-16-69316. Each kernel contained a mixture of 15.5%-enriched uranium carbide and uranium oxide (UCO) and was coated with four consecutive CVD layers: a ~50% dense carbon buffer layer with 100-μm-nominal thickness, a dense inner pyrolytic carbon (IPyC) layer with 40-μm-nominal thickness, a silicon carbide (SiC) layer with 35-μm-nominal thickness, and a dense outer pyrolytic carbon (OPyC) layer with 40-μm-nominal thickness. The TRISO-coated particle batch was sieved to upgrade the particles by removing over-sized and under-sized material, and the upgraded batch was designated by appending the letter A to the end of the batch number (i.e., 93164A).

  2. Study of the mechanisms for flame stabilization in gas turbine model combustors using kHz laser diagnostics

    NASA Astrophysics Data System (ADS)

    Boxx, Isaac; Carter, Campbell D.; Stöhr, Michael; Meier, Wolfgang

    2013-05-01

    An image-processing routine was developed to autonomously identify and statistically characterize flame-kernel events, wherein OH (from a planar laser-induced fluorescence, PLIF, measurement) appears in the probe region away from the contiguous OH layer. This routine was applied to datasets from two gas turbine model combustors that consist of thousands of joint OH-velocity images from kHz framerate OH-PLIF and particle image velocimetry (PIV). Phase sorting of the kernel centroids with respect to the dominant fluid-dynamic structure of the combustors (a helical precessing vortex core, PVC) indicates that through-plane transport of reacting fluid best explains their sudden appearance in the PLIF images. The concentration of flame-kernel events around the periphery of the mean location of the PVC indicates that they are likely the result of wrinkling and/or breakup of the primary flame sheet associated with the passage of the PVC as it circumscribes the burner centerline. The prevailing through-plane velocity of the swirling flow field transports these fragments into the imaging plane of the OH-PLIF system. The lack of flame-kernel events near the center of the PVC (in which there is lower strain and longer fluid-dynamic residence times) indicates that auto-ignition is not a likely explanation for these flame kernels in a majority of cases. The lack of flame-kernel centroid variation in one flame in which there is no PVC further supports this explanation.
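
    The authors' routine is not reproduced here; the sketch below only illustrates one plausible way to flag detached OH regions and their centroids in a single frame, assuming a 2D oh_plif array in counts and hypothetical threshold and min_pixels parameters.

```python
import numpy as np
from scipy import ndimage

def find_flame_kernels(oh_plif, threshold, min_pixels=10):
    """Return centroids of detached OH regions ('kernels') in one OH-PLIF frame:
    binarize, label connected components, then discard the largest component
    (taken as the contiguous OH layer) and components below a size cutoff."""
    binary = oh_plif > threshold
    labels, n = ndimage.label(binary)
    if n == 0:
        return []
    sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    main_layer = 1 + int(np.argmax(sizes))        # label of the contiguous OH layer
    kernel_ids = [i + 1 for i, s in enumerate(sizes)
                  if s >= min_pixels and (i + 1) != main_layer]
    if not kernel_ids:
        return []
    return ndimage.center_of_mass(binary, labels, kernel_ids)

# Synthetic frame: one long OH layer plus one detached pocket of OH
frame = np.zeros((128, 128))
frame[60:68, :] = 100.0            # contiguous layer
frame[20:26, 30:36] = 100.0        # detached flame kernel
print(find_flame_kernels(frame, threshold=50.0))   # -> [(~22.5, ~32.5)]
```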

  3. Transmission blocking effects of neem (Azadirachta indica) seed kernel limonoids on Plasmodium berghei early sporogonic development.

    PubMed

    Tapanelli, Sofia; Chianese, Giuseppina; Lucantoni, Leonardo; Yerbanga, Rakiswendé Serge; Habluetzel, Annette; Taglialatela-Scafati, Orazio

    2016-10-01

    Azadirachta indica, known as the neem tree and traditionally called "nature's drug store", is part of several African pharmacopoeias and is widely used for the preparation of homemade remedies and commercial preparations against various illnesses, including malaria. Employing a bio-guided fractionation approach, molecules obtained from A. indica ripe and green fruit kernels were tested for activity against early sporogonic stages of Plasmodium berghei, the parasite stages that develop in the mosquito midgut after an infective blood meal. The limonoid deacetylnimbin (3) was identified as one of the most active compounds of the extract, with considerably higher activity than the close analogue nimbin (2). Pure deacetylnimbin (3) appeared to interfere with transmissible Plasmodium stages with a potency similar to that of azadirachtin A. Considering its higher thermal and chemical stability, deacetylnimbin could represent a suitable alternative to azadirachtin A for the preparation of transmission-blocking antimalarials. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Application of Tools to Measure PCB Microbial Dechlorination and Flux into Water During In-situ Treatment of Sediments

    DTIC Science & Technology

    2011-08-01

    …flocs within a radius of two flocs' centerline would be intercepted by the settling particle. The curvilinear kernel assumes only smaller particles hit… Aerobic Sediment Slurry… Study 4. Modeling the Impact of Flocculation on the Fate of Organic and Inorganic Particles… suspended particles at the beginning of the free settling period… Figure 4.2: Three fOC distribution trends: small, uniform, and size-variable.

  5. Evaluation of various carbon blacks and dispersing agents for use in the preparation of uranium microspheres with carbon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, Rodney Dale; Johnson, Jared A.; Collins, Jack Lee

    A comparison study on carbon blacks and dispersing agents was performed to determine their impacts on the final properties of uranium fuel kernels with carbon. The main target compositions in this internal gelation study were 10 and 20 mol % uranium dicarbide (UC2), which is UC1.86, with the balance uranium dioxide. After heat treatment at 1900 K in flowing carbon monoxide in argon for 12 h, the density of the kernels produced using an X-energy proprietary carbon suspension, which is commercially available, ranged from 96% to 100% of theoretical density (TD), with full conversion of UC to UC2 at both carbon concentrations. However, higher carbon concentrations such as a 2.5 mol ratio of carbon to uranium in the feed solutions failed to produce gel spheres with the proprietary carbon suspension. The kernels using our former baseline of Mogul L carbon black and Tamol SN were 90–92% of TD with full conversion of UC to UC2 at a variety of carbon levels. Raven 5000 carbon black and Tamol SN were used to produce 10 mol % UC2 kernels with 95% of TD. However, an increase in the Raven 5000 concentration led to a kernel density below 90% of TD. Raven 3500 carbon black and Tamol SN were used to make very dense kernels without complete conversion to UC2. Lastly, the selection of the carbon black and dispersing agent is highly dependent on the desired final properties of the target kernels.

  6. Evaluation of various carbon blacks and dispersing agents for use in the preparation of uranium microspheres with carbon

    NASA Astrophysics Data System (ADS)

    Hunt, R. D.; Johnson, J. A.; Collins, J. L.; McMurray, J. W.; Reif, T. J.; Brown, D. R.

    2018-01-01

    A comparison study on carbon blacks and dispersing agents was performed to determine their impacts on the final properties of uranium fuel kernels with carbon. The main target compositions in this internal gelation study were 10 and 20 mol % uranium dicarbide (UC2), which is UC1.86, with the balance uranium dioxide. After heat treatment at 1900 K in flowing carbon monoxide in argon for 12 h, the density of the kernels produced using an X-energy proprietary carbon suspension, which is commercially available, ranged from 96% to 100% of theoretical density (TD), with full conversion of UC to UC2 at both carbon concentrations. However, higher carbon concentrations such as a 2.5 mol ratio of carbon to uranium in the feed solutions failed to produce gel spheres with the proprietary carbon suspension. The kernels using our former baseline of Mogul L carbon black and Tamol SN were 90-92% of TD with full conversion of UC to UC2 at a variety of carbon levels. Raven 5000 carbon black and Tamol SN were used to produce 10 mol % UC2 kernels with 95% of TD. However, an increase in the Raven 5000 concentration led to a kernel density below 90% of TD. Raven 3500 carbon black and Tamol SN were used to make very dense kernels without complete conversion to UC2. The selection of the carbon black and dispersing agent is highly dependent on the desired final properties of the target kernels.

  7. Evaluation of various carbon blacks and dispersing agents for use in the preparation of uranium microspheres with carbon

    DOE PAGES

    Hunt, Rodney Dale; Johnson, Jared A.; Collins, Jack Lee; ...

    2017-10-12

    A comparison study on carbon blacks and dispersing agents was performed to determine their impacts on the final properties of uranium fuel kernels with carbon. The main target compositions in this internal gelation study were 10 and 20 mol % uranium dicarbide (UC2), which is UC1.86, with the balance uranium dioxide. After heat treatment at 1900 K in flowing carbon monoxide in argon for 12 h, the density of the kernels produced using an X-energy proprietary carbon suspension, which is commercially available, ranged from 96% to 100% of theoretical density (TD), with full conversion of UC to UC2 at both carbon concentrations. However, higher carbon concentrations such as a 2.5 mol ratio of carbon to uranium in the feed solutions failed to produce gel spheres with the proprietary carbon suspension. The kernels using our former baseline of Mogul L carbon black and Tamol SN were 90–92% of TD with full conversion of UC to UC2 at a variety of carbon levels. Raven 5000 carbon black and Tamol SN were used to produce 10 mol % UC2 kernels with 95% of TD. However, an increase in the Raven 5000 concentration led to a kernel density below 90% of TD. Raven 3500 carbon black and Tamol SN were used to make very dense kernels without complete conversion to UC2. Lastly, the selection of the carbon black and dispersing agent is highly dependent on the desired final properties of the target kernels.

  8. Uranium nitride as LWR TRISO fuel: Thermodynamic modeling of U-C-N

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besmann, Theodore M; Shin, Dongwon

    TRISO coated particle fuel is envisioned as a next generation replacement for current urania pellet fuel in LWR applications. To obtain adequate fissile loading, the kernel of the TRISO particle will need to be UN. In support of the fuel development effort, an assessment of phase regions of interest in the U-C-N system was undertaken, as the fuel will be prepared by the carbothermic reduction of the oxide and it will be in equilibrium with carbon within the TRISO particle. The phase equilibria and thermochemistry of the U-C-N system are reviewed, including nitrogen pressure measurements above various phase fields. Selected measurements were used to fit a first order model of the UC1-xNx phase, represented by the inter-solution of UN and UC. Fit to the data was significantly improved by also adjusting the heat of formation for UN by ~12 kJ/mol, and the phase equilibria were best reproduced by also adjusting the heat for U2N3 by +XXX. The determined interaction parameters yielded a slightly positive deviation from ideality, which agrees with lattice parameter measurements that show a positive deviation from Vegard's law. The resultant model, together with reported values for other phases in the system, was used to generate isothermal sections of the U-C-N phase diagram. Nitrogen partial pressures were also computed for regions of interest.

  9. Dissolution of Lipid-Based Matrices in Simulated Gastrointestinal Solutions to Evaluate Their Potential for the Encapsulation of Bioactive Ingredients for Foods.

    PubMed

    Raymond, Yves; Champagne, Claude P

    2014-01-01

    The goal of the study was to compare the dissolution of chocolate to that of other lipid-based matrices suitable for the microencapsulation of bioactive ingredients in simulated gastrointestinal solutions. Particles of approximately 750 μm or 2.5 mm were prepared from the following lipid-based matrices: cocoa butter, fractionated palm kernel oil (FPKO), chocolate, beeswax, carnauba wax, and paraffin. They were added to solutions designed to simulate gastric secretions (GS) or duodenum secretions (DS) at 37°C. Paraffin, carnauba wax, and beeswax did not dissolve in either the GS or DS media. Cocoa butter, FPKO, and chocolate dissolved in the DS medium. Cocoa butter, and to a lesser extent chocolate, also dissolved in the GS medium. With chocolate, dissolution was twice as fast with the small particles (750 μm) as with the larger (2.5 mm) ones. With 750 μm particles, 90% dissolution of chocolate beads was attained after only 60 minutes in the DS medium, while it took 120 minutes for 70% of FPKO beads to dissolve under the same conditions. The data are discussed from the perspective of controlled release in the gastrointestinal tract of encapsulated ingredients (minerals, oils, probiotic bacteria, enzymes, vitamins, and peptides) used in the development of functional foods.

  10. Dissolution of Lipid-Based Matrices in Simulated Gastrointestinal Solutions to Evaluate Their Potential for the Encapsulation of Bioactive Ingredients for Foods

    PubMed Central

    Champagne, Claude P.

    2014-01-01

    The goal of the study was to compare the dissolution of chocolate to that of other lipid-based matrices suitable for the microencapsulation of bioactive ingredients in simulated gastrointestinal solutions. Particles of approximately 750 μm or 2.5 mm were prepared from the following lipid-based matrices: cocoa butter, fractionated palm kernel oil (FPKO), chocolate, beeswax, carnauba wax, and paraffin. They were added to solutions designed to simulate gastric secretions (GS) or duodenum secretions (DS) at 37°C. Paraffin, carnauba wax, and beeswax did not dissolve in either the GS or DS media. Cocoa butter, FPKO, and chocolate dissolved in the DS medium. Cocoa butter, and to a lesser extent chocolate, also dissolved in the GS medium. With chocolate, dissolution was twice as fast with the small particles (750 μm) as with the larger (2.5 mm) ones. With 750 μm particles, 90% dissolution of chocolate beads was attained after only 60 minutes in the DS medium, while it took 120 minutes for 70% of FPKO beads to dissolve under the same conditions. The data are discussed from the perspective of controlled release in the gastrointestinal tract of encapsulated ingredients (minerals, oils, probiotic bacteria, enzymes, vitamins, and peptides) used in the development of functional foods. PMID:26904647

  11. Uranium nitride as LWR TRISO fuel: Thermodynamic modeling of U-C-N

    NASA Astrophysics Data System (ADS)

    Besmann, Theodore M.; Shin, Dongwon; Lindemer, Terrence B.

    2012-08-01

    TRISO coated particle fuel is envisioned as a next generation replacement for current urania pellet fuel in LWR applications. To obtain adequate fissile loading, the kernel of the TRISO particle will likely need to be UN instead of UO2. In support of the necessary development effort for this new fuel system, an assessment of phase regions of interest in the U-C-N system was undertaken, as the fuel will be prepared by the carbothermic reduction of the oxide followed by nitriding, will be in equilibrium with carbon within the TRISO particle, and will react with minor actinides and fission products. The phase equilibria and thermochemistry of the U-C-N system are reviewed, including nitrogen pressure measurements above various phase fields. Measurements were used to confirm that an ideal solution model of UN and UC adequately represents the UC1-xNx phase. Agreement with the data was significantly improved by effectively adjusting the Gibbs free energy of UN by +12 kJ/mol. This also required adjustment of the value for the sesquinitride by +17 kJ/mol to obtain agreement with phase equilibria. The resultant model, together with reported values for other phases in the system, was used to generate isothermal sections of the U-C-N phase diagram. Nitrogen partial pressures were also computed for regions of interest.
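
    For reference, the ideal-solution description of the UC1-xNx phase mentioned in the abstract takes the standard textbook form below (not reproduced from the paper); the adjusted end-member energies enter through the G° terms.

```latex
% Ideal-solution Gibbs energy of UC_{1-x}N_x built from the UC and UN end members
% (per mole of uranium; R is the gas constant, T the temperature):
G_{\mathrm{UC}_{1-x}\mathrm{N}_x}(T)
  = (1-x)\,G^{\circ}_{\mathrm{UC}}(T) + x\,G^{\circ}_{\mathrm{UN}}(T)
  + RT\left[(1-x)\ln(1-x) + x\ln x\right]
```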

  12. Difference in postprandial GLP-1 response despite similar glucose kinetics after consumption of wheat breads with different particle size in healthy men.

    PubMed

    Eelderink, Coby; Noort, Martijn W J; Sozer, Nesli; Koehorst, Martijn; Holst, Jens J; Deacon, Carolyn F; Rehfeld, Jens F; Poutanen, Kaisa; Vonk, Roel J; Oudhuis, Lizette; Priebe, Marion G

    2017-04-01

    Underlying mechanisms of the beneficial health effects of low glycemic index starchy foods are not fully elucidated yet. We varied the wheat particle size to obtain fiber-rich breads with a high and low glycemic response and investigated the differences in postprandial glucose kinetics and metabolic response after their consumption. Ten healthy male volunteers participated in a randomized, crossover study, consuming 13C-enriched breads with different structures: a control bread (CB) made from wheat flour combined with wheat bran, and a kernel bread (KB) where 85 % of the flour was substituted with broken wheat kernels. The structure of the breads was characterized extensively. The use of stable isotopes enabled calculation of glucose kinetics: rate of appearance of exogenous glucose, endogenous glucose production, and glucose clearance rate. Additionally, postprandial plasma concentrations of glucose, insulin, glucagon, incretins, cholecystokinin, and bile acids were analyzed. Despite the attempt to obtain a bread with a low glycemic response by replacing flour with broken kernels, the glycemic response and glucose kinetics were quite similar after consumption of CB and KB. Interestingly, the glucagon-like peptide-1 (GLP-1) response was much lower after KB compared to CB (iAUC, P < 0.005). A clear postprandial increase in plasma conjugated bile acids was observed after both meals. Substitution of 85 % of the wheat flour by broken kernels in bread did not result in a difference in glucose response and kinetics, but in a pronounced difference in GLP-1 response. Thus, changing the processing conditions of wheat for baking bread can influence the metabolic response beyond glycemia and may therefore influence health.

  13. Modeling reactive transport with particle tracking and kernel estimators

    NASA Astrophysics Data System (ADS)

    Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-04-01

    Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth systems. The Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species. For a well-mixed system, the chemical reactions are controlled by the classical thermodynamic rate coefficient. Each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate this system, an infinite number of particles is required, which is computationally unfeasible. On the other hand, a finite number of particles leads to a poorly mixed system which is limited by diffusion. Recent works have used this effect to actually model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect in most cases should be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle mass that expand its region of influence, hence providing a wider region for chemical reactions with time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in diluted systems should be modeled based on alternative conceptual models and not on a limited number of particles.
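
    The contrast drawn here between Dirac-delta particles and kernel-weighted particles can be illustrated in a few lines. This 1D sketch is not from the paper; the Gaussian bandwidth, particle count, and grid are arbitrary assumptions.

```python
import numpy as np

def kde_concentration(positions, masses, grid, bandwidth):
    """Gaussian-kernel estimate of a 1D concentration field: each particle's
    mass is spread over a region of width ~bandwidth instead of a Dirac delta."""
    z = (grid[:, None] - positions[None, :]) / bandwidth
    w = np.exp(-0.5 * z**2) / (np.sqrt(2.0 * np.pi) * bandwidth)
    return w @ masses                  # mass per unit length at each grid point

# With few particles, histogram (Dirac-like) binning is noisy while the KDE is smooth
rng = np.random.default_rng(1)
x_p = rng.normal(0.0, 1.0, size=200)              # particle positions
m_p = np.full(x_p.size, 1.0 / x_p.size)           # equal particle masses
x = np.linspace(-4.0, 4.0, 81)
c_kde = kde_concentration(x_p, m_p, x, bandwidth=0.3)
c_bin, edges = np.histogram(x_p, bins=x, weights=m_p)
print(c_kde.max(), (c_bin / np.diff(edges)).max())   # smooth vs noisy estimate
```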

  14. From the Weyl quantization of a particle on the circle to number–phase Wigner functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Przanowski, Maciej, E-mail: maciej.przanowski@p.lodz.pl; Brzykcy, Przemysław, E-mail: 800289@edu.p.lodz.pl; Tosiek, Jaromir, E-mail: jaromir.tosiek@p.lodz.pl

    2014-12-15

    A generalized Weyl quantization formalism for a particle on the circle is shown to supply an effective method for defining the number–phase Wigner function in quantum optics. A Wigner function for the state ϱ̂ and the kernel K for a particle on the circle is defined and its properties are analysed. Then it is shown how this Wigner function can be easily modified to give the number–phase Wigner function in quantum optics. Some examples of such number–phase Wigner functions are considered.

  15. Symmetry preserving truncations of the gap and Bethe-Salpeter equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binosi, Daniele; Chang, Lei; Papavassiliou, Joannis

    2016-05-01

    Ward-Green-Takahashi (WGT) identities play a crucial role in hadron physics, e.g. imposing stringent relationships between the kernels of the one- and two-body problems, which must be preserved in any veracious treatment of mesons as bound states. In this connection, one may view the dressed gluon-quark vertex, Gamma(alpha)(mu), as fundamental. We use a novel representation of Gamma(alpha)(mu), in terms of the gluon-quark scattering matrix, to develop a method capable of elucidating the unique quark-antiquark Bethe-Salpeter kernel, K, that is symmetry consistent with a given quark gap equation. A strength of the scheme is its ability to expose and capitalize on graphic symmetries within the kernels. This is displayed in an analysis that reveals the origin of H-diagrams in K, which are two-particle-irreducible contributions, generated as two-loop diagrams involving the three-gluon vertex, that cannot be absorbed as a dressing of Gamma(alpha)(mu) in a Bethe-Salpeter kernel nor expressed as a member of the class of crossed-box diagrams. Thus, there are no general circumstances under which the WGT identities essential for a valid description of mesons can be preserved by a Bethe-Salpeter kernel obtained simply by dressing both gluon-quark vertices in a ladderlike truncation; and, moreover, adding any number of similarly dressed crossed-box diagrams cannot improve the situation.

  16. A shock-capturing SPH scheme based on adaptive kernel estimation

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime

    2006-02-01

    Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
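
    The "adaptive kernel estimation of the density" described above follows the general pilot-then-adapt idea of adaptive bandwidth selection. The 1D sketch below only illustrates that idea (it is not the paper's shock-capturing SPH scheme); the Gaussian kernel, the exponent alpha = 0.5, and the particle layout are assumptions.

```python
import numpy as np

def gaussian_w(r, h):
    """1D Gaussian smoothing kernel W(r, h)."""
    return np.exp(-0.5 * (r / h) ** 2) / (np.sqrt(2.0 * np.pi) * h)

def adaptive_sph_density(x, m, h0, alpha=0.5):
    """Two-pass adaptive density estimate: a pilot SPH sum with fixed h0, then
    per-particle bandwidths h_j = h0 * (rho_pilot_j / g)^(-alpha), so that
    low-density regions receive more smoothing (as in adaptive KDE)."""
    r = x[:, None] - x[None, :]
    rho_pilot = gaussian_w(r, h0) @ m              # fixed-width pilot estimate
    g = np.exp(np.mean(np.log(rho_pilot)))         # geometric-mean normalization
    h = h0 * (rho_pilot / g) ** (-alpha)           # adaptive bandwidths
    return gaussian_w(r, h[None, :]) @ m           # gather form of the SPH sum

# Uniform masses, strongly nonuniform spacing (shock-tube-like particle layout)
x = np.concatenate([np.linspace(0.0, 1.0, 80), np.linspace(1.0, 2.0, 20)[1:]])
m = np.full(x.size, 1.0 / x.size)
rho = adaptive_sph_density(x, m, h0=0.05)
print(rho[:3], rho[-3:])    # high density on the left, low on the right
```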

  17. An investigation of the generation and properties of laboratory-produced ball lightning

    NASA Astrophysics Data System (ADS)

    Oreshko, A. G.

    2015-06-01

    The experiments revealed that ball lightning is a self-confining plasma system, quasi-neutral as a whole, that rotates around its axis. Ball lightning has the structure of a spherical electric domain, consisting of a kernel with excess negative charge and an external spherical layer with excess positive charge. The excess of charges of one sort and the lack of charges of the other sort in the kernel or in the external spherical layer significantly reduces the possibility of electron capture by means of the electric field created by the nearest ions, and leads to a drastic slowdown of the recombination process. Direct proof has been obtained that inside ball lightning, in the external spherical layer that rotates around the axis, there is a circular current of sub-relativistic particles. This current creates and maintains the poloidal magnetic field of the ball lightning, i.e. it carries out the function of a magnetic dynamo. The kernel of the ball lightning is situated in a region with minimum values of the magnetic field induction. The inequality of positive and negative charges in the elements of ball lightning also significantly reduces bremsstrahlung losses from the charged plasma. Ball lightning generation occurs in a plasma vortex. The energy of ball lightning in the region of its generation differs significantly from the energy of ball lightning drifting in space. The axial component of the particle kinetic energy slightly exceeds 100 keV, and the rotational component of the ion energy is a bit greater than 1 MeV. Ball lightning is an autonomous, cyclotron-type accelerator of charged particles embedded in the atmosphere, owing to the self-generation of strong crossed electric and magnetic fields. A discussion of the conditions for the stability and long-term existence of ball lightning is given.

  18. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine

    PubMed Central

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir

    2017-01-01

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the considered temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080

  19. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2017-04-19

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the considered temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems.
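
    A kernel extreme learning machine of the kind described in these two records has a closed-form training step, with only the regularization constant C and kernel parameter gamma left to search. The sketch below follows that standard closed form but is not the authors' code; the CSA stage is replaced here by a simple random multistart, and Xtr/Ttr/Xval/Tval are assumed calibration inputs and compensated-output targets.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def rbf_kernel(A, B, gamma):
    return np.exp(-gamma * cdist(A, B, "sqeuclidean"))

def kelm_fit(X, T, C, gamma):
    """Kernel-ELM output weights: beta = (I / C + K)^-1 T."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kelm_predict(Xnew, X, beta, gamma):
    return rbf_kernel(Xnew, X, gamma) @ beta

def tune_kelm(Xtr, Ttr, Xval, Tval, n_starts=8, seed=0):
    """Nelder-Mead search over (log10 C, log10 gamma); the random multistart
    stands in for the coupled-simulated-annealing stage of the paper."""
    rng = np.random.default_rng(seed)
    def val_mse(p):
        C, gamma = 10.0 ** p
        beta = kelm_fit(Xtr, Ttr, C, gamma)
        err = kelm_predict(Xval, Xtr, beta, gamma) - Tval
        return float(np.mean(err ** 2))
    best = min((minimize(val_mse, rng.uniform(-3, 3, 2), method="Nelder-Mead")
                for _ in range(n_starts)), key=lambda r: r.fun)
    return 10.0 ** best.x          # tuned (C, gamma)
```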

  20. Santalbic acid from quandong kernels and oil fed to rats affects kidney and liver P450.

    PubMed

    Jones, G P; Watson, T G; Sinclair, A J; Birkett, A; Dunt, N; Nair, S S; Tonkin, S Y

    1999-09-01

    Kernels of the plant Santalum acuminatum (quandong) are eaten as Australian 'bush foods'. They are rich in oil and contain relatively large amounts of the acetylenic fatty acid, santalbic acid (trans-11-octadecen-9-ynoic acid), whose chemical structure is unlike that of normal dietary fatty acids. When rats were fed high fat diets in which oil from quandong kernels supplied 50% of dietary energy, the proportion of santalbic acid absorbed was more than 90%. Feeding quandong oil elevated not only total hepatic cytochrome P450 but also the cytochrome P450 4A subgroup of enzymes as shown by a specific immunoblotting technique. A purified methyl santalbate preparation isolated from quandong oil was fed to rats at 9% of dietary energy for 4 days and this also elevated cytochrome P450 4A in both kidney and liver microsomes in comparison with methyl esters from canola oil. Santalbic acid appears to be metabolized differently from the usual dietary fatty acids and the consumption of oil from quandong kernels may cause perturbations in normal fatty acid biochemistry.

  1. Initial Simulations of RF Waves in Hot Plasmas Using the FullWave Code

    NASA Astrophysics Data System (ADS)

    Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo

    2017-10-01

    FullWave is a simulation tool that models RF fields in hot inhomogeneous magnetized plasmas. The wave equations with a linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. In an rf field, the hot plasma dielectric response is limited to within a few particle Larmor radii of the magnetic field line passing through the test point. This localization of the hot plasma dielectric response results in a sparse problem matrix, which significantly reduces the size of the problem and makes the simulations faster. We will present the initial results of modeling of rf waves using the FullWave code, including calculation of the nonlocal conductivity kernel in 2D tokamak geometry, interpolation of the conductivity kernel from test points to the adaptive cloud of computational points, and self-consistent simulations of 2D rf fields using the calculated hot plasma conductivity kernel in a tokamak plasma with reduced parameters. Work supported by the US DOE SBIR program.

  2. Electron Microscopic Examination of Irradiated TRISO Coated Particles of Compact 6-3-2 of AGR-1 Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Rooyen, Isabella Johanna; Demkowicz, Paul Andrew; Riesterer, Jessica Lori

    2012-12-01

    The electron microscopic examination of selected irradiated TRISO coated particles from fuel compact 6-3-2 of the AGR-1 experiment is presented in this report. Compact 6-3-2 refers to the compact in Capsule 6 at level 3 of Stack 2. The fuel used in the Capsule 6 compacts is called the “baseline” fuel, as it was fabricated with refined coating process conditions used to fabricate historic German fuel, which showed excellent irradiation performance with UO2 kernels. The AGR-1 fuel, however, is made of low-enriched uranium oxycarbide (UCO). Kernel diameters are approximately 350 µm with a U-235 enrichment of approximately 19.7%. Compact 6-3-2 has been irradiated to 11.3% FIMA compact average burn-up, with a time-average, volume-average temperature of 1070.2°C and a compact average fast fluence of 2.38E21 n/cm2.

  4. The complex variable reproducing kernel particle method for bending problems of thin plates on elastic foundations

    NASA Astrophysics Data System (ADS)

    Chen, L.; Cheng, Y. M.

    2018-07-01

    In this paper, the complex variable reproducing kernel particle method (CVRKPM) for solving the bending problems of isotropic thin plates on elastic foundations is presented. In CVRKPM, a one-dimensional basis function is used to obtain the shape function of a two-dimensional problem. CVRKPM is used to form the approximation function of the deflection of a thin plate resting on an elastic foundation, the Galerkin weak form of thin plates on elastic foundations is employed to obtain the discretized system equations, the penalty method is used to apply the essential boundary conditions, and the Winkler and Pasternak foundation models are used to describe the interface pressure between the plate and the foundation. The corresponding formulae of CVRKPM for thin plates on elastic foundations are then presented in detail. Several numerical examples are given to discuss the efficiency and accuracy of CVRKPM, and the corresponding advantages of the present method are shown.

  5. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmalz, Mark S

    2011-07-24

    Statement of Problem - The Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G̲ for a high-performance architecture. Key computational and data-movement kernels of the application were analyzed and optimized for parallel execution using the mapping G → G̲, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in the solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - The Department of Energy has many simulation codes that must compute faster to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion for high-performance computing systems.

  6. Improvement of efficiency of oil extraction from wild apricot kernels by using enzymes.

    PubMed

    Bisht, Tejpal Singh; Sharma, Satish Kumar; Sati, Ramesh Chandra; Rao, Virendra Kumar; Yadav, Vijay Kumar; Dixit, Anil Kumar; Sharma, Ashok Kumar; Chopra, Chandra Shekhar

    2015-03-01

    An experiment was conducted to evaluate and standardize a protocol for enhancing the recovery and quality of oil from cold-pressed wild apricot kernels by using various enzymes. Wild apricot kernels were ground into powder in a grinder. Different lots of 3 kg of powdered kernel were prepared and treated with different concentrations of enzyme solutions, viz. Pectazyme (pectinase), Mashzyme (cellulase), and Pectazyme + Mashzyme. Kernel powder mixed with the enzyme solutions was kept for 2 h at 50(±2) °C for enzymatic treatment before oil extraction in an oil expeller. Results indicate that the use of enzymes enhanced oil recovery by 9.00-14.22 %. Maximum oil recovery was observed at 0.3-0.4 % enzyme concentration for both enzymes individually, as well as in combination. All three enzymatic treatments increased oil yield. However, with the 0.3 % (Pectazyme + Mashzyme) combination, a maximum oil recovery of 47.33 % was observed, against 33.11 % in the control. The oil content left (wasted) in the cake and residue was reduced from 11.67 % and 11.60 % to 7.31 % and 2.72 %, respectively, showing a large increase in the efficiency of oil recovery from wild apricot kernels. Quality characteristics indicate that the oil quality was not adversely affected by enzymatic treatment. It was concluded that treatment of powdered wild apricot kernels with the 0.3 % (Pectazyme + Mashzyme) combination was highly effective, increasing oil recovery by 14.22 % without adversely affecting quality, and thus may be used commercially by the industry to reduce wastage of this highly valuable oil in the cake.

  7. Effect of one step KOH activation and CaO modified carbon in transesterification reaction

    NASA Astrophysics Data System (ADS)

    Yacob, Abd Rahim; Zaki, Muhammad Azam Muhammad

    2017-11-01

    In this work, one-step activation of palm kernel shells using potassium hydroxide (KOH) with calcium oxide (CaO) modification was introduced. Various concentrations of calcium oxide were used as catalyst while maintaining the same concentration of potassium hydroxide to activate and impregnate the palm kernel shells before calcination at 500°C for 5 hours. All the prepared samples were characterized using Fourier transform infrared (FTIR) spectroscopy and field emission scanning electron microscopy (FESEM). FTIR analysis of the raw palm kernel shell showed the presence of various functional groups. However, after activation, most of the functional groups were eliminated. The basic strength of the prepared samples was determined using a back-titration method. The samples were then used as heterogeneous base catalysts for the transesterification reaction of rice bran oil with methanol. Analysis of the products was performed using gas chromatography with flame ionization detection (GC-FID) to calculate the percentage conversion to biodiesel. This study shows that as the percentage of calcium oxide doped onto the one-step KOH-activated carbon increases, the basic strength increases, followed by an increase in biodiesel production. The optimization study shows that optimum biodiesel production was obtained at 8 wt% catalyst loading and a 9:1 methanol-to-oil molar ratio at 65°C for 6 hours, which gives a conversion of up to 95%.

  8. AGR-5/6/7 LEUCO Kernel Fabrication Readiness Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, Douglas W.; Bailey, Kirk W.

    2015-02-01

    In preparation for forming low-enriched uranium carbide/oxide (LEUCO) fuel kernels for the Advanced Gas Reactor (AGR) fuel development and qualification program, Idaho National Laboratory conducted an operational readiness review of the Babcock & Wilcox Nuclear Operations Group – Lynchburg (B&W NOG-L) procedures, processes, and equipment from January 14 – January 16, 2015. The readiness review focused on requirements taken from the American Society of Mechanical Engineers (ASME) Nuclear Quality Assurance Standard (NQA-1-2008, 1a-2009), a recent occurrence at the B&W NOG-L facility related to preparation of acid-deficient uranyl nitrate solution (ADUN), and a relook at concerns noted in a previous review. Topic areas open for the review were communicated to B&W NOG-L in advance of the on-site visit to facilitate the collection of objective evidence attesting to the state of readiness.

  9. Preparation and Characterization of Activated Carbon from Palm Kernel Shell

    NASA Astrophysics Data System (ADS)

    Andas, J.; Rahman, M. L. A.; Yahya, M. S. M.

    2017-08-01

    In this study, a high-quality activated carbon (AC) was successfully synthesized from palm kernel shell (PKS) via single-step KOH activation. Several conditions, such as impregnation ratio and activation temperature, were investigated to identify the optimum. The activated carbon prepared under the optimum conditions of impregnation ratio (1:1.5 raw/KOH) and activation temperature (800 °C) was characterized using the Na2S2O3 volumetric method, CHNS/O analysis, and scanning electron microscopy (SEM). The Na2S2O3 volumetric method gave an iodine number of 994.83 mg g-1 with a yield of 8.931 %. CHNS/O analysis verified an increase in C content for KOH-AC (61.10 %) in comparison to the raw PKS (47.28 %). A well-formed porous structure was evidenced by SEM for KOH-AC. This study shows the successful conversion of an agricultural waste into a value-added porous material under benign conditions.

  10. 7 CFR 51.2738 - Foreign material.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Standards for Grades of Shelled Spanish Type Peanuts Definitions § 51.2738 Foreign material. Foreign material means pieces or loose particles of any substance other than peanut kernels or skins. ...

  11. 7 CFR 51.2738 - Foreign material.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Standards for Grades of Shelled Spanish Type Peanuts Definitions § 51.2738 Foreign material. Foreign material means pieces or loose particles of any substance other than peanut kernels or skins. ...

  12. End-to-end plasma bubble PIC simulations on GPUs

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Matteucci, Jackson; Bhattacharjee, Amitava

    2017-10-01

    Accelerator technologies play a crucial role in eventually achieving exascale computing capabilities. The current and upcoming leadership machines at ORNL (Titan and Summit) employ Nvidia GPUs, which provide vast computational power but also need specifically adapted computational kernels to fully exploit them. In this work, we will show end-to-end particle-in-cell simulations of the formation, evolution and coalescence of laser-generated plasma bubbles. This work showcases the GPU capabilities of the PSC particle-in-cell code, which has been adapted for this problem to support particle injection, a heating operator and a collision operator on GPUs.

  13. Norm overlap between many-body states: Uncorrelated overlap between arbitrary Bogoliubov product states

    NASA Astrophysics Data System (ADS)

    Bally, B.; Duguet, T.

    2018-02-01

    Background: State-of-the-art multi-reference energy density functional calculations require the computation of norm overlaps between different Bogoliubov quasiparticle many-body states. It is only recently that the efficient and unambiguous calculation of such norm kernels has become available under the form of Pfaffians [L. M. Robledo, Phys. Rev. C 79, 021302 (2009), 10.1103/PhysRevC.79.021302]. Recently developed particle-number-restored Bogoliubov coupled-cluster (PNR-BCC) and particle-number-restored Bogoliubov many-body perturbation (PNR-BMBPT) ab initio theories [T. Duguet and A. Signoracci, J. Phys. G 44, 015103 (2017), 10.1088/0954-3899/44/1/015103] make use of generalized norm kernels incorporating explicit many-body correlations. In PNR-BCC and PNR-BMBPT, the Bogoliubov states involved in the norm kernels differ specifically via a global gauge rotation. Purpose: The goal of this work is threefold. We wish (i) to propose and implement an alternative to the Pfaffian method to compute unambiguously the norm overlap between arbitrary Bogoliubov quasiparticle states, (ii) to extend the first point to explicitly correlated norm kernels, and (iii) to scrutinize the analytical content of the correlated norm kernels employed in PNR-BMBPT. Point (i) constitutes the purpose of the present paper while points (ii) and (iii) are addressed in a forthcoming paper. Methods: We generalize the method used in another work [T. Duguet and A. Signoracci, J. Phys. G 44, 015103 (2017), 10.1088/0954-3899/44/1/015103] in such a way that it is applicable to kernels involving arbitrary pairs of Bogoliubov states. The formalism is presently explicated in detail in the case of the uncorrelated overlap between arbitrary Bogoliubov states. The power of the method is numerically illustrated and benchmarked against known results on the basis of toy models of increasing complexity. Results: The norm overlap between arbitrary Bogoliubov product states is obtained under a closed-form expression allowing its computation without any phase ambiguity. The formula is physically intuitive, accurate, and versatile. It equally applies to norm overlaps between Bogoliubov states of even or odd number parity. Numerical applications illustrate these features and provide a transparent representation of the content of the norm overlaps. Conclusions: The complex norm overlap between arbitrary Bogoliubov states is computed, without any phase ambiguity, via elementary linear algebra operations. The method can be used in any configuration mixing of orthogonal and non-orthogonal product states. Furthermore, the closed-form expression extends naturally to correlated overlaps at play in PNR-BCC and PNR-BMBPT. As such, the straight overlap between Bogoliubov states is the zero-order reduction of more involved norm kernels to be studied in a forthcoming paper.
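
    As a point of reference for the phase ambiguity discussed above, the classic Onishi formula returns only the modulus of the overlap between two Bogoliubov vacua. The toy sketch below evaluates that baseline (it is not the paper's Pfaffian-free method), with U and V matrices assumed given in the usual (U, V) Bogoliubov parametrization.

```python
import numpy as np

def onishi_overlap_modulus(U0, V0, U1, V1):
    """|<Phi0|Phi1>| from the Onishi formula, |det(U0^H U1 + V0^H V1)|^(1/2).
    The overall sign/phase is left undetermined, which is exactly the ambiguity
    that Pfaffian-based methods and the alternative of this work remove."""
    M = U0.conj().T @ U1 + V0.conj().T @ V1
    return np.sqrt(abs(np.linalg.det(M)))

# Sanity check: for any (U, V) whose stacked columns are orthonormal
# (so U^H U + V^H V = I), the self-"overlap" modulus is 1. A check of the full
# Bogoliubov conditions is omitted in this toy example.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))
Q, _ = np.linalg.qr(A)            # 8x4 matrix with orthonormal columns
U, V = Q[:4], Q[4:]
print(onishi_overlap_modulus(U, V, U, V))   # ~1.0
```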

  14. Wilson loops and QCD/string scattering amplitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makeenko, Yuri; Olesen, Poul; Niels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen O

    2009-07-15

    We generalize modern ideas about the duality between Wilson loops and scattering amplitudes in N=4 super Yang-Mills theory to large N QCD by deriving a general relation between QCD meson scattering amplitudes and Wilson loops. We then investigate properties of the open-string disk amplitude integrated over reparametrizations. When the Wilson loop is approximated by the area behavior, we find that the QCD scattering amplitude is a convolution of the standard Koba-Nielsen integrand and a kernel. As usual, poles originate from the first factor, whereas no (momentum-dependent) poles can arise from the kernel. We show that the kernel becomes a constant when the number of external particles becomes large. The usual Veneziano amplitude then emerges in the kinematical regime where the Wilson loop can be reliably approximated by the area behavior. In this case, we obtain a direct duality between Wilson loops and scattering amplitudes when spatial variables and momenta are interchanged, in analogy with the N=4 super Yang-Mills theory case.

  15. A dry-inoculation method for nut kernels.

    PubMed

    Blessington, Tyann; Theofel, Christopher G; Harris, Linda J

    2013-04-01

    A dry-inoculation method for almonds and walnuts was developed to eliminate the need for the postinoculation drying required by wet-inoculation methods. The survival of Salmonella enterica Enteritidis PT 30 on wet- and dry-inoculated almond and walnut kernels stored under ambient conditions (average: 23 °C; 41 or 47% RH) was then compared over 14 weeks. For wet inoculation, an aqueous Salmonella preparation was added directly to almond or walnut kernels, which were then dried under ambient conditions (3 or 7 days, respectively) to initial nut moisture levels. For the dry inoculation, liquid inoculum was mixed with sterilized sand and dried for 24 h at 40 °C. The dried inoculated sand was mixed with kernels, and the sand was removed by shaking the mixture in a sterile sieve. Mixing procedures to optimize the bacterial transfer from sand to kernel were evaluated; in general, similar levels were achieved on walnuts (4.8-5.2 log CFU/g) and almonds (4.2-5.1 log CFU/g). The decline of Salmonella Enteritidis populations was similar during ambient storage (98 days) for both wet- and dry-inoculation methods for both almonds and walnuts. The dry-inoculation method mimics some of the suspected routes of contamination for tree nuts and may be appropriate for some postharvest challenge studies. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Detection and classification of virus from electron micrograms

    NASA Astrophysics Data System (ADS)

    Strömberg, Jan-Olov

    2010-04-01

    I will present a PhD project where Diffusion Geometry is used in the classification of virus particles in cell kernels from electron micrographs. I will give a very short introduction to Diffusion Geometry and discuss the main classification steps. Some preliminary results from a Master's thesis will be presented.

  17. Blood flow problem in the presence of magnetic particles through a circular cylinder using Caputo-Fabrizio fractional derivative

    NASA Astrophysics Data System (ADS)

    Uddin, Salah; Mohamad, Mahathir; Khalid, Kamil; Abdulhammed, Mohammed; Saifullah Rusiman, Mohd; Che – Him, Norziha; Roslan, Rozaini

    2018-04-01

    In this paper, the flow of blood mixed with magnetic particles, subjected to a uniform transverse magnetic field and a pressure gradient in an axisymmetric circular cylinder, is studied using a recently introduced fractional derivative without a singular kernel. The governing equations are fractional partial differential equations derived using the Caputo-Fabrizio time-fractional derivative (NFDt). The current results agree considerably well with those obtained using the classical Caputo fractional derivative (UFDt).
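
    For reference, the Caputo-Fabrizio derivative with a nonsingular (exponential) kernel that the abstract refers to as NFDt is commonly written in the standard form below (not taken from this paper; M(α) is a normalization function with M(0) = M(1) = 1).

```latex
% Caputo-Fabrizio time-fractional derivative of order 0 < \alpha < 1:
% the singular power-law kernel of the classical Caputo derivative is replaced
% by a nonsingular exponential kernel.
{}^{\mathrm{CF}}D_t^{\alpha} f(t)
  = \frac{M(\alpha)}{1-\alpha}
    \int_0^{t} f'(\tau)\,
      \exp\!\left[-\frac{\alpha\,(t-\tau)}{1-\alpha}\right]\mathrm{d}\tau
```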

  18. NUMERICAL CONVERGENCE IN SMOOTHED PARTICLE HYDRODYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Qirong; Li, Yuexing; Hernquist, Lars

    2015-02-10

    We study the convergence properties of smoothed particle hydrodynamics (SPH) using numerical tests and simple analytic considerations. Our analysis shows that formal numerical convergence is possible in SPH only in the joint limit N → ∞, h → 0, and N_nb → ∞, where N is the total number of particles, h is the smoothing length, and N_nb is the number of neighbor particles within the smoothing volume used to compute smoothed estimates. Previous work has generally assumed that the conditions N → ∞ and h → 0 are sufficient to achieve convergence, while holding N_nb fixed. We demonstrate that if N_nb is held fixed as the resolution is increased, there will be a residual source of error that does not vanish as N → ∞ and h → 0. Formal numerical convergence in SPH is possible only if N_nb is increased systematically as the resolution is improved. Using analytic arguments, we derive an optimal compromise scaling for N_nb by requiring that this source of error balance that present in the smoothing procedure. For typical choices of the smoothing kernel, we find N_nb ∝ N^0.5. This means that if SPH is to be used as a numerically convergent method, the required computational cost does not scale with particle number as O(N), but rather as O(N^(1+δ)), where δ ≈ 0.5, with a weak dependence on the form of the smoothing kernel.
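
    The stated scaling N_nb ∝ N^0.5 implies that per-step neighbour work grows roughly as N·N_nb = O(N^1.5). The toy calculation below only illustrates that relation; the anchor value N_nb = 64 at N = 10^5 is an arbitrary assumption, not a number from the paper.

```python
# Per-time-step neighbour work ~ N * N_nb; with N_nb grown as N^0.5 the total
# scales as O(N^1.5) rather than O(N). The anchor N_nb = 64 at N = 1e5 is made up.
for N in (10**5, 10**6, 10**7):
    N_nb = 64 * (N / 1e5) ** 0.5
    rel_work = (N * N_nb) / (1e5 * 64)
    print(f"N = {N:.0e}   N_nb = {N_nb:7.1f}   relative work = {rel_work:9.1f}")
```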

  19. Computational investigation of intense short-wavelength laser interaction with rare gas clusters

    NASA Astrophysics Data System (ADS)

    Bigaouette, Nicolas

    Current Very High Temperature Reactor designs incorporate TRi-structural ISOtropic (TRISO) particle fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel by dropping a cold precursor solution into a column of hot trichloroethylene (TCE). The temperature difference drives the liquid precursor solution to precipitate the metal solution into gel spheres before reaching the bottom of a production column. Over time, gelation byproducts inhibit complete gelation and the TCE must be purified or discarded. The resulting mixed-waste stream is expensive to dispose of or recycle, and changing the forming fluid to a non-hazardous alternative could greatly improve the economics of kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacements. The physical properties of the alternatives were measured as a function of temperature between 25 °C and 80 °C. Calculated terminal velocities and heat transfer rates provided an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane were selected for further testing, and surrogate yttria-stabilized zirconia (YSZ) kernels were produced using these selected fluids. The kernels were characterized for density, geometry, composition, and crystallinity and compared to a control group of kernels produced in silicone oil. Production in 1-bromotetradecane showed positive results, producing dense (93.8 % TD) and spherical (1.03 aspect ratio) kernels, but proper gelation did not occur in the other alternative forming fluids. With many of the YSZ kernels not properly gelling within the length of the column, this project further investigated the heat transfer properties of the forming fluids and precursor solution. A sensitivity study revealed that the heat transfer properties of the precursor solution have the strongest impact on gelation time. A COMSOL heat transfer model estimated an effective thermal diffusivity range for the YSZ precursor solution of 1.13 × 10^-8 m^2/s to 3.35 × 10^-8 m^2/s, which is an order of magnitude smaller than the value used in previous studies. 1-bromotetradecane is recommended for further investigation with the production of uranium-based kernels.
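
    The column-height estimate described above combines a droplet settling velocity with a heat-up (gelation) time. The sketch below shows one common way to rough this out, using Stokes drag and a lumped-capacitance heating model with Nu = 2; every numerical value (droplet size, densities, viscosity, thermal properties, temperatures) is a made-up placeholder, not data from this work.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def stokes_terminal_velocity(d, rho_p, rho_f, mu_f):
    """Stokes-regime settling velocity of a droplet of diameter d [m];
    valid only when the droplet Reynolds number is well below 1."""
    return G * d**2 * (rho_p - rho_f) / (18.0 * mu_f)

def heatup_time(d, rho_p, cp_p, k_f, T_f, T0, T_target):
    """Lumped-capacitance time for the droplet to reach T_target in a fluid at
    T_f, using the conduction-limit Nusselt number Nu = 2, i.e. h = 2*k_f/d."""
    h = 2.0 * k_f / d
    tau = rho_p * cp_p * (d / 6.0) / h      # (rho*cp*V)/(h*A); V/A = d/6 for a sphere
    return tau * np.log((T_f - T0) / (T_f - T_target))

# Placeholder numbers for a ~1.5 mm precursor droplet in a viscous hot forming fluid
d = 1.5e-3
v = stokes_terminal_velocity(d, rho_p=1400.0, rho_f=800.0, mu_f=0.05)
t = heatup_time(d, rho_p=1400.0, cp_p=3000.0, k_f=0.12, T_f=80.0, T0=25.0, T_target=70.0)
print(f"settling ~ {v*100:.1f} cm/s, heat-up ~ {t:.1f} s, column height ~ {v*t:.2f} m")
```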

  20. Reproducing Kernel Particle Method in Plasticity of Pressure-Sensitive Material with Reference to Powder Forming Process

    NASA Astrophysics Data System (ADS)

    Khoei, A. R.; Samimi, M.; Azami, A. R.

    2007-02-01

    In this paper, an application of the reproducing kernel particle method (RKPM) is presented for the plasticity behavior of pressure-sensitive materials. The RKPM technique is implemented in the large deformation analysis of the powder compaction process. The RKPM shape function and its derivatives are constructed by imposing the consistency conditions. The essential boundary conditions are enforced using the penalty approach. The support of the RKPM shape function covers the same set of particles throughout powder compaction, hence no instability is encountered in the large deformation computation. A double-surface plasticity model is developed for the numerical simulation of pressure-sensitive material. The plasticity model includes a failure surface and an elliptical cap, which closes the open space between the failure surface and the hydrostatic axis. The moving cap expands in the stress space according to a specified hardening rule. The cap model is presented within the framework of large deformation RKPM analysis in order to predict the non-uniform relative density distribution during powder die pressing. Numerical computations are performed to demonstrate the applicability of the algorithm in modeling powder forming processes, and the results are compared to those obtained from finite element simulation to demonstrate the accuracy of the proposed model.
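    A minimal one-dimensional sketch of the shape-function construction described above (a generic RKPM illustration with a cubic-spline window and linear consistency, not the authors' large-deformation implementation):

    import numpy as np

    def cubic_spline_window(r):
        """Cubic B-spline weight on the normalized distance r = |x - x_I| / a."""
        r = np.abs(r)
        return np.where(r < 0.5, 2.0 / 3.0 - 4 * r**2 + 4 * r**3,
               np.where(r < 1.0, 4.0 / 3.0 - 4 * r + 4 * r**2 - 4.0 * r**3 / 3.0, 0.0))

    def rkpm_shape_functions(x, nodes, a):
        """Return phi_I(x) with linear consistency (reproduces constants and x)."""
        dx = x - nodes
        w = cubic_spline_window(dx / a)
        H = np.vstack([np.ones_like(dx), dx])          # linear basis H(x - x_I)
        M = (H * w) @ H.T                              # moment matrix sum_I w_I H_I H_I^T
        b = np.linalg.solve(M, np.array([1.0, 0.0]))   # impose the consistency conditions
        return (b @ H) * w

    nodes = np.linspace(0.0, 1.0, 11)
    phi = rkpm_shape_functions(0.37, nodes, a=0.25)
    print("partition of unity:", phi.sum())    # ~ 1.0
    print("linear reproduction:", phi @ nodes) # ~ 0.37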

  1. Modeling RF Fields in Hot Plasmas with Parallel Full Wave Code

    NASA Astrophysics Data System (ADS)

    Spencer, Andrew; Svidzinski, Vladimir; Zhao, Liangji; Galkin, Sergei; Kim, Jin-Soo

    2016-10-01

    FAR-TECH, Inc. is developing a suite of full wave RF plasma codes. It is based on a meshless formulation in configuration space with adaptive cloud of computational points (CCP) capability, and it uses the hot plasma conductivity kernel to model the nonlocal plasma dielectric response. The conductivity kernel is calculated by numerically integrating the linearized Vlasov equation along unperturbed particle trajectories. Work has been done on the following calculations: 1) the conductivity kernel in hot plasmas, 2) a monitor function based on analytic solutions of the cold-plasma dispersion relation, 3) an adaptive CCP based on the monitor function, 4) stencils to approximate the wave equations on the CCP, 5) the solution of the full wave equations in the cold-plasma model in tokamak geometry for the ECRH and ICRH ranges of frequencies, and 6) the solution of the wave equations using the calculated hot plasma conductivity kernel. We will present results on using a meshless formulation on an adaptive CCP to solve the wave equations and on implementing the non-local hot plasma dielectric response in the wave equations. The presentation will include numerical results of wave propagation and absorption in the cold and hot tokamak plasma RF models, using DIII-D geometry and plasma parameters. Work is supported by the U.S. DOE SBIR program.

  2. Transient and asymptotic behaviour of the binary breakage problem

    NASA Astrophysics Data System (ADS)

    Mantzaris, Nikos V.

    2005-06-01

    The general binary breakage problem with power-law breakage functions and two families of symmetric and asymmetric breakage kernels is studied in this work. A useful transformation leads to an equation that predicts self-similar solutions in its asymptotic limit and offers explicit knowledge of the mean size and particle density at each point in dimensionless time. A novel moving boundary algorithm in the transformed coordinate system is developed, allowing the accurate prediction of the full transient behaviour of the system from the initial condition up to the point where self-similarity is achieved, and beyond if necessary. The numerical algorithm is very rapid and its results are in excellent agreement with known analytical solutions. In the case of the symmetric breakage kernels only unimodal, self-similar number density functions are obtained asymptotically for all parameter values and independent of the initial conditions, while in the case of asymmetric breakage kernels, bimodality appears for high degrees of asymmetry and sharp breakage functions. For symmetric and discrete breakage kernels, self-similarity is not achieved. The solution exhibits sustained oscillations with amplitude that depends on the initial condition and the sharpness of the breakage mechanism, while the period is always fixed and equal to ln 2 with respect to dimensionless time.

  3. Kinetic study of Chromium VI adsorption onto palm kernel shell activated carbon

    NASA Astrophysics Data System (ADS)

    Mohammad, Masita; Sadeghi Louyeh, Shiva; Yaakob, Zahira

    2018-04-01

    Heavy metal contamination of industrial effluents is a significant environmental problem because of the metals' toxicity and their accumulation throughout the food chain. Adsorption is a promising method for removing heavy metals from aqueous solution because it is simple, efficient, reliable, and low-cost, particularly when residues from the agricultural industry are used as adsorbents. In this study, activated carbon was produced from palm kernel shells through a chemical activation process using zinc chloride as the activating agent, followed by carbonization at 800 °C. The palm kernel shell activated carbon (PAC) was assessed for its efficiency in removing Chromium (VI) ions from aqueous solutions through a batch adsorption process. The kinetic mechanisms were analysed using the Lagergren first-order kinetic model, the second-order kinetic model, and the intra-particle diffusion model. Characterizations such as BET surface area, surface morphology, and SEM-EDX were carried out. The results show that activation with ZnCl2 successfully improved the porosity and modified the functional groups of the palm kernel shell. The maximum adsorption capacity for Cr was 11.40 mg/g at an initial metal ion concentration of 30 ppm and an adsorbent dosage of 0.1 g/50 mL. The adsorption process followed the pseudo-second-order kinetic model.
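    For reference, a sketch of fitting the pseudo-second-order model reported above to uptake data; the data points here are synthetic placeholders, not the study's measurements:

    import numpy as np
    from scipy.optimize import curve_fit

    def pseudo_second_order(t, qe, k2):
        """q_t = k2*qe^2*t / (1 + k2*qe*t): uptake at time t for capacity qe, rate k2."""
        return k2 * qe**2 * t / (1.0 + k2 * qe * t)

    t = np.array([5.0, 10, 20, 40, 60, 90, 120])            # min (illustrative)
    q = np.array([4.1, 6.5, 8.6, 10.1, 10.7, 11.1, 11.3])   # mg/g (illustrative)
    (qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, q, p0=(11.0, 0.01))
    print(f"qe ~ {qe_fit:.2f} mg/g, k2 ~ {k2_fit:.4f} g/(mg*min)")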

  4. Brownian motion of a nano-colloidal particle: the role of the solvent.

    PubMed

    Torres-Carbajal, Alexis; Herrera-Velarde, Salvador; Castañeda-Priego, Ramón

    2015-07-15

    Brownian motion is a feature of colloidal particles immersed in a liquid-like environment. Usually, it can be described by means of the generalised Langevin equation (GLE) within the framework of the Mori theory. In principle, all quantities that appear in the GLE can be calculated from the molecular information of the whole system, i.e., colloids and solvent molecules. In this work, by means of extensive Molecular Dynamics simulations, we study the effects of the microscopic details and the thermodynamic state of the solvent on the movement of a single nano-colloid. In particular, we consider a two-dimensional model system in which the mass and size of the colloid are two and one orders of magnitude, respectively, larger than those associated with the solvent molecules. The latter interact via a Lennard-Jones-type potential that allows the nature of the solvent to be tuned, i.e., it can be either repulsive or attractive. We choose the linear momentum of the Brownian particle as the observable of interest in order to fully describe the Brownian motion within the Mori framework. We particularly focus on the colloid diffusion at different solvent densities and two temperature regimes: high and low (near the critical point) temperatures. To reach our goal, we have rewritten the GLE as a Volterra integral equation of the second kind in order to compute the memory kernel in real space. With this kernel, we evaluate the momentum-fluctuating force correlation function, which is of particular relevance since it allows us to establish when the stationarity condition has been reached. Our findings show that even at high temperatures, the details of the attractive interaction potential among solvent molecules induce important changes in the colloid dynamics. Additionally, near the critical point the dynamical scenario becomes more complex: all the correlation functions decay slowly in an extended time window, yet the memory kernel seems to be a function of the solvent density only. Thus, the explicit inclusion of the solvent in the description of Brownian motion allows us to better understand the behaviour of the memory kernel at thermodynamic states near the critical region without any further approximation. This information is useful for elaborating more realistic descriptions of Brownian motion that take into account the particular details of the host medium.
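    A simplified numerical sketch of the kernel-extraction step described above: the second-kind Volterra relation C'(t) = -∫₀ᵗ K(s) C(t-s) ds is discretized with the trapezoid rule and solved forward in time. This is an illustration only (not the authors' code), tested here on a cosine autocorrelation whose exact memory kernel is constant:

    import numpy as np

    def memory_kernel(C, dt):
        """Solve the discretized Volterra equation for K(t), given C(t) on a uniform grid."""
        dC = np.gradient(C, dt)
        K = np.zeros_like(C)
        K[0] = -2.0 * (C[1] - C[0]) / (dt**2 * C[0])   # ~ -C''(0)/C(0) for an even C(t)
        for i in range(1, len(C)):
            conv = 0.5 * K[0] * C[i] + np.dot(K[1:i], C[i-1:0:-1])
            K[i] = (-dC[i] / dt - conv) / (0.5 * C[0])
        return K

    dt = 0.01
    t = np.arange(0.0, 5.0, dt)
    K = memory_kernel(np.cos(t), dt)   # for C(t) = cos(t) the exact kernel is K = 1
    print(K[:5], K[-1])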

  5. Dynamic particle refinement in SPH: application to free surface flow and non-cohesive soil simulations

    NASA Astrophysics Data System (ADS)

    Reyes López, Yaidel; Roose, Dirk; Recarey Morfa, Carlos

    2013-05-01

    In this paper, we present a dynamic refinement algorithm for the smoothed particle hydrodynamics (SPH) method. An SPH particle is refined by replacing it with smaller daughter particles, whose positions are calculated using a square pattern centered at the position of the refined particle. We determine both the optimal separation and the smoothing distance of the new particles such that the error introduced by the refinement in the gradient of the kernel is small and possible numerical instabilities are reduced. We implemented the dynamic refinement procedure in two different models: one for free surface flows and one for the post-failure flow of non-cohesive soil. The results obtained for the test problems indicate that the dynamic refinement procedure provides a good trade-off between the accuracy and the cost of the simulations.
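    A bare-bones 2-D sketch of the splitting step described above; the separation and smoothing-length factors used here are placeholders, not the optimized values derived in the paper:

    import numpy as np

    def refine_particle(pos, mass, h, eps=0.4, alpha=0.6):
        """Replace one SPH particle by four daughters on a square pattern around it."""
        offsets = eps * h * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]]) / np.sqrt(2.0)
        daughter_pos = pos + offsets
        daughter_mass = np.full(4, mass / 4.0)   # exact mass conservation
        daughter_h = np.full(4, alpha * h)       # reduced smoothing length
        return daughter_pos, daughter_mass, daughter_h

    dpos, dm, dh = refine_particle(np.array([0.0, 0.0]), mass=1.0, h=0.1)
    print(dpos, dm.sum(), dh[0])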

  6. End-use quality of CIMMYT-derived soft kernel durum wheat germplasm. II. Dough strength and pan bread quality

    USDA-ARS?s Scientific Manuscript database

    Durum wheat (Triticum turgidum ssp. durum) is considered unsuitable for the majority of commercial bread production because its weak gluten strength combined with flour particle size and flour starch damage after milling are not commensurate with hexaploid wheat flours. Recently a new durum cultivar...

  7. Full Wave Parallel Code for Modeling RF Fields in Hot Plasmas

    NASA Astrophysics Data System (ADS)

    Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo

    2015-11-01

    FAR-TECH, Inc. is developing a suite of full wave RF codes for hot plasmas. It is based on a formulation in configuration space with grid adaptation capability. The conductivity kernel (which includes the nonlocal dielectric response) is calculated by integrating the linearized Vlasov equation along unperturbed test particle orbits. For tokamak applications a 2-D version of the code is being developed. Progress of this work will be reported. This suite of codes has the following advantages over existing spectral codes: 1) It utilizes the localized nature of the plasma dielectric response to the RF field and calculates this response numerically without approximations. 2) It uses an adaptive grid to better resolve resonances in the plasma and antenna structures. 3) It uses an efficient sparse matrix solver to solve the formulated linear equations. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel is calculated. Work is supported by the U.S. DOE SBIR program.

  8. A fast object-oriented Matlab implementation of the Reproducing Kernel Particle Method

    NASA Astrophysics Data System (ADS)

    Barbieri, Ettore; Meo, Michele

    2012-05-01

    Novel numerical methods, known as Meshless Methods or Meshfree Methods and, in a wider perspective, Partition of Unity Methods, promise to overcome most of the disadvantages of traditional finite element techniques. The absence of a mesh makes meshfree methods very attractive for problems involving large deformations, moving boundaries, and crack propagation. However, meshfree methods still have significant limitations that prevent their acceptance among researchers and engineers, namely their computational cost. This paper presents an in-depth analysis of computational techniques to speed up the computation of the shape functions in the Reproducing Kernel Particle Method and Moving Least Squares, with particular focus on their bottlenecks: the neighbour search, the inversion of the moment matrix, and the assembly of the stiffness matrix. The paper presents numerous computational solutions aimed at a considerable reduction of the computational times: the use of kd-trees for the neighbour search, sparse indexing of the nodes-points connectivity and, most importantly, the explicit and vectorized inversion of the moment matrix without using loops and numerical routines.
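    One of the accelerations discussed above, sketched with a kd-tree neighbour search (a generic Python/SciPy illustration rather than the paper's vectorized Matlab code; the point counts and support radius are arbitrary):

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    nodes = rng.random((5000, 2))        # meshfree particle coordinates
    points = rng.random((2000, 2))       # evaluation (quadrature) points
    support = 0.05                       # kernel support radius

    tree = cKDTree(nodes)                                    # build once
    neighbours = tree.query_ball_point(points, r=support)    # list of index lists
    print("mean neighbours per point:", np.mean([len(n) for n in neighbours]))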

  9. Utilization of expeller pressed partially defatted peanut cake meal in the preparation of bakery products.

    PubMed

    Chavan, J K; Shinde, V S; Kadam, S S

    1991-07-01

    Expeller pressed partially defatted peanut cake obtained from skin-free kernels was used as graded supplements in the preparation of breads, sweet buns, cupcakes and yeast-raised doughnuts. Incorporation of cake meal lowered the specific volume and sensory properties, but improved the fresh weight, water holding capacity and protein content of the products. The products containing 10% peanut cake meal were found to be acceptable.

  10. Apricot DNA as an indicator for persipan: detection and quantitation in marzipan using ligation-dependent probe amplification.

    PubMed

    Luber, Florian; Demmel, Anja; Hosken, Anne; Busch, Ulrich; Engel, Karl-Heinz

    2012-06-13

    The confectionery ingredient marzipan is exclusively prepared from almond kernels and sugar. The potential use of apricot kernels, so-called persipan, is an important issue for the quality assessment of marzipan. Therefore, a ligation-dependent probe amplification (LPA) assay was developed that enables a specific and sensitive detection of apricot DNA as an indicator for the presence of persipan. The limit of detection was determined to be 0.1% persipan in marzipan. The suitability of the method was confirmed by the analysis of 20 commercially available food samples. The integration of a Prunus-specific probe in the LPA assay as a reference allowed for the relative quantitation of persipan in marzipan. The limit of quantitation was determined to be 0.5% persipan in marzipan. The analysis of two self-prepared mixtures of marzipan and persipan demonstrated the applicability of the quantitation method at concentration levels of practical relevance for quality control.

  11. Data consistency-driven scatter kernel optimization for x-ray cone-beam CT

    NASA Astrophysics Data System (ADS)

    Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong

    2015-08-01

    Accurate and efficient scatter correction is essential for the acquisition of high-quality x-ray cone-beam CT (CBCT) images for various applications. This study was conducted to demonstrate the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in scatter deconvolution methods in CBCT. Because data consistency in the mid-plane of CBCT is primarily challenged by scatter, we utilized data consistency to assess the degree of scatter correction and to steer the updates in an iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters, using a particle swarm optimization algorithm for its computational efficiency and excellent convergence. The proposed method was validated by a simulation study using the XCAT numerical phantom and also by experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction. Quantitative assessments of image quality parameters such as contrast and structural similarity (SSIM) revealed that the optimally selected scatter kernel improves the contrast of scatter-free images by up to 99.5%, 94.4%, and 84.4%, and the SSIM by up to 96.7%, 90.5%, and 87.8%, in the XCAT study, the ACS head phantom study, and the pelvis phantom study, respectively. The proposed method can achieve accurate and efficient scatter correction from a single cone-beam scan without the need for any auxiliary hardware or additional experimentation.
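    A toy version of the optimization loop described above: a small particle swarm searching over two scatter-kernel parameters, driven here by a stand-in cost with an arbitrary minimum. In the paper the cost is built from the parallel-beam data consistency condition after fan-parallel rebinning; the cost function and parameter names below are hypothetical:

    import numpy as np

    def inconsistency_cost(params):
        """Placeholder cost; minimum at amplitude = 0.3, width = 25 (arbitrary)."""
        amplitude, width = params
        return (amplitude - 0.3) ** 2 + ((width - 25.0) / 50.0) ** 2

    def particle_swarm(cost, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5):
        rng = np.random.default_rng(1)
        lo, hi = np.array(bounds).T
        x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
        v = np.zeros_like(x)
        pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
        gbest = pbest[pbest_cost.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            costs = np.array([cost(p) for p in x])
            better = costs < pbest_cost
            pbest[better], pbest_cost[better] = x[better], costs[better]
            gbest = pbest[pbest_cost.argmin()].copy()
        return gbest

    print("optimized kernel parameters:",
          particle_swarm(inconsistency_cost, bounds=[(0.0, 1.0), (1.0, 100.0)]))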

  12. SPHYNX: an accurate density-based SPH method for astrophysical applications

    NASA Astrophysics Data System (ADS)

    Cabezón, R. M.; García-Senz, D.; Figueira, J.

    2017-10-01

    Aims: Hydrodynamical instabilities and shocks are ubiquitous in astrophysical scenarios. Therefore, an accurate numerical simulation of these phenomena is mandatory to correctly model and understand many astrophysical events, such as supernovae, stellar collisions, or planetary formation. In this work, we attempt to address many of the problems that a commonly used technique, smoothed particle hydrodynamics (SPH), has when dealing with subsonic hydrodynamical instabilities or shocks. To that aim we built a new SPH code named SPHYNX, which includes many of the recent advances in the SPH technique and some new ones, which we present here. Methods: SPHYNX is of Newtonian type and grounded in the Euler-Lagrange formulation of the smoothed-particle hydrodynamics technique. Its distinctive features are: the use of an integral approach to estimating the gradients; the use of a flexible family of interpolators called sinc kernels, which suppress pairing instability; and the incorporation of a new type of volume element which provides a better partition of unity. Unlike other modern formulations, which consider volume elements linked to pressure, our volume element choice relies on density. SPHYNX is, therefore, a density-based SPH code. Results: A novel computational hydrodynamic code oriented to astrophysical applications is described, discussed, and validated in the following pages. The code conserves mass, linear and angular momentum, energy, and entropy, and preserves kernel normalization even in strong shocks. In our proposal, the estimation of gradients is enhanced using an integral approach. Additionally, we introduce a new family of volume elements which reduce the so-called tensile instability. Both features help to suppress the damping which often prevents the growth of hydrodynamic instabilities in regular SPH codes. Conclusions: On the whole, SPHYNX has passed the verification tests described below. For identical particle settings and initial conditions, the results were similar to (or better in some particular cases than) those obtained with other SPH schemes such as GADGET-2, PSPH, or the recent density-independent formulation (DISPH) and conservative reproducing kernel (CRKSPH) techniques.
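    For concreteness, a sketch of the sinc kernel family mentioned above, W_n(q) ∝ sinc(πq/2)^n on compact support q = r/h in [0, 2], where the exponent n controls the kernel shape; the normalization constant is omitted here and the exact conventions used in SPHYNX may differ:

    import numpy as np

    def sinc_kernel(q, n):
        """Unnormalized sinc^n kernel; np.sinc(x) = sin(pi*x)/(pi*x)."""
        q = np.asarray(q, dtype=float)
        return np.where(q < 2.0, np.sinc(q / 2.0) ** n, 0.0)

    q = np.linspace(0.0, 2.0, 5)
    for n in (3, 5, 7):
        print(f"n = {n}:", np.round(sinc_kernel(q, n), 4))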

  13. Practicable group testing method to evaluate weight/weight GMO content in maize grains.

    PubMed

    Mano, Junichi; Yanaka, Yuka; Ikezu, Yoko; Onishi, Mari; Futo, Satoshi; Minegishi, Yasutaka; Ninomiya, Kenji; Yotsuyanagi, Yuichi; Spiegelhalter, Frank; Akiyama, Hiroshi; Teshima, Reiko; Hino, Akihiro; Naito, Shigehiro; Koiwa, Tomohiro; Takabatake, Reona; Furui, Satoshi; Kitta, Kazumi

    2011-07-13

    Because of the increasing use of maize hybrids with genetically modified (GM) stacked events, the established and commonly used bulk sample methods for PCR quantification of GM maize in non-GM maize are prone to overestimate the GM organism (GMO) content, compared to the actual weight/weight percentage of GM maize in the grain sample. As an alternative method, we designed and assessed a group testing strategy in which the GMO content is statistically evaluated based on qualitative analyses of multiple small pools, consisting of 20 maize kernels each. This approach enables the GMO content evaluation on a weight/weight basis, irrespective of the presence of stacked-event kernels. To enhance the method's user-friendliness in routine application, we devised an easy-to-use PCR-based qualitative analytical method comprising a sample preparation step in which 20 maize kernels are ground in a lysis buffer and a subsequent PCR assay in which the lysate is directly used as a DNA template. This method was validated in a multilaboratory collaborative trial.
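    The group-testing arithmetic behind the method above, sketched with the standard estimator of the per-kernel GM fraction from qualitative pool results. The pool counts below are hypothetical, and interpreting the estimate on a weight/weight basis additionally assumes roughly uniform kernel weight:

    def gm_fraction_from_pools(positive_pools, total_pools, kernels_per_pool=20):
        """MLE of the per-kernel GM prevalence from qualitative results on equal pools."""
        negative_rate = 1.0 - positive_pools / total_pools
        return 1.0 - negative_rate ** (1.0 / kernels_per_pool)

    p = gm_fraction_from_pools(positive_pools=6, total_pools=24)
    print(f"estimated GM kernel fraction ~ {100 * p:.2f} %")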

  14. Nonrelativistic trace and diffeomorphism anomalies in particle number background

    NASA Astrophysics Data System (ADS)

    Auzzi, Roberto; Baiguera, Stefano; Nardelli, Giuseppe

    2018-04-01

    Using the heat kernel method, we compute nonrelativistic trace anomalies for Schrödinger theories in flat spacetime, with a generic background gauge field for the particle number symmetry, both for a free scalar and a free fermion. The result is genuinely nonrelativistic, and it has no counterpart in the relativistic case. Contrary to naive expectations, the anomaly is not gauge invariant; this is similar to the nongauge covariance of the non-Abelian relativistic anomaly. We also show that, in the same background, the gravitational anomaly for a nonrelativistic scalar vanishes.

  15. Mathematical inference in one point microrheology

    NASA Astrophysics Data System (ADS)

    Hohenegger, Christel; McKinley, Scott

    2016-11-01

    Pioneered by the work of Mason and Weitz, one-point passive microrheology has been successfully applied to obtaining estimates of the loss and storage moduli of viscoelastic fluids when the mean-square displacement obeys a local power law. Using numerical simulations of a fluctuating viscoelastic fluid model, we study the problem of recovering the mechanical parameters of the fluid's memory kernel using statistics such as mean-square displacements and increment auto-correlation functions. Seeking a better understanding of the influence of the assumptions made in the inversion process, we mathematically quantify the uncertainty in traditional one-point microrheology for simulated data and demonstrate that a large family of memory kernels yields the same statistical signature. We consider simulated data obtained both from a full viscoelastic fluid simulation of the unsteady Stokes equations with fluctuations and from a generalized Langevin equation for the particle's motion described by the same memory kernel. From the theory of inverse problems, we propose an alternative method that can be used to recover information about the loss and storage moduli and discuss its limitations and uncertainties. NSF-DMS 1412998.

  16. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.

    Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
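    A deliberately simplified 1-D illustration of the idea described above: each collision contributes a smooth kernel score to all nearby tally points rather than a count in one histogram bin. This sketch uses a plain Gaussian kernel with a fixed bandwidth and uniformly sampled collision sites, not the mean-free-path-based kernel of the paper:

    import numpy as np

    def kde_score(tally_x, collision_x, weight, bandwidth):
        """Gaussian-kernel contribution of one collision to every tally point."""
        u = (tally_x - collision_x) / bandwidth
        return weight * np.exp(-0.5 * u**2) / (bandwidth * np.sqrt(2.0 * np.pi))

    tally_x = np.linspace(0.0, 10.0, 101)
    score = np.zeros_like(tally_x)
    rng = np.random.default_rng(2)
    n_collisions = 1000
    for _ in range(n_collisions):                 # fake collision sites
        score += kde_score(tally_x, rng.uniform(0.0, 10.0), weight=1.0, bandwidth=0.5)
    dx = tally_x[1] - tally_x[0]
    print("integrated score per collision ~", score.sum() * dx / n_collisions)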

  17. Performance modeling of Deep Burn TRISO fuel using ZrC as a load-bearing layer and an oxygen getter

    NASA Astrophysics Data System (ADS)

    Wongsawaeng, Doonyapong

    2010-01-01

    The effects of design choices for the TRISO particle fuel were explored in order to determine their contribution to attaining high burnup in Deep Burn modular helium reactor fuels containing transuranics from light water reactor spent fuel. The new design features were: (1) ZrC coating substituted for the SiC, allowing the fuel to survive higher accident temperatures; (2) pyrocarbon/SiC "alloy" substituted for the inner pyrocarbon coating to reduce layer failure; and (3) pyrocarbon seal coat and thin ZrC oxygen getter coating on the kernel to eliminate CO. Fuel performance was evaluated using General Atomics Company's PISA code. The only acceptable design has a 200-μm kernel diameter coupled with an at least 150-μm-thick, 50%-porosity buffer, a 15-μm ZrC getter over a 10-μm pyrocarbon seal coat on the kernel, an alloy inner pyrocarbon, and ZrC substituted for SiC. The code predicted that during a 1600 °C postulated accident at 70% FIMA, the ZrC failure probability is <10⁻⁴.

  18. Dynamic least-squares kernel density modeling of Fokker-Planck equations with application to neural population.

    PubMed

    Shotorban, Babak

    2010-04-01

    The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.

  19. Properties of Particle Size Distribution from Milled White Nixtamalized Corn Kernels as a Function of Steeping Time

    PubMed Central

    Fernández-Muñoz, J. L.; Zapata-Torrez, M.; Márquez-Herrera, A.; Sánchez-Sinencio, F.; Mendoza-Álvarez, J. G.; Meléndez-Lira, M.; Zelaya-Ángel, O.

    2016-01-01

    This paper focuses on changes in the particle size distribution (PSD) of nixtamalized corn kernels (NCK) as a function of the steeping time (ST). The process to obtain powder or corn flour from NCK was as follows: (i) the NCK with different STs were wet-milled in a stone mill, (ii) dehydrated by a Flash-type dryer, and (iii) pulverized with a hammer mill and sieved with a 20 mesh. The powder was characterized by measuring the PSD percentage, calcium percentage (CP), peak viscosity at 90 °C (PV), and crystallinity percentage. The PSD of the powder as a function of ST was determined by sieving in Ro-TAP equipment. By sieving, five fractions of powder were obtained employing meshes 30, 40, 60, 80, and 100. The final weight of the PSD obtained from the sieving process follows a Gaussian profile, with the maximum corresponding to the average particle size obtained with mesh 60. The calcium percentage as a function of ST follows a behavior similar to the weight of the PSD. The study of crystallinity versus mesh number shows that crystallinity decreases for smaller mesh numbers. A similar behavior is observed as steeping time increases, except around ST = 8 h, where gelatinization of starch is observed. Viscosity values of the powder samples increase with increasing ST and decreasing particle size. The ST significantly changes the crystallinity and viscosity values of the powder and, in both cases, a minimum value is observed in the region 7–9 h. The experimental results show that the viscosity increases (decreases) as the particle size decreases (increases). PMID:27375921

  20. Urea adsorption by activated carbon prepared from palm kernel shell

    NASA Astrophysics Data System (ADS)

    Ooi, Chee-Heong; Sim, Yoke-Leng; Yeoh, Fei-Yee

    2017-07-01

    Dialysis treatment is crucial for patients suffering from renal failure. The dialysis system removes uremic toxins to a safe level in the patient's body. One of the major limitations of current hemodialysis systems is their limited capability to efficiently remove uremic toxins from the patient's body. Nanoporous materials can be applied to improve the treatment. Palm kernel shell (PKS) biomass generated from palm oil mills can be utilized to prepare high-quality nanoporous activated carbon (AC) for urea adsorption in the dialysis system. In this study, AC was prepared from PKS via different carbonization temperatures followed by carbon dioxide gas activation. The physical and chemical properties of the samples were studied. The results show that porous AC with BET surface areas ranging from 541 to 622 m² g⁻¹ and total pore volumes from 0.254 to 0.297 cm³ g⁻¹ was formed at the different carbonization temperatures. The equilibrium constants for urea adsorption by the AC samples carbonized at 400, 500 and 600 °C are 0.091, 0.287 and 0.334, respectively. Increasing the carbonization temperature from 400 to 600 °C increased urea adsorption by the AC, predominantly due to the increase in surface area. The present study reveals the feasibility of preparing AC with good porosity from PKS and its potential application in urea adsorption.

  1. Fission product palladium-silicon carbide interaction in htgr fuel particles

    NASA Astrophysics Data System (ADS)

    Minato, Kazuo; Ogawa, Toru; Kashimura, Satoru; Fukuda, Kousaku; Shimizu, Michio; Tayama, Yoshinobu; Takahashi, Ishio

    1990-07-01

    Interaction of fission product palladium (Pd) with the silicon carbide (SiC) layer was observed in irradiated Triso-coated uranium dioxide particles for high temperature gas-cooled reactors (HTGR) with an optical microscope and electron probe microanalyzers. The SiC layers were attacked locally or the reaction product formed nodules at the attack site. Although the main element concerned with the reaction was palladium, rhodium and ruthenium were also detected at the corroded areas in some particles. Palladium was detected on both the hot and cold sides of the particles, but the corroded areas and the palladium accumulations were distributed particularly on the cold side of the particles. The observed Pd-SiC reaction depths were analyzed on the assumption that the release of palladium from the fuel kernel controls the whole Pd-SiC reaction.

  2. Immunochemical characterization of alkaline-soluble polysaccharide, P-1, from the kernels of Prunus mume Sieb. et Zucc.

    PubMed

    Dogasaki, C; Nishijima, M; Ohno, N; Yadomae, T; Miyazaki, T

    1996-07-01

    Polyclonal antibodies against P-1, a pectic polysaccharide fraction extracted with 0.5 M NaOH from the kernels of Prunus mume and consisting of arabino-galacturonan, and against I-3, the partial acid (0.1 M trifluoroacetic acid) hydrolysate of P-1, were prepared in Japanese white rabbits. Competitive ELISA experiments strongly suggested that the anti-P-1 and anti-I-3 antibodies were different, but that P-1 and I-3 cross-reacted with each other, recognizing a partly similar epitope structure. The reactivities of polysaccharide fractions extracted with either water or sodium hydroxide from the raw flesh of P. mume and the kernels of apricot and peach were examined with both antisera by the indirect competitive ELISA method. The polysaccharide fractions extracted with sodium hydroxide solutions were reactive, whereas those extracted with cold or hot water were not. These facts suggest that polysaccharides with a structure similar to P-1 are present in the flesh of P. mume and in the kernels of apricot and peach. However, neither apple nor citrus pectin reacted with either antiserum. P-1 thus appears to differ in chemical structure from commercially available pectin, the water-soluble polysaccharide from apple and citrus.

  3. Nutritional composition of shea products and chemical properties of shea butter: a review.

    PubMed

    Honfo, Fernande G; Akissoe, Noel; Linnemann, Anita R; Soumanou, Mohamed; Van Boekel, Martinus A J S

    2014-01-01

    Increasing demand for shea products (kernels and butter) has led to an assessment of the state of the art of these products. In this review, attention is focused on the macronutrients and micronutrients of the pulp, kernels, and butter of the shea tree, and also on the physicochemical properties of shea butter. Surveying the literature revealed that the pulp is rich in vitamin C (196.1 mg/100 g); consumption of 50 g covers 332% and 98% of the recommended daily intake (RDI) of children (4-8 years old) and pregnant women, respectively. The kernels contain a high level of fat (17.4-59.1 g/100 g dry weight). Fat extraction is mainly done by traditional methods that involve roasting and pressing of the kernels, churning the obtained liquid with water, boiling, sieving, and cooling. The fat (butter) is used in food preparation and in the medicinal and cosmetic industries. Its biochemical properties indicate some antioxidant and anti-inflammatory activities. Large variations are observed in the reported values for the composition of shea products. Recommendations for future research are presented to improve the quality and the shelf-life of the butter. In addition, more attention should be given to accuracy and precision in experimental analyses to obtain more reliable information about biological variation.

  4. Fermentation profile and identification of lactic acid bacteria and yeasts of rehydrated corn kernel silage.

    PubMed

    Carvalho, B F; Ávila, C L S; Bernardes, T F; Pereira, M N; Santos, C; Schwan, R F

    2017-03-01

    The aim of this study was to evaluate the chemical and microbiological characteristics and to identify the lactic acid bacteria (LAB) and yeasts involved in rehydrated corn kernel silage. Four replicates were prepared for each fermentation time: 5, 15, 30, 60, 90, 150, 210 and 280 days. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry and PCR-based identification were used to identify LAB and yeasts. Eighteen bacterial and four yeast species were identified. The bacterial population reached maximum growth after 15 days, and moulds were detected up to this time. The highest dry matter (DM) loss was 7.6% after 280 days. The low concentration of water-soluble carbohydrates (20 g kg⁻¹ of DM) was not limiting for fermentation, although the reduction in pH and acid production occurred slowly. Storage of the rehydrated corn kernel silage increased digestibility up to day 280. This silage was dominated by LAB but showed a slow decrease in pH values. This technique of corn storage on farms increased the DM digestibility. This study was the first to evaluate the fermentation dynamics of rehydrated corn kernel silage, and our findings are relevant to the optimization of this silage fermentation. © 2016 The Society for Applied Microbiology.

  5. Generalized and efficient algorithm for computing multipole energies and gradients based on Cartesian tensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Dejun, E-mail: dejun.lin@gmail.com

    2015-09-21

    Accurate representation of intermolecular forces has been the central task of classical atomic simulations, known as molecular mechanics. Recent advancements in molecular mechanics models have put forward the explicit representation of permanent and/or induced electric multipole (EMP) moments. The formulas developed so far to calculate EMP interactions tend to have complicated expressions, especially in Cartesian coordinates, which can only be applied to a specific kernel potential function. For example, one needs to develop a new formula each time a new kernel function is encountered. The complication of these formalisms arises from an intriguing and yet obscured mathematical relation between the kernel functions and the gradient operators. Here, I uncover this relation via rigorous derivation and find that the formula to calculate EMP interactions is basically invariant to the potential kernel functions as long as they are of the form f(r), i.e., any Green's function that depends on inter-particle distance. I provide an algorithm for efficient evaluation of EMP interaction energies, forces, and torques for any kernel f(r) up to any arbitrary rank of EMP moments in Cartesian coordinates. The working equations of this algorithm are essentially the same for any kernel f(r). Recently, a few recursive algorithms were proposed to calculate EMP interactions. Depending on the kernel functions, the algorithm here is about 4–16 times faster than these algorithms in terms of the required number of floating point operations and is much more memory efficient. I show that it is even faster than a theoretically ideal recursion scheme, i.e., one that requires 1 floating point multiplication and 1 addition per recursion step. This algorithm has a compact vector-based expression that is optimal for computer programming. The Cartesian nature of this algorithm makes it fit easily into modern molecular simulation packages as compared with spherical coordinate-based algorithms. A software library based on this algorithm has been implemented in C++11 and has been released.

  6. AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics

    NASA Astrophysics Data System (ADS)

    Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.

    2017-05-01

    We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs on the disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving hybrid model equations are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an already existing well-tested hybrid model of plasma that runs in parallel using multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for computation of Faraday’s Equation, resulting in an explicit-implicit scheme for the hybrid model equation. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.

  7. SU-E-T-510: Calculation of High Resolution and Material-Specific Photon Energy Deposition Kernels.

    PubMed

    Huang, J; Childress, N; Kry, S

    2012-06-01

    To calculate photon energy deposition kernels (EDKs) used for convolution/superposition dose calculation at a higher resolution than the original Mackie et al. 1988 kernels and to calculate material-specific kernels that describe how energy is transported and deposited by secondary particles when the incident photon interacts in a material other than water. The high resolution EDKs for various incident photon energies were generated using the EGSnrc user-code EDKnrc, which forces incident photons to interact at the center of a 60 cm radius sphere of water. The simulation geometry is essentially the same as the original Mackie calculation but with a greater number of scoring voxels (48 radial, 144 angular bins). For the material-specific EDKs, incident photons were forced to interact at the center of a 1 mm radius sphere of material (lung, cortical bone, silver, or titanium) surrounded by a 60 cm radius water sphere, using the original scoring voxel geometry implemented by Mackie et al. 1988 (24 radial, 48 angular bins). Our Monte Carlo-calculated high resolution EDKs showed excellent agreement with the Mackie kernels, with our kernels providing more information about energy deposition close to the interaction site. Furthermore, our EDKs resulted in smoother dose deposition functions due to the finer resolution and greater number of simulation histories. The material-specific EDK results show that the angular distribution of energy deposition is different for incident photons interacting in different materials. Calculated from the angular dose distribution for 300 keV incident photons, the expected polar angle for dose deposition is 28.6° for water, 33.3° for lung, 36.0° for cortical bone, 44.6° for titanium, and 58.1° for silver, showing a dependence on the material in which the primary photon interacts. These high resolution and material-specific EDKs have implications for convolution/superposition dose calculations in heterogeneous patient geometries, especially at material interfaces. © 2012 American Association of Physicists in Medicine.

  8. Implementation of radiation shielding calculation methods. Volume 2: Seminar/Workshop notes

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.

    1971-01-01

    Detailed descriptions are presented of the input data for each of the MSFC computer codes applied to the analysis of a realistic nuclear propelled vehicle. The analytical techniques employed include cross section data preparation, one- and two-dimensional discrete ordinates transport, point kernel, and single scatter methods.

  9. Trichothecene-Genotypes Play a Role in Fusarium Head Blight Disease Spread and Trichothecene Accumulation in Wheat

    USDA-ARS?s Scientific Manuscript database

    In the current study, we evaluated the impact of the observed North American evolutionary shift in the Fusarium graminearum complex on disease spread, kernel damage, and trichothecene accumulation in resistant and susceptible wheat genotypes. Four inocula were prepared using composites of F. gramin...

  10. PST-Gold nanoparticle as an effective anticancer agent with immunomodulatory properties.

    PubMed

    Joseph, Manu M; Aravind, S R; Varghese, Sheeja; Mini, S; Sreelekha, T T

    2013-04-01

    Polysaccharide PST001, which is isolated from the seed kernels of Tamarindus indica (Ti), is an antitumor and immunomodulatory compound. Gold nanoparticles have been used for various applications in cancer. In the present report, a novel strategy for the synthesis and stabilization of gold nanoparticles using anticancer polysaccharide PST001 was employed and the nanoparticles' antitumor activity was evaluated. PST-Gold nanoparticles were prepared such that PST001 acted both as a reducing agent and as a capping agent. PST-Gold nanoparticles showed high stability, no obvious aggregation for months and a wide range of pH tolerance. PST-Gold nanoparticles not only retained the antitumor effect of PST001 but also showed an enhanced effect even at a low concentration. It was also found that the nanoparticles exerted their antitumor effects through the induction of apoptosis. In vivo assays on BALB/c mice revealed that PST-Gold nanoparticles exhibited immunomodulatory effects. Evaluation of biochemical, hematological and histopathological features of mice revealed that PST-Gold nanoparticles could be administered safely without toxicity. Using the polysaccharide PST001 for the reduction and stabilization of gold nanoparticles does not introduce any environmental toxicity or biological hazards, and these particles are more effective than the parent polysaccharide. Further studies should be employed to exploit these particles as anticancer agents with imaging properties. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Combining Lactic Acid Spray with Near-Infrared Radiation Heating To Inactivate Salmonella enterica Serovar Enteritidis on Almond and Pine Nut Kernels

    PubMed Central

    Ha, Jae-Won

    2015-01-01

    The aim of this study was to investigate the efficacy of near-infrared radiation (NIR) heating combined with lactic acid (LA) sprays for inactivating Salmonella enterica serovar Enteritidis on almond and pine nut kernels and to elucidate the mechanisms of the lethal effect of the NIR-LA combined treatment. Also, the effect of the combination treatment on product quality was determined. Separately prepared S. Enteritidis phage type (PT) 30 and non-PT 30 S. Enteritidis cocktails were inoculated onto almond and pine nut kernels, respectively, followed by treatments with NIR or 2% LA spray alone, NIR with distilled water spray (NIR-DW), and NIR with 2% LA spray (NIR-LA). Although surface temperatures of nuts treated with NIR were higher than those subjected to NIR-DW or NIR-LA treatment, more S. Enteritidis survived after NIR treatment alone. The effectiveness of NIR-DW and NIR-LA was similar, but significantly more sublethally injured cells were recovered from NIR-DW-treated samples. We confirmed that the enhanced bactericidal effect of the NIR-LA combination may not be attributable to cell membrane damage per se. NIR heat treatment might allow S. Enteritidis cells to become permeable to applied LA solution. The NIR-LA treatment (5 min) did not significantly (P > 0.05) cause changes in the lipid peroxidation parameters, total phenolic contents, color values, moisture contents, and sensory attributes of nut kernels. Given the results of the present study, NIR-LA treatment may be a potential intervention for controlling food-borne pathogens on nut kernel products. PMID:25911473

  12. Evaluation of the influence of double and triple Gaussian proton kernel models on accuracy of dose calculations for spot scanning technique.

    PubMed

    Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki

    2016-03-01

    The main purpose of this study was to present the results of beam modeling and to show how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for the spot scanning technique. The accuracy of the calculations is important for treatment planning software (TPS) because the energy, spot position, and absolute dose have to be determined by the TPS for the spot scanning technique. The dose distribution was calculated by convolving the in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam and consisted of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region is important for the spot scanning technique because the dose distribution is formed by accumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the lateral kernel model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table. These stored parameters for each energy and depth in water were acquired from the look-up table when incorporated into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature. These were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations. The authors investigated the difference between the double and triple Gaussian kernel models. They found that the difference between the two studied kernel models appeared at mid-depths and that the accuracy of the double Gaussian model deteriorated at the low-dose bump that appears at mid-depths. When the authors employed the double Gaussian kernel model, the accuracy of the calculated absolute dose at the center of the SOBP varied with the irradiation conditions, and the maximum difference was 3.4%. In contrast, the results obtained from calculations with the triple Gaussian kernel model indicated good agreement with the measurements within ±1.1%, regardless of the irradiation conditions. The difference between the results obtained with the two types of studied kernel models was distinct in the high-energy region. The accuracy of calculations with the double Gaussian kernel model varied with the field size and SOBP width because the prediction accuracy of the double Gaussian model was insufficient at the low-dose bump. The evaluation was only qualitative under limited volumetric irradiation conditions. Further accumulation of measured data would be needed to quantitatively understand the influence of the double and triple Gaussian kernel models on the accuracy of dose calculations.
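    To illustrate the comparison above, a sketch that fits a synthetic lateral profile containing a broad low-dose halo with a double and a triple Gaussian; all profile parameters are invented for illustration and are not beam-model data:

    import numpy as np
    from scipy.optimize import curve_fit

    def n_gaussians(r, *params):
        """Sum of radial Gaussians; params = (w1, s1, w2, s2, ...)."""
        out = np.zeros_like(r)
        for w, s in zip(params[0::2], params[1::2]):
            out += w * np.exp(-0.5 * (r / s) ** 2)
        return out

    r = np.linspace(0.0, 60.0, 200)                                # mm
    profile = n_gaussians(r, 1.0, 5.0, 0.02, 15.0, 0.002, 40.0)    # core + halo + tail
    for n in (2, 3):
        p0 = [1.0, 5.0, 0.01, 20.0, 0.001, 45.0][: 2 * n]
        popt, _ = curve_fit(n_gaussians, r, profile, p0=p0, maxfev=20000)
        rms = np.sqrt(np.mean((n_gaussians(r, *popt) - profile) ** 2))
        print(f"{n}-Gaussian fit, RMS error: {rms:.2e}")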

  13. Evaluation of design parameters for TRISO-coated fuel particles to establish manufacturing critical limits using PARFUME

    DOE PAGES

    Skerjanc, William F.; Maki, John T.; Collin, Blaise P.; ...

    2015-12-02

    The success of modular high temperature gas-cooled reactors is highly dependent on the performance of the tristructural isotropic (TRISO) coated fuel particle and the quality to which it can be manufactured. During irradiation, TRISO-coated fuel particles act as a pressure vessel to contain fission gas and mitigate the diffusion of fission products to the coolant boundary. The fuel specifications place limits on key attributes to minimize fuel particle failure under irradiation and postulated accident conditions. PARFUME (an integrated mechanistic coated particle fuel performance code developed at the Idaho National Laboratory) was used to calculate fuel particle failure probabilities. By systematically varying key TRISO-coated particle attributes, failure probability functions were developed to understand how each attribute contributes to fuel particle failure. Critical manufacturing limits were calculated for the key attributes of a low enriched TRISO-coated nuclear fuel particle with a kernel diameter of 425 μm. As a result, these critical manufacturing limits identify ranges beyond which an increase in fuel particle failure probability is expected to occur.

  14. A Primer on Vibrational Ball Bearing Feature Generation for Prognostics and Diagnostics Algorithms

    DTIC Science & Technology

    2015-03-01

    Topics covered include time-frequency feature methods such as the Atlas-Marks (cone-shaped kernel) distribution and the Hilbert-Huang transform, and bearing fault modes such as spalling (also known as pitting or flaking), in which damage progresses to the bearing surface where the material separates, and wear, the normal degradation caused by dirt and foreign particles abrading the contact surfaces over time, resulting in alterations of the raceway.

  15. Retrieval of the aerosol size distribution in the complex anomalous diffraction approximation

    NASA Astrophysics Data System (ADS)

    Franssens, Ghislain R.

    This contribution reports some recently achieved results in aerosol size distribution retrieval in the complex anomalous diffraction approximation (ADA) to Mie scattering theory. This approximation is valid for spherical particles that are large compared to the wavelength and have a refractive index close to 1. The ADA kernel is compared with the exact Mie kernel. Despite being a simple approximation, the ADA appears to have practical value for the retrieval of the larger modes of tropospheric and lower stratospheric aerosols. The ADA has the advantage over Mie theory that an analytic inversion of the associated Fredholm integral equation becomes possible. In addition, spectral inversion in the ADA can be formulated as a well-posed problem. In this way, a new inverse formula was obtained, which allows the direct computation of the size distribution as an integral over the spectral extinction function. This formula is valid for particles that both scatter and absorb light, and it also takes the spectral dispersion of the refractive index into account. Some details of the numerical implementation of the inverse formula are illustrated using a modified gamma test distribution. Special attention is given to the integration of spectrally truncated discrete extinction data with errors.

  16. Neutronics Studies of Uranium-bearing Fully Ceramic Micro-encapsulated Fuel for PWRs

    DOE PAGES

    George, Nathan M.; Maldonado, G. Ivan; Terrani, Kurt A.; ...

    2014-12-01

    Our study evaluated the neutronics and some of the fuel cycle characteristics of using uranium-based fully ceramic microencapsulated (FCM) fuel in a pressurized water reactor (PWR). Specific PWR lattice designs with FCM fuel have been developed that are expected to achieve higher specific burnup levels in the fuel while also increasing the tolerance to reactor accidents. The SCALE software system was the primary analysis tool used to model the lattice designs. A parametric study was performed by varying tristructural isotropic particle design features (e.g., kernel diameter, coating layer thicknesses, and packing fraction) to understand the impact on reactivity and the resulting operating cycle length. Moreover, to match the lifetime of an 18-month PWR cycle, the FCM particle fuel design required roughly 10% additional fissile material at beginning of life compared with that of a standard uranium dioxide (UO2) rod. Uranium mononitride proved to be a favorable fuel for the fuel kernel due to its higher heavy metal loading density compared with UO2. The FCM fuel designs evaluated maintain acceptable neutronics design features for fuel lifetime, lattice peaking factors, and nonproliferation figure of merit.

  17. Arabinoxylan-lipids-based edible films and coatings. 2. Influence of sucroester nature on the emulsion structure and film properties.

    PubMed

    Phan The, D; Péroval, C; Debeaufort, F; Despré, D; Courthaudon, J L; Voilley, A

    2002-01-16

    This work contributes to a better knowledge of how the structure of films obtained from emulsions based on arabinoxylans, hydrogenated palm kernel oil (HPKO), and emulsifiers influences their functional properties. The sucroesters (emulsifiers) have a great effect on the stabilization of the emulsified film structure containing arabinoxylans and hydrogenated palm kernel oil. They improve the moisture barrier properties. Several sucroesters having different degrees of esterification were tested. Both lipophilic (90% di- and tri-esters) and hydrophilic (70% mono-esters) sucrose esters can ensure the stability of the emulsion used to form the film, especially during preparation and drying. These emulsifiers confer good moisture barrier properties on the emulsified films.

  18. Classification of Astrocytomas and Oligodendrogliomas from Mass Spectrometry Data Using Sparse Kernel Machines

    PubMed Central

    Huang, Jacob; Gholami, Behnood; Agar, Nathalie Y. R.; Norton, Isaiah; Haddad, Wassim M.; Tannenbaum, Allen R.

    2013-01-01

    Glioma histologies are the primary factor in prognostic estimates and are used in determining the proper course of treatment. Furthermore, due to the sensitivity of cranial environments, real-time tumor-cell classification and boundary detection can aid in the precision and completeness of tumor resection. A recent improvement to mass spectrometry known as desorption electrospray ionization operates in an ambient environment without the application of a preparation compound. This allows for a real-time acquisition of mass spectra during surgeries and other live operations. In this paper, we present a framework using sparse kernel machines to determine a glioma sample’s histopathological subtype by analyzing its chemical composition acquired by desorption electrospray ionization mass spectrometry. PMID:22256188
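
    As a companion illustration (not the authors' code), the sketch below trains an RBF-kernel support vector machine, which is sparse in the sense that only the support vectors enter the decision function, on a synthetic stand-in for peak-binned mass spectra; the data, class labels, and kernel parameters are all assumptions.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in for peak-binned mass spectra: 200 samples x 500 m/z bins;
      # class 1 (purely illustrative labels) gets a few class-informative bins.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 500))
      y = rng.integers(0, 2, size=200)
      X[y == 1, :20] += 0.8

      # The RBF kernel, C and gamma below are assumed defaults, not the paper's choices.
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())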

  19. The Vortex of Burgers in Protoplanetary Disc

    NASA Astrophysics Data System (ADS)

    Abrahamyan, M. G.

    2017-07-01

    The effect of a Burgers vortex on the formation of planetesimals in a protoplanetary disc is considered in the local approximation. It is shown that there is no circular orbit in centrifugal balance for rigid particles; the only stable position in the Burgers vortex under the influence of centrifugal, Coriolis, pressure-gradient and Stokes drag forces is the center of the vortex. The two-dimensional anticyclonic Burgers vortex with a uniformly rotating kernel and a converging radial stream of matter can effectively accumulate meter-sized rigid particles of total mass ~10^28 g in its core region over a characteristic time of ~10^6 yr.
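
    For reference (notation assumed here, not necessarily the author's), the classical Burgers vortex velocity field, of which the paper studies a two-dimensional anticyclonic variant in the disc setting, is

      \[ u_r = -\alpha r, \qquad u_z = 2\alpha z, \qquad u_\theta(r) = \frac{\Gamma}{2\pi r}\left[1 - e^{-\alpha r^{2}/(2\nu)}\right], \]

    with strain rate \alpha, circulation \Gamma and kinematic viscosity \nu; near the axis the swirl reduces to solid-body rotation, u_\theta \approx \Gamma\alpha r/(4\pi\nu), which corresponds to the uniformly rotating kernel mentioned in the abstract.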

  20. Relationship Between Integro-Differential Schrodinger Equation with a Symmetric Kernel and Position-Dependent Effective Mass

    NASA Astrophysics Data System (ADS)

    Khosropour, B.; Moayedi, S. K.; Sabzali, R.

    2018-07-01

    The integro-differential Schrödinger equation (IDSE) introduced in the physics literature plays an important role in many fields of science. The purpose of this paper is twofold. First, we study the relationship between the integro-differential Schrödinger equation with a symmetric non-local potential and the one-dimensional Schrödinger equation with a position-dependent effective mass. Second, we show that the quantum Hamiltonian for a particle with position-dependent mass is converted, after applying Liouville-Green transformations, into a quantum Hamiltonian for a particle with constant mass.
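
    A common target of such mappings (the specific operator ordering used by the authors may differ) is the BenDaniel-Duke form of the position-dependent-mass Hamiltonian,

      \[ \hat H = -\frac{\hbar^{2}}{2}\,\frac{d}{dx}\!\left(\frac{1}{m(x)}\,\frac{d}{dx}\right) + V(x), \]

    which reduces to the ordinary constant-mass Schrödinger operator when m(x) = m_0.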

  1. NIR Reflectance Spectroscopic Method for Nondestructive Moisture Content Determination in Peanut Kernels

    USDA-ARS?s Scientific Manuscript database

    Most of the commercial instruments presently available to determine the moisture content (MC) of peanuts need shelling and cleaning of the peanut samples, and in some cases some sort of sample preparation such as grinding. This is cumbersome, time consuming and destructive. It would be useful if t...

  2. IMPLEMENTATION OF THE SMOKE EMISSION DATA PROCESSOR AND SMOKE TOOL INPUT DATA PROCESSOR IN MODELS-3

    EPA Science Inventory

    The U.S. Environmental Protection Agency has implemented Version 1.3 of SMOKE (Sparse Matrix Object Kernel Emission) processor for preparation of area, mobile, point, and biogenic sources emission data within Version 4.1 of the Models-3 air quality modeling framework. The SMOK...

  3. Modeling and optimization by particle swarm embedded neural network for adsorption of zinc (II) by palm kernel shell based activated carbon from aqueous environment.

    PubMed

    Karri, Rama Rao; Sahu, J N

    2018-01-15

    Zn(II) is one of the common heavy metal pollutants found in industrial effluents. Pollutants can be removed from industrial effluents by various techniques, among which adsorption is an efficient method; however, its application is limited by the high cost of adsorbents. In this regard, a low-cost adsorbent produced from palm oil kernel shell, an agricultural waste, is examined for its efficiency in removing Zn(II) from wastewater and aqueous solution. The influence of independent process variables such as initial concentration, pH, residence time, activated carbon (AC) dosage and process temperature on the removal of Zn(II) by palm kernel shell based AC is studied systematically in batch adsorption experiments. Based on the experimental design matrix, 50 experimental runs are performed with each process variable within its experimental range. The optimal values of the process variables to achieve maximum removal efficiency are determined using response surface methodology (RSM) and artificial neural network (ANN) approaches. A quadratic regression model containing first- and second-order terms is developed using analysis of variance within the RSM central composite design (CCD) framework. Particle swarm optimization (PSO), a meta-heuristic optimization method, is embedded in the ANN architecture to optimize the search space of the neural network. The optimized trained neural network reproduces the testing and validation data well, with R2 equal to 0.9106 and 0.9279, respectively. The outcomes indicate that the ANN-PSO model predictions are superior to the quadratic model predictions provided by RSM. Copyright © 2017 Elsevier Ltd. All rights reserved.
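
    The coupling described above, a swarm searching directly over network weights, can be illustrated with a minimal sketch (not the authors' RSM/ANN-PSO pipeline); the synthetic data, the network size, and the PSO parameters below are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic stand-in for the adsorption data set: 50 runs x 5 process variables
      # (concentration, pH, time, AC dosage, temperature) -> removal efficiency.
      X = rng.uniform(0.0, 1.0, size=(50, 5))
      y = (X @ np.array([0.4, 0.3, 0.1, 0.15, 0.05]) + 0.1 * np.sin(3.0 * X[:, 0]))[:, None]

      def mlp(weights, X, n_hidden=6):
          # one-hidden-layer network; 'weights' is a flat parameter vector
          n_in = X.shape[1]
          w1 = weights[: n_in * n_hidden].reshape(n_in, n_hidden)
          b1 = weights[n_in * n_hidden : n_in * n_hidden + n_hidden]
          w2 = weights[n_in * n_hidden + n_hidden : -1].reshape(n_hidden, 1)
          b2 = weights[-1]
          return np.tanh(X @ w1 + b1) @ w2 + b2

      def mse(weights):
          return float(np.mean((mlp(weights, X) - y) ** 2))

      # plain global-best PSO over the flattened network weights (parameters assumed)
      n_dim = 5 * 6 + 6 + 6 + 1
      n_particles, iters, inertia, c1, c2 = 30, 300, 0.7, 1.5, 1.5
      pos = rng.normal(scale=0.5, size=(n_particles, n_dim))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
      gbest = pbest[pbest_val.argmin()].copy()

      for _ in range(iters):
          r1, r2 = rng.random((2, n_particles, 1))
          vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          vals = np.array([mse(p) for p in pos])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[pbest_val.argmin()].copy()

      print("best training MSE found by the swarm:", pbest_val.min())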

  4. Analysis and Development of A Robust Fuel for Gas-Cooled Fast Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knight, Travis W.

    2010-01-31

    The focus of this effort was on the development of an advanced fuel for gas-cooled fast reactor (GFR) applications. This composite design is based on carbide fuel kernels dispersed in a ZrC matrix. The choice of ZrC is based on its high temperature properties, good thermal conductivity, and improved retention of fission products to temperatures beyond that of traditional SiC based coated particle fuels. A key component of this study was the development and understanding of advanced fabrication techniques for GFR fuels that have the potential to reduce minor actinide (MA) losses during fabrication owing to their higher vapor pressures and greater volatility. The major accomplishments of this work were the study of combustion synthesis methods for fabrication of the ZrC matrix, fabrication of high density UC electrodes for use in the rotating electrode process, production of UC particles by the rotating electrode method, integration of UC kernels in the ZrC matrix, and the full characterization of each component. Major accomplishments in the near term have been the greater characterization of the UC kernels produced by the rotating electrode method and of their condition following integration in the composite (ZrC matrix) after the short-time but high-temperature combustion synthesis process. This work has generated four journal publications, one conference proceeding paper, and one additional journal paper submitted for publication (under review). The greater significance of the work is that it achieved an objective of the DOE Generation IV (GenIV) roadmap for GFR fuel, namely the demonstration of a composite carbide fuel with 30% fuel by volume. This near-term accomplishment is even more significant given the expected or possible time frame for implementation of the GFR in the years 2030-2050 or beyond.

  5. Coalescence of repelling colloidal droplets: a route to monodisperse populations.

    PubMed

    Roger, Kevin; Botet, Robert; Cabane, Bernard

    2013-05-14

    Populations of droplets or particles dispersed in a liquid may evolve through Brownian collisions, aggregation, and coalescence. We have found a set of conditions under which these populations evolve spontaneously toward a narrow size distribution. The experimental system consists of poly(methyl methacrylate) (PMMA) nanodroplets dispersed in a solvent (acetone) + nonsolvent (water) mixture. These droplets carry electrical charges, located on the ionic end groups of the macromolecules. We used time-resolved small angle X-ray scattering to determine their size distribution. We find that the droplets grow through coalescence events: the average radius ⟨R⟩ increases logarithmically with elapsed time while the relative width σ_R/⟨R⟩ of the distribution decreases as the inverse square root of ⟨R⟩. We interpret this evolution as resulting from coalescence events that are hindered by ionic repulsions between droplets. We generalize this evolution through a simulation of the Smoluchowski kinetic equation, with a kernel that takes into account the interactions between droplets. In the case of vanishing or attractive interactions, all droplet encounters lead to coalescence. The corresponding kernel leads to the well-known "self-preserving" particle distribution of the coalescence process, where σ_R/⟨R⟩ increases to a plateau value. However, for droplets that interact through long-range ionic repulsions, "large + small" droplet encounters are more successful at coalescence than "large + large" encounters. We show that the corresponding kernel leads to a particular scaling of the droplet-size distribution, known as the "second-scaling law" in the theory of critical phenomena, where σ_R/⟨R⟩ decreases as 1/√⟨R⟩ and becomes independent of the initial distribution. We argue that this scaling explains the narrow size distributions of colloidal dispersions that have been synthesized through aggregation processes.
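
    The qualitative effect of an interaction-dependent kernel can be reproduced with a minimal discrete Smoluchowski solver such as the sketch below (not the paper's model); the Brownian-like kernel, the repulsion factor that penalizes "large + large" encounters, the truncation size, and all parameters are illustrative assumptions.

      import numpy as np
      from scipy.integrate import solve_ivp

      KMAX = 60  # largest aggregate size tracked, in monomer units (truncation assumed)

      def kernel(i, j):
          # illustrative Brownian-like kernel times a repulsion factor that hinders
          # "large + large" encounters more strongly than "large + small" ones
          ri, rj = i ** (1.0 / 3.0), j ** (1.0 / 3.0)
          brownian = (ri + rj) * (1.0 / ri + 1.0 / rj)
          repulsion = np.exp(-0.5 * min(ri, rj))
          return brownian * repulsion

      K = np.array([[kernel(i, j) for j in range(1, KMAX + 1)] for i in range(1, KMAX + 1)])

      def rhs(t, n):
          gain = np.zeros_like(n)
          for k in range(2, KMAX + 1):            # gain term: i + (k - i) -> k
              i = np.arange(1, k)
              gain[k - 1] = 0.5 * np.sum(K[i - 1, k - i - 1] * n[i - 1] * n[k - i - 1])
          loss = n * (K @ n)                      # loss term: k coalescing with anything
          return gain - loss

      n0 = np.zeros(KMAX)
      n0[0] = 1.0                                 # monomers only at t = 0
      sol = solve_ivp(rhs, (0.0, 50.0), n0, method="LSODA", rtol=1e-8)
      sizes = np.arange(1, KMAX + 1)
      print("mean aggregate size at t = 50:",
            float((sizes * sol.y[:, -1]).sum() / sol.y[:, -1].sum()))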

  6. Transition of phenolics and cyanogenic glycosides from apricot and cherry fruit kernels into liqueur.

    PubMed

    Senica, Mateja; Stampar, Franci; Veberic, Robert; Mikulic-Petkovsek, Maja

    2016-07-15

    Popular liqueurs made from apricot/cherry pits were evaluated in terms of their phenolic composition and the occurrence of cyanogenic glycosides (CGG). Analyses consisted of detailed phenolic and cyanogenic profiles of cherry and apricot seeds as well as of beverages prepared from crushed kernels. Phenolic groups and cyanogenic glycosides were analyzed by high-performance liquid chromatography (HPLC) and mass spectrometry (MS). Lower levels of cyanogenic glycosides and phenolics were quantified in liqueurs compared to fruit kernels. During steeping of the fruit pits in alcohol, the phenolics/cyanogenic glycosides ratio increased, and at the end of the beverage manufacturing process higher levels of total analyzed phenolics were detected compared to cyanogenic glycosides (apricot liqueur: 38.79 μg CGG per ml and 50.57 μg phenolics per ml; cherry liqueur: 16.08 μg CGG per ml and 27.73 μg phenolics per ml). Although higher levels of phenolics are characteristic of liqueurs made from apricot and cherry pits, these beverages nevertheless contain considerable amounts of cyanogenic glycosides. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Custom controls

    NASA Astrophysics Data System (ADS)

    Butell, Bart

    1996-02-01

    Microsoft's Visual Basic (VB) and Borland's Delphi provide an extremely robust programming environment for delivering multimedia solutions for interactive kiosks, games and titles. Their object-oriented use of standard and custom controls enables a user to build extremely powerful applications. A multipurpose, database-enabled programming environment that can provide an event-driven interface functions as a multimedia kernel. This kernel can provide a variety of authoring solutions (e.g., a timeline-based model similar to Macromedia Director or a node authoring model similar to Icon Author). At the heart of the kernel is a set of low-level multimedia components providing object-oriented interfaces for graphics, audio, video and imaging. Data preparation tools (e.g., layout, palette and sprite editors) could be built to manage the media database. The flexible interface for VB allows the construction of an infinite number of user models. The proliferation of these models within a popular, easy-to-use environment will allow the vast developer segment of 'producer' types to bring their ideas to the market. This is the key to building exciting, content-rich multimedia solutions. Microsoft's VB and Borland's Delphi environments combined with multimedia components enable these possibilities.

  8. Heat kernel and Weyl anomaly of Schrödinger invariant theory

    NASA Astrophysics Data System (ADS)

    Pal, Sridip; Grinstein, Benjamín

    2017-12-01

    We propose a method inspired by discrete light cone quantization to determine the heat kernel for a Schrödinger field theory (Galilean boost invariant with z = 2 anisotropic scaling symmetry) living in d+1 dimensions, coupled to a curved Newton-Cartan background, starting from the heat kernel of a relativistic conformal field theory (z = 1) living in d+2 dimensions. We use this method to show that the Schrödinger field theory of a complex scalar field cannot have any Weyl anomalies. To be precise, we show that the Weyl anomaly A^G_{d+1} for the Schrödinger theory is related to the Weyl anomaly of a free relativistic scalar CFT A^R_{d+2} via A^G_{d+1} = 2π δ(m) A^R_{d+2}, where m is the charge of the scalar field under the particle number symmetry. We provide further evidence of the vanishing anomaly by evaluating Feynman diagrams to all orders of perturbation theory. We present an explicit calculation of the anomaly using a regulated Schrödinger operator, without using the null cone reduction technique. We generalize our method to show that a similar result holds for theories with a single time derivative and with even z > 2.

  9. Numerical integration of the extended variable generalized Langevin equation with a positive Prony representable memory kernel.

    PubMed

    Baczewski, Andrew D; Bond, Stephen D

    2013-07-28

    Generalized Langevin dynamics (GLD) arise in the modeling of a number of systems, ranging from structured fluids that exhibit a viscoelastic mechanical response, to biological systems, and other media that exhibit anomalous diffusive phenomena. Molecular dynamics (MD) simulations that include GLD in conjunction with external and/or pairwise forces require the development of numerical integrators that are efficient, stable, and have known convergence properties. In this article, we derive a family of extended variable integrators for the Generalized Langevin equation with a positive Prony series memory kernel. Using stability and error analysis, we identify a superlative choice of parameters and implement the corresponding numerical algorithm in the LAMMPS MD software package. Salient features of the algorithm include exact conservation of the first and second moments of the equilibrium velocity distribution in some important cases, stable behavior in the limit of conventional Langevin dynamics, and the use of a convolution-free formalism that obviates the need for explicit storage of the time history of particle velocities. Capability is demonstrated with respect to accuracy in numerous canonical examples, stability in certain limits, and an exemplary application in which the effect of a harmonic confining potential is mapped onto a memory kernel.
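
    The extended-variable idea, replacing the memory convolution with one auxiliary variable per Prony mode, can be sketched with a plain Euler-Maruyama update as below; this only illustrates the structure of the formulation, not the integrator family derived in the paper, and the kernel coefficients, potential, temperature and time step are assumed.

      import numpy as np

      rng = np.random.default_rng(2)

      # Prony-series memory kernel K(t) = sum_k c_k * exp(-t / tau_k); all parameters
      # below are chosen only for illustration.
      c   = np.array([1.0, 0.5])
      tau = np.array([0.5, 2.0])
      kT, m, dt, nsteps = 1.0, 1.0, 1.0e-3, 200_000

      def force(x):
          return -x  # harmonic confining potential U(x) = x^2 / 2

      x, v = 0.0, 0.0
      z = rng.normal(scale=np.sqrt(c * kT))     # auxiliary variables, equilibrium start

      v2 = np.empty(nsteps)
      for step in range(nsteps):
          # Euler-Maruyama update; the friction plus colored noise of the GLE is
          # carried entirely by the sum of the auxiliary variables z_k
          dW = rng.normal(scale=np.sqrt(dt), size=z.shape)
          a = (force(x) + z.sum()) / m
          x += v * dt
          v += a * dt
          z += (-z / tau - c * v) * dt + np.sqrt(2.0 * c * kT / tau) * dW
          v2[step] = v * v

      print("<v^2> over the second half:", v2[nsteps // 2:].mean(), "(target kT/m =", kT / m, ")")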

  10. Calculation of electron Dose Point Kernel in water with GEANT4 for medical application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guimaraes, C. C.; Sene, F. F.; Martinelli, J. R.

    2009-06-03

    The rapid insertion of new technologies in medical physics in the last years, especially in nuclear medicine, has been followed by a great development of faster Monte Carlo algorithms. GEANT4 is a Monte Carlo toolkit that contains the tools to simulate the problems of particle transport through matter. In this work, GEANT4 was used to calculate the dose-point-kernel (DPK) for monoenergetic electrons in water, which is an important reference medium for nuclear medicine. The three different physical models of electromagnetic interactions provided by GEANT4 - Low Energy, Penelope and Standard - were employed. To verify the adequacy of these models, the results were compared with references from the literature. For all energies and physical models, the agreement between calculated DPKs and reported values is satisfactory.

  11. The moving-least-squares-particle hydrodynamics method (MLSPH)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dilts, G.

    1997-12-31

    An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is now equivalent to a collocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the down side, computational expense at this point is significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (collocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.

  12. Electron Microscopic Evaluation and Fission Product Identification of Irradiated TRISO Coated Particles from the AGR-1 Experiment: A Preliminary Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    IJ van Rooyen; DE Janney; BD Miller

    2014-05-01

    Post-irradiation examination of coated particle fuel from the AGR-1 experiment is in progress at Idaho National Laboratory and Oak Ridge National Laboratory. In this paper a brief summary of results from characterization of microstructures in the coating layers of selected irradiated fuel particles with burnup of 11.3% and 19.3% FIMA will be given. The main objectives of the characterization were to study irradiation effects, fuel kernel porosity, layer debonding, layer degradation or corrosion, fission-product precipitation, grain sizes, and transport of fission products from the kernels across the TRISO layers. Characterization techniques such as scanning electron microscopy, transmission electron microscopy, energy dispersive spectroscopy, and wavelength dispersive spectroscopy were used. A new approach to microscopic quantification of fission-product precipitates is also briefly demonstrated. Microstructural characterization focused on fission-product precipitates in the SiC-IPyC interface, the SiC layer and the fuel-buffer interlayer. The results provide significant new insights into mechanisms of fission-product transport. Although Pd-rich precipitates were identified at the SiC-IPyC interlayer, no significant SiC-layer thinning was observed for the particles investigated. Characterization of these precipitates highlighted the difficulty of measuring low concentrations of Ag in precipitates with significantly higher concentrations of Pd and U. Different approaches to resolving this problem are discussed. An initial hypothesis is provided to explain fission-product precipitate compositions and locations. No SiC phase transformations were observed and no debonding of the SiC-IPyC interlayer as a result of irradiation was observed for the samples investigated. Lessons learned from the post-irradiation examination are described and future actions are recommended.

  13. Large scale Brownian dynamics of confined suspensions of rigid particles

    NASA Astrophysics Data System (ADS)

    Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A.; Donev, Aleksandar

    2017-12-01

    We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose height above the wall is set by a combination of thermal noise and active flows. We find the existence of two populations of active particles, slower ones closer to the bottom and faster ones above them, and demonstrate that our method provides quantitative accuracy even with relatively coarse resolutions of the particle geometry.
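
    The random finite differences mentioned above rest on the identity (notation assumed here)

      \[ \frac{1}{\delta}\,\mathbb{E}\!\left[\Bigl(M\bigl(q+\tfrac{\delta}{2}W\bigr)-M\bigl(q-\tfrac{\delta}{2}W\bigr)\Bigr)W\right] \longrightarrow \partial_q\!\cdot\!M(q) \quad \text{as } \delta \to 0, \]

    where W is a vector of independent standard Gaussian variables and \delta a small displacement scale, so the stochastic drift term k_B T \, \partial_q\!\cdot\!M can be estimated from two extra applications of the mobility per time step instead of explicit mobility derivatives or resistance solves.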

  14. Poly (lactic-co-glycolic acid) particles prepared by microfluidics and conventional methods. Modulated particle size and rheology.

    PubMed

    Perez, Aurora; Hernández, Rebeca; Velasco, Diego; Voicu, Dan; Mijangos, Carmen

    2015-03-01

    Microfluidic techniques are expected to provide narrower particle size distribution than conventional methods for the preparation of poly (lactic-co-glycolic acid) (PLGA) microparticles. Besides, it is hypothesized that the particle size distribution of poly (lactic-co-glycolic acid) microparticles influences the settling behavior and rheological properties of its aqueous dispersions. For the preparation of PLGA particles, two different methods, microfluidic and conventional oil-in-water emulsification methods were employed. The particle size and particle size distribution of PLGA particles prepared by microfluidics were studied as a function of the flow rate of the organic phase while particles prepared by conventional methods were studied as a function of stirring rate. In order to study the stability and structural organization of colloidal dispersions, settling experiments and oscillatory rheological measurements were carried out on aqueous dispersions of PLGA particles with different particle size distributions. Microfluidics technique allowed the control of size and size distribution of the droplets formed in the process of emulsification. This resulted in a narrower particle size distribution for samples prepared by MF with respect to samples prepared by conventional methods. Polydisperse samples showed a larger tendency to aggregate, thus confirming the advantages of microfluidics over conventional methods, especially if biomedical applications are envisaged. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Y2O3:Eu phosphor particles prepared by spray pyrolysis from a solution containing citric acid and polyethylene glycol

    NASA Astrophysics Data System (ADS)

    Roh, H. S.; Kang, Y. C.; Park, H. D.; Park, S. B.

    Y2O3:Eu phosphor particles were prepared by large-scale spray pyrolysis. The morphological control of Y2O3:Eu particles in spray pyrolysis was attempted by adding polymeric precursors to the spray solution. The effect of composition and amount of polymeric precursors on the morphology, crystallinity and photoluminescence characteristics of Y2O3:Eu particles was investigated. Particles prepared from a solution containing polyethylene glycol (PEG) with an average molecular weight of 200 had a hollow structure, while those prepared from solutions containing adequate amounts of citric acid (CA) and PEG had a spherical shape, filled morphology and clean surfaces after post-treatment at high temperature. Y2O3:Eu particles prepared from an aqueous solution with no polymeric precursors had a hollow structure and rough surfaces after post-treatment. The phosphor particles prepared from solutions with inadequate amounts of CA and/or PEG also had hollow and/or fragmented structures. The particles prepared from the solution containing 0.3 M CA and 0.3 M PEG had the highest photoluminescence emission intensity, which was 56% higher than that of the particles prepared from aqueous solution without polymeric precursors.

  16. [Utilization of gossypol-free cottonseed and its by-products as human food].

    PubMed

    Cornu, A; Delpeuch, F; Favier, J C

    1977-01-01

    Trials focused principally on a glandless cottonseed flour containing 56% protein. It is possible to blend it with millet or sorghum flour and thus to prepare the main dishes of the local cuisine. Acceptability and long-term consumption trials have shown that this flour is rather well appreciated, especially in sauces. The growth of young children improved with the consumption of a cottonseed-flour pap over six months. Trials to manufacture biscuits and noodles have also been attempted. Cottonseed kernels with 32% protein and 33% lipids have been consumed successfully. Four tons of kernels have been sold at the same price as sorghum in the area where the glandless cotton plant is under cultivation.

  17. Wet-chemical dissolution of TRISO-coated simulated high-temperature-reactor fuel particles

    NASA Astrophysics Data System (ADS)

    Skolo, K. P.; Jacobs, P.; Venter, J. H.; Klopper, W.; Crouse, P. L.

    2012-01-01

    Chemical etching with different mixtures of acidic solutions has been investigated to disintegrate the two outermost coatings from tri-structural isotropic coated particles containing zirconia kernels, which are used in simulated particles instead of uranium dioxide. A scanning electron microscope (SEM) was used to study the morphology of the particles after the first etching step as well as at different stages of the second etching step. SEM examination shows that the outer carbon layer can be readily removed with a CrO3-HNO3/H2SO4 solution. This finding was verified by energy dispersive spectroscopy (EDS) analysis. Etching of the silicon carbide layer in a hydrofluoric-nitric solution yielded partial removal of the coating and localized attack of the underlying coating layers. The SEM results provide evidence that the etching of the silicon carbide layer is strongly influenced by its microstructure.

  18. Extension of moment projection method to the fragmentation process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Shaohua; Yapp, Edward K.Y.; Akroyd, Jethro

    2017-04-15

    The method of moments is a simple but efficient method of solving the population balance equation which describes particle dynamics. Recently, the moment projection method (MPM) was proposed and validated for particle inception, coagulation, growth and, more importantly, shrinkage; here the method is extended to include the fragmentation process. The performance of MPM is tested for 13 different test cases for different fragmentation kernels, fragment distribution functions and initial conditions. Comparisons are made with the quadrature method of moments (QMOM), hybrid method of moments (HMOM) and a high-precision stochastic solution calculated using the established direct simulation algorithm (DSA), and the advantages of MPM are drawn.

  19. Spatio-Temporal Process Simulation of Dam-Break Flood Based on SPH

    NASA Astrophysics Data System (ADS)

    Wang, H.; Ye, F.; Ouyang, S.; Li, Z.

    2018-04-01

    After introducing the SPH (Smoothed Particle Hydrodynamics) simulation method, this paper addresses the key research problems, namely the spatial and temporal scales suited to GIS (Geographical Information System) applications, the boundary-condition equations combined with the underlying surface, and the kernel function and parameters applicable to dam-break flood simulation. On this basis, a method for simulating the spatio-temporal process of a dam-break flood with refined particles is proposed, and the spatio-temporal process is dynamically simulated using GIS modelling and visualization. The results show that the method yields more information and a more objective, realistic representation of the flood process.
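
    A typical choice of SPH kernel in such simulations is the cubic B-spline; the sketch below (an illustration, not the kernel actually tuned in the paper) evaluates it and checks its normalization in two dimensions, with the smoothing length h assumed.

      import numpy as np

      def cubic_spline_w(r, h, dim=2):
          # standard cubic B-spline SPH kernel W(r, h); sigma is the dimension-dependent
          # normalization constant, q = r / h the normalized distance
          sigma = {1: 2.0 / 3.0, 2: 10.0 / (7.0 * np.pi), 3: 1.0 / np.pi}[dim] / h**dim
          q = np.asarray(r, dtype=float) / h
          w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
              np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
          return sigma * w

      # quick check: the kernel should integrate to ~1 over the two-dimensional plane
      h = 0.1
      r = np.linspace(0.0, 2.0 * h, 4001)
      integrand = cubic_spline_w(r, h) * 2.0 * np.pi * r
      print(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))  # ~1.0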

  20. Smoothed-particle-hydrodynamics modeling of dissipation mechanisms in gravity waves.

    PubMed

    Colagrossi, Andrea; Souto-Iglesias, Antonio; Antuono, Matteo; Marrone, Salvatore

    2013-02-01

    The smoothed-particle-hydrodynamics (SPH) method has been used to study the evolution of free-surface Newtonian viscous flows specifically focusing on dissipation mechanisms in gravity waves. The numerical results have been compared with an analytical solution of the linearized Navier-Stokes equations for Reynolds numbers in the range 50-5000. We found that a correct choice of the number of neighboring particles is of fundamental importance in order to obtain convergence towards the analytical solution. This number has to increase with higher Reynolds numbers in order to prevent the onset of spurious vorticity inside the bulk of the fluid, leading to an unphysical overdamping of the wave amplitude. This generation of spurious vorticity strongly depends on the specific kernel function used in the SPH model.

  1. Size reduction of submicron magnesium particles prepared by pulsed wire discharge

    NASA Astrophysics Data System (ADS)

    Duy Hieu, Nguyen; Tokoi, Yoshinori; Tanaka, Kenta; Sasaki, Toru; Suzuki, Tsuneo; Nakayama, Tadachika; Suematsu, Hisayuki; Niihara, Koichi

    2018-02-01

    In this study, the submicron magnesium particle size was reduced by adjusting ambient gas pressure and input energy. The mean diameter of the prepared particles was determined from transmission electron microscopy images. The geometric mean particle diameter decreased with increasing relative energy, which was defined as the charging energy divided by the evaporation energy of a wire. By this method, Mg particles with a geometric mean diameter of 41.9 nm were prepared. To our knowledge, they are the smallest passivated Mg particles prepared by any method.

  2. An improved numerical method for the kernel density functional estimation of disperse flow

    NASA Astrophysics Data System (ADS)

    Smith, Timothy; Ranjan, Reetesh; Pantano, Carlos

    2014-11-01

    We present an improved numerical method to solve the transport equation for the one-point particle density function (pdf), which can be used to model disperse flows. The transport equation, a hyperbolic partial differential equation (PDE) with a source term, is derived from the Lagrangian equations for a dilute particle system by treating position and velocity as state-space variables. The method approximates the pdf by a discrete mixture of kernel density functions (KDFs) with space and time varying parameters and performs a global Rayleigh-Ritz like least-square minimization on the state-space of velocity. Such an approximation leads to a hyperbolic system of PDEs for the KDF parameters that cannot be written completely in conservation form. This system is solved using a numerical method that is path-consistent, according to the theory of non-conservative hyperbolic equations. The resulting formulation is a Roe-like update that utilizes the local eigensystem information of the linearized system of PDEs. We will present the formulation of the base method, its higher-order extension and further regularization to demonstrate that the method can predict statistics of disperse flows in an accurate, consistent and efficient manner. This project was funded by NSF Project NSF-DMS 1318161.

  3. Compressive strength, flexural strength and water absorption of concrete containing palm oil kernel shell

    NASA Astrophysics Data System (ADS)

    Noor, Nurazuwa Md; Xiang-ONG, Jun; Noh, Hamidun Mohd; Hamid, Noor Azlina Abdul; Kuzaiman, Salsabila; Ali, Adiwijaya

    2017-11-01

    The effect of including palm oil kernel shell (PKS) and palm oil fibre (POF) in concrete on compressive and flexural strength was investigated. In addition, the effect of PKS on concrete water absorption was examined. A total of 48 concrete cubes (100 mm × 100 mm × 100 mm) and 24 concrete prisms (100 mm × 100 mm × 500 mm) were prepared. Four series of concrete mixes were prepared in which coarse aggregate was replaced by 0%, 25%, 50% and 75% palm kernel shell, and each series was divided into two main groups. The first group contained no POF, while the second group was mixed with 5 cm long fibres at a 0.25% POF volume fraction. All specimens were tested in compression after 7 and 28 days of water curing, and in flexure after 28 days of curing. The water absorption test was conducted on 28-day-old concrete cubes. The results showed that the replacement with PKS gives lower compressive and flexural strength in comparison with conventional concrete. However, the concrete with 25% PKS replacement showed acceptable compressive strength, which is within the range required for structural concrete. Meanwhile, the POF, which should act as matrix reinforcement, showed no enhancement in flexural strength due to the balling effect in the concrete. As expected, water absorption increased with increasing PKS content, caused by the porous characteristics of PKS.

  4. Toward lattice fractional vector calculus

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2014-09-01

    An analog of fractional vector calculus for physical lattice models is suggested. We use an approach based on the models of three-dimensional lattices with long-range inter-particle interactions. The lattice analogs of fractional partial derivatives are represented by kernels of lattice long-range interactions, where the Fourier series transformations of these kernels have a power-law form with respect to wave vector components. In the continuum limit, these lattice partial derivatives give derivatives of non-integer order with respect to coordinates. In the three-dimensional description of the non-local continuum, the fractional differential operators have the form of fractional partial derivatives of the Riesz type. As examples of the applications of the suggested lattice fractional vector calculus, we give lattice models with long-range interactions for the fractional Maxwell equations of non-local continuous media and for the fractional generalization of the Mindlin and Aifantis continuum models of gradient elasticity.
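
    In Fourier terms, the continuum limit invoked here replaces a lattice coupling whose Fourier symbol behaves as |k|^\alpha at small wave vectors by the Riesz fractional Laplacian, defined (notation assumed) through

      \[ \mathcal{F}\bigl[(-\Delta)^{\alpha/2} f\bigr](k) = |k|^{\alpha}\,\hat f(k), \]

    so that \alpha = 2 recovers the ordinary Laplacian and non-integer \alpha gives fractional partial derivatives of the Riesz type, as stated in the abstract.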

  5. Meshfree truncated hierarchical refinement for isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Atri, H. R.; Shojaee, S.

    2018-05-01

    In this paper the truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can easily be defined, which provides an authentic meshfree approach to refining the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method can provide efficient approximation schemes for numerical simulations and shows promising performance in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach for adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.
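
    The reproducing kernel shape functions referred to above are built by correcting a kernel (window) function with a moment matrix so that polynomials up to a chosen order are reproduced exactly. The one-dimensional sketch below is illustrative only; the node spacing, window function and support size are assumptions, and it simply verifies partition of unity and linear reproduction.

      import numpy as np

      def rkpm_shape_functions(x, nodes, a, order=1):
          # 1-D reproducing kernel particle method shape functions evaluated at x;
          # nodes: particle positions, a: support radius, order: completeness order
          def window(s):
              # cubic spline window with support |s| <= a (normalization cancels out)
              q = np.abs(s) / a
              return np.where(q < 0.5, 2.0 / 3.0 - 4.0 * q**2 + 4.0 * q**3,
                     np.where(q < 1.0, 4.0 / 3.0 * (1.0 - q) ** 3, 0.0))

          s = x - nodes                                     # shifted coordinates
          H = np.vander(s, order + 1, increasing=True).T    # rows: 1, s, s^2, ...
          w = window(s)
          M = (H * w) @ H.T                                 # moment matrix
          b = np.linalg.solve(M, np.eye(order + 1)[:, 0])   # correction: M b = H(0)
          return (b @ H) * w                                # corrected kernel values

      nodes = np.linspace(0.0, 1.0, 11)
      x = 0.37
      psi = rkpm_shape_functions(x, nodes, a=0.25, order=1)
      print("partition of unity:", psi.sum())               # ~1.0
      print("linear reproduction:", psi @ nodes, "vs x =", x)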

  6. An arbitrary boundary with ghost particles incorporated in coupled FEM-SPH model for FSI problems

    NASA Astrophysics Data System (ADS)

    Long, Ting; Hu, Dean; Wan, Detao; Zhuang, Chen; Yang, Gang

    2017-12-01

    It is important to treat arbitrary boundaries in Fluid-Structure Interaction (FSI) problems in computational mechanics. In order to ensure a complete support condition and restore first-order consistency near the boundary of the Smoothed Particle Hydrodynamics (SPH) method when coupling the Finite Element Method (FEM) with the SPH model, a new ghost particle method is proposed by dividing the intercepted area of the kernel support domain into subareas corresponding to the boundary segments of the structure. The ghost particles are produced automatically for every fluid particle at each time step, and the properties of the ghost particles, such as density, mass and velocity, are defined by using the subareas to satisfy the boundary condition. In the coupled FEM-SPH model, the normal and shear forces from a boundary segment of the structure to a fluid particle are calculated through the corresponding ghost particles, and the opposite forces are exerted on the corresponding boundary segment; thus the momentum of the present method is conserved and there are no matching requirements between the size of the elements and the size of the particles. The performance of the present method is discussed and validated by several FSI problems with complex geometry boundaries and moving boundaries.

  7. Investigation of Preparation and Mechanisms of a Dispersed Particle Gel Formed from a Polymer Gel at Room Temperature

    PubMed Central

    Zhao, Guang; Dai, Caili; Zhao, Mingwei; You, Qing; Chen, Ang

    2013-01-01

    A dispersed particle gel (DPG) was successfully prepared from a polymer gel at room temperature. The polymer gel system, morphology, viscosity changes, size distribution, and zeta potential of DPG particles were investigated. The results showed that zirconium gel systems with different strengths can be cross-linked within 2.5 h at low temperature. Scanning electron microscopy (SEM), transmission electron microscopy (TEM), and atomic force microscopy (AFM) results showed that the particles were polygonal particles with nano-size distribution. According to the viscosity changes, the whole preparation process can be divided into two major stages: the bulk gel cross-linking reaction period and the DPG particle preparation period. A polymer gel with a 3-dimensional network was formed in the bulk gel cross-linking reaction period whereas shearing force and frictional force were the main driving forces for the preparation of DPG particles, and thus affected the morphology of DPG particles. High shearing force and frictional force reduced the particle size distribution, and then decreased the zeta potential (absolute value). The whole preparation process could be completed within 3 h at room temperature. It could be an efficient and energy-saving technology for preparation of DPG particles. PMID:24324817

  8. Attenuation of the NMR signal in a field gradient due to stochastic dynamics with memory

    NASA Astrophysics Data System (ADS)

    Lisý, Vladimír; Tóthová, Jana

    2017-03-01

    The attenuation function S(t) for an ensemble of spins in a magnetic-field gradient is calculated by accumulating the phase shifts in the rotating frame resulting from the displacements of spin-bearing particles. The resulting S(t), expressed through the particle mean square displacement, is applicable to any kind of stationary stochastic motion of the spins, including non-Markovian dynamics with memory. The known expressions valid for normal and anomalous diffusion are obtained as special cases in the long-time approximation. The method is also applicable to NMR pulse sequences based on the refocusing principle; this is demonstrated by describing the Hahn spin echo experiment. The attenuation of the NMR signal is also evaluated provided that the random motion of the particle is modeled by the generalized Langevin equation with a memory kernel that decays exponentially in time. The models considered in our paper assume massive particles driven by much smaller particles.
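
    For orientation (notation assumed here, not necessarily that of the paper), within the Gaussian phase approximation the attenuation in a constant gradient g is

      \[ S(t) = \exp\!\left[-\tfrac{1}{2}\,\langle \varphi^{2}(t)\rangle\right], \qquad \varphi(t) = \gamma g \int_{0}^{t} x(t')\,dt', \]

    where x is the spin displacement along the gradient and \gamma the gyromagnetic ratio; for ordinary diffusion, with \langle x^{2}(t)\rangle = 2Dt, this reduces to the classic free-precession result \ln S(t) = -\gamma^{2} g^{2} D\, t^{3}/3.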

  9. Detection of subjects and brain regions related to Alzheimer's disease using 3D MRI scans based on eigenbrain and machine learning

    PubMed Central

    Zhang, Yudong; Dong, Zhengchao; Phillips, Preetha; Wang, Shuihua; Ji, Genlin; Yang, Jiquan; Yuan, Ti-Fei

    2015-01-01

    Purpose: Early diagnosis or detection of Alzheimer's disease (AD) versus normal elderly controls (NC) is very important. However, computer-aided diagnosis (CAD) has not been widely used, and classification performance has not reached the standard of practical use. We proposed a novel CAD system for MR brain images based on eigenbrains and machine learning with two goals: accurate detection of both AD subjects and AD-related brain regions. Method: First, we used maximum inter-class variance (ICV) to select key slices from the 3D volumetric data. Second, we generated an eigenbrain set for each subject. Third, the most important eigenbrain (MIE) was obtained by Welch's t-test (WTT). Finally, kernel support vector machines with different kernels, trained by particle swarm optimization, were used to make an accurate prediction of AD subjects. Coefficients of the MIE with values higher than the 0.98 quantile were highlighted to obtain the discriminant regions that distinguish AD from NC. Results: The experiments showed that the proposed method can predict AD subjects with performance competitive with existing methods; in particular, the accuracy of the polynomial kernel (92.36 ± 0.94) was better than that of the linear kernel (91.47 ± 1.02) and the radial basis function (RBF) kernel (86.71 ± 1.93). The proposed eigenbrain-based CAD system detected 30 AD-related brain regions (Anterior Cingulate, Caudate Nucleus, Cerebellum, Cingulate Gyrus, Claustrum, Inferior Frontal Gyrus, Inferior Parietal Lobule, Insula, Lateral Ventricle, Lentiform Nucleus, Lingual Gyrus, Medial Frontal Gyrus, Middle Frontal Gyrus, Middle Occipital Gyrus, Middle Temporal Gyrus, Paracentral Lobule, Parahippocampal Gyrus, Postcentral Gyrus, Posterior Cingulate, Precentral Gyrus, Precuneus, Subcallosal Gyrus, Sub-Gyral, Superior Frontal Gyrus, Superior Parietal Lobule, Superior Temporal Gyrus, Supramarginal Gyrus, Thalamus, Transverse Temporal Gyrus, and Uncus). The results were consistent with the existing literature. Conclusion: The eigenbrain method was effective in AD subject prediction and discriminant brain-region detection in MRI scanning. PMID:26082713

  10. AGR-1 Post Irradiation Examination Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demkowicz, Paul Andrew

    The post-irradiation examination (PIE) of the Advanced Gas Reactor (AGR)-1 experiment was a multi-year, collaborative effort between Idaho National Laboratory (INL) and Oak Ridge National Laboratory (ORNL) to study the performance of UCO (uranium carbide, uranium oxide) tristructural isotropic (TRISO) coated particle fuel fabricated in the U.S. and irradiated at the Advanced Test Reactor at INL to a peak burnup of 19.6% fissions per initial metal atom. This work involved a broad array of experiments and analyses to evaluate the level of fission product retention by the fuel particles and compacts (both during irradiation and during post-irradiation heating tests to simulate reactor accident conditions), investigate the kernel and coating layer morphology evolution and the causes of coating failure, and explore the migration of fission products through the coating layers. The results have generally confirmed the excellent performance of the AGR-1 fuel, first indicated during the irradiation by the observation of zero TRISO coated particle failures out of 298,000 particles in the experiment. Overall release of fission products was determined by PIE to have been relatively low during the irradiation. A significant finding was the extremely low levels of cesium released through intact coatings. This was true both during the irradiation and during post-irradiation heating tests to temperatures as high as 1800°C. Post-irradiation safety test fuel performance was generally excellent. Silver release from the particles and compacts during irradiation was often very high. Extensive microanalysis of fuel particles was performed after irradiation and after high-temperature safety testing. The results of particle microanalysis indicate that the UCO fuel is effective at controlling the oxygen partial pressure within the particle and limiting kernel migration. Post-irradiation examination has provided the final body of data that speaks to the quality of the AGR-1 fuel, building on the as-fabricated fuel characterization and irradiation data. In addition to the extensive volume of results generated, the work also resulted in a number of novel analysis techniques and lessons learned that are being applied to the examination of fuel from subsequent TRISO fuel irradiations. This report provides a summary of the results obtained as part of the AGR-1 PIE campaign over its approximately 5-year duration.

  11. Mango kernel starch as a novel edible coating for enhancing shelf- life of tomato (Solanum lycopersicum) fruit.

    PubMed

    Nawab, Anjum; Alam, Feroz; Hasnain, Abid

    2017-10-01

    Mango kernel starch (MKS) coatings containing different plasticizers were used to extend the shelf life of tomato. The coating slurry was prepared by gelatinizing 4% mango kernel starch, plasticized with glycerol, sorbitol and their 1:1 mixture (50% of starch weight; db). The samples were kept at room temperature (20°C) and analyzed for shelf life. Significant differences between coated and control fruits were observed; all the coatings delayed the ripening process, characterized by reduced weight loss and restricted changes in soluble solids concentration, titratable acidity, ascorbic acid content, firmness and decay percentage compared to the uncoated sample. The formulations containing sorbitol were found to be the most effective, followed by the combined plasticizers (glycerol:sorbitol) and glycerol. Sensory evaluation conducted to monitor changes in color, texture and aroma also proved the efficacy of the MKS coating containing sorbitol in retaining the overall postharvest quality of tomato during the storage period. The results showed that MKS could be a promising coating material for tomatoes, delaying the ripening process by up to 20 days during storage at 20°C with no adverse effect on postharvest quality. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Gamma irradiation of peanut kernel to control mold growth and to diminish aflatoxin contamination

    NASA Astrophysics Data System (ADS)

    Y.-Y. Chiou, R.

    1996-09-01

    Peanut kernels inoculated with Aspergillus parasiticus conidia were gamma irradiated at 0, 2.5, 5.0 and 10 kGy using a Co-60 source. Levels higher than 2.5 kGy were effective in retarding the outgrowth of A. parasiticus and reducing the population of natural mold contaminants. However, complete elimination of these molds was not achieved even at the dose of 10 kGy. After 4 wk incubation of the inoculated kernels under humidified conditions, the aflatoxins produced by the surviving A. parasiticus were 69.12, 2.42, 57.36 and 22.28 μ/g, corresponding to the respective irradiation levels. The peroxide content of peanut oils prepared from the irradiated peanuts increased with increasing irradiation dosage. After storage, at each irradiation level, the peroxide content in peanuts stored at -14°C was lower than that in peanuts stored at ambient temperature. TBA values and CDHP contents of the oil increased with increasing irradiation dosage and changed slightly after storage. However, the fatty acid contents of the peanut oil varied within a limited range as affected by irradiation dosage and storage temperature. The SDS-PAGE protein pattern of peanuts revealed no noticeable variation of protein subunits resulting from irradiation and storage.

  13. Blending of palm oil, palm stearin and palm kernel oil in the preparation of table and pastry margarine.

    PubMed

    Norlida, H M; Md Ali, A R; Muhadhir, I

    1996-01-01

    Palm oil (PO; iodine value = 52), palm stearin (POs1, i.v. = 32, and POs2, i.v. = 40) and palm kernel oil (PKO; i.v. = 17) were blended in ternary systems. The blends were then studied for physical properties such as melting point (m.p.), solid fat content (SFC), and cooling curve. Results showed that palm stearin increased the blends' melting point while palm kernel oil reduced it. To produce table margarine with a melting point below 40 degrees C, POs1 should be added at a level of < or = 16%, and POs2 at a level of < or = 20%. At 10 degrees C, a eutectic interaction occurs between PO and PKO, reaching its maximum at about a 60:40 blending ratio. Within the eutectic region, to maintain the SFC at 10 degrees C at < or = 50%, POs1 may be added at a level of < or = 7%, and POs2 at a level of < or = 12%. The addition of palm stearin increased the blends' solidification Tmin and Tmax values, while PKO reduced them. Blends which contained a high amount of palm stearin showed melting point and cooling curves quite similar to those of pastry margarine.

  14. Development and analysis of composite flour bread.

    PubMed

    Menon, Lakshmi; Majumdar, Swarnali Dutta; Ravi, Usha

    2015-07-01

    The study elucidates the effect of utilizing cereal-pulse-fruit seed composite flour in the development and quality analysis of leavened bread. The composite flour was prepared using refined wheat flour (WF), high protein soy flour (SF), sprouted mung bean flour (MF) and mango kernel flour (MKF). Three variations were formulated such as V-I (WF: SF: MF: MKF = 85:5:5:5), V-II (WF: SF: MF: MKF = 70:10:10:10), and V-III (WF: SF: MF: MKF = 60:14:13:13). Pertinent functional, physico-chemical and organoleptic attributes were studied in composite flour variations and their bread preparations. Physical characteristics of the bread variations revealed a percentage decrease in loaf height (14 %) and volume (25 %) and 20 % increase in loaf weight with increased substitution of composite flour. The sensory evaluation of experimental breads on a nine-point hedonic scale revealed that V-I score was 5 % higher than the standard bread. Hence, the present study highlighted the nutrient enrichment of bread on incorporation of a potential waste material mango kernel, soy and sprouted legume. Relevant statistical tests were done to analyze the significance of means for all tested parameters.

  15. Organizing for ontological change: The kernel of an AIDS research infrastructure

    PubMed Central

    Polk, Jessica Beth

    2015-01-01

    Is it possible to prepare and plan for emergent and changing objects of research? Members of the Multicenter AIDS Cohort Study have been investigating AIDS for over 30 years, and in that time, the disease has been repeatedly transformed. Over the years and across many changes, members have continued to study HIV disease while in the process regenerating an adaptable research organization. The key to sustaining this technoscientific flexibility has been what we call the kernel of a research infrastructure: ongoing efforts to maintain the availability of resources and services that may be brought to bear in the investigation of new objects. In the case of the Multicenter AIDS Cohort Study, these resources are as follows: specimens and data, calibrated instruments, heterogeneous experts, and participating cohorts of gay and bisexual men. We track three ontological transformations, examining how members prepared for and responded to changes: the discovery of a novel retroviral agent (HIV), the ability to test for that agent, and the transition of the disease from fatal to chronic through pharmaceutical intervention. Respectively, we call the work, ‘technologies’, and techniques of adapting to these changes, ‘repurposing’, ‘elaborating’, and ‘extending the kernel’. PMID:26477206

  16. Increased acetylcholine esterase activity produced by the administration of an aqueous extract of the seed kernel of Thevetia peruviana and its role on acute and subchronic intoxication in mice

    PubMed Central

    Marroquín-Segura, Rubén; Calvillo-Esparza, Ricardo; Mora-Guevara, José Luis Alfredo; Tovalín-Ahumada, José Horacio; Aguilar-Contreras, Abigail; Hernández-Abad, Vicente Jesús

    2014-01-01

    Background: The real mechanism of Thevetia peruviana poisoning remains unclear. Cholinergic activity is important for cardiac function regulation; however, the effect of T. peruviana on cholinergic activity is not well-known. Objective: To study the effect of the acute administration of an aqueous extract of the seed kernel of T. peruviana on acetylcholine esterase (AChE) activity in CD1 mice as well as its implications in the sub-chronic toxicity of the extract. Materials and Methods: A dose of 100 mg/kg of the extract was administered to CD1 mice and after 7 days, serum was obtained for ceruloplasmin (CP) quantitation and liver function tests. Another group of mice received a 50 mg/kg dose of the extract 3 times within a 1 h interval, and AChE activity was determined for those animals. Heart tissue histological preparations were obtained from a group of mice that received a daily 50 mg/kg dose of the extract over a 30-day period. Results: CP levels for the treated group were higher than those for the control group (Student's t-test, P ≤ 0.001). AChE activity in the treated group was significantly higher than in the control group (Tukey test, control vs. T. peruviana, P ≤ 0.001). Heart tissue histological preparations showed leukocyte infiltrates and necrotic areas, consistent with infarcts. Conclusion: The increased levels of AChE and the heart tissue infiltrative lesions induced by the aqueous seed kernel extract of T. peruviana explain in part the poisoning caused by this plant, which can be related to an inflammatory process. PMID:24914300

  17. Increased acetylcholine esterase activity produced by the administration of an aqueous extract of the seed kernel of Thevetia peruviana and its role on acute and subchronic intoxication in mice.

    PubMed

    Marroquín-Segura, Rubén; Calvillo-Esparza, Ricardo; Mora-Guevara, José Luis Alfredo; Tovalín-Ahumada, José Horacio; Aguilar-Contreras, Abigail; Hernández-Abad, Vicente Jesús

    2014-01-01

    The real mechanism of Thevetia peruviana poisoning remains unclear. Cholinergic activity is important for cardiac function regulation; however, the effect of T. peruviana on cholinergic activity is not well-known. The objective was to study the effect of the acute administration of an aqueous extract of the seed kernel of T. peruviana on acetylcholine esterase (AChE) activity in CD1 mice as well as its implications in the sub-chronic toxicity of the extract. A dose of 100 mg/kg of the extract was administered to CD1 mice and after 7 days, serum was obtained for ceruloplasmin (CP) quantitation and liver function tests. Another group of mice received a 50 mg/kg dose of the extract 3 times within a 1 h interval, and AChE activity was determined for those animals. Heart tissue histological preparations were obtained from a group of mice that received a daily 50 mg/kg dose of the extract over a 30-day period. CP levels for the treated group were higher than those for the control group (Student's t-test, P ≤ 0.001). AChE activity in the treated group was significantly higher than in the control group (Tukey test, control vs. T. peruviana, P ≤ 0.001). Heart tissue histological preparations showed leukocyte infiltrates and necrotic areas, consistent with infarcts. The increased levels of AChE and the heart tissue infiltrative lesions induced by the aqueous seed kernel extract of T. peruviana explain in part the poisoning caused by this plant, which can be related to an inflammatory process.

  18. X-ray Analysis of Defects and Anomalies in AGR-5/6/7 TRISO Particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmreich, Grant W.; Hunn, John D.; Skitt, Darren J.

    2017-06-01

    Coated particle fuel batches J52O-16-93164, 93165, 93166, 93168, 93169, 93170, and 93172 were produced by Babcock and Wilcox Technologies (BWXT) for possible selection as fuel for the Advanced Gas Reactor Fuel Development and Qualification (AGR) Program’s AGR-5/6/7 irradiation test in the Idaho National Laboratory (INL) Advanced Test Reactor (ATR), or for use in other tests. Each batch was coated in a 150-mm-diameter production-scale fluidized-bed chemical vapor deposition (CVD) furnace. Tristructural isotropic (TRISO) coatings were deposited on 425-μm-nominal-diameter spherical kernels from BWXT lot J52R-16-69317 containing a mixture of 15.4%-enriched uranium carbide and uranium oxide (UCO), with the exception of Batch 93164, which used similar kernels from BWXT lot J52L-16-69316. The TRISO coatings consisted of a ~50% dense carbon buffer layer with 100-μm-nominal thickness, a dense inner pyrolytic carbon (IPyC) layer with 40-μm-nominal thickness, a silicon carbide (SiC) layer with 35-μm-nominal thickness, and a dense outer pyrolytic carbon (OPyC) layer with 40-μm-nominal thickness. Each coated particle batch was sieved to upgrade the particles by removing over-sized and under-sized material, and the upgraded batch was designated by appending the letter A to the end of the batch number (e.g., 93164A). Secondary upgrading by sieving was performed on the upgraded batches to remove specific anomalies identified during analysis for Defective IPyC, and the resulting batches were designated by appending the letter B to the end of the batch number (e.g., 93165B). Following this secondary upgrading, coated particle composite J52R-16-98005 was produced by BWXT as fuel for the AGR Program’s AGR-5/6/7 irradiation test in the INL ATR. This composite comprised coated particle fuel batches J52O-16-93165B, 93168B, 93169B, and 93170B.
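
    The nominal kernel and coating dimensions quoted above lend themselves to a compact summary data structure. The short sketch below (Python; the dictionary layout and names are illustrative only, not a program input format) records those nominal values and derives the nominal coated-particle diameter under the assumption of concentric layers with the listed radial thicknesses.

    # Nominal AGR-5/6/7 TRISO geometry from the batch description above (micrometers).
    triso_nominal = {
        "kernel_diameter": 425.0,      # UCO kernel, nominal diameter
        "layers": [                    # (layer name, nominal radial thickness)
            ("buffer", 100.0),         # ~50% dense carbon buffer
            ("IPyC", 40.0),            # inner pyrolytic carbon
            ("SiC", 35.0),             # silicon carbide
            ("OPyC", 40.0),            # outer pyrolytic carbon
        ],
    }

    def nominal_particle_diameter(spec):
        """Outer diameter assuming concentric layers of the nominal thicknesses."""
        return spec["kernel_diameter"] + 2.0 * sum(t for _, t in spec["layers"])

    print(nominal_particle_diameter(triso_nominal))  # 855.0 um for these nominal values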

  19. SEM-EDX analysis of an unknown "known" white powder found in a shipping container from Peru

    NASA Astrophysics Data System (ADS)

    Albright, Douglas C.

    2009-05-01

    In 2008, an unknown white powder was discovered spilled inside of a shipping container of whole kernel corn during an inspection by federal inspectors in the port of Baltimore, Maryland. The container was detained and quarantined while a sample of the powder was collected and sent to a federal laboratory where it was screened using chromatography for the presence of specific poisons and pesticides with negative results. Samples of the corn kernels and the white powder were forwarded to the Food and Drug Administration, Forensic Chemistry Center for further analysis. Stereoscopic Light Microscopy (SLM), Scanning Electron Microscopy/Energy Dispersive X-ray Spectrometry (SEM/EDX), and Polarized Light Microscopy/Infrared Spectroscopy (PLM-IR) were used in the analysis of the kernels and the unknown powder. Based on the unique particle analysis by SLM and SEM as well as the detection of the presence of aluminum and phosphorous by EDX, the unknown was determined to be consistent with reacted aluminum phosphide (AlP). While commonly known in the agricultural industry, aluminum phosphide is relatively unknown in the forensic community. A history of the use and acute toxicity of this compound along with some very unique SEM/EDX analysis characteristics of aluminum phosphide will be discussed.

  20. Intelligent Control via Wireless Sensor Networks for Advanced Coal Combustion Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aman Behal; Sunil Kumar; Goodarz Ahmadi

    2007-08-05

    Numerical modeling of solid-gas flow, system identification for purposes of modeling and control, and wireless sensor and actor network design were pursued as part of this project. Time series input-output data were obtained from NETL's Morgantown CFB facility courtesy of Dr. Lawrence Shadle. The data were run through a nonlinear kernel estimator, and nonparametric models were obtained for the system. Linear and first-order nonlinear kernels were then utilized to obtain a state-space description of the system. Neural networks were also trained and performed better at capturing the plant dynamics. It is possible to use these networks to find a plant model, and the inversion of this model can be used to control the system. These models allow comparison with physics-based models, whose parameters can then be determined by comparing them against the available data-based model. On a parallel track, Dr. Kumar designed an energy-efficient and reliable transport protocol for wireless sensor and actor networks, where the sensors could be the various types of wireless sensors used in CFB-based coal combustion systems and the actors are more powerful wireless nodes that set up a communication network while avoiding data congestion. Dr. Ahmadi's group studied gas-solid flow in a duct. It was seen that the particle concentration clearly shows a preferential distribution. The particles strongly interact with the turbulent eddies and are concentrated in narrow bands that evolve with time. It is believed that the observed preferential concentration arises because these particles are flung out of eddies by centrifugal force.

  1. Interactions of non-spherical particles in simple flows

    NASA Astrophysics Data System (ADS)

    Niazi, Mehdi; Brandt, Luca; Costa, Pedro; Breugem, Wim-Paul

    2015-11-01

    The behavior of particles in a flow affects the global transport and rheological properties of the mixture. In recent years much effort has therefore been devoted to the development of efficient methods for the direct numerical simulation (DNS) of the motion of spherical rigid particles immersed in an incompressible fluid. However, the literature on non-spherical particle suspensions is quite scarce, despite the fact that such particles are more common in practice. We develop a numerical algorithm to simulate finite-size spheroidal particles in shear flows to gain new understanding of the flow of particle suspensions. In particular, we wish to understand the role of inertia and its effect on the flow behavior. For this purpose, DNS with a direct-forcing immersed boundary method is used, with collision and lubrication models for particle-particle and particle-wall interactions. We discuss the pair interactions, relative motion, and rotation of two sedimenting spheroids and show that the interaction time increases significantly for non-spherical particles. More interestingly, we show that the particles are attracted to each other from larger lateral displacements. This has important implications for collision kernels. This work was supported by the European Research Council Grant No. ERC-2013-CoG-616186, TRITOS, and by the Swedish Research Council (VR).

  2. Inertial flow regimes of the suspension of finite size particles

    NASA Astrophysics Data System (ADS)

    Lashgari, Iman; Picano, Francesco; Brandt, Luca

    2015-03-01

    We study the inertial flow regimes of suspensions of finite-size, neutrally buoyant particles. These suspensions exhibit three different regimes as the Reynolds number, Re, and the particle volume fraction, Φ, are varied. At low values of Re and Φ, the flow is laminar-like, with viscous stress the dominant term in the stress budget. At high Re and relatively small Φ, the flow is turbulent-like, with the Reynolds stress making the largest contribution to the total stress. At high Φ, the flow regime takes the form of inertial shear-thickening, characterized by a significant enhancement of the wall shear stress that is due not to an increase in the Reynolds stress but to the particle stress. We further analyze the local behavior of the suspension in the three regimes by studying particle dispersion and collisions. Turbulent cases show higher levels of particle dispersion and higher values of the collision kernel (the radial distribution function times the particle relative velocity as a function of the distance between the particles) than the inertial shear-thickening regime, providing additional evidence of two different transport mechanisms in the Bagnoldian regime. Support from the European Research Council (ERC) is acknowledged.
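
    The collision kernel quoted in parentheses above is built from pair statistics: the radial distribution function and the mean radial relative velocity of particle pairs at a given separation. The sketch below (Python/NumPy) is a minimal estimator of those two quantities from one snapshot of positions and velocities; it assumes a fully periodic cubic box and uses a plain O(N^2) pair loop, so it is meant for small illustrative data sets rather than the DNS data of the study.

    import numpy as np

    def pair_statistics(pos, vel, box, r_max, n_bins=32):
        """Radial distribution function g(r) and mean radial relative speed per distance bin.

        pos, vel: (N, 3) arrays; box: edge length of the periodic cube.
        Plain O(N^2) pair loop -- only meant for small illustrative data sets.
        """
        n = len(pos)
        edges = np.linspace(0.0, r_max, n_bins + 1)
        counts = np.zeros(n_bins)
        wr_sum = np.zeros(n_bins)
        for i in range(n - 1):
            dr = pos[i + 1:] - pos[i]
            dr -= box * np.round(dr / box)                     # minimum-image convention
            dist = np.linalg.norm(dr, axis=1)
            dv = vel[i + 1:] - vel[i]
            w_r = np.abs(np.sum(dv * dr, axis=1) / np.maximum(dist, 1e-12))
            mask = dist < r_max
            idx = np.digitize(dist[mask], edges) - 1
            np.add.at(counts, idx, 1.0)
            np.add.at(wr_sum, idx, w_r[mask])
        r_mid = 0.5 * (edges[:-1] + edges[1:])
        shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
        n_pairs_uniform = 0.5 * n * (n - 1) / box ** 3 * shell_vol
        g_r = counts / n_pairs_uniform
        mean_wr = np.where(counts > 0, wr_sum / np.maximum(counts, 1), 0.0)
        return r_mid, g_r, mean_wr     # their product at small r approximates a collision kernel

    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 1.0, (500, 3))
    vel = rng.normal(0.0, 1.0, (500, 3))
    r, g, wr = pair_statistics(pos, vel, box=1.0, r_max=0.2)
    print(g[:5], wr[:5])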

  3. StarSmasher: Smoothed Particle Hydrodynamics code for smashing stars and planets

    NASA Astrophysics Data System (ADS)

    Gaburov, Evghenii; Lombardi, James C., Jr.; Portegies Zwart, Simon; Rasio, F. A.

    2018-05-01

    Smoothed Particle Hydrodynamics (SPH) is a Lagrangian particle method that approximates a continuous fluid as discrete nodes, each carrying various parameters such as mass, position, velocity, pressure, and temperature. In an SPH simulation the resolution scales with the particle density; StarSmasher is able to handle both equal-mass and equal-number-density particle models. StarSmasher solves for the hydrodynamic forces by calculating the pressure for each particle as a function of the particle's properties: density, internal energy, and internal properties (e.g. temperature and mean molecular weight). The code implements variational equations of motion and libraries to calculate the gravitational forces between particles using direct summation on NVIDIA graphics cards. Using direct summation instead of a tree-based algorithm for gravity increases the accuracy of the gravity calculations at the cost of speed. The code uses a cubic spline for the smoothing kernel and an artificial viscosity prescription coupled with a Balsara switch to prevent unphysical interparticle penetration. The code also adds an artificial relaxation force to the equations of motion, contributing a drag term to the calculated accelerations during relaxation integrations. Initially called StarCrash, StarSmasher was originally developed by Rasio.
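
    For reference, the cubic spline mentioned above is the standard M4 spline used as the smoothing kernel in many SPH codes. A minimal 3-D implementation is sketched below (Python/NumPy); the 1/(pi h^3) normalization assumes the common convention in which the kernel support extends to 2h, which may differ in detail from StarSmasher's internal conventions.

    import numpy as np

    def cubic_spline_w(r, h):
        """Standard M4 cubic spline SPH kernel in 3-D with support radius 2h."""
        q = np.asarray(r, dtype=float) / h
        sigma = 1.0 / (np.pi * h ** 3)        # 3-D normalization for this convention
        w = np.where(
            q < 1.0,
            1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
            np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0),
        )
        return sigma * w

    # Quick check: the kernel should integrate to ~1 over its support.
    r = np.linspace(0.0, 2.0, 200001)
    dr = r[1] - r[0]
    print(np.sum(4.0 * np.pi * r ** 2 * cubic_spline_w(r, h=1.0)) * dr)  # ~1.0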

  4. Preparation and properties of inhalable nanocomposite particles: effects of the temperature at a spray-dryer inlet upon the properties of particles.

    PubMed

    Tomoda, Keishiro; Ohkoshi, Takumi; Kawai, Yusaku; Nishiwaki, Motoko; Nakajima, Takehisa; Makino, Kimiko

    2008-02-15

    To overcome the disadvantages both of microparticles and nanoparticles for inhalation, we have prepared nanocomposite particles as drug carriers targeting lungs. The nanocomposite particles having sizes about 2.5 microm composed of sugar and drug-loaded PLGA nanoparticles can reach deep in the lungs, and they are decomposed into drug-loaded PLGA nanoparticles in the alveoli. Sugar was used as a binder of PLGA nanoparticles to be nanocomposite particles and is soluble in alveolar lining fluid. The primary nanoparticles containing bioactive materials were prepared by using a probe sonicator. And then they were spray dried with carrier materials, such as trehalose and lactose. The effects of inlet temperature of spray dryer were studied between 60 and 120 degrees C and the kind of sugars upon properties of nanocomposite particles. When the inlet temperatures were 80 and 90 degrees C, nanocomposite particles with average diameters of about 2.5 microm are obtained and they are decomposed into primary nanoparticles in water, in both sugars are used as a binder. But, those prepared above 100 degrees C are not decomposed into nanoparticles in water, while the average diameter was almost 2.5 microm. On the other hand, nanocomposite particles prepared at lower inlet temperatures have larger sizes but better redispersion efficiency in water. By the measurements of aerodynamic diameters of the nanocomposite particles prepared with trehalose at 70, 80, and 90 degrees C, it was shown that the particles prepared at 80 degrees C have the highest fine particle fraction (FPF) value and the particles are suitable for pulmonary delivery of bioactive materials deep in the lungs. Meanwhile the case with lactose, the particles prepared at 90 degrees C have near the best FPF value but they have many particles larger than 11 microm.

  5. Goldstonic pseudoscalar mesons in Bethe-Salpeter-inspired setting

    NASA Astrophysics Data System (ADS)

    Lucha, Wolfgang; Schöberl, Franz F.

    2018-03-01

    For a two-particle bound-state equation closer to its Bethe-Salpeter origins than Salpeter’s equation, with effective interaction kernel deliberately forged such as to ensure, in the limit of zero mass of the bound-state constituents, the vanishing of the arising bound-state mass, we scrutinize the emerging features of the lightest pseudoscalar mesons for their agreement with the behavior predicted by a generalization of the Gell-Mann-Oakes-Renner relation.

  6. Photon Counting Computed Tomography With Dedicated Sharp Convolution Kernels: Tapping the Potential of a New Technology for Stent Imaging.

    PubMed

    von Spiczak, Jochen; Mannil, Manoj; Peters, Benjamin; Hickethier, Tilman; Baer, Matthias; Henning, André; Schmidt, Bernhard; Flohr, Thomas; Manka, Robert; Maintz, David; Alkadhi, Hatem

    2018-05-23

    The aims of this study were to assess the value of a dedicated sharp convolution kernel for photon counting detector (PCD) computed tomography (CT) for coronary stent imaging and to evaluate to what extent iterative reconstructions can compensate for potential increases in image noise. For this in vitro study, a phantom simulating coronary artery stenting was prepared. Eighteen different coronary stents were expanded in plastic tubes of 3 mm diameter. Tubes were filled with diluted contrast agent, sealed, and immersed in oil calibrated to an attenuation of -100 HU simulating epicardial fat. The phantom was scanned in a modified second-generation 128-slice dual-source CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Erlangen, Germany) equipped with both a conventional energy-integrating detector and a PCD. Image data were acquired using the PCD part of the scanner with 48 × 0.25 mm slices, a tube voltage of 100 kVp, and a tube current-time product of 100 mAs. Images were reconstructed using a conventional convolution kernel for stent imaging with filtered back-projection (B46) and with sinogram-affirmed iterative reconstruction (SAFIRE) at level 3 (I463). For comparison, a dedicated sharp convolution kernel with filtered back-projection (D70) and SAFIRE level 3 (Q703) and level 5 (Q705) was used. The D70 and Q70 kernels were specifically designed for coronary stent imaging with PCD CT by optimizing the image modulation transfer function and the separation of contrast edges. Two independent, blinded readers evaluated subjective image quality (Likert scale 0-3, where 3 = excellent), in-stent diameter difference, in-stent attenuation difference, mathematically defined image sharpness, and noise of each reconstruction. Interreader reliability was calculated using Goodman and Kruskal's γ and intraclass correlation coefficients (ICCs). Differences in image quality were evaluated using a Wilcoxon signed-rank test. Differences in in-stent diameter difference, in-stent attenuation difference, image sharpness, and image noise were tested using a paired-sample t test corrected for multiple comparisons. Interreader and intrareader reliability were excellent (γ = 0.953, ICCs = 0.891-0.999, and γ = 0.996, ICCs = 0.918-0.999, respectively). Reconstructions using the dedicated sharp convolution kernel yielded significantly better results regarding image quality (B46: 0.4 ± 0.5 vs D70: 2.9 ± 0.3; P < 0.001), in-stent diameter difference (1.5 ± 0.3 vs 1.0 ± 0.3 mm; P < 0.001), and image sharpness (728 ± 246 vs 2069 ± 411 CT numbers/voxel; P < 0.001). Regarding in-stent attenuation difference, no significant difference was observed between the 2 kernels (151 ± 76 vs 158 ± 92 CT numbers; P = 0.627). Noise was significantly higher in all sharp convolution kernel images but was reduced by 41% and 59% by applying SAFIRE levels 3 and 5, respectively (B46: 16 ± 1, D70: 111 ± 3, Q703: 65 ± 2, Q705: 46 ± 2 CT numbers; P < 0.001 for all comparisons). A dedicated sharp convolution kernel for PCD CT imaging of coronary stents yields superior qualitative and quantitative image characteristics compared with conventional reconstruction kernels. The resulting higher noise levels in sharp-kernel PCD imaging can be partially compensated for with iterative image reconstruction techniques.
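
    The resolution/noise trade-off described above is generic to the choice of reconstruction (convolution) kernel in filtered back-projection. The toy sketch below (Python/NumPy) is purely schematic and unrelated to the vendor B46/D70/Q70 kernels: it filters a noisy one-dimensional edge profile with a plain ramp filter versus a Hann-apodized ramp and reports a simple edge-response and noise measure, illustrating that the sharper kernel enhances edges at the cost of amplified noise.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 512
    x = np.arange(n)
    profile = (x > n // 2).astype(float) + rng.normal(0.0, 0.05, n)  # noisy edge "projection"

    freq = np.fft.rfftfreq(n)                                        # cycles per sample
    ramp = np.abs(freq)                                              # sharp filter (pure ramp)
    hann = ramp * 0.5 * (1.0 + np.cos(np.pi * freq / freq.max()))    # smooth (Hann-apodized) filter

    def apply_filter(signal, filt):
        return np.fft.irfft(np.fft.rfft(signal) * filt, n)

    for name, filt in [("sharp ramp", ramp), ("Hann-apodized", hann)]:
        y = apply_filter(profile, filt)
        edge = np.max(np.abs(y[n // 2 - 8 : n // 2 + 8]))            # peak of the filtered edge
        noise = np.std(y[: n // 4])                                  # noise in a flat region
        print(f"{name:>14s}: edge response {edge:.3f}, noise {noise:.4f}")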

  7. Exact solutions for mass-dependent irreversible aggregations.

    PubMed

    Son, Seung-Woo; Christensen, Claire; Bizhani, Golnoosh; Grassberger, Peter; Paczuski, Maya

    2011-10-01

    We consider the mass-dependent aggregation process (k+1)X→X, given a fixed number of unit-mass particles in the initial state. One cluster is chosen with probability proportional to its mass and is merged either with its k neighbors in one dimension or, in the well-mixed case, with k other clusters picked at random. We find the same combinatorial exact solutions for the probability of finding any given configuration of particles on a ring or a line and in the well-mixed case. The mass distribution of a single cluster exhibits scaling laws, and the finite-size scaling form is given. The relation to the classical sum kernel of irreversible aggregation is discussed.
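
    A direct Monte Carlo realization of the well-mixed (k+1)X→X process described above is straightforward and is a useful check on the exact combinatorial results. A minimal sketch follows (Python); the seed cluster is chosen with probability proportional to its mass and merged with k other clusters picked uniformly at random.

    import random

    def aggregate_well_mixed(n_particles, k, rng=random):
        """Simulate the well-mixed (k+1)X -> X process starting from unit-mass particles.

        Each step picks one cluster with probability proportional to its mass and
        merges it with k other distinct clusters chosen uniformly at random.
        Returns the cluster masses once fewer than k+1 clusters remain.
        """
        masses = [1] * n_particles
        while len(masses) > k:
            total = sum(masses)
            r = rng.uniform(0, total)              # mass-proportional choice of the seed cluster
            acc, seed = 0.0, len(masses) - 1
            for i, m in enumerate(masses):
                acc += m
                if r <= acc:
                    seed = i
                    break
            others = [i for i in range(len(masses)) if i != seed]
            partners = set(rng.sample(others, k))  # k uniformly chosen merge partners
            merged = masses[seed] + sum(masses[i] for i in partners)
            masses = [m for i, m in enumerate(masses) if i != seed and i not in partners]
            masses.append(merged)
        return masses

    print(aggregate_well_mixed(20, k=2))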

  8. Kernel abortion in maize : I. Carbohydrate concentration patterns and Acid invertase activity of maize kernels induced to abort in vitro.

    PubMed

    Hanft, J M; Jones, R J

    1986-06-01

    Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.

  9. Study on preparation method of Zanthoxylum bungeanum seeds kernel oil with zero trans-fatty acids.

    PubMed

    Liu, Tong; Yao, Shi-Yong; Yin, Zhong-Yi; Zheng, Xu-Xu; Shen, Yu

    2016-04-01

    The seed of Zanthoxylum bungeanum (Z. bungeanum) is a by-product of pepper production and is rich in unsaturated fatty acids, cellulose, and protein. The seed oil obtained by the traditional production process of squeezing or solvent extraction is of poor quality and cannot be used as an edible oil. In this paper, a new preparation method for Z. bungeanum seed kernel oil (ZSKO) was developed by comparing the advantages and disadvantages of alkali saponification-cold squeezing, alkali saponification-solvent extraction, and alkali saponification-supercritical fluid extraction with carbon dioxide (SFE-CO2). The results showed that alkali saponification-cold squeezing was the optimal preparation method for ZSKO. It comprised the following steps: the Z. bungeanum seed was pretreated by alkali saponification with 10% NaOH (w/w) at a solution temperature of 80 °C and a saponification reaction time of 45 min; the pretreated seed was separated by filtering, washed with water, and dried overnight at 50 °C; repeated squeezing was then carried out at 60 °C and 15% moisture content until no more oil was released; and the ZSKO was finally recovered by centrifugation. The resulting ZSKO contained more than 90% unsaturated fatty acids and no trans-fatty acids and qualified as a good edible oil with low acid and peroxide values. It was demonstrated that the alkali saponification-cold squeezing process could be scaled up and applied to the industrial production of ZSKO.

  10. Sorghum for human food--a review.

    PubMed

    Anglani, C

    1998-01-01

    A brief review of the literature on sorghum for human foods and on the relationship between some kernel characteristics and food quality is presented. The chief foods prepared with sorghum, such as tortillas, porridge, couscous, and baked goods, are described. Tortillas prepared with 75% whole sorghum and 25% yellow maize are better than those prepared with whole sorghum alone. A porridge formulation with a 30:40:30 mix of sorghum, maize, and cassava, respectively, has been shown to be the most acceptable combination. The cooked porridge Aceda has lower protein digestibility and higher biological value than the uncooked porridge Aceda. Sorghum is not considered a breadmaking flour, but the addition of 30% sorghum flour to wheat flour of 72% extraction rate produces a bread evaluated as good to excellent.

  11. Process for preparation of large-particle-size monodisperse latexes

    NASA Technical Reports Server (NTRS)

    Vanderhoff, J. W.; Micale, F. J.; El-Aasser, M. S.; Kornfeld, D. M. (Inventor)

    1981-01-01

    Monodisperse latexes having a particle size in the range of 2 to 40 microns are prepared by seeded emulsion polymerization in microgravity. A reaction mixture containing smaller monodisperse latex seed particles and predetermined amounts of monomer, emulsifier, initiator, inhibitor, and water is placed in a microgravity environment, and polymerization is initiated by heating. The reaction is allowed to continue until the seed particles grow to a predetermined size, and the resulting enlarged particles are then recovered. A plurality of particle-growing steps can be used to reach larger sizes within the stated range, with enlarged particles from the previous steps being used as seed particles for the succeeding steps. Microgravity enables the preparation of particles in the stated size range by avoiding the gravity-related problems of creaming and settling, as well as flocculation induced by mechanical shear, which have precluded their preparation in a normal gravity environment.

  12. Human serum albumin (HSA) nanoparticles: reproducibility of preparation process and kinetics of enzymatic degradation.

    PubMed

    Langer, K; Anhorn, M G; Steinhauser, I; Dreis, S; Celebi, D; Schrickel, N; Faust, S; Vogel, V

    2008-01-22

    Nanoparticles prepared from human serum albumin (HSA) are versatile carrier systems for drug delivery and can be prepared by an established desolvation process. A reproducible process with low batch-to-batch variability is required for transfer from the lab to industrial production. In the present study the effect of the batch-to-batch variability of the starting material HSA on the preparation of nanoparticles was investigated. HSA can form dimers and higher aggregates because of a free thiol group present in the molecule. Therefore, the quality of different HSA batches was analysed by size exclusion chromatography (SEC) and analytical ultracentrifugation (AUC). The amount of dimerised HSA detected by SEC did not affect particle preparation. Higher aggregates of the protein, detected in two batches by AUC, disturbed nanoparticle formation at pH values below 8.0. At pH 8.0 and above, monodisperse particles between 200 and 300 nm could be prepared with all batches, with higher pH values leading to smaller particles. Besides human-derived albumin, particle preparation based on recombinant human serum albumin (rHSA) was also feasible. Under comparable preparation conditions monodisperse nanoparticles could be achieved, and the same effects of protein aggregates on particle formation were observed. For nanoparticulate drug delivery systems the enzymatic degradation is a crucial parameter for the release of an embedded drug. For this reason, besides the particle preparation process, particle degradation in the presence of different enzymes was studied. Under acidic conditions HSA as well as rHSA nanoparticles could be digested by pepsin and cathepsin B. At neutral pH trypsin, proteinase K, and protease were suitable for particle degradation. It could be shown that the kinetics of particle degradation depended on the degree of particle stabilisation. Therefore, the degree of particle stabilisation will influence drug release after cellular accumulation of HSA nanoparticles.

  13. 7 CFR 810.602 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of flaxseed kernels that are badly ground-damaged, badly weather... instructions. Also, underdeveloped, shriveled, and small pieces of flaxseed kernels removed in properly... recleaning. (c) Heat-damaged kernels. Kernels and pieces of flaxseed kernels that are materially discolored...

  14. Calibration of discrete element model parameters: soybeans

    NASA Astrophysics Data System (ADS)

    Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal

    2018-05-01

    Discrete element method (DEM) simulations are broadly used to gain insight into the flow characteristics of granular materials in complex particulate systems. The DEM input parameters of a model are a critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine the DEM input parameters for the Hertz-Mindlin model using soybeans as the granular material. To achieve this aim, a widely accepted calibration approach based on a standard box-type apparatus was used. Qualitative and quantitative findings such as the particle profile, the height of kernels retained against the acrylic wall, and the angle of repose from experiments and numerical simulations were compared to obtain the parameters. The calibrated set of DEM input parameters includes (a) material properties: particle geometric mean diameter (6.24 mm); spherical shape; particle density (1220 kg m^{-3}); and (b) interaction parameters, particle-particle: coefficient of restitution (0.17), coefficient of static friction (0.26), coefficient of rolling friction (0.08); and particle-wall: coefficient of restitution (0.35), coefficient of static friction (0.30), coefficient of rolling friction (0.08). The results may adequately be used to simulate particle-scale mechanics (grain commingling, flow/motion, forces, etc.) of soybeans in post-harvest machinery and devices.
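
    The calibrated parameter set reported above maps directly onto the inputs of a Hertz-Mindlin contact model. The snippet below (Python) simply collects those values in one place; the dictionary layout and key names are illustrative and are not the input format of any particular DEM package.

    # Calibrated DEM (Hertz-Mindlin) input parameters for soybeans, as reported above.
    soybean_dem_params = {
        "particle": {
            "geometric_mean_diameter_mm": 6.24,
            "shape": "sphere",
            "density_kg_per_m3": 1220.0,
        },
        "particle_particle": {
            "coefficient_of_restitution": 0.17,
            "coefficient_of_static_friction": 0.26,
            "coefficient_of_rolling_friction": 0.08,
        },
        "particle_wall": {
            "coefficient_of_restitution": 0.35,
            "coefficient_of_static_friction": 0.30,
            "coefficient_of_rolling_friction": 0.08,
        },
    }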

  15. Kernel Abortion in Maize 1

    PubMed Central

    Hanft, Jonathan M.; Jones, Robert J.

    1986-01-01

    Kernels cultured in vitro were induced to abort by high temperature (35°C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35°C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth. PMID:16664846

  16. Analysis and fabrication of tungsten CERMET materials for ultra-high temperature reactor applications via pulsed electric current sintering

    NASA Astrophysics Data System (ADS)

    Webb, Jonathan A.

    The optimized development path for the fabrication of ultra-high temperature W-UO2 CERMET fuel elements was explored in this dissertation. A thorough literature search was conducted, which concluded that a W-UO2 fuel element must retain a fine tungsten microstructure and spherical UO2 kernels throughout the entire consolidation process. Combined Monte Carlo and Computational Fluid Dynamics (CFD) analyses were used to determine the effects of rhenium and gadolinia additions on the performance of W-UO2 fuel elements at refractory temperatures and in dry and water-submerged environments. The computational analysis also led to the design of quasi-optimized fuel elements that can meet thermal-hydraulic and neutronic requirements. A rigorous set of experiments was conducted to determine whether Pulsed Electric Current Sintering (PECS) can fabricate tungsten and W-CeO2 specimens to the geometries, densities, and microstructures required for high temperature fuel elements, as well as to determine the mechanisms involved in the PECS consolidation process. The CeO2 acts as a surrogate for UO2 fuel kernels in these experiments. The experiments seemed to confirm that PECS consolidation takes place via diffusional mass transfer; however, the densification process is greatly accelerated by the effects of current densities within the consolidating specimen. Fortunately, grain growth proceeds at a conventional rate, and the PECS process can yield near fully dense W and W-CeO2 specimens with a finer microstructure than other sintering techniques. PECS consolidation techniques were also shown to be capable of producing W-UO2 segments at near-prototypic geometries; however, great care must be taken to coat the fuel particles with tungsten prior to sintering. Great care must also be taken to ensure that the particles remain spherical under the uniaxial stress applied during PECS, which involves mixing different fuel kernel sizes in order to reduce the porosity of the initial green compact. Particle mixing techniques were also shown to be capable of producing consolidated CERMETs, but with a less than desirable microstructure. The work presented herein will help in the development of very high temperature reactors for terrestrial and space missions in the future.

  17. Neutron dose rate analysis on HTGR-10 reactor using Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Suwoto; Adrial, H.; Hamzah, A.; Zuhair; Bakhri, S.; Sunaryo, G. R.

    2018-02-01

    The HTGR-10 reactor has a cylinder-shaped core fuelled with TRISO coated particle kernels in spherical pebbles and a helium cooling system. The helium coolant outlet temperature from the reactor core is designed to be 700 °C. One advantage of the HTGR type reactor is its capability for co-generation: in addition to generating electricity, the reactor is designed to produce high-temperature heat that can be used for other processes. Each spherical fuel pebble contains 8335 TRISO UO2 coated kernel particles with enrichments of 10% and 17% dispersed in a graphite matrix. The main purpose of this study was to analyse the distribution of neutron dose rates generated by the HTGR-10 reactor. The calculation and analysis of the neutron dose rate in the HTGR-10 reactor core were performed using the Monte Carlo code MCNP5v1.6. The double heterogeneity of the TRISO coated kernel particles and of the spherical fuel pebbles in the HTGR-10 core is modelled well with MCNP5v1.6. The neutron flux-to-dose conversion factors taken from the International Commission on Radiological Protection (ICRP-74) were used to determine the dose rate passing through the active core, reflectors, core barrel, reactor pressure vessel (RPV), and biological shield. The neutron dose rates calculated with MCNP5v1.6 using the ICRP-74 (2009) conversion factors for radiation workers in the radial direction outside the RPV (radial position = 220 cm from the core center) are 9.22E-4 μSv/h and 9.58E-4 μSv/h for enrichments of 10% and 17%, respectively. The calculated neutron dose rates comply with BAPETEN Chairman’s Regulation Number 4 Year 2013 on Radiation Protection and Safety in Nuclear Energy Utilization, which sets the limit on the average effective dose for radiation workers at 20 mSv/year or 10 μSv/h. Thus the requirement for protection and safety of radiation workers from the radiation source is fulfilled. From this analysis, it can be concluded that the calculated neutron dose rates for the HTGR-10 core meet the required radiation safety standards.
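
    The dose-rate figures quoted above come from folding the calculated neutron flux with energy-dependent flux-to-dose conversion factors. The sketch below (Python) shows the arithmetic for a few energy groups; every number in it is a hypothetical placeholder rather than a value from the study, and real ICRP-74 factors are tabulated against neutron energy.

    # Multigroup flux-to-dose conversion (all numbers are hypothetical placeholders).
    group_flux = [5.0e1, 3.0e1, 1.0e1]      # neutron flux per group, n / (cm^2 s)
    group_conv = [1.0e-6, 4.0e-6, 1.5e-5]   # conversion factor per group, (uSv/h) per unit flux
    dose_rate = sum(f * c for f, c in zip(group_flux, group_conv))
    print(f"dose rate = {dose_rate:.2e} uSv/h")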

  18. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.
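
    As a concrete, if much simpler, illustration of the out-of-sample problem discussed above, the sketch below (Python/NumPy) extends a given training kernel matrix to new points by ridge regression of each matrix column on the inputs with an ordinary Gaussian base kernel. This is a generic regression-based extension in the spirit of the idea, not the paper's hyper reproducing kernel Hilbert space formulation.

    import numpy as np

    def gaussian_gram(A, B, gamma):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def extend_kernel(X_train, K_train, X_new, gamma=1.0, ridge=1e-3):
        """Extend a learned (nonparametric) kernel matrix to out-of-sample points.

        Each column K_train[:, j] (similarities to training point j) is regressed on the
        inputs using a Gaussian base kernel; evaluating the fits at X_new gives the block
        K(X_new, X_train). A generic sketch only, not the method of the paper.
        """
        G = gaussian_gram(X_train, X_train, gamma)
        alpha = np.linalg.solve(G + ridge * np.eye(len(X_train)), K_train)  # all columns at once
        return gaussian_gram(X_new, X_train, gamma) @ alpha                 # shape (n_new, n_train)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 2))
    K = gaussian_gram(X, X, gamma=0.5)        # stand-in for a learned nonparametric kernel matrix
    print(extend_kernel(X, K, rng.normal(size=(5, 2))).shape)               # (5, 30)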

  19. 7 CFR 810.1202 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... kernels. Kernels, pieces of rye kernels, and other grains that are badly ground-damaged, badly weather.... Also, underdeveloped, shriveled, and small pieces of rye kernels removed in properly separating the...-damaged kernels. Kernels, pieces of rye kernels, and other grains that are materially discolored and...

  20. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize.

    PubMed

    Chen, Jiafa; Zhang, Luyan; Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width, and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components was discussed.

  1. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize

    PubMed Central

    Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width, and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components was discussed. PMID:27070143

  2. 7 CFR 810.802 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of grain kernels for which standards have been established under the.... (d) Heat-damaged kernels. Kernels and pieces of grain kernels for which standards have been...

  3. Accumulation of primary and secondary metabolites in edible jackfruit seed tissues and scavenging of reactive nitrogen species.

    PubMed

    Fernandes, Fátima; Ferreres, Federico; Gil-Izquierdo, Angel; Oliveira, Andreia P; Valentão, Patrícia; Andrade, Paula B

    2017-10-15

    Studies involving the jackfruit tree (Artocarpus heterophyllus Lam.) focus on its fruit. Nevertheless, a considerable part of the jackfruit's weight is represented by its seeds. Despite being consumed in several countries, knowledge about the chemical composition of these seeds is scarce. In this work, the accumulation of primary and secondary metabolites in the jackfruit seed kernel and seed coating membrane was studied. Sixty-seven compounds were identified, sixty of them reported for the first time in jackfruit seed. Both tissues had a similar qualitative profile, but significant quantitative differences were found. The capacity of aqueous extracts from jackfruit seed kernel and seed coating membranes to scavenge the nitric oxide radical was also evaluated for the first time, the extract prepared from the seed coating membrane being the most potent. This work increases the potential revenue from a food that is still largely wasted. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Integral transforms of the quantum mechanical path integral: Hit function and path-averaged potential.

    PubMed

    Edwards, James P; Gerber, Urs; Schubert, Christian; Trejo, Maria Anabel; Weber, Axel

    2018-04-01

    We introduce two integral transforms of the quantum mechanical transition kernel that represent physical information about the path integral. These transforms can be interpreted as probability distributions on particle trajectories measuring respectively the relative contribution to the path integral from paths crossing a given spatial point (the hit function) and the likelihood of values of the line integral of the potential along a path in the ensemble (the path-averaged potential).

  5. Integral transforms of the quantum mechanical path integral: Hit function and path-averaged potential

    NASA Astrophysics Data System (ADS)

    Edwards, James P.; Gerber, Urs; Schubert, Christian; Trejo, Maria Anabel; Weber, Axel

    2018-04-01

    We introduce two integral transforms of the quantum mechanical transition kernel that represent physical information about the path integral. These transforms can be interpreted as probability distributions on particle trajectories measuring respectively the relative contribution to the path integral from paths crossing a given spatial point (the hit function) and the likelihood of values of the line integral of the potential along a path in the ensemble (the path-averaged potential).
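
    A quick way to get a feel for the two transforms is a free-particle Euclidean toy model: sample Brownian-bridge paths between fixed endpoints, count how often they cross a chosen spatial point (a crude hit-function estimate, valid only for the uniform free-particle weighting assumed here), and histogram the time average of a chosen potential along each path. The sketch below (Python/NumPy) does exactly that; it illustrates the quantities being defined and is not the authors' construction.

    import numpy as np

    rng = np.random.default_rng(2)
    n_paths, n_steps, T = 2000, 400, 1.0
    x_a, x_b = -1.0, 1.0                         # fixed endpoints of the transition kernel
    t = np.linspace(0.0, T, n_steps + 1)

    # Brownian bridges from x_a to x_b: pin a Brownian motion at both ends, add a straight line.
    dW = rng.normal(0.0, np.sqrt(T / n_steps), (n_paths, n_steps))
    W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
    bridge = W - (t / T) * W[:, -1:]
    paths = x_a + (x_b - x_a) * t / T + bridge

    # Hit-function estimate at a point z: fraction of paths crossing z at least once.
    z = 0.3
    crosses = np.any(np.sign(paths[:, :-1] - z) != np.sign(paths[:, 1:] - z), axis=1)
    print(f"fraction of paths crossing z = {z}: {crosses.mean():.3f}")

    # Path-averaged potential for V(x) = x^2: distribution of the time average of V per path.
    path_avg_V = (paths ** 2).mean(axis=1)
    hist, _ = np.histogram(path_avg_V, bins=20)
    print("path-averaged potential histogram counts:", hist.tolist())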

  6. Initial results from safety testing of US AGR-2 irradiation test fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, Robert Noel; Hunn, John D.; Baldwin, Charles A.

    Two cylindrical compacts containing tristructural isotropic (TRISO)-coated particles with kernels that contained a mixture of uranium carbide and uranium oxide (UCO) and two compacts with UO2-kernel TRISO particles have undergone 1600°C safety testing. These compacts were irradiated in the US Advanced Gas Reactor Fuel Development and Qualification Program's second irradiation test (AGR-2). The time-dependent releases of several radioisotopes (110mAg, 134Cs, 137Cs, 154Eu, 155Eu, 90Sr, and 85Kr) were monitored while heating the fuel specimens to 1600°C in flowing helium for 300 h. The UCO compacts behaved similarly to previously reported 1600°C-safety-tested UCO compacts from the AGR-1 irradiation. No failed TRISO or failed SiC were detected (based on krypton and cesium release), and cesium release through intact SiC was very low. Release behavior of silver, europium, and strontium appeared to be dominated by inventory originally released through intact coating layers during irradiation but retained in the compact matrix until it was released during safety testing. Both UO2 compacts exhibited cesium release from multiple particles whose SiC failed during the safety test. Europium and strontium release from these two UO2 compacts appeared to be dominated by release from the particles with failed SiC. Silver release was characteristically like the release from the UCO compacts in that an initial release of the majority of silver trapped in the matrix occurred during ramping to 1600°C. However, additional silver release was observed later in the safety testing due to the UO2 TRISO with failed SiC. Failure of the SiC layer in the UO2 fuel appears to have been dominated by CO corrosion, as opposed to the palladium degradation observed in AGR-1 UCO fuel.

  7. Initial results from safety testing of US AGR-2 irradiation test fuel

    DOE PAGES

    Morris, Robert Noel; Hunn, John D.; Baldwin, Charles A.; ...

    2017-08-18

    Two cylindrical compacts containing tristructural isotropic (TRISO)-coated particles with kernels that contained a mixture of uranium carbide and uranium oxide (UCO) and two compacts with UO2-kernel TRISO particles have undergone 1600°C safety testing. These compacts were irradiated in the US Advanced Gas Reactor Fuel Development and Qualification Program's second irradiation test (AGR-2). The time-dependent releases of several radioisotopes (110mAg, 134Cs, 137Cs, 154Eu, 155Eu, 90Sr, and 85Kr) were monitored while heating the fuel specimens to 1600°C in flowing helium for 300 h. The UCO compacts behaved similarly to previously reported 1600°C-safety-tested UCO compacts from the AGR-1 irradiation. No failed TRISO or failed SiC were detected (based on krypton and cesium release), and cesium release through intact SiC was very low. Release behavior of silver, europium, and strontium appeared to be dominated by inventory originally released through intact coating layers during irradiation but retained in the compact matrix until it was released during safety testing. Both UO2 compacts exhibited cesium release from multiple particles whose SiC failed during the safety test. Europium and strontium release from these two UO2 compacts appeared to be dominated by release from the particles with failed SiC. Silver release was characteristically like the release from the UCO compacts in that an initial release of the majority of silver trapped in the matrix occurred during ramping to 1600°C. However, additional silver release was observed later in the safety testing due to the UO2 TRISO with failed SiC. Failure of the SiC layer in the UO2 fuel appears to have been dominated by CO corrosion, as opposed to the palladium degradation observed in AGR-1 UCO fuel.

  8. SU-E-T-236: Deconvolution of the Total Nuclear Cross-Sections of Therapeutic Protons and the Characterization of the Reaction Channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulmer, W.

    2015-06-15

    Purpose: Knowledge of the total nuclear cross-section Qtot(E) of therapeutic protons provides important information in advanced radiotherapy with protons, such as the decrease of the fluence of primary protons, the release of secondary particles (neutrons, protons, deuterons, etc.), and the production of nuclear fragments (heavy recoils), which usually undergo β+/− decay with emission of γ-quanta. Determination of Qtot(E) is therefore an important tool for sophisticated calculation algorithms of dose distributions. This cross-section can be determined by a linear combination of shifted Gaussian kernels and an error function. The resonances resulting from deconvolutions in the energy space can be associated with typical nuclear reactions. Methods: The described method for the determination of Qtot(E) results from an extension of the Breit-Wigner formula and a rather extended version of the nuclear shell theory that includes nuclear correlation effects, clusters, and highly excited/virtually excited nuclear states. The elastic energy transfer of protons to nucleons (the quantum numbers of the target nucleus remain constant) can be removed by the mentioned deconvolution. Results: The deconvolution of the error-function term of the type c_erf·erf((E - E_Th)/σ_erf) is the main contribution to obtaining the various nuclear reactions as resonances, since the elastic part of the energy transfer is removed. The nuclear products of various elements of therapeutic interest, such as oxygen and calcium, are classified and calculated. Conclusions: The release of neutrons is greatly underestimated, in particular for low-energy protons. The transport of secondary particles, e.g. cluster formation by deuterium, tritium, and α-particles, makes an essential contribution to the secondary particles, and the heavy recoils, which create γ-quanta by decay reactions, lead to broadening of the scatter profiles. These contributions cannot be accounted for by a single Gaussian kernel in the description of lateral scatter.
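
    The functional form described above, a linear combination of shifted Gaussian kernels plus an error-function threshold term, is straightforward to fit to tabulated cross-section data. A hedged sketch using scipy's curve_fit follows; it keeps a single Gaussian resonance for brevity, fits synthetic data generated from the model itself, and uses purely illustrative parameter values throughout.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def q_tot(E, c_erf, E_th, sigma_erf, a1, E1, s1):
        """Error-function rise plus one Gaussian resonance (simplified model of Qtot(E))."""
        return c_erf * erf((E - E_th) / sigma_erf) + a1 * np.exp(-((E - E1) ** 2) / (2.0 * s1 ** 2))

    # Synthetic "data" generated from the model itself, only to demonstrate the fitting call.
    E = np.linspace(5.0, 250.0, 200)                 # proton energy grid (illustrative units)
    true_params = (0.5, 12.0, 6.0, 0.2, 22.0, 8.0)
    rng = np.random.default_rng(3)
    y = q_tot(E, *true_params) + rng.normal(0.0, 0.005, E.size)

    popt, _ = curve_fit(q_tot, E, y, p0=(0.4, 10.0, 5.0, 0.1, 20.0, 10.0))
    print(np.round(popt, 3))                         # recovered parameters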

  9. Adhesion and volume constraints via nonlocal interactions determine cell organisation and migration profiles.

    PubMed

    Carrillo, José Antonio; Colombi, Annachiara; Scianna, Marco

    2018-05-14

    The description of cell spatial patterns and characteristic distances is fundamental to a wide range of physio-pathological biological phenomena, from morphogenesis to cancer growth. Discrete particle models are widely used in this field, since they focus on the cell level of abstraction and are able to preserve the identity of single individuals while reproducing their behavior. In particular, a fundamental role in determining the usefulness and realism of a particle-based mathematical approach is played by the choice of the intercellular pairwise interaction kernel and by the estimate of its parameters. The aim of the paper is to demonstrate how the concept of H-stability, deriving from statistical mechanics, can have important implications in this respect. For any given interaction kernel, it allows one to predict a priori the regions of the free parameter space that result in stable configurations of the system, characterized by a finite and strictly positive minimal interparticle distance, which is fundamental when dealing with biological phenomena. The proposed analytical arguments are indeed able to restrict the range of possible variations of selected model coefficients, whose exact estimate however requires further investigation (e.g., fitting with empirical data), as illustrated in this paper by a series of representative simulations dealing with cell colony reorganization, sorting phenomena, and zebrafish embryonic development. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. Equalizing resolution in smoothed-particle hydrodynamics calculations using self-adaptive sinc kernels

    NASA Astrophysics Data System (ADS)

    García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin

    2014-10-01

    Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allow the interpolation quality to be increased by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids, while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overhead. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
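
    The sinc family referred to above raises a sinc profile to a tunable exponent n, so that a single parameter controls how sharply peaked the interpolator is. A minimal 3-D version is sketched below (Python/NumPy); the normalization constant is obtained numerically rather than from the published closed-form tables, and the support radius is taken to be 2h by assumption.

    import numpy as np

    def sinc_kernel(r, h, n=5.0, n_quad=4000):
        """Sinc-family SPH kernel in 3-D: W proportional to sinc(pi*q/2)**n, with q = r/h in [0, 2)."""
        def shape(q):
            # np.sinc(x) = sin(pi x)/(pi x), so np.sinc(q/2) = sin(pi q/2)/(pi q/2)
            return np.where(q < 2.0, np.sinc(q / 2.0) ** n, 0.0)
        # Normalize numerically so that the kernel integrates to one over its 3-D support.
        q = np.linspace(1e-9, 2.0, n_quad)
        dq = q[1] - q[0]
        B = 1.0 / (np.sum(4.0 * np.pi * q ** 2 * shape(q)) * dq * h ** 3)
        return B * shape(np.asarray(r, dtype=float) / h)

    # Larger exponents give more centrally peaked kernels for the same support.
    for exponent in (3.0, 5.0, 7.0):
        print(exponent, float(sinc_kernel(0.0, h=1.0, n=exponent)))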

  11. Particle-in-cell simulations with charge-conserving current deposition on graphic processing units

    NASA Astrophysics Data System (ADS)

    Ren, Chuang; Kong, Xianglong; Huang, Michael; Decyk, Viktor; Mori, Warren

    2011-10-01

    Recently, using CUDA, we have developed an electromagnetic particle-in-cell (PIC) code with charge-conserving current deposition for Nvidia graphics processing units (GPUs) (Kong et al., Journal of Computational Physics 230, 1676 (2011)). On a Tesla M2050 (Fermi) card, the GPU PIC code can achieve a one-particle-step process time of 1.2 - 3.2 ns in 2D and 2.3 - 7.2 ns in 3D, depending on the plasma temperature. In this talk we will discuss novel algorithms for GPU PIC, including a charge-conserving current deposition scheme with little branching and parallel particle sorting. These algorithms make efficient use of the GPU shared memory. We will also discuss how to replace the computation kernels of existing parallel CPU codes while keeping their parallel structures. This work was supported by U.S. Department of Energy under Grant Nos. DE-FG02-06ER54879 and DE-FC02-04ER54789 and by NSF under Grant Nos. PHY-0903797 and CCF-0747324.

  12. PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation

    NASA Astrophysics Data System (ADS)

    Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long

    2018-06-01

    We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high-performance computing (HPC) systems together with thread-oriented programming. PHoToNs adopts a hybrid scheme to compute the gravitational force, with the conventional Particle-Mesh (PM) algorithm for the long-range force, the Tree algorithm for the short-range force, and the direct-summation Particle-Particle (PP) algorithm for gravity from very close particles. A self-similar space-filling Peano-Hilbert curve is used to decompose the computing domain. Thread programming is advantageously used to manage the domain communication, the PM calculation and synchronization, as well as the Dual Tree Traversal on the CPU+MIC platform more flexibly. PHoToNs scales well, and the efficiency of the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also tested the accuracy of the code against the widely used Gadget-2 and found excellent agreement.
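
    The PP part of such a hybrid solver is conceptually just softened direct summation over very close particle pairs. A minimal sketch of direct-summation gravitational accelerations is given below (Python/NumPy); the gravitational constant, the Plummer softening, and the dense O(N^2) evaluation are all illustrative simplifications of what a production kernel like the one benchmarked above would do.

    import numpy as np

    def pp_accelerations(pos, mass, G=1.0, softening=1e-2):
        """Direct-summation (particle-particle) accelerations with Plummer softening.

        pos: (N, 3) positions, mass: (N,) masses. Dense O(N^2) evaluation, illustrative only;
        a production PP kernel restricts the sum to very close pairs.
        """
        dr = pos[None, :, :] - pos[:, None, :]          # dr[i, j] = x_j - x_i
        r2 = (dr ** 2).sum(-1) + softening ** 2
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                   # remove self-interaction
        return G * (dr * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

    rng = np.random.default_rng(4)
    pos = rng.normal(size=(100, 3))
    acc = pp_accelerations(pos, np.ones(100))
    print(acc.shape)                                    # (100, 3)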

  13. Excitation energies from particle-particle random phase approximation with accurate optimized effective potentials

    NASA Astrophysics Data System (ADS)

    Jin, Ye; Yang, Yang; Zhang, Du; Peng, Degao; Yang, Weitao

    2017-10-01

    The optimized effective potential (OEP) that gives accurate Kohn-Sham (KS) orbitals and orbital energies can be obtained from a given reference electron density. These OEP-KS orbitals and orbital energies are used here for calculating electronic excited states with the particle-particle random phase approximation (pp-RPA). Our calculations allow the examination of pp-RPA excitation energies with the exact KS density functional theory (DFT). Various input densities are investigated. Specifically, the excitation energies using the OEP with the electron densities from the coupled-cluster singles and doubles method display the lowest mean absolute error from the reference data for the low-lying excited states. This study probes into the theoretical limit of the pp-RPA excitation energies with the exact KS-DFT orbitals and orbital energies. We believe that higher-order correlation contributions beyond the pp-RPA bare Coulomb kernel are needed in order to achieve even higher accuracy in excitation energy calculations.

  14. Temperature and magnetic field responsive hyaluronic acid particles with tunable physical and chemical properties

    NASA Astrophysics Data System (ADS)

    Ekici, Sema; Ilgin, Pinar; Yilmaz, Selahattin; Aktas, Nahit; Sahiner, Nurettin

    2011-01-01

    We report the preparation and characterization of thiolated, temperature-responsive hyaluronic acid-cysteamine-N-isopropyl acrylamide (HA-CYs-NIPAm) particles and thiolated, magnetic-responsive hyaluronic acid (HA-Fe-CYs) particles. Linear hyaluronic acid (HA) was crosslinked with divinyl sulfone to form HA particles using a water-in-oil microemulsion system; the particles were then oxidized with NaIO4 (HA-O) to develop aldehyde groups on the particle surface. The HA-O hydrogel particles were then reacted with cysteamine (CYs), which interacted with the aldehydes on the HA surface to form HA particles with cysteamine functionality on the surface (HA-CYs). The HA-CYs particles were further subjected to radical polymerization with NIPAm to obtain temperature-responsive HA-CYs-NIPAm hydrogel particles. To obtain magnetic-field-responsive HA composites, magnetic iron particles were included in the HA to form HA-Fe during HA particle preparation. The HA-Fe hydrogel particles were also chemically modified. The prepared HA-CYs-NIPAm particles demonstrated temperature-dependent size variations and a phase transition temperature. HA-CYs-NIPAm and HA-Fe-CYs particles can be used as drug delivery vehicles. Sulfamethoxazole (SMZ), an antibacterial drug, was used as a model drug for temperature-induced release studies from these particles.

  15. Isolation and Structural Characterization of Antioxidant Peptides from Degreased Apricot Seed Kernels.

    PubMed

    Zhang, Haisheng; Xue, Jing; Zhao, Huanxia; Zhao, Xinshuai; Xue, Huanhuan; Sun, Yuhan; Xue, Wanrui

    2018-05-03

    Background: The composition and sequence of amino acids have a prominent influence on the antioxidant activities of peptides. Objective: A series of isolation and purification experiments was conducted to determine the amino acid sequences of antioxidant peptides and thereby to understand the origin of their antioxidant activity. Methods: Degreased apricot seed kernels were hydrolyzed with a combination of alkaline protease and flavor protease (3:2, u/u) to prepare apricot seed kernel hydrolysates (ASKH). The ASKH were separated into ASKH-A and ASKH-B by dialysis. ASKH-B (MW < 3.5 kDa) was further fractionated by Sephadex G-25 and G-15 gel-filtration chromatography. Reversed-phase HPLC (RP-HPLC) was used to separate fraction B4b into two antioxidant peptides (B4b-4 and B4b-6). Results: The amino acid sequences were Val-Leu-Tyr-Ile-Trp and Ser-Val-Pro-Tyr-Glu, respectively. Conclusions: The results suggest that ASKH antioxidant peptides may have potential utility as healthy ingredients and as food preservatives owing to their antioxidant activity. Highlights: Materials with regional characteristics were selected for study, and the hydrolysates were characterized by RP-HPLC and matrix-assisted laser desorption ionization time-of-flight MS to obtain the amino acid sequences.

  16. Fabrication of Semiconducting Methylammonium Lead Halide Perovskite Particles by Spray Technology

    NASA Astrophysics Data System (ADS)

    Ahmadian-Yazdi, Mohammad-Reza; Eslamian, Morteza

    2018-01-01

    In this "nano idea" paper, three concepts for the preparation of methylammonium lead halide perovskite particles are proposed, discussed, and tested. The first idea is based on the wet chemistry preparation of the perovskite particles, through the addition of the perovskite precursor solution to an anti-solvent to facilitate the precipitation of the perovskite particles in the solution. The second idea is based on the milling of a blend of the perovskite precursors in the dry form, in order to allow for the conversion of the precursors to the perovskite particles. The third idea is based on the atomization of the perovskite solution by a spray nozzle, introducing the spray droplets into a hot wall reactor, so as to prepare perovskite particles, using the droplet-to-particle spray approach (spray pyrolysis). Preliminary results show that the spray technology is the most successful method for the preparation of impurity-free perovskite particles and perovskite paste to deposit perovskite thin films. As a proof of concept, a perovskite solar cell with the paste prepared by the sprayed perovskite powder was successfully fabricated.

  17. Fabrication of Semiconducting Methylammonium Lead Halide Perovskite Particles by Spray Technology.

    PubMed

    Ahmadian-Yazdi, Mohammad-Reza; Eslamian, Morteza

    2018-01-10

    In this "nano idea" paper, three concepts for the preparation of methylammonium lead halide perovskite particles are proposed, discussed, and tested. The first idea is based on the wet chemistry preparation of the perovskite particles, through the addition of the perovskite precursor solution to an anti-solvent to facilitate the precipitation of the perovskite particles in the solution. The second idea is based on the milling of a blend of the perovskite precursors in the dry form, in order to allow for the conversion of the precursors to the perovskite particles. The third idea is based on the atomization of the perovskite solution by a spray nozzle, introducing the spray droplets into a hot wall reactor, so as to prepare perovskite particles, using the droplet-to-particle spray approach (spray pyrolysis). Preliminary results show that the spray technology is the most successful method for the preparation of impurity-free perovskite particles and perovskite paste to deposit perovskite thin films. As a proof of concept, a perovskite solar cell with the paste prepared by the sprayed perovskite powder was successfully fabricated.

  18. Classification With Truncated Distance Kernel.

    PubMed

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data, and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems that require different nonlinearities in different areas. Although the TL1 kernel is not positive semidefinite, some classical kernel learning methods remain applicable, which means that the TL1 kernel can be used directly in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pre-given parameter achieves similar or better performance than the radial basis function kernel with its parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
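
    The truncated distance kernel discussed above is commonly written as K(x, y) = max(rho - ||x - y||_1, 0). The sketch below assumes that form and feeds the resulting Gram matrix to scikit-learn's SVC as a precomputed kernel; the toy data and the choice rho = 0.7 * n_features are arbitrary illustrations.

      import numpy as np
      from sklearn.svm import SVC

      def tl1_kernel(X, Y, rho):
          """Truncated L1-distance (TL1) kernel: max(rho - ||x - y||_1, 0).
          Assumed form of the kernel described in the abstract."""
          d = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=2)  # pairwise L1 distances
          return np.maximum(rho - d, 0.0)

      # Illustrative usage with a precomputed (possibly indefinite) Gram matrix
      X_train = np.random.rand(100, 5)
      y_train = (X_train.sum(axis=1) > 2.5).astype(int)
      rho = 0.7 * X_train.shape[1]                 # a pre-given parameter, as in the brief
      K_train = tl1_kernel(X_train, X_train, rho)
      clf = SVC(kernel="precomputed").fit(K_train, y_train)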

  19. Comparison of the Effects of Blending and Juicing on the Phytochemicals Contents and Antioxidant Capacity of Typical Korean Kernel Fruit Juices

    PubMed Central

    Pyo, Young-Hee; Jin, Yoo-Jeong; Hwang, Ji-Young

    2014-01-01

    Four Korean kernel fruit (apple, pear, persimmon, and mandarin orange) juices were obtained by household processing techniques (i.e., blending, juicing). Whole and flesh fractions of each fruit were extracted by a blender or a juicer and then examined for phytochemical content (i.e., organic acids, polyphenol compounds). The antioxidant capacity of each juice was determined by ferric reducing antioxidant power (FRAP) and 2,2-diphenyl-1-picrylhydrazyl (DPPH) assays. Results revealed that juices that had been prepared by blending whole fruits had stronger antioxidant activities and contained larger amounts of phenolic compounds than juices that had been prepared by juicing the flesh fraction of the fruit. However, the concentration of ascorbic acid in apple, pear, and mandarin orange juices was significantly (P<0.05) higher in juice that had been processed by juicing, rather than blending. The juices with the highest ascorbic acid (233.9 mg/serving), total polyphenols (862.3 mg gallic acid equivalents/serving), and flavonoids (295.1 mg quercetin equivalents/serving) concentrations were blended persimmon juice, blended mandarin orange juice, and juiced apple juice, respectively. These results indicate that juice extraction techniques significantly (P<0.05) influence the phytochemical levels and antioxidant capacity of fruit juices. PMID:25054109

  20. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations.

    PubMed

    Hardy, David J; Wolff, Matthew A; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D

    2016-03-21

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.
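
    The heart of multilevel summation is a smooth splitting of the 1/r kernel into a short-range part evaluated directly and smoother long-range parts interpolated from grids. The sketch below shows one common two-term Taylor softening used for such splittings; it is only an illustration of the idea and not the B-spline-based splitting developed in the article.

      import numpy as np

      def gamma(rho):
          """Smoothed replacement for 1/rho inside the cutoff (rho <= 1),
          equal to 1/rho outside. The quartic form below is one common choice
          (two-term Taylor expansion in rho**2); other implementations differ."""
          rho = np.asarray(rho, dtype=float)
          smooth = 15.0 / 8.0 - 5.0 / 4.0 * rho**2 + 3.0 / 8.0 * rho**4
          return np.where(rho < 1.0, smooth, 1.0 / np.maximum(rho, 1e-12))

      def split_kernel(r, a):
          """Split 1/r into a short-range part that vanishes for r >= a and a
          smooth long-range part suitable for grid interpolation."""
          long_range = gamma(r / a) / a
          short_range = 1.0 / r - long_range       # exactly zero for r >= a
          return short_range, long_range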

  1. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    NASA Astrophysics Data System (ADS)

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle

    2016-08-01

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  2. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Hardy, David J.; Wolff, Matthew A.; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D.

    2016-03-01

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.

  3. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    DOE PAGES

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; ...

    2016-08-10

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. Lastly, the results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  4. Documentation of the appearance of a caviar-type deposit in Oven 1 following a large scale experiment for heating oil with Upper Silesian coal (in German)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rank

    1942-03-26

    When the oven was disassembled after the test, small kernels of porous material were found in both the upper and lower portions of the oven to a depth of about 2 m. The kernels were of various sizes up to 4 mm. From 1,300 metric tons of dry coal, 330 kg of this residue were recovered, i.e., 0.025% of the coal input. These kernels brought to mind deposits of spheroidal material termed "caviar", since they had rounded tops; however, they were irregularly elongated. Multiaxis micrography revealed no growth rings such as those in Leuna's lignite caviar, so the deposit consisted of small particles made up almost entirely of ash. The major constituents were Al, Fe, Na, silicic acid, S and Cl. The sulfur was present in sulfide form and the Cl in a volatile form. The residue did not take on the caviar form because the CaO content was slight. The Al, Fe, Na, silicic acid, S and Cl were concentrated in comparison to coal ash and apparently originate from the catalysts (FeSO4, Bayermasse, and Na2S). It was notable that the Cl content was so high. 2 graphs, 1 table

  5. From Newton's Law to the Linear Boltzmann Equation Without Cut-Off

    NASA Astrophysics Data System (ADS)

    Ayi, Nathalie

    2017-03-01

    We provide a rigorous derivation of the linear Boltzmann equation without cut-off, starting from a system of particles interacting via a potential of infinite range as the number of particles N goes to infinity under the Boltzmann-Grad scaling. More specifically, we describe the motion of a tagged particle in a gas close to global equilibrium. The main difficulty in this context is that, owing to the infinite range of the potential, a non-integrable singularity appears in the angular collision kernel, so that Lanford's strategy alone is no longer sufficient. Our proof therefore relies on a combination of Lanford's strategy, tools developed recently by Bodineau, Gallagher and Saint-Raymond to study the collision process, and new duality arguments for the additional terms associated with the long-range interaction, leading to explicit weak estimates.

  6. Numerical and experimental validation of a particle Galerkin method for metal grinding simulation

    NASA Astrophysics Data System (ADS)

    Wu, C. T.; Bui, Tinh Quoc; Wu, Youcai; Luo, Tzui-Liang; Wang, Morris; Liao, Chien-Chih; Chen, Pei-Yin; Lai, Yu-Sheng

    2018-03-01

    In this paper, a numerical approach with an experimental validation is introduced for modelling high-speed metal grinding processes in 6061-T6 aluminum alloys. The derivation of the present numerical method starts with the establishment of a stabilized particle Galerkin approximation. A non-residual penalty term from strain smoothing is introduced as a means of stabilizing the particle Galerkin method. Additionally, second-order strain gradients are introduced into the penalized functional for the regularization of the damage-induced strain localization problem. To handle the severe deformation in metal grinding simulation, an adaptive anisotropic Lagrangian kernel is employed. Finally, the formulation incorporates a bond-based failure criterion to bypass prospective spurious damage growth issues in material failure and cutting debris simulation. A three-dimensional metal grinding problem is analyzed and compared with the experimental results to demonstrate the effectiveness and accuracy of the proposed numerical approach.

  7. In-pile test results of U-silicide or U-nitride coated U-7Mo particle dispersion fuel in Al

    NASA Astrophysics Data System (ADS)

    Kim, Yeon Soo; Park, J. M.; Lee, K. H.; Yoo, B. O.; Ryu, H. J.; Ye, B.

    2014-11-01

    U-silicide or U-nitride coated U-Mo particle dispersion fuel in Al (U-Mo/Al) was in-pile tested to examine the effectiveness of the coating as a diffusion barrier between the U-7Mo fuel kernels and the Al matrix. This paper reports the PIE data and analyses, focusing on the effectiveness of the coating in terms of interaction layer (IL) growth and general fuel performance. The U-silicide coating showed considerable success, but it also pointed to the need for further improvement of the coating process. The U-nitride coated specimen was largely ineffective in reducing IL growth. The test also yielded important observations that can be utilized to improve U-Mo/Al fuel performance: the heating process used for coating turned out to be beneficial in suppressing fuel swelling, and the use of larger fuel particles confirmed favorable effects on fuel performance.

  8. Carbon monoxide formation in UO2 kerneled HTR fuel particles containing oxygen getters

    NASA Astrophysics Data System (ADS)

    Proksch, E.; Strigl, A.; Nabielek, H.

    1986-01-01

    Mass spectrometric measurements of CO in irradiated UO2 fuel particles containing oxygen getters are summarized. Uranium carbide addition in the 3% to 15% range reduces the CO release by factors between 25 and 80, up to burn-up levels as high as 70% FIMA. Unintentional gettering by SiC in TRISO coated particles with failed inner pyrocarbon layers results in CO reduction factors between 15 and 110. For ZrC, ambiguous results are obtained; ZrC probably results in CO reduction by a factor of 40; Ce2O3 and La2O3 seem less effective than the carbides; for Ce2O3, reduction factors between 3 and 15 are found. However, the results are possibly incorrect due to premature oxidation of the getter already during fabrication. Addition of SiO2 + Al2O3 has no influence on CO release.

  9. A technique to remove the tensile instability in weakly compressible SPH

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoyang; Yu, Peng

    2018-01-01

    When smoothed particle hydrodynamics (SPH) is applied directly to the numerical simulation of transient viscoelastic free surface flows, a numerical problem called tensile instability arises. In this paper, we develop an optimized particle shifting technique to remove the tensile instability in SPH. The basic equations governing the free surface flow of an Oldroyd-B fluid are considered and approximated by an improved SPH scheme, which includes a correction of the kernel gradient and the introduction of the Rusanov flux into the continuity equation. To verify the effectiveness of the optimized particle shifting technique in removing the tensile instability, simulations of an impacting drop, the injection molding of a C-shaped cavity, and extrudate swell are conducted. The numerical results are compared with those obtained by other numerical methods, and a comparison among different techniques (e.g., the artificial stress) for removing the tensile instability is also performed. All numerical results agree well with the available data.
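
    Particle shifting techniques of this kind typically displace each particle a small distance down the gradient of a particle concentration field so that the particle distribution stays close to uniform. The sketch below follows the common Fickian-style formulation (a shift proportional to -grad C, with C estimated from an SPH kernel sum); the coefficient and the cubic-spline kernel are illustrative choices, not the optimized scheme of the paper.

      import numpy as np

      def cubic_spline_grad_w(dr, h):
          """Gradient of the standard 2D cubic-spline SPH kernel."""
          r = np.linalg.norm(dr)
          if r < 1e-12 or r >= 2.0 * h:
              return np.zeros_like(dr)
          q = r / h
          sigma = 10.0 / (7.0 * np.pi * h**2)
          dw_dq = -3.0 * q + 2.25 * q**2 if q < 1.0 else -0.75 * (2.0 - q) ** 2
          return sigma * dw_dq / h * dr / r

      def shift_particles(pos, mass, rho, h, D=0.05):
          """Fickian-style shifting: delta_r_i = -D * h**2 * grad(C_i), where
          C_i = sum_j (m_j / rho_j) W_ij. D is an illustrative coefficient."""
          new_pos = pos.copy()
          for i in range(len(pos)):
              grad_c = np.zeros(pos.shape[1])
              for j in range(len(pos)):
                  if i != j:
                      grad_c += (mass[j] / rho[j]) * cubic_spline_grad_w(pos[i] - pos[j], h)
              new_pos[i] -= D * h**2 * grad_c
          return new_pos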

  10. A density-adaptive SPH method with kernel gradient correction for modeling explosive welding

    NASA Astrophysics Data System (ADS)

    Liu, M. B.; Zhang, Z. L.; Feng, D. L.

    2017-09-01

    Explosive welding involves the detonation of an explosive, the impact of metal structures, and strong fluid-structure interaction, yet the whole process of explosive welding had not previously been well modeled. In this paper, a novel smoothed particle hydrodynamics (SPH) model is developed to simulate explosive welding. In the SPH model, a kernel gradient correction algorithm is used to achieve better computational accuracy, and a density-adaptive technique that can effectively treat large density ratios is also proposed. The developed SPH model is first validated by simulating a benchmark problem of one-dimensional TNT detonation and an impact welding problem. The SPH model is then successfully applied to simulate the whole process of explosive welding. It is demonstrated that the presented SPH method can capture the typical physics of explosive welding, including the explosion wave, welding surface morphology, jet flow, and acceleration of the flyer plate. The welding angle obtained from the SPH simulation agrees well with that from a kinematic analysis.
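
    Kernel gradient correction restores first-order consistency of the SPH approximation by multiplying every kernel gradient with a locally assembled correction matrix. A minimal sketch of the widely used renormalization form is given below; the exact algorithm of the paper is not reproduced, and the grad_w callable is a placeholder for whatever kernel gradient the solver uses.

      import numpy as np

      def corrected_kernel_gradients(pos, vol, grad_w):
          """Kernel gradient correction (renormalization) for SPH.

          pos    : (N, d) particle positions
          vol    : (N,)   particle volumes m_j / rho_j
          grad_w : callable grad_w(r_i, r_j) -> (d,) uncorrected kernel gradient
          Returns a function giving L_i @ grad W_ij, with the assumed form
          L_i = [ sum_j V_j * grad W_ij (outer) (r_j - r_i) ]^-1.
          """
          N, d = pos.shape
          L = np.zeros((N, d, d))
          for i in range(N):
              A = np.zeros((d, d))
              for j in range(N):
                  if i != j:
                      A += vol[j] * np.outer(grad_w(pos[i], pos[j]), pos[j] - pos[i])
              L[i] = np.linalg.inv(A)              # assumes enough neighbors for A to be invertible

          def corrected(i, j):
              return L[i] @ grad_w(pos[i], pos[j])

          return corrected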

  11. Deep learning architecture for iris recognition based on optimal Gabor filters and deep belief network

    NASA Astrophysics Data System (ADS)

    He, Fei; Han, Ye; Wang, Han; Ji, Jinchao; Liu, Yuanning; Ma, Zhiqiang

    2017-03-01

    Gabor filters are widely used to detect iris texture information in several state-of-the-art iris recognition systems. However, the proper Gabor kernels and the generative pattern of iris Gabor features must be predetermined in application, and traditional empirical Gabor filters and shallow iris encoding schemes cannot cope with the complex variations in iris imaging, including illumination, aging, deformation, and device variations. We therefore present an adaptive Gabor filter selection strategy and a deep learning architecture. We first employ the particle swarm optimization approach and its binary version to define a set of data-driven Gabor kernels that fit the most informative filtering bands, and then capture complex patterns from the optimal Gabor filtered coefficients with a trained deep belief network. A series of comparative experiments validates that our optimal Gabor filters produce more distinctive Gabor coefficients and that our deep iris representations are more robust and stable than traditional iris Gabor codes. The depth and scales of the deep learning architecture are also discussed.
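
    A data-driven Gabor filter bank of this kind starts from the standard 2D Gabor kernel, whose wavelength, orientation, and bandwidth are exactly the parameters a particle swarm optimizer (or its binary variant) would search over. The sketch below only constructs such a kernel and filters an image; the PSO search and the deep belief network stages are not shown, and the parameter values are arbitrary.

      import numpy as np
      from scipy.signal import convolve2d

      def gabor_kernel(ksize, wavelength, theta, sigma, gamma=0.5, psi=0.0):
          """Standard real-valued 2D Gabor kernel; these are the parameters an
          optimizer such as (binary) PSO would tune for iris texture."""
          half = ksize // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          x_t = x * np.cos(theta) + y * np.sin(theta)
          y_t = -x * np.sin(theta) + y * np.cos(theta)
          envelope = np.exp(-(x_t**2 + gamma**2 * y_t**2) / (2.0 * sigma**2))
          carrier = np.cos(2.0 * np.pi * x_t / wavelength + psi)
          return envelope * carrier

      # Illustrative usage: filter a normalized iris strip with one candidate kernel
      iris = np.random.rand(64, 512)               # stand-in for a real normalized iris image
      response = convolve2d(iris, gabor_kernel(15, 8.0, np.pi / 4, 4.0), mode="same")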

  12. Ford Motor Company NDE facility shielding design.

    PubMed

    Metzger, Robert L; Van Riper, Kenneth A; Jones, Martin H

    2005-01-01

    Ford Motor Company proposed the construction of a large non-destructive evaluation laboratory for radiography of automotive power train components. The authors were commissioned to design the shielding and to survey the completed facility for compliance with radiation dose limits for occupationally and non-occupationally exposed personnel. The two X-ray sources are Varian Linatron 3000 accelerators operating at 9-11 MV. One performs computed tomography of automotive transmissions, while the other does real-time radiography of operating engines and transmissions. The shield thicknesses for the primary barrier and all secondary barriers were determined by point-kernel techniques. Point-kernel techniques did not work well for skyshine calculations or for locations where multiple sources (e.g. tube head leakage and various scatter fields) contributed to the dose; shielding for these areas was determined using transport calculations. A number of MCNP [Briesmeister, J. F. MCNP: A general Monte Carlo N-particle transport code, version 4B. Los Alamos National Laboratory Manual (1997)] calculations focused on skyshine estimates and the office areas. Measurements on the operational facility confirmed the shielding calculations.
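
    A point-kernel shielding estimate of this type combines exponential attenuation through the barrier, a buildup factor, and inverse-square geometry, D = S * B * exp(-mu * t) / (4 * pi * r^2). The sketch below only illustrates that relation; the source strength, attenuation coefficient, and buildup factor are placeholders, not values from the Ford facility design.

      import math

      def point_kernel_dose_rate(source_strength, mu, thickness, distance, buildup=1.0):
          """Point-kernel estimate with a buildup correction:
          D = S * B * exp(-mu * t) / (4 * pi * r**2).
          All numerical inputs in the example below are illustrative placeholders."""
          return (source_strength * buildup * math.exp(-mu * thickness)
                  / (4.0 * math.pi * distance**2))

      # Hypothetical example: a high-energy source behind 1.5 m of concrete, scored at 10 m
      dose_rate = point_kernel_dose_rate(source_strength=1.0e6, mu=4.6,
                                         thickness=1.5, distance=10.0, buildup=5.0)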

  13. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
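
    A fractional power polynomial kernel takes the usual polynomial form but with a non-integer exponent, which can make the Gram matrix indefinite; the remedy described above is to keep only the kernel PCA components with positive eigenvalues. A minimal sketch of that idea follows; the signed power used to keep negative inner products real is an assumption, not necessarily the paper's convention.

      import numpy as np

      def fractional_poly_kernel(X, Y, d=0.8):
          """Fractional power polynomial kernel (x . y)^d with non-integer d.
          A signed power keeps values real for negative inner products (assumed)."""
          G = X @ Y.T
          return np.sign(G) * np.abs(G) ** d

      def kernel_pca_positive(K, n_components):
          """Kernel PCA that retains only eigenvectors with positive eigenvalues,
          as suggested for possibly indefinite kernels."""
          N = K.shape[0]
          J = np.eye(N) - np.ones((N, N)) / N      # centering matrix
          Kc = J @ K @ J
          w, V = np.linalg.eigh(Kc)
          order = np.argsort(w)[::-1]              # sort eigenvalues in descending order
          w, V = w[order], V[:, order]
          keep = w > 1e-10                         # discard non-positive eigenvalues
          w, V = w[keep][:n_components], V[:, keep][:, :n_components]
          return Kc @ (V / np.sqrt(w))             # projections of the training data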

  14. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  15. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  16. Allowing for crystalline structure effects in Geant4

    DOE PAGES

    Bagli, Enrico; Asai, Makoto; Dotti, Andrea; ...

    2017-03-24

    In recent years, the Geant4 toolkit for the Monte Carlo simulation of the interaction of radiation with matter has seen large growth in its diverse user community. A fundamental aspect of a successful physics experiment is the availability of a reliable and precise simulation code. Geant4 currently does not allow for the simulation of particle interactions with anything other than amorphous matter. To overcome this limitation, the GECO (GEant4 Crystal Objects) project developed a general framework for managing solid-state structures in the Geant4 kernel and validated it against experimental data. As a result, accounting for detailed geometrical structures allows, for example, the simulation of diffraction from crystal planes or the channeling of charged particles.

  17. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which we show minimizes the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.
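
    The Nyström method approximates a large kernel matrix from a small set of landmark columns, K approximately equal to C W^+ C^T; the contribution above is to choose the landmarks as kernel k-means centers. The sketch below uses ordinary k-means centers in input space with an RBF kernel as a simplified stand-in for kernel k-means, so it illustrates the structure of the approximation rather than the paper's exact sampling scheme.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics.pairwise import rbf_kernel

      def nystrom_with_kmeans(X, n_landmarks=50, gamma=0.5):
          """Nystrom approximation K ~= C @ pinv(W) @ C.T with landmarks taken as
          k-means centers (input-space k-means as a simplified stand-in for the
          kernel k-means sampling described in the abstract)."""
          centers = KMeans(n_clusters=n_landmarks, n_init=10).fit(X).cluster_centers_
          C = rbf_kernel(X, centers, gamma=gamma)        # (N, m) cross-kernel block
          W = rbf_kernel(centers, centers, gamma=gamma)  # (m, m) landmark kernel block
          return C @ np.linalg.pinv(W) @ C.T

      # Illustrative check of the relative Frobenius-norm approximation error
      X = np.random.rand(300, 10)
      K_full = rbf_kernel(X, X, gamma=0.5)
      rel_err = (np.linalg.norm(K_full - nystrom_with_kmeans(X), "fro")
                 / np.linalg.norm(K_full, "fro"))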

  18. FAST TRACK COMMUNICATION: The nonlinear fragmentation equation

    NASA Astrophysics Data System (ADS)

    Ernst, Matthieu H.; Pagonabarraga, Ignacio

    2007-04-01

    We study the kinetics of nonlinear irreversible fragmentation. Here, fragmentation is induced by interactions/collisions between pairs of particles and modelled by general classes of interaction kernels, for several types of breakage models. We construct initial value and scaling solutions of the fragmentation equations, and apply the 'non-vanishing mass flux' criterion for the occurrence of shattering transitions. These properties enable us to determine the phase diagram for the occurrence of shattering states and of scaling states in the phase space of model parameters.

  19. Optimized spray drying process for preparation of one-step calcium-alginate gel microspheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popeski-Dimovski, Riste

    Calcium-alginate microparticles have been used extensively in drug delivery systems. We therefore establish a one-step method for the preparation of internally gelated microparticles with a spherical shape and a narrow size distribution. We use four types of alginate with different G/M ratios and molar weights. The size of the particles is measured using light diffraction and scanning electron microscopy. The measurements show that microparticles with a size distribution around 4 micrometers can be prepared with this method, and SEM imaging shows that the particles are spherical in shape.

  20. Exploiting graph kernels for high performance biomedical relation extraction.

    PubMed

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods, when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task specific heuristic or rules. In comparison, the state of the art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule based system employing task specific post processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with APG kernel that attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM performed better than APG kernel for the BioInfer dataset, in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures. We demonstrate a high performance Chemical Induced Disease relation extraction, without employing external knowledge sources or task specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences. We also show that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, we showed the ASM kernel as effective for biomedical relation extraction, with comparable performance to the APG kernel for datasets such as the CID-sentence level relation extraction and BioInfer in PPI. Overall, the APG kernel is shown to be significantly more accurate than the ASM kernel, achieving better performance on most datasets.

  1. 7 CFR 810.2202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernels, foreign material, and shrunken and broken kernels. The sum of these three factors may not exceed... the removal of dockage and shrunken and broken kernels. (g) Heat-damaged kernels. Kernels, pieces of... sample after the removal of dockage and shrunken and broken kernels. (h) Other grains. Barley, corn...

  2. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or otherwise...

  3. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  4. Simultaneous multislice magnetic resonance fingerprinting (SMS-MRF) with direct-spiral slice-GRAPPA (ds-SG) reconstruction.

    PubMed

    Ye, Huihui; Cauley, Stephen F; Gagoski, Borjan; Bilgic, Berkin; Ma, Dan; Jiang, Yun; Du, Yiping P; Griswold, Mark A; Wald, Lawrence L; Setsompop, Kawin

    2017-05-01

    The aim was to develop a reconstruction method to improve SMS-MRF, in which slice acceleration is used in conjunction with highly undersampled in-plane acceleration to speed up MRF acquisition. In this work, two methods are employed to efficiently perform the simultaneous multislice magnetic resonance fingerprinting (SMS-MRF) data acquisition and the direct-spiral slice-GRAPPA (ds-SG) reconstruction. First, the lengthy training data acquisition is shortened by employing the through-time/through-k-space approach, in which similar k-space locations within and across spiral interleaves are grouped and associated with a single set of kernels. Second, inversion recovery (IR) preparation, variable flip angle (FA), and variable repetition time (TR) are used for the acquisition of the training data, to increase signal variation and to improve the conditioning of the kernel fitting. The grouping of k-space locations enables a large reduction in the number of kernels required, and the IR-prepped training data with variable FA and TR provide improved ds-SG kernels and reconstruction performance. With direct-spiral slice-GRAPPA, tissue parameter maps comparable to those of conventional MRF were obtained at multiband (MB) = 3 acceleration using a t-blipped SMS-MRF acquisition with a 32-channel head coil at 3 Tesla (T). The proposed reconstruction scheme allows MB = 3 accelerated SMS-MRF imaging with high-quality T1, T2, and off-resonance maps, and can be used to significantly shorten MRF acquisition and aid its adoption in neuroscientific and clinical settings. Magn Reson Med 77:1966-1974, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  5. Bread Wheat Quality: Some Physical, Chemical and Rheological Characteristics of Syrian and English Bread Wheat Samples.

    PubMed

    Al-Saleh, Abboud; Brennan, Charles S

    2012-11-22

    The relationships between breadmaking quality, kernel properties (physical and chemical), and dough rheology were investigated using flours from six genotypes of Syrian wheat lines, comprising both commercially grown cultivars and advanced breeding lines. Genotypes were grown in 2008/2009 season in irrigated plots in the Eastern part of Syria. Grain samples were evaluated for vitreousness, test weight, 1000-kernel weight and then milled and tested for protein content, ash, and water content. Dough rheology of the samples was studied by the determination of the mixing time, stability, weakness, resistance and the extensibility of the dough. Loaf baking quality was evaluated by the measurement of the specific weight, resilience and firmness in addition to the sensory analysis. A comparative study between the six Syrian wheat genotypes and two English flour samples was conducted. Significant differences were observed among Syrian genotypes in vitreousness (69.3%-95.0%), 1000-kernel weight (35.2-46.9 g) and the test weight (82.2-88.0 kg/hL). All samples exhibited high falling numbers (346 to 417 s for the Syrian samples and 285 and 305 s for the English flours). A significant positive correlation was exhibited between the protein content of the flour and its absorption of water (r = 0.84 **), as well as with the vitreousness of the kernel (r = 0.54 *). Protein content was also correlated with dough stability (r = 0.86 **), extensibility (r = 0.8 **), and negatively correlated with dough weakness (r = -0.69 **). Bread firmness and dough weakness were positively correlated (r = 0.66 **). Sensory analysis indicated Doumah-2 was the best appreciated whilst Doumah 40765 and 46055 were the least appreciated which may suggest their suitability for biscuit preparation rather than bread making.

  6. Bread Wheat Quality: Some Physical, Chemical and Rheological Characteristics of Syrian and English Bread Wheat Samples

    PubMed Central

    Al-Saleh, Abboud; Brennan, Charles S.

    2012-01-01

    The relationships between breadmaking quality, kernel properties (physical and chemical), and dough rheology were investigated using flours from six genotypes of Syrian wheat lines, comprising both commercially grown cultivars and advanced breeding lines. Genotypes were grown in 2008/2009 season in irrigated plots in the Eastern part of Syria. Grain samples were evaluated for vitreousness, test weight, 1000-kernel weight and then milled and tested for protein content, ash, and water content. Dough rheology of the samples was studied by the determination of the mixing time, stability, weakness, resistance and the extensibility of the dough. Loaf baking quality was evaluated by the measurement of the specific weight, resilience and firmness in addition to the sensory analysis. A comparative study between the six Syrian wheat genotypes and two English flour samples was conducted. Significant differences were observed among Syrian genotypes in vitreousness (69.3%–95.0%), 1000-kernel weight (35.2–46.9 g) and the test weight (82.2–88.0 kg/hL). All samples exhibited high falling numbers (346 to 417 s for the Syrian samples and 285 and 305 s for the English flours). A significant positive correlation was exhibited between the protein content of the flour and its absorption of water (r = 0.84 **), as well as with the vitreousness of the kernel (r = 0.54 *). Protein content was also correlated with dough stability (r = 0.86 **), extensibility (r = 0.8 **), and negatively correlated with dough weakness (r = −0.69 **). Bread firmness and dough weakness were positively correlated (r = 0.66 **). Sensory analysis indicated Doumah-2 was the best appreciated whilst Doumah 40765 and 46055 were the least appreciated which may suggest their suitability for biscuit preparation rather than bread making. PMID:28239087

  7. [Preparation of panax notoginseng saponins-tanshinone II(A) composite particles for pulmonary delivery by the spray-drying method and their characterization].

    PubMed

    Wang, Hua-Mei; Fu, Ting-Ming; Guo, Li-Wei

    2013-02-01

    The objective was to prepare panax notoginseng saponins-tanshinone II(A) composite particles for pulmonary delivery, in order to explore a dry powder preparation method that ensures the synchronized arrival of the multiple components of traditional Chinese medicine compounds at their absorption sites. Panax notoginseng saponins-tanshinone II(A) composite particles were prepared by the spray-drying method and characterized by scanning electron microscopy (SEM), confocal laser scanning microscopy (CLSM), X-ray diffraction (XRD), infrared analysis (IR), dry laser particle size analysis and high performance liquid chromatography (HPLC), and their aerodynamic behavior was evaluated with a Next Generation Impactor (NGI). The dry powder particles produced had a narrow particle size distribution and good aerodynamic behavior, and could realize the synchronized administration of multiple components. The spray-drying method can thus be used to combine traditional Chinese medicine components with different physical and chemical properties in the same particle and to produce traditional Chinese medicine compound particles that meet the requirements for pulmonary delivery.

  8. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    PubMed

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Unconventional protein sources: apricot seed kernels.

    PubMed

    Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M

    1981-09-01

    Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter) and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2-50.91%), protein (23.74-25.70%) and fiber (15.08-18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were used for an extensive study including the qualitative determination of the amino acid constituents by acid hydrolysis, the quantitative determination of some amino acids, and the biological evaluation of the kernel proteins in order to use them as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels because the bitterness led to low food consumption, although there was no loss in weight in that case. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be of higher biological value than the treated apricot seed kernels. The Net Protein Ratio data, which account for both weight maintenance and growth, showed the treated apricot seed kernels to be of higher biological value than both the Hamawy and Amar kernels; the Net Protein Ratio values for the latter two kernels were nearly equal.

  10. An introduction to kernel-based learning algorithms.

    PubMed

    Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B

    2001-01-01

    This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
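
    As a concrete illustration of the kernel methods surveyed above, the snippet below fits a support vector machine and kernel PCA with a Gaussian (RBF) kernel using scikit-learn; the toy data and parameter values are arbitrary and only meant to show the shared kernelization pattern.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.decomposition import KernelPCA

      # Toy two-class data that is not linearly separable (arbitrary example)
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 2))
      y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)

      svm = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X, y)              # kernelized large-margin classifier
      kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0).fit(X)
      X_embedded = kpca.transform(X)                                   # nonlinear feature extraction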

  11. Design of CT reconstruction kernel specifically for clinical lung imaging

    NASA Astrophysics Data System (ADS)

    Cody, Dianna D.; Hsieh, Jiang; Gladish, Gregory W.

    2005-04-01

    In this study we developed a new reconstruction kernel specifically for chest CT imaging. An experimental flat-panel CT scanner was used on large dogs to produce "ground-truth" reference chest CT images. These dogs were also examined using a clinical 16-slice CT scanner. We concluded from the dog images acquired on the clinical scanner that the loss of subtle lung structures was due mostly to the presence of the background noise texture when using currently available reconstruction kernels. This qualitative evaluation of the dog CT images prompted the design of a new recon kernel. This new kernel consisted of the combination of a low-pass and a high-pass kernel to produce a new reconstruction kernel, called the "Hybrid" kernel. The performance of this Hybrid kernel fell between the two kernels on which it was based, as expected. This Hybrid kernel was also applied to a set of 50 patient data sets; the analysis of these clinical images is underway. We are hopeful that this Hybrid kernel will produce clinical images with an acceptable tradeoff of lung detail, reliable HU, and image noise.

  12. Quality changes in macadamia kernel between harvest and farm-gate.

    PubMed

    Walton, David A; Wallace, Helen M

    2011-02-01

    Macadamia integrifolia, Macadamia tetraphylla and their hybrids are cultivated for their edible kernels. After harvest, nuts-in-shell are partially dried on-farm and sorted to eliminate poor-quality kernels before consignment to a processor. During these operations, kernel quality may be lost. In this study, macadamia nuts-in-shell were sampled at five points of an on-farm postharvest handling chain from dehusking to the final storage silo to assess quality loss prior to consignment. Shoulder damage, weight of pieces and unsound kernel were assessed for raw kernels, and colour, mottled colour and surface damage for roasted kernels. Shoulder damage, weight of pieces and unsound kernel for raw kernels increased significantly between the dehusker and the final silo. Roasted kernels displayed a significant increase in dark colour, mottled colour and surface damage during on-farm handling. Significant loss of macadamia kernel quality occurred on a commercial farm during sorting and storage of nuts-in-shell before nuts were consigned to a processor. Nuts-in-shell should be dried as quickly as possible and on-farm handling minimised to maintain optimum kernel quality. 2010 Society of Chemical Industry.

  13. Seed-Surface Grafting Precipitation Polymerization for Preparing Microsized Optically Active Helical Polymer Core/Shell Particles and Their Application in Enantioselective Crystallization.

    PubMed

    Zhao, Biao; Lin, Jiangfeng; Deng, Jianping; Liu, Dong

    2018-05-14

    Core/shell particles constructed from a polymer shell and a silica core constitute a significant category of advanced functional materials. However, constructing microsized optically active helical polymer core/shell particles remains a major challenge because of the lack of effective and universal preparation methods. In this study, a seed-surface grafting precipitation polymerization (SSGPP) strategy is developed for preparing microsized core/shell particles with SiO2 as the core, onto which a helically substituted polyacetylene is covalently bonded as the shell. The resulting core/shell particles exhibit fascinating optical activity and efficiently induce the enantioselective crystallization of racemic threonine. With this preparation strategy, novel achiral polymeric and hybrid core/shell particles are also expected. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. A new discriminative kernel from probabilistic models.

    PubMed

    Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert

    2002-10-01

    Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
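
    Both the Fisher kernel and the proposed TOP kernel build a kernel from gradient ("tangent") vectors of a fitted probabilistic model. As a simplified illustration, the sketch below computes Fisher scores for a one-dimensional Gaussian and takes their plain inner product; the identity matrix is used in place of the inverse Fisher information, and the TOP kernel's posterior log-odds construction is not reproduced here.

      import numpy as np

      def fisher_scores(x, mu, sigma):
          """Gradient of log N(x | mu, sigma^2) with respect to (mu, sigma):
          d/dmu    = (x - mu) / sigma^2
          d/dsigma = ((x - mu)^2 - sigma^2) / sigma^3
          A one-dimensional Gaussian is used purely for illustration."""
          d_mu = (x - mu) / sigma**2
          d_sigma = ((x - mu) ** 2 - sigma**2) / sigma**3
          return np.stack([d_mu, d_sigma], axis=1)

      def fisher_kernel(X, Y, mu, sigma):
          """Inner product of Fisher scores (identity in place of the inverse
          Fisher information matrix, a common simplification)."""
          return fisher_scores(X, mu, sigma) @ fisher_scores(Y, mu, sigma).T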

  15. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centering step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement its kernel version. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
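
    As a rough picture of what such a projection looks like (a simplification under stated assumptions, not the INPT algorithm itself), the sketch below derives explicit coordinates for an initial batch from its kernel matrix and then maps new samples onto the same coordinate system without altering the old coordinates.

```python
# Illustrative sketch: explicit RKHS coordinates from a kernel matrix for a base
# batch, plus a mapping for new samples that leaves the old coordinates unchanged
# (no centering). Not the paper's algorithm.
import numpy as np

def rbf(a, b, gamma=0.5):
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

class IncrementalProjection:
    def __init__(self, X0, gamma=0.5):
        self.gamma = gamma
        self.X0 = X0
        K = rbf(X0, X0, gamma)
        vals, vecs = np.linalg.eigh(K)
        keep = vals > 1e-10
        self.coords0 = vecs[:, keep] * np.sqrt(vals[keep])   # Z with Z Z^T = K
        # maps kernel evaluations against the base batch to coordinates
        self.proj = vecs[:, keep] / np.sqrt(vals[keep])

    def transform(self, Xnew):
        """Coordinates of new samples; base-batch coordinates stay fixed."""
        k = rbf(Xnew, self.X0, self.gamma)
        return k @ self.proj

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base = rng.normal(size=(20, 3))
    inc = IncrementalProjection(base)
    new = rng.normal(size=(5, 3))
    print(inc.transform(new).shape)   # (5, rank of the base kernel matrix)
```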

  16. Thermal Decomposition Behaviors and Burning Characteristics of Composite Propellants Prepared Using Combined Ammonium Perchlorate/Ammonium Nitrate Particles

    NASA Astrophysics Data System (ADS)

    Kohga, Makoto; Handa, Saori

    2018-01-01

    The thermal decomposition behaviors and burning characteristics of propellants prepared with combined ammonium perchlorate (AP)/ammonium nitrate (AN) particles greatly depended on the AN content (χ) of the AP/AN sample. The thermal decomposition behaviors of the propellants prepared with the combined samples almost matched those of the propellants prepared by physically mixing AP and AN particles, while their burning characteristics differed. The use of combined AP/AN particles decreased the heterogeneity of the combustion waves of the AP/AN propellants because of the difference in the combustion wave structure. In contrast, the addition of Fe2O3 caused unsteady combustion of the propellants prepared using samples with χ values lower than 8.1%.

  17. Simple preparation of magnetic field-responsive structural colored Janus particles.

    PubMed

    Teshima, Midori; Seki, Takahiro; Takeoka, Yukikazu

    2018-03-08

    We established a simple method for preparing Janus particles displaying different structural colors using submicron-sized fine silica particles and magnetic nanoparticles composed of Fe3O4. A w/o emulsion is prepared by vortex-stirring an aqueous suspension of fine silica particles and magnetic nanoparticles with hexadecane containing an emulsifier. Subsequent drying of the emulsion on a hot plate with a magnetic stirrer provides a polydisperse particle aggregate displaying two different structural colors according to the ratio of fine silica particles to magnetic nanoparticles. This polydisperse aggregate can be converted into monodisperse particles simply by using a stainless-steel sieve. In the presence of a magnet, the monodisperse Janus particles can change their orientation and switch between the two structural colors.

  18. Estimating population exposure to ambient polycyclic aromatic hydrocarbon in the United States - Part I: Model development and evaluation.

    PubMed

    Zhang, Jie; Li, Jingyi; Wang, Peng; Chen, Gang; Mendola, Pauline; Sherman, Seth; Ying, Qi

    2017-02-01

    PAHs (polycyclic aromatic hydrocarbons) in the environment are of significant concern due to their negative impact on human health. PAH measurements at the air toxics monitoring network stations alone are not sufficient to provide a complete picture of ambient PAH levels or to allow accurate assessment of public exposure in the United States. In this study, speciation profiles for PAHs were prepared using data assembled from existing emission profile databases, and the Sparse Matrix Operator Kernel Emissions (SMOKE) model was used to generate gridded national emissions of 16 priority PAHs in the US. The estimated emissions were applied to simulate ambient concentrations of PAHs for January, April, July and October 2011, using a modified Community Multiscale Air Quality (CMAQ) model (v5.0.1) that treats the gas and particle phase partitioning of PAHs and their reactions in the gas phase and on particle surfaces. Predicted daily PAH concentrations at 61 air toxics monitoring sites generally agreed with observations, and averaging the predictions over a month reduced the overall error. The best model performance was obtained at rural sites, with an average mean fractional bias (MFB) of -0.03 and mean fractional error (MFE) of 0.70. Concentrations at suburban and urban sites were underestimated, with overall MFB = -0.57 and MFE = 0.89. Predicted PAH concentrations were highest in January with better model performance (MFB = 0.12, MFE = 0.69; including all sites), and lowest in July with worse model performance (MFB = -0.90, MFE = 1.08). Including heterogeneous reactions of several PAHs with O3 on particle surfaces reduced the over-prediction bias in winter, although significant uncertainties are expected due to the relatively simple treatment of the heterogeneous reactions in the current model. Copyright © 2016 Elsevier Ltd. All rights reserved.
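
    For reference, MFB and MFE are commonly computed as the mean (absolute) fractional difference between paired predictions and observations. The sketch below uses that common convention with made-up numbers; the exact formula used in the study is not spelled out here.

```python
# Illustrative sketch (a common convention, assumed rather than taken from the
# paper): mean fractional bias (MFB) and mean fractional error (MFE) between
# predicted and observed concentrations.
import numpy as np

def mfb_mfe(predicted, observed):
    p = np.asarray(predicted, dtype=float)
    o = np.asarray(observed, dtype=float)
    frac = 2.0 * (p - o) / (p + o)        # fractional difference per paired sample
    return frac.mean(), np.abs(frac).mean()

if __name__ == "__main__":
    pred = [0.8, 1.1, 0.5, 2.0]           # hypothetical PAH concentrations
    obs = [1.0, 1.0, 0.9, 1.8]
    mfb, mfe = mfb_mfe(pred, obs)
    print(f"MFB = {mfb:.2f}, MFE = {mfe:.2f}")
```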

  19. Particle characterization of poorly water-soluble drugs using a spray freeze drying technique.

    PubMed

    Kondo, Masahiro; Niwa, Toshiyuki; Okamoto, Hirokazu; Danjo, Kazumi

    2009-07-01

    A spray freeze drying (SFD) method was developed to prepare composite particles of a poorly water-soluble drug. An aqueous solution of the drug and a functional polymer was sprayed directly into liquid nitrogen, and the frozen droplets were then lyophilized in a freeze-dryer to obtain solid particles. Tolbutamide (TBM) and hydroxypropylmethylcellulose (HPMC) were used as the model drug and the water-soluble polymeric carrier, respectively. Morphological observation revealed that spherical particles with a porous structure could be obtained by optimizing the loading of drug and polymer in the spray solution. In particular, the particles prepared by SFD had a significantly larger specific surface area than those prepared by the standard spray drying technique. The physicochemical properties of the resultant particles were found to depend on the concentration of the spray solution. When a solution with a high content of drug and polymer was used, the particle size of the resulting composite particles increased and the particles became spherical. The specific surface area of the particles also increased with higher solution concentration. Evaluation of the spray solution indicated that these results depended on its viscosity. In addition, when composite particles of TBM were prepared by the SFD method with HPMC as a carrier, the crystallinity of TBM decreased as the proportion of HPMC increased, and at a TBM:HPMC ratio of 1:5 the crystallinity of the particles disappeared completely. Dissolution tests showed that the release profiles of poorly water-soluble TBM from SFD composite particles were drastically improved compared to bulk TBM. The 70% release time T(70) of composite particles prepared by the SFD method was much smaller than that of bulk TBM in a solution of pH 1.2, and slightly lower in a solution of pH 6.8. In addition, the release rates were faster than those of standard spray-dried (SD) composite particles in solutions of pH 1.2 and 6.8. When composite particles were prepared from mixtures with various composition ratios, T(70) decreased as the proportion of HPMC increased; the release rate was faster than that of bulk TBM in solutions of pH 6.8 as well as pH 1.2.

  20. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10^-11 and invasion time error to <5%.
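
    The difference between the two discretization methods can be seen in a few lines of code. The sketch below is an independent 1-D illustration, not the study's code: it builds a Gaussian dispersal kernel either by sampling the density at cell centers or by integrating it over each cell, and the two agree closely for large σ while diverging for small σ.

```python
# Illustrative sketch: cell-center vs. cell-integration discretization of a
# 1-D Gaussian dispersal kernel on a grid.
import numpy as np
from scipy.stats import norm

def cell_center_kernel(sigma, half_width):
    centers = np.arange(-half_width, half_width + 1, dtype=float)
    k = norm.pdf(centers, scale=sigma)
    return k / k.sum()                    # normalize so the kernel conserves mass

def cell_integration_kernel(sigma, half_width):
    edges = np.arange(-half_width - 0.5, half_width + 1.5)
    cdf = norm.cdf(edges, scale=sigma)
    k = np.diff(cdf)                      # probability mass falling in each cell
    return k / k.sum()

if __name__ == "__main__":
    for sigma in (0.2, 2.0):
        cc = cell_center_kernel(sigma, half_width=10)
        ci = cell_integration_kernel(sigma, half_width=10)
        print(f"sigma={sigma}: max |cell-center - cell-integration| = "
              f"{np.abs(cc - ci).max():.3e}")
```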

  1. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) represent a group of secondary metabolic products in plants and are often naturally occurring in plants and microorganisms. In a previous study, we found that AQS were produced by the enzymatic browning reaction in Chinese chestnut kernels. To find out whether the non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained a high amount of AQS. Thus, we confirmed that AQS could be produced during both the enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.

  2. Studies on stabilization mechanism and stealth effect of poloxamer 188 onto PLGA nanoparticles.

    PubMed

    Jain, Darshana; Athawale, Rajani; Bajaj, Amrita; Shrikhande, Shruti; Goel, Peeyush N; Gude, Rajiv P

    2013-09-01

    In nanoparticulate engineering for drug delivery systems, poloxamer triblock copolymers are employed as adsorbing molecules to modify the aggregation state and impart stability to products. The aim was to prepare nanoparticles using poloxamer 188 as a stabiliser and to investigate the mechanism of stabilisation of the prepared particles. Nanoparticles were prepared by the solvent diffusion method with poloxamer 188 as stabiliser. The hydrodynamic thickness and zeta potential of the prepared nanoparticles were determined by photon correlation spectroscopy. To study the extent of adsorption of poloxamer onto the prepared nanoparticles, adsorption isotherms were constructed, and the adsorbed amount of poloxamer 188 on the particles was determined by the depletion method. A macrophageal uptake study was performed to assess the uptake of the prepared nanoparticles using RAW 264.7 cell lines. The prepared nanoparticles showed a slight increase in particle size and in the absolute value of the zeta potential compared with uncoated particles, suggesting that this effect was due to adsorption of poloxamer 188. TEM studies and surface area analysis supported the particle size results, indicating preparation of particles with a thin layer of adsorbed poloxamer 188. Adsorption kinetics modelling suggested that at low concentrations (0.001-0.010 g/L) the Langmuir monolayer equation fits quite well, whereas at higher concentrations (above 0.010 g/L) multilayer adsorption of poloxamer 188 onto the prepared particles occurred. Thus the nanoparticles carried a multilayer of poloxamer 188 adsorbed onto the non-uniform PLGA surface. Results of the macrophageal uptake and liver cell studies show an adsorbed-concentration-dependent bypass of RES uptake of the nanoparticles. Hence, the results substantiate the application of adsorption isotherms for designing nanoparticles with the potential to exhibit prolonged circulation when administered in vivo. Copyright © 2013 Elsevier B.V. All rights reserved.
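
    As an aside on the low-concentration regime described above, fitting a Langmuir monolayer isotherm to adsorption data takes only a standard curve fitter. The sketch below uses entirely made-up concentrations and adsorbed amounts, not the study's measurements.

```python
# Illustrative sketch: fitting a Langmuir monolayer isotherm to hypothetical
# adsorption data (not the study's measurements).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k_l):
    """Adsorbed amount q as a function of equilibrium concentration c."""
    return q_max * k_l * c / (1.0 + k_l * c)

if __name__ == "__main__":
    # hypothetical equilibrium concentrations (g/L) and adsorbed amounts
    c = np.array([0.001, 0.002, 0.004, 0.006, 0.008, 0.010])
    q = np.array([0.12, 0.21, 0.33, 0.40, 0.44, 0.47])
    (q_max, k_l), _ = curve_fit(langmuir, c, q, p0=(0.6, 200.0))
    print(f"fitted q_max = {q_max:.2f}, K_L = {k_l:.1f}")
```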

  3. Preparation of 1,3,5-triamino-2,4,6-trinitrobenzene of submicron particle size

    DOEpatents

    Rigdon, Lester P [Livermore, CA; Moody, Gordon L [Tracy, CA; McGuire, Raymond R [Brentwood, CA

    2001-05-01

    A method is disclosed for the preparation of very small particle size, relatively pure 1,3,5-triamino-2,4,6-trinitrobenzene (TATB). Particles of TATB prepared according to the disclosed method are of submicron size and have a surface area in the range from about 3.8 to 27 square meters per gram.

  4. Preparation of 1,3,5-triamino-2,4,6-trinitrobenzene of submicron particle size

    DOEpatents

    Rigdon, Lester P.; Moody, Gordon L.; McGuire, Raymond R.

    2001-01-01

    A method is disclosed for the preparation of very small particle size, relatively pure 1,3,5-triamino-2,4,6-trinitrobenzene (TATB). Particles of TATB prepared according to the disclosed method are of submicron size and have a surface area in the range from about 3.8 to 27 square meters per gram.

  5. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    PubMed

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work studied the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels to form treatments with broken kernel ratios of 0, 40, 150, 350 or 1000 g kg(-1). Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became increasingly softer (P < 0.05) but the unbroken kernels became significantly harder. Moisture content and moisture uptake rate were positively correlated, and cooked rice hardness was negatively correlated, with the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  6. Synthesis and characterization of magnetic and non-magnetic core-shell polyepoxide micrometer-sized particles of narrow size distribution.

    PubMed

    Omer-Mizrahi, Melany; Margel, Shlomo

    2009-01-15

    Core polystyrene microspheres of narrow size distribution were prepared by dispersion polymerization of styrene in a mixture of ethanol and 2-methoxy ethanol. Uniform polyglycidyl methacrylate/polystyrene core-shell micrometer-sized particles were prepared by emulsion polymerization at 73 degrees C of glycidyl methacrylate in the presence of the core polystyrene microspheres. Core-shell particles with different properties (size, surface morphology and composition) have been prepared by changing various parameters belonging to the above seeded emulsion polymerization process, e.g., volumes of the monomer glycidyl methacrylate and the crosslinker monomer ethylene glycol dimethacrylate. Magnetic Fe(3)O(4)/polyglycidyl methacrylate/polystyrene micrometer-sized particles were prepared by coating the former core-shell particles with magnetite nanoparticles via a nucleation and growth mechanism. Characterization of the various particles has been accomplished by routine methods such as light microscopy, SEM, FTIR, BET and magnetic measurements.

  7. A deep convolutional neural network approach to single-particle recognition in cryo-electron microscopy.

    PubMed

    Zhu, Yanan; Ouyang, Qi; Mao, Youdong

    2017-07-21

    Single-particle cryo-electron microscopy (cryo-EM) has become a mainstream tool for the structural determination of biological macromolecular complexes. However, high-resolution cryo-EM reconstruction often requires hundreds of thousands of single-particle images. Particle extraction from experimental micrographs thus can be laborious and presents a major practical bottleneck in cryo-EM structural determination. Existing computational methods for particle picking often use low-resolution templates for particle matching, making them susceptible to reference-dependent bias. It is critical to develop a highly efficient template-free method for the automatic recognition of particle images from cryo-EM micrographs. We developed a deep learning-based algorithmic framework, DeepEM, for single-particle recognition from noisy cryo-EM micrographs, enabling automated particle picking, selection and verification in an integrated fashion. The kernel of DeepEM is built upon a convolutional neural network (CNN) composed of eight layers, which can be recursively trained to be highly "knowledgeable". Our approach exhibits improved performance and accuracy when tested on the standard KLH dataset. Application of DeepEM to several challenging experimental cryo-EM datasets demonstrated its ability to avoid the selection of unwanted particles and non-particles even when true particles contain few features. The DeepEM methodology, derived from a deep CNN, allows automated particle extraction from raw cryo-EM micrographs in the absence of a template. It demonstrates improved performance, objectivity and accuracy. Application of this novel method is expected to free up the labor involved in single-particle verification, significantly improving the efficiency of cryo-EM data processing.

  8. The GeantV project: Preparing the future of simulation

    DOE PAGES

    Amadio, G.; J. Apostolakis; Bandieramonte, M.; ...

    2015-12-23

    Detector simulation is consuming at least half of the HEP computing cycles, and even so, experiments have to take hard decisions on what to simulate, as their needs greatly surpass the availability of computing resources. New experiments still in the design phase such as FCC, CLIC and ILC as well as upgraded versions of the existing LHC detectors will push further the simulation requirements. Since the increase in computing resources is not likely to keep pace with our needs, it is therefore necessary to explore innovative ways of speeding up simulation in order to sustain the progress of High Energy Physics. The GeantV project aims at developing a high performance detector simulation system integrating fast and full simulation that can be ported on different computing architectures, including CPU accelerators. After more than two years of R&D the project has produced a prototype capable of transporting particles in complex geometries exploiting micro-parallelism, SIMD and multithreading. Portability is obtained via C++ template techniques that allow the development of machine-independent computational kernels. Furthermore, a set of tables derived from Geant4 for cross sections and final states provides a realistic shower development and, having been ported into a Geant4 physics list, can be used as a basis for a direct performance comparison.

  9. Exchange and correlation effects on plasmon dispersions and Coulomb drag in low-density electron bilayers

    NASA Astrophysics Data System (ADS)

    Badalyan, S. M.; Kim, C. S.; Vignale, G.; Senatore, G.

    2007-03-01

    We investigate the effect of exchange and correlation (XC) on the plasmon spectrum and the Coulomb drag between spatially separated low-density two-dimensional electron layers. We adopt a different approach, which employs dynamic XC kernels in the calculation of the bilayer plasmon spectra and of the plasmon-mediated drag, and static many-body local field factors in the calculation of the particle-hole contribution to the drag. The spectrum of bilayer plasmons and the drag resistivity are calculated in a broad range of temperatures taking into account both intra- and interlayer correlation effects. We observe that both plasmon modes are strongly affected by XC corrections. After the inclusion of the complex dynamic XC kernels, a decrease of the electron density induces shifts of the plasmon branches in opposite directions. This is in stark contrast with the tendency observed within random phase approximation that both optical and acoustical plasmons move away from the boundary of the particle-hole continuum with a decrease in the electron density. We find that the introduction of XC corrections results in a significant enhancement of the transresistivity and qualitative changes in its temperature dependence. In particular, the large high-temperature plasmon peak that is present in the random phase approximation is found to disappear when the XC corrections are included. Our numerical results at low temperatures are in good agreement with the results of recent experiments by Kellogg [Solid State Commun. 123, 515 (2002)].

  10. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show a clear gain, compared to several shallow kernels, for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
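
    The sketch below is only a loose illustration of such a layered kernel, not the authors' networks: a first layer takes a weighted combination of elementary kernels and a second layer applies an exponential activation, which keeps the result positive semi-definite. The weights and the choice of activation are assumptions.

```python
# Illustrative sketch: a two-layer "deep" kernel built by nonlinearly combining
# elementary kernels. The elementwise exponential is used as the activation
# because exp() of a PSD kernel (a sum of Hadamard powers with nonnegative
# coefficients) is again PSD.
import numpy as np

def linear_k(X, Y):
    return X @ Y.T

def rbf_k(X, Y, gamma=0.5):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def poly_k(X, Y, degree=2):
    return (1.0 + X @ Y.T) ** degree

def deep_kernel(X, Y, w1=(0.1, 0.6, 0.02), w2=0.5):
    """Layer 1: weighted combination of elementary kernels; layer 2: exp activation."""
    base = [linear_k(X, Y), rbf_k(X, Y), poly_k(X, Y)]
    layer1 = sum(w * k for w, k in zip(w1, base))
    return np.exp(w2 * layer1)            # PSD-preserving nonlinearity

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 4))
    K = deep_kernel(X, X)
    print("min eigenvalue:", np.linalg.eigvalsh(K).min())   # should be >= ~0
```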

  11. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Study on Energy Productivity Ratio (EPR) at palm kernel oil processing factory: case study on PT-X at Sumatera Utara Plantation

    NASA Astrophysics Data System (ADS)

    Haryanto, B.; Bukit, R. Br; Situmeang, E. M.; Christina, E. P.; Pandiangan, F.

    2018-02-01

    The purpose of this study was to determine the performance, productivity and feasibility of the operation of a palm kernel processing plant based on the Energy Productivity Ratio (EPR). EPR is expressed as the ratio of the output energy and by-products to the input energy. A palm kernel plant processes palm kernels into palm kernel oil. The procedure started with collecting the data needed as energy inputs, such as palm kernel prices, energy demand and depreciation of the factory. The energy output and its by-products comprise the whole production value, such as the palm kernel oil price and the prices of the remaining products such as shells and pulp. The energy equivalence of palm kernel oil was calculated to analyse the value of the Energy Productivity Ratio (EPR) based on annual processing capacity. The investigation was carried out at the Kernel Oil Processing Plant PT-X at the Sumatera Utara plantation. The value of EPR was 1.54 (EPR > 1), which indicated that the processing of palm kernels into palm kernel oil is feasible to operate in terms of energy productivity.
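
    Expressed as code, the ratio is straightforward bookkeeping. The sketch below uses entirely hypothetical input and output values, not the study's data (which gave EPR = 1.54).

```python
# Illustrative sketch: Energy Productivity Ratio as total output value
# (oil plus by-products) over total input value, with EPR > 1 read as feasible.
def energy_productivity_ratio(outputs, inputs):
    return sum(outputs.values()) / sum(inputs.values())

if __name__ == "__main__":
    # hypothetical values, not the study's figures
    inputs = {"palm_kernels": 100.0, "energy_demand": 20.0, "depreciation": 10.0}
    outputs = {"palm_kernel_oil": 150.0, "shells": 20.0, "pulp": 10.0}
    epr = energy_productivity_ratio(outputs, inputs)
    print(f"EPR = {epr:.2f} -> {'feasible' if epr > 1 else 'not feasible'}")
```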

  13. Method of preparing silicon carbide particles dispersed in an electrolytic bath for composite electroplating of metals

    DOEpatents

    Peng, Yu-Min; Wang, Jih-Wen; Liue, Chun-Ying; Yeh, Shinn-Horng

    1994-01-01

    A method for preparing silicon carbide particles dispersed in an electrolytic bath for composite electroplating of metals includes the steps of washing the silicon carbide particles with an organic solvent; washing the silicon carbide particles with an inorganic acid; grinding the silicon carbide particles; and heating the silicon carbide particles in a nickel-containing solution at a boiling temperature for a predetermined period of time.

  14. The preparation and the sustained release of titanium dioxide hollow particles encapsulating L-ascorbic acid

    NASA Astrophysics Data System (ADS)

    Tominaga, Yoko; Kadota, Kazunori; Shimosaka, Atsuko; Yoshida, Mikio; Oshima, Kotaro; Shirakawa, Yoshiyuki

    2018-05-01

    Titanium dioxide hollow particles encapsulating L-ascorbic acid were prepared via a sol-gel process using an inkjet nozzle, and the sustained release and the protection against degradation of the L-ascorbic acid in the particles were investigated. The morphology of the titanium dioxide particles was evaluated by scanning electron microscopy (SEM) and energy dispersive X-ray spectrometry (EDS). The sustained release and the protection against degradation of L-ascorbic acid were estimated by the dialysis bag method in phosphate-buffered saline (PBS, pH 7.4) as the release medium. The prepared titanium dioxide particles exhibited spherical porous structures, and their particle size distribution was uniform. The hollow titanium dioxide particles encapsulating L-ascorbic acid showed sustained release. It was also found that the degradation of L-ascorbic acid could be inhibited by encapsulating it in the titanium dioxide hollow particles.

  15. Oxidation property of SiO2-supported small nickel particle prepared by the sol-gel method

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y.; Yamashita, S.; Afiza, N.; Katayama, M.; Inada, Y.

    2016-05-01

    The oxidation properties of small SiO2-supported Ni particles have been studied by means of the in-situ XAFS method. Ni particles with an average diameter of 4 nm supported on SiO2 were prepared by the sol-gel method. The XANES spectrum of the small metallic Ni particles was clearly different from that of bulk Ni. Exposure to diluted O2 gas at room temperature promoted surface oxidation of the Ni(0) particles. During the temperature-programmed oxidation process, the supported Ni(0) particles were quantitatively oxidized to NiO, and the oxidation temperature was lower by ca. 200 °C than that of SiO2-supported Ni particles with a larger particle radius of 17 nm prepared by the impregnation method.

  16. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    PubMed Central

    2013-01-01

    Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
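
    For intuition on how such a kernel treats discrete genotypes, the sketch below builds a diffusion (heat) kernel by taking each locus as a complete graph over the genotype codes and multiplying across loci, and compares it with a Gaussian kernel on the same markers. The parameterization and the random markers are assumptions, not the study's data.

```python
# Illustrative sketch: a diffusion (heat) kernel for discrete marker genotypes,
# built as a product of per-locus complete-graph heat kernels, compared against
# a Gaussian kernel on the same markers.
import numpy as np

def diffusion_kernel(G1, G2, beta=0.5, n_states=3):
    """G1, G2: integer genotype matrices (individuals x markers), codes 0..n_states-1."""
    m = n_states
    # per-locus heat-kernel ratio between "different" and "same" genotype codes
    ratio = (1.0 - np.exp(-m * beta)) / (1.0 + (m - 1) * np.exp(-m * beta))
    # number of mismatching loci (Hamming distance) for every pair of individuals
    mismatches = (G1[:, None, :] != G2[None, :, :]).sum(axis=-1)
    return ratio ** mismatches

def gaussian_kernel(G1, G2, gamma=0.05):
    d2 = np.sum((G1[:, None, :] - G2[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    G = rng.integers(0, 3, size=(8, 50))     # 8 individuals, 50 SNP markers
    Kd = diffusion_kernel(G, G)
    Kg = gaussian_kernel(G.astype(float), G.astype(float))
    iu = np.triu_indices(8, 1)
    print("correlation of off-diagonal entries:",
          np.corrcoef(Kd[iu], Kg[iu])[0, 1])
```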

  17. Incorporation of memory effects in coarse-grained modeling via the Mori-Zwanzig formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zhen; Bian, Xin; Karniadakis, George Em, E-mail: george-karniadakis@brown.edu

    2015-12-28

    The Mori-Zwanzig formalism for coarse-graining a complex dynamical system typically introduces memory effects. The Markovian assumption of delta-correlated fluctuating forces is often employed to simplify the formulation of coarse-grained (CG) models and numerical implementations. However, when the time scales of a system are not clearly separated, the memory effects become strong and the Markovian assumption becomes inaccurate. To this end, we incorporate memory effects into CG modeling by preserving non-Markovian interactions between CG variables, and the memory kernel is evaluated directly from microscopic dynamics. For a specific example, molecular dynamics (MD) simulations of star polymer melts are performed while the corresponding CG system is defined by grouping many bonded atoms into single clusters. Then, the effective interactions between CG clusters as well as the memory kernel are obtained from the MD simulations. The constructed CG force field with a memory kernel leads to a non-Markovian dissipative particle dynamics (NM-DPD). Quantitative comparisons between the CG models with Markovian and non-Markovian approximations indicate that including the memory effects using NM-DPD yields similar results as the Markovian-based DPD if the system has clear time scale separation. However, for systems with small separation of time scales, NM-DPD can reproduce correct short-time properties that are related to how the system responds to high-frequency disturbances, which cannot be captured by the Markovian-based DPD model.

  18. Three-body spectrum in a finite volume: The role of cubic symmetry

    DOE PAGES

    Doring, M.; Hammer, H. -W.; Mai, M.; ...

    2018-06-15

    The three-particle quantization condition is partially diagonalized in the center-of-mass frame by using cubic symmetry on the lattice. To this end, instead of spherical harmonics, the kernel of the Bethe-Salpeter equation for particle-dimer scattering is expanded in the basis functions of different irreducible representations of the octahedral group. Such a projection is of particular importance for the three-body problem in the finite volume due to the occurrence of three-body singularities above breakup. Additionally, we study the numerical solution and properties of such a projected quantization condition in a simple model. It is shown that, for large volumes, these solutions allow for an instructive interpretation of the energy eigenvalues in terms of bound and scattering states.

  19. Three-body spectrum in a finite volume: The role of cubic symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doring, M.; Hammer, H. -W.; Mai, M.

    The three-particle quantization condition is partially diagonalized in the center-of-mass frame by using cubic symmetry on the lattice. To this end, instead of spherical harmonics, the kernel of the Bethe-Salpeter equation for particle-dimer scattering is expanded in the basis functions of different irreducible representations of the octahedral group. Such a projection is of particular importance for the three-body problem in the finite volume due to the occurrence of three-body singularities above breakup. Additionally, we study the numerical solution and properties of such a projected quantization condition in a simple model. It is shown that, for large volumes, these solutions allow for an instructive interpretation of the energy eigenvalues in terms of bound and scattering states.

  20. Exact solutions of the population balance equation including particle transport, using group analysis

    NASA Astrophysics Data System (ADS)

    Lin, Fubiao; Meleshko, Sergey V.; Flood, Adrian E.

    2018-06-01

    The population balance equation (PBE) has received an unprecedented amount of attention in recent years from both academics and industrial practitioners because of its long history, widespread use in engineering, and applicability to a wide variety of particulate and discrete-phase processes. However, it is typically impossible to obtain analytical solutions, although in almost every case a numerical solution of the PBEs can be obtained. In this article, the symmetries of PBEs with homogeneous coagulation kernels involving aggregation, breakage and growth processes and particle transport in one dimension are found by directly solving the determining equations. Using the optimal system of one- and two-dimensional subalgebras, all invariant solutions and reduced equations are obtained. In particular, an explicit analytical physical solution is also presented.

  1. Precipitation growth in convective clouds. [hail

    NASA Technical Reports Server (NTRS)

    Srivastava, R. C.

    1981-01-01

    Analytical solutions to the equations of both the growth and motion of hailstones in updrafts and of cloud water contents which vary linearly with height were used to investigate hail growth in a model cloud. A strong correlation was found between the hail embryo starting position and its trajectory and final size. A simple model of the evolution of particle size distribution by coalescence and spontaneous and binary disintegrations was formulated. Solutions for the mean mass of the distribution and the equilibrium size distribution were obtained for the case of constant collection kernel and disintegration parameters. Azimuthal scans of Doppler velocity at a number of elevation angles were used to calculate high resolution vertical profiles of particle speed and horizontal divergence (the vertical air velocity) in a region of widespread precipitation trailing a mid-latitude squall line.

  2. Viscosity scaling in concentrated dispersions and its impact on colloidal aggregation.

    PubMed

    Nicoud, Lucrèce; Lattuada, Marco; Lazzari, Stefano; Morbidelli, Massimo

    2015-10-07

    Gaining fundamental knowledge about diffusion in crowded environments is of great relevance in a variety of research fields, including reaction engineering, biology, pharmacy and colloid science. In this work, we determine the effective viscosity experienced by a spherical tracer particle immersed in a concentrated colloidal dispersion by means of Brownian dynamics simulations. We characterize how the effective viscosity increases from the solvent viscosity for small tracer particles to the macroscopic viscosity of the dispersion when large tracer particles are employed. Our results show that the crossover between these two regimes occurs at a tracer particle size comparable to the host particle size. In addition, it is found that data points obtained in various host dispersions collapse on one master curve when the normalized effective viscosity is plotted as a function of the ratio between the tracer particle size and the mean host particle size. In particular, this master curve was obtained by varying the volume fraction, the average size and the polydispersity of the host particle distribution. Finally, we extend these results to determine the size dependent effective viscosity experienced by a fractal cluster in a concentrated colloidal system undergoing aggregation. We include this scaling of the effective viscosity in classical aggregation kernels, and we quantify its impact on the kinetics of aggregate growth as well as on the shape of the aggregate distribution by means of population balance equation calculations.

  3. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  4. An SVM model with hybrid kernels for hydrological time series

    NASA Astrophysics Data System (ADS)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecasting of climate/weather and its impact on other environmental variables such as the hydrologic response to climate/weather. When using an SVM, the choice of kernel function plays a key role. Conventional SVM models mostly use a single type of kernel function, e.g., the radial basis kernel function. Given that several kernel functions are available, each with its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to the SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of the radial basis kernel and the polynomial kernel for the forecast of monthly flowrate at two gaging stations using the SVM approach. The results indicate a significant improvement in the accuracy of the predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such a hybrid kernel approach for SVM applications.
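
    A weighted sum of two valid kernels is itself a valid kernel, so such a hybrid can be passed to an off-the-shelf SVM as a precomputed Gram matrix. The sketch below shows this with scikit-learn on synthetic data; the weight, kernel parameters and regressor settings are placeholders, not the study's calibrated values.

```python
# Illustrative sketch: an SVM regressor using a precomputed hybrid kernel,
# a weighted combination of RBF and polynomial kernels, on synthetic data.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def hybrid_gram(X, Y, w=0.7, gamma=0.1, degree=2):
    """Weighted combination of RBF and polynomial kernels (still a valid kernel)."""
    return w * rbf_kernel(X, Y, gamma=gamma) + (1 - w) * polynomial_kernel(X, Y, degree=degree)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(100, 3))            # e.g., lagged monthly predictors
    y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=100)
    X_test = rng.normal(size=(20, 3))

    model = SVR(kernel="precomputed", C=10.0)
    model.fit(hybrid_gram(X_train, X_train), y_train)
    y_pred = model.predict(hybrid_gram(X_test, X_train))
    print(y_pred[:5])
```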

  5. Fabrication of Controllable Pore and Particle Size of Mesoporous Silica Nanoparticles via a Liquid-phase Synthesis Method and Its Absorption Characteristics

    NASA Astrophysics Data System (ADS)

    Nandiyanto, Asep Bayu Dani; Iskandar, Ferry; Okuyama, Kikuo

    2011-12-01

    Monodisperse spherical mesoporous silica nanoparticles were successfully synthesized using a liquid-phase synthesis method. The resulting particles had pore sizes controllable from several to tens of nanometers and outer diameters of several tens of nanometers. The pore size and outer diameter were controlled by adjusting the precursor solution ratios. In addition, we investigated the adsorption ability of the prepared particles. Large organic molecules were readily adsorbed onto the prepared porous silica particles, a result that was not obtained when using commercial dense silica particles and/or hollow silica particles. Given this result, the prepared mesoporous silica particles may be used efficiently in various applications, such as sensors, pharmaceuticals, environmentally sensitive pursuits, etc.

  6. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale dataset. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. [Preparation of ibuprofen/EC-PVP sustained-release composite particles by supercritical CO2 anti-solvent technology].

    PubMed

    Cai, Jin-Yuan; Huang, De-Chun; Wang, Zhi-Xiang; Dang, Bei-Lei; Wang, Qiu-Ling; Su, Xin-Guang

    2012-06-01

    Ibuprofen/ethyl-cellulose (EC)-polyvinylpyrrolidone (PVP) sustained-release composite particles were prepared by using supercritical CO2 anti-solvent technology. With drug loading as the main evaluation index, orthogonal experimental design was used to optimize the preparation process of EC-PVP/ibuprofen composite particles. The experiments such as encapsulation efficiency, particle size distribution, electron microscope analysis, infrared spectrum (IR), differential scanning calorimetry (DSC) and in vitro dissolution were used to analyze the optimal process combination. The orthogonal experimental optimization process conditions were set as follows: crystallization temperature 40 degrees C, crystallization pressure 12 MPa, PVP concentration 4 mgmL(-1), and CO2 velocity 3.5 Lmin(-1). Under the optimal conditions, the drug loading and encapsulation efficiency of ibuprofen/EC-PVP composite particles were 12.14% and 52.21%, and the average particle size of the particles was 27.621 microm. IR and DSC analysis showed that PVP might complex with EC. The experiments of in vitro dissolution showed that ibuprofen/EC-PVP composite particles had good sustained-release effect. Experiment results showed that, ibuprofen/EC-PVP sustained-release composite particles can be prepared by supercritical CO2 anti-solvent technology.

  8. Multiple kernels learning-based biological entity relationship extraction method.

    PubMed

    Dongliang, Xu; Jingchang, Pan; Bailing, Wang

    2017-09-20

    Automatically extracting protein entity interaction information from the biomedical literature can help to build protein relation networks and design new drugs. More than 20 million literature abstracts are included in MEDLINE, the most authoritative textual database in the field of biomedicine, and their number grows exponentially over time. This rapid expansion of the biomedical literature can be difficult to absorb or analyze manually, so efficient and automated search engines based on text mining techniques are necessary to explore it. The P, R, and F values of the tag graph method on the AImed corpus are 50.82, 69.76, and 58.61%, respectively. The P, R, and F values of the tag graph kernel method on the other four evaluation corpora are 2-5% higher than those of the all-paths graph kernel. The P, R and F values of the two fusion methods combining the feature kernel and the tag graph kernel are 53.43, 71.62 and 61.30%, and 55.47, 70.29 and 60.37%, respectively. This indicates that the performance of the two kernel fusion methods is better than that of a single kernel. In comparison with the all-paths graph kernel method, the tag graph kernel method is superior in terms of overall performance. Experiments show that the performance of the multi-kernel method is better than that of the three separate single-kernel methods and the dual-mutually fused kernel method used herein on five corpus sets.

  9. Removal of gadolinium-based contrast agents: adsorption on activated carbon.

    PubMed

    Elizalde-González, María P; García-Díaz, Esmeralda; González-Perea, Mario; Mattusch, Jürgen

    2017-03-01

    Three carbon samples were employed in this work, including a commercial activated carbon (1690 m² g⁻¹), an activated carbon prepared from guava seeds (637 m² g⁻¹), and an activated carbon prepared from avocado kernel (1068 m² g⁻¹), to study the adsorption of the following gadolinium-based contrast agents (GBCAs): gadoterate meglumine (Dotarem®), gadopentetate dimeglumine (Magnevist®), and gadoxetate disodium (Primovist®). The activation conditions with H3PO4 were optimized using a Taguchi methodology to obtain mesoporous materials. The best removal efficiency per square meter in a batch system, in aqueous solution and in model urine, was achieved by the avocado kernel carbon, in which mesoporosity prevails over microporosity. The kinetic adsorption curves were described by a pseudo-second-order equation, and the adsorption isotherms in the concentration range 0.5-6 mM fit the Freundlich equation. Chemical characterization of the surfaces shows that materials with a greater amount of phenolic functional groups adsorb the GBCAs better. Adsorption strongly depends on the pH due to the combination of the contrast agents' protonated forms and the carbon surface charge. The tested carbon samples were able to adsorb 70-90% of the GBCAs in aqueous solution and less in model urine. This research proposes a method for the elimination of GBCAs from patient urine before its discharge into wastewater.

  10. The application of k-Nearest Neighbour in the identification of high potential archers based on relative psychological coping skills variables

    NASA Astrophysics Data System (ADS)

    Taha, Zahari; Muazu Musa, Rabiu; Majeed, Anwar P. P. Abdul; Razali Abdullah, Mohamad; Muaz Alim, Muhammad; Nasir, Ahmad Fakhri Ab

    2018-04-01

    The present study aims at classifying and predicting high and low potential archers from a collection of psychological coping skills variables trained on different k-Nearest Neighbour (k-NN) kernels. Fifty youth archers with a mean age of 17.0 years (standard deviation 0.056), gathered from various archery programmes, completed a one-end shooting score test. A psychological coping skills inventory, which evaluates the archers' level of related coping skills, was filled out by the archers prior to their shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed. k-NN models, i.e. fine, medium, coarse, cosine, cubic and weighted kernel functions, were trained on the psychological variables. The k-means analysis clustered the archers into high psychologically prepared archers (HPPA) and low psychologically prepared archers (LPPA), respectively. It was demonstrated that the cosine k-NN model exhibited good accuracy and precision throughout the exercise, with an accuracy of 94% and a considerably lower error rate for the prediction of the HPPA and the LPPA compared to the rest of the models. The findings of this investigation can be valuable to coaches and sports managers in recognising high potential athletes from the selected psychological coping skills variables examined, which would consequently save time and energy during talent identification and development programmes.
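
    The k-NN variants named above correspond to different neighbourhood sizes, distance metrics and weighting schemes. The sketch below is a rough scikit-learn analogue on synthetic data; the exact MATLAB presets, the feature values and the reported accuracy figures are not reproduced here.

```python
# Illustrative sketch: comparing several k-NN configurations (approximate
# analogues of "fine", "medium", "coarse", "cosine", "cubic", "weighted") on
# synthetic two-cluster data standing in for the coping-skill scores.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

VARIANTS = {
    "fine":     KNeighborsClassifier(n_neighbors=1),
    "medium":   KNeighborsClassifier(n_neighbors=10),
    "coarse":   KNeighborsClassifier(n_neighbors=25),
    "cosine":   KNeighborsClassifier(n_neighbors=10, metric="cosine"),
    "cubic":    KNeighborsClassifier(n_neighbors=10, metric="minkowski", p=3),
    "weighted": KNeighborsClassifier(n_neighbors=10, weights="distance"),
}

if __name__ == "__main__":
    # 50 "archers", two clusters standing in for HPPA / LPPA labels from k-means
    X, y = make_blobs(n_samples=50, centers=2, n_features=5, random_state=7)
    for name, clf in VARIANTS.items():
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name:8s} k-NN accuracy: {acc:.2f}")
```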

  11. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...

  12. 7 CFR 810.206 - Grades and grade requirements for barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... weight per bushel (pounds) Sound barley (percent) Maximum Limits of— Damaged kernels 1 (percent) Heat damaged kernels (percent) Foreign material (percent) Broken kernels (percent) Thin barley (percent) U.S... or otherwise of distinctly low quality. 1 Includes heat-damaged kernels. Injured-by-frost kernels and...

  13. Effect of pH on ionic liquid mediated synthesis of gold nanoparticles using Elaeis guineensis (palm oil) kernel extract

    NASA Astrophysics Data System (ADS)

    Irfan, Muhammad; Ahmad, Tausif; Moniruzzaman, Muhammad; Abdullah, Bawadi

    2017-05-01

    This study was conducted for the microwave-assisted synthesis of stable gold nanoparticles (AuNPs) by reduction of chloroauric acid with Elaeis guineensis (palm oil) kernel (POK) extract, which was prepared in an aqueous solution of the ionic liquid [EMIM][OAc] (1-ethyl-3-methylimidazolium acetate). The effect of the initial pH of the reaction mixture (3.5-8.5) on the SPR absorbance, maximum wavelength (λmax) and size distribution of the AuNPs was observed. Changing the pH of the reaction mixture from the acidic to the basic region resulted in the appearance of strong SPR absorption peaks and a blue shift of λmax from 533 nm to 522 nm. TEM analysis revealed the formation of predominantly spherical AuNPs with a mean diameter of 8.51 nm. The presence of reducing moieties such as flavonoids, phenolic and carboxylic groups in the POK extract was confirmed by FTIR analysis. The colloidal solution of AuNPs remained stable at room temperature, and an insignificant difference in zeta value was recorded over the experimental period of 4 months.

  14. The partial replacement of palm kernel shell by carbon black and halloysite nanotubes as fillers in natural rubber composites

    NASA Astrophysics Data System (ADS)

    Daud, Shuhairiah; Ismail, Hanafi; Bakar, Azhar Abu

    2017-07-01

    The effects of the partial replacement of palm kernel shell (PKS) powder by carbon black (CB) and halloysite nanotubes (HNT) on the tensile properties, rubber-filler interaction, thermal properties and morphology of natural rubber (NR) composites were investigated. Compositions of NR/PKS/CB and NR/PKS/HNT composites of 20/0, 15/5, 10/10, 5/15 and 0/20 parts per hundred rubber (phr) were prepared on a two-roll mill. The results showed that the tensile strength and the moduli at 100% elongation (M100) and 300% elongation (M300) were higher for the NR/PKS/CB composites than for the NR/PKS/HNT composites. The NR/PKS/CB composites had the lowest elongation at break (Eb). The effect of the commercial fillers in the NR/PKS composites on the tensile properties was confirmed by the rubber-filler interaction and a scanning electron microscopy (SEM) study. The thermal stability of the PKS-filled NR composites partially replaced with commercial fillers was also determined by thermogravimetric analysis (TGA).

  15. Characteristics of uranium carbonitride microparticles synthesized using different reaction conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, Chinthaka M; Lindemer, Terrence; Voit, Stewart L

    2014-11-01

    Three different sets of experimental conditions, varying the cover gases during sample preparation, were tested to synthesize uranium carbonitride (UC1-xNx) microparticles. In the first two sets of experiments, using (N2 to N2-4%H2 to Ar) and (Ar to N2 to Ar) environments, single-phase UC1-xNx was synthesized. When reducing environments (Ar-4%H2 to N2-4%H2 to Ar-4%H2) were utilized, theoretical densities up to 97% were obtained for single-phase UC1-xNx kernels. Physical and chemical characteristics such as density, phase purity, and chemical composition of the synthesized UC1-xNx materials for the different experimental conditions used are provided. In-depth analysis of the microstructures of UC1-xNx has been carried out and is discussed with the objective of large-batch fabrication of high-density UC1-xNx kernels.

  16. Preparation and flow cytometry of uniform silica-fluorescent dye microspheres.

    PubMed

    Bele, Marjan; Siiman, Olavi; Matijević, Egon

    2002-10-15

    Uniform fluorescent silica-dye microspheres have been prepared by coating preformed monodispersed silica particles with silica layers containing rhodamine 6G or acridine orange. The resulting dispersions exhibit intense fluorescent emission between 500 and 600 nm, over a broad excitation wavelength range of 460 to 550 nm, even with exceedingly small amounts of dyes incorporated into the silica particles (10-30 ppm, expressed as weight of dye relative to weight of dry particles). The fluorescent particles can be prepared in micrometer diameters suitable for analyses using flow cytometry with 488-nm laser excitation.

  17. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  18. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  19. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will not...

  20. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  1. The Classification of Diabetes Mellitus Using Kernel k-means

    NASA Astrophysics Data System (ADS)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes mellitus is a metabolic disorder characterized by chronically elevated blood glucose. Automatic detection of diabetes mellitus remains challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm; it uses kernel learning, which enables it to handle data that are not linearly separable and distinguishes it from standard k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means performs well and is considerably better than SOM.
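
    As an illustration of the approach described above, the following is a minimal Python sketch of kernel k-means with an RBF kernel, using the kernel trick to compute squared distances to cluster centroids in feature space. The gamma value, cluster count, and scikit-learn/NumPy usage are illustrative assumptions, not details taken from the study.

      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel

      def kernel_kmeans(X, n_clusters=2, gamma=0.5, n_iter=50, seed=0):
          """Minimal kernel k-means: cluster in RBF feature space via the kernel trick."""
          rng = np.random.default_rng(seed)
          K = rbf_kernel(X, gamma=gamma)              # n x n Gram matrix
          labels = rng.integers(n_clusters, size=len(X))
          for _ in range(n_iter):
              dist = np.zeros((len(X), n_clusters))
              for c in range(n_clusters):
                  mask = labels == c
                  if not mask.any():                   # re-seed an empty cluster
                      mask[rng.integers(len(X))] = True
                  # ||phi(x) - mu_c||^2 up to the constant K(x, x) term
                  dist[:, c] = -2.0 * K[:, mask].mean(axis=1) + K[np.ix_(mask, mask)].mean()
              new_labels = dist.argmin(axis=1)
              if np.array_equal(new_labels, labels):
                  break
              labels = new_labels
          return labels

    Applied to a patient-feature matrix, the returned cluster labels would then be matched against diagnosis outcomes to assess detection performance.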

  2. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    An understanding of UNICOS kernel internals is valuable. However, having the knowledge is only half the value; the other half comes from knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that utilize kernel information. In addition, algorithms, logic, and code will be discussed for accessing kernel information. Code segments will be provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information will also be discussed.

  3. Detection of maize kernels breakage rate based on K-means clustering

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Wang, Zhuo; Gao, Lei; Bai, Xiaoping

    2017-04-01

    In order to improve the recognition accuracy and efficiency of maize kernel breakage detection, this paper applies computer vision technology and detects maize kernel breakage with a K-means clustering algorithm. First, the collected RGB images are converted into Lab images, and the clarity of the original images is evaluated with a Sobel 8-gradient energy function. Finally, maize kernel breakage is detected using different pixel acquisition devices and different shooting angles. In this paper, broken maize kernels are identified by the color difference between intact kernels and broken kernels. The image clarity evaluation and the different shooting angles are used to verify that the clarity and shooting angle of the images have a direct influence on feature extraction. The results show that the K-means clustering algorithm can distinguish broken maize kernels effectively.
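
    A minimal Python sketch of the color-clustering step described above is given below: RGB frames are converted to the Lab color space and pixel colors are grouped with K-means so that clusters corresponding to broken kernels, intact kernels, and background can be separated. The cluster count, file handling, and the simple Sobel clarity score are assumptions for illustration, not the paper's exact pipeline.

      import cv2
      import numpy as np
      from sklearn.cluster import KMeans

      def segment_kernels_by_color(image_path, n_clusters=3, seed=0):
          """Cluster pixel colors in Lab space; clusters are later mapped to
          intact kernels, broken kernels, and background by their mean color."""
          bgr = cv2.imread(image_path)                   # OpenCV loads images as BGR
          lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
          pixels = lab.reshape(-1, 3).astype(np.float32)
          km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(pixels)
          label_img = km.labels_.reshape(lab.shape[:2])
          fractions = np.bincount(km.labels_, minlength=n_clusters) / km.labels_.size
          return label_img, km.cluster_centers_, fractions

      def clarity_score(gray):
          """Sobel-gradient energy as a rough image clarity measure."""
          gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
          gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
          return float(np.mean(gx ** 2 + gy ** 2))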

  4. Modeling adaptive kernels from probabilistic phylogenetic trees.

    PubMed

    Nicotra, Luca; Micheli, Alessio

    2009-01-01

    Modeling phylogenetic interactions is an open issue in many computational biology problems. In the context of gene function prediction we introduce a class of kernels for structured data leveraging a hierarchical probabilistic modeling of phylogeny among species. We derive three kernels belonging to this setting: a sufficient statistics kernel, a Fisher kernel, and a probability product kernel. The new kernels are used in the context of support vector machine learning. The kernels' adaptivity is obtained through the estimation of the parameters of a tree-structured model of evolution, using as observed data phylogenetic profiles encoding the presence or absence of specific genes in a set of fully sequenced genomes. We report results obtained in the prediction of the functional class of the proteins of the budding yeast Saccharomyces cerevisiae which compare favorably to a standard vector-based kernel and to a non-adaptive tree kernel function. A further comparative analysis is performed in order to assess the impact of the different components of the proposed approach. We show that the key features of the proposed kernels are the adaptivity to the input domain and the ability to deal with structured data interpreted through a graphical model representation.

  5. ADVANCING THE FUNDAMENTAL UNDERSTANDING AND SCALE-UP OF TRISO FUEL COATERS VIA ADVANCED MEASUREMENT AND COMPUTATIONAL TECHNIQUES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biswas, Pratim; Al-Dahhan, Muthanna

    2012-11-01

    Tri-isotropic (TRISO) fuel particle coating is critical for the future use of nuclear energy produced by advanced gas reactors (AGRs). The fuel kernels are coated using chemical vapor deposition in a spouted fluidized bed. The challenges encountered in operating TRISO fuel coaters are due to the fact that in modern AGRs, such as High Temperature Gas Reactors (HTGRs), the acceptable level of defective/failed coated particles is essentially zero. This specification requires processes that produce coated spherical particles with even coatings having extremely low defect fractions. Unfortunately, the scale-up and design of the current processes and coaters have been based on empirical approaches and are operated as black boxes. Hence, a voluminous amount of experimental development and trial and error work has been conducted. It has been clearly demonstrated that the quality of the coating applied to the fuel kernels is impacted by the hydrodynamics, solids flow field, and flow regime characteristics of the spouted bed coaters, which themselves are influenced by design parameters and operating variables. Further complicating the outlook for future fuel-coating technology and nuclear energy production is the fact that a variety of new concepts will involve fuel kernels of different sizes and with compositions of different densities. Therefore, without a fundamental understanding of the underlying phenomena of the spouted bed TRISO coater, a significant amount of effort is required for production of each type of particle with a significant risk of not meeting the specifications. This difficulty will significantly and negatively impact the applications of AGRs for power generation and cause further challenges to them as an alternative source of commercial energy production. Accordingly, the proposed work seeks to overcome such hurdles and advance the scale-up, design, and performance of TRISO fuel particle spouted bed coaters. The overall objectives of the proposed work are to advance the fundamental understanding of the hydrodynamics by systematically investigating the effect of design and operating variables, to evaluate the reported dimensionless groups as scaling factors, and to establish a reliable scale-up methodology for the TRISO fuel particle spouted bed coaters based on hydrodynamic similarity via advanced measurement and computational techniques. An additional objective is to develop an on-line non-invasive measurement technique based on gamma ray densitometry (i.e. Nuclear Gauge Densitometry) that can be installed and used for coater process monitoring to ensure proper performance and operation and to facilitate the developed scale-up methodology. To achieve the objectives set for the project, the work will use optical probes and gamma ray computed tomography (CT) (for the measurements of solids/voidage holdup cross-sectional distribution and radial profiles along the bed height, spouted diameter, and fountain height) and radioactive particle tracking (RPT) (for the measurements of the 3D solids flow field, velocity, turbulent parameters, circulation time, solids lagrangian trajectories, and many other spouted-bed-related hydrodynamic parameters). In addition, gas dynamic measurement techniques and pressure transducers will be utilized to complement the obtained information.
The measurements obtained by these techniques will be used as benchmark data to evaluate and validate the computational fluid dynamic (CFD) models (two fluid model or discrete particle model) and their closures. The validated CFD models and closures will be used to facilitate the developed methodology for scale-up, design and hydrodynamic similarity. Successful execution of this work and the proposed tasks will advance the fundamental understanding of the coater flow field and quantify it for proper and safe design, scale-up, and performance. Such achievements will overcome the barriers to AGR applications and will help assure that the US maintains nuclear energy as a feasible option to meet the nation's needs for energy and environmental safety. In addition, the outcome of the proposed study will have a broader impact on other processes that utilize spouted beds, such as coal gasification, granulation, drying, catalytic reactions, etc.

  6. Partial Deconvolution with Inaccurate Blur Kernel.

    PubMed

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
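
    The core idea of masking unreliable Fourier entries of the estimated kernel can be illustrated with the crude Python sketch below, which inverts only frequencies where the kernel spectrum is strong and keeps the blurred observation elsewhere. This is a frequency-masked Wiener filter used purely as an illustration; the threshold, noise level, and the filter itself are assumptions and do not reproduce the paper's partial map or E-M model.

      import numpy as np

      def masked_wiener_deconv(blurred, kernel, noise_level=1e-2, mask_thresh=0.05):
          """Invert only 'reliable' Fourier entries (large kernel magnitude);
          elsewhere fall back to the blurred observation."""
          H = np.fft.fft2(kernel, s=blurred.shape)        # zero-padded kernel spectrum
          B = np.fft.fft2(blurred)
          reliable = np.abs(H) > mask_thresh * np.abs(H).max()
          wiener = np.conj(H) / (np.abs(H) ** 2 + noise_level)
          X = np.where(reliable, wiener * B, B)
          return np.real(np.fft.ifft2(X))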

  7. Customizing poly(lactic-co-glycolic acid) particles for biomedical applications.

    PubMed

    Swider, Edyta; Koshkina, Olga; Tel, Jurjen; Cruz, Luis J; de Vries, I Jolanda M; Srinivas, Mangala

    2018-04-11

    Nano- and microparticles have increasingly widespread applications in nanomedicine, ranging from drug delivery to imaging. Poly(lactic-co-glycolic acid) (PLGA) particles are the most widely applied type of particle due to their biocompatibility and biodegradability. Here, we discuss the preparation of PLGA particles, and various modifications to tailor particles for applications in biological systems. We highlight new preparation approaches, including microfluidics and the PRINT method, and modifications of PLGA particles resulting in novel or responsive properties, such as Janus or upconversion particles. Finally, we describe how the preparation methods can, and should, be adapted to tailor the properties of particles for the desired biomedical application. Our aim is to enable researchers who work with PLGA particles to better appreciate the effects of the selected preparation procedure on the final properties of the particles and their biological implications. Nanoparticles are increasingly important in the field of biomedicine. Particles made of polymers are in the spotlight due to their biodegradability, biocompatibility, and versatility. In this review, we aim to discuss the range of formulation techniques, manipulations, and applications of poly(lactic-co-glycolic acid) (PLGA) particles, to enable a researcher to effectively select or design the optimal particles for their application. We describe the various techniques of PLGA particle synthesis and their impact on possible applications. We focus on recent developments in the field of PLGA particles, and new synthesis techniques that have emerged over the past years. Overall, we show how the chemistry of PLGA particles can be adapted to solve pressing biological needs. Copyright © 2018 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  8. Suspended-Bed Reactor preliminary design, /sup 233/U--/sup 232/Th cycle. Final report (revised)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karam, R.A.; Alapour, A.; Lee, C.C.

    1977-11-01

    The preliminary design of the Suspended-Bed Reactor is described. Coated particles about 2 mm in diameter are used as the fuel. The coatings consist of three layers: (1) low-density pyrolytic graphite, 70 µm thick, (2) a silicon carbide pressure vessel, 30 µm thick, and (3) a ZrC layer, 50 µm thick, to protect the pressure vessel from moisture and oxygen. The fuel kernel can be either uranium-thorium dicarbide or metal. The coated particles are suspended by helium gas (coolant) in a cluster of pressurized tubes. The upward flow of helium fluidizes the coated particles. As the flow rate increases, the bed of particles is lifted upward to the core section. The particles are restrained at the upper end of the core by a suitable screen. The overall particle density in the core is just sufficient for criticality. Should the helium flow cease, the bed in the core section will collapse, and the particles will flow downward into the section where the increased physical spacing among the tubes brings about a safe shutdown. By immersing this section of the tubes in a large graphite block to serve as a heat sink, dissipation of decay heat becomes manageable. This eliminates the need for emergency core cooling systems.

  9. Implementing Molecular Dynamics on Hybrid High Performance Computers - Particle-Particle Particle-Mesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, W Michael; Kohlmeyer, Axel; Plimpton, Steven J

    The use of accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. In this paper, we present a continuation of previous work implementing algorithms for using accelerators into the LAMMPS molecular dynamics software for distributed memory parallel hybrid machines. In our previous work, we focused on acceleration for short-range models with an approach intended to harness the processing power of both the accelerator and (multi-core) CPUs. To augment the existing implementations, we present an efficient implementation of long-range electrostatic force calculation for molecular dynamics. Specifically, we present an implementation of the particle-particle particle-mesh method based on the work by Harvey and De Fabritiis. We present benchmark results on the Keeneland InfiniBand GPU cluster. We provide a performance comparison of the same kernels compiled with both CUDA and OpenCL. We discuss limitations to parallel efficiency and future directions for improving performance on hybrid or heterogeneous computers.

  10. Sorption Modeling and Verification for Off-Gas Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tavlarides, Lawrence L.; Lin, Ronghong; Nan, Yue

    2015-04-29

    The project has made progress toward developing a comprehensive modeling capability for the capture of target species in off gas evolved during the reprocessing of nuclear fuel. The effort has integrated experimentation, model development, and computer code development for adsorption and absorption processes. For adsorption, a modeling library has been initiated to include (a) equilibrium models for uptake of off-gas components by adsorbents, (b) mass transfer models to describe mass transfer to a particle, diffusion through the pores of the particle and adsorption on the active sites of the particle, and (c) interconnection of these models to fixed bed adsorption modeling which includes advection through the bed. For single-component equilibria, a Generalized Statistical Thermodynamic Adsorption (GSTA) code was developed to represent experimental data from a broad range of isotherm types; this is equivalent to a Langmuir isotherm in the two-parameter case, and was demonstrated for Kr on INL-engineered sorbent HZ PAN, water sorption on molecular sieve A sorbent material (MS3A), and Kr and Xe capture on metal-organic framework (MOF) materials. The GSTA isotherm was extended to multicomponent systems through application of a modified spreading pressure surface activity model and generalized predictive adsorbed solution theory; the result is the capability to estimate multicomponent adsorption equilibria from single-component isotherms. This advance, which enhances the capability to simulate systems related to off-gas treatment, has been demonstrated for a range of real-gas systems in the literature and is ready for testing with data currently being collected for multicomponent systems of interest, including iodine and water on MS3A. A diffusion kinetic model for sorbent pellets involving pore and surface diffusion as well as external mass transfer has been established, and a methodology was developed for determining unknown diffusivity parameters from transient uptake data. Two parallel approaches have been explored for integrating the kernels described above into a mass-transport model for adsorption in fixed beds. In one, the GSTA isotherm kernel has been incorporated into the MOOSE framework; in the other approach, a focused finite-difference framework and PDE kernels have been developed. Issues, including oscillatory behavior in MOOSE solutions to advection-diffusion problems, and opportunities have been identified for each approach, and a path forward has been identified toward developing a stronger modeling platform. Experimental systems were established for collection of microscopic kinetics and equilibria data for single and multicomponent uptake of gaseous species on solid sorbents. The systems, which can operate at ambient temperature to 250°C and dew points from -69 to 17°C, are useful for collecting data needed for modeling performance of sorbents of interest. Experiments were conducted to determine applicable models and parameters for isotherms and mass transfer for water and/or iodine adsorption on MS3A. Validation experiments were also conducted for water adsorption on fixed beds of MS3A. For absorption, work involved modeling with supportive experimentation. A dynamic model was developed to simulate CO2 absorption with chemical reaction using high alkaline content water solutions. A computer code was developed to implement the model based upon transient mass and energy balances. Experiments were conducted in a laboratory-scale column to determine model parameters.
The influence of geometric parameters and operating variables on CO2 absorption was studied over a wide range of conditions. This project has resulted in 7 publications, with 3 manuscripts in preparation. Also, 15 presentations were given at national meetings of ANS and AIChE and at Material Recovery and Waste Forms Campaign Working Group meetings.
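
    Since the abstract notes that the GSTA isotherm reduces to a Langmuir isotherm in the two-parameter case, the sketch below shows how that two-parameter form q = q_max·K·p/(1 + K·p) could be fit to single-component uptake data with SciPy. The data arrays, units, and initial guesses are placeholders for illustration only and are not values from the project.

      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(p, q_max, K):
          """Two-parameter Langmuir isotherm (the two-parameter limit of GSTA)."""
          return q_max * K * p / (1.0 + K * p)

      # Placeholder equilibrium data: partial pressure (kPa) vs. uptake (mol/kg).
      p_data = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
      q_data = np.array([0.08, 0.15, 0.26, 0.45, 0.60, 0.72])

      (q_max_fit, K_fit), cov = curve_fit(langmuir, p_data, q_data, p0=[1.0, 0.1])
      print(f"q_max = {q_max_fit:.3f} mol/kg, K = {K_fit:.3f} 1/kPa")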

  11. Study on Production of Silicon Nanoparticles from Quartz Sand for Hybrid Solar Cell Applications

    NASA Astrophysics Data System (ADS)

    Arunmetha, S.; Vinoth, M.; Srither, S. R.; Karthik, A.; Sridharpanday, M.; Suriyaprabha, R.; Manivasakan, P.; Rajendran, V.

    2018-01-01

    Nano silicon (nano Si) particles were directly prepared from natural quartz sand and then used to fabricate hybrid silicon solar cells. The preparation technique involved two process stages. In the first stage, alkaline extraction and acid precipitation were applied to the quartz sand to obtain silica nanoparticles. In the second stage, magnesiothermic and modified magnesiothermic reduction reactions were applied to the nano silica particles to prepare nano Si particles. The effects of the two reduction methodologies on nano Si particle preparation were compared. The magnesiothermic and modified magnesiothermic reductions in the silica-to-silicon conversion process were studied by x-ray diffraction (XRD) to follow the phase changes during the reduction reaction as well as the crystallinity of the pure silicon phase. The products consist of fine particles with spherical morphology. In addition, the optical study indicated increased visible-light absorption, which improves solar cell performance. The obtained nano Si particles were used as an active layer to fabricate the hybrid solar cells (HSCs). The results confirmed that the power conversion efficiency (PCE) of the cells made with nano Si from the modified magnesiothermic reduction (1.06%) is higher than that of the cells made with nano Si from the magnesiothermic reduction (1.02%). The results also revealed that nano Si behaved as an electron acceptor and transport material. The present study provided valuable insights and direction for the preparation of nano Si particles from quartz sand, including the influence of the process methods. The prepared nano Si particles can be utilized for HSCs and an array of portable electronic devices.

  12. Modeling photopolarimetric characteristics of comet dust as a polydisperse mixture of polyshaped rough spheroids

    NASA Astrophysics Data System (ADS)

    Kolokolova, L.; Das, H.; Dubovik, O.; Lapyonok, T.

    2013-12-01

    It is widely recognized now that the main component of comet dust is aggregated particles that consist of submicron grains. It is also well known that cometary dust obeys a rather wide size distribution with abundant particles whose size reaches dozens of microns. However, numerous attempts at computer simulation of light scattering by comet dust using aggregated particles have not succeeded in considering particles larger than a couple of microns due to limitations in the memory and speed of available computers. Attempts to substitute aggregates by polydisperse solid particles (spheres, spheroids, cylinders) could not consistently reproduce observed angular and spectral characteristics of comet brightness and polarization even in such a general case as a polyshaped (i.e. containing particles of a variety of aspect ratios) mixture of spheroids (Kolokolova et al., In: Photopolarimetry in Remote Sensing, Kluwer Acad. Publ., 431, 2004). In this study we check how well cometary dust can be modeled using modeling tools for rough spheroids. With this purpose we use the software package described in Dubovik et al. (J. Geophys. Res., 111, D11208, doi:10.1029/2005JD006619d, 2006) that allows for a substantial reduction of computer time in calculating scattering properties of spheroid mixtures by means of pre-calculated kernels - quadrature coefficients employed in the numerical integration of spheroid optical properties over size and shape. The kernels were pre-calculated for spheroids of 25 axis ratios, ranging from 0.3 to 3, and 42 size bins within the size parameter range 0.01 - 625. This software package has recently been expanded with the possibility of simulating not only smooth but also rough spheroids, which is used in the present study. We consider refractive indexes of the materials typical for comet dust: silicate, carbon, organics, and their mixtures. We also consider porous particles, accounting for voids in the spheroids through an effective medium approach. The roughness of the spheroids is described as a normal distribution of particle surface slopes and can be of different degree depending on the standard deviation of the distribution, σ, where σ=0 corresponds to a smooth surface and σ=0.5 describes a severely rough surface (see Young et al., J. Atm. Sci., 70, 330, 2012). We perform computations for two wavelengths, typical for blue (447nm) and red (640nm) cometary continuum filters. We compare the phase angle dependence of polarization and brightness and their spectral change obtained with the rough-spheroid model with those observed for comets (e.g. Kolokolova et al., In: Comets 2, Arizona Press, 577, 2004) to see how well rough spheroids can reproduce cometary low albedo, red color, red polarimetric color, negative polarization at small phase angles and polarization maximum at medium phase angles.

  13. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  14. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  15. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  16. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  17. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  18. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  19. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  20. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  1. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  2. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  3. Wavelet SVM in Reproducing Kernel Hilbert Space for hyperspectral remote sensing image classification

    NASA Astrophysics Data System (ADS)

    Du, Peijun; Tan, Kun; Xing, Xiaoshi

    2010-12-01

    Combining Support Vector Machine (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which results in time-consuming computation and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Implications for semiparametric estimation are also proposed in this paper. Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing imagery with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to test the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. In contrast with some traditional classifiers, including Spectral Angle Mapping (SAM) and Minimum Distance Classification (MDC), and an SVM classifier using the Radial Basis Function kernel, the proposed wavelet SVM classifier using the wavelet kernel function in Reproducing Kernel Hilbert Space noticeably improves classification accuracy.
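
    To make the construction concrete, the sketch below plugs a translation-invariant wavelet kernel into scikit-learn's SVC as a callable kernel. The Morlet-type mother wavelet h(u) = cos(1.75u)·exp(-u²/2) and the dilation parameter a are common choices in the wavelet-kernel literature and are assumptions here; they are not the Coiflet kernel that the paper found to perform best.

      import numpy as np
      from sklearn.svm import SVC

      def wavelet_kernel(X, Y, a=1.0):
          """Product-form wavelet kernel K(x, y) = prod_i h((x_i - y_i)/a) with a
          Morlet-type mother wavelet h(u) = cos(1.75 u) * exp(-u^2 / 2)."""
          diff = (X[:, None, :] - Y[None, :, :]) / a      # pairwise differences, shape (n, m, d)
          h = np.cos(1.75 * diff) * np.exp(-0.5 * diff ** 2)
          return np.prod(h, axis=2)                        # Gram matrix, shape (n, m)

      # SVC accepts a callable kernel returning the Gram matrix between two sample sets.
      clf = SVC(kernel=lambda X, Y: wavelet_kernel(X, Y, a=2.0))
      # clf.fit(X_train, y_train); clf.predict(X_test)  # per-pixel spectra as feature vectors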

  4. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels, which are seen as different descriptions of the data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature

    PubMed Central

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches such as the linear kernel, tree kernel, graph kernel and combinations of multiple kernels have achieved promising results in the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree Kernel (DSTK). DSTK comprises distributed trees with syntactic information along with distributional semantic vectors representing semantic information of the sentences or phrases. To generate a robust machine learning model, a composition of a feature-based kernel and DSTK was combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used for evaluating the performance of our system. Experimental results show that our system achieves a better f-score on the five corpora compared to other state-of-the-art systems. PMID:29099838

  6. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

    Breast cancer is one of the leading causes of death for women. It is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM, for its discriminative power in dealing with small-sample pattern recognition problems, has attracted a lot of attention. But how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (the Hadamard kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard kernel outperforms the classical kernels and the correlation kernel in terms of area under the ROC curve (AUC) values on a number of real-world data sets adopted to test the performance of the different methods. Hadamard kernel SVM is effective for breast cancer predictions, either in terms of prognosis or diagnosis. It may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.

  7. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature.

    PubMed

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches such as the linear kernel, tree kernel, graph kernel and combinations of multiple kernels have achieved promising results in the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree Kernel (DSTK). DSTK comprises distributed trees with syntactic information along with distributional semantic vectors representing semantic information of the sentences or phrases. To generate a robust machine learning model, a composition of a feature-based kernel and DSTK was combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used for evaluating the performance of our system. Experimental results show that our system achieves a better f-score on the five corpora compared to other state-of-the-art systems.

  8. Physical and chemical properties and adsorption type of activated carbon prepared from plum kernels by NaOH activation.

    PubMed

    Tseng, Ru-Ling

    2007-08-25

    Activated carbon was prepared from plum kernels by NaOH activation at six different NaOH/char ratios. The physical properties, including the BET surface area, the total pore volume, the micropore ratio, the pore diameter, the burn-off, and scanning electron microscope (SEM) observations, as well as the chemical properties, namely elemental analysis and temperature programmed desorption (TPD), were measured. The results revealed a two-stage activation process: stage 1 activated carbons were obtained at NaOH/char ratios of 0-1, surface pyrolysis being the main reaction; stage 2 activated carbons were obtained at NaOH/char ratios of 2-4, etching and swelling being the main reactions. The physical properties of the stage 2 activated carbons were similar, with specific surface areas from 1478 to 1887 m²/g. The reaction mechanism of NaOH activation was evident from the loss ratios of the elements C, H, and O in the activated carbon and from the variations in the surface functional groups and the physical properties. Adsorption of phenol and three kinds of dyes (MB, BB1, and AB74) on the above activated carbons was used for an isotherm equilibrium adsorption study. The data fitted the Langmuir isotherm equation. The various adsorbents showed different adsorption types; the separation factor (RL) was used to determine how favorable each adsorption type was. In this work, activated carbons prepared by NaOH activation were evaluated in terms of their physical properties, chemical properties, and adsorption type, and activated carbon PKN2 was found to have the most application potential.
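
    The separation factor mentioned above is commonly defined as RL = 1/(1 + KL·C0), where KL is the Langmuir constant and C0 the initial adsorbate concentration; values between 0 and 1 indicate favorable adsorption. The short Python sketch below evaluates it; the numerical values are illustrative assumptions, not data from the paper.

      def separation_factor(K_L, C0):
          """Langmuir separation factor R_L = 1 / (1 + K_L * C0).
          0 < R_L < 1: favorable; R_L > 1: unfavorable; R_L = 1: linear; R_L = 0: irreversible."""
          return 1.0 / (1.0 + K_L * C0)

      # Illustrative values only: K_L in L/mg, C0 in mg/L.
      for C0 in (50.0, 100.0, 200.0):
          print(C0, separation_factor(K_L=0.02, C0=C0))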

  9. Patchy micelles based on coassembly of block copolymer chains and block copolymer brushes on silica particles.

    PubMed

    Zhu, Shuzhe; Li, Zhan-Wei; Zhao, Hanying

    2015-04-14

    Patchy particles are a type of colloidal particles with one or more well-defined patches on the surfaces. The patchy particles with multiple compositions and functionalities have found wide applications from the fundamental studies to practical uses. In this research patchy micelles with thiol groups in the patches were prepared based on coassembly of free block copolymer chains and block copolymer brushes on silica particles. Thiol-terminated and cyanoisopropyl-capped polystyrene-block-poly(N-isopropylacrylamide) block copolymers (PS-b-PNIPAM-SH and PS-b-PNIPAM-CIP) were synthesized by reversible addition-fragmentation chain transfer polymerization and chemical modifications. Pyridyl disulfide-functionalized silica particles (SiO2-SS-Py) were prepared by four-step surface chemical reactions. PS-b-PNIPAM brushes on silica particles were prepared by thiol-disulfide exchange reaction between PS-b-PNIPAM-SH and SiO2-SS-Py. Surface micelles on silica particles were prepared by coassembly of PS-b-PNIPAM-CIP and block copolymer brushes. Upon cleavage of the surface micelles from silica particles, patchy micelles with thiol groups in the patches were obtained. Dynamic light scattering, transmission electron microscopy, and zeta-potential measurements demonstrate the preparation of patchy micelles. Gold nanoparticles can be anchored onto the patchy micelles through S-Au bonds, and asymmetric hybrid structures are formed. The thiol groups can be oxidized to disulfides, which results in directional assembly of the patchy micelles. The self-assembly behavior of the patchy micelles was studied experimentally and by computer simulation.

  10. Taking Lessons Learned from a Proxy Application to a Full Application for SNAP and PARTISN

    DOE PAGES

    Womeldorff, Geoffrey Alan; Payne, Joshua Estes; Bergen, Benjamin Karl

    2017-06-09

    SNAP is a proxy application that simulates the computational workload of a neutral particle transport code, PARTISN. In this work, we have adapted parts of SNAP separately: we re-implemented the iterative shell of SNAP in the task-model runtime Legion, showing an improvement over the original schedule, and we created multiple Kokkos implementations of the computational kernel of SNAP, which display performance similar to the native Fortran. We then translate our Kokkos experiments in SNAP to PARTISN, necessitating engineering development, regression testing, and further thought.

  11. Taking Lessons Learned from a Proxy Application to a Full Application for SNAP and PARTISN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Womeldorff, Geoffrey Alan; Payne, Joshua Estes; Bergen, Benjamin Karl

    SNAP is a proxy application that simulates the computational workload of a neutral particle transport code, PARTISN. In this work, we have adapted parts of SNAP separately: we re-implemented the iterative shell of SNAP in the task-model runtime Legion, showing an improvement over the original schedule, and we created multiple Kokkos implementations of the computational kernel of SNAP, which display performance similar to the native Fortran. We then translate our Kokkos experiments in SNAP to PARTISN, necessitating engineering development, regression testing, and further thought.

  12. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    PubMed

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric and positive, always provides 1.0 for self-similarity, and it can directly be used with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as a standalone C code and is a free open-source program distributed under GPLv3 license and can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
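
    To convey the underlying idea, the Python sketch below extracts the LZW code words of each sequence in a single pass and scores a pair of sequences by the overlap of their code-word sets. The Jaccard-style normalization shown here is an assumption chosen so that self-similarity equals 1.0; the exact kernel function in the paper may be defined differently, and the reference implementation is the C code at the GitHub link above.

      def lzw_codewords(seq):
          """Return the set of code words (single characters plus substrings added to the
          dictionary) produced by one LZW compression pass over the sequence."""
          singles = set(seq)
          words, w = set(), ""
          for c in seq:
              wc = w + c
              if wc in singles or wc in words:
                  w = wc
              else:
                  words.add(wc)       # new code word enters the dictionary
                  w = c
          return singles | words

      def lzw_similarity(s1, s2):
          """Jaccard overlap of LZW code words (1.0 for identical sequences)."""
          a, b = lzw_codewords(s1), lzw_codewords(s2)
          return len(a & b) / len(a | b)

      print(lzw_similarity("MKVLAAGLLLAA", "MKVIAAGLLMAA"))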

  13. Rheological behavior of water-in-oil emulsions stabilized by hydrophobic bentonite particles.

    PubMed

    Binks, Bernard P; Clint, John H; Whitby, Catherine P

    2005-06-07

    A study of the rheological behavior of water-in-oil emulsions stabilized by hydrophobic bentonite particles is described. Concentrated emulsions were prepared and diluted at constant particle concentration to investigate the effect of drop volume fraction on the viscosity and viscoelastic response of the emulsions. The influence of the structure of the hydrophobic clay particles in the oil has also been studied by using oils in which the clay swells to very different extents. Emulsions prepared from isopropyl myristate, in which the particles do not swell, are increasingly flocculated as the drop volume fraction increases and the viscosity of the emulsions increases accordingly. The concentrated emulsions are viscoelastic and the elastic storage and viscous loss moduli also increase with increasing drop volume fraction. Emulsions prepared from toluene, in which the clay particles swell to form tactoids, are highly structured due to the formation of an integrated network of clay tactoids and drops, and the moduli of the emulsions are significantly larger than those of the emulsions prepared from isopropyl myristate.

  14. Synthesis of highly-monodisperse spherical titania particles with diameters in the submicron range.

    PubMed

    Tanaka, Shunsuke; Nogami, Daisuke; Tsuda, Natsuki; Miyake, Yoshikazu

    2009-06-15

    Monodisperse titania spheres with particle diameters in the range 380-960 nm were successfully synthesized by hydrolysis and condensation of titanium tetraisopropoxide. The preparation was performed using ammonia or dodecylamine (DDA) as a catalyst in methanol/acetonitrile co-solvent at room temperature. The samples were characterized by powder X-ray diffraction, scanning electron microscopy, transmission electron microscopy, dynamic light scattering, and nitrogen sorption measurement. The use of DDA was effective for the synthesis of monodisperse titania spheres with low coefficient of variation. When the titania spherical particles with coefficient of variation less than 4% were obtained, the colloidal crystallization easily occurred simply by centrifugation. The monodispersity was maintained even after crystallization of the particles by high temperature annealing. The titania particles prepared using DDA had mesopores near the surface of the spheres, providing high pore accessibility to the sphere from the surface-air interface. The particle size uniformity and photocatalytic reactivity of the titania prepared using DDA were higher than those of the titania prepared using ammonia.

  15. Feature Extraction of Electronic Nose Signals Using QPSO-Based Multiple KFDA Signal Processing

    PubMed Central

    Wen, Tailai; Huang, Daoyu; Lu, Kun; Deng, Changjian; Zeng, Tanyue; Yu, Song; He, Zhiyi

    2018-01-01

    The aim of this research was to enhance the classification accuracy of an electronic nose (E-nose) in different detecting applications. During the learning process of the E-nose to predict the types of different odors, the prediction accuracy was not quite satisfying because the raw features extracted from sensors’ responses were regarded as the input of a classifier without any feature extraction processing. Therefore, in order to obtain more useful information and improve the E-nose’s classification accuracy, in this paper, a Weighted Kernels Fisher Discriminant Analysis (WKFDA) combined with Quantum-behaved Particle Swarm Optimization (QPSO), i.e., QWKFDA, was presented to reprocess the original feature matrix. In addition, we have also compared the proposed method with quite a few previously existing ones including Principal Component Analysis (PCA), Locality Preserving Projections (LPP), Fisher Discriminant Analysis (FDA) and Kernels Fisher Discriminant Analysis (KFDA). Experimental results proved that QWKFDA is an effective feature extraction method for the E-nose in predicting the types of wound infection and inflammable gases, achieving much higher classification accuracy than the contrast methods. PMID:29382146

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerczak, Tyler J.; Smith, Kurt R.; Petrie, Christian M.

    Tristructural-isotropic (TRISO)–coated particle fuel is a promising advanced fuel concept consisting of a spherical fuel kernel made of uranium oxide and uranium carbide, surrounded by a porous carbonaceous buffer layer and successive layers of dense inner pyrolytic carbon (IPyC), silicon carbide (SiC) deposited by chemical vapor deposition, and dense outer pyrolytic carbon (OPyC). This fuel concept is being considered for advanced reactor applications such as high temperature gas-cooled reactors (HTGRs) and molten salt reactors (MSRs), as well as for accident-tolerant fuel for light water reactors (LWRs). Development and implementation of TRISO fuel for these reactor concepts support the US Department of Energy (DOE) Office of Nuclear Energy mission to promote safe, reliable nuclear energy that is sustainable and environmentally friendly. During operation, the SiC layer serves as the primary barrier to metallic fission products and actinides not retained in the kernel. It has been observed that certain fission products are released from TRISO fuel during operation, notably, Ag, Eu, and Sr [1]. Release of these radioisotopes causes safety and maintenance concerns.

  17. Feature Extraction of Electronic Nose Signals Using QPSO-Based Multiple KFDA Signal Processing.

    PubMed

    Wen, Tailai; Yan, Jia; Huang, Daoyu; Lu, Kun; Deng, Changjian; Zeng, Tanyue; Yu, Song; He, Zhiyi

    2018-01-29

    The aim of this research was to enhance the classification accuracy of an electronic nose (E-nose) in different detecting applications. During the learning process of the E-nose to predict the types of different odors, the prediction accuracy was not quite satisfying because the raw features extracted from sensors' responses were regarded as the input of a classifier without any feature extraction processing. Therefore, in order to obtain more useful information and improve the E-nose's classification accuracy, in this paper, a Weighted Kernels Fisher Discriminant Analysis (WKFDA) combined with Quantum-behaved Particle Swarm Optimization (QPSO), i.e., QWKFDA, was presented to reprocess the original feature matrix. In addition, we have also compared the proposed method with quite a few previously existing ones including Principal Component Analysis (PCA), Locality Preserving Projections (LPP), Fisher Discriminant Analysis (FDA) and Kernels Fisher Discriminant Analysis (KFDA). Experimental results proved that QWKFDA is an effective feature extraction method for the E-nose in predicting the types of wound infection and inflammable gases, achieving much higher classification accuracy than the contrast methods.

  18. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating the most suitable manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.
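
    For reference, the exact cylindrical-surface thin-wire kernel that such formulations start from is conventionally written as below, where a is the wire radius and k the wavenumber; the paper's specific change of variable and the resulting gradient expression are not reproduced here.

      K(z - z') = \frac{1}{2\pi} \int_{0}^{2\pi} \frac{e^{-jkR}}{4\pi R} \, d\phi', \qquad
      R = \sqrt{(z - z')^2 + 4a^2 \sin^2\!\left(\frac{\phi'}{2}\right)}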

  20. PREPARATION OF SPHERICAL URANIUM DIOXIDE PARTICLES

    DOEpatents

    Levey, R.P. Jr.; Smith, A.E.

    1963-04-30

    This patent relates to the preparation of high-density, spherical UO/sub 2/ particles 80 to 150 microns in diameter. Sinterable UO/sub 2/ powder is wetted with 3 to 5 weight per cent water and tumbled for at least 48 hours. The resulting spherical particles are then sintered. The sintered particles are useful in dispersion-type fuel elements for nuclear reactors. (AEC)

  1. Preparation of hierarchical porous Zn-salt particles and their superhydrophobic performance

    NASA Astrophysics Data System (ADS)

    Gao, Dahai; Jia, Mengqiu

    2015-12-01

    Superhydrophobic surfaces built from hierarchical porous particles were prepared using modified hydrothermal routes in the presence of sodium citrate. Two particle samples were generated in the medium of hexamethylenetetramine (P1) and urea (P2), respectively. X-ray diffraction, scanning electron microscopy, and transmission electron microscopy were adopted for the investigation, and the results revealed that the P1 and P2 particles are porous microspheres with crosslinked, extremely thin (10-30 nm) sheet crystals composed of Zn5(OH)8Ac2·2H2O and Zn5(CO3)2(OH)6, respectively. The prepared particles were treated with a fluoroethylene vinyl ether derivative and studied using Fourier transform infrared spectroscopy and energy-dispersive X-ray spectrometry. The results showed that the hierarchical surfaces of these particles were combined with low-wettability fluorocarbon layers. Moreover, the fabricated surface composed of the prepared hierarchical particles displayed considerably high contact angles, indicating strong superhydrophobicity for the products. The wetting behavior of the particles was analyzed with a theoretical wetting model in comparison with that of chestnut-like ZnO products obtained through a conventional hydrothermal route. Correspondingly, this study provided evidence that a high-roughness surface plays a major role in superhydrophobic behavior.

  2. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    PubMed Central

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868

  3. Combined multi-kernel head computed tomography images optimized for depicting both brain parenchyma and bone.

    PubMed

    Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki

    2014-01-01

    The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enabled diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels exceeding 100 Hounsfield units in both the brain and bone images were assigned the CT values of the bone images, while all other pixels were assigned the CT values of the brain images. Three radiologists compared the improved multi-kernel images with bone images. The improved multi-kernel images and brain images were identically displayed on the brain window settings. All three radiologists agreed that the improved multi-kernel images on the bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and a reduction in the number of stored images can be expected.
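
    The composition rule stated above is simple to express directly. The sketch below applies it to two co-registered reconstructions; the 100 HU threshold comes from the abstract, while the image arrays are synthetic placeholders.

    ```python
    # Sketch of the stated composition rule: where both the brain- and
    # bone-kernel reconstructions exceed 100 HU, take the bone-kernel value;
    # elsewhere keep the brain-kernel value. The input arrays are placeholders.
    import numpy as np

    def combine_multikernel(brain_hu: np.ndarray, bone_hu: np.ndarray,
                            threshold: float = 100.0) -> np.ndarray:
        """Combine low-pass (brain) and high-pass (bone) reconstructions."""
        use_bone = (brain_hu > threshold) & (bone_hu > threshold)
        return np.where(use_bone, bone_hu, brain_hu)

    rng = np.random.default_rng(1)
    brain = rng.normal(40, 20, size=(512, 512))    # synthetic brain-kernel image
    bone = brain + rng.normal(0, 5, size=(512, 512))
    combined = combine_multikernel(brain, bone)
    print(combined.shape, combined.dtype)
    ```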

  4. Pharmacokinetic behavior of intravitreal triamcinolone acetonide prepared by a hospital pharmacy.

    PubMed

    Oishi, Masako; Maeda, Shinichiro; Hashida, Noriyasu; Ohguro, Nobuyuki; Tano, Yasuo; Kurokawa, Nobuo

    2008-01-01

    We developed a new hospital pharmaceutical preparation of triamcinolone acetonide (TA) for intravitreal injections using sodium hyaluronate as the vehicle. The purpose of this study was to compare the pharmacokinetic behavior of this hospital pharmacy preparation of TA (HPP-TA) to that of a commercial preparation of TA (CP-TA) in rats. We injected the two preparations of TA into the vitreous humor of male Wistar rats. The rats were killed between days 1 and 21, and the concentration of TA in the vitreous was measured by high-performance liquid chromatography to determine the pharmacokinetic parameters. We also examined the microscopic appearance of the TA particles in these preparations. The elimination half-life was 6.08 days for the CP-TA and 5.78 days for the HPP-TA. A two-compartment model was suitable to approximate the pharmacokinetic behavior of HPP-TA in the vitreous body, but this model was not suitable for CP-TA, because its pharmacokinetic behavior was not sufficiently stable. The particle size of CP-TA was largest, followed by TA powder and HPP-TA. Many particles were agglutinated in the CP-TA preparation, whereas the TA particles were fine and dispersed in the HPP-TA medium. The TA particle size and the suspension medium are likely important factors in the preparation of a safe and stable suspension of TA. HPP-TA satisfied these requirements and should be suitable for clinical use.

  5. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  6. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  7. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  8. graphkernels: R and Python packages for graph comparison

    PubMed Central

    Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-01-01

    Summary: Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. Availability and implementation: The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary information: Supplementary data are available online at Bioinformatics. PMID:29028902
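
    To make the idea of a baseline label-histogram kernel concrete, the sketch below implements it by hand for toy graphs represented only by their node labels. This is not the graphkernels implementation (whose core is in C++); it is a minimal stand-alone illustration of the concept.

    ```python
    # Minimal hand-rolled version of a label-histogram graph kernel (one of the
    # baseline kernels the package provides); not the graphkernels implementation.
    import numpy as np

    def label_histogram(labels, n_labels):
        """Count how many nodes carry each discrete label."""
        h = np.zeros(n_labels)
        for lab in labels:
            h[lab] += 1
        return h

    def label_histogram_kernel(graph_labels, n_labels):
        """Gram matrix of inner products between label histograms."""
        H = np.array([label_histogram(labels, n_labels) for labels in graph_labels])
        return H @ H.T

    # Three toy graphs represented only by their node labels (0, 1, 2).
    graphs = [[0, 0, 1, 2], [0, 1, 1, 1, 2], [2, 2, 2]]
    K = label_histogram_kernel(graphs, n_labels=3)
    print(K)
    ```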

  9. Aflatoxin variability in pistachios.

    PubMed Central

    Mahoney, N E; Rodriguez, S B

    1996-01-01

    Pistachio fruit components, including hulls (mesocarps and epicarps), seed coats (testas), and kernels (seeds), all contribute to variable aflatoxin content in pistachios. Fresh pistachio kernels were individually inoculated with Aspergillus flavus and incubated 7 or 10 days. Hulled, shelled kernels were either left intact or wounded prior to inoculation. Wounded kernels, with or without the seed coat, were readily colonized by A. flavus and after 10 days of incubation contained 37 times more aflatoxin than similarly treated unwounded kernels. The aflatoxin levels in the individual wounded pistachios were highly variable. Neither fungal colonization nor aflatoxin was detected in intact kernels without seed coats. Intact kernels with seed coats had limited fungal colonization and low aflatoxin concentrations compared with their wounded counterparts. Despite substantial fungal colonization of wounded hulls, aflatoxin was not detected in hulls. Aflatoxin levels were significantly lower in wounded kernels with hulls than in kernels of hulled pistachios. Both the seed coat and a water-soluble extract of hulls suppressed aflatoxin production by A. flavus. PMID:8919781

  10. graphkernels: R and Python packages for graph comparison.

    PubMed

    Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-02-01

    Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. © The Author(s) 2017. Published by Oxford University Press.

  11. Preparation of modified waterworks sludge particles as adsorbent to enhance coagulation of slightly polluted source water.

    PubMed

    Chen, Wei; Gao, Xiaohong; Xu, Hang; Wang, Kang; Chen, Taoyuan

    2017-08-01

    Without treatment, waterworks sludge is ineffective as an adsorbent. In this study, raw waterworks sludge was used as the raw material to prepare modified sludge particles through high-temperature calcination and alkali modification. The feasibility of using a combination of modified particles and polyaluminum chloride (PAC) as a coagulant for treatment of slightly polluted source water was also investigated. The composition, structure, and surface properties of the modified particles were characterized, and their capabilities for removing ammonia nitrogen and turbidity were determined. The results indicate that the optimal preparation conditions for the modified sludge particles were a roasting temperature of 483.12 °C, a roasting time of 3.32 h, and a lye concentration of 3.75%. Furthermore, coagulation is enhanced by the addition of modified sludge particles, as reflected in a reduction of the required PAC dose and in removal efficiencies for ammonia nitrogen and turbidity of over 80% and 93%, respectively. Additional factors such as pH, temperature, dose, and dosing sequence were also evaluated. The optimum doses of modified particles and PAC were 40 and 15 mg/L, respectively, and adding the modified particles at the same time as or prior to adding PAC improves removal efficiency.

  12. [Preparation of curcumin-EC sustained-release composite particles by supercritical CO2 anti-solvent technology].

    PubMed

    Bai, Wei-li; Yan, Ting-yuan; Wang, Zhi-xiang; Huang, De-chun; Yan, Ting-xuan; Li, Ping

    2015-01-01

    Curcumin-ethyl cellulose (EC) sustained-release composite particles were prepared using supercritical CO2 anti-solvent technology. With drug loading and yield of the inclusion complex as evaluation indexes, and on the basis of single-factor tests, an orthogonal experimental design was used to optimize the preparation process of the curcumin-EC sustained-release composite particles. Drug loading, yield, particle size distribution, scanning electron microscopy (SEM), infrared spectroscopy (IR), differential scanning calorimetry (DSC), and in vitro dissolution tests were used to evaluate the optimal process combination. The optimized process conditions from the orthogonal experiment were as follows: crystallization temperature 45 °C, crystallization pressure 10 MPa, curcumin concentration 8 g/L, solvent flow rate 0.9 mL/min, and CO2 flow rate 4 L/min. Under the optimal conditions, the average drug loading and yield of the curcumin-EC sustained-release composite particles were 33.01% and 83.97%, and the average particle size was 20.632 μm. IR and DSC analysis showed that curcumin might complex with EC. In vitro dissolution experiments showed that the curcumin-EC composite particles had a good sustained-release effect. Curcumin-EC sustained-release composite particles can thus be prepared by supercritical CO2 anti-solvent technology.

  13. Method for preparing spherical thermoplastic particles of uniform size

    DOEpatents

    Day, J.R.

    1975-11-17

    Spherical particles of thermoplastic material of virtually uniform roundness and diameter are prepared by cutting monofilaments of a selected diameter into rod-like segments of a selected uniform length which are then heated in a viscous liquid to effect the formation of the spherical particles.

  14. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
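
    A minimal numerical sketch of the weighted eigenfunction expansion described above is given below, using the Laplacian of a small ring graph as a stand-in for the Laplace-Beltrami operator of a surface mesh; the bandwidth sigma and the signal are arbitrary choices for illustration.

    ```python
    # Sketch of heat-kernel smoothing as a weighted eigenfunction expansion,
    # with a ring-graph Laplacian standing in for the Laplace-Beltrami operator.
    import numpy as np

    n = 100
    sigma = 2.0                                    # heat-kernel bandwidth (assumed)

    # Laplacian of a ring graph (each node connected to its two neighbours).
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A

    vals, vecs = np.linalg.eigh(L)                 # eigenvalues / eigenfunctions

    rng = np.random.default_rng(0)
    f = np.sin(np.linspace(0, 4 * np.pi, n)) + 0.3 * rng.normal(size=n)

    # Heat-kernel smoothing: f_sigma = sum_l exp(-sigma*lambda_l) <f, psi_l> psi_l
    coeffs = vecs.T @ f
    f_smooth = vecs @ (np.exp(-sigma * vals) * coeffs)
    print(float(np.var(f)), float(np.var(f_smooth)))  # variance drops after smoothing
    ```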

  15. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…

  16. 7 CFR 810.204 - Grades and grade requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...— Damaged kernels 1 (percent) Foreign material (percent) Other grains (percent) Skinned and broken kernels....0 10.0 15.0 1 Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered against sound barley. Notes: Malting barley shall not be infested in accordance with...

  17. Antibacterial activities of amorphous cefuroxime axetil ultrafine particles prepared by high gravity antisolvent precipitation (HGAP).

    PubMed

    Zhao, Hong; Kang, Xu-liang; Chen, Xuan-li; Wang, Jie-xin; Le, Yuan; Shen, Zhi-gang; Chen, Jian-feng

    2009-01-01

    In vitro and in vivo antibacterial activities against Staphylococcus aureus and Escherichia coli of amorphous cefuroxime axetil (CFA) ultrafine particles prepared by the HGAP method were investigated in this paper. Conventionally sprayed CFA particles were studied as the control group. XRD, SEM, and BET tests were performed to investigate the morphology changes of the samples before and after sterilization. The in vitro dissolution test, minimal inhibitory concentrations (MIC), and an in vivo experiment in mice were also examined. The results demonstrated that: (i) the structure, morphology, and amorphous form of the particles can be affected during the steam sterilization process; (ii) CFA particles with different morphologies showed varied antibacterial activities; and (iii) the in vitro and in vivo antibacterial activities of the ultrafine particles prepared by HGAP are markedly stronger than those of the conventionally sprayed amorphous particles.

  18. 7 CFR 51.1413 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...

  19. 7 CFR 51.1413 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...

  20. 7 CFR 810.205 - Grades and grade requirements for Two-rowed Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (percent) Maximum limits of— Wild oats (percent) Foreign material (percent) Skinned and broken kernels... Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered...

  1. Preparation and Characterization of Colloidal Silica Particles under Mild Conditions

    ERIC Educational Resources Information Center

    Neville, Frances; Zin, Azrinawati Mohd.; Jameson, Graeme J.; Wanless, Erica J.

    2012-01-01

    A microscale laboratory experiment for the preparation and characterization of silica particles at neutral pH and ambient temperature conditions is described. Students first employ experimental fabrication methods to make spherical submicrometer silica particles via the condensation of an alkoxysilane and polyethyleneimine, which act to catalyze…

  2. Process for preparing fine-grain metal carbide powder

    DOEpatents

    Kennedy, C.R.; Jeffers, F.P.

    Fine-grain metal carbide powder suitable for use in the fabrication of heat resistant products is prepared by coating bituminous pitch on SiO2 or Ta2O5 particles, heating the coated particles to convert the bituminous pitch to coke, and then heating the particles to a higher temperature to convert the particles to a carbide by reaction of said coke therewith.

  3. Detection of ochratoxin A contamination in stored wheat using near-infrared hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Senthilkumar, T.; Jayas, D. S.; White, N. D. G.; Fields, P. G.; Gräfenhan, T.

    2017-03-01

    A near-infrared (NIR) hyperspectral imaging system was used to detect five concentration levels of ochratoxin A (OTA) in contaminated wheat kernels. Wheat kernels artificially inoculated with two different OTA-producing Penicillium verrucosum strains, kernels inoculated with two different non-toxigenic P. verrucosum strains, and sterile control wheat kernels were subjected to NIR hyperspectral imaging. The acquired three-dimensional data were reshaped into readable two-dimensional data. Principal component analysis (PCA) was applied to the two-dimensional data to identify the key wavelengths that were most significant for detecting OTA contamination in wheat. Statistical and histogram features extracted at the key wavelengths were used in linear, quadratic, and Mahalanobis statistical discriminant models to differentiate between sterile control samples, five concentration levels of OTA contamination in wheat kernels, and five infection levels of non-OTA-producing P. verrucosum inoculated wheat kernels. The classification models differentiated sterile control samples from OTA-contaminated wheat kernels and non-OTA-producing P. verrucosum inoculated wheat kernels with 100% accuracy. The classification models also differentiated between the five concentration levels of OTA-contaminated wheat kernels and between the five infection levels of non-OTA-producing P. verrucosum inoculated wheat kernels with a correct classification of more than 98%. The non-OTA-producing P. verrucosum inoculated wheat kernels and OTA-contaminated wheat kernels subjected to hyperspectral imaging showed different spectral patterns.
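
    The reshape-then-PCA preprocessing described above can be sketched in a few lines. Everything in the snippet below is synthetic (cube dimensions, data values), and the rule used to nominate key wavelengths (largest absolute loadings on the leading components) is an illustrative assumption rather than the paper's exact procedure.

    ```python
    # Sketch: reshape a 3-D hypercube (rows x cols x bands) into a 2-D
    # pixel-by-band matrix and apply PCA; bands with large loadings on the
    # leading components are candidate "key wavelengths". Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(2)
    rows, cols, bands = 64, 64, 61                 # assumed cube dimensions
    cube = rng.normal(size=(rows, cols, bands))

    X = cube.reshape(-1, bands)                    # pixels as rows, bands as columns
    Xc = X - X.mean(axis=0)                        # mean-center each band
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

    loadings = Vt[:3]                              # first three principal components
    key_bands = np.argsort(np.abs(loadings).max(axis=0))[::-1][:5]
    print("candidate key band indices:", key_bands)
    ```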

  4. Application of kernel method in fluorescence molecular tomography

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Baikejiang, Reheman; Li, Changqing

    2017-02-01

    Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem, and incorporating anatomical guidance can improve FMT reconstruction considerably. We have developed a kernel method to introduce anatomical guidance into FMT robustly and easily. The kernel method, borrowed from machine learning for pattern analysis, is an efficient way to represent anatomical features. For finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. The fluorophore concentration at each node is then represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process, and the FMT reconstruction problem is converted into a kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance is obtained directly from the anatomical image and is included in the forward modeling; one of the advantages is that we do not need to segment the anatomical image into targets and background.
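
    The sketch below illustrates the kernelized forward model described above: the concentration is written as x = K·alpha, K is folded into the system matrix, alpha is estimated (here by regularized least squares rather than the paper's iterative scheme), and x is recovered. The sensitivity matrix, anatomical feature, kernel bandwidth, and regularization are all synthetic assumptions.

    ```python
    # Sketch of the kernel trick: represent the concentration as x = K @ alpha,
    # fold K into the forward model, solve for alpha, then recover x.
    # All matrices and features below are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(3)
    n_meas, n_nodes = 80, 200

    A = rng.normal(size=(n_meas, n_nodes))         # FMT sensitivity matrix (synthetic)
    anat = rng.normal(size=(n_nodes, 1))           # anatomical feature per node

    # Gaussian kernel matrix built from the anatomical image features.
    d2 = (anat - anat.T) ** 2
    K = np.exp(-d2 / (2 * 0.5 ** 2))

    x_true = np.zeros(n_nodes)
    x_true[50:60] = 1.0                            # synthetic fluorophore target
    b = A @ x_true + 0.01 * rng.normal(size=n_meas)

    # New system matrix A @ K; solve a regularized least-squares problem for alpha.
    AK = A @ K
    alpha = np.linalg.solve(AK.T @ AK + 1e-3 * np.eye(n_nodes), AK.T @ b)
    x_rec = K @ alpha                              # reconstructed concentration per node
    print(float(np.corrcoef(x_true, x_rec)[0, 1]))
    ```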

  5. Credit scoring analysis using kernel discriminant

    NASA Astrophysics Data System (ADS)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of a kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a nonparametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, namely the normal, Epanechnikov, biweight, and triweight kernels. The accuracy of the models was compared using data from a financial institution in Indonesia. The results show that the kernel discriminant can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model; sensitivity and specificity reach 0.5556 and 0.5488, respectively.
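
    A minimal sketch of a kernel (nonparametric) discriminant for two credit classes is given below: each class density is estimated with a Gaussian product kernel and a new applicant is assigned to the class with the larger prior-weighted density. The data, bandwidth, and choice of a Gaussian kernel are illustrative and not the study's actual implementation.

    ```python
    # Minimal kernel-discriminant sketch for two classes of applicants:
    # per-class Gaussian kernel density estimates, prior-weighted classification.
    import numpy as np

    def kde(x, sample, h):
        """Gaussian product-kernel density estimate at point x."""
        z = (x - sample) / h
        return np.mean(np.exp(-0.5 * np.sum(z**2, axis=1)) /
                       ((2 * np.pi) ** (x.size / 2) * h ** x.size))

    rng = np.random.default_rng(4)
    good = rng.normal([0, 0], 1.0, size=(300, 2))   # synthetic "good" applicants
    bad = rng.normal([2, 1], 1.0, size=(100, 2))    # synthetic "bad" applicants
    priors = (len(good) / 400, len(bad) / 400)

    def classify(x, h=0.5):
        scores = (priors[0] * kde(x, good, h), priors[1] * kde(x, bad, h))
        return int(np.argmax(scores))                # 0 = good risk, 1 = bad risk

    print(classify(np.array([0.2, -0.1])), classify(np.array([2.1, 1.3])))
    ```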

  6. Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images

    PubMed Central

    Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.

    2014-01-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435

  7. METHOD FOR PREPARATION OF UO$sub 2$ PARTICLES

    DOEpatents

    Johnson, J.R.; Taylor, A.J.

    1959-09-22

    A method is described for the preparation of high-density UO2 particles within the size range of 40 to 100 microns. In accordance with the invention, UO2 particles are autoclaved with an aqueous solution of uranyl ions. The resulting crystals are reduced to UO2, and the UO2 is heated to at least 1000 deg C to effect densification. The resulting UO2 particles are screened, and oversize particles are crushed and screened to recover the particles within the desired size range.

  8. Crossover between collective and independent-particle excitations in quasi-2D electron gas with one filled subband

    NASA Astrophysics Data System (ADS)

    Nazarov, Vladimir U.

    2018-05-01

    While it has been recently demonstrated that, for a quasi-two-dimensional electron gas (Q2DEG) with one filled subband, the dynamic exchange (f_x) and Hartree (f_H) kernels cancel each other in the low-density regime r_s → ∞ (by half and completely, for the spin-neutral and fully spin-polarized cases, respectively), here we analytically show that the same happens at arbitrary densities at short distances. This motivates us to study the confinement dependence of the excitations in the Q2DEG. Our calculations unambiguously confirm that, at strong confinements, the time-dependent exact exchange excitation energies approach the single-particle Kohn-Sham ones for the spin-polarized case, while the same, but less pronounced, tendency is observed for the spin-neutral Q2DEG.

  9. Value addition of wild apricot fruits grown in North-West Himalayan regions-a review.

    PubMed

    Sharma, Rakesh; Gupta, Anil; Abrol, G S; Joshi, V K

    2014-11-01

    Wild apricot (Prunus armeniaca L.), commonly known as chulli, is a potential fruit widely distributed in the North-West Himalayan regions of the world. The fruits are a good source of carbohydrates, vitamins, and minerals, besides having an attractive colour and typical flavour. Unlike table-purpose varieties of apricot such as New Castle, the fruits of wild apricot are unsuitable for fresh consumption because of their high acid and low sugar content. However, the fruits are traditionally utilized for open sun drying and for pulping to prepare products such as jams, chutney, and naturally fermented and distilled liquor. Scientific literature on the processing and value addition of wild apricot is nevertheless scanty. Jam prepared with 25% wild apricot and 75% apple showed the maximum score for organoleptic characteristics owing to its better taste and colour. Osmotic dehydration has been found to be a suitable method for drying wild-type acidic apricots. A good-quality sauce using wild apricot pulp and tomato pulp in the ratio of 1:1 has been prepared, while a chutney of good acceptability prepared from wild apricot pulp (100%) has also been documented. The preparation of apricot-soy protein enriched products such as apricot-soya leather, toffee, and fruit bars has been reported; these products are said to meet the protein requirements of adults and children as per ICMR recommendations. Besides these processed products, the preparation of alcoholic beverages such as wine, vermouth, and brandy from wild apricot fruits has also been reported by various researchers. Further, after the pulp has been used for value-added products, the leftover stones have been successfully utilized for oil extraction, the oil having medicinal and cosmetic value. The traditional method of oil extraction has been reported to be unhygienic and to give a low yield of poor-quality oil, whereas the improved mechanical method of extraction produces good-quality oil. The apricot kernel oil and press cake have been successfully utilized for the preparation of various value-added products such as facial cream, lip balm, essential oil, and protein isolate with good quality attributes and consumer acceptability. However, no scientific information is available on the utilization of the shells remaining after kernel separation, although the shells are traditionally burned by farmers for heating during winter. Therefore, it seems that every part of the wild apricot can be converted into value-added products, and commercial utilization of this underutilized fruit will certainly add value to it and also improve farmers' incomes.

  10. Correlation and classification of single kernel fluorescence hyperspectral data with aflatoxin concentration in corn kernels inoculated with Aspergillus flavus spores.

    PubMed

    Yao, H; Hruska, Z; Kincaid, R; Brown, R; Cleveland, T; Bhatnagar, D

    2010-05-01

    The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. Aflatoxin contamination in corn has been a long-standing problem plaguing the grain industry with potentially devastating consequences to corn growers. In this study, aflatoxin-contaminated corn kernels were produced through artificial inoculation of corn ears in the field with toxigenic A. flavus spores. The kernel fluorescence emission data were taken with a fluorescence hyperspectral imaging system when corn kernels were excited with ultraviolet light. Raw fluorescence image data were preprocessed and regions of interest in each image were created for all kernels. The regions of interest were used to extract spectral signatures and statistical information. The aflatoxin contamination level of single corn kernels was then chemically measured using affinity column chromatography. A fluorescence peak shift phenomenon was noted among different groups of kernels with different aflatoxin contamination levels. The fluorescence peak was found to shift toward the longer wavelengths in the blue region for the highly contaminated kernels and toward the shorter wavelengths for the clean kernels. Highly contaminated kernels were also found to have a lower fluorescence peak magnitude compared with the less contaminated kernels. It was also noted that a general negative correlation exists between measured aflatoxin and the fluorescence image bands in the blue and green regions. The coefficient of determination, r2, was 0.72 for the multiple linear regression model. The multivariate analysis of variance found that the fluorescence means of the four aflatoxin groups, <1, 1-20, 20-100, and ≥100 ng/g (parts per billion), were significantly different from each other at the 0.01 level of alpha. Classification accuracy under a two-class scheme ranged from 0.84 to 0.91 when a threshold of either 20 or 100 ng/g was used. Overall, the results indicate that fluorescence hyperspectral imaging may be applicable for estimating aflatoxin content in individual corn kernels.

  11. Preparation of Octadecyltrichlorosilane Nanopatterns Using Particle Lithography: An Atomic Force Microscopy Laboratory

    ERIC Educational Resources Information Center

    Highland, Zachary L.; Saner, ChaMarra K.; Garno, Jayne C.

    2018-01-01

    Experiments are described that involve undergraduates learning concepts of nanoscience and chemistry. Students prepare nanopatterns of organosilane films using protocols of particle lithography. A few basic techniques are needed to prepare samples, such as centrifuging, mixing, heating, and drying. Students obtain hands-on skills with nanoscale…

  12. Classification of Phylogenetic Profiles for Protein Function Prediction: An SVM Approach

    NASA Astrophysics Data System (ADS)

    Kotaru, Appala Raju; Joshi, Ramesh C.

    Predicting the function of an uncharacterized protein is a major challenge in the post-genomic era due to the complexity and scale of the problem. Knowledge of protein function is a crucial link in the development of new drugs, better crops, and even biochemicals such as biofuels. Recently, numerous high-throughput experimental procedures have been invented to investigate the mechanisms leading to the accomplishment of a protein's function, and phylogenetic profiling is one of them. A phylogenetic profile is a representation of a protein that encodes the evolutionary history of the protein. In this paper we propose a method for classifying phylogenetic profiles using a supervised machine learning method, support vector machine classification with a radial basis function kernel, to identify functionally linked proteins. We experimentally evaluated the performance of the classifier with the linear and polynomial kernels and compared the results with the existing tree kernel. In our study we used proteins of the budding yeast Saccharomyces cerevisiae genome. We generated the phylogenetic profiles of 2465 yeast genes and used the functional annotations available in the MIPS database. Our experiments show that the performance of the radial basis kernel is similar to that of the polynomial kernel for some functional classes, that both are better than the linear and tree kernels, and that overall the radial basis kernel outperformed the polynomial, linear, and tree kernels. In analyzing these results, we show that it is feasible to use an SVM classifier with a radial basis function kernel to predict gene functionality from phylogenetic profiles.
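
    A compact sketch of this setup with scikit-learn is shown below: binary phylogenetic profiles (one bit per reference genome) are classified with an RBF-kernel SVM. The profiles, labels, and train/test split are randomly generated stand-ins, not the yeast/MIPS data used in the paper.

    ```python
    # Sketch: RBF-kernel SVM on binary phylogenetic profiles (synthetic data).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)
    n_proteins, n_genomes = 500, 60
    X = rng.integers(0, 2, size=(n_proteins, n_genomes)).astype(float)  # profiles
    y = (X[:, :10].sum(axis=1) > 5).astype(int)     # synthetic functional class

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```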

  13. Intraear Compensation of Field Corn, Zea mays, from Simulated and Naturally Occurring Injury by Ear-Feeding Larvae.

    PubMed

    Steckel, S; Stewart, S D

    2015-06-01

    Ear-feeding larvae, such as the corn earworm, Helicoverpa zea Boddie (Lepidoptera: Noctuidae), can be important insect pests of field corn, Zea mays L., by feeding on kernels. Recently introduced stacked Bacillus thuringiensis (Bt) traits provide improved protection from ear-feeding larvae. Thus, our objective was to evaluate how injury to kernels in the ear tip might affect yield when this injury was inflicted at the blister and milk stages. In 2010, simulated corn earworm injury reduced total kernel weight (i.e., yield) at both the blister and milk stages. In 2011, injury to ear tips at the milk stage affected total kernel weight. No differences in total kernel weight were found in 2013, regardless of when or how much injury was inflicted. Our data suggested that kernels within the same ear could compensate for injury to ear tips by increasing in size, but this increase was not always statistically significant or sufficient to overcome high levels of kernel injury. For naturally occurring injury observed on multiple corn hybrids during 2011 and 2012, our analyses showed either no or a minimal relationship between the number of kernels injured by ear-feeding larvae and the total number of kernels per ear, total kernel weight, or the size of individual kernels. The results indicate that intraear compensation for kernel injury to ear tips can occur under at least some conditions. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. Adsorption performance of fixed-bed column for the removal of Fe (II) in groundwater using activated carbon made from palm kernel shells

    NASA Astrophysics Data System (ADS)

    Sylvia, N.; Hakim, L.; Fardian, N.; Yunardi

    2018-03-01

    Provided that manganese is below the acceptable limit, the removal of Fe(II) ions, the common metallic species in groundwater, is one of the most important stages in processing groundwater into potable water. This study was aimed at investigating the performance of a fixed-bed adsorption column, filled with activated carbon prepared from palm kernel shells, in the removal of Fe(II) ions from groundwater. The influence of important parameters such as bed depth and flow rate was investigated: the adsorbent bed depth was varied at 7.5, 10, and 12 cm, and the flow rate at 6, 10, and 14 L/min. An atomic absorption spectrophotometer was used to measure the Fe(II) concentration, and the results were summarized in breakthrough curves, which showed that both flow rate and bed depth affected the curve shape. The Thomas and Adams-Bohart models were used to predict the results; such models support process design by predicting the time and bed depth needed to reach breakthrough. This study reveals that the Thomas model was the most appropriate for describing the use of palm kernel shell activated carbon for processing groundwater. According to the Thomas model, the highest adsorption capacity (66.189 mg/g) for groundwater containing 0.169 mg/L Fe(II) was achieved at a flow rate of 6 L/min with the bed depth at 14 cm.
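
    As a hedged illustration of the Thomas-model analysis, the sketch below fits the commonly cited form C_t/C_0 = 1 / (1 + exp(k_Th*q_0*m/Q - k_Th*C_0*t)) to a synthetic breakthrough curve with scipy. The influent concentration and one flow rate are taken from the abstract; the adsorbent mass, time series, and fitted values are assumptions for illustration only.

    ```python
    # Hedged sketch: fitting the Thomas model to a synthetic breakthrough curve.
    import numpy as np
    from scipy.optimize import curve_fit

    C0 = 0.169   # influent Fe(II) concentration, mg/L (value from the abstract)
    Q = 6.0      # flow rate, L/min (one of the tested flow rates)
    m = 10.0     # adsorbent mass, g (assumed; not stated in the abstract)

    def thomas(t, k, q0):
        # C_t/C_0 = 1 / (1 + exp(k*q0*m/Q - k*C0*t))
        return 1.0 / (1.0 + np.exp(k * q0 * m / Q - k * C0 * t))

    t = np.linspace(0, 1200, 40)                   # minutes (synthetic sampling times)
    rng = np.random.default_rng(6)
    ct_c0 = thomas(t, 0.05, 60.0) + 0.02 * rng.normal(size=t.size)

    (k_fit, q0_fit), _ = curve_fit(thomas, t, ct_c0, p0=[0.02, 30.0])
    print(f"k_Th = {k_fit:.4f} L/(mg*min), q0 = {q0_fit:.1f} mg/g")
    ```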

  15. Effect of oil content and kernel processing of corn silage on digestibility and milk production by dairy cows.

    PubMed

    Weiss, W P; Wyatt, D J

    2000-02-01

    Corn silages were produced from a high oil corn hybrid and from its conventional hybrid counterpart and were harvested with a standard silage chopper or a chopper equipped with a kernel processing unit. High oil silages had higher concentrations of fatty acids (5.5 vs. 3.4% of dry matter) and crude protein (8.4 vs. 7.5% of dry matter) than the conventional hybrid. Processed silage had larger particle size than unprocessed silage, but more starch was found in small particles for processed silage. Dry matter intake was not influenced by treatment (18.4 kg/d), but yield of fat-corrected milk (23.9 vs. 22.6 kg/d) was increased by feeding high oil silage. Overall, processing corn silage did not affect milk production, but cows fed processed conventional silage tended to produce more milk than did cows fed unprocessed conventional silage. Milk protein percent, but not yield, was reduced with high oil silage. Milk fat percent, but not yield, was higher with processed silage. Overall, processed silage had higher starch digestibility, but the response was much greater for the conventional silage hybrid. The concentration of total digestible nutrients (TDN) tended to be higher for diets with high oil silage (71.6 vs. 69.9%) and tended to be higher for processed silage than unprocessed silage (71.7 vs. 69.8%), but an interaction between variety and processing was observed. Processing conventional corn silage increased TDN to values similar to high oil corn silage but processing high oil corn silage did not influence TDN.

  16. The use of kernel density estimators in breakthrough curve reconstruction and advantages in risk analysis

    NASA Astrophysics Data System (ADS)

    Siirila, E. R.; Fernandez-Garcia, D.; Sanchez-Vila, X.

    2014-12-01

    Particle tracking (PT) techniques, often considered favorable over Eulerian techniques due to artificial smoothening in breakthrough curves (BTCs), are evaluated in a risk-driven framework. Recent work has shown that, given a relatively small number of particles (np), PT methods can yield well-constructed BTCs with kernel density estimators (KDEs). This work compares KDE and non-KDE BTCs simulated as a function of np (10² to 10⁸) and averaged as a function of the exposure duration, ED. Results show that regardless of BTC shape complexity, un-averaged PT BTCs show a large bias over several orders of magnitude in concentration (C) when compared to the KDE results, remarkably even when np is as low as 10². With the KDE, several orders of magnitude fewer particles are required to obtain the same global error in BTC shape as the PT technique. PT and KDE BTCs are averaged as a function of the ED with standard methods and with new methods incorporating the optimal kernel bandwidth h (ANA). The lowest-error curve is obtained through the ANA method, especially for smaller EDs. The percent error of the peak of averaged BTCs, important in a risk framework, is approximately zero for all scenarios and all methods for np ≥ 10⁵, but varies between the ANA and PT methods when np is lower. For fewer np, the ANA solution provides a lower-error fit except when C oscillations are present during a short time frame. We show that obtaining a representative average exposure concentration relies on an accurate representation of the BTC, especially when data are scarce.
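
    A minimal sketch of the KDE step is given below: particle arrival times at a control plane are smoothed into a breakthrough curve with a Gaussian KDE (scipy's default bandwidth rather than the optimal h discussed above) and compared with a binned estimate. The arrival-time distribution is synthetic.

    ```python
    # Sketch of KDE-based breakthrough-curve reconstruction from particle
    # arrival times, compared with a raw histogram (binned PT estimate).
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(7)
    np_particles = 10_000
    arrivals = rng.lognormal(mean=3.0, sigma=0.4, size=np_particles)  # arrival times

    t = np.linspace(0, 80, 400)
    btc_kde = gaussian_kde(arrivals)(t)             # smooth BTC (unit mass)

    hist, edges = np.histogram(arrivals, bins=40, density=True)
    print("KDE peak:", t[btc_kde.argmax()], "histogram peak:",
          0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1]))
    ```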

  17. Particle size distribution of brown and white rice during gastric digestion measured by image analysis.

    PubMed

    Bornhorst, Gail M; Kostlan, Kevin; Singh, R Paul

    2013-09-01

    The particle size distribution of foods during gastric digestion indicates the amount of physical breakdown that occurred due to the peristaltic movement of the stomach walls in addition to the breakdown that initially occurred during oral processing. The objective of this study was to present an image analysis technique that was rapid, simple, and could distinguish between food components (that is, rice kernel and bran layer in brown rice). The technique was used to quantify particle breakdown of brown and white rice during gastric digestion in growing pigs (used as a model for an adult human) over 480 min of digestion. The particle area distributions were fit to a Rosin-Rammler distribution function. Brown and white rice exhibited considerable breakdown as the number of particles per image decreased over time. The median particle area (x(50)) increased during digestion, suggesting a gastric sieving phenomenon, where small particles were emptied and larger particles were retained for additional breakdown. Brown rice breakdown was further quantified by an examination of the bran layer fragments and rice grain pieces. The percentage of total particle area composed of bran layer fragments was greater in the distal stomach than the proximal stomach in the first 120 min of digestion. The results of this study showed that image analysis may be used to quantify particle breakdown of a soft food product during gastric digestion, discriminate between different food components, and help to clarify the role of food structure and processing in food breakdown during gastric digestion. © 2013 Institute of Food Technologists®
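
    To make the distribution fit concrete, the sketch below fits a Rosin-Rammler (Weibull-type) cumulative distribution to synthetic particle-area data and reads off the median particle area x50. The data, units, and starting values are illustrative assumptions, not the study's measurements.

    ```python
    # Sketch: fit a Rosin-Rammler cumulative distribution to particle-area data.
    import numpy as np
    from scipy.optimize import curve_fit

    def rosin_rammler_cdf(x, x_prime, n):
        """Cumulative fraction of area below size x."""
        return 1.0 - np.exp(-(x / x_prime) ** n)

    rng = np.random.default_rng(8)
    areas = rng.weibull(1.8, size=2000) * 4.0        # synthetic particle areas (mm^2)
    x = np.sort(areas)
    cdf = np.arange(1, x.size + 1) / x.size          # empirical cumulative fraction

    (x_prime, n), _ = curve_fit(rosin_rammler_cdf, x, cdf, p0=[1.0, 1.0])
    x50 = x_prime * np.log(2.0) ** (1.0 / n)         # median particle area
    print(f"x' = {x_prime:.2f} mm^2, n = {n:.2f}, x50 = {x50:.2f} mm^2")
    ```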

  18. Evidence-based Kernels: Fundamental Units of Behavioral Influence

    PubMed Central

    Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600

  19. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different than those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integration of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  20. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256
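
    The random Fourier feature idea mentioned above can be sketched in a few lines: draw random frequencies so that the inner product of the mapped features approximates an RBF kernel, after which a linear (primal) ranker can be trained on the transformed data. The dimensions, bandwidth, and data below are arbitrary, and the snippet only checks the kernel approximation rather than training a full RankSVM.

    ```python
    # Sketch of random Fourier features: z(x).z(y) approximates the RBF kernel
    # exp(-gamma * ||x - y||^2), allowing a linear model on z(X) instead of a
    # kernel matrix. Dimensions and gamma are illustrative.
    import numpy as np

    def rff_map(X, D=500, gamma=0.5, seed=0):
        """Random Fourier features approximating exp(-gamma * ||x - y||^2)."""
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
        b = rng.uniform(0.0, 2.0 * np.pi, size=D)
        return np.sqrt(2.0 / D) * np.cos(X @ W + b)

    rng = np.random.default_rng(9)
    X = rng.normal(size=(200, 20))
    Z = rff_map(X)

    K_exact = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    K_approx = Z @ Z.T
    print("max abs error:", float(np.abs(K_exact - K_approx).max()))
    ```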

  1. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.

  2. Initial examination of fuel compacts and TRISO particles from the US AGR-2 irradiation test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunn, John D.; Baldwin, Charles A.; Montgomery, Fred C.

    Post-irradiation examination was completed on two as-irradiated compacts from the US Advanced Gas Reactor Fuel Development and Qualification Program’s second irradiation test. These compacts were selected for examination because there were indications that they may have contained particles that released cesium through a failed or defective SiC layer. The coated particles were recovered from these compacts by electrolytic deconsolidation of the surrounding graphitic matrix in nitric acid. The leach-burn-leach (LBL) process was used to dissolve and analyze exposed metallic elements (actinides and fission products), and each particle was individually surveyed for relative cesium retention with the Irradiated Microsphere Gamma Analyzer (IMGA). Data from IMGA and LBL examinations provided information on fission product release during irradiation and whether any specific particles had below-average retention that could be related to coating layer defects or radiation-induced degradation. A few selected normal-retention particles and six with abnormally-low cesium inventory were analyzed using X-ray tomography to produce three-dimensional images of the internal coating structure. Four of the low-cesium particles had obviously damaged or degraded SiC, and X-ray imaging was able to guide subsequent grinding and polishing to expose the regions of interest for analysis by optical and electron microscopy. Additional particles from each compact were also sectioned and examined to study the overall radiation-induced microstructural changes in the kernel and coating layers.

  3. Initial examination of fuel compacts and TRISO particles from the US AGR-2 irradiation test

    DOE PAGES

    Hunn, John D.; Baldwin, Charles A.; Montgomery, Fred C.; ...

    2017-10-21

    Post-irradiation examination was completed on two as-irradiated compacts from the US Advanced Gas Reactor Fuel Development and Qualification Program’s second irradiation test. These compacts were selected for examination because there were indications that they may have contained particles that released cesium through a failed or defective SiC layer. The coated particles were recovered from these compacts by electrolytic deconsolidation of the surrounding graphitic matrix in nitric acid. The leach-burn-leach (LBL) process was used to dissolve and analyze exposed metallic elements (actinides and fission products), and each particle was individually surveyed for relative cesium retention with the Irradiated Microsphere Gamma Analyzer (IMGA). Data from IMGA and LBL examinations provided information on fission product release during irradiation and whether any specific particles had below-average retention that could be related to coating layer defects or radiation-induced degradation. A few selected normal-retention particles and six with abnormally-low cesium inventory were analyzed using X-ray tomography to produce three-dimensional images of the internal coating structure. Four of the low-cesium particles had obviously damaged or degraded SiC, and X-ray imaging was able to guide subsequent grinding and polishing to expose the regions of interest for analysis by optical and electron microscopy. Additional particles from each compact were also sectioned and examined to study the overall radiation-induced microstructural changes in the kernel and coating layers.

  4. Suspension concentration distribution in turbulent flows: An analytical study using fractional advection-diffusion equation

    NASA Astrophysics Data System (ADS)

    Kundu, Snehasis

    2018-09-01

    In this study, the vertical distribution of sediment particles in steady, uniform, turbulent open channel flow over an erodible bed is investigated using a fractional advection-diffusion equation (fADE). Unlike previous investigations that used the fADE to study the suspension distribution, this study employs the modified Atangana-Baleanu-Caputo fractional derivative with a non-singular and non-local kernel. The proposed fADE is solved and an analytical model for the vertical suspension distribution is obtained. The model is validated against experimental as well as field measurements from the Missouri River, the Mississippi River, and the Rio Grande conveyance channel, and is compared with the Rouse equation and another fractional model found in the literature. A quantitative error analysis shows that the proposed model predicts the vertical distribution of particles more accurately than previous models. The validation results show that the fractional model can be applied to all particle sizes with an appropriate choice of the order of the fractional derivative α. It is also found that, besides particle diameter, the parameter α depends on the mass density of the particle and the shear velocity of the flow. To predict this parameter, a multivariate regression is carried out and a relation is proposed for easy application of the model. From the results for sand and plastic particles, it is found that the parameter α is more sensitive to mass density than to particle diameter. The rationality of the dependence of α on particle and flow characteristics has been justified physically.
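
    For reference, the standard (unmodified) Atangana-Baleanu derivative in the Caputo sense, i.e. the non-singular Mittag-Leffler-kernel operator underlying the approach above, is commonly written as shown below; the paper uses a modified variant whose exact form is not reproduced here.

    ```latex
    % Standard Atangana-Baleanu-Caputo (ABC) derivative of order 0 < alpha < 1,
    % with normalization B(alpha) (B(0) = B(1) = 1) and Mittag-Leffler function E_alpha.
    {}^{ABC}_{\;\;a}D^{\alpha}_{t} f(t)
      = \frac{B(\alpha)}{1-\alpha}
        \int_{a}^{t} f'(\tau)\,
        E_{\alpha}\!\left[-\frac{\alpha\,(t-\tau)^{\alpha}}{1-\alpha}\right]\mathrm{d}\tau .
    ```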

  5. Alcohol conversion

    DOEpatents

    Wachs, Israel E.; Cai, Yeping

    2002-01-01

    Preparing an aldehyde from an alcohol by contacting the alcohol in the presence of oxygen with a catalyst prepared by contacting an intimate mixture containing metal oxide support particles and particles of a catalytically active metal oxide from Groups VA, VIA, or VIIA, with a gaseous stream containing an alcohol to cause metal oxide from the discrete catalytically active metal oxide particles to migrate to the metal oxide support particles and to form a monolayer of catalytically active metal oxide on said metal oxide support particles.

  6. Optical properties of alkali halide crystals from all-electron hybrid TD-DFT calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webster, R., E-mail: ross.webster07@imperial.ac.uk; Harrison, N. M.; Bernasconi, L.

    2015-06-07

    We present a study of the electronic and optical properties of a series of alkali halide crystals AX, with A = Li, Na, K, Rb and X = F, Cl, Br, based on a recent implementation of hybrid-exchange time-dependent density functional theory (TD-DFT) (TD-B3LYP) in the all-electron Gaussian basis set code CRYSTAL. We examine, in particular, the impact of basis set size and quality on the prediction of the optical gap and exciton binding energy. The formation of bound excitons by photoexcitation is observed in all the studied systems, and this is shown to be correlated to specific features of the Hartree-Fock exchange component of the TD-DFT response kernel. All computed optical gaps and exciton binding energies are however markedly below estimated experimental and, where available, 2-particle Green’s function (GW-Bethe-Salpeter equation, GW-BSE) values. We attribute this reduced exciton binding to the incorrect asymptotics of the B3LYP exchange-correlation ground state functional and of the TD-B3LYP response kernel, which lead to a large underestimation of the Coulomb interaction between the excited electron and hole wavefunctions. Considering LiF as an example, we correlate the asymptotic behaviour of the TD-B3LYP kernel to the fraction of Fock exchange c_HF admixed in the ground state functional and show that there exists one value of c_HF (∼0.32) that reproduces at least semi-quantitatively the optical gap of this material.

  7. Submicron magnetic core conducting polypyrrole polymer shell: Preparation and characterization.

    PubMed

    Tenório-Neto, Ernandes Taveira; Baraket, Abdoullatif; Kabbaj, Dounia; Zine, Nadia; Errachid, Abdelhamid; Fessi, Hatem; Kunita, Marcos Hiroiuqui; Elaissari, Abdelhamid

    2016-04-01

    Magnetic particles are of great interest in various biomedical applications, such as sample preparation, in vitro biomedical diagnosis, and both in vivo diagnosis and therapy. For in vitro applications, and especially in labs-on-a-chip, microfluidics, microsystems, or biosensors, the magnetic dispersion should meet various criteria, for instance, submicron size in order to avoid a rapid sedimentation rate, fast separation under an applied magnetic field, and appreciable colloidal stability (a stable dispersion under shearing). The aim of this work was therefore to prepare particles with a highly magnetic core and a conducting polymer shell, to be used not only as a carrier but also for the in vitro detection step. The prepared magnetic seed dispersions were functionalized using pyrrole and pyrrole-2-carboxylic acid. The obtained core-shell particles were characterized in terms of particle size, size distribution, magnetization properties, FTIR analysis, surface morphology, and chemical composition, and finally the conducting property of the particles was evaluated by cyclic voltammetry. The obtained functional submicron highly magnetic particles were found to be conducting materials bearing carboxylic groups on the surface. These promising conducting magnetic particles can be used for both transport and lab-on-a-chip detection. Copyright © 2015. Published by Elsevier B.V.

  8. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  9. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  10. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  11. Wigner functions defined with Laplace transform kernels.

    PubMed

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function based on Laplace transform kernels, the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Owing to the properties of the Laplace transform, a broader range of signals can be represented in this complex phase space. We show that the Laplace kernel Wigner function exhibits marginal properties similar to those of the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America
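
    The construction can be illustrated numerically. The sketch below (not from the paper; the signal, grid, and complex s values are arbitrary choices) evaluates a discretized Wigner-type distribution in which the Fourier kernel exp(-i k y) of the ordinary Wigner function W(x, k) = ∫ f(x + y/2) f*(x - y/2) exp(-i k y) dy is replaced by a Laplace kernel exp(-s y) with complex s:

      import numpy as np

      def laplace_kernel_wigner(f, x, s_values):
          """Discretized Wigner-type distribution W(x, s) for a sampled 1-D signal f(x),
          with the Fourier kernel exp(-i*k*y) of the ordinary Wigner function replaced
          by a Laplace kernel exp(-s*y), where s may be complex."""
          def interp_c(xq):  # complex-valued linear interpolation, zero outside the grid
              return (np.interp(xq, x, f.real, left=0.0, right=0.0)
                      + 1j * np.interp(xq, x, f.imag, left=0.0, right=0.0))
          span = x[-1] - x[0]
          y = np.linspace(-span, span, 2 * len(x) - 1)          # correlation-lag grid
          dy = y[1] - y[0]
          W = np.zeros((len(x), len(s_values)), dtype=complex)
          for ix, x0 in enumerate(x):
              corr = interp_c(x0 + y / 2) * np.conj(interp_c(x0 - y / 2))  # f(x+y/2) f*(x-y/2)
              for js, s in enumerate(s_values):
                  W[ix, js] = np.sum(corr * np.exp(-s * y)) * dy
          return W

      # Illustrative decaying (evanescent-like) signal; parameters are arbitrary choices.
      x = np.linspace(0.0, 10.0, 256)
      f = (np.exp(-0.5 * x) * np.exp(2j * x)).astype(complex)
      s_values = np.array([0.5 + 2.0j, 0.5 - 2.0j])             # complex "momentum" values
      print(laplace_kernel_wigner(f, x, s_values).shape)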

  12. Online learning control using adaptive critic designs with sparse kernel machines.

    PubMed

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent variants, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on approximate linear dependence (ALD) analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, namely kernel HDP (KHDP) and kernel DHP (KDHP), are proposed, and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results on two nonlinear control problems, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
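
    The sparsification step can be sketched in isolation. The following snippet (illustrative only; the RBF kernel, tolerance nu, and sampled states are assumptions, and it is not the authors' KHDP/KDHP implementation) builds a sparse dictionary by approximate linear dependence: a new sample is admitted only when its feature map cannot be approximated by the current dictionary within a tolerance:

      import numpy as np

      def rbf(x, y, gamma=1.0):
          return np.exp(-gamma * np.sum((x - y) ** 2))

      def ald_sparsify(samples, nu=0.01, gamma=1.0):
          """Build a sparse kernel dictionary via approximate linear dependence (ALD):
          a sample is added only if it cannot be approximated (within tolerance nu)
          by a linear combination of feature maps of the current dictionary."""
          dictionary = [samples[0]]
          K_inv = np.array([[1.0 / rbf(samples[0], samples[0], gamma)]])
          for x in samples[1:]:
              k = np.array([rbf(d, x, gamma) for d in dictionary])
              a = K_inv @ k                       # best coefficients in feature space
              delta = rbf(x, x, gamma) - k @ a    # squared residual of the approximation
              if delta > nu:
                  # enlarge the dictionary and update the inverse kernel matrix (block inverse)
                  K_inv = np.block([
                      [K_inv + np.outer(a, a) / delta, -a[:, None] / delta],
                      [-a[None, :] / delta, np.array([[1.0 / delta]])],
                  ])
                  dictionary.append(x)
          return dictionary

      states = np.random.RandomState(0).uniform(-1, 1, size=(500, 2))  # e.g. sampled states
      print(len(ald_sparsify(states, nu=0.05, gamma=2.0)))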

  13. Preparation of nano-hydroxyapatite particles with different morphology and their response to highly malignant melanoma cells in vitro

    NASA Astrophysics Data System (ADS)

    Li, Bo; Guo, Bo; Fan, Hongsong; Zhang, Xingdong

    2008-11-01

    To investigate the effects of nano-hydroxyapatite (HA) particles with different morphology on highly malignant melanoma cells, three kinds of HA particles with different morphology were synthesized and co-cultured with highly malignant melanoma cells using phosphate-buffered saline (PBS) as control. A precipitation method with or without citric acid addition as surfactant was used to produce rod-like hydroxyapatite (HA) particles with nano- and micron size, respectively, and a novel oil-in-water emulsion method was employed to prepare ellipse-like nano-HA particles. Particle morphology and size distribution of the as prepared HA powders were characterized by transmission electron microscope (TEM) and dynamic light scattering technique. The nano- and micron HA particles with different morphology were co-cultured with highly malignant melanoma cells. Immunofluorescence analysis and MTT assay were employed to evaluate morphological change of nucleolus and proliferation of tumour cells, respectively. To compare the effects of HA particles on cell response, the PBS without HA particles was used as control. The experiment results indicated that particle nanoscale effect rather than particle morphology of HA was more effective for the inhibition on highly malignant melanoma cells proliferation.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demkowicz, Paul Andrew; Harp, Jason M.; Winston, Philip L.

    Destructive post-irradiation examination was performed on AGR-1 fuel Compact 4-1-1, which was irradiated to a final compact-average burnup of 19.4% FIMA (fissions per initial metal atom) and a time-average, volume-average temperature of 1072°C. The analysis of this compact focused on characterizing the extent of fission product release from the particles and examining particles to determine the condition of the kernels and coating layers. The work included deconsolidation of the compact and leach-burn-leach analysis, visual inspection and gamma counting of individual particles, metallurgical preparation of selected particles, and examination of particle cross-sections with optical microscopy, electron microscopy, and elemental analysis. Deconsolidation-leach-burn-leach (DLBL) analysis revealed no particles with failed TRISO or failed SiC layers (as indicated by very low uranium inventory in all of the leach solutions). The total fractions of the predicted compact inventories of fission products Ce-144, Cs-134, Cs-137, and Sr-90 that were present in the compact outside of the SiC layers were <2×10^-6, based on DLBL data. The Ag-110m fraction in the compact outside the SiC layers was 3.3×10^-2, indicating appreciable release of silver through the intact coatings and subsequent retention in the OPyC layers or matrix. The Eu-154 fraction was 2.4×10^-4, which is equivalent to the inventory in one average particle, and indicates a small but measurable level of release from the intact coatings. Gamma counting of 61 individual particles indicated no particles with anomalously low fission product retention. The average ratio of measured inventory to calculated inventory was close to a value of 1.0 for several fission product isotopes (Ce-144, Cs-134, and Cs-137), indicating good retention and reasonably good agreement with the predicted inventories. Measured-to-calculated (M/C) activity ratios for fission products Eu-154, Eu-155, Ru-106, Sb-125, and Zr-95 were significantly less than 1.0. However, as no significant release of these fission products from compacts was noted during previous analysis of the AGR-1 capsule components, the low M/C ratios are most likely an indication of a bias in the inventories predicted by physics simulations of the AGR-1 experiment. The distribution of Ag-110m M/C ratios was centered on a value of 1.02 and was fairly broad (standard deviation of 0.18, with values as high as 1.42 and as low as 0.68). Based on all data gathered to date, it is believed that silver retention in the particles was on average relatively high, but that the broad distribution in values among the particles represents significant variation in the inventory of Ag-110m generated in the particles. Ceramographic analysis of particle cross-sections revealed many of the characteristic microstructures often observed in irradiated AGR-1 particles from other fuel compacts. Palladium-rich fission product clusters were observed in the IPyC and SiC layers near the IPyC-SiC interface of three Compact 4-1-1 particle cross-sections. In spite of the presence of fission product clusters in the SiC layer, no significant corrosion or degradation of the layer was observed in any of the particles examined.

  15. Classification of corn kernels contaminated with aflatoxins using fluorescence and reflectance hyperspectral images analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Fengle; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert; Bhatnagar, Deepak; Cleveland, Thomas

    2015-05-01

    Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive. This study employed fluorescence and reflectance visible near-infrared (VNIR) hyperspectral images to classify aflatoxin-contaminated corn kernels rapidly and non-destructively. Corn ears were artificially inoculated in the field with toxigenic A. flavus spores at the early dough stage of kernel development. After harvest, a total of 300 kernels were collected from the inoculated ears. Fluorescence hyperspectral imagery with UV excitation and reflectance hyperspectral imagery with halogen illumination were acquired on both the endosperm and germ sides of the kernels. All kernels were then subjected to chemical analysis individually to determine aflatoxin concentrations. A region of interest (ROI) was created for each kernel to extract averaged spectra. Compared with healthy kernels, fluorescence spectral peaks for contaminated kernels shifted to longer wavelengths with lower intensity, and reflectance values for contaminated kernels were lower with a different spectral shape in the 700-800 nm region. Principal component analysis was applied for data compression before classifying kernels as contaminated or healthy, based on a 20 ppb threshold, using the K-nearest neighbors algorithm. The best overall accuracy achieved was 92.67% for the germ side in the fluorescence data analysis. The germ side generally performed better than the endosperm side. Fluorescence and reflectance image data achieved similar accuracy.
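
    The classification stage described above can be sketched as follows (a minimal illustration with placeholder data; the spectra, concentrations, number of components, and number of neighbors are assumptions, not values from the study):

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Hypothetical data: one averaged ROI spectrum per kernel and its measured
      # aflatoxin concentration in ppb (placeholders for the real measurements).
      rng = np.random.RandomState(0)
      spectra = rng.rand(300, 150)                 # 300 kernels x 150 wavelengths
      aflatoxin_ppb = rng.gamma(shape=1.0, scale=30.0, size=300)

      labels = (aflatoxin_ppb > 20).astype(int)    # 20 ppb contamination threshold

      # Compress the spectra with PCA, then classify with K-nearest neighbors.
      model = make_pipeline(StandardScaler(), PCA(n_components=10),
                            KNeighborsClassifier(n_neighbors=5))
      scores = cross_val_score(model, spectra, labels, cv=5)
      print("cross-validated accuracy: %.3f" % scores.mean())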

  16. Influence of Kernel Age on Fumonisin B1 Production in Maize by Fusarium moniliforme

    PubMed Central

    Warfield, Colleen Y.; Gilchrist, David G.

    1999-01-01

    Production of fumonisins by Fusarium moniliforme on naturally infected maize ears is an important food safety concern due to the toxic nature of this class of mycotoxins. Assessing the potential risk of fumonisin production in developing maize ears prior to harvest requires an understanding of the regulation of toxin biosynthesis during kernel maturation. We investigated the developmental-stage-dependent relationship between maize kernels and fumonisin B1 production by using kernels collected at the blister (R2), milk (R3), dough (R4), and dent (R5) stages following inoculation in culture at their respective field moisture contents with F. moniliforme. Highly significant differences (P ≤ 0.001) in fumonisin B1 production were found among kernels at the different developmental stages. The highest levels of fumonisin B1 were produced on the dent stage kernels, and the lowest levels were produced on the blister stage kernels. The differences in fumonisin B1 production among kernels at the different developmental stages remained significant (P ≤ 0.001) when the moisture contents of the kernels were adjusted to the same level prior to inoculation. We concluded that toxin production is affected by substrate composition as well as by moisture content. Our study also demonstrated that fumonisin B1 biosynthesis on maize kernels is influenced by factors which vary with the developmental age of the tissue. The risk of fumonisin contamination may begin early in maize ear development and increases as the kernels reach physiological maturity. PMID:10388675

  17. Preparation and encapsulation of white/yellow dual colored suspensions for electrophoretic displays

    NASA Astrophysics Data System (ADS)

    Han, Jingjing; Li, Xiaoxu; Feng, Yaqing; Zhang, Bao

    2014-11-01

    C.I. Pigment Yellow 181 (PY181) composite particles encapsulated by polyethylene (PE) were prepared by a dispersion polymerization method, and C.I. Pigment Yellow 110 (PY110) composite particles encapsulated by polystyrene (PS) were prepared by a mini-emulsion polymerization method. The modified pigments were characterized by Fourier transform infrared spectroscopy, scanning electron microscopy and transmission electron microscopy. Compared with the PE-coated PY181 pigments, the PS-coated PY110 particles had a narrow particle size distribution, a regular spherical shape and an average particle size of 450 nm. Suspensions 1 and 3 were prepared by dispersing the two composite particles in Isopar M. A chromatic electrophoretic display cell based on yellow particles was successfully fabricated using dispersions of yellow ink particles in a mixed dielectric solvent, with white particles providing contrast. The response behavior and the contrast ratio as a function of applied voltage were also examined. The contrast ratio of the pigments modified with polystyrene was 1.48 and the response time was 2 s, both better than the values for the pigments modified with polyethylene.

  18. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters, so suitable parameter selection is an important problem for many kernel-based techniques. This article presents a novel technique for learning the kernel parameters of a kernel Fukunaga-Koontz Transform (KFKT) based classifier. The proposed approach determines appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of the KFKT. For this purpose we utilize the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as its high computational cost, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
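
    A minimal sketch of the idea, using SciPy's differential evolution to select an RBF kernel width: the objective below is a simple kernel-space class-separability score standing in for the KFKT discrimination criterion of the paper, and the data, bounds, and kernel are illustrative assumptions:

      import numpy as np
      from scipy.optimize import differential_evolution
      from scipy.spatial.distance import cdist

      rng = np.random.RandomState(1)
      X_pos = rng.normal(0.0, 1.0, size=(60, 5))          # hypothetical target samples
      X_neg = rng.normal(1.5, 1.2, size=(60, 5))          # hypothetical background samples
      X = np.vstack([X_pos, X_neg])
      y = np.array([1] * 60 + [0] * 60)

      def separability(params):
          """Objective to MINIMIZE: negative ratio of within-class to between-class
          mean similarity in the kernel-induced feature space (a simple stand-in for
          the KFKT discrimination criterion used in the paper)."""
          gamma = params[0]
          K = np.exp(-gamma * cdist(X, X, "sqeuclidean"))
          same = K[np.ix_(y == 1, y == 1)].mean() + K[np.ix_(y == 0, y == 0)].mean()
          cross = K[np.ix_(y == 1, y == 0)].mean()
          return -(same / (2.0 * cross + 1e-12))

      result = differential_evolution(separability, bounds=[(1e-3, 10.0)], seed=0, tol=1e-6)
      print("selected RBF gamma:", result.x[0])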

  19. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.

    PubMed

    Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou

    2011-06-01

    As a kernel-based method, the performance of the least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient for selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible enough to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as a semidefinite program (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic model selection process. Extensive experimental validations are performed and analyzed. Copyright © 2011 Elsevier Ltd. All rights reserved.
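
    For reference, the LS-SVM dual problem reduces to a single linear system once a kernel matrix is fixed. The sketch below (illustrative; it uses a fixed, hand-picked convex combination of two base kernels and synthetic data, and regresses on ±1 targets, whereas the paper learns the kernel weights and regularization parameter via SDP) shows that system being solved:

      import numpy as np
      from scipy.spatial.distance import cdist

      def lssvm_train(K, y, gamma=1.0):
          """Solve the LS-SVM linear system
              [ 0   1^T         ] [b    ]   [0]
              [ 1   K + I/gamma ] [alpha] = [y]
          for the bias b and dual coefficients alpha (±1 targets treated as regression)."""
          n = len(y)
          A = np.zeros((n + 1, n + 1))
          A[0, 1:] = 1.0
          A[1:, 0] = 1.0
          A[1:, 1:] = K + np.eye(n) / gamma
          sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
          return sol[0], sol[1:]                      # b, alpha

      rng = np.random.RandomState(0)
      X = rng.randn(80, 4)
      y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.randn(80))

      # Two base kernels combined with fixed (hypothetical) weights; the SDP in the
      # paper would optimize these weights together with the regularization parameter.
      K_rbf = np.exp(-0.5 * cdist(X, X, "sqeuclidean"))
      K_lin = X @ X.T
      K = 0.7 * K_rbf + 0.3 * K_lin

      b, alpha = lssvm_train(K, y, gamma=10.0)
      pred = np.sign(K @ alpha + b)                   # training-set predictions
      print("training accuracy:", (pred == y).mean())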

  20. Novel near-infrared sampling apparatus for single kernel analysis of oil content in maize.

    PubMed

    Janni, James; Weinstock, B André; Hagen, Lisa; Wright, Steve

    2008-04-01

    A method for rapid, nondestructive chemical and physical analysis of individual maize (Zea mays L.) kernels is needed for the development of high-value food, feed, and fuel traits. Near-infrared (NIR) spectroscopy offers a robust nondestructive method of trait determination. However, traditional NIR bulk sampling techniques cannot be applied successfully to individual kernels. Obtaining optimized single kernel NIR spectra for chemometric predictive analysis requires a novel sampling technique that can account for the heterogeneous forms, morphologies, and opacities exhibited by individual maize kernels. In this study such a technique is described and compared to less effective means of single kernel NIR analysis. Results of applying a partial least squares (PLS) model for predicting percent oil content of individual kernels are shown.
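
    The chemometric step can be illustrated with a small PLS regression sketch (placeholder spectra and oil values; the number of latent variables is an arbitrary assumption, not the calibration used in the study):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      # Hypothetical single-kernel NIR data: absorbance spectra and reference oil
      # content (% w/w) obtained from destructive chemical analysis.
      rng = np.random.RandomState(0)
      spectra = rng.rand(120, 200)                     # 120 kernels x 200 wavelengths
      oil_percent = 3.0 + 4.0 * spectra[:, 50] + 0.2 * rng.randn(120)

      pls = PLSRegression(n_components=8)
      predicted = cross_val_predict(pls, spectra, oil_percent, cv=10)
      rmsecv = np.sqrt(np.mean((predicted.ravel() - oil_percent) ** 2))
      print("RMSECV (% oil): %.3f" % rmsecv)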

  1. Comparison of silver release predictions using PARFUME with results from the AGR-2 irradiation experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collin, Blaise P.; Demkowicz, Paul A.; Baldwin, Charles A.

    2016-11-01

    The PARFUME (PARticle FUel ModEl) code was used to predict silver release from tristructural isotropic (TRISO) coated fuel particles and compacts during the second irradiation experiment (AGR-2) of the Advanced Gas Reactor Fuel Development and Qualification program. The PARFUME model for the AGR-2 experiment used the fuel compact volume average temperature for each of the 559 days of irradiation to calculate the release of fission product silver from a representative particle for a select number of AGR-2 compacts and individual fuel particles containing either mixed uranium carbide/oxide (UCO) or 100% uranium dioxide (UO2) kernels. Post-irradiation examination (PIE) measurements were performed to provide data on release of silver from these compacts and individual fuel particles. The available experimental fractional releases of silver were compared to their corresponding PARFUME predictions. Preliminary comparisons show that PARFUME under-predicts the PIE results in UCO compacts and is in reasonable agreement with experimental data for UO2 compacts. The accuracy of PARFUME predictions is impacted by the code limitations in the modeling of the temporal and spatial distributions of the temperature across the compacts. Nevertheless, the comparisons on silver release lie within the same order of magnitude.

  2. Soft template synthesis of yolk/silica shell particles.

    PubMed

    Wu, Xue-Jun; Xu, Dongsheng

    2010-04-06

    Yolk/shell particles possess a unique structure that is composed of hollow shells that encapsulate other particles but with an interstitial space between them. These structures are different from core/shell particles in that the core particles are freely movable in the shell. Yolk/shell particles combine the properties of each component, and can find potential applications in catalysis, lithium ion batteries, and biosensors. In this Research News article, a soft-template-assisted method for the preparation of yolk/silica shell particles is presented. The demonstrated method is simple and general, and can produce hollow silica spheres incorporated with different particles independent of their diameters, geometry, and composition. Furthermore, yolk/mesoporous silica shell particles and multishelled particles are also prepared through optimization of the experimental conditions. Finally, potential applications of these particles are discussed.

  3. Computed tomography coronary stent imaging with iterative reconstruction: a trade-off study between medium kernel and sharp kernel.

    PubMed

    Zhou, Qijing; Jiang, Biao; Dong, Fei; Huang, Peiyu; Liu, Hongtao; Zhang, Minming

    2014-01-01

    To evaluate the improvement offered by the iterative reconstruction in image space (IRIS) technique in computed tomographic (CT) coronary stent imaging with a sharp kernel, and to perform a trade-off analysis. Fifty-six patients with 105 stents were examined by 128-slice dual-source CT coronary angiography (CTCA). Images were reconstructed using standard filtered back projection (FBP) and IRIS, with both the medium kernel and the sharp kernel applied. Image noise and stent diameter were investigated. Image noise was measured in both the background vessel and the in-stent lumen as an objective image evaluation. An image noise score and a stent score were used as subjective image evaluations. The CTCA images reconstructed with IRIS showed significant noise reduction compared with CTCA images reconstructed using the FBP technique, in both the background vessel and the in-stent lumen (the background noise decreased by approximately 25.4% ± 8.2% in medium kernel (P

  4. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    PubMed

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and has shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with a highly nonlinear distribution. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC and can remedy this drawback. KSRC requires a predetermined kernel function, and selecting the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC considers only the within-class reconstruction residual while ignoring the between-class relationship when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and we then use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims to learn a projection matrix and a corresponding kernel from the given base kernels such that, in the low-dimensional subspace, the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss when performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. Solutions for the proposed method can be found efficiently using the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm compared with state-of-the-art methods.

  5. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    PubMed

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Several computational methods have therefore been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well for complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. We then combine the Min kernel, or its normalized form, with one of the pairwise kernels by plugging it in as the base kernel. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers. We then evaluated our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method. We propose new methods to predict heterodimers using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting from non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state of the art.
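
    The two pairwise kernels have simple closed forms that are easy to state in code. The sketch below (illustrative; the per-protein feature vectors are random placeholders) evaluates TPPK and MLPK on a pair of protein pairs, using the Min kernel as the base kernel:

      import numpy as np

      def min_kernel(x, y):
          """Min (histogram-intersection) kernel on nonnegative feature vectors."""
          return np.sum(np.minimum(x, y))

      def tppk(k, a, b, c, d):
          """Tensor Product Pairwise Kernel between protein pairs (a, b) and (c, d)."""
          return k(a, c) * k(b, d) + k(a, d) * k(b, c)

      def mlpk(k, a, b, c, d):
          """Metric Learning Pairwise Kernel between protein pairs (a, b) and (c, d)."""
          return (k(a, c) - k(a, d) - k(b, c) + k(b, d)) ** 2

      # Hypothetical per-protein feature vectors (e.g. domain or phylogenetic profiles).
      rng = np.random.RandomState(0)
      p1, p2, p3, p4 = rng.rand(4, 30)

      print("TPPK :", tppk(min_kernel, p1, p2, p3, p4))
      print("MLPK :", mlpk(min_kernel, p1, p2, p3, p4))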

  6. Mapping QTLs controlling kernel dimensions in a wheat inter-varietal RIL mapping population.

    PubMed

    Cheng, Ruiru; Kong, Zhongxin; Zhang, Liwei; Xie, Quan; Jia, Haiyan; Yu, Dong; Huang, Yulong; Ma, Zhengqiang

    2017-07-01

    Seven kernel dimension QTLs were identified in wheat, and kernel thickness was found to be the most important dimension for grain weight improvement. Kernel morphology and weight of wheat (Triticum aestivum L.) affect both yield and quality; however, the genetic basis of these traits and their interactions has not been fully understood. In this study, to investigate the genetic factors affecting kernel morphology and the association of kernel morphology traits with kernel weight, kernel length (KL), width (KW) and thickness (KT) were evaluated, together with hundred-grain weight (HGW), in a recombinant inbred line population derived from Nanda2419 × Wangshuibai, with data from five trials (two locations over 3 years). The results showed that HGW was more closely correlated with KT and KW than with KL. A whole genome scan revealed four QTLs for KL, one for KW and two for KT, distributed on five different chromosomes. Of them, QKl.nau-2D for KL, and QKt.nau-4B and QKt.nau-5A for KT were newly identified major QTLs for the respective traits, explaining up to 32.6 and 41.5% of the phenotypic variation, respectively. Increases in KW and KT and reductions in the KL/KT and KW/KT ratios always resulted in significantly higher grain weight. Lines combining the Nanda2419 alleles of the 4B and 5A intervals had wider, thicker, rounder kernels and a 14% higher grain weight in the genotype-based analysis. A strong, negative linear relationship of the KW/KT ratio with grain weight was observed. It thus appears that kernel thickness is the most important kernel dimension factor in wheat improvement for higher yield. Mapping and marker identification of the kernel dimension-related QTLs should help realize these breeding goals.

  7. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Adaptive kernel function using line transect sampling

    NASA Astrophysics Data System (ADS)

    Albadareen, Baker; Ismail, Noriszura

    2018-04-01

    The estimation of f(0) is crucial in the line transect method, which is used for estimating population abundance in wildlife surveys. The classical kernel estimator of f(0) has a high negative bias. Our study proposes an adaptation of the kernel function which is shown to be more efficient than the usual kernel estimator. A simulation study is conducted to compare the performance of the proposed estimators with that of the classical kernel estimators.
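
    A minimal sketch of the classical estimator that the proposed adaptation improves upon: a Gaussian kernel estimate of f(0) from perpendicular distances, with reflection at the transect line (the simulated half-normal distances and the rule-of-thumb bandwidth are assumptions for illustration):

      import numpy as np

      def f0_kernel_estimate(distances, bandwidth):
          """Classical kernel estimate of the detection density at zero, f(0),
          from perpendicular distances in a line transect survey. Reflection at
          the transect line (distance 0) doubles each kernel contribution."""
          d = np.asarray(distances, dtype=float)
          z = d / bandwidth
          gauss = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
          return 2.0 * np.sum(gauss) / (len(d) * bandwidth)

      # Hypothetical perpendicular distances (half-normal detection, sigma = 10 m)
      rng = np.random.RandomState(0)
      distances = np.abs(rng.normal(0.0, 10.0, size=200))
      h = 1.06 * distances.std() * len(distances) ** (-1 / 5)   # rule-of-thumb bandwidth
      print("estimated f(0):", f0_kernel_estimate(distances, h))
      # For half-normal data the true f(0) is 2 / (sigma * sqrt(2*pi)), about 0.0798 here.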

  9. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.
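
    A minimal sketch of kernel PLS regression in the dual (NIPALS-style) form on synthetic data is given below; the RBF kernel, its width, the number of components, and the toy regression problem are assumptions, and the code is a simplified illustration rather than the implementation evaluated in the paper:

      import numpy as np
      from scipy.spatial.distance import cdist

      def kernel_pls_fit(K, Y, n_components):
          """NIPALS-style kernel PLS (dual form). K is the centered training Gram
          matrix, Y the centered response matrix (n x m). Returns score matrices
          T, U used to form the dual regression coefficients."""
          n = K.shape[0]
          Kd, Yd = K.copy(), Y.copy()
          T, U = [], []
          for _ in range(n_components):
              u = Yd[:, [0]]
              for _ in range(100):
                  t = Kd @ u
                  t /= np.linalg.norm(t)
                  u_new = Yd @ (Yd.T @ t)
                  u_new /= np.linalg.norm(u_new)
                  converged = np.linalg.norm(u_new - u) < 1e-10
                  u = u_new
                  if converged:
                      break
              T.append(t.ravel()); U.append(u.ravel())
              P = np.eye(n) - t @ t.T          # deflation projector
              Kd = P @ Kd @ P
              Yd = Yd - t @ (t.T @ Yd)
          return np.array(T).T, np.array(U).T

      # Synthetic 1-D regression problem (assumed data; RBF kernel width is a guess)
      rng = np.random.RandomState(0)
      X = rng.uniform(-3, 3, size=(100, 1))
      y = np.sin(X[:, 0]) + 0.1 * rng.randn(100)

      K = np.exp(-0.5 * cdist(X, X, "sqeuclidean"))
      n = len(y)
      H = np.eye(n) - np.ones((n, n)) / n               # centering matrix
      Kc = H @ K @ H
      Yc = (y - y.mean()).reshape(-1, 1)

      T, U = kernel_pls_fit(Kc, Yc, n_components=5)
      B = U @ np.linalg.inv(T.T @ Kc @ U) @ T.T @ Yc    # dual regression coefficients
      y_fit = (Kc @ B).ravel() + y.mean()               # fitted training responses
      print("training RMSE: %.3f" % np.sqrt(np.mean((y_fit - y) ** 2)))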

  10. Pollen source effects on growth of kernel structures and embryo chemical compounds in maize.

    PubMed

    Tanaka, W; Mantese, A I; Maddonni, G A

    2009-08-01

    Previous studies have reported effects of pollen source on the oil concentration of maize (Zea mays) kernels through modifications to both the embryo/kernel ratio and embryo oil concentration. The present study expands upon previous analyses by addressing pollen source effects on the growth of kernel structures (i.e. pericarp, endosperm and embryo), allocation of embryo chemical constituents (i.e. oil, protein, starch and soluble sugars), and the anatomy and histology of the embryos. Maize kernels with different oil concentrations were obtained from pollinations with two parental genotypes of contrasting oil concentration. The dynamics of the growth of kernel structures and allocation of embryo chemical constituents were analysed during the post-flowering period. Mature kernels were dissected to study the anatomy (embryonic axis and scutellum) and histology [cell number and cell size of the scutellums, presence of sub-cellular structures in scutellum tissue (starch granules, oil and protein bodies)] of the embryos. Plants of all crosses exhibited a similar kernel number and kernel weight. Pollen source modified neither the growth period of kernel structures, nor pericarp growth rate. By contrast, pollen source determined a trade-off between embryo and endosperm growth rates, which impacted on the embryo/kernel ratio of mature kernels. Modifications to the embryo size were mediated by scutellum cell number. Pollen source also affected (P < 0.01) the allocation of embryo chemical compounds. Negative correlations between embryo oil concentration and the concentrations of starch (r = 0.98, P < 0.01) and soluble sugars (r = 0.95, P < 0.05) were found. Coincidentally, embryos with low oil concentration had an increased (P < 0.05-0.10) scutellum cell area occupied by starch granules and fewer oil bodies. The effects of pollen source on both the embryo/kernel ratio and the allocation of embryo chemicals seem to be related to the early established sink strength (i.e. sink size and sink activity) of the embryos.

  11. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall be...

  12. 7 CFR 51.2090 - Serious damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... defect which makes a kernel or piece of kernel unsuitable for human consumption, and includes decay...: Shriveling when the kernel is seriously withered, shrunken, leathery, tough or only partially developed: Provided, that partially developed kernels are not considered seriously damaged if more than one-fourth of...

  13. Anisotropic hydrodynamics with a scalar collisional kernel

    NASA Astrophysics Data System (ADS)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be the leading-order 2 ↔ 2 scattering kernel in scalar λφ^4 theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  14. Ideal regularization for learning kernels from labels.

    PubMed

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

    In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently. Copyright © 2014 Elsevier Ltd. All rights reserved.
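
    The role of the label information can be illustrated with the "ideal" kernel built from labels. The sketch below is a simplification: it blends the ideal kernel into a base kernel with a fixed weight and checks kernel alignment, whereas the paper adds a regularizer that is linear in the kernel matrix to a kernel-learning objective; the data and the blending weight are illustrative assumptions:

      import numpy as np
      from scipy.spatial.distance import cdist

      rng = np.random.RandomState(0)
      X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])
      y = np.array([0] * 30 + [1] * 30)

      # Base kernel (RBF) and the "ideal" kernel built from the labels:
      # K_ideal[i, j] = 1 if samples i and j share a label, else 0.
      K_base = np.exp(-0.5 * cdist(X, X, "sqeuclidean"))
      K_ideal = (y[:, None] == y[None, :]).astype(float)

      # Illustrative use of the label information: blend the ideal kernel into the
      # base kernel with a fixed weight (a crude stand-in for the paper's
      # regularized kernel-learning formulation).
      lam = 0.3
      K_new = (1 - lam) * K_base + lam * K_ideal

      # Alignment with the ideal kernel as a quick quality check
      def alignment(K1, K2):
          return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

      print("alignment before: %.3f  after: %.3f"
            % (alignment(K_base, K_ideal), alignment(K_new, K_ideal)))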

  15. Straight-chain halocarbon forming fluids for TRISO fuel kernel production - Tests with yttria-stabilized zirconia microspheres

    NASA Astrophysics Data System (ADS)

    Baker, M. P.; King, J. C.; Gorman, B. P.; Braley, J. C.

    2015-03-01

    Current methods of TRISO fuel kernel production in the United States use a sol-gel process with trichloroethylene (TCE) as the forming fluid. After contact with radioactive materials, the spent TCE becomes a mixed hazardous waste, and high costs are associated with its recycling or disposal. Reducing or eliminating this mixed waste stream would not only benefit the environment, but would also enhance the economics of kernel production. Previous research yielded three candidates for testing as alternatives to TCE: 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane. This study considers the production of yttria-stabilized zirconia (YSZ) kernels in silicone oil and in the three chosen alternative forming fluids, with subsequent characterization of the produced kernels and the used forming fluid. Kernels formed in silicone oil and bromotetradecane were comparable to those produced by previous kernel production efforts, while those produced in chlorooctadecane and iodododecane experienced gelation issues leading to poor kernel formation and geometry.

  16. Numerical study of the ignition behavior of a post-discharge kernel injected into a turbulent stratified cross-flow

    NASA Astrophysics Data System (ADS)

    Jaravel, Thomas; Labahn, Jeffrey; Ihme, Matthias

    2017-11-01

    The reliable initiation of flame ignition by high-energy spark kernels is critical for the operability of aviation gas turbines. The evolution of a spark kernel ejected by an igniter into a turbulent stratified environment is investigated using detailed numerical simulations with complex chemistry. At early times post ejection, comparisons of simulation results with high-speed Schlieren data show that the initial trajectory of the kernel is well reproduced, with a significant amount of air entrainment from the surrounding flow induced by the kernel ejection. After transiting through a non-flammable mixture, the kernel reaches a second stream of flammable methane-air mixture, where the success of kernel ignition was found to depend on the local flow state and operating conditions. By performing parametric studies, the probability of kernel ignition was identified and compared with experimental observations. The ignition behavior is characterized by analyzing the local chemical structure, and its stochastic variability is also investigated.

  17. The site, size, spatial stability, and energetics of an X-ray flare kernel

    NASA Technical Reports Server (NTRS)

    Petrasso, R.; Gerassimenko, M.; Nolte, J.

    1979-01-01

    The site, size evolution, and energetics of an X-ray kernel that dominated a solar flare during its rise and somewhat during its peak are investigated. The position of the kernel remained stationary to within about 3 arc sec over the 30-min interval of observations, despite pulsations in the kernel X-ray brightness in excess of a factor of 10. This suggests a tightly bound, deeply rooted magnetic structure, more plausibly associated with the near chromosphere or low corona rather than with the high corona. The H-alpha flare onset coincided with the appearance of the kernel, again suggesting a close spatial and temporal coupling between the chromospheric H-alpha event and the X-ray kernel. At the first kernel brightness peak its size was no larger than about 2 arc sec, when it accounted for about 40% of the total flare flux. In the second rise phase of the kernel, a source power input of order 2 × 10^24 erg/s is minimally required.

  18. Security writing application of thermal decomposition assisted NaYF4:Er3+/Yb3+ upconversion phosphor

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Tiwari, S. P.; Esteves da Silva, Joaquim C. G.; Kumar, K.

    2018-07-01

    The authors have synthesized water-dispersible NaYF4:Er3+/Yb3+ upconversion particles via a thermal decomposition route and optimized the green upconversion emission through a concentration variation of the Yb3+ sensitizer. The prepared particles were found to be ellipsoid in shape having an average particle dimension of 600 × 150 nm. It is observed that the sample with 18 mmol% Yb3+ ion concentration and 2 mmol% Er3+ ion gives optimum upconversion intensity in the green region under 980 nm excitation. Colloidal dispersibility of the sample in different solvents was checked and hexane was found to be the best medium for the prepared particles. The particle size of the sample was found to be suitable for the preparation of colloidal ink and security writing on a plain sheet of paper. This was demonstrated successfully using ink prepared in polyvinyl chloride gold medium.

  19. Memory effects for a stochastic fractional oscillator in a magnetic field

    NASA Astrophysics Data System (ADS)

    Mankin, Romi; Laas, Katrin; Laas, Tõnu; Paekivi, Sander

    2018-01-01

    The problem of random motion of harmonically trapped charged particles in a constant external magnetic field is studied. A generalized three-dimensional Langevin equation with a power-law memory kernel is used to model the interaction of Brownian particles with the complex structure of viscoelastic media (e.g., dusty plasmas). The influence of a fluctuating environment is modeled by an additive fractional Gaussian noise. In the long-time limit the exact expressions of the first-order and second-order moments of the fluctuating position for the Brownian particle subjected to an external periodic force in the plane perpendicular to the magnetic field have been calculated. Also, the particle's angular momentum is found. It is shown that an interplay of external periodic forcing, memory, and colored noise can generate a variety of cooperation effects, such as memory-induced sign reversals of the angular momentum, multiresonance versus Larmor frequency, and memory-induced particle confinement in the absence of an external trapping field. Particularly in the case without external trapping, if the memory exponent is lower than a critical value, we find a resonancelike behavior of the anisotropy in the particle position distribution versus the driving frequency, implying that it can be efficiently excited by an oscillating electric field. Similarities and differences between the behaviors of the models with internal and external noises are also discussed.

  20. The pre-image problem in kernel methods.

    PubMed

    Kwok, James Tin-yau; Tsang, Ivor Wai-hung

    2004-11-01

    In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method, which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra and does not suffer from numerical instability or local minimum problems. Evaluations on kernel PCA and kernel clustering on the USPS data set show much improved performance.
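
    The distance-based idea can be sketched for an RBF kernel as follows. Feature-space distances from a feature-space point (here simply the mean of a mapped subset) to the training images are converted analytically to input-space distances, and a pre-image is then estimated from the nearest neighbors; the closing distance-weighted average is a simplification of the localized least-squares/MDS step used in the paper, and the data and parameters are illustrative assumptions:

      import numpy as np
      from scipy.spatial.distance import cdist

      rng = np.random.RandomState(0)
      X = rng.normal(0, 1, size=(200, 2))
      gamma = 0.5
      K = np.exp(-gamma * cdist(X, X, "sqeuclidean"))

      # Feature-space point psi = sum_i w_i * phi(x_i): here the mean of the mapped
      # points of a subset (e.g. a kernel k-means cluster), so w is uniform on it.
      idx = np.arange(50)
      w = np.zeros(len(X)); w[idx] = 1.0 / len(idx)

      # Squared feature-space distances ||psi - phi(x_j)||^2 = w^T K w - 2 (K w)_j + 1
      # (the RBF kernel satisfies k(x, x) = 1).
      d2_feat = w @ K @ w - 2.0 * (K @ w) + 1.0

      # For the RBF kernel, ||phi(a) - phi(b)||^2 = 2 - 2 exp(-gamma ||a - b||^2),
      # so corresponding input-space distances can be recovered analytically.
      k_vals = np.clip(1.0 - d2_feat / 2.0, 1e-12, 1.0)
      d2_input = -np.log(k_vals) / gamma

      # Estimate the pre-image from the nearest neighbors in input space
      # (a distance-weighted average; the paper instead solves a small
      # least-squares/MDS problem over these neighbors).
      nn = np.argsort(d2_input)[:10]
      weights = 1.0 / (d2_input[nn] + 1e-12)
      pre_image = (weights[:, None] * X[nn]).sum(axis=0) / weights.sum()
      print("estimated pre-image:", pre_image)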
