Determining the minimum required uranium carbide content for HTGR UCO fuel kernels
McMurray, Jacob W.; Lindemer, Terrence B.; Brown, Nicholas R.; ...
2017-03-10
Three important failure mechanisms must be controlled in high-temperature gas-cooled reactor (HTGR) fuel for certain higher burnup applications: SiC layer rupture, SiC corrosion by CO, and coating compromise from kernel migration. All are related to high CO pressures stemming from free O generated when uranium present as UO2 fissions and the O is not subsequently bound by other elements. In the HTGR UCO kernel design, CO buildup from excess O is controlled by the inclusion of additional uranium in the form of a carbide, UCx. An approach for determining the minimum UCx content to ensure negligible CO formation was developed and demonstrated using CALPHAD models and the Serpent 2 reactor physics and depletion analysis tool. Our results are intended to be more accurate than previous estimates by including more nuclear and chemical factors, in particular the effect of transmutation products on the oxygen distribution as the fuel kernel composition evolves with burnup.
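The oxygen-balance argument above can be illustrated with a deliberately simplified stoichiometric sketch. It assumes, hypothetically, that each fissioned uranium atom from UO2 liberates two oxygen atoms, that a known fraction of those remains unbound by fission products, and that each carbide uranium getters two oxygen atoms by oxidizing to UO2. The actual work uses CALPHAD models and Serpent 2 depletion, including the transmutation effects this sketch ignores.

```python
def min_carbide_fraction(burnup_fima, free_o_per_fission):
    """Crude oxygen balance for a UCO kernel.

    burnup_fima:        fraction of initial heavy-metal atoms fissioned
    free_o_per_fission: O atoms per fission NOT bound by fission products
                        (an assumed input; the paper derives this from
                        CALPHAD thermochemistry and depletion data)

    Each carbide uranium is taken to absorb 2 O by converting to UO2,
    so the minimum carbide mole fraction is oxygen release / uptake.
    """
    return burnup_fima * free_o_per_fission / 2.0

# Illustrative only: 20% FIMA with 0.4 free O atoms per fission
# would call for at least ~4% of the uranium as carbide.
fraction = min_carbide_fraction(0.2, 0.4)
```

A real determination must track how the free-oxygen yield evolves with burnup and kernel composition, which is precisely the point of the CALPHAD/Serpent 2 coupling described in the abstract.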
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, William R.; Lee, John C.; Baxter, Alan
Information and measured data from the initial Fort St. Vrain (FSV) high temperature gas reactor core is used to develop a benchmark configuration to validate computational methods for analysis of a full-core, commercial HTR configuration. Large uncertainties in the geometry and composition data for the FSV fuel and core are identified, including: (1) the relative numbers of fuel particles for the four particle types, (2) the distribution of fuel kernel diameters for the four particle types, (3) the Th:U ratio in the initial FSV core, and (4) the buffer thickness for the fissile and fertile particles. Sensitivity studies were performed to assess each of these uncertainties. A number of methods were developed to assist in these studies, including: (1) the automation of MCNP5 input files for FSV using Python scripts, (2) a simple method to verify isotopic loadings in MCNP5 input files, (3) an automated procedure to conduct a coupled MCNP5-RELAP5 analysis for a full-core FSV configuration with thermal-hydraulic feedback, and (4) a methodology for sampling kernel diameters from arbitrary power law and Gaussian PDFs that preserved fuel loading and packing factor constraints. A reference FSV fuel configuration was developed based on having a single diameter kernel for each of the four particle types, preserving known uranium and thorium loadings and packing factor (58%). Three fuel models were developed, based on representing the fuel as a mixture of kernels with two diameters, four diameters, or a continuous range of diameters. The fuel particles were put into a fuel compact using either a lattice-based approach or a stochastic packing methodology from RPI, and simulated with MCNP5. The results of the sensitivity studies indicated that the uncertainties in the relative numbers and sizes of fissile and fertile kernels were not important, nor were the distributions of kernel diameters within their diameter ranges.
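The kernel-diameter sampling in item (4) can be sketched as follows. The truncated-Gaussian shape, the diameter range, and the volume-based stopping rule below are illustrative assumptions; the report samples from both power-law and Gaussian PDFs while preserving fuel loading and packing factor, and its actual algorithm may differ.

```python
import math
import random

def sample_diameter(mu, sigma, d_min, d_max, rng):
    """Rejection-sample one kernel diameter from a Gaussian
    truncated to the physical range [d_min, d_max]."""
    while True:
        d = rng.gauss(mu, sigma)
        if d_min <= d <= d_max:
            return d

def sample_kernels(target_volume, mu, sigma, d_min, d_max, seed=0):
    """Draw kernel diameters until their summed sphere volume reaches
    the target, so a prescribed fuel loading / packing fraction is
    preserved (up to the last partial kernel)."""
    rng = random.Random(seed)
    diameters, volume = [], 0.0
    while volume < target_volume:
        d = sample_diameter(mu, sigma, d_min, d_max, rng)
        diameters.append(d)
        volume += math.pi * d**3 / 6.0
    return diameters

# Illustrative numbers only: a 350 um mean kernel diameter and a 58%
# packing fraction in a 1 cm^3 region give target_volume = 0.58e12 um^3.
kernels = sample_kernels(target_volume=0.58e12, mu=350.0, sigma=20.0,
                         d_min=300.0, d_max=400.0)
```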
The uncertainty in the Th:U ratio in the initial FSV core was found to be important with a crude study. The uncertainty in the TRISO buffer thickness was estimated to be unimportant, but the study was not conclusive. FSV fuel compacts and a regular FSV fuel element were analyzed with MCNP5 and compared with predictions using a modified version of HELIOS that is capable of analyzing TRISO fuel configurations. The HELIOS analyses were performed by SSP. The eigenvalue discrepancies between HELIOS and MCNP5 are currently on the order of 1%, but these are still being evaluated. Full-core FSV configurations were developed for two initial critical configurations: a cold, clean critical loading and a critical configuration at 70% power. MCNP5 predictions are compared to experimental data and the results are mixed. Analyses were also done for the pulsed neutron experiments that were conducted by GA for the initial FSV core. MCNP5 was used to model these experiments and reasonable agreement with measured results has been observed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlou, A. T.; Betzler, B. R.; Burke, T. P.
Uncertainties in the composition and fabrication of fuel compacts for the Fort St. Vrain (FSV) high temperature gas reactor have been studied by performing eigenvalue sensitivity studies that represent the key uncertainties for the FSV neutronic analysis. The uncertainties for the TRISO fuel kernels were addressed by developing a suite of models for an 'average' FSV fuel compact that models the fuel as (1) a mixture of two different TRISO fuel particles representing fissile and fertile kernels, (2) a mixture of four different TRISO fuel particles representing small and large fissile kernels and small and large fertile kernels, and (3) a stochastic mixture of the four types of fuel particles where every kernel has its diameter sampled from a continuous probability density function. All of the discrete diameter and continuous diameter fuel models were constrained to have the same fuel loadings and packing fractions. For the non-stochastic discrete diameter cases, the MCNP compact model arranged the TRISO fuel particles on a hexagonal honeycomb lattice. This lattice-based fuel compact was compared to a stochastic compact where the locations (and kernel diameters for the continuous diameter cases) of the fuel particles were randomly sampled. Partial core configurations were modeled by stacking compacts into fuel columns containing graphite. The differences in eigenvalues between the lattice-based and stochastic models were small, but the runtime of the lattice-based fuel model was roughly 20 times shorter than that of the stochastic-based fuel model. (authors)
Invited Review. Combustion instability in spray-guided stratified-charge engines. A review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fansler, Todd D.; Reuss, D. L.; Sick, V.
2015-02-02
Our article reviews systematic research on combustion instabilities (principally rare, random misfires and partial burns) in spray-guided stratified-charge (SGSC) engines operated at part load with highly stratified fuel-air-residual mixtures. Results from high-speed optical imaging diagnostics and numerical simulation provide a conceptual framework and quantify the sensitivity of ignition and flame propagation to strong, cyclically varying temporal and spatial gradients in the flow field and in the fuel-air-residual distribution. For SGSC engines using multi-hole injectors, spark stretching and locally rich ignition are beneficial. Moreover, combustion instability is dominated by convective flow fluctuations that impede motion of the spark or flame kernel toward the bulk of the fuel, coupled with low flame speeds due to locally lean mixtures surrounding the kernel. In SGSC engines using outwardly opening piezo-electric injectors, ignition and early flame growth are strongly influenced by the spray's characteristic recirculation vortex. For both injection systems, the spray and the intake/compression-generated flow field influence each other. Factors underlying the benefits of multi-pulse injection are identified. Finally, some unresolved questions include (1) the extent to which piezo-SGSC misfires are caused by failure to form a flame kernel rather than by flame-kernel extinction (as in multi-hole SGSC engines); (2) the relative contributions of partially premixed flame propagation and mixing-controlled combustion under the exceptionally late-injection conditions that permit SGSC operation on E85-like fuels with very low NOx and soot emissions; and (3) the effects of flow-field variability on later combustion, where fuel-air-residual mixing within the piston bowl becomes important.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jolly, Brian C.; Helmreich, Grant; Cooley, Kevin M.
In support of fully ceramic microencapsulated (FCM) fuel development, coating development work is ongoing at Oak Ridge National Laboratory (ORNL) to produce tri-structural isotropic (TRISO) coated fuel particles with both UN kernels and surrogate (uranium-free) kernels. The nitride kernels are used to increase fissile density in these SiC-matrix fuel pellets with details described elsewhere. The surrogate TRISO particles are necessary for separate effects testing and for utilization in the consolidation process development. This report focuses on the fabrication and characterization of surrogate TRISO particles which use 800 μm diameter ZrO2 microspheres as the kernel.
TRISO coating development progress for uranium nitride kernels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jolly, Brian C.; Lindemer, Terrence; Terrani, Kurt A.
2015-08-01
In support of fully ceramic matrix (FCM) fuel development [1-2], coating development work is ongoing at the Oak Ridge National Laboratory (ORNL) to produce tri-structural isotropic (TRISO) coated fuel particles with UN kernels [3]. The nitride kernels are used to increase fissile density in these SiC-matrix fuel pellets with details described elsewhere [4]. The advanced gas reactor (AGR) program at ORNL used fluidized bed chemical vapor deposition (FBCVD) techniques for TRISO coating of UCO (two-phase mixture of UO2 and UCx) kernels [5]. Similar techniques were employed for coating of the UN kernels; however, significant changes in processing conditions were required to maintain acceptable coating properties due to physical property and dimensional differences between the UCO and UN kernels (Table 1).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, Karen E.; van Rooyen, Isabella J.
2016-11-01
AGR-1 fuel Compact 4-3-3 achieved 18.63% FIMA and was exposed subsequently to a safety test at 1600°C. Two particles, AGR1-433-003 and AGR1-433-007, with measured-to-calculated 110mAg inventories of <22% and 100%, respectively, were selected for comparative electron microprobe analysis to determine whether the distribution or abundance of fission products differed proximally and distally from the deformed kernel in AGR1-433-003, and how this compared to fission product distribution in AGR1-433-007. On the deformed side of AGR1-433-003, Xe, Cs, I, Eu, Sr, and Te concentrations in the kernel buffer interface near the protruded kernel were up to six times higher than on the opposite, non-deformed side. At the SiC-inner pyrolytic carbon (IPyC) interface proximal to the deformed kernel, Pd and Ag concentrations were 1.2 wt% and 0.04 wt%, respectively, whereas on the SiC-IPyC interface distal from the kernel deformation those elements measured 0.4 and 0.01 wt%, respectively. Palladium and Ag concentrations at the SiC-IPyC interface of AGR1-433-007 were 2.05 and 0.05 wt%, respectively. Rare earth element concentrations at the SiC-IPyC interface of AGR1-433-007 were a factor of ten higher than at the SiC-IPyC interfaces measured in particle AGR1-433-003. Palladium permeated the SiC layer of AGR1-433-007 and the non-deformed SiC layer of AGR1-433-003.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jolly, Brian C.; Lindemer, Terrence; Terrani, Kurt A.
In support of fully ceramic matrix (FCM) fuel development, coating development work has begun at the Oak Ridge National Laboratory (ORNL) to produce tri-isotropic (TRISO) coated fuel particles with UN kernels. The nitride kernels are used to increase heavy metal density in these SiC-matrix fuel pellets with details described elsewhere. The advanced gas reactor (AGR) program at ORNL used fluidized bed chemical vapor deposition (FBCVD) techniques for TRISO coating of UCO (two-phase mixture of UO2 and UCx) kernels. Similar techniques were employed for coating of the UN kernels; however, significant changes in processing conditions were required to maintain acceptable coating properties due to physical property and dimensional differences between the UCO and UN kernels.
NASA Astrophysics Data System (ADS)
Hunt, R. D.; Silva, G. W. C. M.; Lindemer, T. B.; Anderson, K. K.; Collins, J. L.
2012-08-01
The US Department of Energy continues to use the internal gelation process in its preparation of tristructural isotropic coated fuel particles. The focus of this work is to develop uranium fuel kernels with adequately dispersed silicon carbide (SiC) nanoparticles, high crush strengths, uniform particle diameter, and good sphericity. During irradiation to high burnup, the SiC in the uranium kernels will serve as getters for excess oxygen and help control the oxygen potential in order to minimize the potential for kernel migration. The hardness of SiC required modifications to the gelation system that was used to make uranium kernels. Suitable processing conditions and potential equipment changes were identified so that the SiC could be homogeneously dispersed in gel spheres. Finally, dilute hydrogen rather than argon should be used to sinter the uranium kernels with SiC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aly, A.; Avramova, Maria; Ivanov, Kostadin
To correctly describe and predict the hydrogen distribution, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes with a sub-channel code as well as with a computational fluid dynamics (CFD) tool have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.
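The hydrogen-migration model described above combines Fickian diffusion with thermo-diffusion (the Soret effect), which drives hydrogen toward colder cladding regions; the precipitation/dissolution part of the model is omitted here. A minimal 1-D sketch follows, with illustrative parameter values that are not BISON's; boundary treatment is deliberately left out.

```python
import numpy as np

def hydrogen_flux(c, T, dx, D=1e-10, Qstar=25e3, R=8.314):
    """Hydrogen flux: Fickian diffusion down the concentration gradient
    plus a Soret term proportional to the temperature gradient, which
    pushes hydrogen toward cold regions. D (m^2/s) and the heat of
    transport Qstar (J/mol) are illustrative placeholder values."""
    grad_c = np.gradient(c, dx)
    grad_T = np.gradient(T, dx)
    return -D * (grad_c + Qstar * c / (R * T**2) * grad_T)

def hydrogen_step(c, T, dx, dt, **kw):
    """One explicit Euler step of dC/dt = -dJ/dx (boundaries untreated)."""
    return c - dt * np.gradient(hydrogen_flux(c, T, dx, **kw), dx)
```

With a uniform concentration and a temperature rising from left to right, the flux is everywhere negative, i.e. hydrogen drifts toward the cold side, as the abstract's migration mechanism requires.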
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaise Collin
The Idaho National Laboratory (INL) PARFUME (particle fuel model) code was used to assess the overall fuel performance of uranium nitride (UN) tristructural isotropic (TRISO) ceramic fuel under irradiation conditions typical of a Light Water Reactor (LWR). The dimensional changes of the fuel particle layers and kernel were calculated, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burnup. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burnup. These material properties have large uncertainties at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, a large experimental effort would be needed to establish material properties, including kernel and PyC swelling rates, under these conditions before definitive conclusions can be drawn on the behavior of UN TRISO fuel in LWRs.
Selection and properties of alternative forming fluids for TRISO fuel kernel production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, M. P.; King, J. C.; Gorman, B. P.
2013-01-01
Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.
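The settling-velocity and column-height estimate mentioned above can be sketched with Stokes' law. The function names and example numbers below are illustrative; a real sizing would add drag corrections outside the creeping-flow regime and couple in the heat-transfer (gelation-time) calculation.

```python
def settling_velocity(d, rho_drop, rho_fluid, mu, g=9.81):
    """Stokes-law terminal velocity (m/s) of a small sphere of diameter d
    in a viscous fluid. Valid only at low Reynolds number."""
    return g * d**2 * (rho_drop - rho_fluid) / (18.0 * mu)

def column_height(d, rho_drop, rho_fluid, mu, residence_time):
    """First-order column sizing: the droplet must stay suspended long
    enough to gel, so height ~ settling velocity x residence time."""
    return settling_velocity(d, rho_drop, rho_fluid, mu) * residence_time

# Illustrative properties only (SI units): a 1 mm droplet, 300 kg/m^3
# density contrast, fluid viscosity 3 mPa*s, 10 s gelation time.
h = column_height(1e-3, 1300.0, 1000.0, 3e-3, residence_time=10.0)
```

The measured density and viscosity data reported in the abstract feed directly into `rho_fluid` and `mu`, which is why those properties were characterized as functions of temperature.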
Selection and properties of alternative forming fluids for TRISO fuel kernel production
NASA Astrophysics Data System (ADS)
Baker, M. P.; King, J. C.; Gorman, B. P.; Marshall, D. W.
2013-01-01
Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.
DNS study of the ignition of n-heptane fuel spray under high pressure and lean conditions
NASA Astrophysics Data System (ADS)
Wang, Yunliang; Rutland, Christopher J.
2005-01-01
Direct numerical simulations (DNS) are used to investigate the ignition of n-heptane fuel spray under high pressure and lean conditions. For the solution of the carrier gas fluid, the Eulerian method is employed, while for the fuel spray, the Lagrangian method is used. A chemistry mechanism for n-heptane with 33 species and 64 reactions is adopted to describe the chemical reactions. Initial carrier gas temperature and pressure are 926 K and 30.56 atmospheres, respectively. The initial global equivalence ratio is 0.258. Two cases with droplet radii of 35.5 and 20.0 microns are simulated. Evolutions of the carrier gas temperature and species mass fractions are presented. Contours of the carrier gas temperature and species mass fractions near ignition and after ignition are presented. The results show that the smaller fuel droplet case ignites earlier than the larger droplet case. For the larger droplet case, ignition occurs first at one location; for the smaller droplet case, however, ignition occurs first at multiple locations. Significant NO is produced at the ignition kernels when the temperature there is high enough. For the larger droplet case, more NO is produced than in the smaller droplet case due to the inhomogeneous distribution and incomplete mixing of fuel vapor.
Modeling and analysis of UN TRISO fuel for LWR application using the PARFUME code
NASA Astrophysics Data System (ADS)
Collin, Blaise P.
2014-08-01
The Idaho National Laboratory (INL) PARFUME (PARticle FUel ModEl) code was used to assess the overall fuel performance of uranium nitride (UN) tristructural isotropic (TRISO) ceramic fuel under irradiation conditions typical of a Light Water Reactor (LWR). The dimensional changes of the fuel particle layers and kernel were calculated, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burn-up. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burn-up. These material properties have large uncertainties at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, a large experimental effort would be needed to establish material properties, including kernel and PyC swelling rates, under these conditions before definitive conclusions can be drawn on the behavior of UN TRISO fuel in LWRs.
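PARFUME converts the computed layer stresses into failure probabilities; a common form for a brittle layer such as SiC is the weakest-link Weibull expression sketched below. The characteristic strength and Weibull modulus shown are illustrative placeholders, not PARFUME's actual parameters (which also account for the stressed volume of the layer).

```python
import math

def weibull_failure_probability(sigma, sigma_0, m):
    """Weakest-link Weibull failure probability for a brittle layer
    under tensile stress sigma (Pa). sigma_0 is a characteristic
    strength and m the Weibull modulus; both are material parameters
    and the example values below are illustrative only."""
    if sigma <= 0.0:
        return 0.0  # compressive or zero stress: no tensile failure
    return 1.0 - math.exp(-((sigma / sigma_0) ** m))

# Example: 200 MPa tensile stress, 400 MPa characteristic strength, m = 6
p = weibull_failure_probability(200e6, 400e6, 6.0)
```

Because kernel and PyC swelling drive the stress levels, the large swelling-rate uncertainties cited in the abstract propagate directly into these failure probabilities.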
Microscopic analysis of irradiated AGR-1 coated particle fuel compacts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott A. Ploger; Paul A. Demkowicz; John D. Hunn
The AGR-1 experiment involved irradiation of 72 TRISO-coated particle fuel compacts to a peak compact-average burnup of 19.5% FIMA with no in-pile failures observed out of 3 × 10^5 total particles. Irradiated AGR-1 fuel compacts have been cross-sectioned and analyzed with optical microscopy to characterize kernel, buffer, and coating behavior. Six compacts have been examined, spanning a range of irradiation conditions (burnup, fast fluence, and irradiation temperature) and including all four TRISO coating variations irradiated in the AGR-1 experiment. The cylindrical specimens were sectioned both transversely and longitudinally, then polished to expose from 36 to 79 individual particles near midplane on each mount. The analysis focused primarily on kernel swelling and porosity, buffer densification and fracturing, buffer–IPyC debonding, and fractures in the IPyC and SiC layers. Characteristic morphologies have been identified, 981 particles have been classified, and spatial distributions of particle types have been mapped. No significant spatial patterns were discovered in these cross sections. However, some trends were found between morphological types and certain behavioral aspects. Buffer fractures were found in 23% of the particles, and these fractures often resulted in unconstrained kernel protrusion into the open cavities. Fractured buffers and buffers that stayed bonded to IPyC layers appear related to larger pore size in kernels. Buffer–IPyC interface integrity evidently factored into initiation of rare IPyC fractures. Fractures through part of the SiC layer were found in only four classified particles, all in conjunction with IPyC–SiC debonding. Compiled results suggest that the deliberate coating fabrication variations influenced the frequencies of IPyC fractures and IPyC–SiC debonds.
NASA Astrophysics Data System (ADS)
Baker, M. P.; King, J. C.; Gorman, B. P.; Braley, J. C.
2015-03-01
Current methods of TRISO fuel kernel production in the United States use a sol-gel process with trichloroethylene (TCE) as the forming fluid. After contact with radioactive materials, the spent TCE becomes a mixed hazardous waste, and high costs are associated with its recycling or disposal. Reducing or eliminating this mixed waste stream would not only benefit the environment, but would also enhance the economics of kernel production. Previous research yielded three candidates for testing as alternatives to TCE: 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane. This study considers the production of yttria-stabilized zirconia (YSZ) kernels in silicone oil and the three chosen alternative forming fluids, with subsequent characterization of the produced kernels and used forming fluid. Kernels formed in silicone oil and bromotetradecane were comparable to those produced by previous kernel production efforts, while those produced in chlorooctadecane and iodododecane experienced gelation issues leading to poor kernel formation and geometry.
Thermochemical Assessment of Oxygen Gettering by SiC or ZrC in PuO2-x TRISO Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besmann, Theodore M
2010-01-01
Particulate nuclear fuel in a modular helium reactor is being considered for the consumption of excess plutonium and related transuranics. In particular, efforts to largely consume transuranics in a single pass will require the fuel to undergo very high burnup. This deep burn concept will thus make the proposed plutonia TRISO fuel particularly likely to suffer kernel migration, where carbon in the buffer layer and inner pyrolytic carbon layer is transported from the high temperature side of the particle to the low temperature side. This phenomenon is observed to cause particle failure and therefore must be mitigated. The addition of SiC or ZrC in the oxide kernel or in a layer in communication with the kernel will lower the oxygen potential and therefore prevent kernel migration, and this has been demonstrated with SiC. In this work a thermochemical analysis was performed to predict oxygen potential behavior in the plutonia TRISO fuel to burnups of 50% FIMA with and without the presence of oxygen-gettering SiC and ZrC. Kernel migration is believed to be controlled by CO gas transporting carbon from the hot side to the cool side, and CO pressure is governed by the oxygen potential in the presence of carbon. The gettering phases significantly reduce the oxygen potential and thus CO pressure in an otherwise PuO2-x kernel, and prevent kernel migration by limiting CO gas diffusion through the buffer layer. The reduction in CO pressure can also reduce the peak pressure within the particles by ~50%, thus reducing the likelihood of pressure-induced particle failure. A model for kernel migration was used to semi-quantitatively assess the effect of controlling oxygen potential with SiC or ZrC and demonstrated the dramatic effect of the addition of these phases on carbon transport.
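The link between oxygen potential and CO pressure invoked above follows from the equilibrium C(s) + 1/2 O2 = CO at unit carbon activity: p_CO = exp(-ΔG_f(CO)/RT) · sqrt(p_O2), with the oxygen potential defined as μ_O2 = RT ln p_O2. A small sketch follows; the ΔG_f(CO) value used is a rough illustrative estimate, not the assessed thermochemistry used in the analysis.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def co_pressure(mu_O2, T, dG_CO):
    """Equilibrium CO pressure (atm) over free carbon, from
    C(s) + 1/2 O2 = CO:  p_CO = exp(-dG_CO/RT) * sqrt(p_O2),
    where the oxygen potential mu_O2 = R*T*ln(p_O2) (J/mol) and
    dG_CO is the standard Gibbs energy of formation of CO at T."""
    p_O2 = math.exp(mu_O2 / (R * T))
    return math.exp(-dG_CO / (R * T)) * math.sqrt(p_O2)

# Illustrative: at T = 1500 K with dG_CO ~ -250 kJ/mol, compare an
# ungettered oxygen potential (-500 kJ/mol) to a gettered one (-550 kJ/mol).
p_gettered   = co_pressure(-550e3, 1500.0, -250e3)
p_ungettered = co_pressure(-500e3, 1500.0, -250e3)
```

Lowering the oxygen potential lowers p_CO through the sqrt(p_O2) factor, which is exactly the mechanism by which the SiC or ZrC getter suppresses CO-driven kernel migration.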
Modeling and Analysis of FCM UN TRISO Fuel Using the PARFUME Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaise Collin
2013-09-01
The PARFUME (PARticle FUel ModEl) modeling code was used to assess the overall fuel performance of uranium nitride (UN) tri-structural isotropic (TRISO) ceramic fuel in the frame of the design and development of Fully Ceramic Matrix (FCM) fuel. A specific model of a TRISO particle with a UN kernel was developed with PARFUME, and its behavior was assessed under irradiation conditions typical of a Light Water Reactor (LWR). The calculations were used to assess the dimensional changes of the fuel particle layers and kernel, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burn-up. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the pyrolytic carbon (PyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burn-up. These material properties are unknown at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, more effort is needed to determine them before definitive conclusions can be drawn on the applicability of FCM fuel to LWRs.
Production of LEU Fully Ceramic Microencapsulated Fuel for Irradiation Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terrani, Kurt A; Kiggans Jr, James O; McMurray, Jake W
2016-01-01
Fully Ceramic Microencapsulated (FCM) fuel consists of tristructural isotropic (TRISO) fuel particles embedded inside a SiC matrix. This fuel inherently possesses multiple barriers to fission product release, namely the various coating layers in the TRISO fuel particle as well as the dense SiC matrix that hosts these particles. This, coupled with the excellent oxidation resistance of the SiC matrix and the SiC coating layer in the TRISO particle, designates this concept as an accident tolerant fuel (ATF). The FCM fuel takes advantage of uranium nitride kernels instead of the oxide or oxide-carbide kernels used in high temperature gas reactors to enhance heavy metal loading in the highly moderated LWRs. Production of these kernels with appropriate density, coating layer development to produce UN TRISO particles, and consolidation of these particles inside a SiC matrix have been codified thanks to significant R&D supported by the US DOE Fuel Cycle R&D program. Also, surrogate FCM pellets (pellets with zirconia instead of uranium-bearing kernels) have been neutron irradiated, and the stability of the matrix and coating layer under LWR irradiation conditions has been established. Currently the focus is on production of LEU (7.3% U-235 enrichment) FCM pellets to be utilized for irradiation testing. The irradiation is planned at INL's Advanced Test Reactor (ATR). This is a critical step in development of this fuel concept to establish the ability of this fuel to retain fission products under prototypical irradiation conditions.
Ceramography of Irradiated tristructural isotropic (TRISO) Fuel from the AGR-2 Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rice, Francine Joyce; Stempien, John Dennis
2016-09-01
Ceramography was performed on cross sections from four tristructural isotropic (TRISO) coated particle fuel compacts taken from the AGR-2 experiment, which was irradiated between June 2010 and October 2013 in the Advanced Test Reactor (ATR). The fuel compacts examined in this study contained TRISO-coated particles with either uranium oxide (UO2) kernels or uranium oxide/uranium carbide (UCO) kernels that were irradiated to final burnup values between 9.0 and 11.1% FIMA. These examinations are intended to explore kernel and coating morphology evolution during irradiation, including kernel porosity, swelling, and migration, and irradiation-induced coating fracture and separation. Variations in behavior within a specific cross section, which could be related to temperature or burnup gradients within the fuel compact, are also explored. The criteria for categorizing post-irradiation particle morphologies developed for the AGR-1 ceramographic exams were applied to the particles examined in the AGR-2 compacts. Results are compared with similar investigations performed as part of the earlier AGR-1 irradiation experiment. This paper presents the results of the AGR-2 examinations and discusses the key implications for fuel irradiation performance.
Safety Testing of AGR-2 UCO Compacts 5-2-2, 2-2-2, and 5-4-1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunn, John D.; Morris, Robert Noel; Baldwin, Charles A.
2016-08-01
Post-irradiation examination (PIE) is being performed on tristructural-isotropic (TRISO) coated-particle fuel compacts from the Advanced Gas Reactor (AGR) Fuel Development and Qualification Program second irradiation experiment (AGR-2). This effort builds upon the understanding acquired throughout the AGR-1 PIE campaign and is establishing a database for the different AGR-2 fuel designs. The AGR-2 irradiation experiment included TRISO fuel particles coated at BWX Technologies (BWXT) with a 150-mm-diameter engineering-scale coater. Two coating batches were tested in the AGR-2 irradiation experiment. Batch 93085 had 508-μm-diameter uranium dioxide (UO2) kernels. Batch 93073 had 427-μm-diameter UCO kernels; in the UCO kernel design, some of the uranium oxide is converted to uranium carbide during fabrication to provide a getter for oxygen liberated during fission and limit CO production. Fabrication and property data for the AGR-2 coating batches have been compiled and compared to those for AGR-1. The AGR-2 TRISO coatings were most like the AGR-1 Variant 3 TRISO deposited in the 50-mm-diameter ORNL lab-scale coater. In both cases, argon dilution of the hydrogen and methyltrichlorosilane coating-gas mixture used to deposit the SiC produced a finer-grain, more equiaxed SiC microstructure. In addition to the fact that AGR-1 fuel had smaller, 350-μm-diameter UCO kernels, notable differences in the TRISO particle properties included the pyrocarbon anisotropy, which was slightly higher in the particles coated in the engineering-scale coater, and the exposed kernel defect fraction, which was higher for AGR-2 fuel due to the detected presence of particles with impact damage introduced during TRISO particle handling.
Performance modeling of Deep Burn TRISO fuel using ZrC as a load-bearing layer and an oxygen getter
NASA Astrophysics Data System (ADS)
Wongsawaeng, Doonyapong
2010-01-01
The effects of design choices for the TRISO particle fuel were explored in order to determine their contribution to attaining high-burnup in Deep Burn modular helium reactor fuels containing transuranics from light water reactor spent fuel. The new design features were: (1) ZrC coating substituted for the SiC, allowing the fuel to survive higher accident temperatures; (2) pyrocarbon/SiC "alloy" substituted for the inner pyrocarbon coating to reduce layer failure and (3) pyrocarbon seal coat and thin ZrC oxygen getter coating on the kernel to eliminate CO. Fuel performance was evaluated using General Atomics Company's PISA code. The only acceptable design has a 200-μm kernel diameter coupled with at least 150-μm thick, 50% porosity buffer, a 15-μm ZrC getter over a 10-μm pyrocarbon seal coat on the kernel, an alloy inner pyrocarbon, and ZrC substituted for SiC. The code predicted that during a 1600 °C postulated accident at 70% FIMA, the ZrC failure probability is <10⁻⁴.
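The oxygen-getter sizing logic in the abstract above reduces to a stoichiometric balance: enough Zr must be present in the coating to bind the oxygen released by fission over the particle's life. The sketch below illustrates that balance; every numeric input (oxygen release per fission, heavy-metal atom density, ZrC molar volume, kernel size) is an illustrative assumption, not a value from the paper.

```python
import math

N_A = 6.022e23  # Avogadro's number, 1/mol


def min_getter_thickness_um(kernel_d_um, fima, o_per_fission,
                            hm_atoms_per_cm3=2.4e22, zrc_molar_vol_cm3=15.7):
    """Minimum ZrC layer thickness (um) so one Zr atom is available per
    oxygen atom released by fission (1:1 gettering assumed; all defaults
    are placeholder values for illustration)."""
    r_cm = kernel_d_um * 1e-4 / 2.0
    kernel_vol = 4.0 / 3.0 * math.pi * r_cm ** 3
    # Oxygen atoms released over life = fissions * assumed release fraction
    o_atoms = kernel_vol * hm_atoms_per_cm3 * fima * o_per_fission
    zr_moles = o_atoms / N_A
    shell_vol = zr_moles * zrc_molar_vol_cm3  # cm^3 of ZrC required
    # Thin-shell approximation: shell volume = 4*pi*r^2 * thickness
    t_cm = shell_vol / (4.0 * math.pi * r_cm ** 2)
    return t_cm * 1e4


t70 = min_getter_thickness_um(200.0, 0.70, 0.2)  # 70% FIMA case
t35 = min_getter_thickness_um(200.0, 0.35, 0.2)  # half the burnup
```

The required thickness is linear in burnup under these assumptions, which is why deep-burn designs need a thicker getter layer than conventional HTGR burnups would.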
Design Evolution of Hot Isostatic Press Cans for NTP Cermet Fuel Fabrication
NASA Technical Reports Server (NTRS)
Mireles, O. R.; Broadway, J.; Hickman, R.
2014-01-01
Nuclear Thermal Propulsion (NTP) is under consideration for potential use in deep space exploration missions due to desirable performance properties such as a high specific impulse (> 850 seconds). Tungsten (W)-60vol%UO2 cermet fuel elements are under development, with efforts emphasizing fabrication, performance testing, and process optimization to meet NTP service life requirements [1]. Fuel elements incorporate design features that provide redundant protection from crack initiation and crack propagation, which could potentially result in hot hydrogen (H2) reduction of UO2 kernels. Fuel erosion and fission product retention barriers include W-coated UO2 fuel kernels, W-clad internal flow channels, and an external W clad on the fuel element, resulting in a fully encapsulated fuel element design as shown.
NASA Astrophysics Data System (ADS)
Lindemer, T. B.; Voit, S. L.; Silva, C. M.; Besmann, T. M.; Hunt, R. D.
2014-05-01
The US Department of Energy is developing a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with uranium nitride (UN) kernels with diameters near 825 μm. This effort explores factors involved in the conversion of uranium oxide-carbon microspheres into UN kernels. An analysis of previous studies with sufficient experimental details is provided. Thermodynamic calculations were made to predict pressures of carbon monoxide and other relevant gases for several reactions that can be involved in the conversion of uranium oxides and carbides into UN. Uranium oxide-carbon microspheres were heated in a microbalance with an attached mass spectrometer to determine details of calcining and carbothermic conversion in argon, nitrogen, and vacuum. A model was derived from experiments on the vacuum conversion to uranium oxide-carbide kernels. UN-containing kernels were fabricated using this vacuum conversion as part of the overall process. Carbonitride kernels of ∼89% of theoretical density were produced along with several observations concerning the different stages of the process.
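The thermodynamic screening described above predicts CO pressures from reaction Gibbs energies. A generic sketch of that calculation is below; the ΔH/ΔS coefficients are placeholder values for a hypothetical carbothermic reaction, not data from this study, and pure condensed phases are assumed so that the equilibrium constant reduces to a power of the CO pressure.

```python
import math

R = 8.314  # gas constant, J/(mol K)


def p_co_atm(T_K, dH=600e3, dS=320.0, n_co=2):
    """Equilibrium CO pressure (atm) for a reaction releasing n_co moles
    of CO, with dG = dH - T*dS (placeholder coefficients, illustrative
    only). With pure condensed phases, K = exp(-dG/RT) = p_CO**n_co."""
    dG = dH - T_K * dS
    K = math.exp(-dG / (R * T_K))
    return K ** (1.0 / n_co)


# CO pressure rises steeply with temperature, which is why carbothermic
# conversion is driven at high temperature and/or under vacuum.
p_1500 = p_co_atm(1500.0)
p_1900 = p_co_atm(1900.0)
```

The vacuum-conversion route studied in the abstract exploits exactly this sensitivity: lowering the ambient CO partial pressure pulls the conversion reactions forward at a given temperature.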
Conceptual design of QUADRISO particles with europium burnable absorber in HTRs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamo, A.; Nuclear Engineering Division
2010-05-18
In High Temperature Reactors, burnable absorbers are utilized to manage the excess reactivity at the early stage of the fuel cycle. In this study QUADRISO particles are proposed to manage the initial excess reactivity of High Temperature Reactors. The QUADRISO concept synergistically couples the decrease of the burnable poison with the decrease of the fissile materials at the fuel particle level. This mechanism is set up by introducing a burnable poison layer around the fuel kernel in ordinary TRISO particles or by mixing the burnable poison with any of the TRISO coated layers. At the beginning of life, the initial excess reactivity is small because some neutrons are absorbed in the burnable poison and are prevented from entering the fuel kernel. At the end of life, when the absorber is almost depleted, more neutrons stream into the fuel kernel of QUADRISO particles, causing fission reactions. The mechanism has been applied to a prismatic High Temperature Reactor with europium or erbium burnable absorbers, showing a significant reduction in the initial excess reactivity of the core.
A novel concept of QUADRISO particles. Part II: Utilization for excess reactivity control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamo, A.
2010-07-01
In high temperature reactors, burnable absorbers are utilized to manage the excess reactivity at the early stage of the fuel cycle. In this paper QUADRISO particles are proposed to manage the initial excess reactivity of high temperature reactors. The QUADRISO concept synergistically couples the decrease of the burnable poison with the decrease of the fissile materials at the fuel particle level. This mechanism is set up by introducing a burnable poison layer around the fuel kernel in ordinary TRISO particles or by mixing the burnable poison with any of the TRISO coated layers. At the beginning of life, the initial excess reactivity is small because some neutrons are absorbed in the burnable poison and are prevented from entering the fuel kernel. At the end of life, when the absorber is almost depleted, more neutrons stream into the fuel kernel of QUADRISO particles, causing fission reactions. The mechanism has been applied to a prismatic high temperature reactor with europium or erbium burnable absorbers, showing a significant reduction in the initial excess reactivity of the core.
Unconventional Signal Processing Using the Cone Kernel Time-Frequency Representation.
1992-10-30
Wigner-Ville distribution (WVD), the Choi-Williams distribution, and the cone kernel distribution were compared with the spectrograms. Results were... ambiguity function. Figures A-18(c) and (d) are the Wigner-Ville Distribution (WVD) and CK-TFR Doppler maps. In this noiseless case all three exhibit... kernel is the basis for the well-known Wigner-Ville distribution. In A-9(2), the cone kernel defined by Zhao, Atlas and Marks [21 is described
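As a concrete point of reference for the distributions compared in that report, a minimal discrete Wigner-Ville distribution can be sketched as follows. This is the plain WVD, not the cone-kernel variant, and the implementation details (including the factor-2 frequency scaling inherent to the discrete WVD) are my own.

```python
import numpy as np


def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic (complex) signal.
    Returns an N x N array W[k, n]: frequency bin k at time index n.
    Note: a tone at normalized frequency f0 appears at bin ~2*f0*N
    because the lag kernel advances phase at twice the signal rate."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        taumax = min(n, N - 1 - n)
        tau = np.arange(-taumax, taumax + 1)
        # instantaneous autocorrelation r(tau) = x[n+tau] * conj(x[n-tau])
        r = x[n + tau] * np.conj(x[n - tau])
        acf = np.zeros(N, dtype=complex)
        acf[tau % N] = r
        W[:, n] = np.fft.fft(acf).real
    return W


# A pure tone at normalized frequency 0.125 concentrates at bin 2*0.125*64 = 16.
n = np.arange(64)
W = wigner_ville(np.exp(2j * np.pi * 0.125 * n))
```

Smoothed variants such as Choi-Williams and the cone kernel insert a weighting over the lag/Doppler plane before the final transform to suppress the cross-terms this plain form produces for multicomponent signals.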
Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar
2017-01-01
Automatic extraction of protein-protein interaction (PPI) pairs from biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches such as the linear kernel, tree kernel, graph kernel, and combinations of multiple kernels have achieved promising results in the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree kernel (DSTK). DSTK comprises distributed trees with syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a feature-based kernel and DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used for evaluating the performance of our system. Experimental results show that our system achieves a better F-score on all five corpora compared to other state-of-the-art systems. PMID:29099838
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of the recovery rate may underestimate the risk. The study introduces two kinds of distribution models, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, they have a serious defect: they cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds as Moody's new data show. In order to overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation, and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation shows that it can fit the curve of recovery rates of loans and bonds. Thus, using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
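The contrast the authors draw can be reproduced in a few lines: a bimodal sample (here synthetic, standing in for recovery-rate data; the mixture parameters are invented for illustration) defeats a single Beta fit but is captured by a Gaussian kernel density estimate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic bimodal "recovery rates": most outcomes near 0 or near 1,
# mimicking the loans/bonds pattern described in the abstract.
sample = np.concatenate([rng.beta(2, 20, 500), rng.beta(20, 2, 500)])

kde = stats.gaussian_kde(sample)  # nonparametric; can represent both modes
a, b, loc, scale = stats.beta.fit(sample, floc=0, fscale=1)  # single Beta fit

# The KDE keeps both humps: density near the modes exceeds the valley.
d_low, d_mid, d_high = kde([0.08, 0.5, 0.92])
```

A single Beta density is necessarily unimodal (or U-shaped with spurious boundary spikes), so the fitted `(a, b)` cannot reproduce two interior humps, which is exactly the defect the study attributes to the CreditMetrics-style models.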
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindemer, Terrence; Voit, Stewart L; Silva, Chinthaka M
2014-01-01
The U.S. Department of Energy is considering a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with large, dense uranium nitride (UN) kernels. This effort explores many factors involved in using gel-derived uranium oxide-carbon microspheres to make large UN kernels. An analysis of recent studies with sufficient experimental details is provided. Extensive thermodynamic calculations are used to predict carbon monoxide and other pressures for several different reactions that may be involved in the conversion of uranium oxides and carbides to UN. Experimentally, the method for making the gel-derived microspheres is described. These were used in a microbalance with an attached mass spectrometer to determine details of carbothermic conversion in argon, nitrogen, or vacuum. A quantitative model is derived from experiments for vacuum conversion to a uranium oxide-carbide kernel.
Novel near-infrared sampling apparatus for single kernel analysis of oil content in maize.
Janni, James; Weinstock, B André; Hagen, Lisa; Wright, Steve
2008-04-01
A method of rapid, nondestructive chemical and physical analysis of individual maize (Zea mays L.) kernels is needed for the development of high value food, feed, and fuel traits. Near-infrared (NIR) spectroscopy offers a robust nondestructive method of trait determination. However, traditional NIR bulk sampling techniques cannot be applied successfully to individual kernels. Obtaining optimized single kernel NIR spectra for applied chemometric predictive analysis requires a novel sampling technique that can account for the heterogeneous forms, morphologies, and opacities exhibited in individual maize kernels. In this study such a novel technique is described and compared to less effective means of single kernel NIR analysis. Results of the application of a partial least squares (PLS) derived model for predictive determination of percent oil content per individual kernel are shown.
Progress in understanding fission-product behaviour in coated uranium-dioxide fuel particles
NASA Astrophysics Data System (ADS)
Barrachin, M.; Dubourg, R.; Kissane, M. P.; Ozrin, V.
2009-03-01
Supported by results of calculations performed with two analytical tools (MFPR, which takes account of physical and chemical mechanisms in calculating the chemical forms and physical locations of fission products in UO2, and MEPHISTA, a thermodynamic database), this paper presents an investigation of some important aspects of the microstructural and chemical evolution of irradiated TRISO particles. The following main conclusions can be identified with respect to irradiated TRISO fuel: first, the relatively low oxygen potential within the fuel particles with respect to PWR fuel leads to chemical speciation that is not typical of PWR fuels, e.g., the relatively volatile behaviour of barium; secondly, the safety-critical fission product caesium is released from the urania kernel, but the buffer and pyrolytic-carbon coatings could form an important chemical barrier to further migration (i.e., formation of carbides). Finally, significant releases of fission gases from the urania kernel are expected even in nominal conditions.
Time-frequency distributions for propulsion-system diagnostics
NASA Astrophysics Data System (ADS)
Griffin, Michael E.; Tulpule, Sharayu
1991-12-01
The Wigner distribution and its smoothed versions, i.e., Choi-Williams and Gaussian kernels, are evaluated for propulsion system diagnostics. The approach is intended for off-line kernel design by using the ambiguity domain to select the appropriate Gaussian kernel. The features produced by the Wigner distribution and its smoothed versions correlate remarkably well with documented failure indications. The selection of the kernel on the other hand is very subjective for our unstructured data.
NASA Astrophysics Data System (ADS)
Liu, Xiao; Cai, Zun; Tong, Yiheng; Zheng, Hongtao
2017-08-01
Large Eddy Simulation (LES) and experiment were employed to investigate the transient ignition and flame propagation process in a rearwall-expansion cavity scramjet combustor using combined fuel injection schemes. The compressible supersonic solver and three ethylene combustion mechanisms were first validated against experimental data, and the results showed reasonably good agreement. A fuel injection scheme combining transverse and direct injectors in the cavity provides a favorable mixture distribution and could achieve a successful ignition. Four stages are illustrated in detail from both experiment and LES. After forced ignition in the cavity, the initial flame kernel propagates upstream towards the cavity front edge and ignites the mixture, which acts as a continuous pilot flame, and then propagates downstream along the cavity shear layer rapidly to the combustor exit. A cavity shear-layer flame stabilization mode can be concluded from the heat release rate and local high-temperature distribution during the combustion process.
Novel characterization method of impedance cardiography signals using time-frequency distributions.
Escrivá Muñoz, Jesús; Pan, Y; Ge, S; Jensen, E W; Vallverdú, M
2018-03-16
The purpose of this document is to describe a methodology to select the most adequate time-frequency distribution (TFD) kernel for the characterization of impedance cardiography signals (ICG). The predominant ICG beat was extracted from a patient and was synthetized using time-frequency variant Fourier approximations. These synthetized signals were used to optimize several TFD kernels according to a performance maximization. The optimized kernels were tested for noise resistance on a clinical database. The resulting optimized TFD kernels are presented with their performance calculated using newly proposed methods. The procedure explained in this work showcases a new method to select an appropriate kernel for ICG signals and compares the performance of different time-frequency kernels found in the literature for the case of ICG signals. We conclude that, for ICG signals, the performance (P) of the spectrogram with either Hanning or Hamming windows (P = 0.780) and the extended modified beta distribution (P = 0.765) provided similar results, higher than the rest of analyzed kernels. Graphical abstract Flowchart for the optimization of time-frequency distribution kernels for impedance cardiography signals.
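For reference, the best-performing kernels in that comparison, the Hanning- and Hamming-windowed spectrograms, differ only in the analysis window. A minimal setup (a generic low-frequency test tone, not ICG data; the sampling rate is an assumed value) shows the pattern:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 250.0                       # assumed sampling rate, Hz
t = np.arange(0, 4.0, 1.0 / fs)
x = np.sin(2 * np.pi * 1.2 * t)  # 1.2 Hz tone, roughly cardiac-band

peaks = {}
for window in ("hann", "hamming"):
    f, tt, Sxx = spectrogram(x, fs=fs, window=window, nperseg=512)
    # Dominant frequency in a middle time slice should sit near 1.2 Hz
    # for either window; they differ mainly in sidelobe behavior.
    peaks[window] = f[np.argmax(Sxx[:, Sxx.shape[1] // 2])]
```

For clean narrowband content the two windows give nearly identical peak estimates, which is consistent with the near-equal performance scores reported in the abstract; differences show up in leakage and noise robustness rather than peak location.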
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Sonat Sen; Michael A. Pope; Abderrafi M. Ougouag
2012-04-01
The tri-isotropic (TRISO) fuel developed for High Temperature Reactors is known for its extraordinary fission product retention capabilities [1]. Recently, the possibility of extending the use of TRISO particle fuel to Light Water Reactor (LWR) technology, and perhaps other reactor concepts, has received significant attention [2]. The Deep Burn project [3] currently focuses on once-through burning of transuranic fissile and fissionable isotopes (TRU) in LWRs. The fuel form for this purpose is called Fully-Ceramic Micro-encapsulated (FCM) fuel, a concept that borrows the TRISO fuel particle design from high temperature reactor technology but uses SiC as a matrix material rather than graphite. In addition, FCM fuel may also use a cladding made of a variety of possible materials, again including SiC as an admissible choice. The FCM fuel used in the Deep Burn (DB) project showed promising results in terms of fission product retention at high burnup values and during high-temperature transients. In the case of DB applications, the fuel loading within a TRISO particle is constituted entirely of fissile or fissionable isotopes. Consequently, the fuel was shown to be capable of achieving reasonable burnup levels and cycle lengths, especially in the case of mixed cores (with coexisting DB and regular LWR UO2 fuels). In contrast, as shown below, the use of UO2-only FCM fuel in a LWR results in considerably shorter cycle length when compared to current-generation ordinary LWR designs. Indeed, the constraint of limited space availability for heavy metal loading within the TRISO particles of FCM fuel and the constraint of low (i.e., below 20 w/o) 235U enrichment combine to result in shorter cycle lengths compared to ordinary LWRs if typical LWR power densities are also assumed and if typical TRISO particle dimensions and UO2 kernels are specified.
The primary focus of this summary is on using TRISO particles with up to 20 w/o enriched uranium kernels loaded in Pressurized Water Reactor (PWR) assemblies. In addition to consideration of this 'naive' use of TRISO fuel in LWRs, several refined options are briefly examined and others are identified for further consideration, including the use of advanced, high-density fuel forms and larger kernel diameters and TRISO packing fractions. The combination of 800 μm diameter kernels of 20% enriched UN and 50% TRISO packing fraction yielded reactivity sufficient to achieve comparable burnup to present-day PWR fuel.
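The heavy-metal-loading constraint driving this summary is pure geometry: only the kernel fraction of each particle, times the particle packing fraction, carries uranium. A sketch of that bookkeeping is below; the combined coating thickness and the heavy-metal density of the kernel are assumed placeholder values, not the study's design numbers.

```python
def hm_loading_g_per_cm3(kernel_d_um, packing, coating_t_um=185.0,
                         kernel_hm_density=13.0):
    """Heavy-metal mass per cm^3 of fuel compact for TRISO fuel.
    coating_t_um: combined buffer+PyC+SiC thickness (assumed placeholder).
    kernel_hm_density: g of heavy metal per cm^3 of kernel (UN-like
    placeholder value)."""
    r_kernel = kernel_d_um / 2.0
    r_particle = r_kernel + coating_t_um
    kernel_frac_of_particle = (r_kernel / r_particle) ** 3
    return packing * kernel_frac_of_particle * kernel_hm_density


# Larger kernels and higher packing fraction both raise the loading,
# which is the lever the summary identifies for closing the cycle-length gap.
small = hm_loading_g_per_cm3(500.0, 0.35)
large = hm_loading_g_per_cm3(800.0, 0.50)
```

Because the coating thickness is roughly fixed, growing the kernel raises the kernel-to-particle volume ratio cubically, which is why the 800 μm UN kernel at 50% packing recovers PWR-comparable reactivity.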
Surface engineering of low enriched uranium-molybdenum
NASA Astrophysics Data System (ADS)
Leenaers, A.; Van den Berghe, S.; Detavernier, C.
2013-09-01
Recent attempts to qualify the LEU(Mo) dispersion plate fuel with Si addition to the Al matrix up to high power and burn-up have not yet been successful due to unacceptable fuel plate swelling at a local burn-up above 60% 235U. The root cause of the failures is clearly related directly to the formation of the U(Mo)-Al(Si) interaction layer. Excessive formation of these layers around the fuel kernels severely weakens the local mechanical integrity and eventually leads to pillowing of the plate. In 2008, SCK·CEN launched the SELENIUM U(Mo) dispersion fuel development project in an attempt to find an alternative way to reduce the interaction between U(Mo) fuel kernels and the Al matrix to a significantly lower level: by applying a coating on the U(Mo) kernels. Two fuel plates containing 8 gU/cc U(Mo) coated with respectively 600 nm Si and 1000 nm ZrN in a pure Al matrix were manufactured. These plates were irradiated in the BR2 reactor up to a maximum heat flux of 470 W/cm2 until a maximum local burn-up of approximately 70% 235U (˜50% plate average) was reached. Awaiting the PIE results, the advantages of applying a coating are discussed in this paper through annealing experiments and TRIM (the Transport of Ions in Matter) calculations.
Fission Product Inventory and Burnup Evaluation of the AGR-2 Irradiation by Gamma Spectrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harp, Jason Michael; Stempien, John Dennis; Demkowicz, Paul Andrew
Gamma spectrometry has been used to evaluate the burnup and fission product inventory of different components from the US Advanced Gas Reactor Fuel Development and Qualification Program's second TRISO-coated particle fuel irradiation test (AGR-2). TRISO fuel in this irradiation included both uranium carbide/uranium oxide (UCO) kernels and uranium oxide (UO2) kernels. Four of the six capsules contained fuel from the US Advanced Gas Reactor program, and only those capsules will be discussed in this work. The inventories of gamma-emitting fission products from the fuel compacts, graphite compact holders, graphite spacers, and test capsule shell were evaluated. These data were used to measure the fractional release of fission products such as Cs-137, Cs-134, Eu-154, Ce-144, and Ag-110m from the compacts. The fraction of Ag-110m retained in the compacts ranged from 1.8% to full retention. Additionally, the activities of the radioactive cesium isotopes (Cs-134 and Cs-137) have been used to evaluate the burnup of all US TRISO fuel compacts in the irradiation. The experimental burnup evaluations compare favorably with burnups predicted from physics simulations. Predicted burnups range from 7.26 to 13.15% fissions per initial metal atom (FIMA) for UCO compacts and 9.01 to 10.69% FIMA for UO2 compacts. Measured burnup ranged from 7.3 to 13.1% FIMA for UCO compacts and 8.5 to 10.6% FIMA for UO2 compacts. Results from gamma emission computed tomography performed on compacts and graphite holders, which reveal the distribution of different fission products in a component, will also be discussed. Gamma tomography of graphite holders was also used to locate the position of TRISO fuel particles suspected of having silicon carbide layer failures that led to in-pile cesium release.
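The cesium-based burnup evaluation above reduces, in its simplest form, to a decay-corrected atom count divided by a fission yield and the initial heavy-metal inventory. The sketch below is a deliberately simplified single-isotope version (the program's actual evaluation accounts for in-pile decay, multiple fissioning nuclides, and both cesium isotopes); the activity and inventory numbers are illustrative, and the yield is an approximate literature value for U-235 thermal fission.

```python
import math

HALF_LIFE_CS137_S = 30.08 * 365.25 * 24 * 3600  # Cs-137 half-life, ~30.08 y
Y_CS137 = 0.062  # approx. cumulative Cs-137 yield per U-235 thermal fission


def fima_from_cs137(activity_bq, cool_time_s, n_hm_initial):
    """Burnup (FIMA fraction) inferred from a measured Cs-137 activity.
    Simplifications: single fissioning nuclide, no in-pile decay handling."""
    lam = math.log(2.0) / HALF_LIFE_CS137_S
    n_cs_now = activity_bq / lam                       # atoms at measurement
    n_cs_eol = n_cs_now * math.exp(lam * cool_time_s)  # back-decay to end of irradiation
    fissions = n_cs_eol / Y_CS137
    return fissions / n_hm_initial


two_years = 2 * 365.25 * 24 * 3600
b = fima_from_cs137(activity_bq=5.0e9, cool_time_s=two_years,
                    n_hm_initial=1.0e21)
```

Cs-137's long half-life and near-complete retention in intact compacts are what make it the workhorse burnup monitor; the decay correction matters because the measurement happens years after the irradiation ends.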
Computational investigation of intense short-wavelength laser interaction with rare gas clusters
NASA Astrophysics Data System (ADS)
Bigaouette, Nicolas
Current Very High Temperature Reactor designs incorporate TRi-structural ISOtropic (TRISO) particle fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel by dropping a cold precursor solution into a column of hot trichloroethylene (TCE). The temperature difference drives the liquid precursor solution to precipitate the metal solution into gel spheres before reaching the bottom of a production column. Over time, gelation byproducts inhibit complete gelation and the TCE must be purified or discarded. The resulting mixed-waste stream is expensive to dispose of or recycle, and changing the forming fluid to a non-hazardous alternative could greatly improve the economics of kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacements. The physical properties of the alternatives were measured as a function of temperature between 25 °C and 80 °C. Calculated terminal velocities and heat transfer rates provided an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane were selected for further testing, and surrogate yttria-stabilized zirconia (YSZ) kernels were produced using these selected fluids. The kernels were characterized for density, geometry, composition, and crystallinity and compared to a control group of kernels produced in silicone oil. Production in 1-bromotetradecane showed positive results, producing dense (93.8 %TD) and spherical (1.03 aspect ratio) kernels, but proper gelation did not occur in the other alternative forming fluids. With many of the YSZ kernels not properly gelling within the length of the column, this project further investigated the heat transfer properties of the forming fluids and precursor solution. 
A sensitivity study revealed that the heat transfer properties of the precursor solution have the strongest impact on gelation time. A COMSOL heat transfer model estimated an effective thermal diffusivity range for the YSZ precursor solution of 1.13×10⁻⁸ m²/s to 3.35×10⁻⁸ m²/s, an order of magnitude smaller than the value used in previous studies. 1-bromotetradecane is recommended for further investigation with the production of uranium-based kernels.
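The column-height reasoning above (droplet terminal velocity multiplied by a conductive gelation time of order R²/α) can be sketched numerically. All property values below are illustrative assumptions, not measured data from the study; only the thermal diffusivity is taken from the reported 1.13×10⁻⁸ to 3.35×10⁻⁸ m²/s range.

```python
# Sketch: order-of-magnitude estimate of droplet terminal velocity and the
# column height needed for gel-sphere formation. Droplet size, densities,
# and viscosity are assumed values for illustration only.
import math

g = 9.81            # m/s^2
d = 1.0e-3          # droplet diameter, m (assumed)
rho_d = 1250.0      # precursor droplet density, kg/m^3 (assumed)
rho_f = 1200.0      # forming-fluid density, kg/m^3 (assumed)
mu_f = 5.0e-3       # forming-fluid viscosity, Pa*s (assumed)
alpha = 2.0e-8      # effective thermal diffusivity, m^2/s (within the
                    # 1.13e-8 to 3.35e-8 range reported above)

# Stokes-law terminal velocity, a reasonable first estimate when the
# Reynolds number (checked below) is of order 1 or less
v_t = g * d**2 * (rho_d - rho_f) / (18.0 * mu_f)
Re = rho_f * v_t * d / mu_f

# Crude gelation time: conductive heating time scale t ~ R^2 / alpha
t_gel = (d / 2.0)**2 / alpha

# Column must be at least this tall for the droplet to gel before landing
height = v_t * t_gel
print(f"v_t = {v_t:.4f} m/s, Re = {Re:.2f}, t_gel = {t_gel:.1f} s, H = {height:.3f} m")
```

The estimate is deliberately crude: it ignores droplet acceleration, internal circulation, and the temperature dependence of the fluid properties that the study measured.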
A survey of kernel-type estimators for copula and their applications
NASA Astrophysics Data System (ADS)
Sumarjaya, I. W.
2017-10-01
Copulas have been widely used to model nonlinear dependence structures. Main applications of copulas include areas such as finance, insurance, hydrology, and rainfall modeling, to name but a few. The flexibility of copulas allows researchers to model dependence structures beyond the Gaussian distribution. Basically, a copula is a function that couples a multivariate distribution function to its one-dimensional marginal distribution functions. In general, there are three methods to estimate copulas: parametric, nonparametric, and semiparametric. In this article we survey kernel-type estimators for copulas, such as the mirror reflection kernel, the beta kernel, the transformation method, and the local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, despite variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
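Of the estimators surveyed, the mirror reflection kernel is the simplest to sketch: each pseudo-observation is reflected about the boundaries of the unit square to remove the boundary bias of a plain product-Gaussian KDE. The sample data and bandwidth below are illustrative, not from the article.

```python
# Sketch of a mirror-reflection kernel estimator for a bivariate copula
# density on [0,1]^2. Bandwidth and toy data are illustrative assumptions.
import numpy as np

def mirror_reflection_kde(u, v, U, V, h):
    """Estimate the copula density c(u, v) from pseudo-observations (U, V).

    Each point is reflected about 0 and 1 in both coordinates (9 copies in
    total), which corrects the boundary bias of a standard Gaussian KDE.
    """
    dens = 0.0
    for ru in (U, -U, 2 - U):          # reflections in the u-coordinate
        for rv in (V, -V, 2 - V):      # reflections in the v-coordinate
            dens += np.exp(-0.5 * (((u - ru) / h) ** 2 + ((v - rv) / h) ** 2))
    return dens.sum() / (len(U) * 2 * np.pi * h**2)

rng = np.random.default_rng(0)
# toy positively dependent sample -> pseudo-observations via ranks
x = rng.normal(size=500)
y = 0.7 * x + rng.normal(size=500)
U = (np.argsort(np.argsort(x)) + 0.5) / len(x)
V = (np.argsort(np.argsort(y)) + 0.5) / len(y)

print(mirror_reflection_kde(0.5, 0.5, U, V, h=0.1))
```

For positively dependent data the estimated density should concentrate along the diagonal of the unit square.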
Low temperature chemical processing of graphite-clad nuclear fuels
Pierce, Robert A.
2017-10-17
A reduced-temperature method for treatment of a fuel element is described. The method includes molten salt treatment of a fuel element with a nitrate salt. The nitrate salt can oxidize the outer graphite matrix of a fuel element. The method can also include reduced temperature degradation of the carbide layer of a fuel element and low temperature solubilization of the fuel in a kernel of a fuel element.
Neutronics Studies of Uranium-bearing Fully Ceramic Micro-encapsulated Fuel for PWRs
George, Nathan M.; Maldonado, G. Ivan; Terrani, Kurt A.; ...
2014-12-01
Our study evaluated the neutronics and some of the fuel cycle characteristics of using uranium-based fully ceramic microencapsulated (FCM) fuel in a pressurized water reactor (PWR). Specific PWR lattice designs with FCM fuel have been developed that are expected to achieve higher specific burnup levels in the fuel while also increasing the tolerance to reactor accidents. The SCALE software system was the primary analysis tool used to model the lattice designs. A parametric study was performed by varying tristructural isotropic particle design features (e.g., kernel diameter, coating layer thicknesses, and packing fraction) to understand the impact on reactivity and resulting operating cycle length. Moreover, to match the lifetime of an 18-month PWR cycle, the FCM particle fuel design required roughly 10% additional fissile material at beginning of life compared with that of a standard uranium dioxide (UO2) rod. Uranium mononitride proved to be a favorable fuel for the fuel kernel due to its higher heavy metal loading density compared with UO2. The FCM fuel designs evaluated maintain acceptable neutronics design features for fuel lifetime, lattice peaking factors, and nonproliferation figure of merit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerczak, Tyler J.; Smith, Kurt R.; Petrie, Christian M.
Tristructural-isotropic (TRISO) coated particle fuel is a promising advanced fuel concept consisting of a spherical fuel kernel made of uranium oxide and uranium carbide, surrounded by a porous carbonaceous buffer layer and successive layers of dense inner pyrolytic carbon (IPyC), silicon carbide (SiC) deposited by chemical vapor deposition, and dense outer pyrolytic carbon (OPyC). This fuel concept is being considered for advanced reactor applications such as high temperature gas-cooled reactors (HTGRs) and molten salt reactors (MSRs), as well as for accident-tolerant fuel for light water reactors (LWRs). Development and implementation of TRISO fuel for these reactor concepts support the US Department of Energy (DOE) Office of Nuclear Energy mission to promote safe, reliable nuclear energy that is sustainable and environmentally friendly. During operation, the SiC layer serves as the primary barrier to metallic fission products and actinides not retained in the kernel. It has been observed that certain fission products are released from TRISO fuel during operation, notably Ag, Eu, and Sr [1]. Release of these radioisotopes causes safety and maintenance concerns.
Pyrolytic carbon-coated nuclear fuel
Lindemer, Terrence B.; Long, Jr., Ernest L.; Beatty, Ronald L.
1978-01-01
An improved nuclear fuel kernel having at least one pyrolytic carbon coating and a silicon carbide layer is provided, in which extensive interaction of fission product lanthanides with the silicon carbide layer is avoided by providing sufficient UO2 to maintain the lanthanides as oxides during in-reactor use of said fuel.
Cid, Jaime A; von Davier, Alina A
2015-05-01
Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
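The continuization step described above can be sketched directly: a discrete score distribution is smoothed with either a Gaussian or an Epanechnikov kernel. The bandwidth and the toy uniform score distribution below are illustrative choices, not the penalty-optimized bandwidth used in operational kernel equating.

```python
# Sketch: continuizing a discrete test-score distribution with a Gaussian
# vs. an Epanechnikov kernel. Scores, probabilities, and bandwidth are toy
# values for illustration.
import numpy as np

def gaussian_k(t):
    return np.exp(-0.5 * t**2) / np.sqrt(2 * np.pi)

def epanechnikov_k(t):
    # compact support on [-1, 1], which limits boundary spill-over
    return np.where(np.abs(t) <= 1, 0.75 * (1 - t**2), 0.0)

def continuized_pdf(x, scores, probs, h, kernel):
    """Kernel-smoothed density of a discrete score distribution."""
    return sum(p * kernel((x - s) / h) / h for s, p in zip(scores, probs))

scores = np.arange(0, 11)          # a 0..10 point test
probs = np.ones(11) / 11           # toy uniform score probabilities
x = np.linspace(-2, 12, 561)

for k in (gaussian_k, epanechnikov_k):
    pdf = continuized_pdf(x, scores, probs, h=0.8, kernel=k)
    mass = pdf.sum() * (x[1] - x[0])   # Riemann sum; should be ~1
    print(k.__name__, round(mass, 4))
```

Note how the Epanechnikov kernel's compact support keeps all mass near the score range, while the Gaussian kernel leaks a small amount of mass into the tails, which is the boundary-bias issue Study II addresses.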
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Rooyen, Isabella Johanna; Demkowicz, Paul Andrew; Riesterer, Jessica Lori
2012-12-01
The electron microscopic examination of selected irradiated TRISO coated particles from fuel compact 6-3-2 of the AGR-1 experiment is presented in this report. Compact 6-3-2 refers to the compact in Capsule 6 at level 3 of Stack 2. The fuel used in the Capsule 6 compacts is called the "baseline" fuel, as it was fabricated with refined coating process conditions matching those used to fabricate historic German fuel, chosen for its excellent irradiation performance with UO2 kernels. The AGR-1 fuel, however, is made of low-enriched uranium oxycarbide (UCO). Kernel diameters are approximately 350 µm with a U-235 enrichment of approximately 19.7%. Compact 6-3-2 was irradiated to 11.3% FIMA compact-average burnup with a time-average, volume-average temperature of 1070.2°C and a compact-average fast fluence of 2.38E21 n/cm².
An Agent-Based Modeling Framework and Application for the Generic Nuclear Fuel Cycle
NASA Astrophysics Data System (ADS)
Gidden, Matthew J.
Key components of a novel methodology and implementation of an agent-based, dynamic nuclear fuel cycle simulator, Cyclus, are presented. The nuclear fuel cycle is a complex, physics-dependent supply chain. To date, existing dynamic simulators have not treated constrained fuel supply, time-dependent, isotopic-quality based demand, or fuel fungibility particularly well. Utilizing an agent-based methodology that incorporates sophisticated graph theory and operations research techniques can overcome these deficiencies. This work describes a simulation kernel and agents that interact with it, highlighting the Dynamic Resource Exchange (DRE), the supply-demand framework at the heart of the kernel. The key agent-DRE interaction mechanisms are described, which enable complex entity interaction through the use of physics and socio-economic models. The translation of an exchange instance to a variant of the Multicommodity Transportation Problem, which can be solved feasibly or optimally, follows. An extensive investigation of solution performance and fidelity is then presented. Finally, recommendations for future users of Cyclus and the DRE are provided.
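The reduction to a transportation problem can be illustrated on a toy instance: suppliers with capacities, consumers with demands, and arc costs derived from trade preferences. The trader names and numbers below are invented for illustration, and Cyclus itself dispatches such instances to an LP/MILP solver rather than the brute-force search used here.

```python
# Sketch: a toy min-cost transportation problem of the kind the Dynamic
# Resource Exchange reduces to. All names and values are illustrative.
from itertools import product

supply = {"mine": 3, "recycle": 2}      # units of fuel offered (assumed)
demand = {"lwr": 4, "htgr": 1}          # units of fuel requested (assumed)
# arc cost ~ inverse of preference between trader pairs (assumed values)
cost = {("mine", "lwr"): 2, ("mine", "htgr"): 4,
        ("recycle", "lwr"): 1, ("recycle", "htgr"): 3}

arcs = list(cost)
best = None
# brute-force search over integer flows on each arc (fine at toy scale)
for flows in product(*(range(min(supply[s], demand[d]) + 1) for s, d in arcs)):
    f = dict(zip(arcs, flows))
    ok_supply = all(sum(f[a] for a in arcs if a[0] == s) <= supply[s] for s in supply)
    ok_demand = all(sum(f[a] for a in arcs if a[1] == d) == demand[d] for d in demand)
    if ok_supply and ok_demand:
        c = sum(f[a] * cost[a] for a in arcs)
        if best is None or c < best[0]:
            best = (c, f)

print(best)   # minimum total cost and one optimal flow assignment
```

At realistic scale the same structure is handed to an optimization solver; the enumeration above only makes the supply/demand constraints and the cost objective concrete.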
Bazargan, Alireza; Rough, Sarah L; McKay, Gordon
2018-04-01
Palm kernel shell biochars (PKSB) ejected as residues from a gasifier have been used for solid fuel briquette production. With this approach, palm kernel shells can be used for energy production twice: first, by producing rich syngas during gasification; second, by compacting the leftover residues from gasification into high calorific value briquettes. Herein, the process parameters for the manufacture of PKSB biomass briquettes via compaction are optimized. Two possible optimum process scenarios are considered. In the first, the compaction speed is increased from 0.5 to 10 mm/s, the compaction pressure is decreased from 80 MPa to 40 MPa, the retention time is reduced from 10 s to zero, and the starch binder content of the briquette is halved from 0.1 to 0.05 kg/kg. With these adjustments, the briquette production rate increases by more than 20-fold; hence capital and operational costs can be reduced and the service life of compaction equipment can be increased. The resulting product satisfactorily passes tensile (compressive) crushing strength and impact resistance tests. The second scenario involves reducing the starch weight content to 0.03 kg/kg, while reducing the compaction pressure to a value no lower than 60 MPa. Overall, in both cases, the PKSB biomass briquettes show excellent potential as a solid fuel, with calorific values on par with good-quality coal. CHNS: carbon, hydrogen, nitrogen, sulfur; FFB: fresh fruit bunch(es); HHV: higher heating value [J/kg]; LHV: lower heating value [J/kg]; PKS: palm kernel shell(s); PKSB: palm kernel shell biochar(s); POME: palm oil mill effluent; RDF: refuse-derived fuel; TGA: thermogravimetric analysis.
Analysis and Development of A Robust Fuel for Gas-Cooled Fast Reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knight, Travis W.
2010-01-31
The focus of this effort was on the development of an advanced fuel for gas-cooled fast reactor (GFR) applications. This composite design is based on carbide fuel kernels dispersed in a ZrC matrix. The choice of ZrC is based on its high temperature properties, good thermal conductivity, and improved retention of fission products to temperatures beyond those of traditional SiC-based coated particle fuels. A key component of this study was the development and understanding of advanced fabrication techniques for GFR fuels that have the potential to reduce minor actinide (MA) losses during fabrication owing to their higher vapor pressures and greater volatility. The major accomplishments of this work were the study of combustion synthesis methods for fabrication of the ZrC matrix, fabrication of high density UC electrodes for use in the rotating electrode process, production of UC particles by the rotating electrode method, integration of UC kernels in the ZrC matrix, and the full characterization of each component. Major accomplishments in the near term have been the greater characterization of the UC kernels produced by the rotating electrode method and their condition following integration in the composite (ZrC matrix) after the short-duration but high-temperature combustion synthesis process. This work has generated four journal publications, one conference proceeding paper, and one additional journal paper submitted for publication (under review). The greater significance of the work is that it achieved an objective of the DOE Generation IV (GenIV) roadmap for GFR fuel: the demonstration of a composite carbide fuel with 30% fuel by volume. This near-term accomplishment is even more significant given the expected time frame for implementation of the GFR in the years 2030-2050 or beyond.
Irradiation performance of HTGR fuel rods in HFIR experiments HRB-7 and -8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valentine, K.H.; Homan, F.J.; Long, E.L. Jr.
1977-05-01
The HRB-7 and -8 experiments were designed as a comprehensive test of mixed thorium-uranium oxide fissile particles with Th:U ratios from 0 to 8 for HTGR recycle application. In addition, fissile particles derived from Weak-Acid Resin (WAR) were tested as a potential backup type of fissile particle for HTGR recycle. These experiments were conducted at two temperatures (1250 and 1500°C) to determine the influence of operating temperature on the performance parameters studied. The minor objectives were comparison of advanced coating designs where ZrC replaced SiC in the Triso design, testing of fuel coated in laboratory-scale equipment against fuel coated in production-scale coaters, comparison of the performance of ²³³U-bearing particles with that of ²³⁵U-bearing particles, comparison of the performance of Biso coatings with Triso coatings for particles containing the same type of kernel, and testing of multijunction tungsten-rhenium thermocouples. All objectives were accomplished. As a result of these experiments the mixed thorium-uranium oxide fissile kernel was replaced by a WAR-derived particle in the reference recycle design. A tentative decision to make this change had been reached before the HRB-7 and -8 capsules were examined, and the results of the examination confirmed the accuracy of the previous decision. Even maximum dilution (Th:U approximately equal to 8) of the mixed thorium-uranium oxide kernel was insufficient to prevent amoeba migration of the kernels at rates that are unacceptable in a large HTGR. Other results showed the performance of ²³³U-bearing particles to be identical to that of ²³⁵U-bearing particles, the performance of fuel coated in production-scale equipment to be at least as good as that of fuel coated in laboratory-scale coaters, the performance of ZrC coatings to be very promising, and Biso coatings to be inferior to Triso coatings with respect to fission product retention.
1994-04-18
because they represent a microkernel and a monolithic kernel approach to MLS operating system issues. TMACH is based on MACH, a distributed operating... the operating system is based on a microkernel design or a monolithic kernel design. This distinction requires some caution since monolithic operating... are provided by user-level processes, in contrast to standard UNIX, which has a large monolithic kernel.
Kernel-aligned multi-view canonical correlation analysis for image recognition
NASA Astrophysics Data System (ADS)
Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao
2016-09-01
Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve this problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have good discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets have demonstrated the effectiveness of our proposed method.
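As background for the multi-view kernel variant described above, plain two-view CCA can be sketched via whitening and an SVD. This is standard CCA, not the paper's KAMCCA algorithm, and the toy data are invented for illustration.

```python
# Sketch: plain two-view canonical correlation analysis (CCA) via SVD.
# Shown only as background; the paper's KAMCCA method extends this idea
# to multiple views and multiple aligned kernels.
import numpy as np

def cca(X, Y, reg=1e-6):
    """Return the canonical correlations between row-sample matrices X, Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # whiten each view; the singular values of the whitened cross-covariance
    # are the canonical correlations
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    s = np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

rng = np.random.default_rng(1)
z = rng.normal(size=(200, 1))                  # shared latent signal
X = np.hstack([z, rng.normal(size=(200, 2))])  # view 1: signal + noise dims
Y = np.hstack([z + 0.1 * rng.normal(size=(200, 1)), rng.normal(size=(200, 2))])
print(cca(X, Y))   # first canonical correlation should be near 1
```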
Cavity ignition of liquid kerosene in supersonic flow with a laser-induced plasma.
Li, Xiaohui; Yang, Leichao; Peng, Jiangbo; Yu, Xin; Liang, Jianhan; Sun, Rui
2016-10-31
We have for the first time achieved cavity ignition and sustained combustion of liquid kerosene in a supersonic flow of Mach number 2.52 using a laser-induced plasma (LIP) in a model supersonic combustor equipped with dual cavities in tandem as flameholders. Liquid kerosene at ambient temperature is injected from the front wall of the upstream cavity, while ignition is conducted in both cavities. High-speed chemiluminescence imaging shows that the flame kernel initiated in the downstream cavity can propagate against the flow into the upstream cavity and establish full sustained combustion. Based on the qualitative distribution of the kerosene vapor in the cavity, obtained using the kerosene planar laser-induced fluorescence technique, we find that fuel atomization and evaporation, together with the local hydrodynamic and mixing conditions in the vicinity of the ignition position and in the leading edge area of the cavity, have combined effects on the flame kernel evolution and the eventual ignition results.
NASA Astrophysics Data System (ADS)
Dora, Nagaraju; Jothi, T. J. Sarvoththama
2018-05-01
The present study investigates the effectiveness of using di-ethyl ether (DEE) as a fuel additive on engine performance and emissions. Experiments are carried out in a single cylinder four stroke diesel engine at constant speed. Two different fuels, namely liquefied petroleum gas (LPG) and palm kernel methyl ester (PKME), are used as primary fuels with DEE as the fuel additive. LPG flow rates of 0.6 and 0.8 kg/h are considered, and the flow rate of DEE is varied to maintain constant engine speed. In the case of PKME, the fuel is blended with diesel in a diesel-to-PKME ratio of 80:20, and DEE is added in volumetric proportions of 1 and 2%. Results indicate that for the engine operating in LPG-DEE mode at 0.6 kg/h of LPG, the brake thermal efficiency is lowered by 26%; however, NOx is reduced by around 30% compared to the engine running on diesel fuel alone at 70% load. Similarly, results for the PKME blended fuel show a drastic reduction in NOx and CO emissions. In both modes of operation, DEE is observed to be a significant fuel additive for emissions reduction.
USDA-ARS?s Scientific Manuscript database
Dark, hard, and vitreous kernel content is an important grading characteristic for hard red spring (HRS) wheat in the U.S. This research aimed to determine the associations of kernel vitreousness (KV) with protein molecular weight distribution (MWD) and quality traits that were not biased by quanti...
Development of a Radial Deconsolidation Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmreich, Grant W.; Montgomery, Fred C.; Hunn, John D.
2015-12-01
A series of experiments has been initiated to determine the retention or mobility of fission products in AGR fuel compacts [Petti, et al. 2010]. This information is needed to refine fission product transport models. The AGR-3/4 irradiation test involved half-inch-long compacts that each contained twenty designed-to-fail (DTF) particles, with 20-μm thick carbon-coated kernels whose coatings were deliberately fabricated such that they would crack under irradiation, providing a known source of post-irradiation isotopes. The DTF particles in these compacts were axially distributed along the compact centerline so that the diffusion of fission products released from the DTF kernels would be radially symmetric [Hunn, et al. 2012; Hunn et al. 2011; Kercher, et al. 2011; Hunn, et al. 2007]. Compacts containing DTF particles were irradiated at Idaho National Laboratory (INL) in the Advanced Test Reactor (ATR) [Collin, 2015]. Analysis of the diffusion of these various post-irradiation isotopes through the compact requires a method to radially deconsolidate the compacts so that nested-annular volumes may be analyzed for post-irradiation isotope inventory in the compact matrix, TRISO outer pyrolytic carbon (OPyC), and DTF kernels. An effective radial deconsolidation method and apparatus appropriate to this application has been developed and parametrically characterized.
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; von Davier, Alina A.
2008-01-01
The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
Fission Product Release and Survivability of UN-Kernel LWR TRISO Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besmann, Theodore M; Ferber, Mattison K; Lin, Hua-Tay
2014-01-01
A thermomechanical assessment of the LWR application of TRISO fuel with UN kernels was performed. Fission product release under operational and transient temperature conditions was determined by extrapolation from range calculations and limited data from irradiated UN pellets. Both fission recoil and diffusive release were considered, and internal particle pressures were computed for both 650 and 800 μm diameter kernels as a function of buffer layer thickness. These pressures were used in conjunction with a finite element program to compute the radial and tangential stresses generated within a TRISO particle as a function of fluence. Creep and swelling of the inner and outer pyrolytic carbon layers were included in the analyses. A measure of reliability of the TRISO particle was obtained by computing the probability of survival of the SiC barrier layer and the maximum tensile stress generated in the pyrolytic carbon layers as a function of fluence. These reliability estimates were obtained as functions of the kernel diameter, buffer layer thickness, and pyrolytic carbon layer thickness. The probability of survival at the end of irradiation was inversely proportional to the maximum pressure.
Fission product release and survivability of UN-kernel LWR TRISO fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
T. M. Besmann; M. K. Ferber; H.-T. Lin
2014-05-01
A thermomechanical assessment of the LWR application of TRISO fuel with UN kernels was performed. Fission product release under operational and transient temperature conditions was determined by extrapolation from fission product recoil calculations and limited data from irradiated UN pellets. Both fission recoil and diffusive release were considered, and internal particle pressures were computed for both 650 and 800 μm diameter kernels as a function of buffer layer thickness. These pressures were used in conjunction with a finite element program to compute the radial and tangential stresses generated within a TRISO particle undergoing burnup. Creep and swelling of the inner and outer pyrolytic carbon layers were included in the analyses. A measure of reliability of the TRISO particle was obtained by computing the probability of survival of the SiC barrier layer and the maximum tensile stress generated in the pyrolytic carbon layers from internal pressure and thermomechanics of the layers. These reliability estimates were obtained as functions of the kernel diameter, buffer layer thickness, and pyrolytic carbon layer thickness. The probability of survival at the end of irradiation was inversely proportional to the maximum pressure.
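The dependence of internal pressure on kernel diameter and buffer thickness can be sketched with the ideal gas law: released fission gas occupies the buffer pore volume. The kernel diameters follow the 650 and 800 μm cases considered above; the temperature, gas yield, and buffer porosity below are assumed inputs, not values from the assessment.

```python
# Sketch: ideal-gas internal pressure in a TRISO particle as a function of
# kernel diameter and buffer thickness. Gas yield, temperature, and buffer
# porosity are illustrative assumptions.
import math

R = 8.314    # J/(mol*K)
T = 1073.0   # fuel temperature, K (assumed)

def internal_pressure(kernel_d_um, buffer_t_um, gas_mol_per_m3_kernel=60.0,
                      buffer_porosity=0.5):
    """Pressure (MPa) of released gas occupying the buffer pore volume."""
    rk = kernel_d_um * 1e-6 / 2.0
    rb = rk + buffer_t_um * 1e-6
    v_kernel = 4.0 / 3.0 * math.pi * rk**3
    v_void = (4.0 / 3.0 * math.pi * (rb**3 - rk**3)) * buffer_porosity
    n = gas_mol_per_m3_kernel * v_kernel   # mol of released gas (assumed yield)
    return n * R * T / v_void / 1e6

for d in (650.0, 800.0):
    for t in (50.0, 100.0):
        print(f"kernel {d:.0f} um, buffer {t:.0f} um: "
              f"{internal_pressure(d, t):.2f} MPa")
```

Even in this crude form the sketch reproduces the qualitative trends reported: pressure rises with kernel diameter and falls with buffer thickness, which is why survival probability tracks the maximum pressure.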
Fission product palladium-silicon carbide interaction in htgr fuel particles
NASA Astrophysics Data System (ADS)
Minato, Kazuo; Ogawa, Toru; Kashimura, Satoru; Fukuda, Kousaku; Shimizu, Michio; Tayama, Yoshinobu; Takahashi, Ishio
1990-07-01
Interaction of fission product palladium (Pd) with the silicon carbide (SiC) layer was observed in irradiated Triso-coated uranium dioxide particles for high temperature gas-cooled reactors (HTGR) with an optical microscope and electron probe microanalyzers. The SiC layers were attacked locally or the reaction product formed nodules at the attack site. Although the main element concerned with the reaction was palladium, rhodium and ruthenium were also detected at the corroded areas in some particles. Palladium was detected on both the hot and cold sides of the particles, but the corroded areas and the palladium accumulations were distributed particularly on the cold side of the particles. The observed Pd-SiC reaction depths were analyzed on the assumption that the release of palladium from the fuel kernel controls the whole Pd-SiC reaction.
Neutron dose rate analysis on HTGR-10 reactor using Monte Carlo code
NASA Astrophysics Data System (ADS)
Suwoto; Adrial, H.; Hamzah, A.; Zuhair; Bakhri, S.; Sunaryo, G. R.
2018-02-01
The HTGR-10 reactor has a cylinder-shaped core fuelled with TRISO coated fuel particle kernels in spherical pebbles, with a helium cooling system. The helium coolant outlet temperature from the reactor core is designed to be 700 °C. One advantage of the HTGR reactor type is its co-generation capability: in addition to generating electricity, the reactor is designed to produce high-temperature heat that can be used for other processes. Each spherical fuel pebble contains 8335 TRISO-coated UO2 kernel particles, with enrichments of 10% and 17%, dispersed in a graphite matrix. The main purpose of this study was to analyze the distribution of neutron dose rates generated by the HTGR-10 reactor. The calculation and analysis of the neutron dose rate in the HTGR-10 reactor core were performed using the Monte Carlo MCNP5v1.6 code. The double heterogeneity of the TRISO coated kernel fuel particles and the spherical fuel pebbles in the HTGR-10 core is modelled well with the MCNP5v1.6 code. The neutron flux-to-dose conversion factors taken from the International Commission on Radiological Protection (ICRP-74) were used to determine the dose rate passing through the active core, reflectors, core barrel, reactor pressure vessel (RPV), and biological shield. The neutron dose rates calculated with MCNP5v1.6 using the ICRP-74 (2009) conversion factors for radiation workers in the radial direction on the outside of the RPV (radial position = 220 cm from the core center of HTGR-10) are 9.22E-4 μSv/h and 9.58E-4 μSv/h for 10% and 17% enrichment, respectively. These values comply with BAPETEN Chairman's Regulation Number 4 Year 2013 on Radiation Protection and Safety in Nuclear Energy Utilization, which sets the limit on the average effective dose for radiation workers at 20 mSv/year, or 10 μSv/h.
Thus, the protection and safety of radiation workers with respect to this radiation source is ensured. From this analysis, it can be concluded that the calculated neutron dose rates for the HTGR-10 core meet the required radiation safety standards.
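The final step of such an analysis, folding a calculated neutron flux with flux-to-dose conversion coefficients and comparing against the workplace limit, can be sketched as below. The group fluxes and coefficients are illustrative placeholders in the spirit of ICRP-74, not the MCNP results quoted above.

```python
# Sketch: neutron dose rate = sum over energy groups of flux * conversion
# coefficient. Flux and coefficient values are illustrative assumptions.
flux = {"thermal": 1.2e2, "epithermal": 3.5e1, "fast": 2.0e0}   # n/cm^2/s (assumed)
coeff = {"thermal": 4.0e-12, "epithermal": 9.0e-12, "fast": 3.0e-10}  # Sv*cm^2 (assumed)

dose_sv_per_s = sum(flux[g] * coeff[g] for g in flux)
dose_usv_per_h = dose_sv_per_s * 3600.0 * 1e6
print(f"{dose_usv_per_h:.3e} uSv/h")

limit = 10.0  # uSv/h, the radiation-worker limit quoted above
print("within limit:", dose_usv_per_h <= limit)
```

Note how the fast group dominates the dose even at low flux, because its conversion coefficient is roughly two orders of magnitude larger than the thermal one.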
Advanced Fuels for LWRs: Fully-Ceramic Microencapsulated and Related Concepts FY 2012 Interim Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Sonat Sen; Brian Boer; John D. Bess
2012-03-01
This report summarizes the progress in the Deep Burn project at Idaho National Laboratory during the first half of fiscal year 2012 (FY2012). The current focus of this work is on Fully-Ceramic Microencapsulated (FCM) fuel containing low-enriched uranium (LEU) uranium nitride (UN) fuel kernels. UO2 fuel kernels have not been ruled out, and will be examined in later work in FY2012. Reactor physics calculations confirmed that FCM fuel containing 500 μm diameter UN fuel kernels has a positive moderator temperature coefficient (MTC) with a conventional fuel pellet radius of 4.1 mm. The methodology was put into place and validated against MCNP to perform whole-core calculations using DONJON, which can interpolate cross sections from a library generated using DRAGON. Comparisons to MCNP were performed on the whole core to confirm the accuracy of the DRAGON/DONJON schemes. A thermal fluid coupling scheme was also developed and implemented with DONJON. This is currently able to iterate between diffusion calculations and thermal fluid calculations in order to update fuel temperatures and cross sections in whole-core calculations. Now that the DRAGON/DONJON calculation capability is in place and has been validated against MCNP results, and a thermal-hydraulic capability has been implemented in the DONJON methodology, the work will proceed to more realistic reactor calculations. MTC calculations at the lattice level without the correct burnable poison are inadequate to guarantee zero or negative values in a realistic mode of operation. Using the DONJON calculation methodology described in this report, a startup core with enrichment zoning and burnable poisons will be designed. Larger fuel pins will be evaluated for their ability to (1) alleviate the problem of positive MTC and (2) increase reactivity-limited burnup. Once the critical boron concentration of the startup core is determined, MTC will be calculated to verify a non-positive value.
If the value is positive, the design will be changed to require less soluble boron by, for example, increasing the reactivity hold-down by burnable poisons. Then, the whole core analysis will be repeated until an acceptable design is found. Calculations of departure from nucleate boiling ratio (DNBR) will be included in the safety evaluation as well. Once a startup core is shown to be viable, subsequent reloads will be simulated by shuffling fuel and introducing fresh fuel. The PASTA code has been updated with material properties of UN fuel from the literature and a model for the diffusion and release of volatile fission products from the SiC matrix material. Preliminary simulations have been performed for both normal conditions and elevated temperatures. These results indicated that the fuel performs well and that the SiC matrix has good retention of the fission products. The path forward for fuel performance work includes improved modeling of metallic fission product release from the kernel. Results should be considered preliminary and further validation is required.
Fixed and Data Adaptive Kernels in Cohen’s Class of Time-Frequency Distributions
1992-09-01
Snippet fragments from the thesis: the signal is translated into its associated analytic signal using the techniques discussed in Chapter Four. A MATLAB function, PS = wvd(data, winlen, step, begin, theend), returns the Wigner-Ville time-frequency distribution for the input data. From the table of contents: IV. Fixed Kernel Distributions; A. Wigner-Ville Distribution.
NASA Technical Reports Server (NTRS)
Watkins, Charles E.; Berman, Julian H.
1956-01-01
This report treats the Kernel function of the integral equation that relates a known or prescribed downwash distribution to an unknown lift distribution for harmonically oscillating wings in supersonic flow. The treatment is essentially an extension to supersonic flow of the treatment given in NACA report 1234 for subsonic flow. For the supersonic case the Kernel function is derived by use of a suitable form of acoustic doublet potential which employs a cutoff or Heaviside unit function. The Kernel functions are reduced to forms that can be accurately evaluated by considering the functions in two parts: a part in which the singularities are isolated and analytically expressed, and a nonsingular part which can be tabulated.
Under the Bioenergy Program for Advanced Biofuels (Section 9005), eligible producers of advanced biofuels, i.e., fuels derived from renewable biomass other than corn kernel starch, may receive payments to support expanded production of advanced biofuels. Payment amounts will depend on the quantity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collin, Blaise P.; Demkowicz, Paul A.; Baldwin, Charles A.
2016-11-01
The PARFUME (PARticle FUel ModEl) code was used to predict silver release from tristructural isotropic (TRISO) coated fuel particles and compacts during the second irradiation experiment (AGR-2) of the Advanced Gas Reactor Fuel Development and Qualification program. The PARFUME model for the AGR-2 experiment used the fuel compact volume average temperature for each of the 559 days of irradiation to calculate the release of fission product silver from a representative particle for a select number of AGR-2 compacts and individual fuel particles containing either mixed uranium carbide/oxide (UCO) or 100% uranium dioxide (UO2) kernels. Post-irradiation examination (PIE) measurements were performed to provide data on release of silver from these compacts and individual fuel particles. The available experimental fractional releases of silver were compared to their corresponding PARFUME predictions. Preliminary comparisons show that PARFUME under-predicts the PIE results in UCO compacts and is in reasonable agreement with experimental data for UO2 compacts. The accuracy of PARFUME predictions is impacted by the code limitations in the modeling of the temporal and spatial distributions of the temperature across the compacts. Nevertheless, the comparisons on silver release lie within the same order of magnitude.
NASA Technical Reports Server (NTRS)
Campbell, R. H.; Essick, R. B.; Grass, J.; Johnston, G.; Kenny, K.; Russo, V.
1986-01-01
The EOS project is investigating the design and construction of a family of real-time distributed embedded operating systems for reliable, distributed aerospace applications. Using the real-time programming techniques developed in co-operation with NASA in earlier research, the project staff is building a kernel for a multiple processor networked system. The first six months of the grant included a study of scheduling in an object-oriented system, the design philosophy of the kernel, and the architectural overview of the operating system. In this report, the operating system and kernel concepts are described. An environment for the experiments has been built and several of the key concepts of the system have been prototyped. The kernel and operating system are intended to support future experimental studies in multiprocessing, load-balancing, routing, software fault-tolerance, distributed data base design, and real-time processing.
Time-Frequency Signal Representations Using Interpolations in Joint-Variable Domains
2016-06-14
distribution kernels,” IEEE Trans. Signal Process., vol. 42, no. 5, pp. 1156–1165, May 1994. [25] G. S. Cunningham and W. J. Williams, “Kernel… interpolated data. For comparison, we include sparse reconstruction and WVD and Choi–Williams distribution (CWD) [23], which are directly applied to… Prentice-Hall, 1995. [23] H. I. Choi and W. J. Williams, “Improved time-frequency representation of multicomponent signals using exponential kernels
Quantification of process variables for carbothermic synthesis of UC 1-xN x fuel microspheres
Lindemer, Terrence B.; Silva, Chinthaka M.; Henry, Jr, John James; ...
2016-11-05
This report details the continued investigation of process variables involved in converting sol-gel-derived, urania-carbon microspheres to ~820-μm-dia. UC1-xNx fuel kernels in flow-through, vertical Mo and W crucibles at temperatures up to 2123 K. Experiments included calcining of air-dried UO3-H2O-C microspheres in Ar and H2-containing gases, conversion of the resulting UO2-C kernels to dense UO2:2UC in the same gases and vacuum, and its conversion in N2 to UC1-xNx (x = ~0.85). The thermodynamics of the relevant reactions were applied extensively to interpret and control the process variables. Producing the precursor UO2:2UC kernel of ~96% theoretical density was required, but its subsequent conversion to UC1-xNx at 2123 K was not accompanied by sintering and resulted in ~83-86% of theoretical density. Increasing the UC1-xNx kernel nitride component to ~0.98 in flowing N2-H2 mixtures to evolve HCN was shown to be quantitatively consistent with present and past experiments and the only useful application of H2 in the entire process.
Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate
NASA Astrophysics Data System (ADS)
Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.
2008-08-01
The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. 
For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
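The Cimmino feasibility algorithm used above projects the source-strength vector simultaneously onto all violated dose constraints. A minimal sketch, assuming a linear model in which matrix A holds the light-fluence kernel (points x sources) and b the prescribed dose lower bounds (not the authors' actual code):

```python
import numpy as np

def cimmino(A, b, iters=500, lam=1.0):
    """Cimmino simultaneous-projection iteration for the feasibility
    problem A @ w >= b with nonnegative source strengths w."""
    m, n = A.shape
    w = np.zeros(n)
    row_norm2 = (A * A).sum(axis=1)          # squared norms of constraint rows
    for _ in range(iters):
        violation = np.maximum(b - A @ w, 0.0)  # unmet dose at each point
        # average of projections onto the violated half-spaces
        step = (violation / row_norm2) @ A / m
        w = np.maximum(w + lam * step, 0.0)     # keep strengths nonnegative
    return w
```

Because every violated constraint contributes to each step, the iteration is robust to inconsistent prescriptions, which is why it suits dose planning within clinical time limits.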
1979-09-30
University, Pittsburgh, Pennsylvania (1976). 14. R. L. Kirby, "ULISP for PDP-11s with Memory Management," Report MCS-76-23763, University of Maryland… teletype or graphics output. … must also monitor its own command queue and acknowledge commands sent to it by the user interface… kernel. By a network kernel we mean a multicomputer distributed operating system kernel that includes processor schedulers, "core" memory managers, and
Anisotropic Azimuthal Power and Temperature distribution on FuelRod. Impact on Hydride Distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Motta, Arthur; Ivanov, Kostadin; Avramova, Maria
2015-04-29
The degradation of the zirconium cladding may limit nuclear fuel performance. In the high temperature environment of a reactor, the zirconium in the cladding corrodes, releasing hydrogen in the process. Some of this hydrogen is absorbed by the cladding in a highly inhomogeneous manner. The distribution of the absorbed hydrogen is extremely sensitive to temperature and stress concentration gradients. The absorbed hydrogen tends to concentrate near lower temperatures. This hydrogen absorption and hydride formation can cause cladding failure. This project set out to improve the hydrogen distribution prediction capabilities of the BISON fuel performance code. The project was split into two primary sections: the first was the use of a high-fidelity multi-physics coupling to accurately predict temperature gradients as a function of r, θ, and z, and the second was to use experimental data to create an analytical hydrogen precipitation model. The Penn State version of the thermal-hydraulics code COBRA-TF (CTF) was successfully coupled to the DeCART neutronics code. This coupled system was verified by testing and validated by comparison to FRAPCON data. The hydrogen diffusion and precipitation experiments successfully calculated the heat of transport and precipitation rate constant values to be used within the hydrogen model in BISON. These values can only be determined experimentally. They were successfully implemented in precipitation, diffusion, and dissolution kernels in the BISON code. The coupled output was fed into BISON models, and the hydrogen and hydride distributions behaved as expected. Simulations were conducted in the radial, axial, and azimuthal directions to showcase the full capabilities of the hydrogen model.
Panda, Jibitesh Kumar; Sastry, Gadepalli Ravi Kiran; Rai, Ram Naresh
2018-05-25
The energy situation and concerns about global warming have ignited research interest in non-conventional and alternative fuel resources to decrease emissions and the continued dependency on fossil fuels, particularly in sectors such as power generation, transportation, and agriculture. In the present work, the research focuses on evaluating the performance, emission characteristics, and combustion of a biodiesel, palm kernel methyl ester (PKME), with the diesel additive triacetin added to it. A timed manifold injection (TMI) system was used to examine the influence of the injection durations of several blends on emission and performance characteristics as compared to normal diesel operation. This experimental study shows better performance and lower emissions compared with mineral diesel, indicating that high performance and low emissions are achievable with PKME-triacetin fuel operation. The analysis also applies fuzzy logic-based Taguchi analysis to optimize the emission and performance parameters.
Rafal Podlaski; Francis A. Roesch
2014-01-01
Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...
Notes on a storage manager for the Clouds kernel
NASA Technical Reports Server (NTRS)
Pitts, David V.; Spafford, Eugene H.
1986-01-01
The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.
1993-03-01
Figure 4-4: HOST_ALL Implementation. HOST_ALL is implemented as follows. The kernel looks up the… it includes the HOST_ALL request as an argument. The generic CronusHost object is managed by the Cronus Kernel. A kernel that receives a ProxyDistribute request uses its cached service information to send the HOST_ALL request to each host in its cluster via UDP. If the kernel has no cached information
Feasibility of detecting Aflatoxin B1 in single maize kernels using hyperspectral imaging
USDA-ARS?s Scientific Manuscript database
The feasibility of detecting Aflatoxin B1 (AFB1) in single maize kernel inoculated with Aspergillus flavus conidia in the field, as well as its spatial distribution in the kernels, was assessed using near-infrared hyperspectral imaging (HSI) technique. Firstly, an image mask was applied to a pixel-b...
Presumed PDF Modeling of Early Flame Propagation in Moderate to Intense Turbulence Environments
NASA Technical Reports Server (NTRS)
Carmen, Christina; Feikema, Douglas A.
2003-01-01
The present paper describes the results obtained from a one-dimensional time dependent numerical technique that simulates early flame propagation in a moderate to intense turbulent environment. Attention is focused on the development of a spark-ignited, premixed, lean methane/air mixture with the unsteady spherical flame propagating in homogeneous and isotropic turbulence. A Monte-Carlo particle tracking method, based upon the method of fractional steps, is utilized to simulate the phenomena represented by a probability density function (PDF) transport equation. Gaussian distributions of fluctuating velocity and fuel concentration are prescribed. Attention is focused on three primary parameters that influence the initial flame kernel growth: the detailed ignition system characteristics, the mixture composition, and the nature of the flow field. The computational results for moderate and intense isotropic turbulence suggest that flames within the distributed reaction zone are not as vulnerable as traditionally believed to the adverse effects of increased turbulence intensity. It is also shown that the magnitude of the flame front thickness significantly impacts the turbulent consumption flame speed. Flame conditions studied have fuel equivalence ratios in the range φ = 0.6 to 0.9 at standard temperature and pressure.
LoCoH: Non-parametric kernel methods for constructing home ranges and utilization distributions
Getz, Wayne M.; Fortmann-Roe, Scott; Cross, Paul C.; Lyons, Andrew J.; Ryan, Sadie J.; Wilmers, Christopher C.
2007-01-01
Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: "fixed sphere-of-influence," or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an "adaptive sphere-of-influence," or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original "fixed-number-of-points," or k-LoCoH (all kernels constructed from k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).
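The a-LoCoH neighbor rule described above (take the nearest points whose distances to the root point sum to at most a, then build a local hull from them) can be sketched as follows; the point set and threshold are made up for illustration:

```python
import numpy as np

def a_locoh_neighbors(points, root, a):
    """Indices of points used for the local hull around `root` under the
    adaptive sphere-of-influence (a-LoCoH) rule: nearest points whose
    distances to the root sum to at most a (the root itself included)."""
    d = np.linalg.norm(points - root, axis=1)   # distance of each fix to root
    order = np.argsort(d)                       # nearest first
    keep = order[np.cumsum(d[order]) <= a]      # cumulative-distance cutoff
    return keep
```

A convex hull over the selected points (e.g., with scipy.spatial.ConvexHull) would then give one local hull; the union of such hulls over all root points forms the HR/UD estimate.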
The effect of carbon crystal structure on treat reactor physics calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swanson, R.W.; Harrison, L.J.
1988-01-01
The Transient Reactor Test Facility (TREAT) at Argonne National Laboratory-West (ANL-W) is fueled with urania in a graphite and carbon mixture. This fuel was fabricated from a mixture of graphite flour, thermax (a thermatomic carbon produced by "cracking" natural gas), coal-tar resin and U/sub 3/O/sub 8/. During the fabrication process, the fuel was baked to dissociate the resin, but the high temperature necessary to graphitize the carbon in the thermax and in the resin was avoided. Therefore, the carbon crystal structure is a complex mixture of graphite particles in a nongraphitized elemental carbon matrix. Results of calculations using macroscopic carbon cross sections obtained by mixing bound-kernel graphite cross sections for the graphitized carbon and free-gas carbon cross sections for the remainder of the carbon and calculations using only bound-kernel graphite cross sections are compared to experimental data. It is shown that the use of the hybridized cross sections which reflect the allotropic mixture of the carbon in the TREAT fuel results in a significant improvement in the accuracy of calculated neutronics parameters for the TREAT reactor. 6 refs., 2 figs., 3 tabs.
Data Compilation for AGR-3/4 Designed-to-Fail (DTF) Fuel Particle Batch LEU04-02DTF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunn, John D; Miller, James Henry
2008-10-01
This document is a compilation of coating and characterization data for the AGR-3/4 designed-to-fail (DTF) particles. The DTF coating is a high density, high anisotropy pyrocarbon coating of nominal 20 µm thickness that is deposited directly on the kernel. The purpose of this coating is to fail early in the irradiation, resulting in a controlled release of fission products which can be analyzed to provide data on fission product transport. A small number of DTF particles will be included with standard TRISO driver fuel particles in the AGR-3 and AGR-4 compacts. The ORNL Coated Particle Fuel Development Laboratory 50-mm diameter fluidized bed coater was used to coat the DTF particles. The coatings were produced using procedures and process parameters that were developed in an earlier phase of the project as documented in 'Summary Report on the Development of Procedures for the Fabrication of AGR-3/4 Design-to-Fail Particles', ORNL/TM-2008/161. Two coating runs were conducted using the approved coating parameters. NUCO425-06DTF was a final process qualification batch using natural enrichment uranium carbide/uranium oxide (UCO) kernels. After the qualification run, LEU04-02DTF was produced using low enriched UCO kernels. Both runs were inspected and determined to meet the specifications for DTF particles in section 5 of the AGR-3 & 4 Fuel Product Specification (EDF-6638, Rev.1). Table 1 provides a summary of key properties of the DTF layer. For comparison purposes, an archive sample of DTF particles produced by General Atomics was characterized using identical methods. This data is also summarized in Table 1.
Credit scoring analysis using kernel discriminant
NASA Astrophysics Data System (ADS)
Widiharih, T.; Mukid, M. A.; Mustafid
2018-05-01
Credit scoring models are an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, meaning that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels: normal, Epanechnikov, biweight, and triweight. The models' accuracy was compared using data from a financial institution in Indonesia. The results show that the kernel discriminant can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
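The general idea, classifying an applicant by comparing prior-weighted kernel density estimates of each class, can be sketched in one dimension with the Epanechnikov kernel mentioned above. The scores, class samples, and bandwidth below are made up; this is not the paper's model:

```python
import numpy as np

def epanechnikov(u):
    # Epanechnikov kernel: 0.75*(1 - u^2) on |u| <= 1, zero elsewhere
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def kde(x, sample, h):
    # kernel density estimate at point x from the given class sample
    return epanechnikov((x - sample) / h).mean() / h

def classify(x, good, bad, h=1.0, prior_good=0.5):
    # nonparametric discriminant: pick the class with larger prior * density
    p_good = prior_good * kde(x, good, h)
    p_bad = (1 - prior_good) * kde(x, bad, h)
    return "accept" if p_good >= p_bad else "reject"
```

Swapping `epanechnikov` for a normal, biweight, or triweight kernel changes only the weighting function, which is how the kernels compared in the paper differ.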
NASA Astrophysics Data System (ADS)
Xie, Shi-Peng; Luo, Li-Min
2012-06-01
The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used occasionally in previous studies. However, this method differs in that a scatter detecting blocker (SDB) was used between the X-ray source and the tested object to model the self-adaptive scatter kernel. This study first evaluates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on the SKS. Image quality can be improved by removing the scatter distribution. The results show that the method can effectively reduce the scatter artifacts and increase image quality. Our approach increases the image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique can be significantly improved in our method by using a self-adaptive scatter kernel. This method is computationally efficient, easy to implement, and provides scatter correction using a single scan acquisition.
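The SKS idea, scatter modeled as the convolution of the (unknown) primary signal with a scatter kernel and removed iteratively, can be sketched in one dimension as follows. This is a toy illustration under a made-up kernel, not the authors' self-adaptive 2-D method:

```python
import numpy as np

def sks_correct(measured, kernel, n_iter=5):
    """Iterative scatter-kernel-superposition correction (1-D sketch).

    measured = primary + scatter, with scatter ~= primary (*) kernel.
    Since the primary is unknown, start from the measured signal and
    refine: each pass re-estimates scatter from the current primary.
    """
    primary = measured.copy()
    for _ in range(n_iter):
        scatter = np.convolve(primary, kernel, mode="same")
        primary = np.clip(measured - scatter, 0.0, None)  # keep nonnegative
    return primary
```

Because the kernel's total weight (scatter-to-primary ratio) is well below 1, the fixed-point iteration contracts and a few passes suffice.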
NASA Astrophysics Data System (ADS)
Irimescu, A.; Merola, S. S.
2017-10-01
Extensive application of downsizing, as well as the application of alternative combustion control with respect to well established stoichiometric operation, have determined a continuous increase in the energy that is delivered to the working fluid in order to achieve stable and repeatable ignition. Apart from the complexity of fluid-arc interactions, the extreme thermodynamic conditions of this initial combustion stage make its characterization difficult, both through experimental and numerical techniques. Within this context, the present investigation looks at the analysis of spark discharge and flame kernel formation, through the application of UV-visible spectroscopy. Characterization of the energy transfer from the spark plug’s electrodes to the air-fuel mixture was achieved by the evaluation of vibrational and rotational temperatures during ignition, for stoichiometric and lean fuelling of a direct injection spark ignition engine. Optical accessibility was ensured from below the combustion chamber through an elongated piston design that allowed the central region of the cylinder to be investigated. Fuel effects were evaluated for gasoline and n-butanol; roughly the same load was investigated in throttled and wide-open throttle conditions for both fuels. A brief thermodynamic analysis confirmed that significant gains in efficiency can be obtained with lean fuelling, mainly due to the reduction of pumping losses. Minimal effect of fuel type was observed, while mixture strength was found to have a stronger influence on calculated temperature values, especially during the initial stage of ignition. In-cylinder pressure was found to directly determine emission intensity during ignition, but the vibrational and rotational temperatures featured reduced dependence on this parameter. As expected, at the end of kernel formation, temperature values converged towards those typically found for adiabatic flames.
The results show that indeed only a relatively small part of the electrical energy is actually used for promoting chemical reactions and that temperatures during the arc and kernel phases are influenced to a reduced extent by fuel concentration.
NASA Astrophysics Data System (ADS)
Webb, Jonathan A.
The optimized development path for the fabrication of ultra-high temperature W-UO2 CERMET fuel elements was explored within this dissertation. A robust literature search was conducted, which concluded that a W-UO2 fuel element must contain a fine tungsten microstructure and spherical UO2 kernels throughout the entire consolidation process. Combined Monte Carlo and Computational Fluid Dynamics (CFD) analyses were used to determine the effects of rhenium and gadolinia additions on the performance of W-UO2 fuel elements at refractory temperatures and in dry and water-submerged environments. The computational analysis also led to the design of quasi-optimized fuel elements that can meet thermal-hydraulic and neutronic requirements. A rigorous set of experiments was conducted to determine if Pulsed Electric Current Sintering (PECS) can fabricate tungsten and W-CeO2 specimens to the geometries, densities, and microstructures required for high temperature fuel elements, as well as to determine the mechanisms involved in the PECS consolidation process. The CeO2 acts as a surrogate for UO2 fuel kernels in these experiments. The experiments seemed to confirm that PECS consolidation takes place via diffusional mass transfer methods; however, the densification process is rapidly accelerated due to the effects of current densities within the consolidating specimen. Fortunately, grain growth proceeds at a traditional rate and the PECS process can yield near fully dense W and W-CeO2 specimens with a finer microstructure than other sintering techniques. PECS consolidation techniques were also shown to be capable of producing W-UO2 segments at near-prototypic geometries; however, great care must be taken to coat the fuel particles with tungsten prior to sintering.
Also, great care must be taken to ensure that the particles remain spherical in geometry under the uniaxial stress applied during PECS, which involves mixing different fuel kernel sizes in order to reduce the porosity in the initial green compact. Particle mixing techniques were also shown to be capable of producing consolidated CERMETs, but with a less than desirable microstructure. The work presented herein will help in the development of very high temperature reactors for terrestrial and space missions in the future.
Results from the DOE Advanced Gas Reactor Fuel Development and Qualification Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Petti
2014-06-01
Modular HTGR designs were developed to provide natural safety, which prevents core damage under all design basis accidents and presently envisioned severe accidents. The principle that guides their design concepts is to passively maintain core temperatures below fission product release thresholds under all accident scenarios. This level of fuel performance and fission product retention reduces the radioactive source term by many orders of magnitude and allows potential elimination of the need for evacuation and sheltering beyond a small exclusion area. This level, however, is predicated on exceptionally high fuel fabrication quality and performance under normal operation and accident conditions. Germany produced and demonstrated high quality fuel for their pebble bed HTGRs in the 1980s, but no U.S. manufactured fuel had exhibited equivalent performance prior to the Advanced Gas Reactor (AGR) Fuel Development and Qualification Program. The design goal of the modular HTGRs is to allow elimination of an exclusion zone and an emergency planning zone outside the plant boundary fence, typically interpreted as being about 400 meters from the reactor. To achieve this, the reactor design concepts require a level of fuel integrity that is better than that claimed for all prior US manufactured TRISO fuel, by a few orders of magnitude. The improved performance level is about a factor of three better than qualified for German TRISO fuel in the 1980s. At the start of the AGR program, without a reactor design concept selected, the AGR fuel program selected to qualify fuel to an operating envelope that would bound both pebble bed and prismatic options. This resulted in needing a fuel form that could survive at peak fuel temperatures of 1250°C on a time-averaged basis and high burnups in the range of 150 to 200 GWd/MTHM (metric tons of heavy metal) or 16.4 to 21.8% fissions per initial metal atom (FIMA).
Although Germany has demonstrated excellent performance of TRISO-coated UO2 particle fuel up to about 10% FIMA and 1150°C, UO2 fuel is known to have limitations because of CO formation and kernel migration at the high burnups, power densities, temperatures, and temperature gradients that may be encountered in the prismatic modular HTGRs. With uranium oxycarbide (UCO) fuel, the kernel composition is engineered to prevent CO formation and kernel migration, which are key threats to fuel integrity at higher burnups, temperatures, and temperature gradients. Furthermore, the recent poor fuel performance of UO2 TRISO fuel pebbles measured in Chinese irradiation testing in Russia and in German pebbles irradiated at 1250°C, and historic data on poorer fuel performance in safety testing of German pebbles that experienced burnups in excess of 10% FIMA [1] have each raised concern about the use of UO2 TRISO above 10% FIMA and 1150°C and the degree of margin available in the fuel system. This continues to be an active area of study internationally.
Embedded real-time operating system micro kernel design
NASA Astrophysics Data System (ADS)
Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng
2005-12-01
Embedded systems usually require a real-time character. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section handling, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management, and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function, and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and has quick response while operating in an application system.
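The task-scheduling component described above, which dispatches the most important ready task first, can be illustrated with a minimal sketch (Python for clarity rather than 8051 C; the class and task names are invented, not from the paper):

```python
import heapq

class Task:
    """Hypothetical task control block; lower number = higher priority."""
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority
    def __lt__(self, other):
        # lets heapq order tasks by priority
        return self.priority < other.priority

class MicroKernelScheduler:
    """Minimal priority scheduler: always run the highest-priority ready task."""
    def __init__(self):
        self._ready = []  # heap of ready tasks

    def make_ready(self, task):
        heapq.heappush(self._ready, task)

    def schedule(self):
        """Return the highest-priority ready task, or None when idle."""
        return heapq.heappop(self._ready) if self._ready else None

sched = MicroKernelScheduler()
sched.make_ready(Task("uart_rx", priority=2))
sched.make_ready(Task("watchdog", priority=0))
sched.make_ready(Task("logger", priority=5))
first = sched.schedule()  # watchdog, the most urgent task
```

A real 8051 kernel would implement the ready queue as a fixed-size array and switch contexts in the timer interrupt handler; the heap here just makes the priority rule explicit.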
Effects of sample size on KERNEL home range estimates
Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.
1999-01-01
Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
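For intuition, the fixed-kernel utilization distribution and 95% home-range area described above can be sketched as follows. This is an illustrative toy, not the authors' code: the bandwidth h is assumed fixed (LSCV smoothing selection is omitted), and the grid and locations are invented:

```python
import math

def kde2d(points, h, grid):
    """Fixed-kernel 2D Gaussian KDE evaluated at each grid cell centre."""
    n = len(points)
    norm = 1.0 / (n * 2.0 * math.pi * h * h)
    return [norm * sum(math.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * h * h))
                       for x, y in points)
            for gx, gy in grid]

def home_range_area(densities, cell, level=0.95):
    """Area of the smallest set of cells holding `level` of the
    utilization distribution (the '95% home range')."""
    masses = sorted((d * cell * cell for d in densities), reverse=True)
    total = sum(masses)
    acc, n_cells = 0.0, 0
    for m in masses:
        if acc >= level * total:
            break
        acc += m
        n_cells += 1
    return n_cells * cell * cell

# toy example: a small cluster of animal locations on a 0.25-unit grid
cell = 0.25
grid = [(i * cell, j * cell) for i in range(-20, 21) for j in range(-20, 21)]
dens = kde2d([(0.0, 0.0), (0.5, 0.0), (0.0, 0.5)], h=1.0, grid=grid)
area95 = home_range_area(dens, cell)
```

The paper's finding that bias stabilizes near 50 locations can be explored by repeating this with progressively larger simulated samples.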
Increasing accuracy of dispersal kernels in grid-based population models
Slone, D.H.
2011-01-01
Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell integration method, or σ ≤ 0.22 using the cell center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different than theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
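The two discretization methods compared above can be sketched in 1D. This is a simplified illustration (the paper's error metric compares repeated convolutions against an analytical solution; here only mass conservation is checked, which already exposes the cell-center method's failure for small σ):

```python
import math

def cell_center(sigma, i, cell=1.0):
    """Cell-center method: weight = Gaussian density at the cell midpoint * width."""
    x = i * cell
    return cell * math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def cell_integral(sigma, i, cell=1.0):
    """Cell-integration method: exact Gaussian mass over the cell, via erf."""
    a, b = (i - 0.5) * cell, (i + 0.5) * cell
    phi = lambda z: 0.5 * (1.0 + math.erf(z / (sigma * math.sqrt(2.0))))
    return phi(b) - phi(a)

def mass_error(sigma, method, half_width=50):
    """How far the discretized kernel's total mass departs from 1."""
    weights = [method(sigma, i) for i in range(-half_width, half_width + 1)]
    return abs(1.0 - sum(weights))
```

For σ of several cells both methods are essentially exact, but at σ = 0.12 cells the cell-center kernel concentrates far too much mass at the origin, mirroring the >10% kernel errors reported above.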
Parameter Study of the LIFE Engine Nuclear Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kramer, K J; Meier, W R; Latkowski, J F
2009-07-10
LLNL is developing the nuclear fusion based Laser Inertial Fusion Energy (LIFE) power plant concept. The baseline design uses a depleted uranium (DU) fission fuel blanket with a flowing molten salt coolant (flibe) that also breeds the tritium needed to sustain the fusion energy source. Indirect drive targets, similar to those that will be demonstrated on the National Ignition Facility (NIF), are ignited at ~13 Hz providing a 500 MW fusion source. The DU is in the form of a uranium oxycarbide kernel in modified TRISO-like fuel particles distributed in a carbon matrix forming 2-cm-diameter pebbles. The thermal power is held at 2000 MW by continuously varying the 6Li enrichment in the coolants. There are many options to be considered in the engine design including target yield, U-to-C ratio in the fuel, fission blanket thickness, etc. Here we report results of design variations and compare them in terms of various figures of merit such as time to reach a desired burnup, full-power years of operation, time and maximum burnup at power ramp down, and the overall balance of plant utilization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, S; Takayanagi, T; Fujii, Y
2014-06-15
Purpose: To present the validity of our beam modeling with double and triple Gaussian dose kernels for spot scanning proton beams in Nagoya Proton Therapy Center. This study investigates the conformance between the measurements and calculation results in absolute dose with two types of beam kernel. Methods: A dose kernel is one of the important input data required for the treatment planning software. The dose kernel is the 3D dose distribution of an infinitesimal pencil beam of protons in water and consists of integral depth doses and lateral distributions. We have adopted double and triple Gaussian models as the lateral distribution in order to take account of the large-angle scattering due to nuclear reactions, by fitting the simulated in-water lateral dose profile for a needle proton beam at various depths. The fitted parameters were interpolated as a function of depth in water and were stored as a separate look-up table for each beam energy. The process of beam modeling is based on the method of MDACC [X.R.Zhu 2013]. Results: From the comparison between the absolute doses calculated by the double Gaussian model and those measured at the center of the SOBP, the difference increased up to 3.5% in the high-energy region because the large-angle scattering due to nuclear reactions is not sufficiently considered at intermediate depths in the double Gaussian model. In the case of the triple Gaussian dose kernels, the measured absolute dose at the center of the SOBP agrees with calculation within ±1% regardless of the SOBP width and maximum range. Conclusion: We have demonstrated the beam modeling results of dose distribution employing double and triple Gaussian dose kernels. The treatment planning system with the triple Gaussian dose kernel has been successfully verified and applied to patient treatment with a spot scanning technique in Nagoya Proton Therapy Center.
Preparation of UC0.07-0.10N0.90-0.93 spheres for TRISO coated fuel particles
NASA Astrophysics Data System (ADS)
Hunt, R. D.; Silva, C. M.; Lindemer, T. B.; Johnson, J. A.; Collins, J. L.
2014-05-01
The US Department of Energy is considering a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with dense uranium nitride (UN) kernels with diameters of 650 or 800 μm. The objectives of this effort are to make uranium oxide microspheres with adequately dispersed carbon nanoparticles and to convert these microspheres into UN spheres, which could then be sintered into kernels. Recent improvements to the internal gelation process were successfully applied to the production of uranium gel spheres with different concentrations of carbon black. After the spheres were washed and dried, a simple two-step heat profile was used to produce porous microspheres with a chemical composition of UC0.07-0.10N0.90-0.93. The first step involved heating the microspheres to 2023 K in a vacuum, and in the second step, the microspheres were held at 1873 K for 6 h in flowing nitrogen.
NASA Astrophysics Data System (ADS)
Wei, Haiqiao; Zhao, Wanhui; Zhou, Lei; Chen, Ceyuan; Shu, Gequn
2018-03-01
Large eddy simulation coupled with the linear eddy model (LEM) is employed for the simulation of n-heptane spray flames to investigate the low temperature ignition and combustion process in a constant-volume combustion vessel under diesel-engine relevant conditions. Parametric studies are performed to give a comprehensive understanding of the ignition processes. The non-reacting case is firstly carried out to validate the present model by comparing the predicted results with the experimental data from the Engine Combustion Network (ECN). Good agreements are observed in terms of liquid and vapour penetration length, as well as the mixture fraction distributions at different times and different axial locations. For the reacting cases, the flame index was introduced to distinguish between the premixed and non-premixed combustion. A reaction region (RR) parameter is used to investigate the ignition and combustion characteristics, and to distinguish the different combustion stages. Results show that the two-stage combustion process can be identified in spray flames, and different ignition positions in the mixture fraction versus RR space are well described at low and high initial ambient temperatures. At an initial condition of 850 K, the first-stage ignition is initiated at the fuel-lean region, followed by the reactions in fuel-rich regions. Then high-temperature reaction occurs mainly at the places with mixture concentration around stoichiometric mixture fraction. While at an initial temperature of 1000 K, the first-stage ignition occurs at the fuel-rich region first, then it moves towards fuel-richer region. Afterwards, the high-temperature reactions move back to the stoichiometric mixture fraction region. For all of the initial temperatures considered, high-temperature ignition kernels are initiated at the regions richer than stoichiometric mixture fraction. 
By increasing the initial ambient temperature, the high-temperature ignition kernels move towards richer mixture regions. After the spray flame becomes quasi-steady, most heat is released in the stoichiometric mixture fraction regions. In addition, combustion mode analysis based on key intermediate species illustrates the three-mode combustion process in diesel spray flames.
Automated skin lesion segmentation with kernel density estimation
NASA Astrophysics Data System (ADS)
Pardo, A.; Real, E.; Fernandez-Barreras, G.; Madruga, F. J.; López-Higuera, J. M.; Conde, O. M.
2017-07-01
Skin lesion segmentation is a complex step for dermoscopy pathological diagnosis. Kernel density estimation is proposed as a segmentation technique based on the statistical distribution of color intensities in the lesion and non-lesion regions.
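A density-based segmentation of this kind can be sketched as two intensity KDEs compared pixel by pixel. This is a schematic illustration, not the authors' method: single-channel intensities, an invented bandwidth, and a plain likelihood comparison are all assumptions:

```python
import math

def kde1d(samples, h):
    """Gaussian kernel density estimate over scalar pixel intensities."""
    n = len(samples)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return lambda x: c * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)

def segment(pixels, lesion_samples, skin_samples, h=5.0):
    """Label a pixel as lesion where the lesion-intensity KDE is more likely."""
    p_lesion = kde1d(lesion_samples, h)
    p_skin = kde1d(skin_samples, h)
    return [p_lesion(v) > p_skin(v) for v in pixels]

# toy training intensities: dark lesion region vs bright surrounding skin
lesion_train = [35, 40, 42, 45, 50]
skin_train = [180, 190, 200, 210, 220]
mask = segment([44, 195, 60, 170], lesion_train, skin_train)
```

A practical dermoscopy pipeline would work on full color distributions and add spatial regularization; the likelihood comparison above is only the statistical core.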
NASA Technical Reports Server (NTRS)
Momonoki, Y. S.; Bandurski, R. S. (Principal Investigator)
1988-01-01
Indole-3-acetyl-myo-inositol occurs in both the kernel and vegetative shoot of germinating Zea mays seedlings. The effect of a gravitational stimulus on the transport of [3H]-5-indole-3-acetyl-myo-inositol and [U-14C]-D-glucose from the kernel to the seedling shoot was studied. Both labeled glucose and labeled indole-3-acetyl-myo-inositol become asymmetrically distributed in the mesocotyl cortex of the shoot with more radioactivity occurring in the bottom half of a horizontally placed seedling. Asymmetric distribution of [3H]indole-3-acetic acid, derived from the applied [3H]indole-3-acetyl-myo-inositol, occurred more rapidly than distribution of total 3H-radioactivity. These findings demonstrate that the gravitational stimulus can induce an asymmetric distribution of substances being transported from kernel to shoot. They also indicate that, in addition to the transport asymmetry, gravity affects the steady state amount of indole-3-acetic acid derived from indole-3-acetyl-myo-inositol.
Proteome analysis of the almond kernel (Prunus dulcis).
Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu
2016-08-01
Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the importance of almond kernel proteins in nutrition and human health requires further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, and most were proteins that were experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that they participate mainly in metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%); their main molecular functions are catalytic activity (48.0%), binding (45.4%), and structural molecule activity (11.9%); and they are distributed primarily in the cell (59.9%), organelle (44.9%), and membrane (22.8%). Almond kernel is a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.
Adaptive Fault-Resistant Systems
1994-10-01
An Architectural Overview of the Alpha Real-Time Distributed Kernel. In Proceedings of the USENIX Workshop on Microkernels and Other Kernel ... system and the controller are monolithic. We have noted earlier some of the problems of distributed systems, for example, the need to bound the ... are monolithic. In practice, designers employ a layered structuring for their systems in order to manage complexity, and we expect that practical
USDA-ARS?s Scientific Manuscript database
Kernel vitreousness is an important grading characteristic for segregation of sub-classes of hard red spring (HRS) wheat in the U.S. This research investigated the protein molecular weight distribution (MWD), and flour and baking quality characteristics of different HRS wheat market sub-classes. T...
AGR-5/6/7 LEUCO Kernel Fabrication Readiness Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, Douglas W.; Bailey, Kirk W.
2015-02-01
In preparation for forming low-enriched uranium carbide/oxide (LEUCO) fuel kernels for the Advanced Gas Reactor (AGR) fuel development and qualification program, Idaho National Laboratory conducted an operational readiness review of the Babcock & Wilcox Nuclear Operations Group – Lynchburg (B&W NOG-L) procedures, processes, and equipment from January 14 to January 16, 2015. The readiness review focused on requirements taken from the American Society of Mechanical Engineers (ASME) Nuclear Quality Assurance Standard (NQA-1-2008, 1a-2009), a recent occurrence at the B&W NOG-L facility related to preparation of acid-deficient uranyl nitrate solution (ADUN), and a second look at concerns noted in a previous review. Topic areas open for the review were communicated to B&W NOG-L in advance of the on-site visit to facilitate the collection of objective evidence attesting to the state of readiness.
Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki
2016-03-01
The main purpose in this study was to present the results of beam modeling and how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for spot scanning technique. The accuracy of calculations was important for treatment planning software (TPS) because the energy, spot position, and absolute dose had to be determined by TPS for the spot scanning technique. The dose distribution was calculated by convolving in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam and consisted of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region was important for spot scanning technique because the dose distribution was formed by cumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the kernel lateral model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table. These stored parameters for each energy and depth in water were acquired from the look-up table when incorporating them into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature. These were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations. 
The authors investigated the difference between double and triple Gaussian kernel models. The authors found that the difference between the two studied kernel models appeared at mid-depths, where the accuracy of prediction with the double Gaussian model deteriorated at the low-dose bump. When the authors employed the double Gaussian kernel model, the accuracy of calculations for the absolute dose at the center of the SOBP varied with irradiation conditions and the maximum difference was 3.4%. In contrast, the results obtained from calculations with the triple Gaussian kernel model indicated good agreement with the measurements within ±1.1%, regardless of the irradiation conditions. The difference between the results obtained with the two types of studied kernel models was distinct in the high energy region. The accuracy of calculations with the double Gaussian kernel model varied with the field size and SOBP width because the accuracy of prediction with the double Gaussian model was insufficient at the low-dose bump. The evaluation was only qualitative under limited volumetric irradiation conditions. Further accumulation of measured data would be needed to quantitatively comprehend what influence the double and triple Gaussian kernel models had on the accuracy of dose calculations.
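The double vs. triple Gaussian lateral kernels discussed above can be sketched as weighted sums of normalized 2D Gaussians. The weights and sigmas below are invented for illustration (the real parameters are fit to Monte Carlo profiles per energy and depth); the sketch only shows why a third, very broad component raises the predicted low-dose halo:

```python
import math

def gaussian2d(r, sigma):
    """Radially symmetric 2D Gaussian with unit integral over the plane."""
    return math.exp(-r * r / (2.0 * sigma * sigma)) / (2.0 * math.pi * sigma * sigma)

def lateral_kernel(r, components):
    """Lateral dose profile as a weighted sum of 2D Gaussians.
    components: [(weight, sigma_cm), ...]; weights sum to 1 so the kernel
    integrates to unity and the integral depth dose carries the absolute dose."""
    return sum(w * gaussian2d(r, s) for w, s in components)

# hypothetical parameter sets: a narrow core plus one or two broad halo terms
double = [(0.97, 0.4), (0.03, 2.0)]
triple = [(0.95, 0.4), (0.04, 2.0), (0.01, 6.0)]
```

At large radius the triple model predicts more dose than the double model; summed over thousands of spots, that halo difference is what shifts the absolute dose at the center of a large SOBP field.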
Carbothermic Synthesis of ~820-μm UN Kernels: Investigation of Process Variables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindemer, Terrence; Silva, Chinthaka M; Henry, Jr, John James
2015-06-01
This report details the continued investigation of process variables involved in converting sol-gel-derived urania-carbon microspheres to ~820-μm-dia. UN fuel kernels in flow-through, vertical refractory-metal crucibles at temperatures up to 2123 K. Experiments included calcining of air-dried UO3-H2O-C microspheres in Ar and H2-containing gases, conversion of the resulting UO2-C kernels to dense UO2:2UC in the same gases and vacuum, and its conversion in N2 to UC1-xNx. The thermodynamics of the relevant reactions were applied extensively to interpret and control the process variables. Producing the precursor UO2:2UC kernel of ~96% theoretical density was required, but its subsequent conversion to UC1-xNx at 2123 K was not accompanied by sintering and resulted in ~83-86% of theoretical density. Decreasing the UC1-xNx kernel carbide component via HCN evolution was shown to be quantitatively consistent with present and past experiments and the only useful application of H2 in the entire process.
Bhattacharya, Abhishek; Dunson, David B.
2012-01-01
This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295
Kernel-based whole-genome prediction of complex traits: a review.
Morota, Gota; Gianola, Daniel
2014-01-01
Prediction of genetic values has been a focus of applied quantitative genetics since the beginning of the 20th century, with renewed interest following the advent of the era of whole genome-enabled prediction. Opportunities offered by the emergence of high-dimensional genomic data fueled by post-Sanger sequencing technologies, especially molecular markers, have driven researchers to extend Ronald Fisher and Sewall Wright's models to confront new challenges. In particular, kernel methods are gaining consideration as a regression method of choice for genome-enabled prediction. Complex traits are presumably influenced by many genomic regions working in concert with others (clearly so when considering pathways), thus generating interactions. Motivated by this view, a growing number of statistical approaches based on kernels attempt to capture non-additive effects, either parametrically or non-parametrically. This review centers on whole-genome regression using kernel methods applied to a wide range of quantitative traits of agricultural importance in animals and plants. We discuss various kernel-based approaches tailored to capturing total genetic variation, with the aim of arriving at an enhanced predictive performance in the light of available genome annotation information. Connections between prediction machines born in animal breeding, statistics, and machine learning are revisited, and their empirical prediction performance is discussed. Overall, while some encouraging results have been obtained with non-parametric kernels, recovering non-additive genetic variation in a validation dataset remains a challenge in quantitative genetics.
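The kernel-based whole-genome regression reviewed above can be sketched as kernel ridge regression with a Gaussian (RBF) kernel over marker genotypes. This is a generic textbook sketch, not a method from the review; the toy genotypes, phenotypes, and hyperparameters are invented:

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian (RBF) kernel between two marker vectors; captures
    non-additive similarity, unlike a linear (additive) kernel."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting for an n x n system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(X, y, lam=0.1, gamma=0.5):
    """Kernel ridge regression: alpha = (K + lam*I)^-1 y."""
    n = len(X)
    K = [[rbf_kernel(X[i], X[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, y)

def krr_predict(X, alpha, x_new, gamma=0.5):
    """Predicted genetic value for a new genotype."""
    return sum(a * rbf_kernel(xi, x_new, gamma) for a, xi in zip(alpha, X))

X = [[0, 0], [1, 1], [2, 0]]   # toy marker genotypes
y = [1.0, 2.0, 1.5]            # toy phenotypes
alpha = krr_fit(X, y, lam=1e-9)
```

Real genome-enabled prediction uses thousands of animals and markers with cross-validated lam and gamma; the structure, a Gram matrix over genotypes and a regularized solve, is the same.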
Relationship between processing score and kernel-fraction particle size in whole-plant corn silage.
Dias Junior, G S; Ferraretto, L F; Salvati, G G S; de Resende, L C; Hoffman, P C; Pereira, M N; Shaver, R D
2016-04-01
Kernel processing increases starch digestibility in whole-plant corn silage (WPCS). Corn silage processing score (CSPS), the percentage of starch passing through a 4.75-mm sieve, is widely used to assess degree of kernel breakage in WPCS. However, the geometric mean particle size (GMPS) of the kernel-fraction that passes through the 4.75-mm sieve has not been well described. Therefore, the objectives of this study were (1) to evaluate particle size distribution and digestibility of kernels cut in varied particle sizes; (2) to propose a method to measure GMPS in WPCS kernels; and (3) to evaluate the relationship between CSPS and GMPS of the kernel fraction in WPCS. Composite samples of unfermented, dried kernels from 110 corn hybrids commonly used for silage production were kept whole (WH) or manually cut in 2, 4, 8, 16, 32 or 64 pieces (2P, 4P, 8P, 16P, 32P, and 64P, respectively). Dry sieving to determine GMPS, surface area, and particle size distribution using 9 sieves with nominal square apertures of 9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, and 0.59 mm and pan, as well as ruminal in situ dry matter (DM) digestibilities were performed for each kernel particle number treatment. Incubation times were 0, 3, 6, 12, and 24 h. The ruminal in situ DM disappearance of unfermented kernels increased with the reduction in particle size of corn kernels. Kernels kept whole had the lowest ruminal DM disappearance for all time points with maximum DM disappearance of 6.9% at 24 h and the greatest disappearance was observed for 64P, followed by 32P and 16P. Samples of WPCS (n=80) from 3 studies representing varied theoretical length of cut settings and processor types and settings were also evaluated. Each WPCS sample was divided in 2 and then dried at 60 °C for 48 h. The CSPS was determined in duplicate on 1 of the split samples, whereas on the other split sample the kernel and stover fractions were separated using a hydrodynamic separation procedure. 
After separation, the kernel fraction was redried at 60°C for 48 h in a forced-air oven and dry sieved to determine GMPS and surface area. Linear relationships between CSPS from WPCS (n=80) and kernel fraction GMPS, surface area, and proportion passing through the 4.75-mm screen were poor. Strong quadratic relationships between proportion of kernel fraction passing through the 4.75-mm screen and kernel fraction GMPS and surface area were observed. These findings suggest that hydrodynamic separation and dry sieving of the kernel fraction may provide a better assessment of kernel breakage in WPCS than CSPS. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
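The GMPS computation from dry sieving can be sketched as a mass-weighted geometric mean in the style of ASAE S319. This is an illustrative sketch, not the authors' exact procedure: the sieve masses are invented, and handling of the pan fraction (which has no lower aperture) is omitted:

```python
import math

def gmps(fractions):
    """Geometric mean particle size from dry sieving (ASAE S319-style sketch).
    fractions: (upper_mm, lower_mm, mass_g) per sieve; the nominal size of
    material retained on a sieve is the geometric mean of adjacent apertures."""
    total = sum(m for _, _, m in fractions)
    log_sum = sum(m * math.log(math.sqrt(u * l)) for u, l, m in fractions)
    return math.exp(log_sum / total)

# hypothetical kernel-fraction masses on the sieve stack used in the study
stack = [(9.50, 6.70, 0.5), (6.70, 4.75, 2.1), (4.75, 3.35, 8.4),
         (3.35, 2.36, 6.9), (2.36, 1.70, 3.2), (1.70, 1.18, 1.1)]
kernel_gmps_mm = gmps(stack)
```

With all mass on a single sieve the result reduces to the geometric mean of that sieve's bounding apertures, which is a quick sanity check on any implementation.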
Searching for efficient Markov chain Monte Carlo proposal kernels
Yang, Ziheng; Rodríguez, Carlos E.
2013-01-01
Markov chain Monte Carlo (MCMC) or the Metropolis–Hastings algorithm is a simulation algorithm that has made modern Bayesian statistical inference possible. Nevertheless, the efficiency of different Metropolis–Hastings proposal kernels has rarely been studied except for the Gaussian proposal. Here we propose a unique class of Bactrian kernels, which avoid proposing values that are very close to the current value, and compare their efficiency with a number of proposals for simulating different target distributions, with efficiency measured by the asymptotic variance of a parameter estimate. The uniform kernel is found to be more efficient than the Gaussian kernel, whereas the Bactrian kernel is even better. When optimal scales are used for both, the Bactrian kernel is at least 50% more efficient than the Gaussian. Implementation in a Bayesian program for molecular clock dating confirms the general applicability of our results to generic MCMC algorithms. Our results refute a previous claim that all proposals had nearly identical performance and will prompt further research into efficient MCMC proposals. PMID:24218600
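The Bactrian proposal above is a symmetric two-humped mixture of normals centred on the current value, so the Metropolis acceptance ratio needs no proposal correction. A minimal sketch (the target, tuning constants, and seed are illustrative choices, not from the paper):

```python
import math
import random

def bactrian_step(s=1.0, m=0.95):
    """Draw a Bactrian increment: equal mixture of N(+m*s, (1-m^2)*s^2) and
    N(-m*s, (1-m^2)*s^2), symmetric about 0 with overall variance s^2."""
    mu = m * s if random.random() < 0.5 else -m * s
    return random.gauss(mu, s * math.sqrt(1.0 - m * m))

def mh_sample(log_target, x0, n, s=1.0, m=0.95, seed=1):
    """Metropolis sampler using the symmetric Bactrian proposal."""
    random.seed(seed)
    x, chain = x0, []
    for _ in range(n):
        y = x + bactrian_step(s, m)
        # symmetric proposal: accept with prob min(1, pi(y)/pi(x))
        if math.log(random.random()) < log_target(y) - log_target(x):
            x = y
        chain.append(x)
    return chain

# illustrative target: a standard normal, sampled with a tuned step scale
chain = mh_sample(lambda v: -0.5 * v * v, 0.0, 20000, s=2.4)
```

Because the two humps sit away from the current value, consecutive states are pushed apart, which is the mechanism behind the reported efficiency gain over the single Gaussian proposal.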
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, Rodney Dale; Johnson, Jared A.; Collins, Jack Lee
A comparison study on carbon blacks and dispersing agents was performed to determine their impacts on the final properties of uranium fuel kernels with carbon. The main target compositions in this internal gelation study were 10 and 20 mol % uranium dicarbide (UC2), which is UC1.86, with the balance uranium dioxide. After heat treatment at 1900 K in flowing carbon monoxide in argon for 12 h, the density of the kernels produced using an X-energy proprietary carbon suspension, which is commercially available, ranged from 96% to 100% of theoretical density (TD), with full conversion of UC to UC2 at both carbon concentrations. However, higher carbon concentrations, such as a 2.5 mol ratio of carbon to uranium in the feed solutions, failed to produce gel spheres with the proprietary carbon suspension. The kernels using our former baseline of Mogul L carbon black and Tamol SN were 90-92% of TD with full conversion of UC to UC2 at a variety of carbon levels. Raven 5000 carbon black and Tamol SN were used to produce 10 mol % UC2 kernels with 95% of TD. However, an increase in the Raven 5000 concentration led to a kernel density below 90% of TD. Raven 3500 carbon black and Tamol SN were used to make very dense kernels without complete conversion to UC2. Lastly, the selection of the carbon black and dispersing agent is highly dependent on the desired final properties of the target kernels.
NASA Astrophysics Data System (ADS)
Hunt, R. D.; Johnson, J. A.; Collins, J. L.; McMurray, J. W.; Reif, T. J.; Brown, D. R.
2018-01-01
A comparison study on carbon blacks and dispersing agents was performed to determine their impacts on the final properties of uranium fuel kernels with carbon. The main target compositions in this internal gelation study were 10 and 20 mol % uranium dicarbide (UC2), which is UC1.86, with the balance uranium dioxide. After heat treatment at 1900 K in flowing carbon monoxide in argon for 12 h, the density of the kernels produced using an X-energy proprietary carbon suspension, which is commercially available, ranged from 96% to 100% of theoretical density (TD), with full conversion of UC to UC2 at both carbon concentrations. However, higher carbon concentrations, such as a 2.5 mol ratio of carbon to uranium in the feed solutions, failed to produce gel spheres with the proprietary carbon suspension. The kernels using our former baseline of Mogul L carbon black and Tamol SN were 90-92% of TD with full conversion of UC to UC2 at a variety of carbon levels. Raven 5000 carbon black and Tamol SN were used to produce 10 mol % UC2 kernels with 95% of TD. However, an increase in the Raven 5000 concentration led to a kernel density below 90% of TD. Raven 3500 carbon black and Tamol SN were used to make very dense kernels without complete conversion to UC2. The selection of the carbon black and dispersing agent is highly dependent on the desired final properties of the target kernels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biswas, Pratim; Al-Dahhan, Muthanna
2012-11-01
Tri-isotropic (TRISO) fuel particle coating is critical for the future use of nuclear energy produced by advanced gas reactors (AGRs). The fuel kernels are coated using chemical vapor deposition in a spouted fluidized bed. The challenges encountered in operating TRISO fuel coaters stem from the fact that in modern AGRs, such as High Temperature Gas Reactors (HTGRs), the acceptable level of defective/failed coated particles is essentially zero. This specification requires processes that produce coated spherical particles with even coatings having extremely low defect fractions. Unfortunately, the scale-up and design of the current processes and coaters have been based on empirical approaches, and the coaters are operated as black boxes. Hence, a voluminous amount of experimental development and trial-and-error work has been conducted. It has been clearly demonstrated that the quality of the coating applied to the fuel kernels is affected by the hydrodynamics, solids flow field, and flow regime characteristics of the spouted bed coaters, which are themselves influenced by design parameters and operating variables. Further complicating the outlook for future fuel-coating technology and nuclear energy production is the fact that a variety of new concepts will involve fuel kernels of different sizes and with compositions of different densities. Therefore, without a fundamental understanding of the underlying phenomena in the spouted bed TRISO coater, a significant amount of effort is required to produce each type of particle, with a significant risk of not meeting the specifications. This difficulty will significantly and negatively impact the application of AGRs for power generation and pose further challenges to their use as an alternative source of commercial energy production. Accordingly, the proposed work seeks to overcome such hurdles and advance the scale-up, design, and performance of TRISO fuel particle spouted bed coaters.
The overall objectives of the proposed work are to advance the fundamental understanding of the hydrodynamics by systematically investigating the effect of design and operating variables, to evaluate the reported dimensionless groups as scaling factors, and to establish a reliable scale-up methodology for the TRISO fuel particle spouted bed coaters based on hydrodynamic similarity via advanced measurement and computational techniques. An additional objective is to develop an on-line, non-invasive measurement technique based on gamma ray densitometry (i.e., nuclear gauge densitometry) that can be installed and used for coater process monitoring to ensure proper performance and operation and to facilitate the developed scale-up methodology. To achieve the objectives set for the project, the work will use optical probes and gamma ray computed tomography (CT) (for measurements of the cross-sectional distribution and radial profiles of solids/voidage holdup along the bed height, the spout diameter, and the fountain height) and radioactive particle tracking (RPT) (for measurements of the 3D solids flow field, velocity, turbulence parameters, circulation time, solids Lagrangian trajectories, and many other spouted-bed-related hydrodynamic parameters). In addition, gas dynamic measurement techniques and pressure transducers will be utilized to complement the obtained information. The measurements obtained by these techniques will be used as benchmark data to evaluate and validate computational fluid dynamics (CFD) models (two-fluid model or discrete particle model) and their closures. The validated CFD models and closures will be used to facilitate the developed methodology for scale-up, design, and hydrodynamic similarity. Successful execution of this work and the proposed tasks will advance the fundamental understanding of the coater flow field and quantify it for proper and safe design, scale-up, and performance.
Such achievements will overcome the barriers to AGR applications and will help assure that the US maintains nuclear energy as a feasible option to meet the nation's needs for energy and environmental safety. In addition, the outcome of the proposed study will have a broader impact on other processes that utilize spouted beds, such as coal gasification, granulation, drying, and catalytic reactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmreich, Grant W.; Hunn, John D.; Skitt, Darren J.
2017-02-01
Coated particle fuel batch J52O-16-93164 was produced by Babcock and Wilcox Technologies (BWXT) for possible selection as fuel for the Advanced Gas Reactor Fuel Development and Qualification (AGR) Program’s AGR-5/6/7 irradiation test in the Idaho National Laboratory (INL) Advanced Test Reactor (ATR), or for use as demonstration production-scale coated particle fuel for other experiments. The tristructural-isotropic (TRISO) coatings were deposited in a 150-mm-diameter production-scale fluidized-bed chemical vapor deposition (CVD) furnace onto 425-μm-nominal-diameter spherical kernels from BWXT lot J52L-16-69316. Each kernel contained a mixture of 15.5%-enriched uranium carbide and uranium oxide (UCO) and was coated with four consecutive CVD layers: a ~50% dense carbon buffer layer with 100-μm-nominal thickness, a dense inner pyrolytic carbon (IPyC) layer with 40-μm-nominal thickness, a silicon carbide (SiC) layer with 35-μm-nominal thickness, and a dense outer pyrolytic carbon (OPyC) layer with 40-μm-nominal thickness. The TRISO-coated particle batch was sieved to upgrade the particles by removing over-sized and under-sized material, and the upgraded batch was designated by appending the letter A to the end of the batch number (i.e., 93164A).
In-pile test results of U-silicide or U-nitride coated U-7Mo particle dispersion fuel in Al
NASA Astrophysics Data System (ADS)
Kim, Yeon Soo; Park, J. M.; Lee, K. H.; Yoo, B. O.; Ryu, H. J.; Ye, B.
2014-11-01
U-silicide or U-nitride coated U-Mo particle dispersion fuel in Al (U-Mo/Al) was in-pile tested to examine the effectiveness of the coating as a diffusion barrier between the U-7Mo fuel kernels and the Al matrix. This paper reports the PIE data and analyses, focusing on the effectiveness of the coating in terms of interaction layer (IL) growth and general fuel performance. The U-silicide coating showed considerable success, but the results also pointed to the need for further improvement of the coating process. The U-nitride coated specimen was largely ineffective in reducing IL growth. The test also yielded important observations that can be utilized to improve U-Mo/Al fuel performance: the heating process used for coating turned out to be beneficial in suppressing fuel swelling, and the use of larger fuel particles was confirmed to have favorable effects on fuel performance.
Irradiation performance of HTGR recycle fissile fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Homan, F.J.; Long, E.L. Jr.
1976-08-01
The irradiation performance of candidate HTGR recycle fissile fuel under accelerated testing conditions is reviewed. Failure modes for coated-particle fuels are described, and the performance of candidate recycle fissile fuels is discussed in terms of these failure modes. The bases on which UO2 and (Th,U)O2 were rejected as candidate recycle fissile fuels are outlined, along with the bases on which the weak-acid resin (WAR)-derived fissile fuel was selected as the reference recycle kernel. Comparisons are made of the irradiation behavior of WAR-derived fuels of varying stoichiometry, and conclusions are drawn about the optimum stoichiometry and the range of acceptable values. Plans for future testing in support of specification development, confirmation of the results of accelerated testing by real-time experiments, and improvement in fuel performance and reliability are described.
NASA Astrophysics Data System (ADS)
Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei
2018-05-01
A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving the particle size distribution (PSD) in an independent model with improved accuracy and stability. Compared with regularization alone or multi-wavelength least squares, the proposed method exhibited better anti-noise capability and higher accuracy and stability. While standard regularization typically makes use of the identity matrix, this choice is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
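The role of a second-order differential regularization matrix can be illustrated with a minimal Tikhonov inversion sketch (NumPy). The kernel matrix A, the synthetic "size distribution", the noise level, and the regularization parameter below are all illustrative stand-ins, not the paper's optical scattering kernel or fitted values:

```python
import numpy as np

def second_order_diff(n):
    # Discrete second-order differential operator used as the
    # Tikhonov regularization matrix (smoothness prior on the solution)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    return D

def tikhonov_solve(A, b, lam):
    # Minimize ||A x - b||^2 + lam^2 ||D x||^2 via the normal equations
    D = second_order_diff(A.shape[1])
    return np.linalg.solve(A.T @ A + lam ** 2 * (D.T @ D), A.T @ b)

# Illustrative ill-posed retrieval: a smooth "size distribution" x_true
# observed through a smoothing kernel matrix A with additive noise
n = 50
x_true = np.exp(-0.5 * ((np.arange(n) - 25.0) / 5.0) ** 2)
A = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 8.0)
b = A @ x_true + 0.01 * np.random.default_rng(0).standard_normal(n)
x_est = tikhonov_solve(A, b, lam=0.1)
print(np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true))
```

The second-difference penalty favors smooth solutions, which is why it suits broad, smooth PSD shapes better than the identity matrix does.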
Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party
NASA Astrophysics Data System (ADS)
Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi
The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from massive data collection and information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed a mediating party to help in the computation.
Carbon monoxide formation in UO2 kerneled HTR fuel particles containing oxygen getters
NASA Astrophysics Data System (ADS)
Proksch, E.; Strigl, A.; Nabielek, H.
1986-06-01
Mass spectrometric measurements of CO in irradiated UO2 kerneled HTR fuel particles containing various oxygen getters are summarized and evaluated. Uranium carbide addition in the 3 to 15% range reduces the CO release by factors between 25 and 80, up to burn-up levels as high as 70% FIMA. Unintentional gettering by SiC in TRISO coated particles with failed inner pyrocarbon layers results in CO reduction factors between 15 and 110. For ZrC, only somewhat ambiguous results have been obtained; most likely, ZrC results in CO reduction by a factor of about 40. Ce2O3 and La2O3 seem to be somewhat less effective than the three carbides; for Ce2O3, reduction factors between 3 and 15 have been found. However, these results are possibly incorrect due to premature oxidation of the getter during fabrication. Addition of SiO2 + Al2O3 has no influence on CO release at all.
Preparation of Simulated LBL Defects for Round Robin Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerczak, Tyler J.; Baldwin, Charles A.; Hunn, John D.
2016-01-01
A critical characteristic of the TRISO fuel design is its ability to retain fission products. During reactor operation, the TRISO layers act as barriers to the release of fission products not stabilized in the kernel. Each component of the TRISO particle and compact construction plays a unique role in retaining select fission products, and layer performance is often interrelated. The IPyC, SiC, and OPyC layers are barriers to the release of fission product gases such as Kr and Xe. The SiC layer provides the primary barrier to the release of metallic fission products not retained in the kernel, as transport across the SiC layer is rate limiting due to the greater permeability of the IPyC and OPyC layers to many metallic fission products. These attributes allow intact TRISO coatings to successfully retain most fission products released from the kernel, with the majority of fission products released during operation being due to defective, damaged, or failed coatings. This dominant release of fission products from compromised particles contributes to the overall source term in the reactor, causing safety and maintenance concerns and limiting the lifetime of the fuel. Under these considerations, an understanding of the nature and frequency of compromised particles is an important part of predicting the expected fission product release and ensuring safe and efficient operation.
Discrete element method as an approach to model the wheat milling process
USDA-ARS?s Scientific Manuscript database
It is a well-known phenomenon that break-release, particle size, and size distribution of wheat milling are functions of machine operational parameters and grain properties. Due to the non-uniformity of characteristics and properties of wheat kernels, the kernel physical and mechanical properties af...
Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan
2018-02-01
In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization, which iteratively improves the classification accuracy by adaptive data selection. We motivate each part separately and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing, and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed techniques.
NASA Astrophysics Data System (ADS)
Cho, Jeonghyun; Han, Cheolheui; Cho, Leesang; Cho, Jinsoo
2003-08-01
This paper treats the kernel function of an integral equation that relates a known or prescribed upwash distribution to an unknown lift distribution for a finite wing. The pressure kernel functions of the singular integral equation are summarized for all speed ranges in the Laplace transform domain. The sonic kernel function has been reduced to a form that can be conveniently evaluated as a finite limit from both the subsonic and supersonic sides as the Mach number tends to one. Several examples are solved, including rectangular wings, swept wings, a supersonic transport wing, and a harmonically oscillating wing. The present results are given together with other numerical data, showing continuous results through the unit Mach number. Computed results are in good agreement with other numerical results.
Dielectric relaxation measurement and analysis of restricted water structure in rice kernels
NASA Astrophysics Data System (ADS)
Yagihara, Shin; Oyama, Mikio; Inoue, Akio; Asano, Megumi; Sudo, Seiichi; Shinyashiki, Naoki
2007-04-01
Dielectric relaxation measurements were performed on rice kernels by time domain reflectometry (TDR) with flat-end coaxial electrodes. Difficulties in achieving good contact between the surfaces of the electrodes and the kernels were eliminated by a TDR set-up with a sample holder for a kernel, and the water content could be evaluated from the relaxation curves. Dielectric measurements were performed on rice kernels, rice flour, and boiled rice with various water contents, and the water amount and dynamic behaviour of water molecules were explained in terms of the restricted dynamics of water molecules and the τ-β diagram (relaxation time versus the relaxation-time distribution parameter of the Cole-Cole equation). In comparison with other aqueous systems, the dynamic structure of water in moist rice is more similar to that of aqueous dispersion systems than to that of aqueous solutions.
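As an aid to reading the τ-β discussion, here is a sketch of the Cole-Cole relaxation function in one common convention, with β as the relaxation-time distribution parameter. The parameter values are purely illustrative (water-like, not fitted to rice data), and conventions for the broadening exponent vary between papers:

```python
def cole_cole(omega, eps_inf, d_eps, tau, beta):
    # Cole-Cole complex permittivity (one common convention):
    #   eps*(w) = eps_inf + d_eps / (1 + (i w tau)^beta)
    # beta = 1 recovers a single Debye relaxation; smaller beta means
    # a broader distribution of relaxation times
    return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** beta)

# Illustrative water-like parameters (not fitted to rice data)
tau = 8.0e-12
w = 1.0 / tau  # angular frequency at the loss peak for beta = 1
debye = cole_cole(w, eps_inf=3.0, d_eps=75.0, tau=tau, beta=1.0)
broad = cole_cole(w, eps_inf=3.0, d_eps=75.0, tau=tau, beta=0.8)
# The dielectric loss (-imag) at the peak is d_eps/2 for Debye relaxation,
# and is reduced for the broadened (beta < 1) relaxation
print(-debye.imag, -broad.imag)
```

Plotting β against the fitted τ for different samples gives the τ-β diagram used in the abstract to compare the restriction of water dynamics across systems.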
Modeling utilization distributions in space and time
Keating, K.A.; Cherry, S.
2009-01-01
W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed. © 2009 by the Ecological Society of America.
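The wrapped Cauchy kernel idea for circular covariates such as day of year can be sketched as follows. This is an illustrative one-dimensional NumPy sketch, not the authors' product kernel implementation; the sample days and the concentration value rho are arbitrary:

```python
import numpy as np

def wrapped_cauchy(theta, mu, rho):
    # Wrapped Cauchy density on the circle; concentration rho in [0, 1)
    return (1.0 - rho ** 2) / (
        2.0 * np.pi * (1.0 + rho ** 2 - 2.0 * rho * np.cos(theta - mu))
    )

def circular_kde(theta_grid, samples, rho):
    # Kernel density estimate for circular data, averaging a wrapped
    # Cauchy kernel centred on each observed angle
    return np.mean([wrapped_cauchy(theta_grid, t, rho) for t in samples], axis=0)

# Days of year clustered around mid-June, mapped onto [0, 2*pi)
days = np.array([160.0, 165.0, 170.0, 172.0, 175.0])
theta = 2.0 * np.pi * days / 365.0
grid = np.linspace(0.0, 2.0 * np.pi, 365, endpoint=False)
dens = circular_kde(grid, theta, rho=0.9)
print(int(np.argmax(dens)))  # density peaks near the cluster of observed days
```

Because the kernel is periodic, observations near December 31 correctly contribute density to early January, which a linear kernel on day-of-year would miss.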
NASA Astrophysics Data System (ADS)
Liao, Meng; To, Quy-Dong; Léonard, Céline; Monchiet, Vincent
2018-03-01
In this paper, we use the molecular dynamics simulation method to study gas-wall boundary conditions. Discrete scattering information for gas molecules at the wall surface is obtained from collision simulations. The collision data can be used to identify the accommodation coefficients for parametric wall models such as the Maxwell and Cercignani-Lampis scattering kernels. Since these scattering kernels are based on a limited number of accommodation coefficients, we adopt non-parametric statistical methods to construct the kernel and overcome this limitation. Unlike parametric kernels, non-parametric kernels require no parameters (i.e., accommodation coefficients) and no predefined distribution. We also propose approaches to derive directly the Navier friction and Kapitza thermal resistance coefficients, as well as other interface coefficients associated with moment equations, from the non-parametric kernels. The methods are applied successfully to systems composed of CH4 or CO2 and graphite, which are of interest to the petroleum industry.
A Kernel Embedding-Based Approach for Nonstationary Causal Model Inference.
Hu, Shoubo; Chen, Zhitang; Chan, Laiwan
2018-05-01
Although nonstationary data are more common in the real world, most existing causal discovery methods do not take nonstationarity into consideration. In this letter, we propose a kernel embedding-based approach, ENCI, for nonstationary causal model inference where data are collected from multiple domains with varying distributions. In ENCI, we transform the complicated relation of a cause-effect pair into a linear model of variables whose observations correspond to the kernel embeddings of the cause and effect distributions in different domains. In this way, we are able to estimate the causal direction by exploiting the causal asymmetry of the transformed linear model. Furthermore, we extend ENCI to causal graph discovery for multiple variables by transforming the relations among them into a linear non-Gaussian acyclic model. We show that by exploiting the nonstationarity of distributions, both cause-effect pairs and two kinds of causal graphs are identifiable under mild conditions. Experiments on synthetic and real-world data are conducted to justify the efficacy of ENCI over major existing methods.
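The kernel embedding of a distribution referred to above can be made concrete with the empirical mean embedding and the maximum mean discrepancy (MMD), the standard distance between embeddings. This is a generic illustration of the embedding machinery, not the ENCI algorithm; the Gaussian samples and the bandwidth parameter gamma are arbitrary:

```python
import numpy as np

def rbf(x, y, gamma):
    # Gaussian RBF kernel matrix for one-dimensional samples
    return np.exp(-gamma * (x[:, None] - y[None, :]) ** 2)

def mmd2(x, y, gamma):
    # Squared maximum mean discrepancy: the squared RKHS distance between
    # the empirical kernel mean embeddings of the two samples
    return (rbf(x, x, gamma).mean() + rbf(y, y, gamma).mean()
            - 2.0 * rbf(x, y, gamma).mean())

rng = np.random.default_rng(0)
x1 = rng.normal(0.0, 1.0, 500)   # two samples from the same distribution
x2 = rng.normal(0.0, 1.0, 500)
y = rng.normal(3.0, 1.0, 500)    # a sample from a shifted distribution
print(mmd2(x1, x2, 0.5), mmd2(x1, y, 0.5))
```

Samples from the same distribution give a near-zero MMD, while a distribution shift gives a clearly larger value, which is what makes embeddings usable as "observations" of whole distributions across domains.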
NASA Astrophysics Data System (ADS)
Sardet, Laure; Patilea, Valentin
When pricing a specific insurance premium, an actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as the lognormal, Weibull, and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem to be quite well adapted to capture the skewness, the long tails, and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically with two or three components. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data and of the quantiles of the individual claims distribution in a non-life insurance application.
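The transform-then-smooth pipeline can be sketched as follows. This is a simplified illustration in which a single known lognormal stands in for the fitted parsimonious mixture, and the bandwidth b = 0.05 is arbitrary rather than the paper's bandwidth rule:

```python
import numpy as np
from scipy.stats import beta, lognorm

def beta_kernel_density(x, u, b):
    # Chen-style beta-kernel estimator on (0, 1): the kernel at grid point
    # x is the Beta(x/b + 1, (1 - x)/b + 1) density evaluated at the data u,
    # which avoids boundary bias at 0 and 1
    a1 = x[:, None] / b + 1.0
    a2 = (1.0 - x[:, None]) / b + 1.0
    return beta.pdf(u[None, :], a1, a2).mean(axis=1)

# Skewed "claims" data; the fitted mixture CDF is stood in for by the
# true lognormal CDF, so the transformed data u are approximately uniform
model = lognorm(s=1.0)
claims = model.rvs(size=500, random_state=1)
u = model.cdf(claims)                       # CDF transform onto (0, 1)

x = np.linspace(0.01, 0.99, 99)
g = beta_kernel_density(x, u, b=0.05)       # smooth density estimate on (0, 1)
f = g * model.pdf(model.ppf(x))             # back-transformed claims density
print(g.mean())
```

When the parametric CDF fits well, g stays close to 1 and the back-transform reproduces the parametric density; where it misfits, the beta-kernel estimate bends g away from 1, providing the automatic fine-tuning described above.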
Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila
2018-05-07
Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric and positive, always provides 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly well suited for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than the local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, the hidden Markov model based SAM and Fisher kernel, and the protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
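To make the code word idea concrete, here is a toy compression-based similarity in the spirit of the LZW-Kernel: collect the dictionary phrases an LZW pass builds over each sequence and compare the phrase sets. This simplified sketch (using a Jaccard overlap) is not the authors' kernel definition, which should be taken from the paper and the linked repository:

```python
def lzw_codewords(s):
    # Collect the dictionary phrases (code words) built by an LZW pass over s
    phrases = {c for c in s}
    w = ""
    for c in s:
        wc = w + c
        if wc in phrases:
            w = wc          # extend the current phrase
        else:
            phrases.add(wc)  # register a new code word
            w = c
    return phrases

def lzw_similarity(a, b):
    # Jaccard overlap of LZW code word sets: alignment-free, symmetric,
    # and exactly 1.0 for self-comparison
    pa, pb = lzw_codewords(a), lzw_codewords(b)
    return len(pa & pb) / len(pa | pb)

print(lzw_similarity("MKVLAT", "MKVLAT"))  # 1.0 for identical sequences
```

Like the real LZW-Kernel, this needs only a single pass per sequence, which is what makes compression-derived features attractive at large scale.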
USDA-ARS?s Scientific Manuscript database
Specific wheat protein fractions are known to have distinct associations with wheat quality traits. Research was conducted on 10 hard spring wheat cultivars grown at two North Dakota locations to identify protein fractions that affected wheat kernel characteristics and breadmaking quality. SDS ext...
A Frequency-List of Sentence Structures: Distribution of Kernel Sentences
ERIC Educational Resources Information Center
Geens, Dirk
1974-01-01
A corpus of 10,000 sentences extracted from British theatrical texts was used to construct a frequency list of kernel sentence structures. Thirty-one charts illustrate the analyzed results. The procedures used and an interpretation of the frequencies are given. Such lists might aid foreign language teachers in course organization. Available from…
Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen
2016-07-07
Sparse representation based classification (SRC) has been developed and has shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with highly nonlinear distributions. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC that can remedy this drawback. KSRC requires a predetermined kernel function, and selection of the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC considers only the within-class reconstruction residual, ignoring the between-class relationship, when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and we use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that in the low-dimensional subspace the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve minimum overall loss when performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. Solutions for the proposed method can be found efficiently using the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm over state-of-the-art methods.
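The basic object that multiple kernel learning methods optimize is a weighted combination of base kernel matrices. The following sketch builds such a combination from RBF base kernels with fixed (not learned) weights; the data, the gamma values, and the weights are all illustrative:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Gaussian RBF base kernel matrix between two sets of row-vector samples
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def combined_kernel(X, Y, gammas, weights):
    # Convex combination of base kernels -- the quantity whose weights an
    # MKL method such as MKL-SRC would learn; here they are fixed
    return sum(w * rbf_kernel(X, Y, g) for g, w in zip(gammas, weights))

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
K = combined_kernel(X, X, gammas=[0.1, 1.0, 10.0], weights=[0.5, 0.3, 0.2])
print(K.shape)
```

Because each base kernel is positive semidefinite and the weights are nonnegative, the combined matrix remains a valid kernel, so it can be used wherever a single predetermined kernel would be.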
Mapping QTLs controlling kernel dimensions in a wheat inter-varietal RIL mapping population.
Cheng, Ruiru; Kong, Zhongxin; Zhang, Liwei; Xie, Quan; Jia, Haiyan; Yu, Dong; Huang, Yulong; Ma, Zhengqiang
2017-07-01
Seven kernel dimension QTLs were identified in wheat, and kernel thickness was found to be the most important dimension for grain weight improvement. Kernel morphology and weight of wheat (Triticum aestivum L.) affect both yield and quality; however, the genetic basis of these traits and their interactions has not been fully understood. In this study, to investigate the genetic factors affecting kernel morphology and the association of kernel morphology traits with kernel weight, kernel length (KL), width (KW) and thickness (KT) were evaluated, together with hundred-grain weight (HGW), in a recombinant inbred line population derived from Nanda 2419 × Wangshuibai, with data from five trials (two locations over 3 years). The results showed that HGW was more closely correlated with KT and KW than with KL. A whole genome scan revealed four QTLs for KL, one for KW and two for KT, distributed on five different chromosomes. Of them, QKl.nau-2D for KL, and QKt.nau-4B and QKt.nau-5A for KT were newly identified major QTLs for the respective traits, explaining up to 32.6 and 41.5% of the phenotypic variations, respectively. Increases in KW and KT and reductions in the KL/KT and KW/KT ratios always resulted in significantly higher grain weight. Lines combining the Nanda 2419 alleles of the 4B and 5A intervals had wider, thicker, rounder kernels and a 14% higher grain weight in the genotype-based analysis. A strong, negative linear relationship of the KW/KT ratio with grain weight was observed. It thus appears that kernel thickness is the most important kernel dimension factor in wheat improvement for higher yield. Mapping and marker identification of the kernel dimension-related QTLs will definitely help realize the breeding goals.
Spatial patterns of aflatoxin levels in relation to ear-feeding insect damage in pre-harvest corn.
Ni, Xinzhi; Wilson, Jeffrey P; Buntin, G David; Guo, Baozhu; Krakowsky, Matthew D; Lee, R Dewey; Cottrell, Ted E; Scully, Brian T; Huffaker, Alisa; Schmelz, Eric A
2011-07-01
Key impediments to increased corn yield and quality in the southeastern US coastal plain region are damage by ear-feeding insects and aflatoxin contamination caused by infection of Aspergillus flavus. Key ear-feeding insects are corn earworm, Helicoverpa zea, fall armyworm, Spodoptera frugiperda, maize weevil, Sitophilus zeamais, and brown stink bug, Euschistus servus. In 2006 and 2007, aflatoxin contamination and insect damage were sampled before harvest in three 0.4-hectare corn fields using a grid sampling method. The feeding damage by each of ear/kernel-feeding insects (i.e., corn earworm/fall armyworm damage on the silk/cob, and discoloration of corn kernels by stink bugs), and maize weevil population were assessed at each grid point with five ears. The spatial distribution pattern of aflatoxin contamination was also assessed using the corn samples collected at each sampling point. Aflatoxin level was correlated to the number of maize weevils and stink bug-discolored kernels, but not closely correlated to either husk coverage or corn earworm damage. Contour maps of the maize weevil populations, stink bug-damaged kernels, and aflatoxin levels exhibited an aggregated distribution pattern with a strong edge effect on all three parameters. The separation of silk- and cob-feeding insects from kernel-feeding insects, as well as chewing (i.e., the corn earworm and maize weevil) and piercing-sucking insects (i.e., the stink bugs) and their damage in relation to aflatoxin accumulation is economically important. Both theoretic and applied ramifications of this study were discussed by proposing a hypothesis on the underlying mechanisms of the aggregated distribution patterns and strong edge effect of insect damage and aflatoxin contamination, and by discussing possible management tactics for aflatoxin reduction by proper management of kernel-feeding insects. Future directions on basic and applied research related to aflatoxin contamination are also discussed.
NASA Astrophysics Data System (ADS)
Papageorge, Michael J.; Arndt, Christoph; Fuest, Frederik; Meier, Wolfgang; Sutton, Jeffrey A.
2014-07-01
In this manuscript, we describe an experimental approach to simultaneously measure high-speed image sequences of the mixture fraction and temperature fields during pulsed, turbulent fuel injection into a high-temperature, co-flowing, vitiated oxidizer stream. The quantitative mixture fraction and temperature measurements are determined from 10-kHz-rate planar Rayleigh scattering and a robust data-processing methodology that is accurate from fuel injection to the onset of auto-ignition. In addition, the data processing is shown to yield accurate temperature measurements following ignition, allowing observation of the initial evolution of the "burning" temperature field. High-speed OH* chemiluminescence (CL) was used to determine the spatial location of the initial auto-ignition kernel. To ensure that the ignition kernel formed inside the Rayleigh scattering laser light sheet, OH* CL was observed in two viewing planes, one near-parallel and one perpendicular to the laser sheet. The high-speed laser measurements are enabled by a unique high-energy pulse-burst laser system, which generates long-duration bursts of ultra-high pulse energies at 532 nm (>1 J) suitable for planar Rayleigh scattering imaging. A particular focus of this study was to characterize the fidelity of the measurements in terms of both precision and accuracy, which includes facility operating and boundary conditions and measurement of the signal-to-noise ratio (SNR). The mixture fraction and temperature fields deduced from the high-speed planar Rayleigh scattering measurements exhibited SNR values greater than 100 at temperatures exceeding 1,300 K. The accuracy of the measurements was determined by comparing the current mixture fraction results to those of "cold", isothermal, non-reacting jets. All profiles, when properly normalized, exhibited self-similarity and collapsed upon one another. 
Finally, example mixture fraction, temperature, and OH* emission sequences are presented for a variety of fuel and vitiated oxidizer combinations. For all cases considered, auto-ignition occurred at the periphery of the fuel jet, under very "lean" conditions, where the local mixture fraction was less than the stoichiometric mixture fraction ( ξ < ξ s). Furthermore, the ignition kernel formed in regions of low scalar dissipation rate, which agrees with previous results from direct numerical simulations.
Using kernel density estimation to understand the influence of neighbourhood destinations on BMI
King, Tania L; Bentley, Rebecca J; Thornton, Lukar E; Kavanagh, Anne M
2016-01-01
Objectives Little is known about how the distribution of destinations in the local neighbourhood is related to body mass index (BMI). Kernel density estimation (KDE) is a spatial analysis technique that accounts for the location of features relative to each other. Using KDE, this study investigated whether individuals living near destinations (shops and service facilities) that are more intensely distributed rather than dispersed have lower BMIs. Study design and setting A cross-sectional study of 2349 residents of 50 urban areas in metropolitan Melbourne, Australia. Methods Destinations were geocoded, and kernel density estimates of destination intensity were created using kernels of 400, 800 and 1200 m. Using multilevel linear regression, the association between destination intensity (classified in quintiles Q1 (least) to Q5 (most)) and BMI was estimated in models that adjusted for the following confounders: age, sex, country of birth, education, dominant household occupation, household type, disability/injury and area disadvantage. Separate models included a physical activity variable. Results For kernels of 800 and 1200 m, there was an inverse relationship between BMI and more intensely distributed destinations (compared to areas with the least destination intensity). Effects were significant at 1200 m: Q4, β −0.86, 95% CI −1.58 to −0.13, p=0.022; Q5, β −1.03, 95% CI −1.65 to −0.41, p=0.001. Inclusion of physical activity in the models attenuated the effects, although they remained marginally significant for Q5 at 1200 m: β −0.77, 95% CI −1.52 to −0.02, p=0.045. Conclusions This study, conducted within urban Melbourne, Australia, found that participants living in areas of greater destination intensity within 1200 m of home had lower BMIs. Effects were partly explained by physical activity. The results suggest that increasing the intensity of destination distribution could reduce BMI levels by encouraging higher levels of physical activity. PMID:26883235
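The destination-intensity surface described above can be sketched with a quartic (biweight) kernel, a form commonly used in GIS kernel density estimation. The function name, the choice of kernel, and the normalization below are illustrative assumptions, not the study's actual implementation:

```python
import numpy as np

def destination_intensity(destinations, location, bandwidth):
    """Quartic-kernel density estimate of destination intensity at `location`.

    destinations : (n, 2) array of x/y coordinates in metres
    location     : (2,)  array, the point where intensity is evaluated
    bandwidth    : kernel radius in metres (e.g. 400, 800 or 1200)
    """
    dests = np.asarray(destinations, dtype=float)
    loc = np.asarray(location, dtype=float)
    # Distance from the evaluation point to every destination
    d = np.linalg.norm(dests - loc, axis=1)
    u = d / bandwidth
    # Quartic (biweight) kernel: zero weight outside the bandwidth radius
    weights = np.where(u < 1.0, (15.0 / 16.0) * (1.0 - u**2) ** 2, 0.0)
    return float(weights.sum() / bandwidth**2)
```

Destinations well beyond the bandwidth contribute nothing, so intensity falls to zero away from any cluster, which is the "intense versus dispersed" contrast the study exploits.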
Muñoz, Jesús Escrivá; Gambús, Pedro; Jensen, Erik W; Vallverdú, Montserrat
2018-01-01
This work investigates the time-frequency content of impedance cardiography (ICG) signals during propofol-remifentanil anesthesia. In recent years, ICG has gained considerable attention; however, ICG signals need further investigation. Time-frequency distributions (TFDs) with 5 different kernels are used to analyze ICG signals before the start of anesthesia and after the loss of consciousness. In total, ICG signals from one hundred and thirty-one consecutive patients undergoing major surgery under general anesthesia were analyzed. Several features were extracted from the calculated TFDs to characterize the time-frequency content of the ICG signals, and differences between those features before and after the loss of consciousness were studied. The Extended Modified Beta Distribution (EMBD) was the kernel for which most features showed statistically significant changes between before and after the loss of consciousness. Among all analyzed features, those based on entropy showed a sensitivity, specificity, and area under the receiver operating characteristic curve above 60%. The anesthetic state of the patient is reflected in linear and non-linear features extracted from the TFDs of the ICG signals. In particular, the EMBD is a suitable kernel for the analysis of ICG signals and offers a wide range of features that change with the patient's anesthetic state in a statistically significant way. Schattauer GmbH.
3D thermal modeling of TRISO fuel coupled with neutronic simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Jianwei; Uddin, Rizwan
2010-01-01
The Very High Temperature Gas Reactor (VHTR) is widely considered one of the top candidates identified in the Next Generation Nuclear Plant (NGNP) Technology Roadmap under the U.S. Department of Energy's Generation IV program. The TRISO particle is a common element among different VHTR designs, and its performance is critical to the safety and reliability of the whole reactor. A TRISO particle experiences complex thermo-mechanical changes during reactor operation under high-temperature and high-burnup conditions. TRISO fuel performance analysis requires evaluation of these changes on the micro scale. Since most of these changes are temperature dependent, 3D thermal modeling of TRISO fuel is a crucial step of the whole analysis package. In this paper, a 3D numerical thermal model was developed to calculate the temperature distribution inside a TRISO particle and a pebble under different scenarios. 3D simulation is required because pebbles and TRISO particles are always subjected to asymmetric thermal conditions, since they are randomly packed together. The numerical model was developed using the finite difference method, and it was benchmarked against 1D analytical results and against results reported in the literature. Monte Carlo models were set up to calculate the radial power density profile. A complex convective boundary condition was applied on the pebble outer surface. Three reactors were simulated using this model to calculate temperature distributions under different power levels. Two asymmetric boundary conditions were applied to the pebble to test the 3D capabilities. A gas bubble was hypothesized inside the TRISO kernel, and a 3D simulation was also carried out for this scenario. Results consistent with physical intuition were obtained and are reported in this paper.
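The 1D analytical benchmark mentioned above — steady conduction in a sphere with uniform heat generation and a fixed surface temperature, where T(r) = T_s + q(R² − r²)/(6k) — can be sketched with a simple radial finite-difference solver. This is an illustrative reconstruction under assumed properties, not the authors' code:

```python
import numpy as np

def sphere_temperature_fd(radius, k, q, t_surface, n=200):
    """Steady 1D radial conduction in a solid sphere with uniform heat
    generation q (W/m^3), conductivity k (W/m/K), and a fixed surface
    temperature, discretizing (1/r^2) d/dr (r^2 dT/dr) = -q/k in
    conservative (finite-volume) form on a uniform grid."""
    r = np.linspace(0.0, radius, n)
    dr = r[1] - r[0]
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0], A[0, 1] = 1.0, -1.0            # symmetry at centre: T0 = T1
    for i in range(1, n - 1):
        rp = 0.5 * (r[i] + r[i + 1])        # outer face radius
        rm = 0.5 * (r[i] + r[i - 1])        # inner face radius
        A[i, i - 1] = rm**2
        A[i, i] = -(rp**2 + rm**2)
        A[i, i + 1] = rp**2
        b[i] = -(q / k) * r[i] ** 2 * dr**2
    A[-1, -1], b[-1] = 1.0, t_surface       # Dirichlet surface condition
    return r, np.linalg.solve(A, b)
```

For a 3 cm pebble with assumed k = 30 W/m/K, q = 10⁷ W/m³, and a 900 K surface, the centre temperature should approach the analytic value T_s + qR²/(6k) = 950 K.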
Xu, Xiaoping; Huang, Qingming; Chen, Shanshan; Yang, Peiqiang; Chen, Shaojiang; Song, Yiqiao
2016-01-01
One of the modern crop breeding techniques uses doubled haploid plants, which contain an identical pair of chromosomes, to accelerate the breeding process. A rapid haploid identification method is critical for large-scale selection of doubled haploids. The conventional methods, based on the color of the endosperm and embryo of seeds, are slow, manual, and prone to error. On the other hand, there is a significant difference in oil content between diploid and haploid seeds generated with a high-oil inducer, which makes it possible to use oil content to identify haploids. This paper describes a fully automated high-throughput NMR screening system for maize haploid kernel identification. The system comprises a sampler unit that selects a single kernel and feeds it for NMR and weight measurement, and a kernel sorter that distributes the kernel according to the measurement result. Tests of the system show a consistent accuracy of 94% with an average screening time of 4 seconds per kernel. Field test results are described, and directions for future improvement are discussed. PMID:27454427
Adsorption of mercury from aqueous solutions using palm oil fuel ash as an adsorbent - batch studies
NASA Astrophysics Data System (ADS)
Imla Syafiqah, M. S.; Yussof, H. W.
2018-03-01
Palm oil fuel ash (POFA) is one of the most abundantly produced waste materials of the oil palm industry, collected as ash from the burning of empty fruit bunch fiber (EFB) and palm oil kernel shells (POKS) as boiler fuel to generate electricity. In this study, POFA was prepared and used for the removal of mercury(II) ions from the aqueous phase. Adsorption was conducted in a batch process to study the effects of contact time (0–360 min), temperature (15–45 °C), and initial Hg(II) ion concentration (1–5 mg/L) on the removal of Hg(II) ions. The surface was characterized by scanning electron microscopy (SEM) and particle size distribution analysis. The highest Hg(II) ion removal was 99.60% at pH 7, a contact time of 4 h, an initial Hg(II) ion concentration of 1 mg/L, an adsorbent dosage of 0.25 g, and an agitation speed of 100 rpm. The results imply that POFA has potential as a low-cost and environmentally friendly adsorbent for the removal of mercury from aqueous solution.
Perspective of laser-induced plasma ignition of hydrocarbon fuel in Scramjet engine
NASA Astrophysics Data System (ADS)
Yang, Leichao; Li, Xiaohui; Liang, Jianhan; Yu, Xin; Li, Xipeng
2016-01-01
Laser-induced plasma ignition of an ethylene-fuelled cavity was successfully conducted in a model scramjet engine combustor. The ethylene was injected 10 mm upstream of the cavity flameholder from 3 orifices inclined 60° relative to the freestream direction. The 1064 nm laser beam, from a Q-switched Nd:YAG laser source running at 3 Hz and 200 mJ per pulse, was focused into the cavity for ignition. High-speed photography was used to capture the transient ignition process. The laser-induced gas breakdown, flame kernel generation, and flame propagation were all recorded, and stable supersonic combustion was subsequently established in the cavity. The flame kernel is found to rotate anti-clockwise and gradually move upwards due to entrainment by the recirculating flow in the cavity. The flame is then stretched from the leading edge to the trailing edge until it fully fills the entire cavity. Eventually, stable combustion is achieved roughly 900 μs after the laser pulse. The results show promising potential for practical application. The perspective of laser-induced plasma ignition of hydrocarbon fuel in scramjet engines is outlined.
Visualization of Oil Body Distribution in Jatropha curcas L. by Four-Wave Mixing Microscopy
NASA Astrophysics Data System (ADS)
Ishii, Makiko; Uchiyama, Susumu; Ozeki, Yasuyuki; Kajiyama, Sin'ichiro; Itoh, Kazuyoshi; Fukui, Kiichi
2013-06-01
Jatropha curcas L. (jatropha) is a superior oil crop for biofuel production. To improve the oil yield of jatropha by breeding, the development of effective and reliable tools to evaluate oil production efficiency is essential. The characteristics of the jatropha kernel, which contains a large amount of oil, are not yet fully understood. Here, we demonstrate the application of four-wave mixing (FWM) microscopy to visualize the distribution of oil bodies in a jatropha kernel without staining. FWM microscopy enables us to visualize the size and morphology of oil bodies and to determine the oil content of the kernel to be 33.2%. The signal obtained from FWM microscopy comprises both stimulated parametric emission (SPE) and coherent anti-Stokes Raman scattering (CARS) signals. In the present situation, where a very short pump pulse is employed, the SPE signal is believed to dominate the FWM signal.
NASA Technical Reports Server (NTRS)
Cunningham, A. M., Jr.
1973-01-01
The method presented uses a collocation technique with the nonplanar kernel function to solve supersonic lifting surface problems with and without interference. A set of pressure functions is developed based on conical flow theory solutions that account for discontinuities in supersonic pressure distributions. These functions permit faster solution convergence than is possible with conventional supersonic pressure functions. An improper integral of a 3/2-power singularity along the Mach hyperbola of the nonplanar supersonic kernel function is described and treated. The method is compared with other theories and with experiment for a variety of cases.
Skerjanc, William F.; Maki, John T.; Collin, Blaise P.; ...
2015-12-02
The success of modular high temperature gas-cooled reactors is highly dependent on the performance of the tristructural-isotropic (TRISO) coated fuel particle and the quality to which it can be manufactured. During irradiation, TRISO-coated fuel particles act as a pressure vessel to contain fission gas and mitigate the diffusion of fission products to the coolant boundary. The fuel specifications place limits on key attributes to minimize fuel particle failure under irradiation and postulated accident conditions. PARFUME (an integrated mechanistic coated particle fuel performance code developed at the Idaho National Laboratory) was used to calculate fuel particle failure probabilities. By systematically varying key TRISO-coated particle attributes, failure probability functions were developed to understand how each attribute contributes to fuel particle failure. Critical manufacturing limits were calculated for the key attributes of a low-enriched TRISO-coated nuclear fuel particle with a kernel diameter of 425 μm. These critical manufacturing limits identify the ranges beyond which an increase in fuel particle failure probability is expected to occur.
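The pressure-vessel idealization above can be illustrated with the textbook thin-shell hoop stress formula σ = pr/(2t) for a spherical shell; this is a back-of-the-envelope estimate with assumed numbers, not PARFUME's mechanistic failure model:

```python
def hoop_stress(p, r, t):
    """Thin-shell tangential (hoop) stress in a spherical pressure vessel:
    sigma = p * r / (2 * t), with internal pressure p (Pa), shell mid-radius
    r (m), and shell thickness t (m)."""
    return p * r / (2.0 * t)

# Hypothetical TRISO-like numbers: 10 MPa internal gas pressure acting on a
# SiC layer of ~210 um radius and ~35 um thickness
sigma = hoop_stress(10e6, 210e-6, 35e-6)   # ~30 MPa tensile hoop stress
```

Failure probability in a code like PARFUME then follows from comparing such stresses (computed far more carefully, layer by layer) against the statistical strength of the coating.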
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
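The model being sampled can be sketched with a naive quadratic-time generator; note the paper's contribution is precisely a faster, sub-quadratic algorithm, so this sketch only illustrates the model, and the vertex-type assignment and example kernel are assumptions:

```python
import random

def sample_kernel_graph(n, kernel, seed=None):
    """Naive O(n^2) sampler of an inhomogeneous random kernel graph.

    Vertex i gets a type x_i = (i + 0.5)/n in (0, 1); edge {i, j} appears
    independently with probability min(1, kernel(x_i, x_j)/n), which keeps
    the expected degree bounded and the graph sparse for integrable kernels.
    """
    rng = random.Random(seed)
    xs = [(i + 0.5) / n for i in range(n)]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = min(1.0, kernel(xs[i], xs[j]) / n)
            if rng.random() < p:
                edges.append((i, j))
    return edges

# Example kernel producing heavy-tailed degrees (illustrative choice)
g = sample_kernel_graph(1000, lambda x, y: 0.2 / (x * y) ** 0.5, seed=42)
```

Replacing the double loop with the paper's skipping technique is what brings the cost down to roughly O(n(log n)²) for many practical kernels.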
Analysis of nonlocal neural fields for both general and gamma-distributed connectivities
NASA Astrophysics Data System (ADS)
Hutt, Axel; Atay, Fatihcan M.
2005-04-01
This work studies the stability of equilibria in spatially extended neuronal ensembles. We first derive the model equation from statistical properties of the neuron population. The obtained integro-differential equation includes synaptic and space-dependent transmission delay for both general and gamma-distributed synaptic connectivities. The latter connectivity type reveals infinite, finite, and vanishing self-connectivities. The work derives conditions for stationary and nonstationary instabilities for both kernel types. In addition, a nonlinear analysis for general kernels yields the order parameter equation of the Turing instability. To compare the results to findings for partial differential equations (PDEs), two typical PDE-types are derived from the examined model equation, namely the general reaction-diffusion equation and the Swift-Hohenberg equation. Hence, the discussed integro-differential equation generalizes these PDEs. In the case of the gamma-distributed kernels, the stability conditions are formulated in terms of the mean excitatory and inhibitory interaction ranges. As a novel finding, we obtain Turing instabilities in fields with local inhibition-lateral excitation, while wave instabilities occur in fields with local excitation and lateral inhibition. Numerical simulations support the analytical results.
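The delayed integro-differential field equation described above can be written, in generic Amari-type notation (the symbols here are illustrative, not necessarily the paper's exact ones), with a gamma-distributed connectivity kernel as one example:

```latex
\tau \frac{\partial V(x,t)}{\partial t}
  = -V(x,t)
  + \int_{\Omega} K\big(|x-y|\big)\, S\!\left[ V\!\left(y,\; t - \tfrac{|x-y|}{c}\right) \right] \mathrm{d}y
  + I(x,t),
\qquad
K(r) = \frac{r^{\,p-1}}{2\,\xi^{\,p}\,\Gamma(p)}\, e^{-r/\xi}
```

Here τ is the synaptic time constant, c the axonal transmission speed (giving the space-dependent delay |x − y|/c), S a sigmoidal firing-rate function, and the gamma-shaped K(r) reproduces the infinite (p < 1), finite (p = 1), or vanishing (p > 1) self-connectivity cases mentioned in the abstract.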
NASA Astrophysics Data System (ADS)
Hsieh, M.; Zhao, L.; Ma, K.
2010-12-01
The finite-frequency approach enables seismic tomography to fully utilize the spatial and temporal distributions of the seismic wavefield to improve resolution. In achieving this goal, one of the most important tasks is to compute efficiently and accurately the (Fréchet) sensitivity kernels of finite-frequency seismic observables, such as traveltime and amplitude, to perturbations of model parameters. In the scattering-integral approach, the Fréchet kernels are expressed in terms of the strain Green tensors (SGTs), and a pre-established SGT database is necessary to achieve practical efficiency for a three-dimensional reference model in which the SGTs must be calculated numerically. Methods for computing Fréchet kernels for seismic velocities have long been established. In this study, we develop algorithms based on the finite-difference method for calculating Fréchet kernels for the quality factor Qμ and seismic boundary topography. Kernels for the quality factor can be obtained in a way similar to those for seismic velocities with the help of the Hilbert transform. The effects of seismic velocities and the quality factor on either traveltime or amplitude are coupled. Kernels for boundary topography involve the spatial gradient of the SGTs, and they also exhibit interesting finite-frequency characteristics. Examples of quality factor and boundary topography kernels are shown for a realistic model of the Taiwan region with three-dimensional velocity variation as well as surface and Moho discontinuity topography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryu, H.; Asanuma, T.
1989-01-01
Two-dimensional combustion processes in a spark ignition engine with and without an unscavenged horizontal prechamber are calculated numerically using a κ–ε turbulence model, a flame kernel ignition model, and an irreversible reaction model to obtain a better understanding of the spatial and temporal distributions of flow and combustion. The simulation results are compared with measured results under the same operating conditions as the experiments, that is, minimum spark advance for best torque (MBT), volumetric efficiency of 80 ± 2%, air-fuel ratio of 15, and engine speed of 1000 rpm, with various torch nozzle areas and an open chamber. The flow and combustion characteristics calculated for the S.I. engine with and without a prechamber are then discussed to examine the effect of the torch jet on the velocity vectors and on contour maps of turbulence and gas temperature.
Should I Stay or Should I Go? A Habitat-Dependent Dispersal Kernel Improves Prediction of Movement
Vinatier, Fabrice; Lescourret, Françoise; Duyck, Pierre-François; Martin, Olivier; Senoussi, Rachid; Tixier, Philippe
2011-01-01
The analysis of animal movement within different landscapes may increase our understanding of how landscape features affect the perceptual range of animals. Perceptual range is linked to movement probability of an animal via a dispersal kernel, the latter being generally considered as spatially invariant but could be spatially affected. We hypothesize that spatial plasticity of an animal's dispersal kernel could greatly modify its distribution in time and space. After radio tracking the movements of walking insects (Cosmopolites sordidus) in banana plantations, we considered the movements of individuals as states of a Markov chain whose transition probabilities depended on the habitat characteristics of current and target locations. Combining a likelihood procedure and pattern-oriented modelling, we tested the hypothesis that dispersal kernel depended on habitat features. Our results were consistent with the concept that animal dispersal kernel depends on habitat features. Recognizing the plasticity of animal movement probabilities will provide insight into landscape-level ecological processes. PMID:21765890
Fruit position within the canopy affects kernel lipid composition of hazelnuts.
Pannico, Antonio; Cirillo, Chiara; Giaccone, Matteo; Scognamiglio, Pasquale; Romano, Raffaele; Caporaso, Nicola; Sacchi, Raffaele; Basile, Boris
2017-11-01
The aim of this research was to study the variability in kernel composition within the canopy of hazelnut trees. Kernel fresh and dry weight increased linearly with fruit height above the ground. Fat content decreased, while protein and ash content increased, from the bottom to the top layers of the canopy. The level of unsaturation of the fatty acids decreased from the bottom to the top of the canopy. Thus, the kernels located in the bottom layers of the canopy appear to be more interesting from a nutritional point of view, but their lipids may be more exposed to oxidation. The content of different phytosterols increased progressively from bottom to top canopy layers. Most of these effects correlated with the pattern of light distribution inside the canopy. The results of this study indicate that fruit position within the canopy is an important factor in determining hazelnut kernel growth and composition. © 2017 Society of Chemical Industry.
Efficient Multiple Kernel Learning Algorithms Using Low-Rank Representation.
Niu, Wenjia; Xia, Kewen; Zu, Baokai; Bai, Jianchuan
2017-01-01
Unlike the Support Vector Machine (SVM), Multiple Kernel Learning (MKL) allows datasets the freedom to choose useful kernels based on their distribution characteristics rather than a single prescribed one. It has been shown in the literature that MKL achieves superior recognition accuracy compared with SVM, albeit at the expense of time-consuming computation. This creates analytical and computational difficulties in solving MKL algorithms. To overcome this issue, we first develop a novel kernel approximation approach for MKL and then propose an efficient Low-Rank MKL (LR-MKL) algorithm using the Low-Rank Representation (LRR). It is well acknowledged that LRR can reduce dimension while retaining the data features under a global low-rank constraint. Furthermore, we redesign the binary-class MKL as multiclass MKL based on a pairwise strategy. Finally, the recognition effect and efficiency of LR-MKL are verified on the Yale, ORL, LSVT, and Digit datasets. Experimental results show that the proposed LR-MKL algorithm is an efficient kernel-weight allocation method in MKL and substantially boosts the performance of MKL.
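The kernel-combination step at the heart of MKL can be sketched as a convex combination K = Σ_m β_m K_m of base kernel matrices, with the weights β constrained to the probability simplex. The function below is a generic illustration of that combination step, not the LR-MKL algorithm itself:

```python
import numpy as np

def combined_kernel(kernel_mats, beta):
    """Convex combination K = sum_m beta_m * K_m of base kernel matrices.

    Negative weights are clipped and the remainder normalized so that
    beta_m >= 0 and sum(beta_m) = 1, keeping K positive semidefinite
    whenever every K_m is.
    """
    beta = np.clip(np.asarray(beta, dtype=float), 0.0, None)
    beta = beta / beta.sum()
    mats = [np.asarray(K, dtype=float) for K in kernel_mats]
    return sum(b * K for b, K in zip(beta, mats))

# Example: blend a linear-kernel Gram matrix with an RBF-like one
K1 = np.eye(2)
K2 = np.array([[2.0, 1.0], [1.0, 2.0]])
K = combined_kernel([K1, K2], beta=[1.0, 3.0])  # weights become [0.25, 0.75]
```

An MKL solver then optimizes β jointly with the classifier; LR-MKL's contribution is making that optimization cheap via low-rank approximation of the K_m.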
Reduction of Aflatoxins in Apricot Kernels by Electronic and Manual Color Sorting.
Zivoli, Rosanna; Gambacorta, Lucia; Piemontese, Luca; Solfrizzo, Michele
2016-01-19
The efficacy of color sorting in reducing aflatoxin levels in shelled apricot kernels was assessed. Naturally contaminated kernels were submitted to an electronic optical sorter or blanched, peeled, and manually sorted to visually identify and separate discolored kernels (dark and spotted) from healthy ones. The samples obtained from the two sorting approaches were ground, homogenized, and analysed by HPLC-FLD for their aflatoxin content. A mass balance approach was used to measure the distribution of aflatoxins in the collected fractions. Aflatoxins B₁ and B₂ were identified and quantitated in all collected fractions at levels ranging from 1.7 to 22,451.5 µg/kg of AFB₁ + AFB₂, whereas AFG₁ and AFG₂ were not detected. Excellent results were obtained by manual sorting of peeled kernels, since the removal of discolored kernels (2.6%–19.9% of total peeled kernels) removed 97.3%–99.5% of total aflatoxins. The combination of peeling and visual/manual separation of discolored kernels is a feasible strategy to remove 97%–99% of the aflatoxins accumulated in naturally contaminated samples. The electronic optical sorter gave highly variable results, since the amount of AFB₁ + AFB₂ measured in the rejected fractions (15%–18% of total kernels) ranged from 13% to 59% of total aflatoxins. An improved immunoaffinity-based HPLC-FLD method with low limits of detection for the four aflatoxins (0.01–0.05 µg/kg) was developed and used to monitor the occurrence of aflatoxins in 47 commercial products containing apricot kernels and/or almonds commercialized in Italy. Low aflatoxin levels were found in 38% of the tested samples, ranging from 0.06 to 1.50 μg/kg for AFB₁ and from 0.06 to 1.79 μg/kg for total aflatoxins.
On the critical flame radius and minimum ignition energy for spherical flame initiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Zheng; Burke, M. P.; Ju, Yiguang
2011-01-01
Spherical flame initiation from an ignition kernel is studied theoretically and numerically using different fuel/oxygen/helium/argon mixtures (fuels: hydrogen, methane, and propane). The emphasis is placed on investigating the critical flame radius controlling spherical flame initiation and its correlation with the minimum ignition energy. It is found that the critical flame radius is different from the flame thickness and the flame ball radius, and that their relationship depends strongly on the Lewis number. Three different flame regimes in terms of the Lewis number are observed, and a new criterion for the critical flame radius is introduced. For mixtures with a Lewis number larger than a critical Lewis number above unity, the critical flame radius is smaller than the flame ball radius but larger than the flame thickness. As a result, the minimum ignition energy can be substantially over-predicted (under-predicted) based on the flame ball radius (the flame thickness). The results also show that the minimum ignition energy for successful spherical flame initiation is proportional to the cube of the critical flame radius. Furthermore, preferential diffusion of heat and mass (i.e. the Lewis number effect) is found to play an important role in both spherical flame initiation and flame kernel evolution after ignition. It is shown that the critical flame radius and the minimum ignition energy increase significantly with the Lewis number. Therefore, for transportation fuels with large Lewis numbers, blending of small-molecule fuels or thermal and catalytic cracking will significantly reduce the minimum ignition energy.
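The reported proportionality between minimum ignition energy and the cube of the critical flame radius can be illustrated with a rough thermal estimate: take E_min as the enthalpy needed to heat a gas sphere of critical radius to the burned temperature. The prefactor and property values below are assumptions; only the r_c³ scaling reflects the result above:

```python
import math

def minimum_ignition_energy(r_c, rho, c_p, delta_t):
    """Rough estimate of minimum ignition energy (J) as the sensible
    enthalpy of a sphere of critical radius r_c (m), with unburned-gas
    density rho (kg/m^3), specific heat c_p (J/kg/K), and temperature
    rise delta_t (K).  Illustrates E_min ~ r_c^3; not the paper's model."""
    return (4.0 / 3.0) * math.pi * r_c**3 * rho * c_p * delta_t

# Assumed air-like values: rho = 1.2 kg/m^3, c_p = 1005 J/kg/K, dT = 1500 K
e_1mm = minimum_ignition_energy(1e-3, 1.2, 1005.0, 1500.0)
```

Doubling the critical radius thus multiplies the required energy by eight, which is why large-Lewis-number fuels (large r_c) are so much harder to ignite.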
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanft, J.M.; Jones, R.J.
This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [¹⁴C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [¹⁴C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in the pedicel tissue of kernels cultured at 35°C compared with kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose.
Analysis of the spatial distribution of dengue cases in the city of Rio de Janeiro, 2011 and 2012
Carvalho, Silvia; Magalhães, Mônica de Avelar Figueiredo Mafra; Medronho, Roberto de Andrade
2017-01-01
ABSTRACT OBJECTIVE Analyze the spatial distribution of classical dengue and severe dengue cases in the city of Rio de Janeiro. METHODS Exploratory study considering cases of classical dengue and severe dengue with laboratory confirmation of infection in the city of Rio de Janeiro during 2011 and 2012. Cases notified in the Notifiable Diseases Information System in 2011 and 2012 were georeferenced using the "street" and "number" fields, processed automatically with the Geocoding tool of ArcGIS 10. The spatial analysis was done with the kernel density estimator. RESULTS Kernel density pointed to hotspots for classical dengue that did not coincide geographically with those for severe dengue and were located in or near favelas. The kernel ratio did not show a notable change in the spatial distribution pattern observed in the kernel density analysis. The georeferencing process lost 41% of the classical dengue records and 17% of the severe dengue records because of address problems in the Notifiable Diseases Information System forms. CONCLUSIONS The hotspots near favelas suggest that the social vulnerability of these localities may be an influencing factor in the occurrence of the disease, since the supply of and access to essential goods and services are deficient for this population. To reduce this vulnerability, interventions must be related to macroeconomic policies. PMID:28832752
NASA Technical Reports Server (NTRS)
Kershaw, David S.; Prasad, Manoj K.; Beason, J. Douglas
1986-01-01
The Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution is analytically reduced to a single integral, which can then be rapidly evaluated in a variety of ways. A particularly fast method for numerically computing this single integral is presented. This is, to the authors' knowledge, the first correct computation of the Compton scattering kernel.
AGR-1 Post Irradiation Examination Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demkowicz, Paul Andrew
The post-irradiation examination (PIE) of the Advanced Gas Reactor (AGR)-1 experiment was a multi-year, collaborative effort between Idaho National Laboratory (INL) and Oak Ridge National Laboratory (ORNL) to study the performance of UCO (uranium carbide, uranium oxide) tristructural isotropic (TRISO) coated particle fuel fabricated in the U.S. and irradiated at the Advanced Test Reactor at INL to a peak burnup of 19.6% fissions per initial metal atom. This work involved a broad array of experiments and analyses to evaluate the level of fission product retention by the fuel particles and compacts (both during irradiation and during post-irradiation heating tests to simulate reactor accident conditions), investigate the kernel and coating layer morphology evolution and the causes of coating failure, and explore the migration of fission products through the coating layers. The results have generally confirmed the excellent performance of the AGR-1 fuel, first indicated during the irradiation by the observation of zero TRISO coated particle failures out of 298,000 particles in the experiment. Overall release of fission products was determined by PIE to have been relatively low during the irradiation. A significant finding was the extremely low levels of cesium released through intact coatings. This was true both during the irradiation and during post-irradiation heating tests to temperatures as high as 1800°C. Post-irradiation safety test fuel performance was generally excellent. Silver release from the particles and compacts during irradiation was often very high. Extensive microanalysis of fuel particles was performed after irradiation and after high-temperature safety testing. The results of particle microanalysis indicate that the UCO fuel is effective at controlling the oxygen partial pressure within the particle and limiting kernel migration.
Post-irradiation examination has provided the final body of data that speaks to the quality of the AGR-1 fuel, building on the as-fabricated fuel characterization and irradiation data. In addition to the extensive volume of results generated, the work also resulted in a number of novel analysis techniques and lessons learned that are being applied to the examination of fuel from subsequent TRISO fuel irradiations. This report provides a summary of the results obtained as part of the AGR-1 PIE campaign over its approximately 5-year duration.
Local coding based matching kernel method for image classification.
Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong
2014-01-01
This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which the local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HOG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large-scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including the 15-Scenes, Caltech101/256, and PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
Multiple kernel SVR based on the MRE for remote sensing water depth fusion detection
NASA Astrophysics Data System (ADS)
Wang, Jinjin; Ma, Yi; Zhang, Jingyu
2018-03-01
Remote sensing is an important means of water depth detection in coastal shallow waters and reefs. Support vector regression (SVR) is a machine learning method widely used in data regression. In this paper, SVR is applied to multispectral remote sensing bathymetry. To address the problem that single-kernel SVR methods have large errors in shallow-water depth inversion, the mean relative error (MRE) at different water depths retrieved with each single-kernel SVR method is used as a decision fusion factor, and a multi-kernel SVR fusion method based on the MRE is put forward. Taking North Island of the Xisha Islands in China as the experimental area, comparison experiments with the single-kernel SVR methods and the traditional multi-band bathymetric method were carried out. The results show that: 1) in the range of 0 to 25 m, the mean absolute error (MAE) of the multi-kernel SVR fusion method is 1.5 m and the MRE is 13.2%; 2) compared to the four single-kernel SVR methods, the MRE of the fusion method is reduced by 1.2%, 1.9%, 3.4%, and 1.8%, respectively, and compared to the traditional multi-band method, the MRE is reduced by 1.9%; 3) in the 0-5 m depth section, compared to the single-kernel methods and the multi-band method, the MRE of the fusion method is reduced by 13.5% to 44.4%, and the distribution of points is more concentrated relative to y = x.
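The fusion idea can be illustrated with a simplified sketch. The paper fuses single-kernel SVR outputs using depth-dependent MRE as the decision factor; the version below uses synthetic data and a single global inverse-MRE weight per kernel, so it shows only the general mechanism, not the authors' exact method:

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in data: 3 spectral-band features -> water depth (m), roughly 5-25 m
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 3))
depth = 5 + 20 * X[:, 0] + 2 * np.sin(6 * X[:, 1]) + rng.normal(0, 0.5, 200)
X_tr, y_tr = X[:150], depth[:150]
X_val, y_val = X[150:], depth[150:]

preds, weights = [], []
for kern in ("rbf", "linear", "poly", "sigmoid"):
    model = SVR(kernel=kern, C=100).fit(X_tr, y_tr)
    p = model.predict(X_val)
    mre = np.mean(np.abs(p - y_val) / y_val)   # mean relative error of this kernel
    preds.append(p)
    weights.append(1.0 / mre)                  # lower MRE -> larger fusion weight

w = np.array(weights) / np.sum(weights)
fused = w @ np.array(preds)                    # inverse-MRE weighted fusion
fused_mre = float(np.mean(np.abs(fused - y_val) / y_val))
```

Because the weights are convex, the fused MRE cannot exceed the worst single-kernel MRE, and in practice it tracks the best-performing kernels.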
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mcwilliams, A. J.
2015-09-08
This report reviews literature on reprocessing high temperature gas-cooled reactor graphite fuel components. A basic review of the various fuel components used in the pebble bed type reactors is provided along with a survey of synthesis methods for the fabrication of the fuel components. Several disposal options are considered for the graphite pebble fuel elements, including the storage of intact pebbles, volume reduction by separating the graphite from fuel kernels, and complete processing of the pebbles for waste storage. Existing methods for graphite removal are presented and generally consist of mechanical separation techniques, such as crushing and grinding, and chemical techniques, such as acid digestion and oxidation. Potential methods for reprocessing the graphite pebbles include improvements to existing methods and novel technologies that have not previously been investigated for nuclear graphite waste applications. The best overall method will depend on the desired final waste form and needs to factor in technical efficiency, political concerns, cost, and implementation.
An Experimental Study of the Growth of Laser Spark and Electric Spark Ignited Flame Kernels.
NASA Astrophysics Data System (ADS)
Ho, Chi Ming
1995-01-01
Better ignition sources are constantly in demand for enhancing spark ignition in practical applications such as automotive and liquid rocket engines. In response to this practical challenge, the present experimental study was conducted with the major objective of obtaining a better understanding of how spark formation, and hence spark characteristics, affect flame kernel growth. Two laser sparks and one electric spark were studied in air, propane-air, propane-air-nitrogen, methane-air, and methane-oxygen mixtures that were initially at ambient pressure and temperature. The growth of the kernels was monitored by imaging the kernels with shadowgraph systems, and by imaging the planar laser-induced fluorescence of the hydroxyl radicals inside the kernels. Characteristic dimensions and kernel structures were obtained from these images. Because different energy transfer mechanisms are involved in the formation of a laser spark than in that of an electric spark, a laser spark is insensitive to changes in mixture ratio and mixture type, while an electric spark is sensitive to changes in both. The detailed structures of the kernels in air and propane-air mixtures primarily depend on the spark characteristics, but the combustion heat released rapidly in methane-oxygen mixtures significantly modifies the kernel structure. Uneven spark energy distribution causes remarkably asymmetric kernel structure. The breakdown energy of a spark creates a blast wave that shows good agreement with the numerical point blast solution, and a succeeding complex spark-induced flow that agrees reasonably well with a simple puff model. The transient growth rates of the propane-air, propane-air-nitrogen, and methane-air flame kernels can be interpreted in terms of spark effects, flame stretch, and preferential diffusion. For a given mixture, a spark with higher breakdown energy produces a greater and longer-lasting enhancing effect on the kernel growth rate.
By comparing the growth rates of the appropriate mixtures, the positive and negative effects of preferential diffusion and flame stretch on the developing flame are clearly demonstrated.
Hanft, Jonathan M.; Jones, Robert J.
1986-01-01
This study was designed to compare the uptake and distribution of 14C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [14C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [14C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. This indicates that some of the sucrose taken up by the cob tissue was cleaved to fructose and glucose in the cob. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of 14C in endosperm fructose, glucose, and sucrose. These results indicate that starch synthesis in the endosperm is strongly inhibited in kernels induced to abort by high temperature even though there is an adequate supply of sugar. PMID:16664847
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Rooyen, I. J.; Lillo, T. M.; Wen, H. M.
Advanced microscopic and microanalysis techniques were developed and applied to study irradiation effects and fission product behavior in selected low-enriched uranium oxide/uranium carbide TRISO-coated particles from fuel compacts in six capsules irradiated to burnups of 11.2 to 19.6% FIMA. Although no TRISO coating failures were detected during the irradiation, the fraction of Ag-110m retained in individual particles often varied considerably within a single compact and at the capsule level. At the capsule level, Ag-110m release fractions ranged from 1.2 to 38%, and within a single compact, silver release from individual particles often spanned a range that extended from 100% retention to nearly 100% release. In this paper, selected irradiated particles from Baseline, Variant 1, and Variant 3 type fueled TRISO coated particles were examined using scanning electron microscopy, atom probe tomography, electron energy loss spectroscopy, precession electron diffraction, transmission electron microscopy, scanning transmission electron microscopy (STEM), high-resolution transmission electron microscopy (HRTEM), and electron probe micro-analysis. Particle selection in this study allowed for comparison of the fission product distribution with Ag retention, fuel type, and irradiation level. Nano-sized Ag-containing features were predominantly identified in SiC grain boundaries and/or triple points, in contrast with only two sightings of Ag inside a SiC grain in two different compacts (Baseline and Variant 3 fueled compacts). STEM and HRTEM analysis showed evidence of Ag and Pd co-existence in some cases, and it was found that fission product precipitates can consist of multiple or single phases. STEM analysis also showed differences in precipitate compositions between Baseline and Variant 3 fuels. A higher density of fission product precipitate clusters was identified in the SiC layer in particles from the Variant 3 compact compared with the Variant 1 compact.
Trend analysis shows precipitates were randomly distributed along the perimeter of the IPyC-SiC interlayer but only weakly associated with kernel protrusion and buffer fractures. There has been no evidence that the general release of silver is related to cracks or significant degradation of the microstructure. The results presented in this paper provide new insights into the Ag transport mechanism(s) in the intact SiC layer of TRISO coated particles.
Little, C L; Jemmott, W; Surman-Lee, S; Hucklesby, L; de Pinnal, E
2009-04-01
There is little published information on the prevalence of Salmonella in edible nut kernels. A study in early 2008 of edible roasted nut kernels on retail sale in England was undertaken to assess the microbiological safety of this product. A total of 727 nut kernel samples of different varieties were examined. Overall, Salmonella and Escherichia coli were detected in 0.2 and 0.4% of edible roasted nut kernels, respectively. Of the nut varieties examined, Salmonella Havana was detected in 1 (4.0%) sample of pistachio nuts, indicating a risk to health. The United Kingdom Food Standards Agency was immediately informed, and full investigations were undertaken. Further examination established the contamination to be associated with the pistachio kernels and not the partly opened shells. Salmonella was not detected in other varieties tested (almonds, Brazils, cashews, hazelnuts, macadamia, peanuts, pecans, pine nuts, and walnuts). E. coli was found at low levels (range of 3.6 to 4/g) in walnuts (1.4%), almonds (1.2%), and Brazils (0.5%). The presence of Salmonella is unacceptable in edible nut kernels. Prevention of microbial contamination in these products lies in the application of good agricultural, manufacturing, and storage practices, together with a hazard analysis and critical control points system that encompasses all stages of production, processing, and distribution.
NASA Astrophysics Data System (ADS)
An, Bin; Wang, Zhenguo; Yang, Leichao; Li, Xipeng; Zhu, Jiajian
2017-08-01
Cavity ignition of a model scramjet combustor fueled by ethylene was achieved through laser-induced plasma, with inflow conditions of Ma = 2.92, total temperature T0 = 1650 K, and stagnation pressure P0 = 2.6 MPa. The overall equivalence ratio was kept at 0.152 for all the tests. The ignition processes at different ignition energies and various ignition positions were captured by CH∗ and OH∗ chemiluminescence imaging. The results reveal that the initial flame kernel is carried to the cavity leading edge by the recirculation flow, and resides there for ∼100 μs before spreading downstream. The ignition time can be reduced, and the probability of successful ignition with a single laser pulse can be increased, by raising the ignition energy. The scale and strength of the initial flame kernel are influenced by both the ignition energy and the ignition position. In the present study, the middle part of the cavity is the best position for ignition, as it keeps a good balance between the strength of the initial flame kernel and the impact of the strain rate in the recirculation flow.
The structure of the clouds distributed operating system
NASA Technical Reports Server (NTRS)
Dasgupta, Partha; Leblanc, Richard J., Jr.
1989-01-01
A novel system architecture, based on the object model, is the central structuring concept used in the Clouds distributed operating system. This architecture makes Clouds attractive over a wide class of machines and environments. Clouds is a native operating system, designed and implemented at Georgia Tech, and runs on a set of general-purpose computers connected via a local area network. The system architecture of Clouds is composed of a system-wide global set of persistent (long-lived) virtual address spaces, called objects, that contain persistent data and code. The object concept is implemented at the operating system level, thus presenting a single-level storage view to the user. Lightweight threads carry computational activity through the code stored in the objects. The persistent objects and threads give rise to a programming environment composed of shared permanent memory, dispensing with the need for hardware-derived concepts such as file systems and message systems. Though the hardware may be distributed and may have disks and networks, Clouds provides applications with a logically centralized system, based on a shared, structured, single-level store. The current design of Clouds uses a minimalist philosophy with respect to both the kernel and the operating system. That is, the kernel and the operating system support a bare minimum of functionality. Clouds also adheres to the concept of separation of policy and mechanism. Most low-level operating system services are implemented above the kernel, and most high-level services are implemented at the user level. From the measured performance of the kernel mechanisms, we are able to demonstrate that efficient implementations of the object model are feasible on commercially available hardware. Clouds provides a rich environment for conducting research in distributed systems.
Some of the topics addressed in this paper include distributed programming environments, consistency of persistent data and fault-tolerance.
Gluten-containing grains skew gluten assessment in oats due to sample grind non-homogeneity.
Fritz, Ronald D; Chen, Yumin; Contreras, Veronica
2017-02-01
Oats are easily contaminated with gluten-rich kernels of wheat, rye, and barley. These contaminants act like gluten 'pills', shown here to skew gluten analysis results. Using the R-Biopharm R5 ELISA, we quantified gluten in gluten-free oatmeal servings from an in-market survey. For samples with a 5-20 ppm reading on a first test, replicate analyses provided results ranging from <5 ppm to >160 ppm. This suggests sample grinding may inadequately disperse gluten to allow a single accurate gluten assessment. To ascertain this, and to characterize the distribution of 0.25-g gluten test results for kernel-contaminated oats, twelve 50-g samples of pure oats, each spiked with a wheat kernel, showed that 0.25-g test results followed log-normal-like distributions. With this, we estimate probabilities of mis-assessment for a 'single measure/sample' relative to the <20 ppm regulatory threshold, and derive an equation relating the probability of mis-assessment to sample average gluten content. Copyright © 2016 Elsevier Ltd. All rights reserved.
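The single-test mis-assessment probability under a log-normal reading model can be sketched by Monte Carlo. The spread parameter `sigma_log` and the mean levels below are hypothetical illustrations, not the paper's fitted values or derived equation:

```python
import numpy as np

rng = np.random.default_rng(7)

def p_pass_single_test(mean_ppm, sigma_log=1.0, n=100_000):
    """Probability that one 0.25 g test reads below 20 ppm when per-test
    readings are log-normal with the given arithmetic mean (Monte Carlo)."""
    mu = np.log(mean_ppm) - 0.5 * sigma_log ** 2   # so that E[reading] = mean_ppm
    readings = rng.lognormal(mu, sigma_log, n)
    return float(np.mean(readings < 20.0))
```

Under these assumed parameters, even a sample averaging well above 20 ppm passes a single test with substantial probability, which is the mis-assessment risk the study quantifies.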
Miller, Nathan D; Haase, Nicholas J; Lee, Jonghyun; Kaeppler, Shawn M; de Leon, Natalia; Spalding, Edgar P
2017-01-01
Grain yield of the maize plant depends on the sizes, shapes, and numbers of ears and the kernels they bear. An automated pipeline that can measure these components of yield from easily-obtained digital images is needed to advance our understanding of this globally important crop. Here we present three custom algorithms designed to compute such yield components automatically from digital images acquired by a low-cost platform. One algorithm determines the average space each kernel occupies along the cob axis using a sliding-window Fourier transform analysis of image intensity features. A second counts individual kernels removed from ears, including those in clusters. A third measures each kernel's major and minor axis after a Bayesian analysis of contour points identifies the kernel tip. Dimensionless ear and kernel shape traits that may interrelate yield components are measured by principal components analysis of contour point sets. Increased objectivity and speed compared to typical manual methods are achieved without loss of accuracy as evidenced by high correlations with ground truth measurements and simulated data. Millimeter-scale differences among ear, cob, and kernel traits that ranged more than 2.5-fold across a diverse group of inbred maize lines were resolved. This system for measuring maize ear, cob, and kernel attributes is being used by multiple research groups as an automated Web service running on community high-throughput computing and distributed data storage infrastructure. Users may create their own workflow using the source code that is staged for download on a public repository. © 2016 The Authors. The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.
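The first algorithm's idea, estimating the average space each kernel occupies from a periodic image-intensity signal via a sliding-window Fourier transform, can be sketched as follows. This is a generic reimplementation under assumed inputs, not the authors' pipeline code, and `window`/`step` are arbitrary choices:

```python
import numpy as np

def kernel_spacing(profile, window=128, step=32):
    """Median kernel-to-kernel spacing (pixels) from a 1-D intensity profile
    along the cob axis, via a sliding-window Fourier transform."""
    spacings = []
    for start in range(0, len(profile) - window, step):
        seg = profile[start:start + window]
        seg = (seg - seg.mean()) * np.hanning(window)   # remove DC, taper edges
        spectrum = np.abs(np.fft.rfft(seg))
        k = 1 + int(np.argmax(spectrum[1:]))            # dominant nonzero bin
        spacings.append(window / k)                     # period of that bin
    return float(np.median(spacings))
```

On a synthetic profile with one intensity peak every 20 pixels, the estimate recovers a spacing near 20 (quantized to the FFT bin grid).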
Production of astaxanthin from corn fiber as a value-added co-product of fuel ethanol fermentation
USDA-ARS?s Scientific Manuscript database
Five strains of the yeast Phaffia rhodozyma, NRRL Y-17268, NRRL Y-17270, ATCC 96594 (CBS 6938), ATCC 24202 (UCD 67-210), and ATCC 74219 (UBV-AX2) were tested for astaxanthin production using the major sugars derived from corn fiber, a byproduct from the wet milling of corn kernels that contains prim...
Self spectrum window method in wigner-ville distribution.
Liu, Zhongguo; Liu, Changchun; Liu, Boqiang; Lv, Yangsheng; Lei, Yinsheng; Yu, Mengsun
2005-01-01
Wigner-Ville distribution (WVD) is an important type of time-frequency analysis in biomedical signal processing. The cross-term interference in WVD has a disadvantageous influence on its application. In this research, the Self Spectrum Window (SSW) method was put forward to suppress the cross-term interference, based on the fact that the cross-terms and auto-WVD terms in the integral kernel function are orthogonal. With the SSW algorithm, a real auto-WVD function was used as a template to cross-correlate with the integral kernel function, and the Short Time Fourier Transform (STFT) spectrum of the signal was used as a window function to process the WVD in the time-frequency plane. The SSW method was confirmed by computer simulation with good analysis results. Satisfactory time-frequency distribution was obtained.
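For reference, the auto-terms and integral kernel function that the SSW method operates on come from the discrete Wigner-Ville construction, which can be sketched as follows. This is a generic textbook pseudo-WVD, not the SSW algorithm itself, and the window length is an arbitrary choice:

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x, window=64):
    """Discrete pseudo Wigner-Ville distribution of a real 1-D signal.

    Returns W[n, k]; the frequency of bin k is k * fs / (2 * window).
    """
    z = hilbert(x)            # analytic signal suppresses negative-frequency cross-terms
    N = len(z)
    half = window // 2
    W = np.zeros((N, window))
    for n in range(N):
        # instantaneous autocorrelation kernel r[m] = z[n+m] * conj(z[n-m])
        r = np.zeros(window, dtype=complex)
        for m in range(-half, half):
            if 0 <= n + m < N and 0 <= n - m < N:
                r[m % window] = z[n + m] * np.conj(z[n - m])
        W[n] = np.fft.fft(r).real   # conjugate symmetry of r makes the FFT real
    return W
```

For a pure tone, the distribution concentrates at the tone frequency at every instant; multi-component signals additionally produce the cross-terms that methods like SSW are designed to suppress.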
Lin, Miao; Chu, Qing-Cui; Tian, Xiu-Hui; Ye, Jian-Nong
2007-01-01
Corn has been known for its accumulation of flavones and phenolic acids. However, many parts of corn, except kernel, have not drawn much attention. In this work, a method based on capillary zone electrophoresis with electrochemical detection has been used for the separation and determination of epicatechin, rutin, ascorbic acid (Vc), kaempferol, chlorogenic acid, and quercetin in corn silk, leaf, and kernel. The distribution comparison of the ingredients among silk, leaf, and kernel is discussed. Several important factors--including running buffer acidity, separation voltage, and working electrode potential--were evaluated to acquire the optimum analysis conditions. Under the optimum conditions, the analytes could be well separated within 19 min in a 40-mmol/L borate buffer (pH 9.2). The response was linear over three orders of magnitude with detection limits (S/N = 3) ranging from 4.97 x 10(-8) to 9.75 x 10(-8) g/mL. The method has been successfully applied for the analysis of corn silk, leaf, and kernel with satisfactory results.
Design and Analysis of Architectures for Structural Health Monitoring Systems
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi; Sixto, S. L. (Technical Monitor)
2002-01-01
During the two-year project period, we have worked on several aspects of Health Usage and Monitoring Systems (HUMS) for structural health monitoring. In particular, we have made contributions in the following areas. 1. Reference HUMS architecture: We developed a high-level architecture for health usage and monitoring systems. The proposed reference architecture is shown. It is compatible with the Generic Open Architecture (GOA) proposed as a standard for avionics systems. 2. HUMS kernel: One of the critical layers of the HUMS reference architecture is the HUMS kernel. We developed a detailed design of a kernel to implement the high-level architecture. 3. Prototype implementation of HUMS kernel: We have implemented a preliminary version of the HUMS kernel on a Unix platform. We have implemented both a centralized version and a distributed version. 4. SCRAMNet and HUMS: SCRAMNet (Shared Common Random Access Memory Network) is a system that was found to be suitable for implementing HUMS. For this reason, we conducted a simulation study to determine its stability in handling the input data rates in HUMS. 5. Architectural specification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Rooyen, I. J.; Janney, D. E.; Miller, B. D.
2014-05-01
Post-irradiation examination of coated particle fuel from the AGR-1 experiment is in progress at Idaho National Laboratory and Oak Ridge National Laboratory. In this paper a brief summary of results from characterization of microstructures in the coating layers of selected irradiated fuel particles with burnups of 11.3% and 19.3% FIMA will be given. The main objectives of the characterization were to study irradiation effects, fuel kernel porosity, layer debonding, layer degradation or corrosion, fission-product precipitation, grain sizes, and transport of fission products from the kernels across the TRISO layers. Characterization techniques such as scanning electron microscopy, transmission electron microscopy, energy dispersive spectroscopy, and wavelength dispersive spectroscopy were used. A new approach to microscopic quantification of fission-product precipitates is also briefly demonstrated. Microstructural characterization focused on fission-product precipitates in the SiC-IPyC interface, the SiC layer, and the fuel-buffer interlayer. The results provide significant new insights into mechanisms of fission-product transport. Although Pd-rich precipitates were identified at the SiC-IPyC interlayer, no significant SiC-layer thinning was observed for the particles investigated. Characterization of these precipitates highlighted the difficulty of measuring low concentrations of Ag in precipitates with significantly higher concentrations of Pd and U. Different approaches to resolving this problem are discussed. An initial hypothesis is provided to explain fission-product precipitate compositions and locations. No SiC phase transformations were observed, and no debonding of the SiC-IPyC interlayer as a result of irradiation was observed for the samples investigated. Lessons learned from the post-irradiation examination are described and future actions are recommended.
Méndez, Nelson; Oviedo-Pastrana, Misael; Mattar, Salim; Caicedo-Castro, Isaac; Arrieta, German
2017-01-01
The Zika virus disease (ZVD) has had a huge impact on public health in Colombia, given the numbers of people affected and the presentation of Guillain-Barre syndrome (GBS) and microcephaly cases associated with ZVD. In a retrospective descriptive study, we analyzed the epidemiological situation of ZVD and its association with microcephaly and GBS during a 21-month period, from October 2015 to June 2017. The variables studied were: (i) ZVD cases, (ii) ZVD cases in pregnant women, (iii) laboratory-confirmed ZVD in pregnant women, (iv) ZVD cases associated with microcephaly, (v) laboratory-confirmed ZVD associated with microcephaly, and (vi) ZVD-associated GBS cases. Average numbers of cases, attack rates (AR), and proportions were also calculated. The studied variables were plotted by epidemiological weeks and months. The distribution of ZVD cases in Colombia was mapped across time using the kernel density estimator and QGIS software; we adopted Kernel Ridge Regression (KRR) with the Gaussian kernel to estimate the number of Guillain-Barre cases given the number of ZVD cases. A total of 108,087 ZVD cases had been reported in Colombia, including 19,963 (18.5%) in pregnant women, 710 (0.66%) associated with microcephaly (AR, 4.87 cases per 10,000 live births), and 453 (0.42%) ZVD-associated GBS cases (AR, 41.9 GBS cases per 10,000 ZVD cases). Cases of GBS appear to have increased in parallel with cases of ZVD, while cases of microcephaly appeared 5 months after recognition of the outbreak. The kernel density map shows that throughout the study period, the states most affected by the Zika outbreak in Colombia were mainly the San Andrés and Providencia islands, Casanare, Norte de Santander, Arauca, and Huila. The KRR shows that there is no proportional relationship between the number of GBS and ZVD cases.
During cross validation, the RMSE achieved for the second-order polynomial kernel, the linear kernel, the sigmoid kernel, and the Gaussian kernel was 9.15, 9.2, 10.7, and 7.2, respectively. This study updates the epidemiological analysis of the ZVD situation in Colombia, describes the geographical distribution of ZVD, and shows the functional relationship between ZVD cases and GBS cases.
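The kernel ridge regression with a Gaussian kernel used above can be sketched from scratch. The weekly ZVD/GBS counts below are synthetic stand-ins for the surveillance data, and `gamma`/`alpha` are arbitrary illustrative choices:

```python
import numpy as np

def rbf(a, b, gamma):
    """Gaussian (RBF) kernel matrix between 1-D sample vectors a and b."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def krr_predict(x_tr, y_tr, x_te, gamma=1e-6, alpha=0.1):
    """Kernel ridge regression: solve (K + alpha*I) c = y, predict k(x*, x_tr) @ c."""
    K = rbf(x_tr, x_tr, gamma)
    c = np.linalg.solve(K + alpha * np.eye(len(x_tr)), y_tr)
    return rbf(x_te, x_tr, gamma) @ c

# Hypothetical weekly counts: ZVD cases (predictor) vs. GBS cases (response)
rng = np.random.default_rng(0)
zvd = rng.uniform(0, 5000, 100)
gbs = zvd / 1000 + rng.normal(0, 0.5, 100)

pred = krr_predict(zvd[:80], gbs[:80], zvd[80:])
rmse = float(np.sqrt(np.mean((pred - gbs[80:]) ** 2)))
```

Held-out RMSE computed this way is the same figure the study reports when comparing the polynomial, linear, sigmoid, and Gaussian kernels.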
Reduction of Aflatoxins in Apricot Kernels by Electronic and Manual Color Sorting
Zivoli, Rosanna; Gambacorta, Lucia; Piemontese, Luca; Solfrizzo, Michele
2016-01-01
The efficacy of color sorting on reducing aflatoxin levels in shelled apricot kernels was assessed. Naturally-contaminated kernels were submitted to an electronic optical sorter or blanched, peeled, and manually sorted to visually identify and sort discolored kernels (dark and spotted) from healthy ones. The samples obtained from the two sorting approaches were ground, homogenized, and analysed by HPLC-FLD for their aflatoxin content. A mass balance approach was used to measure the distribution of aflatoxins in the collected fractions. Aflatoxins B1 and B2 were identified and quantitated in all collected fractions at levels ranging from 1.7 to 22,451.5 µg/kg of AFB1 + AFB2, whereas AFG1 and AFG2 were not detected. Excellent results were obtained by manual sorting of peeled kernels, since the removal of discolored kernels (2.6%–19.9% of total peeled kernels) removed 97.3%–99.5% of total aflatoxins. The combination of peeling and visual/manual separation of discolored kernels is a feasible strategy to remove 97%–99% of aflatoxins accumulated in naturally-contaminated samples. The electronic optical sorter gave highly variable results, since the amount of AFB1 + AFB2 measured in rejected fractions (15%–18% of total kernels) ranged from 13% to 59% of total aflatoxins. An improved immunoaffinity-based HPLC-FLD method having low limits of detection for the four aflatoxins (0.01–0.05 µg/kg) was developed and used to monitor the occurrence of aflatoxins in 47 commercial products containing apricot kernels and/or almonds commercialized in Italy. Low aflatoxin levels were found in 38% of the tested samples and ranged from 0.06 to 1.50 μg/kg for AFB1 and from 0.06 to 1.79 μg/kg for total aflatoxins. PMID:26797635
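The mass-balance bookkeeping behind figures like "removing 2.6%-19.9% of kernels removes 97.3%-99.5% of aflatoxins" reduces to a one-line calculation; the concentrations below are illustrative, not the study's measurements:

```python
# Mass balance for sorting: fraction of total aflatoxin removed when a small
# rejected fraction carries most of the contamination. Values illustrative only.
def toxin_removed(frac_rejected, conc_rejected, conc_accepted):
    """Concentrations in ug/kg; returns the share of total toxin in the rejects."""
    rejected = frac_rejected * conc_rejected
    accepted = (1.0 - frac_rejected) * conc_accepted
    return rejected / (rejected + accepted)

share = toxin_removed(0.05, 20000.0, 30.0)  # 5% of kernels at 20,000 ug/kg
```

Because contamination is concentrated in a small discolored fraction, rejecting a few percent of kernels can remove nearly all of the toxin burden.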
Cui, Fa; Fan, Xiaoli; Chen, Mei; Zhang, Na; Zhao, Chunhua; Zhang, Wei; Han, Jie; Ji, Jun; Zhao, Xueqiang; Yang, Lijuan; Zhao, Zongwu; Tong, Yiping; Wang, Tao; Li, Junming
2016-03-01
QTLs for kernel characteristics and tolerance to N stress were identified, and the functions of ten known genes with regard to these traits were specified. Kernel size and quality characteristics in wheat (Triticum aestivum L.) ultimately determine the end use of the grain and affect its commodity price, both of which are influenced by the application of nitrogen (N) fertilizer. This study characterized quantitative trait loci (QTLs) for kernel size and quality and examined the responses of these traits to low-N stress using a recombinant inbred line population derived from Kenong 9204 × Jing 411. Phenotypic analyses were conducted in five trials that each included low- and high-N treatments. We identified 109 putative additive QTLs for 11 kernel size and quality characteristics and 49 QTLs for tolerance to N stress, 27 and 14 of which were stable across the tested environments, respectively. These QTLs were distributed across all wheat chromosomes except for chromosomes 3A, 4D, 6D, and 7B. Eleven QTL clusters that simultaneously affected kernel size- and quality-related traits were identified. At nine locations, 25 of the 49 QTLs for N deficiency tolerance coincided with the QTLs for kernel characteristics, indicating their genetic independence. The feasibility of indirect selection of a superior genotype for kernel size and quality under high-N conditions in breeding programs designed for a lower input management system is discussed. In addition, we specified the functions of Glu-A1, Glu-B1, Glu-A3, Glu-B3, TaCwi-A1, TaSus2, TaGS2-D1, PPO-D1, Rht-B1, and Ha with regard to kernel characteristics and the sensitivities of these characteristics to N stress. This study provides useful information for the genetic improvement of wheat kernel size, quality, and resistance to N stress.
Conormal distributions in the Shubin calculus of pseudodifferential operators
NASA Astrophysics Data System (ADS)
Cappiello, Marco; Schulz, René; Wahlberg, Patrik
2018-02-01
We characterize the Schwartz kernels of pseudodifferential operators of Shubin type by means of a Fourier-Bros-Iagolnitzer transform. Based on this, we introduce as a generalization a new class of tempered distributions called Shubin conormal distributions. We study their transformation behavior, normal forms, and microlocal properties.
NASA Astrophysics Data System (ADS)
Indrawati, V.; Manaf, A.; Purwadi, G.
2009-09-01
This paper reports recent investigations on the use of biomass such as rice husk, palm kernel shell, saw dust, and municipal waste to reduce the use of fossil fuel energy in cement production. Such waste materials have heat values ranging approximately from 2,000 to 4,000 kcal/kg. These are comparable to the average value of 5,800 kcal/kg for fossil materials like coals, which are widely applied in many industrial processes. Hence, such waste materials could be used as alternative fuels replacing the fossil ones. It is shown that replacement of coals with such waste materials has a significant impact on cost effectiveness as well as sustainable development. Variation in the moisture content of the waste materials, however, should be taken into account because this is one parameter that cannot be controlled. During fuel combustion, some amount of the total energy is used to evaporate the water content, and thus the net effective heat value is lower.
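The moisture penalty on heat value described above can be sketched with a back-of-envelope model: the as-received fuel delivers the heat of its dry matter minus the energy spent evaporating its water. The latent heat value (~540 kcal/kg near 100 °C) and the sample numbers are assumptions for illustration, not figures from the paper.

```python
# Effective heat value of a moist fuel (illustrative back-of-envelope model).
LATENT_HEAT = 540.0  # assumed latent heat of water vaporization, kcal/kg

def net_heat_value(gross_kcal_per_kg, moisture_fraction):
    """Heat from the dry matter, minus energy spent evaporating the moisture."""
    dry = gross_kcal_per_kg * (1.0 - moisture_fraction)
    return dry - moisture_fraction * LATENT_HEAT

for w in (0.0, 0.2, 0.4):
    print(f"waste fuel at {w:.0%} moisture: {net_heat_value(4000, w):.0f} kcal/kg")
```

At 40% moisture a nominal 4,000 kcal/kg fuel delivers roughly half its dry-basis heat, which is why uncontrolled moisture variation matters when substituting for coal.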
Initial results from safety testing of US AGR-2 irradiation test fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, Robert Noel; Hunn, John D.; Baldwin, Charles A.; ...
2017-08-18
Two cylindrical compacts containing tristructural isotropic (TRISO)-coated particles with kernels that contained a mixture of uranium carbide and uranium oxide (UCO) and two compacts with UO2-kernel TRISO particles have undergone 1600°C safety testing. These compacts were irradiated in the US Advanced Gas Reactor Fuel Development and Qualification Program's second irradiation test (AGR-2). The time-dependent releases of several radioisotopes (110mAg, 134Cs, 137Cs, 154Eu, 155Eu, 90Sr, and 85Kr) were monitored while heating the fuel specimens to 1600°C in flowing helium for 300 h. The UCO compacts behaved similarly to previously reported 1600°C-safety-tested UCO compacts from the AGR-1 irradiation. No failed TRISO or failed SiC were detected (based on krypton and cesium release), and cesium release through intact SiC was very low. Release behavior of silver, europium, and strontium appeared to be dominated by inventory originally released through intact coating layers during irradiation but retained in the compact matrix until it was released during safety testing. Both UO2 compacts exhibited cesium release from multiple particles whose SiC failed during the safety test. Europium and strontium release from these two UO2 compacts appeared to be dominated by release from the particles with failed SiC. Silver release was characteristically like the release from the UCO compacts in that an initial release of the majority of silver trapped in the matrix occurred during ramping to 1600°C. However, additional silver release was observed later in the safety testing due to the UO2 TRISO with failed SiC. Failure of the SiC layer in the UO2 fuel appears to have been dominated by CO corrosion, as opposed to the palladium degradation observed in AGR-1 UCO fuel.
Coalescence of repelling colloidal droplets: a route to monodisperse populations.
Roger, Kevin; Botet, Robert; Cabane, Bernard
2013-05-14
Populations of droplets or particles dispersed in a liquid may evolve through Brownian collisions, aggregation, and coalescence. We have found a set of conditions under which these populations evolve spontaneously toward a narrow size distribution. The experimental system consists of poly(methyl methacrylate) (PMMA) nanodroplets dispersed in a solvent (acetone) + nonsolvent (water) mixture. These droplets carry electrical charges, located on the ionic end groups of the macromolecules. We used time-resolved small angle X-ray scattering to determine their size distribution. We find that the droplets grow through coalescence events: the average radius ⟨R⟩ increases logarithmically with elapsed time while the relative width σR/⟨R⟩ of the distribution decreases as the inverse square root of ⟨R⟩. We interpret this evolution as resulting from coalescence events that are hindered by ionic repulsions between droplets. We generalize this evolution through a simulation of the Smoluchowski kinetic equation, with a kernel that takes into account the interactions between droplets. In the case of vanishing or attractive interactions, all droplet encounters lead to coalescence. The corresponding kernel leads to the well-known "self-preserving" particle distribution of the coalescence process, where σR/⟨R⟩ increases to a plateau value. However, for droplets that interact through long-range ionic repulsions, "large + small" droplet encounters are more successful at coalescence than "large + large" encounters. We show that the corresponding kernel leads to a particular scaling of the droplet-size distribution, known as the "second-scaling law" in the theory of critical phenomena, where σR/⟨R⟩ decreases as 1/√⟨R⟩ and becomes independent of the initial distribution. We argue that this scaling explains the narrow size distributions of colloidal dispersions that have been synthesized through aggregation processes.
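The Smoluchowski kinetic equation mentioned above can be integrated numerically for discrete aggregate sizes. The minimal sketch below uses a constant coalescence kernel (the trivial case where all encounters succeed); the paper's kernel additionally encodes long-range ionic repulsion, which is what produces the narrowing distribution. Parameters are illustrative.

```python
import numpy as np

# Discrete Smoluchowski coagulation, constant kernel, explicit Euler stepping.
# c[i] is the number concentration of aggregates containing i monomers.
nmax = 200                          # largest aggregate size tracked
c = np.zeros(nmax + 1)
c[1] = 1.0                          # start from monomers only
K, dt, steps = 1.0, 0.01, 500       # kernel value, time step, step count

for _ in range(steps):
    # gain[i] = (1/2) K sum_j c[j] c[i-j]  — pair j + (i - j) coalesces into i
    gain = 0.5 * K * np.convolve(c, c)[: nmax + 1]
    # loss[i] = K c[i] N               — i coalesces with any other aggregate
    loss = K * c * c.sum()
    c = c + dt * (gain - loss)

sizes = np.arange(nmax + 1)
mass = float((sizes * c).sum())     # conserved, up to truncation at nmax
number = float(c[1:].sum())
print(f"mass retained: {mass:.3f}, mean aggregate size: {mass / number:.2f}")
```

For the constant kernel the total number decays as N(t) = N0/(1 + N0·K·t/2), so after t = 5 the mean size is near 3.5 monomers; swapping in a size- and charge-dependent kernel is what changes the shape of the resulting distribution.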
SU-E-T-510: Calculation of High Resolution and Material-Specific Photon Energy Deposition Kernels.
Huang, J; Childress, N; Kry, S
2012-06-01
To calculate photon energy deposition kernels (EDKs) used for convolution/superposition dose calculation at a higher resolution than the original Mackie et al. 1988 kernels and to calculate material-specific kernels that describe how energy is transported and deposited by secondary particles when the incident photon interacts in a material other than water. The high resolution EDKs for various incident photon energies were generated using the EGSnrc user-code EDKnrc, which forces incident photons to interact at the center of a 60 cm radius sphere of water. The simulation geometry is essentially the same as the original Mackie calculation but with a greater number of scoring voxels (48 radial, 144 angular bins). For the material-specific EDKs, incident photons were forced to interact at the center of a 1 mm radius sphere of material (lung, cortical bone, silver, or titanium) surrounded by a 60 cm radius water sphere, using the original scoring voxel geometry implemented by Mackie et al. 1988 (24 radial, 48 angular bins). Our Monte Carlo-calculated high resolution EDKs showed excellent agreement with the Mackie kernels, with our kernels providing more information about energy deposition close to the interaction site. Furthermore, our EDKs resulted in smoother dose deposition functions due to the finer resolution and greater number of simulation histories. The material-specific EDK results show that the angular distribution of energy deposition is different for incident photons interacting in different materials. Calculated from the angular dose distribution for 300 keV incident photons, the expected polar angle for dose deposition (
Seismic Imaging of VTI, HTI and TTI based on Adjoint Methods
NASA Astrophysics Data System (ADS)
Rusmanugroho, H.; Tromp, J.
2014-12-01
Recent studies show that isotropic seismic imaging based on the adjoint method reduces the low-frequency artifacts caused by diving waves, which commonly occur in two-way wave-equation migration, such as Reverse Time Migration (RTM). Here, we derive new expressions of sensitivity kernels for Vertical Transverse Isotropy (VTI) using the Thomsen parameters (ɛ, δ, γ) plus the P- and S-wave speeds (α, β), as well as via the Chen & Tromp (GJI 2005) parameters (A, C, N, L, F). For Horizontal Transverse Isotropy (HTI), these parameters depend on an azimuthal angle φ, where the tilt angle θ is equivalent to 90°, and for Tilted Transverse Isotropy (TTI), these parameters depend on both the azimuth and tilt angles. We calculate sensitivity kernels for each of these two approaches. Individual kernels ("images") are numerically constructed based on the interaction between the regular and adjoint wavefields in smoothed models, which are in practice estimated through Full-Waveform Inversion (FWI). The final image is obtained as a result of summing all shots, which are well distributed to sample the target model properly. The impedance kernel, which is a sum of sensitivity kernels of density and the Thomsen or Chen & Tromp parameters, looks crisp and promising for seismic imaging. The other kernels suffer from low-frequency artifacts, similar to traditional seismic imaging conditions. However, all sensitivity kernels are important for estimating the gradient of the misfit function, which, in combination with a standard gradient-based inversion algorithm, is used to minimize the objective function in FWI.
Omnibus Risk Assessment via Accelerated Failure Time Kernel Machine Modeling
Sinnott, Jennifer A.; Cai, Tianxi
2013-01-01
Summary Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai et al., 2011). In this paper, we derive testing and prediction methods for KM regression under the accelerated failure time model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. PMID:24328713
A thermodynamic approach for advanced fuels of gas-cooled reactors
NASA Astrophysics Data System (ADS)
Guéneau, C.; Chatain, S.; Gossé, S.; Rado, C.; Rapaud, O.; Lechelle, J.; Dumas, J. C.; Chatillon, C.
2005-09-01
For both high temperature reactor (HTR) and gas cooled fast reactor (GFR) systems, the high operating temperature in normal and accidental conditions necessitates the assessment of the thermodynamic data and associated phase diagrams for the complex system constituted of the fuel kernel, the inert materials and the fission products. A classical CALPHAD approach, coupling experiments and thermodynamic calculations, is proposed. Some example studies are presented, dealing with CO and CO2 gas formation during the chemical interaction of [UO2±x/C] in the HTR particle, and with the chemical compatibility of the couples [UN/SiC], [(U, Pu)N/SiC], [(U, Pu)N/TiN] for the GFR system. A project for the constitution of a thermodynamic database for advanced fuels of gas-cooled reactors is proposed.
Contribution to the identification of pyrolysis byproducts in fluidized bed soot and in pyrocarbon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfrum, E.; Rottmann, J.; Bueker, I.
1973-01-15
In order to develop improved fuel particles, pyrolysis byproducts of both the pyrocarbon separated in fluidized beds and the resulting soot were studied. The aim was to study the separation mechanism of pyrocarbon on fuel kernels during the thermal decomposition of low hydrocarbons. This study referred to pyrolysis products of acetylene and propylene. The extraction was performed with various methods. The extracts were separated gas-chromatographically and mass-spectrometrically; the single components were partially identified. 21 polycyclic and aromatic hydrocarbons were clearly identified in soot. Beyond that, pyrocarbon contains still higher molecular polycyclic compounds. (18 figures, 12 tables, 34 references) (auth)
How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.
Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J
2014-09-01
Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PsycINFO Database Record (c) 2014 APA, all rights reserved.
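Silverman's rule of thumb, one of the bandwidth selectors compared above, is simple enough to sketch directly along with a plain Gaussian KDE. This is a numpy-only illustration; the article's simulations additionally evaluate plug-in, cross-validation, and adaptive selectors.

```python
import numpy as np

# Silverman's rule-of-thumb bandwidth: h = 0.9 * min(sd, IQR/1.349) * n^(-1/5).
def silverman_bandwidth(x):
    n = len(x)
    sd = x.std(ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(sd, iqr / 1.349) * n ** (-1 / 5)

# Plain (fixed-bandwidth) Gaussian kernel density estimate on a grid.
def kde(grid, x, h):
    z = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
h = silverman_bandwidth(x)
grid = np.linspace(-4.0, 4.0, 201)
dens = kde(grid, x, h)
area = dens.sum() * (grid[1] - grid[0])     # Riemann check: should be ~1
print(f"h = {h:.3f}, area under KDE ≈ {area:.3f}")
```

Because the rule is tuned to near-normal data, it tends to oversmooth bimodal or skewed densities, which is the failure mode motivating the cross-validated and adaptive alternatives studied in the article.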
Bissacco, Alessandro; Chiuso, Alessandro; Soatto, Stefano
2007-11-01
We address the problem of performing decision tasks, and in particular classification and recognition, in the space of dynamical models in order to compare time series of data. Motivated by the application of recognition of human motion in image sequences, we consider a class of models that include linear dynamics, both stable and marginally stable (periodic), both minimum and non-minimum phase, driven by non-Gaussian processes. This requires extending existing learning and system identification algorithms to handle periodic modes and nonminimum phase behavior, while taking into account higher-order statistics of the data. Once a model is identified, we define a kernel-based cord distance between models that includes their dynamics, their initial conditions as well as input distribution. This is made possible by a novel kernel defined between two arbitrary (non-Gaussian) distributions, which is computed by efficiently solving an optimal transport problem. We validate our choice of models, inference algorithm, and distance on the tasks of human motion synthesis (sample paths of the learned models), and recognition (nearest-neighbor classification in the computed distance). However, our work can be applied more broadly where one needs to compare historical data while taking into account periodic trends, non-minimum phase behavior, and non-Gaussian input distributions.
A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks
Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong
2016-01-01
In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparsity of the model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of model sparsity and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298
Horner, T A; Dively, G P; Herbert, D A
2003-06-01
Helicoverpa zea (Boddie) development, survival, and feeding injury in MON810 transgenic ears of field corn (Zea mays L.) expressing Bacillus thuringiensis variety kurstaki (Bt) Cry1Ab endotoxins were compared with non-Bt ears at four geographic locations over two growing seasons. Expression of Cry1Ab endotoxin resulted in overall reductions in the percentage of damaged ears by 33% and in the amount of kernels consumed by 60%. Bt-induced effects varied significantly among locations, partly because of the overall level and timing of H. zea infestations, condition of silk tissue at the time of egg hatch, and the possible effects of plant stress. Larvae feeding on Bt ears produced scattered, discontinuous patches of partially consumed kernels, which were arranged more linearly than the compact feeding patterns in non-Bt ears. The feeding patterns suggest that larvae in Bt ears are moving about sampling kernels more frequently than larvae in non-Bt ears. Because not all kernels express the same level of endotoxin, the spatial heterogeneity of toxin distribution within Bt ears may provide an opportunity for development of behavioral responses in H. zea to avoid toxin. MON810 corn suppressed the establishment and development of H. zea to late instars by at least 75%. This level of control is considered a moderate dose, which may increase the risk of resistance development in areas where MON810 corn is widely adopted and H. zea overwinters successfully. Sublethal effects of MON810 corn resulted in prolonged larval and prepupal development, smaller pupae, and reduced fecundity of H. zea. The moderate dose effects and the spatial heterogeneity of toxin distribution among kernels could increase the additive genetic variance for both physiological and behavioral resistance in H. zea populations. Implications of localized population suppression are discussed.
Accelerating the Original Profile Kernel.
Hamp, Tobias; Goldberg, Tatyana; Rost, Burkhard
2013-01-01
One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical applications of large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a maximal speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster and render the kernel as possibly the top contender in a low ratio of speed/performance. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel.
Delimiting Areas of Endemism through Kernel Interpolation
Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.
2015-01-01
We propose a new approach for identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distribution of species through a kernel interpolation of centroids of species distribution and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE was demonstrated to be effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
The Potential of Palm Oil Waste Biomass in Indonesia in 2020 and 2030
NASA Astrophysics Data System (ADS)
Hambali, E.; Rivai, M.
2017-05-01
During replanting activity in oil palm plantations, biomass including palm fronds and trunks is produced. In palm oil mills, during the conversion of fresh fruit bunches (FFB) into crude palm oil (CPO), several kinds of waste, including empty fruit bunch (EFB), mesocarp fiber (MF), palm kernel shell (PKS), palm kernel meal (PKM), and palm oil mill effluent (POME), are produced. The production of these wastes is abundant, as oil palm plantation areas, FFB production, and palm oil mills are spread across 22 provinces in Indonesia. These wastes have economic value, as they can be utilized as sources of alternative fuel, fertilizer, chemical compounds, and biomaterials. Therefore, breakthrough studies need to be done in order to improve the added value of oil palm, minimize the waste, and make the oil palm industry more sustainable.
Dual-pulse laser ignition of ethylene-air mixtures in a supersonic combustor.
Yang, Leichao; An, Bin; Liang, Jianhan; Li, Xipeng; Wang, Zhenguo
2018-04-02
To reduce the energy of an individual laser pulse, dual-pulse laser ignitions (LIs) at various pulse intervals were investigated in a Mach 2.92 scramjet engine fueled with ethylene. For comparison, experiments on a single-pulse LI were also performed. Schlieren visualization and high-speed photography were employed to observe the ignition processes simultaneously. The results indicate that the energy of an individual laser pulse can be reduced by half via a dual-pulse LI method as compared with a single-pulse LI with the same total energy. The reduction of the individual laser pulse energy relaxes the requirements on the laser source and the beam delivery system, which facilitates the practical application of LI in hypersonic vehicles. A pulse interval shorter than 40 μs is suggested for dual-pulse LI in the present study. Because of the intense heat loss and radical dissipation in high-speed flows, the pulse interval for dual-pulse LI should be short enough to narrow the spatial distribution of the initial flame kernel.
Combustor with two stage primary fuel assembly
Sharifi, Mehran; Zolyomi, Wendel; Whidden, Graydon Lane
2000-01-01
A combustor for a gas turbine having first and second passages for pre-mixing primary fuel and air supplied to a primary combustion zone. The flow of fuel to the first and second pre-mixing passages is separately regulated using a single annular fuel distribution ring having first and second row of fuel discharge ports. The interior portion of the fuel distribution ring is divided by a baffle into first and second fuel distribution manifolds and is located upstream of the inlets to the two pre-mixing passages. The annular fuel distribution ring is supplied with fuel by an annular fuel supply manifold, the interior portion of which is divided by a baffle into first and second fuel supply manifolds. A first flow of fuel is regulated by a first control valve and directed to the first fuel supply manifold, from which the fuel is distributed to first fuel supply tubes that direct it to the first fuel distribution manifold. From the first fuel distribution manifold, the first flow of fuel is distributed to the first row of fuel discharge ports, which direct it into the first pre-mixing passage. A second flow of fuel is regulated by a second control valve and directed to the second fuel supply manifold, from which the fuel is distributed to second fuel supply tubes that direct it to the second fuel distribution manifold. From the second fuel distribution manifold, the second flow of fuel is distributed to the second row of fuel discharge ports, which direct it into the second pre-mixing passage.
A simple method for computing the relativistic Compton scattering kernel for radiative transfer
NASA Technical Reports Server (NTRS)
Prasad, M. K.; Kershaw, D. S.; Beason, J. D.
1986-01-01
Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.
Data-Driven Hierarchical Structure Kernel for Multiscale Part-Based Object Recognition
Wang, Botao; Xiong, Hongkai; Jiang, Xiaoqian; Zheng, Yuan F.
2017-01-01
Detecting generic object categories in images and videos is a fundamental issue in computer vision. However, it faces challenges from inter- and intraclass diversity, as well as distortions caused by viewpoints, poses, deformations, and so on. To handle object variations, this paper constructs a structure kernel and proposes a multiscale part-based model incorporating the discriminative power of kernels. The structure kernel would measure the resemblance of part-based objects in three aspects: 1) the global similarity term to measure the resemblance of the global visual appearance of relevant objects; 2) the part similarity term to measure the resemblance of the visual appearance of distinctive parts; and 3) the spatial similarity term to measure the resemblance of the spatial layout of parts. In essence, the deformation of parts in the structure kernel is penalized in a multiscale space with respect to horizontal displacement, vertical displacement, and scale difference. Part similarities are combined with different weights, which are optimized efficiently to maximize the intraclass similarities and minimize the interclass similarities by the normalized stochastic gradient ascent algorithm. In addition, the parameters of the structure kernel are learned during the training process with regard to the distribution of the data in a more discriminative way. With flexible part sizes on scale and displacement, it can be more robust to the intraclass variations, poses, and viewpoints. Theoretical analysis and experimental evaluations demonstrate that the proposed multiscale part-based representation model with structure kernel exhibits accurate and robust performance, and outperforms state-of-the-art object classification approaches. PMID:24808345
Omnibus risk assessment via accelerated failure time kernel machine modeling.
Sinnott, Jennifer A; Cai, Tianxi
2013-12-01
Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.
Size and moisture distribution characteristics of walnuts and their components
USDA-ARS?s Scientific Manuscript database
The objective of this study was to determine the size characteristics and moisture content (MC) distributions of individual walnuts and their components, including hulls, shells and kernels under different harvest conditions. Measurements were carried out for three walnut varieties, Tulare, Howard a...
Preliminary CFD study of Pebble Size and its Effect on Heat Transfer in a Pebble Bed Reactor
NASA Astrophysics Data System (ADS)
Jones, Andrew; Enriquez, Christian; Spangler, Julian; Yee, Tein; Park, Jungkyu; Farfan, Eduardo
2017-11-01
In pebble bed reactors, the typical pebble diameter is 6 cm, and within each pebble are thousands of nuclear fuel kernels. However, the efficiency of the reactor does not depend solely on the number of fuel kernels within each graphite sphere; it also depends on the type and motion of the coolant within the voids between the spheres and the reactor itself. In this work a physical analysis of the pebble bed nuclear reactor's fluid dynamics is undertaken using Computational Fluid Dynamics software. The primary goal of this work is to observe the relationship between different pebble diameters in an idealized alignment and the thermal transport efficiency of the reactor. The idealized model consists of stacked columns of eight pebbles fixed at the inlet of the reactor. Two pebble sizes, 4 cm and 6 cm, will be studied; helium will be supplied as coolant at a fixed flow rate of 96 kg/s, and fixed pebble surface temperatures will be used. Comparisons will then be made to evaluate the efficiency of the coolant in transporting heat for the two pebble sizes.
Online Distributed Learning Over Networks in RKH Spaces Using Random Fourier Features
NASA Astrophysics Data System (ADS)
Bouboulis, Pantelis; Chouvardas, Symeon; Theodoridis, Sergios
2018-04-01
We present a novel diffusion scheme for online kernel-based learning over networks. So far, a major drawback of any online learning algorithm operating in a reproducing kernel Hilbert space (RKHS) has been the need to update a growing number of parameters as time iterations evolve. Besides complexity, this leads to an increased need for communication resources in a distributed setting. In contrast, the proposed method approximates the solution as a fixed-size vector (of larger dimension than the input space) using Random Fourier Features. This paves the way for standard linear combine-then-adapt techniques. To the best of our knowledge, this is the first time that a complete protocol for distributed online learning in RKHS is presented. Conditions for asymptotic convergence and boundedness of the networkwise regret are also provided. Simulated tests illustrate the performance of the proposed scheme.
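A minimal sketch of the fixed-size approximation the abstract relies on: Random Fourier Features for a Gaussian kernel. The kernel choice, bandwidth, and feature count are illustrative assumptions; the inner product of the fixed-size feature vectors approximates the kernel value:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, gamma = 5, 4000, 0.5  # input dim, feature count, kernel width (assumed)

# Random Fourier Features: sample frequencies from the Fourier transform of
# the Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2), i.e. N(0, 2*gamma*I).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def z(x):
    """Fixed-size feature map: z(x) @ z(y) approximates k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-gamma * np.sum((x - y) ** 2))
approx = float(z(x) @ z(y))
```

Because z(x) has fixed dimension D regardless of how many samples have been seen, each network node can run standard linear adaptive updates on it.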
Erinjery, Joseph J; Kavana, T S; Singh, Mewa
2015-01-01
The distribution and availability of food was examined to see how it influenced ranging patterns and sleeping site selection in a group of lion-tailed macaques. The home range and core area were 130.48 ha (95% kernel) and 26.68 ha (50% kernel) respectively. The lion-tailed macaques had a longer day range, had a greater number of sleeping sites and used more core areas in the summer as compared to the monsoon and the post-monsoon seasons. The ranging patterns and sleeping site use were influenced by the major food resources used in a particular season. The ranging was mainly influenced by Artocarpus heterophyllus in monsoon, Cullenia exarillata and Toona ciliata in post- monsoon, and Artocarpus heterophyllus and Ficus amplissima in summer. The distribution of these four plant species is, therefore, critical to ranging, and thus to conservation of the lion-tailed macaque.
Thermomechanics of candidate coatings for advanced gas reactor fuels
NASA Astrophysics Data System (ADS)
Nosek, A.; Conzen, J.; Doescher, H.; Martin, C.; Blanchard, J.
2007-09-01
Candidate fuel/coating combinations for an advanced, coated-fuel particle for a gas-cooled fast reactor (GFR) have been evaluated. These all-ceramic fuel forms consist of a fuel kernel made of UC or UN, surrounded with two shells (a buffer and a coating) made of TiC, SiC, ZrC, TiN, or ZrN. These carbides and nitrides are analyzed with finite element models to determine the stresses produced in the micro fuel particles from differential thermal expansion, fission gas release, swelling, and creep during particle fabrication and reactor operation. This study will help determine the feasibility of different fuel and coating combinations and identify the critical loads. The analysis shows that differential thermal expansion of the fuel and coating dictate the amount of stress for changing temperatures (such as during fabrication), and that the coating creep is able to mitigate an otherwise overwhelming amount of stress from fuel swelling. Because fracture is a likely mode of failure, a fracture mechanics study is also included to identify the relative likelihood of catastrophic fracture of the coating and resulting gas release. Overall, the analysis predicts that UN/ZrC is the best thermomechanical fuel/coating combination for mitigating the stress within the new fuel particle, but UN/TiN and UN/ZrN could also be strong candidates if their unknown creep rates are sufficiently large.
A high performance parallel algorithm for 1-D FFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, R.C.; Gustavson, F.G.; Zubair, M.
1994-12-31
In this paper the authors propose a parallel high performance FFT algorithm based on a multi-dimensional formulation. They use this to solve a commonly encountered FFT-based kernel on a distributed memory parallel machine, the IBM scalable parallel system, SP1. The kernel requires a forward FFT computation of an input sequence, multiplication of the transformed data by a coefficient array, and finally an inverse FFT computation of the resultant data. They show that the multi-dimensional formulation helps in reducing the communication costs and also improves the single node performance by effectively utilizing the memory system of the node. They implemented this kernel on the IBM SP1 and observed a performance of 1.25 GFLOPS on a 64-node machine.
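The kernel described in this abstract (forward FFT, pointwise multiplication by a coefficient array, inverse FFT) can be sketched in a few lines; when the coefficient array is itself the transform of a filter, the sequence of operations is exactly a circular convolution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
x = rng.normal(size=n)    # input sequence
h = rng.normal(size=n)    # filter whose transform serves as the coefficient array
coeff = np.fft.fft(h)     # coefficient array in the frequency domain

# The kernel from the abstract: forward FFT, pointwise multiply by the
# coefficient array, inverse FFT.
y = np.fft.ifft(np.fft.fft(x) * coeff).real

# Direct O(n^2) circular convolution for comparison.
y_direct = np.array([sum(x[j] * h[(i - j) % n] for j in range(n))
                     for i in range(n)])
```

The parallel algorithm in the paper distributes this same computation via a multi-dimensional index mapping; the sequential sketch above only fixes the semantics.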
Voronoi cell patterns: Theoretical model and applications
NASA Astrophysics Data System (ADS)
González, Diego Luis; Einstein, T. L.
2011-11-01
We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We use our model to describe the Voronoi cell patterns of several systems. Specifically, we study the island nucleation with irreversible attachment, the 1D car-parking problem, the formation of second-level administrative divisions, and the pattern formed by the Paris Métro stations.
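In 1D the Voronoi cell sizes studied in this abstract have an especially simple form: each cell extends between the midpoints of neighboring points. A short sketch (domain and point process are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
pts = np.sort(rng.uniform(0.0, 1.0, size=200))

# 1D Voronoi cells of points on [0, 1]: each cell runs between the midpoints
# of neighboring points; the domain edges close the first and last cells.
mids = (pts[:-1] + pts[1:]) / 2
edges = np.concatenate(([0.0], mids, [1.0]))
sizes = np.diff(edges)
```

The empirical distribution of `sizes` is the quantity the fragmentation model aims to reproduce.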
Voronoi Cell Patterns: theoretical model and application to submonolayer growth
NASA Astrophysics Data System (ADS)
González, Diego Luis; Einstein, T. L.
2012-02-01
We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We apply our model to describe the Voronoi cell patterns of island nucleation for critical island sizes i=0,1,2,3. Experimental results for the Voronoi cells of InAs/GaAs quantum dots are also described by our model.
Efficient High Performance Collective Communication for Distributed Memory Environments
ERIC Educational Resources Information Center
Ali, Qasim
2009-01-01
Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…
Comparative analysis of genetic architectures for nine developmental traits of rye.
Masojć, Piotr; Milczarski, P; Kruszona, P
2017-08-01
Genetic architectures of plant height, stem thickness, spike length, awn length, heading date, thousand-kernel weight, kernel length, leaf area and chlorophyll content were aligned on the DArT-based high-density map of the 541 × Ot1-3 RILs population of rye using the genes interaction assorting by divergent selection (GIABDS) method. Complex sets of QTL for particular traits contained 1-5 loci of the epistatic D class and 10-28 loci of the hypostatic, mostly R and E, classes, controlling trait variation through D-E or D-R types of two-loci interactions. QTL were distributed on each of the seven rye chromosomes in unique positions or as coinciding loci for 2-8 traits. Detection of considerable numbers of the reversed (D', E' and R') classes of QTL might be attributed to the transgression effects observed for most of the studied traits. The first examples of the E* and F QTL classes defined in the model are reported for awn length, leaf area, thousand-kernel weight and kernel length. The results of this study extend experimental data to 11 quantitative traits (together with pre-harvest sprouting and alpha-amylase activity) for which genetic architectures fit the model of the mechanism underlying allele distribution within tails of bi-parental populations. They are also a valuable starting point for map-based search of genes underlying detected QTL and for planning advanced marker-assisted multi-trait breeding strategies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Algan, O; Ahmad, S
Purpose: To model patient motion and produce four-dimensional (4D) optimized dose distributions that consider motion-artifacts in the dose calculation during the treatment planning process. Methods: An algorithm for dose calculation is developed where patient motion is considered in dose calculation at the stage of treatment planning. First, optimal dose distributions are calculated for the stationary target volume, where the dose distributions are optimized considering intensity-modulated radiation therapy (IMRT). Second, a convolution kernel is produced from the best-fitting curve that matches the motion trajectory of the patient. Third, the motion kernel is deconvolved with the initial dose distribution optimized for the stationary target to produce a dose distribution that is optimized in four dimensions. This algorithm is tested with measured doses using a mobile phantom that moves with controlled motion patterns. Results: A motion-optimized dose distribution is obtained from the initial dose distribution of the stationary target by deconvolution with the motion kernel of the mobile target. This motion-optimized dose distribution is equivalent to that optimized for the stationary target using IMRT. The motion-optimized and measured dose distributions are tested with the gamma index with a passing rate of >95% considering 3% dose-difference and 3 mm distance-to-agreement. If the dose delivery per beam takes place over several respiratory cycles, then the spread-out of the dose distributions depends only on the motion amplitude and is not affected by motion frequency and phase. This algorithm is limited to motion amplitudes that are smaller than the length of the target along the direction of motion. Conclusion: An algorithm is developed to optimize dose in 4D. Besides IMRT, which provides optimal dose coverage for a stationary target, it extends dose optimization to 4D by considering target motion. This algorithm provides an alternative to motion management techniques such as beam-gating or breath-holding and has potential applications in adaptive radiation therapy.
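A sketch of the forward relationship the algorithm inverts: a static 1D dose profile blurred by a motion kernel (the probability distribution of target displacement over the breathing cycle). The grid, the idealized profile, and the Gaussian motion kernel are assumed for illustration:

```python
import numpy as np

# Idealized static dose profile on a 1D grid (assumed geometry).
x = np.linspace(-5.0, 5.0, 101)
dose = ((x > -2.0) & (x < 2.0)).astype(float)

# Motion kernel: PDF of target displacement (assumed Gaussian), normalized.
kernel = np.exp(-x**2 / 2.0)
kernel /= kernel.sum()

# Forward problem: motion smears the static dose by convolution. The planning
# algorithm in the abstract deconvolves this step to pre-compensate the plan.
blurred = np.convolve(dose, kernel, mode="same")
```

Convolution preserves the integral dose but lowers the peak and spreads dose beyond the target edges, which is the spread-out the abstract attributes to motion amplitude.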
Production of Low Enriched Uranium Nitride Kernels for TRISO Particle Irradiation Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMurray, J. W.; Silva, C. M.; Helmreich, G. W.
2016-06-01
A large batch of UN microspheres to be used as kernels for TRISO particle fuel was produced using carbothermic reduction and nitriding of a sol-gel feedstock bearing tailored amounts of low-enriched uranium (LEU) oxide and carbon. The process parameters, established in a previous study, produced phase-pure NaCl-structure UN with dissolved C on the N sublattice. The composition, calculated by refinement of the lattice parameter from X-ray diffraction, was determined to be UC0.27N0.73. The final accepted product weighed 197.4 g. The microspheres had an average diameter of 797±1.35 μm and a composite mean theoretical density of 89.9±0.5% for a solid solution of UC and UN with the same atomic ratio; both values are reported with their corresponding calculated standard error.
Giordano, Debora; Reyneri, Amedeo; Blandino, Massimo
2016-03-30
Wholegrain cereals are an important source of folates. In this study, total folate was analysed in pearled fractions of barley and wheat cultivars employing AOAC Official Method 2004.05. In particular, the distribution of folate in the kernels was evaluated in three barley cultivars (two hulled types and a hulless one as well as two- and six-row types) and in a common and a durum wheat cultivar. A noticeable variation in the folate content was observed between the barley [653-1033 ng g(-1) dry matter (DM)] and wheat cultivars (1024-1119 ng g(-1) DM). The highest folate content was detected in the hulless barley cultivar (1033 ng g(-1) DM). A significant reduction in total folate, from 63% to 86%, was observed in all cultivars from the outermost to the innermost pearled fractions. Results proved that folates are mainly present in the germ and in the outer layers of the kernel. This is the first study reporting the folate distribution in kernels of both common and durum wheat and in a hulless barley cultivar. Results suggest that the pearling process could be useful for the selection of intermediate fractions that could be used in order to develop folate-enhanced ingredients and products. © 2015 Society of Chemical Industry.
Sub-Network Kernels for Measuring Similarity of Brain Connectivity Networks in Disease Diagnosis.
Jie, Biao; Liu, Mingxia; Zhang, Daoqiang; Shen, Dinggang
2018-05-01
As a simple representation of interactions among distributed brain regions, brain networks have been widely applied to automated diagnosis of brain diseases, such as Alzheimer's disease (AD) and its early stage, i.e., mild cognitive impairment (MCI). In brain network analysis, a challenging task is how to measure the similarity between a pair of networks. Although many graph kernels (i.e., kernels defined on graphs) have been proposed for measuring the topological similarity of a pair of brain networks, most of them are defined on general graphs, thus ignoring the uniqueness of each node in brain networks. That is, each node in a brain network denotes a particular brain region, which is a specific characteristic of brain networks. Accordingly, in this paper, we construct a novel sub-network kernel for measuring the similarity between a pair of brain networks and then apply it to brain disease classification. Different from current graph kernels, our proposed sub-network kernel not only takes into account the inherent characteristics of brain networks, but also captures multi-level (from local to global) topological properties of nodes in brain networks, which are essential for defining the similarity measure of brain networks. To validate the efficacy of our method, we perform extensive experiments on subjects with baseline functional magnetic resonance imaging data obtained from the Alzheimer's disease neuroimaging initiative database. Experimental results demonstrate that the proposed method outperforms several state-of-the-art graph-based methods in MCI classification.
Scuba: scalable kernel-based gene prioritization.
Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio
2018-01-25
The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large amount of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.
Flexibly imposing periodicity in kernel independent FMM: A multipole-to-local operator approach
NASA Astrophysics Data System (ADS)
Yan, Wen; Shelley, Michael
2018-02-01
An important but missing component in the application of the kernel independent fast multipole method (KIFMM) is the capability for flexibly and efficiently imposing singly, doubly, and triply periodic boundary conditions. In most popular packages such periodicities are imposed with the hierarchical repetition of periodic boxes, which may give an incorrect answer due to the conditional convergence of some kernel sums. Here we present an efficient method to properly impose periodic boundary conditions using a near-far splitting scheme. The near-field contribution is directly calculated with the KIFMM method, while the far-field contribution is calculated with a multipole-to-local (M2L) operator which is independent of the source and target point distribution. The M2L operator is constructed with the far-field portion of the kernel function to generate the far-field contribution with the downward equivalent source points in KIFMM. This method guarantees that the sum of the near-field and far-field contributions converges pointwise to results satisfying periodicity and compatibility conditions. The computational cost of the far-field calculation has the same O (N) complexity as the FMM and is designed to be small by reusing the data computed by KIFMM for the near-field. The far-field calculations require no additional control parameters and observe the same theoretical error bound as KIFMM. We present accuracy and timing test results for the Laplace kernel in singly periodic domains and the Stokes velocity kernel in doubly and triply periodic domains.
Lévy processes on a generalized fractal comb
NASA Astrophysics Data System (ADS)
Sandev, Trifce; Iomin, Alexander; Méndez, Vicenç
2016-09-01
Comb geometry, constituted of a backbone and fingers, is one of the simplest paradigms of a two-dimensional structure, where anomalous diffusion can be realized in the framework of Markov processes. However, the intrinsic properties of the structure can destroy this Markovian transport. These effects can be described by the memory and spatial kernels. In particular, the fractal structure of the fingers, which is controlled by the spatial kernel in both the real and the Fourier spaces, leads to the Lévy processes (Lévy flights) and superdiffusion. This generalization of the fractional diffusion is described by the Riesz space fractional derivative. In the framework of this generalized fractal comb model, Lévy processes are considered, and exact solutions for the probability distribution functions are obtained in terms of the Fox H-function for a variety of the memory kernels, and the rate of the superdiffusive spreading is studied by calculating the fractional moments. For a special form of the memory kernels, we also observe a competition between long rests and long jumps. Finally, we consider the fractal structure of the fingers controlled by a Weierstrass function, which leads to a power-law kernel in the Fourier space. This is a special case in which the second moment exists for superdiffusion in this competition between long rests and long jumps.
Monte-Carlo computation of turbulent premixed methane/air ignition
NASA Astrophysics Data System (ADS)
Carmen, Christina Lieselotte
The present work describes the results obtained by a time dependent numerical technique that simulates the early flame development of a spark-ignited premixed, lean, gaseous methane/air mixture with the unsteady spherical flame propagating in homogeneous and isotropic turbulence. The algorithm described is based upon a sub-model developed by an international automobile research and manufacturing corporation in order to analyze turbulence conditions within internal combustion engines. Several developments and modifications to the original algorithm have been implemented, including a revised chemical reaction scheme and the evaluation and calculation of various turbulent flame properties. Solution of the complete set of Navier-Stokes governing equations for a turbulent reactive flow is avoided by reducing the equations to a single transport equation. The transport equation is derived from the Navier-Stokes equations for a joint probability density function, thus requiring no closure assumptions for the Reynolds stresses. A Monte-Carlo method is also utilized to simulate phenomena represented by the probability density function transport equation by use of the method of fractional steps. Gaussian distributions of fluctuating velocity and fuel concentration are prescribed. Attention is focused on the evaluation of the three primary parameters that influence the initial flame kernel growth: the ignition system characteristics, the mixture composition, and the nature of the flow field. Efforts are concentrated on the effects of moderate to intense turbulence on flames within the distributed reaction zone. Results are presented for lean conditions with the fuel equivalence ratio varying from 0.6 to 0.9. The present computational results, including flame regime analysis and the calculation of various flame speeds, provide excellent agreement with results obtained by other experimental and numerical researchers.
Cao, D-S; Zhao, J-C; Yang, Y-N; Zhao, C-X; Yan, J; Liu, S; Hu, Q-N; Xu, Q-S; Liang, Y-Z
2012-01-01
There is a great need to assess the harmful effects or toxicities of chemicals to which man is exposed. In the present paper, the simplified molecular input line entry specification (SMILES) representation-based string kernel, together with the state-of-the-art support vector machine (SVM) algorithm, was used to classify the toxicity of chemicals from the US Environmental Protection Agency Distributed Structure-Searchable Toxicity (DSSTox) database network. In this method, the molecular structure can be directly encoded by a series of SMILES substrings that represent the presence of some chemical elements and different kinds of chemical bonds (double, triple and stereochemistry) in the molecules. Thus, the SMILES string kernel can accurately and directly measure the similarities of molecules by a series of local information hidden in the molecules. Two model validation approaches, five-fold cross-validation and an independent validation set, were used for assessing the predictive capability of our developed models. The results obtained indicate that SVM based on the SMILES string kernel can be regarded as a very promising and alternative modelling approach for potential toxicity prediction of chemicals.
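A generic stand-in for the substring-based similarity described in this abstract is a k-spectrum string kernel: count shared length-k substrings of two SMILES strings. The choice k = 2 is illustrative, and this sketch is not the paper's exact kernel:

```python
from collections import Counter

# k-spectrum string kernel on SMILES strings: the inner product of the two
# strings' length-k substring count vectors (k = 2 is an assumed choice).
def spectrum_kernel(s, t, k=2):
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(n * ct[sub] for sub, n in cs.items())
```

A Gram matrix built from such a kernel can be passed to an SVM that accepts precomputed kernels, which is the general pattern the abstract describes.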
[Crop geometry identification based on inversion of semiempirical BRDF models].
Huang, Wen-jiang; Wang, Jin-di; Mu, Xi-han; Wang, Ji-hua; Liu, Liang-yun; Liu, Qiang; Niu, Zheng
2007-10-01
Investigations have been made on the identification of erect and horizontal varieties by bidirectional canopy reflected spectra and semi-empirical bidirectional reflectance distribution function (BRDF) models. The qualitative effect of leaf area index (LAI) and average leaf angle (ALA) on the crop canopy reflected spectrum was studied. The structure parameter sensitive index (SPEI), based on the weight for the volumetric kernel (fvol), the weight for the geometric kernel (fgeo), and the weight for the constant corresponding to isotropic reflectance (fiso), was defined in the present study for crop geometry identification. However, the weights associated with the kernels of the semi-empirical BRDF model do not have a direct relationship with measurable biophysical parameters. Therefore, efforts have focused on trying to find the relation between these semi-empirical BRDF kernel weights and various vegetation structures. SPEI proved more sensitive for identifying crop geometry structures than the structural scattering index (SSI) and the normalized difference f-index (NDFI), and SPEI could be used to distinguish erect and horizontal geometry varieties. It is therefore feasible to identify horizontal and erect varieties of wheat by the bidirectional canopy reflected spectrum.
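The kernel-driven BRDF form underlying the weights named in this abstract can be sketched directly: reflectance as an isotropic constant plus weighted volumetric and geometric kernels. The numeric values below are placeholders, and real Kvol/Kgeo values depend on the sun-view geometry; the SPEI formula itself is specific to the paper and not reproduced here:

```python
# Kernel-driven (semi-empirical) BRDF sketch: reflectance modeled as
# R = fiso + fvol * Kvol + fgeo * Kgeo. Weights come from fitting the model
# to multi-angle observations; kernel values here are illustrative only.
def brdf(f_iso, f_vol, f_geo, k_vol, k_geo):
    return f_iso + f_vol * k_vol + f_geo * k_geo
```

Indices such as SSI, NDFI, and SPEI are then functions of the fitted weights (fiso, fvol, fgeo) rather than of the raw reflectances.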
Effect of finite sample size on feature selection and classification: a simulation study.
Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping
2010-02-01
The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve Az. The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. 
The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for other conditions. PCA was comparable to or better than SFS and SFFS for LDA at small samples sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
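A minimal sketch of the simulation setting described above: two Gaussian classes with equal covariance and unequal means, a Fisher LDA direction estimated from a small training set, and hold-out AUC on an independent test set. The dimension, sample sizes, and mean separation are illustrative assumptions, not the study's design values:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 50, 5  # training samples per class and feature dimension (assumed)

# Two Gaussian classes: equal identity covariance, means 0 and mu1.
mu1 = np.full(d, 0.8)
Xtr0 = rng.normal(size=(n, d))
Xtr1 = rng.normal(size=(n, d)) + mu1

# Fisher LDA direction: w = S^-1 (m1 - m0) with pooled covariance estimate.
S = (np.cov(Xtr0.T) + np.cov(Xtr1.T)) / 2
w = np.linalg.solve(S, Xtr1.mean(axis=0) - Xtr0.mean(axis=0))

# Hold-out AUC: probability that a positive test score exceeds a negative one.
Xte0 = rng.normal(size=(500, d))
Xte1 = rng.normal(size=(500, d)) + mu1
s0, s1 = Xte0 @ w, Xte1 @ w
auc = float(np.mean(s1[:, None] > s0[None, :]))
```

Repeating this over many training draws and sample sizes gives the bias and variance in hold-out performance that the study quantifies.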
Laser induced spark ignition of methane-oxygen mixtures
NASA Technical Reports Server (NTRS)
Santavicca, D. A.; Ho, C.; Reilly, B. J.; Lee, T.-W.
1991-01-01
Results from an experimental study of laser induced spark ignition of methane-oxygen mixtures are presented. The experiments were conducted at atmospheric pressure and 296 K under laminar premixed and turbulent, incompletely mixed conditions. A pulsed, frequency-doubled Nd:YAG laser was used as the ignition source. Laser sparks with energies of 10 mJ and 40 mJ were used, as well as a conventional electrode spark with an effective energy of 6 mJ. The flame kernel radius was measured as a function of time using pulsed laser shadowgraphy. The initial size of the spark-ignited flame kernel was found to correlate reasonably well with breakdown energy as predicted by the Taylor spherical blast wave model. The subsequent growth rate of the flame kernel was found to increase with time from a value less than to a value greater than the adiabatic, unstretched laminar growth rate. This behavior was attributed to the combined effects of flame stretch and an apparent wrinkling of the flame surface due to the extremely rapid acceleration of the flame. The very large laminar flame speed of methane-oxygen mixtures appears to be the dominant factor affecting the growth rate of spark-ignited flame kernels, with the mode of ignition having only a small effect. Incomplete fuel-oxidizer mixing had a significant effect on the growth rate, one greater than could be accounted for simply by the effect of local variations in the equivalence ratio on the local flame speed.
X-ray Analysis of Defects and Anomalies in AGR-5/6/7 TRISO Particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmreich, Grant W.; Hunn, John D.; Skitt, Darren J.
2017-06-01
Coated particle fuel batches J52O-16-93164, 93165, 93166, 93168, 93169, 93170, and 93172 were produced by Babcock and Wilcox Technologies (BWXT) for possible selection as fuel for the Advanced Gas Reactor Fuel Development and Qualification (AGR) Program's AGR-5/6/7 irradiation test in the Idaho National Laboratory (INL) Advanced Test Reactor (ATR), or may be used for other tests. Each batch was coated in a 150-mm-diameter production-scale fluidized-bed chemical vapor deposition (CVD) furnace. Tristructural isotropic (TRISO) coatings were deposited on 425-μm-nominal-diameter spherical kernels from BWXT lot J52R-16-69317 containing a mixture of 15.4%-enriched uranium carbide and uranium oxide (UCO), with the exception of Batch 93164, which used similar kernels from BWXT lot J52L-16-69316. The TRISO coatings consisted of a ~50% dense carbon buffer layer with 100-μm-nominal thickness, a dense inner pyrolytic carbon (IPyC) layer with 40-μm-nominal thickness, a silicon carbide (SiC) layer with 35-μm-nominal thickness, and a dense outer pyrolytic carbon (OPyC) layer with 40-μm-nominal thickness. Each coated particle batch was sieved to upgrade the particles by removing over-sized and under-sized material, and the upgraded batch was designated by appending the letter A to the end of the batch number (e.g., 93164A). Secondary upgrading by sieving was performed on the upgraded batches to remove specific anomalies identified during analysis for Defective IPyC, and these batches were designated by appending the letter B to the end of the batch number (e.g., 93165B). Following this secondary upgrading, coated particle composite J52R-16-98005 was produced by BWXT as fuel for the AGR Program's AGR-5/6/7 irradiation test in the INL ATR. This composite was comprised of coated particle fuel batches J52O-16-93165B, 93168B, 93169B, and 93170B.
Suitability of point kernel dose calculation techniques in brachytherapy treatment planning
Lakshminarayanan, Thilagam; Subbaiah, K. V.; Thayalan, K.; Kannan, S. E.
2010-01-01
Brachytherapy treatment planning systems (TPS) are necessary to estimate the dose to the target volume and organs at risk (OAR). A TPS is always recommended to account for the effects of the tissue, applicator, and shielding material heterogeneities that exist in applicators. However, most brachytherapy TPS software packages estimate the absorbed dose at a point by considering only the contributions of the individual sources and the source distribution, neglecting the dose perturbations arising from the applicator design and construction. This introduces some degree of uncertainty into dose rate estimates under realistic clinical conditions. In this regard, an attempt is made to explore the suitability of point kernels for brachytherapy dose rate calculations and to develop a new interactive brachytherapy package, named BrachyTPS, suited to clinical conditions. BrachyTPS is an interactive point kernel code package developed to perform independent dose rate calculations that take these heterogeneities into account, using the two-region buildup factors proposed by Kalos. The primary aim of this study is to validate the developed point kernel code package, integrated with treatment planning computational systems, against Monte Carlo (MC) results. In the present work, three brachytherapy applicators commonly used in the treatment of uterine cervical carcinoma, namely (i) the Board of Radiation Isotope and Technology (BRIT) low dose rate (LDR) applicator, (ii) the Fletcher Green type LDR applicator, and (iii) the Fletcher-Williamson high dose rate (HDR) applicator, are studied to test the accuracy of the software. Dose rates computed using the developed code are compared with the corresponding results of the MC simulations. Further, attempts are also made to study the dose rate distribution around a commercially available shielded vaginal applicator set (Nucletron).
The percentage deviations of BrachyTPS-computed dose rate values from the MC results are observed to be within ±5.5% for the BRIT LDR applicator, vary from 2.6% to 5.1% for the Fletcher Green type LDR applicator, and are up to -4.7% for the Fletcher-Williamson HDR applicator. The isodose distribution plots also show good agreement with previously published results. The isodose distributions around the shielded vaginal cylinder computed using the BrachyTPS code show better agreement (less than 2% deviation) with MC results in the unshielded region than in the shielded region, where deviations up to 5% are observed. The present study implies that accurate and fast validation of complicated treatment planning calculations is possible with the point kernel code package. PMID:20589118
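As background, a point-kernel dose-rate calculation of the kind BrachyTPS automates combines inverse-square fall-off, exponential attenuation, and a buildup factor. The sketch below uses the Berger buildup form with hypothetical coefficients; the paper itself uses the two-region Kalos buildup factors, which are not reproduced here:

```python
import numpy as np

# Illustrative point-kernel dose-rate fall-off for an isotropic photon point
# source in an attenuating medium. Buildup uses the Berger form
# B(x) = 1 + a*x*exp(b*x), with x in mean free paths.
# All coefficients below are hypothetical, for illustration only.
mu = 0.096          # linear attenuation coefficient (1/cm), illustrative
a, b = 1.0, 0.05    # hypothetical Berger buildup coefficients
S = 1.0             # source strength term (arbitrary units)

def dose_rate(r_cm):
    x = mu * r_cm                              # depth in mean free paths
    buildup = 1.0 + a * x * np.exp(b * x)      # Berger buildup factor
    return S * buildup * np.exp(-x) / (4.0 * np.pi * r_cm**2)

radii = np.array([1.0, 2.0, 5.0, 10.0])        # distances from source (cm)
rates = dose_rate(radii)
```

At these distances the inverse-square term dominates, so the computed dose rate decreases monotonically with distance even as buildup grows.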
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikell, J; Siman, W; Kappadath, S
2014-06-15
Purpose: 90Y microsphere therapy in liver presents a situation where beta transport is dominant and the tissue is relatively homogeneous. We compare voxel-based absorbed doses from a 90Y kernel to Monte Carlo (MC) using quantitative 90Y bremsstrahlung SPECT/CT as the source distribution. Methods: Liver, normal liver, and tumors were delineated by an interventional radiologist using contrast-enhanced CT registered with 90Y SPECT/CT scans for 14 therapies. The right lung was segmented via region growing. The kernel was generated with 1.04 g/cc soft tissue for a 4.8 mm voxel matching the SPECT. MC simulation materials included air, lung, soft tissue, and bone with varying densities. We report the percent difference between kernel and MC (%Δ(K,MC)) for mean absorbed dose, D70, and V20Gy in total liver, normal liver, tumors, and right lung. We also report %Δ(K,MC) for heterogeneity metrics: coefficient of variation (COV) and D10/D90. The impact of spatial resolution (0, 10, 20 mm FWHM) and lung shunt fraction (LSF) (1, 5, 10, 20%) on the accuracy of MC and kernel doses near the liver-lung interface was modeled in 1D. We report the distance from the interface where errors become <10% of unblurred MC as d10(side of interface, dose calculation, FWHM blurring, LSF). Results: The %Δ(K,MC) for mean, D70, and V20Gy in tumor and liver was <7%, while right lung differences varied from 60-90%. The %Δ(K,MC) for COV was <4.8% for tumor and liver and <54% for the right lung. The %Δ(K,MC) for D10/D90 was <5% for 22/23 tumors. d10(liver,MC,10,1-20) were <9 mm and d10(liver,MC,20,1-20) were <15 mm; both agreed within 3 mm with the kernel. d10(lung,MC,10,20), d10(lung,MC,10,1), d10(lung,MC,20,20), and d10(lung,MC,20,1) were 6, 25, 15, and 34 mm, respectively. Kernel calculations on blurred distributions in lung had errors >10%. Conclusions: Liver and tumor voxel doses with 90Y kernel and MC agree within 7%. Large differences exist between the two methods in the right lung.
Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under Award Number R01CA138986. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
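The voxel-based kernel approach compared against MC above amounts to convolving the measured activity map with a dose-point kernel. The following toy example (synthetic activity and an illustrative isotropic kernel, not a real 90Y beta kernel) shows the mechanics, assuming SciPy is available:

```python
import numpy as np
from scipy.signal import fftconvolve

# Synthetic 3D activity map standing in for a quantitative SPECT volume.
rng = np.random.default_rng(1)
activity = np.zeros((32, 32, 32))
activity[12:20, 12:20, 12:20] = rng.uniform(0.5, 1.0, (8, 8, 8))  # "tumor"

# Small isotropic kernel that falls off with distance (illustrative only).
ax = np.arange(-3, 4)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)
kernel = np.exp(-r)           # arbitrary fall-off, stands in for a beta kernel
kernel /= kernel.sum()        # normalize so the deposited total is preserved

# Voxel dose = activity convolved with the dose-point kernel.
dose = fftconvolve(activity, kernel, mode="same")
mean_tumor_dose = dose[12:20, 12:20, 12:20].mean()
```

Because the nonzero activity plus the kernel radius fits inside the volume, the convolution conserves the total, which mirrors the energy-conservation property of a normalized kernel.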
Keller, Katharina; Mertens, Valerie; Qi, Mian; Nalepa, Anna I; Godt, Adelheid; Savitsky, Anton; Jeschke, Gunnar; Yulikov, Maxim
2017-07-21
Extraction of distance distributions between high-spin paramagnetic centers from relaxation induced dipolar modulation enhancement (RIDME) data is affected by the presence of overtones of dipolar frequencies. As previously proposed, we account for these overtones by using a modified kernel function in Tikhonov regularization analysis. This paper analyzes the performance of such an approach on a series of model compounds with the Gd(III)-PyMTA complex serving as a paramagnetic high-spin label. We describe the calibration of the overtone coefficients for the RIDME kernel, demonstrate the accuracy of distance distributions obtained with this approach, and show that for our series of Gd-rulers the RIDME technique provides more accurate distance distributions than Gd(III)-Gd(III) double electron-electron resonance (DEER). The analysis of RIDME data including harmonic overtones can be performed using the MATLAB-based program OvertoneAnalysis, which is available as open-source software from the web page of ETH Zurich. This approach opens a perspective for the routine use of the RIDME technique with high-spin labels in structural biology and structural studies of other soft matter.
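The Tikhonov step referenced above can be illustrated generically. The sketch below recovers a distance distribution from a toy dipolar-like kernel with a second-derivative smoothness penalty; it uses the electron-electron dipolar constant (~52 MHz nm^3) for the frequency scale but is not the actual RIDME kernel with calibrated overtone coefficients:

```python
import numpy as np

# Toy inversion problem: signal s = K @ P for a distance distribution P(r).
t = np.linspace(0, 2, 200)                        # time axis (us)
r = np.linspace(2, 6, 80)                         # distance axis (nm)
# Toy dipolar-like kernel, nu(r) ~ 52.04/r^3 MHz (no overtones included).
K = np.cos(2 * np.pi * 52.04 * np.outer(t, 1.0 / r**3))

P_true = np.exp(-0.5 * ((r - 4.0) / 0.3) ** 2)    # Gaussian distance peak
s = K @ P_true + 0.01 * np.random.default_rng(2).normal(size=t.size)

# Tikhonov regularization: minimize ||K P - s||^2 + lam ||L P||^2,
# with L a second-difference operator enforcing smoothness.
L = np.diff(np.eye(r.size), n=2, axis=0)
lam = 1.0
A = np.vstack([K, np.sqrt(lam) * L])
b = np.concatenate([s, np.zeros(L.shape[0])])
P_est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The modified-kernel approach of the paper changes K (adding weighted dipolar overtone harmonics) while keeping this regularized least-squares structure intact.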
Zhao, Lijuan; Sun, Youping; Hernandez-Viezcas, Jose A; Hong, Jie; Majumdar, Sanghamitra; Niu, Genhua; Duarte-Gardea, Maria; Peralta-Videa, Jose R; Gardea-Torresdey, Jorge L
2015-03-03
Information about changes in physiological and agronomic parameters through the life cycle of plants exposed to engineered nanoparticles (NPs) is scarce. In this study, corn (Zea mays) plants were cultivated to full maturity in soil amended with either nCeO2 or nZnO at 0, 400, and 800 mg/kg. Gas exchange was monitored every 10 days, and at harvest, bioaccumulation of Ce and Zn in tissues was determined by ICP-OES/MS. The effects of NPs exposure on nutrient concentration and distribution in ears were also evaluated by ICP-OES and μ-XRF. Results showed that nCeO2 at both concentrations did not impact gas exchange in leaves at any growth stage, while nZnO at 800 mg/kg reduced net photosynthesis by 12%, stomatal conductance by 15%, and relative chlorophyll content by 10% at day 20. Yield was reduced by 38% with nCeO2 and by 49% with nZnO. Importantly, μ-XRF mapping showed that nCeO2 changed the allocation of calcium in kernels, compared to controls. In nCeO2 treated plants, Cu, K, Mn, and Zn were mainly localized at the insertion of kernels into cobs, but Ca and Fe were distributed in other parts of the kernels. Results showed that nCeO2 and nZnO reduced corn yield and altered quality of corn.
Using kernel density estimates to investigate lymphatic filariasis in northeast Brazil
Medeiros, Zulma; Bonfim, Cristine; Brandão, Eduardo; Netto, Maria José Evangelista; Vasconcellos, Lucia; Ribeiro, Liany; Portugal, José Luiz
2012-01-01
After more than 10 years of the Global Program to Eliminate Lymphatic Filariasis (GPELF) in Brazil, advances have been seen, but the endemic disease persists as a public health problem. The aim of this study was to describe the spatial distribution of lymphatic filariasis in the municipality of Jaboatão dos Guararapes, Pernambuco, Brazil. An epidemiological survey was conducted in the municipality, and positive filariasis cases identified in this survey were georeferenced in point form, using the GPS. A kernel intensity estimator was applied to identify clusters with greater intensity of cases. We examined 23 673 individuals and 323 individuals with microfilaremia were identified, representing a mean prevalence rate of 1.4%. Around 88% of the districts surveyed presented cases of filarial infection, with prevalences of 0–5.6%. The male population was more affected by the infection, with 63.8% of the cases (P<0.005). Positive cases were found in all age groups examined. The kernel intensity estimator identified the areas of greatest intensity and least intensity of filarial infection cases. The case distribution was heterogeneous across the municipality. The kernel estimator identified spatial clusters of cases, thus indicating locations with greater intensity of transmission. The main advantage of this type of analysis lies in its ability to rapidly and easily show areas with the highest concentration of cases, thereby contributing towards planning, monitoring, and surveillance of filariasis elimination actions. Incorporation of geoprocessing and spatial analysis techniques constitutes an important tool for use within the GPELF. PMID:22943547
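The kernel intensity estimation used in this survey can be sketched with synthetic coordinates (the case locations below are invented for illustration, not the Jaboatão dos Guararapes data), assuming SciPy is available:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic georeferenced case points: one dense cluster plus sporadic cases.
rng = np.random.default_rng(3)
cluster = rng.normal(loc=[0.0, 0.0], scale=0.05, size=(80, 2))   # hotspot
background = rng.uniform(-1, 1, size=(40, 2))                    # sporadic
cases = np.vstack([cluster, background]).T                       # shape (2, n)

# Gaussian kernel density estimate of case intensity.
kde = gaussian_kde(cases)

# Evaluate on a grid and locate the highest-intensity cell.
xs = np.linspace(-1, 1, 50)
xx, yy = np.meshgrid(xs, xs)
dens = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
iy, ix = np.unravel_index(np.argmax(dens), dens.shape)
hotspot = (xs[ix], xs[iy])
```

The grid cell of maximum estimated density falls inside the simulated cluster, which is exactly the "area of greatest intensity" reading the study makes from its kernel maps.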
NASA Astrophysics Data System (ADS)
Liao, P. F.; Bjorkholm, J. E.; Berman, P. R.
1980-06-01
We report the results of an experimental study of the effects of velocity-changing collisions on two-photon and stepwise-absorption line shapes. Excitation spectra for the 3S1/2 → 3P1/2 → 4D3/2 transitions of sodium atoms undergoing collisions with foreign gas perturbers are obtained. These spectra are obtained with two cw dye lasers. One laser, the pump laser, is tuned 1.6 GHz below the 3S1/2 → 3P1/2 transition frequency and excites a nonthermal longitudinal velocity distribution of excited 3P1/2 atoms in the vapor. Absorption of the second (probe) laser is used to monitor the steady-state excited-state distribution that results from collisions with rare gas atoms. The spectra are obtained for various pressures of He, Ne, and Kr gases and are fit to a theoretical model that utilizes either the phenomenological Keilson-Störer or the classical hard-sphere collision kernel. The theoretical model includes the effects of collisionally aided excitation of the 3P1/2 state as well as effects due to fine-structure state-changing collisions. Although both kernels are found to predict line shapes that are in reasonable agreement with the experimental results, the hard-sphere kernel is found superior, as it gives a better description of the effects of large-angle scattering for heavy perturbers. Neither kernel provides a fully adequate description over the entire line profile. The experimental data are used to extract effective hard-sphere collision cross sections for collisions between sodium 3P1/2 atoms and helium, neon, and krypton perturbers.
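The phenomenological Keilson-Störer kernel mentioned above has a simple closed form: the post-collision velocity is drawn from a Gaussian centered on alpha times the pre-collision velocity. The sketch below (arbitrary thermal speed and memory parameter) verifies numerically that the kernel is normalized and that, with width u*sqrt(1 - alpha^2), it leaves a Maxwellian velocity distribution invariant:

```python
import numpy as np

u = 1.0          # thermal speed (arbitrary units)
alpha = 0.7      # memory parameter: 1 = weak collisions, 0 = fully thermalizing
dv = u * np.sqrt(1.0 - alpha**2)   # width that preserves the Maxwellian

v = np.linspace(-6, 6, 601)
w = v[1] - v[0]                    # quadrature step

def ks_kernel(v_from, v_to):
    # Keilson-Störer kernel: Gaussian in v_to centered at alpha*v_from.
    return np.exp(-((v_to - alpha * v_from) ** 2) / dv**2) / (np.sqrt(np.pi) * dv)

maxwellian = np.exp(-(v / u) ** 2) / (np.sqrt(np.pi) * u)
# Propagate the Maxwellian through one collision: integrate over v_from.
after = (ks_kernel(v[:, None], v[None, :]) * maxwellian[:, None]).sum(axis=0) * w
```

This invariance is what makes the Keilson-Störer form a convenient phenomenological model, even though, as the study finds, it underweights the large-angle scattering that the hard-sphere kernel captures.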
A support architecture for reliable distributed computing systems
NASA Technical Reports Server (NTRS)
Mckendry, Martin S.
1986-01-01
The Clouds kernel design went through several design phases and is nearly complete. The object manager, the process manager, the storage manager, the communications manager, and the actions manager are examined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Means, Gregory Scott; Boardman, Gregory Allen; Berry, Jonathan Dwight
A combustor for a gas turbine generally includes a radial flow fuel nozzle having a fuel distribution manifold, and a fuel injection manifold axially separated from the fuel distribution manifold. The fuel injection manifold generally includes an inner side portion, an outer side portion, and a plurality of circumferentially spaced fuel ports that extend through the outer side portion. A plurality of tubes provides axial separation between the fuel distribution manifold and the fuel injection manifold. Each tube defines a fluid communication path between the fuel distribution manifold and the fuel injection manifold.
Adaptive kernel regression for freehand 3D ultrasound reconstruction
NASA Astrophysics Data System (ADS)
Alshalalfah, Abdel-Latif; Daoud, Mohammad I.; Al-Najar, Mahasen
2017-03-01
Freehand three-dimensional (3D) ultrasound imaging enables low-cost and flexible 3D scanning of arbitrary-shaped organs, where the operator can freely move a two-dimensional (2D) ultrasound probe to acquire a sequence of tracked cross-sectional images of the anatomy. Often, the acquired 2D ultrasound images are irregularly and sparsely distributed in the 3D space. Several 3D reconstruction algorithms have been proposed to synthesize 3D ultrasound volumes based on the acquired 2D images. A challenging task during the reconstruction process is to preserve the texture patterns in the synthesized volume and ensure that all gaps in the volume are correctly filled. This paper presents an adaptive kernel regression algorithm that can effectively reconstruct high-quality freehand 3D ultrasound volumes. The algorithm employs a kernel regression model that enables nonparametric interpolation of the voxel gray-level values. The kernel size of the regression model is adaptively adjusted based on the characteristics of the voxel that is being interpolated. In particular, when the algorithm is employed to interpolate a voxel located in a region with dense ultrasound data samples, the size of the kernel is reduced to preserve the texture patterns. On the other hand, the size of the kernel is increased in areas that include large gaps to enable effective gap filling. The performance of the proposed algorithm was compared with seven previous interpolation approaches by synthesizing freehand 3D ultrasound volumes of a benign breast tumor. The experimental results show that the proposed algorithm outperforms the other interpolation approaches.
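The adaptive-bandwidth idea can be sketched in one dimension as a stand-in for the 3D voxel case; the k-nearest-neighbor bandwidth rule below is an illustrative choice, not the paper's exact adaptation scheme:

```python
import numpy as np

# Observations: densely sampled on [0, 1], sparse and gappy on [2, 3],
# mimicking irregularly distributed 2D ultrasound samples in 3D space.
rng = np.random.default_rng(4)
x_obs = np.sort(np.concatenate([rng.uniform(0, 1, 80),
                                rng.uniform(2, 3, 8)]))
y_obs = np.sin(2 * np.pi * x_obs / 3)   # "gray-level" signal to reconstruct

def adaptive_nw(x_query, k=5, scale=1.0):
    """Nadaraya-Watson kernel regression with a bandwidth tied to the
    distance to the k-th nearest sample: small in dense regions (preserve
    detail), large in sparse regions (fill gaps)."""
    out = np.empty_like(x_query)
    for i, xq in enumerate(x_query):
        d = np.abs(x_obs - xq)
        h = scale * np.sort(d)[k]              # local adaptive bandwidth
        wgt = np.exp(-0.5 * (d / h) ** 2)      # Gaussian kernel weights
        out[i] = np.sum(wgt * y_obs) / np.sum(wgt)
    return out

x_query = np.linspace(0, 3, 31)
y_hat = adaptive_nw(x_query)
```

Every estimate is a convex combination of observed values, so the interpolant stays within the observed signal range while the kernel widens automatically across the gap.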
A locally adaptive kernel regression method for facies delineation
NASA Astrophysics Data System (ADS)
Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.
2015-12-01
Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial locations and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method improves significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curve performance.
NASA Astrophysics Data System (ADS)
Åberg Lindell, M.; Andersson, P.; Grape, S.; Hellesen, C.; Håkansson, A.; Thulin, M.
2018-03-01
This paper investigates how concentrations of certain fission products and their related gamma-ray emissions can be used to discriminate between uranium oxide (UOX) and mixed oxide (MOX) type fuel. Discrimination of irradiated MOX fuel from irradiated UOX fuel is important in nuclear facilities and for transport of nuclear fuel, for purposes of both criticality safety and nuclear safeguards. Although facility operators keep records on the identity and properties of each fuel, tools for nuclear safeguards inspectors that enable independent verification of the fuel are critical in the recovery of continuity of knowledge, should it be lost. A discrimination methodology for classification of UOX and MOX fuel, based on passive gamma-ray spectroscopy data and multivariate analysis methods, is presented. Nuclear fuels and their gamma-ray emissions were simulated in the Monte Carlo code Serpent, and the resulting data was used as input to train seven different multivariate classification techniques. The trained classifiers were subsequently implemented and evaluated with respect to their capabilities to correctly predict the classes of unknown fuel items. The best results concerning successful discrimination of UOX and MOX-fuel were acquired when using non-linear classification techniques, such as the k nearest neighbors method and the Gaussian kernel support vector machine. For fuel with cooling times up to 20 years, when it is considered that gamma-rays from the isotope 134Cs can still be efficiently measured, success rates of 100% were obtained. A sensitivity analysis indicated that these methods were also robust.
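As an illustration of the non-linear classifiers that performed best, a minimal k-nearest-neighbors discriminator is sketched below on invented two-feature data standing in for gamma-line intensity ratios; this is not the Serpent-trained model from the paper:

```python
import numpy as np

# Invented training features for the two fuel classes (for illustration only).
rng = np.random.default_rng(5)
n = 100
uox = rng.normal(loc=[1.0, 0.2], scale=0.1, size=(n, 2))
mox = rng.normal(loc=[0.6, 0.5], scale=0.1, size=(n, 2))
X = np.vstack([uox, mox])
y = np.r_[np.zeros(n), np.ones(n)]   # 0 = UOX, 1 = MOX

def knn_predict(x, k=5):
    # Majority vote among the k nearest training points (odd k avoids ties).
    d = np.linalg.norm(X - x, axis=1)
    votes = y[np.argsort(d)[:k]]
    return int(round(votes.mean()))

# Evaluate on unseen fuel items drawn from the same feature model.
test_uox = rng.normal(loc=[1.0, 0.2], scale=0.1, size=(50, 2))
test_mox = rng.normal(loc=[0.6, 0.5], scale=0.1, size=(50, 2))
pred_uox = [knn_predict(x) for x in test_uox]
pred_mox = [knn_predict(x) for x in test_mox]
success = (pred_uox.count(0) + pred_mox.count(1)) / 100
```

With well-separated class means, as in the short-cooling-time cases of the study, such a classifier reaches near-perfect success rates; overlapping distributions at long cooling times are what degrade it.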
Miyatake, Aya; Nishio, Teiji; Ogino, Takashi
2011-10-01
The purpose of this study is to develop a new calculation algorithm that satisfies the requirements for both accuracy and calculation time in a simulation of imaging of the proton-irradiated volume in a patient body in clinical proton therapy. The activity pencil beam algorithm (APB algorithm), a new technique that applies the pencil beam algorithm generally used for proton dose calculations in proton therapy to the calculation of activity distributions, was developed as an algorithm for computing the activity distributions formed by positron emitter nuclei generated by target nuclear fragment reactions. In the APB algorithm, activity distributions are calculated using an activity pencil beam kernel. (12)C, (16)O, and (40)Ca nuclei were determined to be the major target nuclei in a human body that are relevant for the calculation of activity distributions. In this study, "virtual positron emitter nuclei" was defined as the integral yield of the various positron emitter nuclei generated from each target nucleus by target nuclear fragment reactions with the irradiated proton beam. Compounds rich in these target nuclei, namely polyethylene, water (including some gelatin), and calcium oxide, were irradiated using a proton beam. Depth activity distributions of the virtual positron emitter nuclei generated in each compound by target nuclear fragment reactions were measured using a beam ON-LINE PET system mounted on a rotating gantry port (BOLPs-RGp). The measured activity distributions depend on depth or, in other words, energy. The irradiated proton beam energies were 138, 179, and 223 MeV, and the measurement time was about 5 h, until the measured activity reached the background level.
Furthermore, the activity pencil beam data were constructed from the activity pencil beam kernel, which combined the measured depth data with lateral data in which multiple Coulomb scattering was approximated by a Gaussian function, and were used for calculating activity distributions. The measured depth activity distributions for every target nucleus and proton beam energy were obtained using BOLPs-RGp. The form of each depth activity distribution was verified, and the data were constructed taking into account the time-dependent change of that form. The time dependence of an activity distribution's form could be represented by two half-lives. The Gaussian form of the lateral distribution of the activity pencil beam kernel was determined by the effect of multiple Coulomb scattering. Thus, activity pencil beam data involving time dependence could be obtained in this study. The simulation of imaging of the proton-irradiated volume in a patient body using target nuclear fragment reactions was feasible with the developed APB algorithm taking time dependence into account. With the use of the APB algorithm, it was suggested that a simulation system for activity distributions with levels of both accuracy and calculation time appropriate for clinical use can be constructed.
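The structure of the activity pencil beam kernel (a measured depth profile spread by a depth-dependent lateral Gaussian) can be sketched as follows; the depth-activity curve and scattering widths below are synthetic stand-ins for the measured BOLPs-RGp data:

```python
import numpy as np

depth = np.linspace(0, 150, 151)                 # depth axis (mm)
# Toy depth-activity curve: roughly flat with a fall-off near the proton range
# (stands in for a measured virtual-positron-emitter depth distribution).
depth_activity = 1.0 / (1.0 + np.exp((depth - 120.0) / 3.0))

lateral = np.linspace(-20, 20, 81)               # lateral axis (mm)
# Hypothetical lateral Gaussian width growing with depth, mimicking the
# multiple-Coulomb-scattering broadening used in the APB kernel.
sigma = 2.0 + 0.03 * depth

# Kernel: depth profile times a per-depth normalized lateral Gaussian.
gauss = np.exp(-0.5 * (lateral[None, :] / sigma[:, None]) ** 2)
gauss /= gauss.sum(axis=1, keepdims=True)        # conserve activity per depth
kernel2d = depth_activity[:, None] * gauss       # shape (depth, lateral)
```

Summing the kernel laterally returns the depth profile exactly, so the lateral spreading redistributes, rather than creates or destroys, activity, which is the property a pencil-beam superposition relies on.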
Sepsis mortality prediction with the Quotient Basis Kernel.
Ribas Ripoll, Vicent J; Vellido, Alfredo; Romero, Enrique; Ruiz-Rodríguez, Juan Carlos
2014-05-01
This paper presents an algorithm to assess the risk of death in patients with sepsis. Sepsis is a common clinical syndrome in the intensive care unit (ICU) that can lead to severe sepsis, a severe state of septic shock or multi-organ failure. The proposed algorithm may be implemented as part of a clinical decision support system that can be used in combination with the scores deployed in the ICU to improve the accuracy, sensitivity and specificity of mortality prediction for patients with sepsis. In this paper, we used the Simplified Acute Physiology Score (SAPS) for ICU patients and the Sequential Organ Failure Assessment (SOFA) to build our kernels and algorithms. In the proposed method, we embed the available data in a suitable feature space and use algorithms based on linear algebra, geometry and statistics for inference. We present a simplified version of the Fisher kernel (practical Fisher kernel for multinomial distributions), as well as a novel kernel that we named the Quotient Basis Kernel (QBK). These kernels are used as the basis for mortality prediction using soft-margin support vector machines. The two new kernels presented are compared against other generative kernels based on the Jensen-Shannon metric (centred, exponential and inverse) and other widely used kernels (linear, polynomial and Gaussian). Clinical relevance is also evaluated by comparing these results with logistic regression and the standard clinical prediction method based on the initial SAPS score. As described in this paper, we tested the new methods via cross-validation with a cohort of 400 test patients. The results obtained using our methods compare favourably with those obtained using alternative kernels (80.18% accuracy for the QBK) and the standard clinical prediction method, which are based on the basal SAPS score or logistic regression (71.32% and 71.55%, respectively). 
The QBK presented a sensitivity and specificity of 79.34% and 83.24%, which outperformed the other kernels analysed, logistic regression, and the standard clinical prediction method based on the basal SAPS score. Several scoring systems for patients with sepsis have been introduced and developed over the last 30 years. They allow for the assessment of the severity of disease and provide an estimate of in-hospital mortality. Physiology-based scoring systems are applied to critically ill patients and have a number of advantages over diagnosis-based systems. Severity score systems are often used to stratify critically ill patients for possible inclusion in clinical trials. In this paper, we present an effective algorithm that combines both scoring methodologies for the assessment of death in patients with sepsis that can be used to improve the sensitivity and specificity of the currently available methods.
Analysis of the spatial distribution of dengue cases in the city of Rio de Janeiro, 2011 and 2012.
Carvalho, Silvia; Magalhães, Mônica de Avelar Figueiredo Mafra; Medronho, Roberto de Andrade
2017-08-17
Analyze the spatial distribution of classical dengue and severe dengue cases in the city of Rio de Janeiro. Exploratory study considering cases of classical dengue and severe dengue with laboratory confirmation of the infection in the city of Rio de Janeiro during 2011 and 2012. Cases notified in the Notifiable Diseases Information System (SINAN) in 2011 and 2012 were georeferenced as points using the "street" and "number" fields and the automatic Geocoding tool of the ArcGIS 10 program. The spatial analysis was performed with the kernel density estimator. Kernel density pointed out hotspots for classical dengue that did not coincide geographically with those for severe dengue and were in or near favelas. The kernel ratio did not show a notable change in the spatial distribution pattern observed in the kernel density analysis. The georeferencing process showed a loss of 41% of classical dengue records and 17% of severe dengue records due to the addresses on the SINAN forms. The hotspots near the favelas suggest that the social vulnerability of these localities can be an influencing factor in the occurrence of this disease, since the supply of, and access to, essential goods and services for the population is deficient. To reduce this vulnerability, interventions must be linked to macroeconomic policies.
Personal sleep pattern visualization using sequence-based kernel self-organizing map on sound data.
Wu, Hongle; Kato, Takafumi; Yamada, Tomomi; Numao, Masayuki; Fukui, Ken-Ichi
2017-07-01
We propose a method to discover sleep patterns via clustering of sound events recorded during sleep. The proposed method extends the conventional self-organizing map algorithm by kernelization and sequence-based technologies to obtain a fine-grained map that visualizes the distribution and changes of sleep-related events. We introduced features widely applied in sound processing and popular kernel functions to the proposed method to evaluate and compare performance. The proposed method provides a new aspect of sleep monitoring because the results demonstrate that sound events can be directly correlated to an individual's sleep patterns. In addition, by visualizing the transition of cluster dynamics, sleep-related sound events were found to relate to the various stages of sleep. Therefore, these results empirically warrant future study into the assessment of personal sleep quality using sound data.
Estimating average growth trajectories in shape-space using kernel smoothing.
Hutton, Tim J; Buxton, Bernard F; Hammond, Peter; Potts, Henry W W
2003-06-01
In this paper, we show how a dense surface point distribution model of the human face can be computed and demonstrate the usefulness of the high-dimensional shape-space for expressing the shape changes associated with growth and aging. We show how average growth trajectories for the human face can be computed in the absence of longitudinal data by using kernel smoothing across a population. A training set of three-dimensional surface scans of 199 male and 201 female subjects of between 0 and 50 years of age is used to build the model.
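A cross-sectional "average growth trajectory" of the kind described can be sketched with a Nadaraya-Watson kernel smoother over a population; the ages, shape values, and bandwidth below are illustrative stand-ins, not the paper's dense surface model:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Kernel-smoothed average of y over x: a cross-sectional growth curve."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w * y_train[None, :]).sum(axis=1) / w.sum(axis=1)

# Hypothetical cross-sectional data: one shape coordinate vs. age (0-50 yr),
# monotone growth plus individual variation.
rng = np.random.default_rng(1)
ages = rng.uniform(0, 50, 400)
shape = np.log1p(ages) + rng.normal(0, 0.1, 400)
eval_ages = np.array([5.0, 25.0, 45.0])
curve = nadaraya_watson(ages, shape, eval_ages, bandwidth=3.0)
# The smoothed trajectory increases with age even though no subject was
# measured longitudinally, mirroring the paper's use of kernel smoothing
# across a population in place of longitudinal data.
```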
KERNELHR: A program for estimating animal home ranges
Seaman, D.E.; Griffith, B.; Powell, R.A.
1998-01-01
Kernel methods are state of the art for estimating animal home-range area and utilization distribution (UD). The KERNELHR program was developed to provide researchers and managers a tool to implement this extremely flexible set of methods with many variants. KERNELHR runs interactively or from the command line on any personal computer (PC) running DOS. KERNELHR provides output of fixed and adaptive kernel home-range estimates, as well as density values in a format suitable for in-depth statistical and spatial analyses. An additional package of programs creates contour files for plotting in geographic information systems (GIS) and estimates core areas of ranges.
ERIC Educational Resources Information Center
McMillen, Daniel P.; Singell, Larry D., Jr.
2010-01-01
Prior work uses a parametric approach to study the distributional effects of school finance reform and finds evidence that reform yields greater equality of school expenditures by lowering spending in high-spending districts (leveling down) or increasing spending in low-spending districts (leveling up). We develop a kernel density…
Wang, H; Wang, T; Johnson, L A; Pometto, A L
2008-11-12
The majority of fuel ethanol in the United States is produced by using the dry-grind corn ethanol process. The corn oil that is contained in the coproduct, distillers' dried grains with solubles (DDGS), can be recovered for use as a biodiesel feedstock. Oil removal will also improve the feed quality of DDGS. The most economical point at which to remove oil is considered to be the centrifugation step that separates thin stillage (liquid) from coarse solids after distilling the ethanol. The more oil there is in the liquid, the more can be recovered by centrifugation. Therefore, we studied the effects of corn preparation and grinding methods on oil distribution between the liquid and solid phases. Grinding the corn to three different particle sizes, flaking, flaking and grinding, and flaking and extruding were used to break up the corn kernel before fermentation, and their effects on oil distribution between the liquid and solid phases were examined by simulating an industrial decanter centrifuge. Total oil contents were measured in the liquid and solids after centrifugation. Dry matter yield and oil partitioning in the thin stillage were highly positively correlated. Flaking slightly reduced bound fat. The flaked and then extruded corn meal released the highest amount of free oil, about 25% compared with 7% for the average of the other treatments. The oil freed by flaking, however, became nonextractable after the flaked corn was ground. Fine grinding alone had little effect on oil partitioning.
Generalized time-dependent Schrödinger equation in two dimensions under constraints
NASA Astrophysics Data System (ADS)
Sandev, Trifce; Petreska, Irina; Lenzi, Ervin K.
2018-01-01
We investigate a generalized two-dimensional time-dependent Schrödinger equation on a comb with a memory kernel. A Dirac delta term is introduced in the Schrödinger equation so that the quantum motion along the x-direction is constrained at y = 0. The wave function is analyzed by using Green's function approach for several forms of the memory kernel, which are of particular interest. Closed form solutions for the cases of Dirac delta and power-law memory kernels in terms of Fox H-function, as well as for a distributed order memory kernel, are obtained. Further, a nonlocal term is also introduced and investigated analytically. It is shown that the solution for such a case can be represented in terms of infinite series in Fox H-functions. Green's functions for each of the considered cases are analyzed and plotted for the most representative ones. Anomalous diffusion signatures are evident from the presence of the power-law tails. The normalized Green's functions obtained in this work are of broader interest, as they are an important ingredient for further calculations and analyses of some interesting effects in the transport properties in low-dimensional heterogeneous media.
Turan, Semra; Topcu, Ali; Karabulut, Ihsan; Vural, Halil; Hayaloglu, Ali Adnan
2007-12-26
The fatty acid, sn-2 fatty acid, triacylglycerol (TAG), tocopherol, and phytosterol compositions of kernel oils obtained from nine apricot varieties grown in the Malatya region of Turkey were determined (P < 0.05). The names of the apricot varieties were Alyanak (ALY), Cataloglu (CAT), Cöloglu (COL), Hacihaliloglu (HAC), Hacikiz (HKI), Hasanbey (HSB), Kabaasi (KAB), Soganci (SOG), and Tokaloglu (TOK). The total oil contents of apricot kernels ranged from 40.23 to 53.19%. Oleic acid contributed 70.83% to the total fatty acids, followed by linoleic (21.96%), palmitic (4.92%), and stearic (1.21%) acids. The sn-2 position is mainly occupied by oleic acid (63.54%), linoleic acid (35.0%), and palmitic acid (0.96%). Eight TAG species were identified: LLL, OLL, PLL, OOL+POL, OOO+POO, and SOO (where P, palmitoyl; S, stearoyl; O, oleoyl; and L, linoleoyl), among which OOO+POO contributed most, at 48.64% of the total, followed by OOL+POL at 32.63% and OLL at 14.33%. Four tocopherol and six phytosterol isomers were identified and quantified; among these, gamma-tocopherol (475.11 mg/kg of oil) and beta-sitosterol (273.67 mg/100 g of oil) were predominant. Principal component analysis (PCA) was applied to the lipid components of the apricot kernel oils in order to explore the distribution of the apricot varieties according to their kernels' lipid components. PCA separated the varieties ALY, COL, KAB, CAT, SOG, and HSB into one group and TOK, HAC, and HKI into another group based on the lipid components of their kernel oils. Thus, in the present study, PCA was found to be a powerful tool for classification of the samples.
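The PCA-based grouping of varieties can be illustrated with a minimal principal component projection; the six composition rows below (% oleic, % linoleic, % palmitic) are made-up stand-ins for the measured lipid profiles, not the study's data:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project rows of X onto the first principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Hypothetical lipid-composition rows for six varieties forming two loose groups.
X = np.array([
    [71.0, 22.0, 4.9],
    [70.5, 22.4, 5.0],
    [70.8, 21.9, 5.1],
    [68.0, 25.0, 4.5],
    [67.6, 25.3, 4.6],
    [67.9, 24.8, 4.7],
])
scores = pca_scores(X)
# The first-component scores separate the two compositional groups by sign,
# which is the kind of clustering the abstract reports for the varieties.
```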
Hirayama, Shusuke; Matsuura, Taeko; Ueda, Hideaki; Fujii, Yusuke; Fujii, Takaaki; Takao, Seishin; Miyamoto, Naoki; Shimizu, Shinichi; Fujimoto, Rintaro; Umegaki, Kikuo; Shirato, Hiroki
2018-05-22
To evaluate the biological effects of proton beams as part of the daily clinical routine, fast and accurate calculation of the dose-averaged linear energy transfer (LETd) is required. In this study, we have developed an analytical LETd calculation method based on the pencil-beam algorithm (PBA) that considers the off-axis enhancement by secondary protons. This algorithm (PBA-dLET) was then validated using Monte Carlo simulation (MCS) results. In PBA-dLET, LET values were assigned separately to each individual dose kernel of the PBA. For the dose kernel, we employed a triple Gaussian model consisting of a primary component (protons that undergo multiple Coulomb scattering) and a halo component (protons that undergo inelastic, nonelastic, and elastic nuclear reactions); the primary and halo components were represented by a single Gaussian and the sum of two Gaussian distributions, respectively. Although previous analytical approaches assumed a constant LETd value across the lateral distribution of a pencil beam, the actual LETd increases away from the beam axis, because there are more scattered, and therefore lower-energy, protons with higher stopping powers there. To reflect this behavior, we assumed that the LETs of the primary and halo components can take different values (LETp and LEThalo), which vary only along the depth direction. The values of the dual-LET kernels were determined such that PBA-dLET reproduced the MCS-generated LETd distribution in both small and large fields. These values were generated at intervals of 1 mm in depth for 96 energies from 70.2 to 220 MeV and collected in a look-up table. Finally, we compared the LETd distributions and mean LETd (LETd,mean) values of targets and organs at risk between PBA-dLET and MCS. Both homogeneous phantoms and patient geometries (prostate, liver, and lung cases) were used to validate the present method.
In the homogeneous phantom, the LETd profiles obtained with the dual-LET kernels agree well with the MCS results except in the low-dose region of the lateral penumbra, where the actual dose was below 10% of the maximum dose. In the patient geometries, the LETd profiles calculated with the developed method reproduce the MCS with accuracy similar to that in the homogeneous phantom. The maximum differences in LETd,mean for each structure between PBA-dLET and MCS were 0.06 keV/μm in homogeneous phantoms and 0.08 keV/μm in patient geometries under all tested conditions. We confirmed that the dual-LET-kernel model reproduces the MCS well, not only in the homogeneous phantom but also in complex patient geometries. The accuracy of LETd was largely improved over the single-LET-kernel model, especially in the lateral penumbra. The model is expected to be useful, especially for proper recognition of the risk of side effects when the target is next to critical organs. © 2018 American Association of Physicists in Medicine.
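The dual-LET weighting over a triple-Gaussian lateral dose kernel can be sketched as follows; the widths, halo weight, and the two LET values are invented for illustration and are not the paper's fitted look-up-table values:

```python
import numpy as np

def gauss(r, sigma):
    """Radially symmetric 2D Gaussian, normalised to unit area."""
    return np.exp(-r ** 2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)

def lateral_dose(r, w_halo, sig_p, sig_h1, sig_h2):
    """Triple-Gaussian lateral dose: single-Gaussian primary plus
    two-Gaussian halo (all parameters hypothetical)."""
    primary = (1 - w_halo) * gauss(r, sig_p)
    halo = w_halo * 0.5 * (gauss(r, sig_h1) + gauss(r, sig_h2))
    return primary, halo

r = np.linspace(0.0, 30.0, 301)                         # mm off-axis
primary, halo = lateral_dose(r, w_halo=0.1, sig_p=4.0, sig_h1=8.0, sig_h2=15.0)

# Dual-LET weighting: a lower LETp for the primary component and a higher
# LEThalo for the scattered halo (illustrative keV/um values).
let_p, let_halo = 2.5, 6.0
let_d = (let_p * primary + let_halo * halo) / (primary + halo)
# let_d rises away from the beam axis as the halo fraction grows, the
# off-axis enhancement the single-LET-kernel model misses.
```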
Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Sarukkai, Sekhar R.; Mehra, Pankaj; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper presents a methodology for debugging the performance of message-passing programs on both tightly coupled and loosely coupled distributed-memory machines. The AIMS (Automated Instrumentation and Monitoring System) toolkit, a suite of software tools for measurement and analysis of performance, is introduced and its application illustrated using several benchmark programs drawn from the field of computational fluid dynamics. AIMS includes (i) Xinstrument, a powerful source-code instrumentor, which supports both Fortran77 and C as well as a number of different message-passing libraries including Intel's NX, Thinking Machines' CMMD, and PVM; (ii) Monitor, a library of timestamping and trace-collection routines that run on supercomputers (such as Intel's iPSC/860, Delta, and Paragon and Thinking Machines' CM5) as well as on networks of workstations (including Convex Cluster and SparcStations connected by a LAN); (iii) Visualization Kernel, a trace-animation facility that supports source-code clickback, simultaneous visualization of computation and communication patterns, as well as analysis of data movements; (iv) Statistics Kernel, an advanced profiling facility that associates a variety of performance data with various syntactic components of a parallel program; (v) Index Kernel, a diagnostic tool that helps pinpoint performance bottlenecks through the use of abstract indices; (vi) Modeling Kernel, a facility for automated modeling of message-passing programs that supports both simulation-based and analytical approaches to performance prediction and scalability analysis; (vii) Intrusion Compensator, a utility for recovering true performance from observed performance by removing the overheads of monitoring and their effects on the communication pattern of the program; and (viii) Compatibility Tools, which convert AIMS-generated traces into formats used by other performance-visualization tools, such as ParaGraph, Pablo, and certain AVS/Explorer modules.
Urban Transmission of American Cutaneous Leishmaniasis in Argentina: Spatial Analysis Study
Gil, José F.; Nasser, Julio R.; Cajal, Silvana P.; Juarez, Marisa; Acosta, Norma; Cimino, Rubén O.; Diosque, Patricio; Krolewiecki, Alejandro J.
2010-01-01
We used kernel density and scan statistics to examine the spatial distribution of cases of pediatric and adult American cutaneous leishmaniasis in an urban disease-endemic area in Salta Province, Argentina. Spatial analysis was used for the whole population and stratified by women > 14 years of age (n = 159), men > 14 years of age (n = 667), and children < 15 years of age (n = 213). Although kernel density for adults encompassed nearly the entire city, distribution in children was most prevalent in the peripheral areas of the city. Scan statistic analysis for adult males, adult females, and children found 11, 2, and 8 clusters, respectively. Clusters for children had the highest odds ratios (P < 0.05) and were located in proximity of plantations and secondary vegetation. The data from this study provide further evidence of the potential urban transmission of American cutaneous leishmaniasis in northern Argentina. PMID:20207869
NASA Astrophysics Data System (ADS)
Dai, Jun; Zhou, Haigang; Zhao, Shaoquan
2017-01-01
This paper considers a multi-scale future hedge strategy that minimizes lower partial moments (LPM). To do this, wavelet analysis is adopted to decompose time series data into different components. Next, different parametric estimation methods with known distributions are applied to calculate the LPM of hedged portfolios, which is the key to determining multi-scale hedge ratios over different time scales. Then these parametric methods are compared with the prevailing nonparametric kernel metric method. Empirical results indicate that in the China Securities Index 300 (CSI 300) index futures and spot markets, hedge ratios and hedge efficiency estimated by the nonparametric kernel metric method are inferior to those estimated by parametric hedging model based on the features of sequence distributions. In addition, if minimum-LPM is selected as a hedge target, the hedging periods, degree of risk aversion, and target returns can affect the multi-scale hedge ratios and hedge efficiency, respectively.
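A lower partial moment of the kind minimised above can be computed directly; the simulated spot/futures returns and the hedge ratios below are hypothetical, and the wavelet decomposition into time scales is omitted:

```python
import numpy as np

def lpm(returns, target=0.0, order=2):
    """Lower partial moment: E[max(target - r, 0) ** order].
    Unlike variance, it penalises only shortfalls below the target."""
    shortfall = np.maximum(target - np.asarray(returns), 0.0)
    return (shortfall ** order).mean()

# Hypothetical spot/futures returns with a true hedge ratio near 0.9.
rng = np.random.default_rng(2)
futures = rng.normal(0.0, 0.01, 5000)
spot = 0.9 * futures + rng.normal(0.0, 0.004, 5000)

# Downside risk of the hedged portfolio spot - h * futures for two ratios.
risk = {h: lpm(spot - h * futures) for h in (0.0, 0.9)}
# The near-optimal hedge ratio leaves far less downside risk than no hedge;
# a minimum-LPM strategy searches over h (and over time scales) for this minimum.
```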
Life Cycle Assessment for the Production of Oil Palm Seeds
Muhamad, Halimah; Ai, Tan Yew; Khairuddin, Nik Sasha Khatrina; Amiruddin, Mohd Din; May, Choo Yuen
2014-01-01
The oil palm seed production unit that generates germinated oil palm seeds is the first link in the palm oil supply chain, followed by the nursery to produce seedling, the plantation to produce fresh fruit bunches (FFB), the mill to produce crude palm oil (CPO) and palm kernel, the kernel crushers to produce crude palm kernel oil (CPKO), the refinery to produce refined palm oil (RPO) and finally the palm biodiesel plant to produce palm biodiesel. This assessment aims to investigate the life cycle assessment (LCA) of germinated oil palm seeds and the use of LCA to identify the stage/s in the production of germinated oil palm seeds that could contribute to the environmental load. The method for the life cycle impact assessment (LCIA) is modelled using SimaPro version 7, (System for Integrated environMental Assessment of PROducts), an internationally established tool used by LCA practitioners. This software contains European and US databases on a number of materials in addition to a variety of European- and US-developed impact assessment methodologies. LCA was successfully conducted for five seed production units and it was found that the environmental impact for the production of germinated oil palm was not significant. The characterised results of the LCIA for the production of 1000 germinated oil palm seeds showed that fossil fuel was the major impact category followed by respiratory inorganics and climate change. PMID:27073598
Non-Gaussian probabilistic MEG source localisation based on kernel density estimation☆
Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny
2014-01-01
There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
Retrieval of the aerosol size distribution in the complex anomalous diffraction approximation
NASA Astrophysics Data System (ADS)
Franssens, Ghislain R.
This contribution reports some recently achieved results in aerosol size distribution retrieval in the complex anomalous diffraction approximation (ADA) to Mie scattering theory. This approximation is valid for spherical particles that are large compared with the wavelength and have a refractive index close to 1. The ADA kernel is compared with the exact Mie kernel. Despite being a simple approximation, the ADA seems to have some practical value for the retrieval of the larger modes of tropospheric and lower stratospheric aerosols. The ADA has the advantage over Mie theory that an analytic inversion of the associated Fredholm integral equation becomes possible. In addition, spectral inversion in the ADA can be formulated as a well-posed problem. In this way, a new inverse formula was obtained, which allows the direct computation of the size distribution as an integral over the spectral extinction function. This formula is valid for particles that both scatter and absorb light and it also takes the spectral dispersion of the refractive index into account. Some details of the numerical implementation of the inverse formula are illustrated using a modified gamma test distribution. Special attention is given to the integration of spectrally truncated discrete extinction data with errors.
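For a non-absorbing sphere, the ADA extinction efficiency has van de Hulst's closed form, which is what makes the analytic treatment tractable; the size-parameter grid and refractive index below are illustrative:

```python
import numpy as np

def q_ext_ada(x, m):
    """van de Hulst anomalous-diffraction extinction efficiency for a
    non-absorbing sphere: Q = 2 - (4/rho) sin(rho) + (4/rho^2)(1 - cos(rho)),
    with phase-shift parameter rho = 2 x (m - 1).
    x: size parameter 2*pi*r/lambda, m: real refractive index near 1."""
    rho = 2.0 * x * (m - 1.0)
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho ** 2) * (1.0 - np.cos(rho))

x = np.linspace(1.0, 200.0, 2000)       # illustrative size-parameter range
q = q_ext_ada(x, m=1.33)
# Q oscillates about the geometric-optics limit of 2 and tends to 2 for
# large particles; the oscillation is the structure the inverse formula exploits.
```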
Zhu, Liyang; Duan, Wuhua; Xu, Jingming; Zhu, Yongjun
2012-11-30
High-temperature gas-cooled reactors (HTGRs) are advanced nuclear systems that will receive heavy use in the future. It is important to develop spent nuclear fuel reprocessing technologies for HTGRs. A new method for recovering uranium from tristructural-isotropic (TRISO-) coated fuel particles with supercritical CO2 containing tri-n-butyl phosphate (TBP) as a complexing agent was investigated. TRISO-coated fuel particles from HTGR fuel elements were first crushed to expose the UO2 pellet fuel kernels. The crushed TRISO-coated fuel particles were then treated under an O2 stream at 750°C, resulting in a mixture of U3O8 powder and SiC shells. The U3O8 was converted into solid uranyl nitrate by reaction with liquid N2O4 in the presence of a small amount of water. Complete conversion was achieved after 60 min of reaction at 80°C, whereas the SiC shells were not converted by N2O4. Uranyl nitrate in the converted mixture was extracted with supercritical CO2 containing TBP. The cumulative extraction efficiency was above 98% after 20 min of online extraction at 50°C and 25 MPa, whereas the SiC shells were not extracted by TBP. The results suggest an attractive strategy for reprocessing spent nuclear fuel from HTGRs to minimize the generation of secondary radioactive waste. Copyright © 2012 Elsevier B.V. All rights reserved.
Serra-Sogas, Norma; O'Hara, Patrick D; Canessa, Rosaline; Keller, Peter; Pelot, Ronald
2008-05-01
This paper examines the use of exploratory spatial analysis for identifying hotspots of shipping-based oil pollution in the Pacific Region of Canada's Exclusive Economic Zone. It makes use of data collected from fiscal years 1997/1998 to 2005/2006 by the National Aerial Surveillance Program, the primary tool for monitoring and enforcing the provisions imposed by MARPOL 73/78. First, we present oil spill data as points in a "dot map" relative to coastlines, harbors and the aerial surveillance distribution. Then, we explore the intensity of oil spill events using the Quadrat Count method and Kernel Density Estimation with both fixed and adaptive bandwidths. We found that oil spill hotspots were more clearly defined using Kernel Density Estimation with an adaptive bandwidth, probably because of the "clustered" distribution of oil spill occurrences. Finally, we discuss the importance of standardizing oil spill data by controlling for surveillance effort to provide a better understanding of the distribution of illegal oil spills, and how these results can ultimately benefit a monitoring program.
Indetermination of particle sizing by laser diffraction in the anomalous size ranges
NASA Astrophysics Data System (ADS)
Pan, Linchao; Ge, Baozhen; Zhang, Fugen
2017-09-01
The laser diffraction method is widely used to measure particle size distributions. It is generally accepted that the scattering angle becomes smaller and the angles to the location of the main peak of scattered energy distributions in laser diffraction instruments shift to smaller values with increasing particle size. This specific principle forms the foundation of the laser diffraction method. However, this principle is not entirely correct for non-absorbing particles in certain size ranges and these particle size ranges are called anomalous size ranges. Here, we derive the analytical formulae for the bounds of the anomalous size ranges and discuss the influence of the width of the size segments on the signature of the Mie scattering kernel. This anomalous signature of the Mie scattering kernel will result in an indetermination of the particle size distribution when measured by laser diffraction instruments in the anomalous size ranges. By using the singular-value decomposition method we interpret the mechanism of occurrence of this indetermination in detail and then validate its existence by using inversion simulations.
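The ill-posedness probed by singular-value decomposition can be seen on any smooth discretised Fredholm kernel; the Gaussian kernel below is a generic smooth surrogate, not the actual Mie scattering kernel, and the grids are arbitrary:

```python
import numpy as np

# Surrogate for the discretised forward operator g = K f mapping the size
# distribution f to measured scattered energy g. A smooth kernel (stand-in
# for the Mie kernel) has rapidly decaying singular values.
x = np.linspace(0.0, 1.0, 40)        # normalised measurement grid
y = np.linspace(0.0, 1.0, 40)        # normalised particle-size grid
K = np.exp(-((x[:, None] - y[None, :]) ** 2) / (2 * 0.5 ** 2))
s = np.linalg.svd(K, compute_uv=False)
decay = s / s[0]                     # normalised singular-value spectrum
# Rapid decay means high-order modes of the size distribution are almost
# annihilated by the forward operator; inverting amplifies noise at those
# modes, so the corresponding size ranges become effectively indeterminate.
```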
2015-09-30
DISTRIBUTION STATEMENT A: Distribution approved for public release; distribution is unlimited. NPS-NRL-Rice-UIUC Collaboration on Navy Atmosphere...portability. There is still a gap in the OCCA support for Fortran programmers who do not have accelerator experience. Activities at Rice/Virginia Tech are...for automated data movement and for kernel optimization using source code analysis and run-time detective work. In this quarter the Rice/Virginia
Spectral decomposition of seismic data with reassigned smoothed pseudo Wigner-Ville distribution
NASA Astrophysics Data System (ADS)
Wu, Xiaoyang; Liu, Tianyou
2009-07-01
Seismic signals are nonstationary, mainly due to absorption and attenuation of seismic energy in strata. For spectral decomposition of seismic data, the conventional method using the short-time Fourier transform (STFT) limits temporal and spectral resolution by a predefined window length. The continuous-wavelet transform (CWT) uses dilation and translation of a wavelet to produce a time-scale map; however, the wavelets utilized should be orthogonal in order to obtain satisfactory resolution. The less commonly applied Wigner-Ville distribution (WVD), though superior in energy concentration, suffers from cross-term interference (CTI) when signals are multi-component. To reduce the impact of CTI, the Cohen class uses a kernel function as a low-pass filter, but this also weakens the energy concentration of the auto-terms. In this paper, we employ the smoothed pseudo Wigner-Ville distribution (SPWVD) with a Gaussian kernel function to reduce CTI in the time and frequency domains, and then reassign the values of the SPWVD (the reassigned SPWVD, RSPWVD) to the center of gravity of the surrounding energy region so that concentration of the distribution is maintained. We apply the method to a multi-component synthetic seismic record and compare it with STFT and CWT spectra. Two field examples reveal that the RSPWVD can potentially be applied to detect low-frequency shadows caused by hydrocarbons and to delineate the spatial distribution of anomalous geological bodies more precisely.
Fuel Distribution Systems | Energy Systems Integration Facility | NREL
Fuel Distribution Systems. The Energy Systems Integration Facility's integrated fuel distribution systems provide natural gas, hydrogen, and diesel to two laboratories: the Power Systems Integration Laboratory and the Energy Storage Laboratory.
Haroldson, Mark A.; Schwartz, Charles C.; Thompson, Daniel J.; Bjornlie, Daniel D.; Gunther, Kerry A.; Cain, Steven L.; Tyers, Daniel B.; Frey, Kevin L.; Aber, Bryan C.
2014-01-01
The distribution of the Greater Yellowstone Ecosystem grizzly bear (Ursus arctos) population has expanded into areas unoccupied since the early 20th century. Up-to-date information on the area and extent of this distribution is crucial for federal, state, and tribal wildlife and land managers to make informed decisions regarding grizzly bear management. The most recent estimate of grizzly bear distribution (2004) utilized fixed-kernel density estimators to describe distribution. This method was complex and computationally time consuming and excluded observations of unmarked bears. Our objective was to develop a technique to estimate grizzly bear distribution that would allow for the use of all verified grizzly bear location data, as well as provide the simplicity to be updated more frequently. We placed all verified grizzly bear locations from all sources from 1990 to 2004 and 1990 to 2010 onto a 3-km × 3-km grid and used zonal analysis and ordinary kriging to develop a predicted surface of grizzly bear distribution. We compared the area and extent of the 2004 kriging surface with the previous 2004 effort and evaluated changes in grizzly bear distribution from 2004 to 2010. The 2004 kriging surface was 2.4% smaller than the previous fixed-kernel estimate, but more closely represented the data. Grizzly bear distribution increased 38.3% from 2004 to 2010, with most expansion in the northern and southern regions of the range. This technique can be used to provide a current estimate of grizzly bear distribution for management and conservation applications.
HIGH-TEMPERATURE SAFETY TESTING OF IRRADIATED AGR-1 TRISO FUEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stempien, John D.; Demkowicz, Paul A.; Reber, Edward L.
High-Temperature Safety Testing of Irradiated AGR-1 TRISO Fuel John D. Stempien, Paul A. Demkowicz, Edward L. Reber, and Cad L. Christensen Idaho National Laboratory, P.O. Box 1625 Idaho Falls, ID 83415, USA Corresponding Author: john.stempien@inl.gov, +1-208-526-8410 Two new safety tests of irradiated tristructural isotropic (TRISO) coated particle fuel have been completed in the Fuel Accident Condition Simulator (FACS) furnace at the Idaho National Laboratory (INL). In the first test, three fuel compacts from the first Advanced Gas Reactor irradiation experiment (AGR-1) were simultaneously heated in the FACS furnace. Prior to safety testing, each compact was irradiated in the Advanced Test Reactor to a burnup of approximately 15% fissions per initial metal atom (FIMA), a fast fluence of 3×10^25 n/m^2 (E > 0.18 MeV), and a time-average volume-average (TAVA) irradiation temperature of about 1020 °C. In order to simulate a core-conduction cool-down event, a temperature-versus-time profile having a peak temperature of 1700 °C was programmed into the FACS furnace controllers. Gaseous fission products (i.e., Kr-85) were carried to the Fission Gas Monitoring System (FGMS) by a helium sweep gas and captured in cold traps featuring online gamma counting. By the end of the test, a total of 3.9% of an average particle’s inventory of Kr-85 was detected in the FGMS traps. Such a low Kr-85 activity indicates that no TRISO failures (failure of all three TRISO layers) occurred during the test. If released from the compacts, condensable fission products (e.g., Ag-110m, Cs-134, Cs-137, Eu-154, Eu-155, and Sr-90) were collected on condensation plates fitted to the end of the cold finger in the FACS furnace. These condensation plates were then analyzed for fission products. In the second test, five loose UCO fuel kernels, obtained from deconsolidated particles from an irradiated AGR-1 compact, were heated in the FACS furnace to a peak temperature of 1600 °C.
This test had two primary goals. First, the test was intended to assess the retention of fission products in loose kernels without the effects of the other TRISO layers (buffer, IPyC, SiC, and OPyC) or the graphitic matrix material comprising the compact. Second, this test served as an evaluation of the FACS fission product condensation plate collection efficiency.
FOS: A Factored Operating Systems for High Assurance and Scalability on Multicores
2012-08-01
computing. It builds on previous work in distributed and microkernel OSes by factoring services out of the kernel, and then further distributing each...cooperating servers. We term such a service a fleet. Figure 2 shows the high-level architecture of fos. A small microkernel runs on every core
Distributed delays in a hybrid model of tumor-immune system interplay.
Caravagna, Giulio; Graudenzi, Alex; d'Onofrio, Alberto
2013-02-01
A tumor is kinetically characterized by the presence of multiple spatio-temporal scales in which its cells interplay with, for instance, endothelial cells or immune system effectors, exchanging various chemical signals. By its nature, tumor growth is an ideal object for hybrid modeling, where discrete stochastic processes model low-number entities and mean-field equations model abundant chemical signals. We follow this approach to model tumor cells, effector cells, and Interleukin-2, in order to capture the immune surveillance effect. We present a hybrid model with a generic delay kernel accounting for the fact that, due to many complex phenomena such as chemical transport and cellular differentiation, the tumor-induced recruitment of effectors exhibits a lag period. The model is a Stochastic Hybrid Automaton whose semantics is a Piecewise Deterministic Markov process in which a two-dimensional stochastic process is interlinked with a multi-dimensional mean-field system. We instantiate the model with the two well-known weak and strong delay kernels and perform simulations using an algorithm that generates trajectories of this process. Via simulations and parametric sensitivity analysis techniques we (i) relate tumor mass growth to the two kernels, (ii) measure the strength of immune surveillance in terms of the probability distribution of the eradication times, and (iii) prove, in the oscillatory regime, the existence of a stochastic bifurcation resulting in delay-induced tumor eradication.
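The "weak" and "strong" delay kernels are conventionally the first two members of the Erlang (gamma) family of lag densities; a sketch of their basic properties, with an arbitrary illustrative rate a:

```python
import numpy as np

# Weak and strong delay kernels as used in distributed-delay models:
# the weak kernel peaks at lag 0 (mean lag 1/a), the strong kernel
# peaks at lag 1/a (mean lag 2/a). `a` is an illustrative rate.
def weak_kernel(t, a):
    return a * np.exp(-a * t)

def strong_kernel(t, a):
    return a ** 2 * t * np.exp(-a * t)

a = 1.0
t = np.linspace(0.0, 20.0, 200001)
dt = t[1] - t[0]

# Both kernels are probability densities for the lag: they integrate to 1.
mass_weak = weak_kernel(t, a).sum() * dt
mass_strong = strong_kernel(t, a).sum() * dt
# Unlike the weak kernel, the strong kernel has its mode away from zero,
# which models a most-likely recruitment delay of 1/a.
peak_strong = t[np.argmax(strong_kernel(t, a))]
```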
Identification of subsurface structures using electromagnetic data and shape priors
NASA Astrophysics Data System (ADS)
Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond
2015-03-01
We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a Gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of a kernel function, which is application dependent. We argue for using the conditionally positive definite kernel, which is shown to have computational advantages over the commonly applied Gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.
NASA Astrophysics Data System (ADS)
Kitt, R.; Kalda, J.
2006-03-01
The question of the optimal portfolio is addressed. The conventional Markowitz portfolio optimisation is discussed and its shortcomings under non-Gaussian security returns are outlined. A method is proposed to minimise the likelihood of extreme non-Gaussian drawdowns of the portfolio value. The theory is called leptokurtic because it minimises the effects of the "fat tails" of returns. Leptokurtic portfolio theory provides an optimal portfolio for investors who define their risk aversion as unwillingness to experience sharp drawdowns in asset prices. Two types of risk in asset returns are defined: a fluctuation risk, which has a Gaussian distribution, and a drawdown risk, which concerns the distribution tails. These risks are quantitatively measured by defining the "noise kernel": an ellipsoidal cloud of points in the space of asset returns. The size of the ellipsoid is controlled by a threshold parameter: the larger the threshold, the larger the returns that are accepted as normal fluctuations. Return vectors falling inside the kernel are used to calculate the fluctuation risk; analogously, data points falling outside the kernel are used to calculate the drawdown risk. As a result, the portfolio optimisation problem becomes three-dimensional: in addition to the return, two types of risk are involved. The optimal portfolio for drawdown-averse investors is the one minimising variance outside the noise kernel. The theory has been tested with the MSCI North America, Europe and Pacific total return stock indices.
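The noise-kernel partition can be sketched numerically. Here the ellipsoid is taken as a Mahalanobis ball of the sample covariance, with the threshold parameter as its radius; this is an illustrative assumption, since the abstract does not specify the exact construction.

```python
import numpy as np

def split_by_noise_kernel(returns, threshold):
    """Partition return vectors into a 'noise kernel' core and its tails.

    A vector is inside the kernel when its Mahalanobis distance from the
    sample mean is at most `threshold`; the ellipsoid grows with the
    threshold, admitting larger returns as normal fluctuations.
    """
    mu = returns.mean(axis=0)
    cov = np.cov(returns, rowvar=False)
    inv = np.linalg.inv(cov)
    centered = returns - mu
    d2 = np.einsum('ij,jk,ik->i', centered, inv, centered)
    inside = d2 <= threshold ** 2
    return returns[inside], returns[~inside]

rng = np.random.default_rng(0)
returns = rng.standard_normal((5000, 3))        # 5000 periods, 3 assets
core, tails = split_by_noise_kernel(returns, 2.5)
fluctuation_cov = np.cov(core, rowvar=False)    # Gaussian fluctuation risk
drawdown_cov = np.cov(tails, rowvar=False)      # tail / drawdown risk
```

The two covariance matrices then feed the three-dimensional optimisation: expected return, fluctuation risk from the core, and drawdown risk from the tails.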
NASA Astrophysics Data System (ADS)
Deng, Xiaowen; Xing, Li; Yin, Hong; Tian, Feng; Zhang, Qun
2018-03-01
A multiple-swirler structure is commonly adopted as a combustion design strategy in heavy-duty gas turbines, as it may shorten the flame brush length and reduce emissions. In engineering applications, a small amount of gas fuel is distributed for non-premixed combustion as a pilot flame, while most of the fuel is supplied to the main burner for premixed combustion. The effect of the fuel distribution on the flow and temperature fields, and thus on combustor performance, is a significant issue. This paper investigates the effect of fuel distribution on combustor performance by adjusting the pilot/main burner fuel split. Five pilot fuel distribution schemes (3%, 5%, 7%, 10% and 13%) are computed and examined in detail. The flow and temperature fields are compared, with particular attention to the multiple-swirler flow field. Computational results show that there is an optimum value at the base-load combustion condition. A pilot fuel percentage curve is calculated to optimize combustion operation. With the present combustor structure and fuel distribution scheme, the combustion achieves high efficiency with acceptable OTDF and low NOx emissions. The CO emissions are also presented.
Ma, Xiaoling; Sajjad, Muhammad; Wang, Jing; Yang, Wenlong; Sun, Jiazhu; Li, Xin; Zhang, Aimin; Liu, Dongcheng
2017-09-20
Kernel hardness, which strongly influences the end-use properties of common wheat, is mainly controlled by the puroindoline genes, Pina and Pinb. Using an EcoTILLING platform, we investigated the allelic variation of the Pina and Pinb genes and its association with the Single Kernel Characterization System (SKCS) hardness index in a diverse panel of wheat germplasm. Kernel hardness varied from 1.4 to 102.7, displaying a wide range of hardness index values. In total, six Pina and nine Pinb alleles, giving 15 genotypes, were detected in 1787 accessions. The most common alleles are the wild type Pina-D1a (90.4%) and Pina-D1b (7.4%) for Pina, and Pinb-D1b (43.6%), Pinb-D1a (41.1%) and Pinb-D1p (12.8%) for Pinb. All genotypes have hard-type SKCS hardness indices (>60.0), except the wild-type combination of Pina and Pinb (Pina-D1a/Pinb-D1a). The most frequent genotype in Chinese and foreign cultivars was Pina-D1a/Pinb-D1b (46.3 and 39.0%, respectively), and in Chinese landraces it was Pina-D1a/Pinb-D1a (54.2%). The frequency of hard-type accessions increases from 35.5% in region IV, to 40.6 and 61.4% in regions III and II, and to 77.0% in region I, while that of the soft type correspondingly decreases with increasing latitude. Varieties released after 2000 in Beijing, Hebei, Shandong and Henan have a higher average kernel hardness index than those released before 2000. Kernel hardness in this diverse panel of Chinese wheat germplasm thus generally increases with latitude across China. The wild types Pina-D1a and Pinb-D1a and one Pinb mutant (Pinb-D1b) are the most common of the six Pina and nine Pinb alleles, and a new double-null genotype (Pina-D1x/Pinb-D1ah) possessed a relatively high SKCS hardness index. More hard-type varieties were released in recent years, with different prevalences of Pin-D1 combinations in different regions.
This work would benefit the understanding of the selection and molecular processes of kernel hardness across China and different breeding stages, and provide useful information for the improvement of wheat quality in China.
Enhanced methanol utilization in direct methanol fuel cell
Ren, Xiaoming; Gottesfeld, Shimshon
2001-10-02
The fuel utilization of a direct methanol fuel cell is enhanced for improved cell efficiency. Distribution plates at the anode and cathode of the fuel cell are configured to distribute reactants vertically and laterally uniformly over a catalyzed membrane surface of the fuel cell. A conductive sheet between the anode distribution plate and the anodic membrane surface forms a mass transport barrier to the methanol fuel that is large relative to a mass transport barrier for a gaseous hydrogen fuel cell. In a preferred embodiment, the distribution plate is a perforated corrugated sheet. The mass transport barrier may be conveniently increased by increasing the thickness of an anode conductive sheet adjacent the membrane surface of the fuel cell.
Javanrouh, Niloufar; Daneshpour, Maryam S; Soltanian, Ali Reza; Tapak, Leili
2018-06-05
Obesity is a serious health problem that leads to low quality of life and early mortality. For the purposes of prevention and gene therapy of such a widespread disease, genome-wide association studies are a powerful tool for finding SNPs associated with increased risk of obesity. For association analysis, kernel machine regression, a generalized regression method, has the advantage of accounting for epistasis effects as well as the correlation between individuals due to unknown factors. In this study, data from participants in the Tehran cardio-metabolic genetic study were used. They were genotyped for a chromosomal region covering 986 variants located at 16q12.2 (build hg38). Kernel machine regression and single-SNP analysis were used to assess the association between obesity and the genotyped SNPs. We found that the SNP sets associated with obesity were mostly in the FTO (P = 0.01), AKTIP (P = 0.02) and MMP2 (P = 0.02) genes. Moreover, two SNPs, rs10521296 and rs11647470, showed significant association with obesity under kernel regression (P = 0.02). In conclusion, significant sets were distributed throughout the region, with greater density around the FTO, AKTIP and MMP2 genes, and two intergenic SNPs showed significant association under kernel machine regression. Further studies are needed to assess their functionality and precise mechanism. Copyright © 2018 Elsevier B.V. All rights reserved.
1983-11-01
transmission, FM(R) will only have to hold one message. 3. Program Control Block (PCB): the PCB [Deitel 82] will be maintained by the Executive in... examples of fully distributed systems in operation. An objective of the NPS research program for SPLICE is to advance our knowledge of distributed
Code of Federal Regulations, 2011 CFR
2011-07-01
... in the motor vehicle diesel fuel and diesel fuel additive distribution systems? 80.592 Section 80.592... FUELS AND FUEL ADDITIVES Motor Vehicle Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA... the motor vehicle diesel fuel and diesel fuel additive distribution systems? (a) Records that must be...
Code of Federal Regulations, 2012 CFR
2012-07-01
... in the motor vehicle diesel fuel and diesel fuel additive distribution systems? 80.592 Section 80.592... FUELS AND FUEL ADDITIVES Motor Vehicle Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA... the motor vehicle diesel fuel and diesel fuel additive distribution systems? (a) Records that must be...
Code of Federal Regulations, 2014 CFR
2014-07-01
... in the motor vehicle diesel fuel and diesel fuel additive distribution systems? 80.592 Section 80.592... FUELS AND FUEL ADDITIVES Motor Vehicle Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA... the motor vehicle diesel fuel and diesel fuel additive distribution systems? (a) Records that must be...
Code of Federal Regulations, 2010 CFR
2010-07-01
... in the motor vehicle diesel fuel and diesel fuel additive distribution systems? 80.592 Section 80.592... FUELS AND FUEL ADDITIVES Motor Vehicle Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA... the motor vehicle diesel fuel and diesel fuel additive distribution systems? (a) Records that must be...
Code of Federal Regulations, 2013 CFR
2013-07-01
... in the motor vehicle diesel fuel and diesel fuel additive distribution systems? 80.592 Section 80.592... FUELS AND FUEL ADDITIVES Motor Vehicle Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA... the motor vehicle diesel fuel and diesel fuel additive distribution systems? (a) Records that must be...
The formulation and estimation of a spatial skew-normal generalized ordered-response model.
DOT National Transportation Integrated Search
2016-06-01
This paper proposes a new spatial generalized ordered-response model with skew-normal kernel error terms and an associated estimation method. It contributes to the spatial analysis field by allowing a flexible and parametric skew-normal distribut...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, Chinthaka M.; Hunt, Rodney Dale; Snead, Lance Lewis
Uranium mononitride (UN) is important as a nuclear fuel. Fabrication of UN in microspherical form has particular merit since the advent of the accident-tolerant fuel concept, in which UN is being considered as a potential fuel in the form of TRISO particles. However, few processes have been well established for synthesizing UN kernels. A process for the synthesis of microspherical UN with a minimal amount of carbon is therefore discussed herein. First, a series of single-phase microspheres of uranium sesquinitride (U2N3) were synthesized by nitridation of UO2 + C microspheres at several different temperatures. The resulting microspheres were of low-density U2N3 and decomposed into low-density UN. The variation of the density of the synthesized sesquinitrides as a function of chemical composition indicated the presence of extra (interstitial) nitrogen atoms corresponding to hyperstoichiometry, normally denoted α-U2N3. Average grain sizes of both U2N3 and UN were in the range of 1-2.5 μm. In addition, the microspheres had a considerable amount of pore space, indicating the potential sinterability of UN toward its use as a nuclear fuel.
An alternative covariance estimator to investigate genetic heterogeneity in populations.
Heslot, Nicolas; Jannink, Jean-Luc
2015-11-26
For genomic prediction and genome-wide association studies (GWAS) using mixed models, the covariance between individuals is estimated using molecular markers. Based on the properties of mixed models, using available molecular data for prediction is optimal if this covariance is known, and under this assumption adding individuals to the analysis should never be detrimental. However, some empirical studies have shown that increasing training population size decreased prediction accuracy. Recently, results from theoretical models indicated that even if marker density is high and the genetic architecture of traits is controlled by many loci with small additive effects, the covariance between individuals, which depends on relationships at the causal loci, is not always well estimated by the whole-genome kinship. We propose an alternative covariance estimator, named K-kernel, to account for potential genetic heterogeneity between populations that is characterized by a lack of genetic correlation, and to limit the information flow between a priori unknown populations in a trait-specific manner. This is similar to a multi-trait model; parameters are estimated by REML, and in the extreme case it can allow an independent genetic architecture between populations. As such, K-kernel is useful for studying the design of training populations. K-kernel was compared to other covariance estimators or kernels to examine its fit to the data, cross-validated accuracy, and suitability for GWAS on several datasets. It provides a significantly better fit to the data than the genomic best linear unbiased prediction model and in some cases performs better than other kernels, such as the Gaussian kernel, as shown by an empirical null distribution. In GWAS simulations, the alternative kernels control type I errors as well as or better than the classical whole-genome kinship and increase statistical power. No or only small gains were observed in cross-validated prediction accuracy.
This alternative covariance estimator can be used to gain insight into trait-specific genetic heterogeneity by identifying relevant sub-populations that lack genetic correlation between them. Genetic correlation can be 0 between identified sub-populations by performing automatic selection of relevant sets of individuals to be included in the training population. It may also increase statistical power in GWAS.
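The extreme case described above, an independent genetic architecture between sub-populations, corresponds to a kinship matrix whose between-group blocks are zero. The sketch below hard-zeroes those blocks for illustration only; the actual K-kernel estimates the between-group component by REML rather than fixing it at zero, and the function name is hypothetical.

```python
import numpy as np

def block_kinship(K, groups):
    """Keep only within-group kinship; zero the between-group blocks.

    K      : (n, n) whole-genome kinship matrix.
    groups : length-n array of sub-population labels.
    """
    g = np.asarray(groups)
    same_group = g[:, None] == g[None, :]
    # Entries relating individuals from different groups are set to 0,
    # enforcing zero genetic correlation between the groups.
    return np.where(same_group, K, 0.0)

K = np.full((4, 4), 0.8) + 0.2 * np.eye(4)   # toy kinship matrix
K_blocked = block_kinship(K, [0, 0, 1, 1])
```

Used as the covariance of the random genetic effect in a mixed model, such a block structure prevents any information flow between the identified sub-populations.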
Zheng, Haiyong; Wang, Ruchen; Yu, Zhibin; Wang, Nan; Gu, Zhaorui; Zheng, Bing
2017-12-28
Plankton, including phytoplankton and zooplankton, are the main source of food for organisms in the ocean and form the base of the marine food chain. As fundamental components of marine ecosystems, plankton are very sensitive to environmental change, and the study of plankton abundance and distribution is crucial for understanding environmental changes and protecting marine ecosystems. This study was carried out to develop a widely applicable plankton classification system with high accuracy for the increasing number of imaging devices. The literature shows that most plankton image classification systems have been limited to a single specific imaging device and a relatively narrow taxonomic scope, and a practical system for automatic plankton classification is still lacking; this study is partly intended to fill that gap. Guided by the analysis of the literature and the development of the technology, we focused on the requirements of practical application and propose an automatic system for plankton image classification that combines multiple-view features via multiple kernel learning (MKL). First, to describe the biomorphic characteristics of plankton more completely and comprehensively, we combined general features with robust features, in particular adding features such as the Inner-Distance Shape Context for morphological representation. Second, we divided the features into different types from multiple views and fed them to multiple classifiers instead of a single one, optimally combining the kernel matrices computed from the different feature types via multiple kernel learning. Moreover, we applied a feature selection method to choose optimal feature subsets from the redundant features to suit different datasets from different imaging devices. We implemented the proposed classification system on three different datasets spanning more than 20 categories from phytoplankton to zooplankton.
The experimental results validated that our system outperforms state-of-the-art plankton image classification systems in terms of accuracy and robustness. This study demonstrated an automatic plankton image classification system that combines multiple-view features using multiple kernel learning. The results indicated that multiple-view features combined by NLMKL using three kernel functions (linear, polynomial and Gaussian) describe and exploit feature information better, thereby achieving higher classification accuracy.
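The combination of kernel matrices from different feature types can be illustrated with a fixed convex combination of the three kernel functions named above (linear, polynomial, Gaussian). In real MKL the weights are learned from the data; the weights and kernel parameters below are placeholders.

```python
import numpy as np

def linear_kernel(X, Y):
    return X @ Y.T

def poly_kernel(X, Y, degree=2, c=1.0):
    return (X @ Y.T + c) ** degree

def gaussian_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def combined_kernel(X, Y, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Convex combination of base kernel matrices.

    A convex combination of positive semi-definite kernels is itself a
    valid kernel; MKL would learn `weights`, fixed here for simplicity.
    """
    bases = (linear_kernel(X, Y), poly_kernel(X, Y), gaussian_kernel(X, Y))
    return sum(w * k for w, k in zip(weights, bases))

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 4))   # 10 samples of 4 image features
K = combined_kernel(X, X)          # Gram matrix fed to a kernel classifier
```

Each feature view would supply its own base kernel, and the resulting combined Gram matrix is then passed to a single kernel classifier such as an SVM.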
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom; Su-Jong Yoon
2014-04-01
A Computational Fluid Dynamics (CFD) evaluation of homogeneous and heterogeneous fuel models was performed as part of the Phase I calculations of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on High Temperature Reactor (HTR) Uncertainties in Modeling (UAM). This study focused on the nominal localized stand-alone fuel thermal response, as defined in Ex. I-3 and I-4 of the HTR UAM. The aim of the stand-alone thermal unit-cell simulation is to isolate the effect of material and boundary input uncertainties on a very simplified problem before these uncertainties are propagated in the subsequent coupled neutronics/thermal-fluids phases of the benchmark. In many previous studies of high-temperature gas-cooled reactors, a volume-averaged homogeneous mixture model of a single fuel compact has been applied: the Tristructural Isotropic (TRISO) fuel particles in the fuel compact are not modeled directly, and an effective thermal conductivity is employed for the thermo-physical properties of the compact. In contrast, in the heterogeneous model the uranium oxycarbide (UCO), inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers of the TRISO fuel particles are explicitly modeled, and the fuel compact is treated as a heterogeneous mixture of TRISO fuel kernels embedded in H-451 matrix graphite. In this study, steady-state and transient CFD simulations were performed with both the homogeneous and heterogeneous models to compare their thermal characteristics, using the nominal values of the input parameters. In a future study, the effects of input uncertainties in the material properties and boundary parameters will be investigated and reported.
Evaluation of Statistical Downscaling Skill at Reproducing Extreme Events
NASA Astrophysics Data System (ADS)
McGinnis, S. A.; Tye, M. R.; Nychka, D. W.; Mearns, L. O.
2015-12-01
Climate model outputs usually have much coarser spatial resolution than is needed by impacts models. Although higher resolution can be achieved using regional climate models for dynamical downscaling, further downscaling is often required. The final resolution gap is often closed with a combination of spatial interpolation and bias correction, which constitutes a form of statistical downscaling. We use this technique to downscale regional climate model data and evaluate its skill in reproducing extreme events. We downscale output from the North American Regional Climate Change Assessment Program (NARCCAP) dataset from its native 50-km spatial resolution to the 4-km resolution of the University of Idaho's METDATA gridded surface meteorological dataset, which derives from the PRISM and NLDAS-2 observational datasets. We operate on the major variables used in impacts analysis at a daily timescale: daily minimum and maximum temperature, precipitation, humidity, pressure, solar radiation, and winds. To interpolate the data, we use the patch recovery method from the Earth System Modeling Framework (ESMF) regridding package. We then bias correct the data using Kernel Density Distribution Mapping (KDDM), which has been shown to exhibit superior overall performance across multiple metrics. Finally, we evaluate the skill of this technique in reproducing extreme events by comparing raw and downscaled output with meteorological station data in different bioclimatic regions according to the skill scores defined by Perkins et al. in 2013 for the evaluation of AR4 climate models. We also investigate techniques for improving bias correction of values in the tails of the distributions. These techniques include binned kernel density estimation, logspline kernel density estimation, and transfer functions constructed by fitting the tails with a generalized Pareto distribution.
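Bias correction by distribution mapping can be sketched with plain empirical quantile mapping. KDDM refines the same idea by estimating the two distributions with kernel density estimates, which smooths the transfer function; this simplified stand-in omits that smoothing.

```python
import numpy as np

def quantile_map(model, obs, new):
    """Map `new` model values: model CDF, then inverse observed CDF.

    Plain empirical quantile mapping; KDDM would estimate both CDFs with
    kernel density estimates instead of empirical step functions.
    """
    model_sorted = np.sort(model)
    # Empirical CDF position of each new value under the model climate.
    p = np.searchsorted(model_sorted, new, side='right') / model_sorted.size
    p = np.clip(p, 0.0, 1.0)
    # Inverse empirical CDF of the observations at those positions.
    return np.quantile(np.sort(obs), p)

rng = np.random.default_rng(1)
obs = rng.normal(0.0, 1.0, 5000)      # station-like observations
model = rng.normal(2.0, 1.5, 5000)    # warm-biased, over-dispersed model
corrected = quantile_map(model, obs, model)
```

After the mapping, the corrected model values share the observed distribution (mean, spread, and quantiles), which is what the downscaling step relies on before extreme-event skill is evaluated.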
FUNGIBLE AND COMPATIBLE BIOFUELS: LITERATURE SEARCH, SUMMARY, AND RECOMMENDATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bunting, Bruce G; Bunce, Michael; Barone, Teresa L
2011-04-01
The purpose of the study described in this report is to summarize the various barriers to more widespread distribution of bio-fuels through our common-carrier fuel distribution system, which includes pipelines, barges and rail, fuel tankage, and distribution terminals. Addressing these barriers is necessary to allow wider utilization and distribution of bio-fuels in support of a renewable fuels standard and possible future low-carbon fuel standards. The barriers can be classified into several categories: operating practice, regulatory, technical, and acceptability. Possible solutions are discussed, including compatibility evaluation, changes to the bio-fuels themselves, regulatory changes, and changes in the distribution system or distribution practices. No experimental research was conducted in the writing of this report, but the results of the survey are used to develop recommendations for future research and additional study as appropriate. This project addresses recognized barriers to the wider use of bio-fuels in the areas of development of codes and standards, industrial and consumer awareness, and materials compatibility.
NASA Technical Reports Server (NTRS)
Allan, Brian G.
2000-01-01
A reduced order modeling approach to the Navier-Stokes equations is presented for the design of a distributed optimal feedback kernel. This approach is based on a Krylov subspace method in which significant modes of the flow are captured in the model. The model is then used in an optimal feedback control design where sensing and actuation are performed over the entire flow field. This control design approach yields an optimal feedback kernel which provides insight into the placement of sensors and actuators in the flow field. As an evaluation of this approach, a two-dimensional shear layer and driven cavity flow are investigated.
Hanft, J M; Jones, R J
1986-06-01
Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.
NASA Astrophysics Data System (ADS)
Diego Azcona, Juan; Barbés, Benigno; Wang, Lilie; Burguete, Javier
2016-01-01
This paper presents a method to obtain the pencil-beam kernels that characterize a megavoltage photon beam generated in a flattening filter free (FFF) linear accelerator (linac) by deconvolution from experimental measurements at different depths. The formalism is applied to perform independent dose calculations in modulated fields. In our previous work a formalism was developed for ideal flat fluences exiting the linac’s head. That framework could not deal with spatially varying energy fluences, so any deviation from the ideal flat fluence was treated as a perturbation. The present work addresses the necessity of implementing an exact analysis in which any spatially varying fluence can be used, such as those encountered in FFF beams. A major improvement introduced here is to handle the actual fluence in the deconvolution procedure. We studied the uncertainties associated with the kernel derivation using this method. Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from two linacs from different vendors, at depths of 5, 10, 15, and 20 cm in polystyrene (RW3 water-equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The 3D kernel for a FFF beam was obtained by deconvolution using the Hankel transform. A correction on the low-dose part of the kernel was performed to reproduce accurately the experimental output factors. The uncertainty in the kernel derivation procedure was estimated to be within 0.2%. Eighteen modulated fields used clinically in different treatment localizations were irradiated at four measurement depths (fifty-four film measurements in total). Comparison through the gamma index with their corresponding calculated absolute dose distributions showed a number of passing points (3%, 3 mm) mostly above 99%. This new procedure is more reliable and robust than the previous one.
Its ability to perform accurate independent dose calculations was demonstrated.
Pearson correlation estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach has been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance, and adapted a kernel-based approach using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were made at the same time, and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily made at the same time. The key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between the observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel, and investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low Root-Mean-Square Errors for regular, slightly irregular and very irregular time series.
We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
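The kernel-based estimator described above can be sketched as a Gaussian-weighted mean of products of the centered observations; the normalization details of the published estimator may differ slightly from this minimal version.

```python
import numpy as np

def gaussian_kernel_corr(tx, x, ty, y, h):
    """Pearson correlation for irregularly sampled series.

    Each product of centered observations x[i] * y[j] is weighted by a
    Gaussian in the time gap tx[i] - ty[j], with bandwidth h.
    """
    x = x - x.mean()
    y = y - y.mean()
    dt = tx[:, None] - ty[None, :]
    w = np.exp(-0.5 * (dt / h) ** 2)
    w = w / w.sum()                     # weights form a probability mass
    weighted_cov = np.sum(w * x[:, None] * y[None, :])
    return weighted_cov / (x.std() * y.std())

# Sanity check on a regular grid with a narrow bandwidth: the weights
# concentrate on simultaneous pairs and the estimator recovers corr = 1.
t = np.arange(200.0)
x = np.sin(0.3 * t)
r_self = gaussian_kernel_corr(t, x, t, x, h=0.25)
```

For a lagged correlation function, the weight would instead depend on |(tx[i] - ty[j]) - lag|, as described in the abstract.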
You, Heejo; Magnuson, James S
2018-06-01
This article describes a new Python distribution of TISK, the time-invariant string kernel model of spoken word recognition (Hannagan et al. in Frontiers in Psychology, 4, 563, 2013). TISK is an interactive-activation model similar to the TRACE model (McClelland & Elman in Cognitive Psychology, 18, 1-86, 1986), but TISK replaces most of TRACE's reduplicated, time-specific nodes with theoretically motivated time-invariant, open-diphone nodes. We discuss the utility of computational models as theory development tools, the relative merits of TISK as compared to other models, and the ways in which researchers might use this implementation to guide their own research and theory development. We describe a TISK model that includes features that facilitate in-line graphing of simulation results, integration with standard Python data formats, and graph and data export. The distribution can be downloaded from https://github.com/maglab-uconn/TISK1.0 .
Distribution of the glutamine synthetase isozyme GSp1 in maize (Zea mays).
Muhitch, Michael J
2003-06-01
In maize (Zea mays L.), GSp1, the predominant GS isozyme of the developing kernel, is abundant in the pedicel and pericarp, but absent from the endosperm and embryo. Determinations of GSp1 tissue distribution in vegetative tissues have thus far been limited to roots and leaves, where the isozyme is absent. However, the promoter from the gene encoding GSp1 has been shown to drive reporter gene expression not only in the maternal seed-associated tissues of transgenic maize plants, but also in the anthers, husks and pollen (Muhitch et al. 2002, Plant Sci 163: 865-872). Here we report chromatographic evidence that GSp1 resides in immature tassels, dehiscing anthers, kernel glumes, ear husks, cobs and stalks of maize plants, but not in mature, shedding pollen grains. RNA blot analysis confirmed these biochemical data. In stalks, GSp1 increased in the later stages of ear development, suggesting that it plays a role in nitrogen remobilization during grain fill.
Contour-Driven Atlas-Based Segmentation
Wachinger, Christian; Fritscher, Karl; Sharp, Greg; Golland, Polina
2016-01-01
We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images. PMID:26068202
7 CFR 810.602 - Definition of other terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Damaged kernels. Kernels and pieces of flaxseed kernels that are badly ground-damaged, badly weather... instructions. Also, underdeveloped, shriveled, and small pieces of flaxseed kernels removed in properly... recleaning. (c) Heat-damaged kernels. Kernels and pieces of flaxseed kernels that are materially discolored...
Hanft, Jonathan M.; Jones, Robert J.
1986-01-01
Kernels cultured in vitro were induced to abort by high temperature (35°C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35°C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth. PMID:16664846
Out-of-Sample Extensions for Non-Parametric Kernel Methods.
Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang
2017-02-01
Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies have been devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information in the data, which may potentially characterize the data similarity better. Kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension to out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper-reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over state-of-the-art parametric kernel methods.
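To illustrate the out-of-sample extension problem, here is a hedged sketch. The paper's actual approach is regression in a hyper-reproducing kernel Hilbert space; the simpler Nadaraya-Watson smoother below merely conveys the idea of turning a learned kernel matrix K into kernel values for an unseen point (the function names and base-kernel choice are our assumptions):

```python
import math

def base_k(u, v, h=1.0):
    # base Gaussian kernel used only to smooth between training points
    return math.exp(-sum((a - b) ** 2 for a, b in zip(u, v)) / (2 * h * h))

def extend_kernel(X, K, x_new, h=1.0):
    """Estimate the learned-kernel values k(x_new, x_i) by smoothing the
    rows of the nonparametric kernel matrix K with a base Gaussian kernel
    (a Nadaraya-Watson sketch, not the paper's hyper-RKHS regression)."""
    w = [base_k(x_new, xi, h) for xi in X]
    s = sum(w)
    n = len(X)
    return [sum(w[m] * K[m][i] for m in range(n)) / s for i in range(n)]
```

When x_new coincides with a training point and the bandwidth is small, the extension reproduces the corresponding row of K, which is the minimal consistency one would ask of an out-of-sample extension.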
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghrayeb, Shadi Z.; Ougouag, Abderrafi M.; Ouisloumen, Mohamed
2014-01-01
A multi-group formulation for the exact neutron elastic scattering kernel is developed. It incorporates neutron up-scattering effects stemming from the thermal motion of lattice atoms and accounts for them within the resulting effective nuclear cross-section data. The effects pertain essentially to resonant scattering off heavy nuclei. The formulation, implemented into a standalone code, produces effective nuclear scattering data that are then supplied directly into the DRAGON lattice physics code, where the effects on Doppler reactivity and neutron flux are demonstrated. Correctly accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. The results show an increase in the values of Doppler temperature feedback coefficients of up to -10% for UOX and MOX LWR fuels compared to the corresponding values derived using the traditional asymptotic elastic scattering kernel. This paper also summarizes the work done on this topic to date.
7 CFR 810.1202 - Definition of other terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
... kernels. Kernels, pieces of rye kernels, and other grains that are badly ground-damaged, badly weather.... Also, underdeveloped, shriveled, and small pieces of rye kernels removed in properly separating the...-damaged kernels. Kernels, pieces of rye kernels, and other grains that are materially discolored and...
Chen, Jiafa; Zhang, Luyan; Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang
2016-01-01
Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off of kernel size and yield components is discussed. PMID:27070143
Apparatus tube configuration and mounting for solid oxide fuel cells
Zymboly, G.E.
1993-09-14
A generator apparatus is made containing long, hollow, tubular fuel cells comprising an inner air electrode, an outer fuel electrode, and solid electrolyte therebetween, placed between a fuel distribution board and a board which separates the combustion chamber from the generating chamber, where each fuel cell has an insertable open end and an insertable, plugged, closed end, the plugged end being inserted into the fuel distribution board and the open end being inserted through the separator board, where the plug is completely within the fuel distribution board. 3 figures.
Alternative Fuels Data Center: Natural Gas Distribution
Gas is distributed using 305,000 miles of transmission pipelines (see map), while an additional 2.2 …
7 CFR 810.802 - Definition of other terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Damaged kernels. Kernels and pieces of grain kernels for which standards have been established under the.... (d) Heat-damaged kernels. Kernels and pieces of grain kernels for which standards have been...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landwehr, Joshua B.; Suetterlein, Joshua D.; Marquez, Andres
2016-05-16
Since 2012, the U.S. Department of Energy's X-Stack program has been developing solutions including runtime systems, programming models, languages, compilers, and tools for Exascale system software to address crucial performance and power requirements. Fine-grain programming models and runtime systems show great potential to efficiently utilize the underlying hardware, so they are essential to many X-Stack efforts. An abundance of small tasks can better utilize the vast parallelism available on current and future machines; moreover, finer tasks can recover faster and adapt better, due to a decrease in state and control. Nevertheless, current applications have been written to exploit old paradigms (such as Communicating Sequential Processes and Bulk Synchronous Parallel processing). To fully utilize the advantages of these new systems, applications need to be adapted to these new paradigms. As part of the applications' porting process, in-depth characterization studies, focused on both application characteristics and runtime features, need to take place to fully understand the application performance bottlenecks and how to resolve them. This paper presents a characterization study for a novel high-performance runtime system, the Open Community Runtime, using key HPC kernels as its vehicle. This study makes the following contributions: one of the first high-performance, fine-grain, distributed-memory runtime systems implementing the OCR standard (version 0.99a); and a characterization study of key HPC kernels in terms of runtime primitives running in both intra- and inter-node environments. Running on a general-purpose cluster, we found up to a 1635x relative speed-up for a parallel tiled Cholesky kernel on 128 nodes with 16 cores each and a 1864x relative speed-up for a parallel tiled Smith-Waterman kernel on 128 nodes with 30 cores.
SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Tian, Z; Song, T
Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MatLab to facilitate the commissioning. Methods: A FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and 2D scaling factors accounting for the longitudinal and off-axis corrections. The former were fitted to the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of each beamlet calculated with the fitted profile parameters and scaled using the scaling factors, these factors can be determined by solving an optimization problem which minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned a FSPB algorithm for three linac photon beams (6MV, 15MV and 6MVFFF). Doses for four field sizes (6x6 cm2, 10x10 cm2, 15x15 cm2 and 20x20 cm2) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate the commissioning, providing sufficient accuracy of beamlet dose calculation for IMRT optimization.
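Because the broad-beam dose is linear in the scaling factor, the least-squares fit for that factor has a closed form. The 1D sketch below, with an error-function beamlet profile and a single scalar scaling factor, is our simplification of the abstract's 2D factors and Levenberg-Marquardt profile fit, meant only to illustrate the commissioning idea:

```python
import math

def beamlet_dose(x, center, width=1.0, sigma=0.3):
    # FSPB-style 1D beamlet: flat core with error-function penumbra;
    # sigma plays the role of a profile (shape) parameter
    s = math.sqrt(2.0) * sigma
    x1, x2 = center - width / 2.0, center + width / 2.0
    return 0.5 * (math.erf((x - x1) / s) - math.erf((x - x2) / s))

def fit_scaling_factor(xs, reference, centers):
    # broad beam = scale * sum of beamlets, so the least-squares
    # optimum is the closed form s = <D_ref, D_calc> / <D_calc, D_calc>
    calc = [sum(beamlet_dose(x, c) for c in centers) for x in xs]
    num = sum(r * c for r, c in zip(reference, calc))
    den = sum(c * c for c in calc)
    return num / den
```

In the abstract's 2D setting the same linearity argument holds per scaling-factor entry, which is what makes the commissioning an efficiently solvable optimization problem.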
NASA Astrophysics Data System (ADS)
Anees, Asim; Aryal, Jagannath; O'Reilly, Małgorzata M.; Gale, Timothy J.; Wardlaw, Tim
2016-12-01
A robust non-parametric framework, based on multiple Radial Basis Function (RBF) kernels, is proposed in this study for detecting land/forest cover changes using Landsat 7 ETM+ images. One widely used framework is to find change vectors (a difference image) and use a supervised classifier to differentiate between change and no-change. Bayesian classifiers, e.g. the Maximum Likelihood Classifier (MLC) and Naive Bayes (NB), are widely used probabilistic classifiers which assume parametric models, e.g. a Gaussian function, for the class conditional distributions. However, their performance can be limited if the data set deviates from the assumed model. The proposed framework exploits the useful properties of the Least Squares Probabilistic Classifier (LSPC) formulation, i.e. its non-parametric and probabilistic nature, to model class posterior probabilities of the difference image using a linear combination of a large number of Gaussian kernels. To this end, a simple technique based on 10-fold cross-validation is also proposed for tuning model parameters automatically instead of selecting a (possibly) suboptimal combination from pre-specified lists of values. The proposed framework has been tested and compared with Support Vector Machine (SVM) and NB for detection of defoliation, caused by leaf beetles (Paropsisterna spp.), in Eucalyptus nitens and Eucalyptus globulus plantations of two test areas in Tasmania, Australia, using raw bands and band combination indices of Landsat 7 ETM+. It was observed that due to its multi-kernel non-parametric formulation and probabilistic nature, the LSPC outperforms the parametric NB with Gaussian assumption in the change detection framework, with Overall Accuracy (OA) ranging from 93.6% (κ = 0.87) to 97.4% (κ = 0.94) against 85.3% (κ = 0.69) to 93.4% (κ = 0.85), and is more robust to changing data distributions.
Its performance was comparable to SVM, with added advantages of being probabilistic and capable of handling multi-class problems naturally with its original formulation.
Effects of Fuel Composition on EGR Dilution Tolerance in Spark Ignited Engines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szybist, James P
2016-01-01
Fuel-specific differences in exhaust gas recirculation (EGR) dilution tolerance are studied in a modern, direct-injection single-cylinder research engine. A total of 6 model fuel blends are examined at a constant research octane number (RON) of 95 using n-heptane, iso-octane, toluene, and ethanol. Laminar flame speeds for these mixtures, calculated using two different methods (an energy fraction mixing rule and a detailed kinetic simulation), spanned a range of about 6 cm/s. The engine was operated at a constant-fueling nominal load of 350 kPa IMEPg at 2000 rpm with CA50 varying from 8-20 CAD aTDCf, and with EGR increasing until a COV of IMEP of 5% was reached. The results illustrate that flame speed affects EGR dilution tolerance; fuels with increased flame speeds show increased EGR tolerance. Specifically, flame speed correlates most closely with the initial flame kernel growth, measured as the time from ignition to 5% mass fraction burned. The effect of the latent heat of vaporization on the flame speed is taken into account for the ethanol-containing fuels. At a 30 vol% blend level, the increased enthalpy of vaporization of ethanol compared to conventional hydrocarbons can decrease the temperature at the time of ignition by a maximum of 15 °C, which can account for up to a 3.5 cm/s decrease in flame speed. The ethanol-containing fuels, however, still exhibit a flame speed advantage, and a dilution tolerance advantage over the slower-flame-speed fuels. The fuel-specific differences in dilution tolerance are significant at the condition examined, allowing for a 50% relative increase in EGR (4% absolute difference in EGR) at a constant COV of IMEP of 3%.
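The abstract names an "energy fraction mixing rule" without spelling it out. One plausible minimal reading, sketched below, weights each component's laminar flame speed by its share of the blend's heating value; the formulation and any numbers used with it are illustrative assumptions, not the paper's:

```python
def energy_fraction_flame_speed(components):
    """components: list of (mass_fraction, LHV in MJ/kg, flame speed in cm/s).

    Blend flame speed estimated as an energy-fraction-weighted mean:
    each component contributes in proportion to its share of the
    blend's total heating value (illustrative mixing rule).
    """
    total = sum(m * lhv for m, lhv, _ in components)
    return sum((m * lhv / total) * sl for m, lhv, sl in components)
```

With equal heating values this reduces to a mass-fraction-weighted mean; a lower-LHV component such as ethanol contributes less per unit mass than its mass fraction alone would suggest.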
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
Predicting spatial patterns of plant recruitment using animal-displacement kernels.
Santamaría, Luis; Rodríguez-Pérez, Javier; Larrinaga, Asier R; Pias, Beatriz
2007-10-10
For plants dispersed by frugivores, spatial patterns of recruitment are primarily influenced by the spatial arrangement and characteristics of parent plants, the digestive characteristics, feeding behaviour and movement patterns of animal dispersers, and the structure of the habitat matrix. We used an individual-based, spatially-explicit framework to characterize seed dispersal and seedling fate in an endangered, insular plant-disperser system: the endemic shrub Daphne rodriguezii and its exclusive disperser, the endemic lizard Podarcis lilfordi. Plant recruitment kernels were chiefly determined by the disperser's patterns of space utilization (i.e. the lizard's displacement kernels), the position of the various plant individuals in relation to them, and habitat structure (vegetation cover vs. bare soil). In contrast to our expectations, seed gut-passage rate and its effects on germination, and lizard speed of movement, habitat choice and activity rhythm were of minor importance. Predicted plant recruitment kernels were strongly anisotropic and fine-grained, preventing their description using one-dimensional frequency-distance curves. We found a general trade-off between recruitment probability and dispersal distance; however, optimal recruitment sites were not necessarily associated with sites of maximal adult-plant density. Conservation efforts aimed at enhancing the regeneration of endangered plant-disperser systems may gain in efficacy by manipulating the spatial distribution of dispersers (e.g. through the creation of refuges and feeding sites) to create areas favourable to plant recruitment.
Maigne, L; Perrot, Y; Schaart, D R; Donnarieix, D; Breton, V
2011-02-07
The GATE Monte Carlo simulation platform based on the GEANT4 toolkit has come into widespread use for simulating positron emission tomography (PET) and single photon emission computed tomography (SPECT) imaging devices. Here, we explore its use for calculating electron dose distributions in water. Mono-energetic electron dose point kernels and pencil beam kernels in water are calculated for different energies between 15 keV and 20 MeV by means of GATE 6.0, which makes use of the GEANT4 version 9.2 Standard Electromagnetic Physics Package. The results are compared to the well-validated codes EGSnrc and MCNP4C. It is shown that recent improvements made to the GEANT4/GATE software result in significantly better agreement with the other codes. We furthermore illustrate several issues of general interest to GATE and GEANT4 users who wish to perform accurate simulations involving electrons. Provided that the electron step size is sufficiently restricted, GATE 6.0 and EGSnrc dose point kernels are shown to agree to within less than 3% of the maximum dose between 50 keV and 4 MeV, while pencil beam kernels are found to agree to within less than 4% of the maximum dose between 15 keV and 20 MeV.
Leimar, Olof; Doebeli, Michael; Dieckmann, Ulf
2008-04-01
We have analyzed the evolution of a quantitative trait in populations that are spatially extended along an environmental gradient, with gene flow between nearby locations. In the absence of competition, there is stabilizing selection toward a locally best-adapted trait that changes gradually along the gradient. According to traditional ideas, gradual spatial variation in environmental conditions is expected to lead to gradual variation in the evolved trait. A contrasting possibility is that the trait distribution instead breaks up into discrete clusters. Doebeli and Dieckmann (2003) argued that competition acting locally in trait space and geographical space can promote such clustering. We have investigated this possibility using deterministic population dynamics for asexual populations, analyzing our model numerically and through an analytical approximation. We examined how the evolution of clusters is affected by the shape of competition kernels, by the presence of Allee effects, and by the strength of gene flow along the gradient. For certain parameter ranges clustering was a robust outcome, and for other ranges there was no clustering. Our analysis shows that the shape of competition kernels is important for clustering: the sign structure of the Fourier transform of a competition kernel determines whether the kernel promotes clustering. Also, we found that Allee effects promote clustering, whereas gene flow can have a counteracting influence. In line with earlier findings, we could demonstrate that phenotypic clustering was favored by gradients of intermediate slope.
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
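A composite kernel of this kind is just a weighted sum of base kernels. The sketch below builds one from two of the four base kernels named above, with fixed weights for illustration; in the paper, QPSO would optimize the weights together with the per-kernel parameters:

```python
import math

def gauss_k(u, v, gamma):
    # Gaussian (RBF) base kernel
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def poly_k(u, v, degree, c):
    # polynomial base kernel
    return (sum(a * b for a, b in zip(u, v)) + c) ** degree

def composite_kernel_matrix(X, weights, gamma=1.0, degree=2, c=1.0):
    """K = w1 * Gaussian + w2 * polynomial over all sample pairs.
    (Fixed weights here; the paper's QPSO would tune weights, gamma,
    degree, c, and the KELM regularization parameter jointly.)"""
    n = len(X)
    return [[weights[0] * gauss_k(X[i], X[j], gamma) +
             weights[1] * poly_k(X[i], X[j], degree, c)
             for j in range(n)] for i in range(n)]
```

Because each base kernel is positive semidefinite and the weights are nonnegative, the composite matrix remains a valid kernel for the subsequent KELM solve.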
Density-Aware Clustering Based on Aggregated Heat Kernel and Its Transformation
Huang, Hao; Yoo, Shinjae; Yu, Dantong; ...
2015-06-01
Current spectral clustering algorithms suffer from sensitivity to noise and parameter scaling, and may not be aware of different density distributions across clusters. If these problems are left untreated, the resulting clusters cannot accurately represent true data patterns, in particular for complex real-world datasets with heterogeneous densities. This paper aims to solve these problems by proposing a diffusion-based Aggregated Heat Kernel (AHK) to improve clustering stability, and a Local Density Affinity Transformation (LDAT) to correct the bias originating from different cluster densities. AHK statistically models the heat diffusion traces along the entire time scale, so it ensures robustness during the clustering process, while LDAT probabilistically reveals the local density of each instance and suppresses the local density bias in the affinity matrix. Our proposed framework integrates these two techniques systematically. As a result, not only does it provide an advanced noise-resisting and density-aware spectral mapping of the original dataset, but it also demonstrates stability while tuning the scaling parameter (which usually controls the range of the neighborhood). Furthermore, our framework works well with the majority of similarity kernels, which ensures its applicability to many types of data and problem domains. Systematic experiments on different applications show that our proposed algorithms outperform state-of-the-art clustering algorithms for data with heterogeneous density distributions, and achieve robust clustering performance with respect to tuning the scaling parameter and handling various levels and types of noise.
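A toy version of the aggregation step: the heat kernel exp(-tL) of a graph Laplacian L is evaluated at several diffusion scales and averaged. The paper models the diffusion traces statistically over the whole time scale; the uniform average and Taylor-series matrix exponential here are our simplifications for illustration:

```python
def mat_scale(A, s):
    return [[a * s for a in row] for row in A]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=40):
    # Taylor-series matrix exponential; adequate for the small,
    # well-scaled Laplacians used in this illustration
    n = len(A)
    out = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in out]
    for k in range(1, terms):
        term = mat_scale(mat_mul(term, A), 1.0 / k)
        out = mat_add(out, term)
    return out

def aggregated_heat_kernel(L, times):
    # AHK sketch: average heat kernels exp(-t L) over diffusion scales
    n = len(L)
    agg = [[0.0] * n for _ in range(n)]
    for t in times:
        agg = mat_add(agg, mat_exp(mat_scale(L, -t)))
    return mat_scale(agg, 1.0 / len(times))
```

For the two-node path graph, exp(-tL) has the closed form 0.5 * (1 ± e^(-2t)) on the diagonal and off-diagonal, which makes the Taylor approximation easy to check.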
Church, Cody; Mawko, George; Archambault, John Paul; Lewandowski, Robert; Liu, David; Kehoe, Sharon; Boyd, Daniel; Abraham, Robert; Syme, Alasdair
2018-02-01
Radiopaque microspheres may provide intraprocedural and postprocedural feedback during transarterial radioembolization (TARE). Furthermore, the potential to use higher resolution x-ray imaging techniques as opposed to nuclear medicine imaging suggests that significant improvements in the accuracy and precision of radiation dosimetry calculations could be realized for this type of therapy. This study investigates the absorbed dose kernel for novel radiopaque microspheres including contributions of both short- and long-lived contaminant radionuclides while concurrently quantifying the self-shielding of the glass network. Monte Carlo simulations using EGSnrc were performed to determine the dose kernels for all monoenergetic electron emissions and all beta spectra for radionuclides reported in a neutron activation study of the microspheres. Simulations were benchmarked against an accepted 90Y dose point kernel. Self-shielding was quantified for the microspheres by simulating an isotropically emitting, uniformly distributed source, in glass and in water. The ratio of the absorbed doses was scored as a function of distance from a microsphere. The absorbed dose kernel for the microspheres was calculated for (a) two bead formulations following (b) two different durations of neutron activation, at (c) various time points following activation. Self-shielding varies with time post-removal from the reactor. At early time points, it is less pronounced due to the higher energies of the emissions. It is on the order of 0.4-2.8% at a radial distance of 5.43 mm with increased size from 10 to 50 μm in diameter during the time that the microspheres would be administered to a patient. At long time points, self-shielding is more pronounced and can reach values in excess of 20% near the end of the range of the emissions.
Absorbed dose kernels for 90Y, 90mY, 85mSr, 85Sr, 87mSr, 89Sr, 70Ga, 72Ga, and 31Si are presented and used to determine an overall kernel for the microspheres based on weighted activities. The shapes of the absorbed dose kernels are dominated at short times post-activation by the contributions of 70Ga and 72Ga. Following decay of the short-lived contaminants, the absorbed dose kernel is effectively that of 90Y. After approximately 1000 h post-activation, the contributions of 85Sr and 89Sr become increasingly dominant, though the absorbed dose rate around the beads drops by roughly four orders of magnitude. The introduction of high atomic number elements for the purpose of increasing radiopacity necessarily leads to the production of radionuclides other than 90Y in the microspheres. Most of the radionuclides in this study are short-lived and are likely not of any significant concern for this therapeutic agent. The presence of small quantities of longer-lived radionuclides will change the shape of the absorbed dose kernel around a microsphere at long time points post-administration when activity levels are significantly reduced. © 2017 American Association of Physicists in Medicine.
A Novel Weighted Kernel PCA-Based Method for Optimization and Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Thimmisetty, C.; Talbot, C.; Chen, X.; Tong, C. H.
2016-12-01
It has been demonstrated that machine learning methods can be successfully applied to uncertainty quantification for geophysical systems through the use of the adjoint method coupled with kernel PCA-based optimization. In addition, it has been shown through weighted linear PCA how optimization with respect to both observation weights and feature space control variables can accelerate convergence of such methods. Linear machine learning methods, however, are inherently limited in their ability to represent features of non-Gaussian stochastic random fields, as they are based on only the first two statistical moments of the original data. Nonlinear spatial relationships and multipoint statistics leading to the tortuosity characteristic of channelized media, for example, are captured only to a limited extent by linear PCA. With the aim of coupling the kernel-based and weighted methods discussed, we present a novel mathematical formulation of kernel PCA, Weighted Kernel Principal Component Analysis (WKPCA), that both captures nonlinear relationships and incorporates the attribution of significance levels to different realizations of the stochastic random field of interest. We also demonstrate how new instantiations retaining defining characteristics of the random field can be generated using Bayesian methods. In particular, we present a novel WKPCA-based optimization method that minimizes a given objective function with respect to both feature space random variables and observation weights through which optimal snapshot significance levels and optimal features are learned. We showcase how WKPCA can be applied to nonlinear optimal control problems involving channelized media, and in particular demonstrate an application of the method to learning the spatial distribution of material parameter values in the context of linear elasticity, and discuss further extensions of the method to stochastic inversion.
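The centered-kernel eigenproblem underlying kernel PCA can be sketched with a sample-weighting scheme in plain numpy. This is a minimal illustration, not the paper's WKPCA formulation: the symmetric sqrt-weight scaling of the centered Gram matrix is an assumption chosen here for simplicity, and the RBF kernel and its width are arbitrary.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gaussian (RBF) Gram matrix from pairwise squared distances
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def weighted_kernel_pca(X, weights, n_components=2, gamma=1.0):
    """Kernel PCA with per-sample weights (illustrative weighting:
    symmetric scaling of the centered Gram matrix by sqrt(weights))."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    K = rbf_kernel(X, gamma)
    # Center the Gram matrix about the *weighted* feature-space mean
    Kw = K @ w
    Kc = K - Kw[None, :] - Kw[:, None] + w @ Kw
    # Symmetric weighting keeps the eigenproblem symmetric
    s = np.sqrt(w)
    M = s[:, None] * Kc * s[None, :]
    lam, V = np.linalg.eigh(M)
    idx = np.argsort(lam)[::-1][:n_components]
    lam, V = lam[idx], V[:, idx]
    # Scores: eigenvector coordinates scaled by sqrt(eigenvalue)
    return V * np.sqrt(np.maximum(lam, 0.0))
```

With uniform weights this reduces to ordinary (unweighted) kernel PCA up to the common centering convention; increasing a sample's weight pulls the leading components toward directions that explain that sample.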
TEMPORAL EVOLUTION AND SPATIAL DISTRIBUTION OF WHITE-LIGHT FLARE KERNELS IN A SOLAR FLARE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawate, T.; Ishii, T. T.; Nakatani, Y.
2016-12-10
On 2011 September 6, we observed an X2.1-class flare in continuum and Hα with a frame rate of about 30 Hz. After processing images of the event by using a speckle-masking image reconstruction, we identified white-light (WL) flare ribbons on opposite sides of the magnetic neutral line. We derive the light-curve decay times of the WL flare kernels at each resolution element by assuming that the kernels consist of one or two components that decay exponentially, starting from the peak time. As a result, 42% of the pixels have two decay-time components with average decay times of 15.6 and 587 s, whereas the average decay time is 254 s for WL kernels with only one decay-time component. The peak intensities of the shorter decay-time components exhibit good spatial correlation with the WL intensity, whereas the peak intensities of the longer decay-time components tend to be larger in the early phase of the flare at the inner part of the flare ribbons, close to the magnetic neutral line. The average intensity of the longer decay-time components is 1.78 times higher than that of the shorter decay-time components. If the shorter decay time is determined by either the chromospheric cooling time or the nonthermal ionization timescale, and the longer decay time is attributed to the coronal cooling time, this result suggests that WL sources from both regions appear in 42% of the WL kernels and that WL emission of coronal origin is sometimes stronger than that of chromospheric origin.
Vokoun, Jason C.; Rabeni, Charles F.
2005-01-01
Flathead catfish Pylodictis olivaris were radio-tracked in the Grand River and Cuivre River, Missouri, from late July until they moved to overwintering habitats in late October. Fish moved within a definable area, and although occasional long-distance movements occurred, the fish typically returned to the previously occupied area. Seasonal home range was calculated with the use of kernel density estimation, which can be interpreted as a probabilistic utilization distribution that documents the internal structure of the estimate by delineating portions of the range that were used a specified percentage of the time. A traditional linear range also was reported. Most flathead catfish (89%) had one 50% kernel-estimated core area, whereas 11% of the fish split their time between two core areas. Core areas were typically in the middle of the 90% kernel-estimated home range (58%), although several fish had core areas in upstream (26%) and downstream (16%) portions of the home range. Home-range size did not differ based on river, sex, or size and was highly variable among individuals. The median 95% kernel estimate was 1,085 m (range, 70–69,090 m) for all fish. The median 50% kernel-estimated core area was 135 m (10–2,260 m). The median linear range was 3,510 m (150–50,400 m). Fish pairs with core areas in the same and neighboring pools had static joint space use values of up to 49% (area of intersection index), indicating substantial overlap and use of the same area. However, all fish pairs had low dynamic joint space use values (<0.07; coefficient of association), indicating that fish pairs were temporally segregated, rarely occurring in the same location at the same time.
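The kernel-estimated utilization distribution described above can be sketched in one dimension, treating fish relocations as positions along a river coordinate (consistent with the linear ranges reported). This is an illustrative numpy-only sketch, not the authors' software; the Gaussian kernel, bandwidth, and grid settings are assumptions.

```python
import numpy as np

def kde_1d(points, grid, bandwidth):
    """Gaussian kernel density estimate along a 1-D river coordinate."""
    z = (grid[:, None] - points[None, :]) / bandwidth
    dens = np.exp(-0.5 * z ** 2).sum(axis=1)
    return dens / (len(points) * bandwidth * np.sqrt(2.0 * np.pi))

def utilization_length(points, frac, bandwidth=50.0, grid_n=2000):
    """River length holding `frac` of the utilization distribution:
    grid cells are accumulated from highest density downward, so the
    50% contour delineates the core area and the 95% the home range."""
    pad = 4.0 * bandwidth
    grid = np.linspace(points.min() - pad, points.max() + pad, grid_n)
    cell = grid[1] - grid[0]
    p = kde_1d(points, grid, bandwidth) * cell
    p = p / p.sum()
    order = np.argsort(p)[::-1]          # densest cells first
    mass = np.cumsum(p[order])
    return (np.searchsorted(mass, frac) + 1) * cell
```

By construction the 50% core-area length is always contained within, and smaller than, the 95% home-range length, mirroring the nested kernel estimates reported in the study.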
Validation of Born Traveltime Kernels
NASA Astrophysics Data System (ADS)
Baig, A. M.; Dahlen, F. A.; Hung, S.
2001-12-01
Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths smaller than the width of the Fresnel zone. In the presence of such structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job of predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective even though the kernels remain relatively accurate, provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we will also show how our measurements of the velocity shift and the variance of the traveltime compare to various theoretical predictions in a given regime.
Classification With Truncated Distance Kernel.
Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas
2018-05-01
This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods remain applicable, which means that the TL1 kernel can be used directly in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves performance similar to or better than that of the radial basis function kernel with the parameter tuned by cross-validation, suggesting that the TL1 kernel is a promising nonlinear kernel for classification tasks.
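A truncated L1-distance kernel of the form k(x, y) = max(ρ − ||x − y||₁, 0) can be sketched as below and plugged into any toolbox that accepts a precomputed Gram matrix. This is a hedged illustration of a truncated distance kernel in general, not necessarily the exact TL1 definition or the ρ value used in the brief.

```python
import numpy as np

def tl1_kernel(X, Y, rho):
    """Truncated L1-distance kernel: k(x, y) = max(rho - ||x - y||_1, 0).
    The kernel is nonzero only for points within L1 distance rho of each
    other, which localizes the resulting classifier to subregions."""
    d1 = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=-1)
    return np.maximum(rho - d1, 0.0)
```

Because the kernel vanishes beyond the truncation radius, the Gram matrix is sparse for well-spread data; note it is generally indefinite, so solvers that assume positive semidefiniteness may need regularization.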
NASA Astrophysics Data System (ADS)
Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen
2014-05-01
We present applications of a new clustering method for fault network reconstruction based on the spatial distribution of seismicity. Unlike common approaches that start from the simplest large scale and gradually increase the complexity trying to explain the small scales, our method uses a bottom-up approach: an initial sampling of the small scales, followed by a reduction of the complexity. The new approach also exploits the location uncertainty associated with each event in order to obtain a more accurate representation of the spatial probability distribution of the seismicity. For a given dataset, we first construct an agglomerative hierarchical cluster (AHC) tree based on Ward's minimum variance linkage. Such a tree starts out with one cluster and progressively branches out into an increasing number of clusters. To atomize the structure into its constitutive protoclusters, we initialize a Gaussian Mixture Model (GMM) at a given level of the hierarchical clustering tree. We then let the GMM converge using an Expectation Maximization (EM) algorithm. The kernels that become ill defined (fewer than 4 points) at the end of the EM are discarded. By incrementing the number of initialization clusters (by atomizing at increasingly populated levels of the AHC tree) and repeating the procedure above, we are able to determine the maximum number of Gaussian kernels the structure can hold. The kernels in this configuration constitute our protoclusters. In this setting, merging any pair of kernels will lessen the likelihood (calculated over the pdf of the kernels) but will in turn reduce the model's complexity. The information loss/gain of any possible merging can thus be quantified based on the Minimum Description Length (MDL) principle. Similar to an inter-distance matrix, where the matrix element d(i,j) gives the distance between points i and j, we can construct an MDL gain/loss matrix where m(i,j) gives the information gain/loss resulting from the merging of kernels i and j.
Based on this matrix, merging events resulting in MDL gain are performed in descending order until no gainful merging is possible anymore. We envision that the results of this study could lead to a better understanding of the complex interactions within the Californian fault system and hopefully use the acquired insights for earthquake forecasting.
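The atomization step described above (Ward-linkage cut, GMM initialization, EM, discarding ill-defined kernels) can be sketched with scipy and scikit-learn. This is a simplified illustration under stated assumptions: it omits the location-uncertainty weighting and the MDL merging stage, and the `min_pts` threshold of 4 follows the text.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.mixture import GaussianMixture

def atomize(points, n_init_clusters, min_pts=4):
    """Cut a Ward AHC tree into n_init_clusters groups, use the group
    means to initialize a GMM, run EM, and flag kernels supported by
    fewer than min_pts events for discarding."""
    Z = linkage(points, method="ward")
    labels = fcluster(Z, t=n_init_clusters, criterion="maxclust")
    means = np.array([points[labels == c].mean(axis=0)
                      for c in np.unique(labels)])
    gmm = GaussianMixture(n_components=len(means), means_init=means,
                          covariance_type="full").fit(points)
    counts = np.bincount(gmm.predict(points), minlength=len(means))
    keep = counts >= min_pts          # ill-defined kernels -> False
    return gmm, keep
```

Incrementing `n_init_clusters` and repeating until kernels start being discarded gives the maximum number of Gaussian kernels the structure can hold, as in the abstract.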
NASA Astrophysics Data System (ADS)
Chmiel, Malgorzata; Roux, Philippe; Herrmann, Philippe; Rondeleux, Baptiste; Wathelet, Marc
2018-05-01
We investigated the construction of diffraction kernels for surface waves using two-point convolution and/or correlation from land active seismic data recorded in the context of exploration geophysics. The high density of controlled sources and receivers, combined with the application of the reciprocity principle, allows us to retrieve two-dimensional phase-oscillation diffraction kernels (DKs) of surface waves between any two source or receiver points in the medium at each frequency (up to 15 Hz, at least). These DKs are purely data-based as no model calculations and no synthetic data are needed. They naturally emerge from the interference patterns of the recorded wavefields projected on the dense array of sources and/or receivers. The DKs are used to obtain multi-mode dispersion relations of Rayleigh waves, from which near-surface shear velocity can be extracted. Using convolution versus correlation with a grid of active sources is an important step in understanding the physics of the retrieval of surface wave Green's functions. This provides the foundation for future studies based on noise sources or active sources with a sparse spatial distribution.
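The correlation branch of the kernel construction above can be sketched as a frequency-domain cross-correlation of the wavefields recorded at two points, stacked over a dense grid of sources. This is a generic interferometry sketch under stated assumptions, not the authors' processing chain; the array shapes and stacking are illustrative.

```python
import numpy as np

def correlation_kernel(rec_a, rec_b):
    """Stack of cross-correlations of wavefields recorded at two points
    over many sources (rows). The stack approximates the inter-point
    response, as in interferometric Green's function retrieval."""
    n = rec_a.shape[1]
    A = np.fft.rfft(rec_a, axis=1)
    B = np.fft.rfft(rec_b, axis=1)
    # Cross-correlate per source in the frequency domain, then stack
    xc = np.fft.irfft(A * np.conj(B), n=n, axis=1)
    return xc.sum(axis=0)
```

For a wavefield that arrives at the second point with a fixed delay, the stacked correlation peaks at that (circular) lag, which is the basic physics behind retrieving two-point surface-wave responses from a dense source grid.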
Carbon monoxide formation in UO2 kerneled HTR fuel particles containing oxygen getters
NASA Astrophysics Data System (ADS)
Proksch, E.; Strigl, A.; Nabielek, H.
1986-01-01
Mass spectrometric measurements of CO in irradiated UO2 fuel particles containing oxygen getters are summarized. Uranium carbide addition in the 3% to 15% range reduces the CO release by factors between 25 and 80, up to burn-up levels as high as 70% FIMA. Unintentional gettering by SiC in TRISO coated particles with failed inner pyrocarbon layers results in CO reduction factors between 15 and 110. For ZrC, the results are ambiguous, but a CO reduction factor of about 40 is probable. Ce2O3 and La2O3 seem less effective than the carbides; for Ce2O3, reduction factors between 3 and 15 are found. However, these results are possibly compromised by premature oxidation of the getter during fabrication. Addition of SiO2 + Al2O3 has no influence on CO release.
Apparatus tube configuration and mounting for solid oxide fuel cells
Zymboly, Gregory E.
1993-01-01
A generator apparatus (10) is made containing long, hollow, tubular fuel cells comprising an inner air electrode (64), an outer fuel electrode (56), and a solid electrolyte (54) therebetween, placed between a fuel distribution board (29) and a board (32) which separates the combustion chamber (16) from the generating chamber (14). Each fuel cell has an insertable open end and an insertable, plugged, closed end (44), the plugged end being inserted into the fuel distribution board (29) and the open end being inserted through the separator board (32), with the plug (60) completely within the fuel distribution board (29).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belsom, Keith Cletus; McMahan, Kevin Weston; Thomas, Larry Lou
A fuel nozzle for a gas turbine generally includes a main body having an upstream end axially separated from a downstream end. The main body at least partially defines a fuel supply passage that extends through the upstream end and at least partially through the main body. A fuel distribution manifold is disposed at the downstream end of the main body. The fuel distribution manifold includes a plurality of axially extending passages that extend through the fuel distribution manifold. A plurality of fuel injection ports defines a flow path between the fuel supply passage and each of the plurality of axially extending passages.
Uranium nitride as LWR TRISO fuel: Thermodynamic modeling of U-C-N
NASA Astrophysics Data System (ADS)
Besmann, Theodore M.; Shin, Dongwon; Lindemer, Terrence B.
2012-08-01
TRISO coated particle fuel is envisioned as a next-generation replacement for current urania pellet fuel in LWR applications. To obtain adequate fissile loading, the kernel of the TRISO particle will likely need to be UN instead of UO2. In support of the necessary development effort for this new fuel system, an assessment of phase regions of interest in the U-C-N system was undertaken, as the fuel will be prepared by carbothermic reduction of the oxide followed by nitriding, will be in equilibrium with carbon within the TRISO particle, and will react with minor actinides and fission products. The phase equilibria and thermochemistry of the U-C-N system are reviewed, including nitrogen pressure measurements above various phase fields. The measurements were used to confirm that an ideal solution model of UN and UC adequately represents the UC1-xNx phase. Agreement with the data was significantly improved by effectively adjusting the Gibbs free energy of UN by +12 kJ/mol. This also required adjusting the value for the sesquinitride by +17 kJ/mol to obtain agreement with phase equilibria. The resultant model, together with reported values for other phases in the system, was used to generate isothermal sections of the U-C-N phase diagram. Nitrogen partial pressures were also computed for regions of interest.
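The ideal-solution description of the UC1-xNx phase used above has the standard textbook form G(x) = (1−x)·G(UC) + x·G(UN) + RT[x ln x + (1−x) ln(1−x)]. A minimal sketch with placeholder end-member Gibbs energies (the real assessed values are not reproduced here) shows how the model is evaluated; the +12 kJ/mol adjustment reported in the abstract would simply be added to the UN end-member term.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def ideal_solution_gibbs(x, g_uc, g_un, T):
    """Molar Gibbs energy of an ideal UC(1-x)N(x) solid solution.
    g_uc and g_un are end-member Gibbs energies in J/mol (placeholders
    here; an assessed adjustment like +12 kJ/mol enters through g_un)."""
    x = np.asarray(x, dtype=float)
    g_mix = R * T * (x * np.log(x) + (1.0 - x) * np.log(1.0 - x))
    return (1.0 - x) * g_uc + x * g_un + g_mix
```

The ideal mixing term is always negative for 0 < x < 1 and reaches RT ln(1/2) per mole at the equimolar composition, which is why carbide-nitride mixing stabilizes the solution relative to the end members.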
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-01-01
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of the single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202
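The core of a weighted multiple-kernel ELM, setting aside the QPSO search, is a composite Gram matrix (a weighted sum of base kernels) fed to the closed-form KELM solve. The sketch below is an assumption-laden illustration: only two base kernels are combined, the coefficients and kernel parameters are fixed by hand rather than optimized by QPSO, and the KELM form beta = (K + I/C)^{-1} y is the standard one, not necessarily the paper's exact variant.

```python
import numpy as np

def gaussian_k(X, Y, gamma=0.5):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def poly_k(X, Y, degree=2, c=1.0):
    return (X @ Y.T + c) ** degree

def composite_kernel(X, Y, coeffs, gamma=0.5, degree=2):
    """Weighted sum of base kernels; in QWMK-ELM the coefficients and
    base-kernel parameters would be optimized jointly by QPSO."""
    c1, c2 = coeffs
    return c1 * gaussian_k(X, Y, gamma) + c2 * poly_k(X, Y, degree)

def kelm_train(K, y, C=1.0):
    """Closed-form KELM output weights: beta = (K + I/C)^(-1) y."""
    n = K.shape[0]
    return np.linalg.solve(K + np.eye(n) / C, y)
```

Prediction on new points uses `composite_kernel(X_new, X_train, coeffs) @ beta`; the whole pipeline trains in one linear solve, which is the efficiency advantage KELM variants claim over iteratively trained networks.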
Gabor-based kernel PCA with fractional power polynomial models for face recognition.
Liu, Chengjun
2004-05-01
This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. 
The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
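The key numerical device above, retaining only the kernel PCA eigenvectors associated with positive eigenvalues of an indefinite Gram matrix, can be sketched directly. This is a hedged illustration: the fractional-power "kernel" below (sign-preserving power of the dot product) and the normalization are plausible choices, not necessarily the paper's exact definitions, and no Gabor features are computed.

```python
import numpy as np

def frac_poly_kernel(X, d=0.8):
    """Fractional power polynomial 'kernel' k(x, y) = sign(x.y)*|x.y|^d.
    For fractional d this is not guaranteed positive semidefinite."""
    G = X @ X.T
    return np.sign(G) * np.abs(G) ** d

def kpca_positive(K, n_components=3):
    """Kernel PCA keeping only eigenvectors with positive eigenvalues,
    so the projected features stay real for an indefinite kernel."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                        # center the Gram matrix
    lam, V = np.linalg.eigh(Kc)
    idx = np.argsort(lam)[::-1]
    lam, V = lam[idx], V[:, idx]
    pos = lam > 1e-10                     # drop non-positive spectrum
    lam, V = lam[pos][:n_components], V[:, pos][:, :n_components]
    return V * np.sqrt(lam), lam          # scores, retained eigenvalues
```

Discarding the negative part of the spectrum is the same pragmatic fix that lets sigmoid kernels be used in practice despite their indefiniteness.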
Monitoring NEON terrestrial sites phenology with daily MODIS BRDF/albedo product and landsat data
USDA-ARS?s Scientific Manuscript database
The MODerate resolution Imaging Spectroradiometer (MODIS) Bidirectional Reflectance Distribution Function (BRDF) and albedo products (MCD43) have already been in production for more than a decade. The standard product makes use of a linear “kernel-driven” RossThick-LiSparse Reciprocal (RTLSR) BRDF m...
A multi-label learning based kernel automatic recommendation method for support vector machine.
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
Code of Federal Regulations, 2010 CFR
2010-01-01
7 CFR 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976]
A Theoretical Solid Oxide Fuel Cell Model for System Controls and Stability Design
NASA Technical Reports Server (NTRS)
Kopasakis, George; Brinson, Thomas; Credle, Sydni; Xu, Ming
2006-01-01
As the aviation industry moves towards higher efficiency electrical power generation, all electric aircraft, or zero emissions and more quiet aircraft, fuel cells are sought as the technology that can deliver on these high expectations. The Hybrid Solid Oxide Fuel Cell system combines the fuel cell with a microturbine to obtain up to 70 percent cycle efficiency, and then distributes the electrical power to the loads via a power distribution system. The challenge is to understand the dynamics of this complex multi-discipline system, and design distributed controls that take the system through its operating conditions in a stable and safe manner while maintaining the system performance. This particular system is a power generation and distribution system and the fuel cell and microturbine model fidelity should be compatible with the dynamics of the power distribution system in order to allow proper stability and distributed controls design. A novel modeling approach is proposed for the fuel cell that will allow the fuel cell and the power system to be integrated and designed for stability, distributed controls, and other interface specifications. This investigation shows that for the fuel cell, the voltage characteristic should be modeled, but in addition, conservation equation dynamics, ion diffusion, charge transfer kinetics, and the electron flow inherent impedance should also be included.
Kernel K-Means Sampling for Nyström Approximation.
He, Li; Zhang, Hong
2018-05-01
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our work to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.
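The Nyström construction being sampled for above can be sketched in a few lines: pick m landmark points, then approximate the full Gram matrix as K ≈ K_nm · pinv(K_mm) · K_nmᵀ. This is an illustrative sketch; plain Lloyd's k-means in input space is used here as a stand-in for the paper's kernel k-means (for the RBF kernel the two are related but not identical).

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means to pick landmark points (a proxy for the
    kernel k-means sampling proposed in the paper)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        lab = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(lab == j):
                C[j] = X[lab == j].mean(axis=0)
    return C

def nystrom(X, landmarks, gamma=1.0):
    """Nyström approximation K ~ K_nm pinv(K_mm) K_nm^T."""
    Knm = rbf(X, landmarks, gamma)
    Kmm = rbf(landmarks, landmarks, gamma)
    return Knm @ np.linalg.pinv(Kmm) @ Knm.T
```

With every data point taken as a landmark the reconstruction is exact; the paper's result is that choosing landmarks at the (kernel) k-means centers minimizes the Frobenius-norm error upper bound for smaller m.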
LIQUID AND GASEOUS FUEL DISTRIBUTION SYSTEM
The report describes the national liquid and gaseous fuel distribution system. The study leading to the report was performed as part of an effort to better understand emissions of volatile organic compounds from the fuel distribution system. The primary, secondary, and tertiary seg...
Exploiting graph kernels for high performance biomedical relation extraction.
Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri
2018-01-30
Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph (APG) kernel and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high-performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for sentence-level CID relation extraction is not significant.
In our evaluation of ASM for the PPI task, ASM performed better than the APG kernel for the BioInfer dataset in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures. We demonstrate high-performance Chemical-Induced Disease relation extraction without employing external knowledge sources or task-specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences. We also show that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, we showed that the ASM kernel is effective for biomedical relation extraction, with performance comparable to the APG kernel on datasets such as sentence-level CID relation extraction and BioInfer in PPI. Overall, the APG kernel is shown to be significantly more accurate than the ASM kernel, achieving better performance on most datasets.
7 CFR 810.2202 - Definition of other terms.
Code of Federal Regulations, 2014 CFR
2014-01-01
... kernels, foreign material, and shrunken and broken kernels. The sum of these three factors may not exceed... the removal of dockage and shrunken and broken kernels. (g) Heat-damaged kernels. Kernels, pieces of... sample after the removal of dockage and shrunken and broken kernels. (h) Other grains. Barley, corn...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...
7 CFR 51.1415 - Inedible kernels.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or otherwise...
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
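The computational virtue being exploited is that a circulant matrix admits O(n log n) matrix-vector products via the FFT. A minimal 1-D sketch of this idea (illustrative values only; the paper's construction is multilevel and this is not its algorithm):

```python
import numpy as np

n = 64
x = np.arange(n) / n
# Circulant surrogate for a Gaussian kernel matrix on a regular 1-D grid:
# wrap the distance around the unit circle so every row is a cyclic shift.
d = np.minimum(x, 1 - x)
c = np.exp(-d**2 / 0.02)                                  # first column of C
C = np.stack([np.roll(c, j) for j in range(n)], axis=1)   # dense C, only for checking

v = np.random.default_rng(0).standard_normal(n)
# C @ v is a circular convolution, so it costs O(n log n) via the FFT:
fast = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(v)))
assert np.allclose(fast, C @ v)
```

The full n x n matrix C is never needed in practice; the first column c suffices, which is what makes kernel selection over many candidate kernels affordable.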
Synthesis of phase-pure U2N3 microspheres and its decomposition into UN
Silva, Chinthaka M.; Hunt, Rodney Dale; Snead, Lance Lewis; ...
2014-12-12
Uranium mononitride (UN) is important as a nuclear fuel. Fabrication of UN in microspherical form also has its own merits since the advent of the accident-tolerant fuel concept, where UN is being considered as a potential fuel in the form of TRISO particles. However, few processes have been well established to synthesize kernels of UN. Therefore, a process for the synthesis of microspherical UN with a minimum amount of carbon is discussed herein. First, a series of single-phase microspheres of uranium sesquinitride (U2N3) were synthesized by nitridation of UO2+C microspheres at a few different temperatures. The resulting microspheres were of low-density U2N3 and decomposed into low-density UN. The variation of the density of the synthesized sesquinitrides as a function of chemical composition indicated the presence of extra (interstitial) nitrogen atoms corresponding to hyperstoichiometry, which is normally indicated as α-U2N3. Average grain sizes of both U2N3 and UN varied in the range of 1–2.5 μm. In addition, these had a considerable amount of pore space, indicating the potential sinterability of UN toward its use as a nuclear fuel.
Kornilov, Oleg; Toennies, J Peter
2015-02-21
The size distribution of para-H2 (pH2) clusters produced in free jet expansions at a source temperature of T0 = 29.5 K and pressures of P0 = 0.9-1.96 bars is reported and analyzed according to a cluster growth model based on the Smoluchowski theory with kernel scaling. Good overall agreement is found between the measured shape of the distribution and the predicted form N_k = A k^a e^(-bk). The fit yields values for A and b, with values of a derived from simple collision models. The small remaining deviations between measured abundances and theory imply a (pH2)k magic-number cluster of k = 13, as has been observed previously by Raman spectroscopy. The predicted linear dependence of b^(-(a+1)) on source gas pressure was verified and used to determine the value of the basic effective agglomeration reaction rate constant. A comparison of the corresponding effective growth cross sections σ11 with results from a similar analysis of He cluster size distributions indicates that the latter are larger by a factor of 6-10. An analysis of the three-body recombination rates, the geometric sizes, and the fact that the He clusters are liquid independent of their size can explain the larger cross sections found for He.
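The fitted form N_k = A k^a e^(-bk) is easy to explore directly; setting its derivative to zero shows the distribution peaks at k = a/b. A short check with illustrative parameter values (not the fitted ones from the paper):

```python
import numpy as np

# Scaled growth-model size distribution N_k = A * k**a * exp(-b*k);
# A, a, b are illustrative here, not the paper's fitted values.
A, a, b = 1.0, 2.0, 0.1
k = np.arange(1, 200)
N = A * k**a * np.exp(-b * k)

# dN/dk = 0  =>  a*k**(a-1) = b*k**a  =>  k = a/b
k_peak = k[np.argmax(N)]
assert k_peak == round(a / b)
```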
Nonparametric Bayesian inference for mean residual life functions in survival analysis.
Poynor, Valerie; Kottas, Athanasios
2018-01-19
Modeling and inference for survival analysis problems typically revolve around different functions related to the survival distribution. Here, we focus on the mean residual life (MRL) function, which provides the expected remaining lifetime given that a subject has survived (i.e. is event-free) up to a particular time. This function is of direct interest in the reliability, medical, and actuarial fields. In addition to its practical interpretation, the MRL function characterizes the survival distribution. We develop general Bayesian nonparametric inference for MRL functions built from a Dirichlet process mixture model for the associated survival distribution. The resulting model for the MRL function admits a representation as a mixture of the kernel MRL functions with time-dependent mixture weights. This model structure allows for a wide range of shapes for the MRL function. Particular emphasis is placed on the selection of the mixture kernel, taken to be a gamma distribution, to obtain desirable properties for the MRL function arising from the mixture model. The inference method is illustrated with a data set of two experimental groups and a data set involving right censoring. The supplementary material available at Biostatistics online provides further results on the empirical performance of the model, using simulated data examples.
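The MRL function itself is straightforward to evaluate numerically from a survival function, via m(t) = (1/S(t)) * integral of S(u) from t to infinity. A sketch using the exponential case (a gamma kernel with shape 1), whose MRL is constant and equal to the mean; parameter values are illustrative:

```python
import numpy as np

rate = 0.5
u = np.linspace(0, 80, 200_001)        # fine grid; S(80) is negligible
S = np.exp(-rate * u)                   # survival function of Exp(rate)

def mrl(t):
    """Mean residual life m(t) = E[T - t | T > t] by trapezoidal integration."""
    m = u >= t
    integral = np.sum((S[m][1:] + S[m][:-1]) * np.diff(u[m])) / 2.0
    return integral / np.exp(-rate * t)

# For the exponential (memoryless) case, m(t) is constant at the mean 1/rate.
for t in (0.0, 1.0, 4.0):
    assert abs(mrl(t) - 1.0 / rate) < 1e-3
```

For a general gamma kernel the same quadrature applies with the gamma survival function; the memoryless check above is just a convenient special case to validate the computation.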
Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.
Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit
2018-02-13
Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels.
THE LIQUID AND GASEOUS FUEL DISTRIBUTION SYSTEM
The report describes the national liquid and gaseous fuel distribution system. The study leading to the report was performed as part of an effort to better understand emissions of volatile organic compounds from the fuel distribution system. The primary, secondary, and tertiary seg...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false How may California diesel fuel be... Motor Vehicle Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Violation Provisions § 80.617 How may California diesel fuel be distributed or sold outside of the State of California...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false How may California diesel fuel be... Motor Vehicle Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Violation Provisions § 80.617 How may California diesel fuel be distributed or sold outside of the State of California...
Unconventional protein sources: apricot seed kernels.
Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M
1981-09-01
Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter) and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2-50.91%), protein (23.74-25.70%) and fiber (15.08-18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were used for an extensive study including the qualitative determination of the amino acid constituents by acid hydrolysis, quantitative determination of some amino acids, and biological evaluation of the kernel proteins in order to use them as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels because its bitterness led to low food consumption, although there was no loss in weight in that case. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be higher in biological value than the treated apricot seed kernels. The Net Protein Ratio data, which account for both weight maintenance and growth, showed the treated apricot seed kernels to be higher in biological value than both the Hamawy and Amar kernels. The Net Protein Ratio values for the last two kernels were nearly equal.
Soot and liquid-phase fuel distributions in a newly designed optically accessible DI diesel engine
NASA Astrophysics Data System (ADS)
Dec, J. E.; Espey, C.
1993-10-01
Two-dimensional (2-D) laser-sheet imaging has been used to examine the soot and liquid-phase fuel distributions in a newly designed, optically accessible, direct-injection diesel engine of the heavy-duty size class. The design of this engine preserves the intake port geometry and basic dimensions of a Cummins N-series production engine. It also includes several unique features to provide considerable optical access. Liquid-phase fuel and soot distribution studies were conducted at a medium speed (1,200 rpm) using a Cummins closed-nozzle fuel injector. Elastic scattering was used to obtain planar images of the liquid-phase fuel distribution. These images show that the leading edge of the liquid-phase portion of the fuel jet reaches a maximum length of 24 mm, which is about half the combustion bowl radius for this engine. Beyond this point virtually all the fuel has vaporized. Soot distribution measurements were made at a high-load condition using three imaging diagnostics: natural flame luminosity, 2-D laser-induced incandescence, and 2-D elastic scattering. This investigation showed that the soot distribution in the combusting fuel jet develops through three stages. First, just after the onset of luminous combustion, soot particles are small and nearly uniformly distributed throughout the luminous region of the fuel jet. Second, after about 2 crank angle degrees, a pattern develops of a higher concentration of larger soot particles in the head vortex region of the jet and a lower concentration of smaller soot particles upstream toward the injector. Third, after fuel injection ends, both the soot concentration and soot particle size increase rapidly in the upstream portion of the fuel jet.
An introduction to kernel-based learning algorithms.
Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B
2001-01-01
This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples of successful kernel-based learning methods. We first give a short background on Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel-based learning in supervised and unsupervised scenarios, including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
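The idea these methods share is that a kernel k(x, z) equals an inner product in some feature space, so algorithms never need the feature map explicitly. A minimal sketch (not from the paper) verifying this for the degree-2 polynomial kernel on 2-D inputs:

```python
import numpy as np

def phi(v):
    """Explicit 6-D feature map whose inner product reproduces (x.z + 1)^2."""
    x1, x2 = v
    s = np.sqrt(2.0)
    return np.array([1.0, s * x1, s * x2, x1 * x1, x2 * x2, s * x1 * x2])

rng = np.random.default_rng(1)
for _ in range(5):
    x, z = rng.standard_normal(2), rng.standard_normal(2)
    k = (x @ z + 1.0) ** 2          # kernel evaluation: O(d) work
    assert np.isclose(k, phi(x) @ phi(z))  # same value as the explicit inner product
```

SVMs, kernel Fisher discriminants, and kernel PCA all exploit exactly this identity, replacing every inner product in the linear algorithm with a kernel evaluation.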
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...
Design of CT reconstruction kernel specifically for clinical lung imaging
NASA Astrophysics Data System (ADS)
Cody, Dianna D.; Hsieh, Jiang; Gladish, Gregory W.
2005-04-01
In this study we developed a new reconstruction kernel specifically for chest CT imaging. An experimental flat-panel CT scanner was used on large dogs to produce "ground-truth" reference chest CT images. These dogs were also examined using a clinical 16-slice CT scanner. We concluded from the dog images acquired on the clinical scanner that the loss of subtle lung structures was due mostly to the presence of the background noise texture when using currently available reconstruction kernels. This qualitative evaluation of the dog CT images prompted the design of a new reconstruction kernel, called the "Hybrid" kernel, formed by combining a low-pass and a high-pass kernel. The performance of this Hybrid kernel fell between that of the two kernels on which it was based, as expected. This Hybrid kernel was also applied to a set of 50 patient data sets; the analysis of these clinical images is underway. We are hopeful that this Hybrid kernel will produce clinical images with an acceptable tradeoff of lung detail, reliable HU, and image noise.
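The blending idea can be sketched generically in the frequency domain: a convex combination of two filter responses always lies between its parents at every frequency, which is why the Hybrid kernel's performance falls between the two. This is an illustrative mock-up, not the actual scanner kernels:

```python
import numpy as np

# Illustrative 1-D reconstruction-filter responses (hypothetical shapes):
f = np.linspace(0, 1, 256)            # normalized spatial frequency
smooth = np.exp(-4 * f**2) * f        # low-pass-weighted ramp ("soft" kernel)
sharp = f                             # plain ramp ("sharp" kernel)
hybrid = 0.5 * smooth + 0.5 * sharp   # blended "Hybrid" response

# The blend is pointwise bounded by its two parent responses.
assert np.all(hybrid >= np.minimum(smooth, sharp) - 1e-12)
assert np.all(hybrid <= np.maximum(smooth, sharp) + 1e-12)
```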
Quality changes in macadamia kernel between harvest and farm-gate.
Walton, David A; Wallace, Helen M
2011-02-01
Macadamia integrifolia, Macadamia tetraphylla and their hybrids are cultivated for their edible kernels. After harvest, nuts-in-shell are partially dried on-farm and sorted to eliminate poor-quality kernels before consignment to a processor. During these operations, kernel quality may be lost. In this study, macadamia nuts-in-shell were sampled at five points of an on-farm postharvest handling chain from dehusking to the final storage silo to assess quality loss prior to consignment. Shoulder damage, weight of pieces and unsound kernel were assessed for raw kernels, and colour, mottled colour and surface damage for roasted kernels. Shoulder damage, weight of pieces and unsound kernel for raw kernels increased significantly between the dehusker and the final silo. Roasted kernels displayed a significant increase in dark colour, mottled colour and surface damage during on-farm handling. Significant loss of macadamia kernel quality occurred on a commercial farm during sorting and storage of nuts-in-shell before nuts were consigned to a processor. Nuts-in-shell should be dried as quickly as possible and on-farm handling minimised to maintain optimum kernel quality.
Spatiotemporal variability of wildland fuels in US Northern Rocky Mountain forests
Robert E. Keane
2016-01-01
Fire regimes are ultimately controlled by wildland fuel dynamics over space and time; spatial distributions of fuel influence the size, spread, and intensity of individual fires, while the temporal distribution of fuel deposition influences fire frequency and controls fire size. These "shifting fuel mosaics" are both a cause and a consequence...
A Theoretical Solid Oxide Fuel Cell Model for Systems Controls and Stability Design
NASA Technical Reports Server (NTRS)
Kopasakis, George; Brinson, Thomas; Credle, Sydni
2008-01-01
As the aviation industry moves toward higher-efficiency electrical power generation, all-electric aircraft, or zero-emissions and quieter aircraft, fuel cells are sought as the technology that can deliver on these high expectations. The hybrid solid oxide fuel cell system combines the fuel cell with a micro-turbine to obtain up to 70% cycle efficiency, and then distributes the electrical power to the loads via a power distribution system. The challenge is to understand the dynamics of this complex multidisciplinary system and to design distributed controls that take the system through its operating conditions in a stable and safe manner while maintaining system performance. This particular system is a power generation and distribution system, and the fuel cell and micro-turbine model fidelity should be compatible with the dynamics of the power distribution system in order to allow proper stability and distributed controls design. The novelty in this paper is twofold. First, the case is made for why a high-fidelity fuel cell model is needed for systems control and stability designs. Second, a novel modeling approach is proposed for the fuel cell that allows the fuel cell and the power system to be integrated and designed for stability, distributed controls, and other interface specifications. This investigation shows that for the fuel cell, not only should the voltage characteristic be modeled, but conservation equation dynamics, ion diffusion, charge transfer kinetics, and the inherent impedance of the electron flow should also be included.
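For contrast with the high-fidelity dynamic model argued for above, the familiar low-fidelity steady-state description is just a polarization curve: cell voltage equals open-circuit voltage minus activation, ohmic, and concentration losses. A textbook sketch with illustrative parameters (this is not the paper's model):

```python
import numpy as np

# Textbook steady-state polarization curve; all parameter values are illustrative.
E_ocv = 1.0      # open-circuit voltage [V]
A = 0.03         # activation (Tafel) slope [V]
i0 = 1e-3        # exchange current density [A/cm^2]
R = 0.12         # area-specific ohmic resistance [ohm*cm^2]
m, n = 2e-5, 8.0 # empirical concentration-loss parameters

i = np.linspace(0.01, 1.0, 100)                       # load current density [A/cm^2]
V = E_ocv - A * np.log(i / i0) - R * i - m * np.exp(n * i)

# Voltage falls monotonically as load increases.
assert np.all(np.diff(V) < 0)
```

The paper's point is that this algebraic characteristic alone is insufficient for stability and distributed-controls design, which also needs the conservation-equation, diffusion, and charge-transfer dynamics.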
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-07-01
The report is divided into the following sections: (1) Introduction; (2) Conclusions and Recommendations; (3) Existing Conditions and Facilities for a Fuel Distribution Center; (4) Pacific Ocean Regional Tuna Fisheries and Resources; (5) Fishing Effort in the FSM EEZ 1992-1994; (6) Current Transshipping Operations in the Western Pacific Ocean; (7) Current and Probable Bunkering Practices of United States, Japanese, Korean, and Taiwanese Offshore-Based Vessels Operating in FSM and Adjacent Waters; (8) Shore-Based Fish-Handling/Processing; (9) Fuels Forecast; (10) Fuel Supply, Storage and Distribution; (11) Cost Estimates; (12) Economic Evaluation of Fuel Supply, Storage and Distribution.
KEY RESULTS FROM IRRADIATION AND POST-IRRADIATION EXAMINATION OF AGR-1 UCO TRISO FUEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demkowicz, Paul A.; Hunn, John D.; Petti, David A.
The AGR-1 irradiation experiment was performed as the first test of tristructural isotropic (TRISO) fuel in the US Advanced Gas Reactor Fuel Development and Qualification Program. The experiment consisted of 72 right cylinder fuel compacts containing approximately 3×10^5 coated fuel particles with uranium oxide/uranium carbide (UCO) fuel kernels. The fuel was irradiated in the Advanced Test Reactor for a total of 620 effective full power days. Fuel burnup ranged from 11.3 to 19.6% fissions per initial metal atom, and time-average, volume-average irradiation temperatures of the individual compacts ranged from 955 to 1136°C. This paper focuses on key results from the irradiation and post-irradiation examination, which revealed a robust fuel with excellent performance characteristics under the conditions tested and significantly improved the understanding of UCO coated particle fuel irradiation behavior within the US program. The fuel exhibited a very low incidence of TRISO coating failure during irradiation and post-irradiation safety testing at temperatures up to 1800°C. Advanced PIE methods have allowed particles with SiC coating failure to be isolated and meticulously examined, which has elucidated the specific causes of SiC failure in these specimens. The level of fission product release from the fuel during irradiation and post-irradiation safety testing has been studied in detail. Results indicated very low release of krypton and cesium through intact SiC and modest release of europium and strontium, while also confirming the potential for significant silver release through the coatings depending on irradiation conditions. Focused study of fission products within the coating layers of irradiated particles down to nanometer length scales has provided new insights into fission product transport through the coating layers and the role various fission products may have on coating integrity.
The broader implications of these results and the application of lessons learned from AGR-1 to fuel fabrication and post-irradiation examination for subsequent fuel irradiation experiments as part of the US fuel program are also discussed.
Key results from irradiation and post-irradiation examination of AGR-1 UCO TRISO fuel
Demkowicz, Paul A.; Hunn, John D.; Petti, David A.; ...
2017-09-10
The AGR-1 irradiation experiment was performed as the first test of tristructural isotropic (TRISO) fuel in the US Advanced Gas Reactor Fuel Development and Qualification Program. The experiment consisted of 72 right cylinder fuel compacts containing approximately 3 × 10^5 coated fuel particles with uranium oxide/uranium carbide (UCO) fuel kernels. The fuel was irradiated in the Advanced Test Reactor for a total of 620 effective full power days. Fuel burnup ranged from 11.3 to 19.6% fissions per initial metal atom, and time-average, volume-average irradiation temperatures of the individual compacts ranged from 955 to 1136 °C. This paper focuses on key results from the irradiation and post-irradiation examination, which revealed a robust fuel with excellent performance characteristics under the conditions tested and significantly improved the understanding of UCO coated particle fuel irradiation behavior. The fuel exhibited zero TRISO coating failures (failure of all three dense coating layers) during irradiation and post-irradiation safety testing at temperatures up to 1700 °C. Advanced PIE methods have allowed particles with SiC coating failure, discovered to be present in a very low population, to be isolated and meticulously examined, which has elucidated the specific causes of SiC failure in these specimens. The level of fission product release from the fuel during irradiation and post-irradiation safety testing has been studied in detail. Results indicated very low release of krypton and cesium through intact SiC and modest release of europium and strontium, while also confirming the potential for significant silver release through the coatings depending on irradiation conditions.
Focused study of fission products within the coating layers of irradiated particles down to nanometer length scales has provided new insights into fission product transport through the coating layers and the role various fission products may have on coating integrity. The broader implications of these results and the application of lessons learned from AGR-1 to fuel fabrication and post-irradiation examination for subsequent fuel irradiation experiments as part of the US fuel program are also discussed.
NASA Astrophysics Data System (ADS)
Cook, J. C.; Barker, J. G.; Rowe, J. M.; Williams, R. E.; Gagnon, C.; Lindstrom, R. M.; Ibberson, R. M.; Neumann, D. A.
2015-08-01
The recent expansion of the National Institute of Standards and Technology (NIST) Center for Neutron Research facility has offered a rare opportunity to perform an accurate measurement of the cold neutron spectrum at the exit of a newly-installed neutron guide. Using a combination of a neutron time-of-flight measurement, a gold foil activation measurement, and Monte Carlo simulation of the neutron guide transmission, we obtain the most reliable experimental characterization of the Advanced Liquid Hydrogen Cold Neutron Source brightness to date. Time-of-flight measurements were performed at three distinct fuel burnup intervals, including one immediately following reactor startup. Prior to the latter measurement, the hydrogen was maintained in a liquefied state for an extended period in an attempt to observe an initial radiation-induced increase of the ortho (o)-hydrogen fraction. Since para (p)-hydrogen has a small scattering cross-section for neutron energies below 15 meV (neutron wavelengths greater than about 2.3 Å), changes in the o-p hydrogen ratio and in the void distribution in the boiling hydrogen influence the spectral distribution. The nature of such changes is simulated with a continuous-energy, Monte Carlo radiation-transport code using 20 K o- and p-hydrogen scattering kernels and an estimated hydrogen density distribution derived from an analysis of localized heat loads. A comparison of the transport calculations with the mean brightness function resulting from the three measurements suggests an overall o-p ratio of about 17.5(±1)% o to 82.5% p for neutron energies < 15 meV, a significantly lower ortho concentration than previously assumed.
A new discriminative kernel from probabilistic models.
Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert
2002-10-01
Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework for feature extractors from probabilistic models and use it to analyze the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
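For background, the Fisher kernel that TOP builds on is formed from gradients of the model's log-likelihood with respect to its parameters. A one-parameter Gaussian sketch (illustrative only; this is the Fisher construction, not the paper's TOP kernel):

```python
# For p(x | mu) = N(x; mu, 1), the Fisher score is d/dmu log p(x) = x - mu,
# so the (unnormalized) Fisher kernel is k(x, z) = (x - mu) * (z - mu).
mu = 0.5  # illustrative model parameter

def fisher_score(x):
    return x - mu

def fisher_kernel(x, z):
    return fisher_score(x) * fisher_score(z)

assert fisher_kernel(1.5, 2.5) == (1.5 - mu) * (2.5 - mu)
```

The TOP kernel replaces this marginal log-likelihood gradient with tangent vectors of the posterior log-odds, which is what gives it its discriminative character.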
Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.
Kwak, Nojun
2016-05-20
Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
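The core computation behind NPT can be sketched as recovering explicit sample coordinates from a PSD kernel matrix, so that inner products of the coordinates reproduce the kernel. A simplified batch sketch (not the paper's incremental INPT):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((20, 3))
# RBF kernel matrix (symmetric PSD) over 20 samples:
K = np.exp(-0.5 * ((Z[:, None] - Z[None, :]) ** 2).sum(-1))

w, V = np.linalg.eigh(K)
w = np.clip(w, 0, None)        # guard tiny negative eigenvalues from round-off
X = V * np.sqrt(w)             # explicit coordinates: X @ X.T == K

assert np.allclose(X @ X.T, K, atol=1e-8)
```

With such coordinates in hand, any linear algorithm (SVD, PCA, discriminant analysis) can be run directly on X, which is the sense in which NPT sidesteps the kernel trick.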
A shortest-path graph kernel for estimating gene product semantic similarity.
Alvarez, Marco A; Qi, Xiaojun; Yan, Changhui
2011-07-29
Existing methods for calculating semantic similarity between gene products using the Gene Ontology (GO) often rely on external resources, which are not part of the ontology. Consequently, changes in these external resources, such as biased term distributions caused by shifts in hot research topics, will affect the calculation of semantic similarity. One way to avoid this problem is to use semantic methods that are "intrinsic" to the ontology, i.e. independent of external knowledge. We present a shortest-path graph kernel (spgk) method that relies exclusively on the GO and its structure. In spgk, a gene product is represented by an induced subgraph of the GO, which consists of all the GO terms annotating it. A shortest-path graph kernel is then used to compute the similarity between two graphs. In a comprehensive evaluation using a benchmark dataset, spgk compares favorably with other methods that depend on external resources. Compared with simUI, a method that is also intrinsic to the GO, spgk achieves slightly better results on the benchmark dataset. Statistical tests show that the improvement is significant when the resolution and EC similarity correlation coefficients are used to measure performance, but insignificant when the Pfam similarity correlation coefficient is used. Spgk uses a polynomial-time graph kernel method to exploit the structure of the GO to calculate semantic similarity between gene products. It provides an alternative to both methods that use external resources and "intrinsic" methods, with comparable performance.
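A minimal shortest-path graph kernel can be sketched as counting pairs of node pairs, one from each graph, whose endpoint labels and shortest-path distances match. This is a simplified illustration, not the exact spgk variant of the paper:

```python
import itertools
import numpy as np

def shortest_paths(n, edges):
    """All-pairs shortest-path distances of an unweighted graph (Floyd-Warshall)."""
    d = np.full((n, n), np.inf)
    np.fill_diagonal(d, 0.0)
    for u, v in edges:
        d[u, v] = d[v, u] = 1.0
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i, j] = min(d[i, j], d[i, k] + d[k, j])
    return d

def sp_kernel(labels1, d1, labels2, d2):
    """Count cross-graph node pairs with matching labels and equal distance."""
    score = 0
    for i, j in itertools.combinations(range(len(labels1)), 2):
        for p, q in itertools.combinations(range(len(labels2)), 2):
            if {labels1[i], labels1[j]} == {labels2[p], labels2[q]} and d1[i, j] == d2[p, q]:
                score += 1
    return score

# Self-similarity of the labeled path A-B-C: its three node pairs
# ({A,B} at distance 1, {B,C} at 1, {A,C} at 2) each match only themselves.
dA = shortest_paths(3, [(0, 1), (1, 2)])
k_self = sp_kernel(["A", "B", "C"], dA, ["A", "B", "C"], dA)
```

In spgk proper, the compared graphs are the GO subgraphs induced by the terms annotating each gene product; the counting scheme above is the kernel's essential mechanism.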
Hasseldine, Benjamin P J; Gao, Chao; Collins, Joseph M; Jung, Hyun-Do; Jang, Tae-Sik; Song, Juha; Li, Yaning
2017-09-01
The common millet (Panicum miliaceum) seedcoat has a fascinating complex microstructure, with jigsaw puzzle-like epidermis cells articulated via wavy intercellular sutures to form a compact layer that protects the kernel inside. However, little research has been conducted on linking the microstructure details with the overall mechanical response of this interesting biological composite. To this end, an integrated experimental-numerical-analytical investigation was conducted to characterize the microstructure, ascertain the microscale mechanical properties, and test the overall response of kernels and full seeds under macroscale quasi-static compression. Scanning electron microscopy (SEM) was utilized to examine the microstructure of the outer seedcoat, and nanoindentation was performed to obtain the material properties of the seedcoat hard-phase material. A multiscale computational strategy was applied to link the microstructure to the macroscale response of the seed. First, the effective anisotropic mechanical properties of the seedcoat were obtained from finite element (FE) simulations of a microscale representative volume element (RVE) and further verified with analytical models. Then, macroscale FE models of the individual kernel and the full seed were developed. Good agreement between the compression experiments and FE simulations was obtained for both the kernel and the full seed. The results revealed the anisotropy and protective function of the seedcoat, and showed that the sutures of the seedcoat play an important role in transmitting and distributing loads in response to external compression.
NASA Astrophysics Data System (ADS)
Jourde, K.; Gibert, D.; Marteau, J.
2015-04-01
This paper examines how the resolution of small-scale geological density models is improved through the fusion of information provided by gravity measurements and muon density radiographies. Muon radiography aims at determining the density of geological bodies by measuring their screening effect on the natural flux of cosmic muons. It works essentially like a medical X-ray scan and integrates density information along elongated narrow conical volumes. Gravity measurements are linked to density by a 3-D integration encompassing the whole studied domain. We establish the mathematical expressions of these integration formulas, called acquisition kernels, and derive the resolving kernels, spatial filters relating the true unknown density structure to the density distribution actually recovered from the available data. The resolving-kernel approach quantitatively describes the improvement in the resolution of the density models achieved by merging gravity data and muon radiographies. The method developed in this paper may be used to optimally design the geometry of the field measurements to be performed in order to obtain a given spatial resolution pattern of the density model to be constructed. The resolving kernels derived in the joint muon/gravimetry case indicate that gravity data are almost useless for constraining the density structure in regions sampled by more than two muon tomography acquisitions. Interestingly, the resolution in deeper regions not sampled by muon tomography is significantly improved by joining the two techniques. The method is illustrated with examples for La Soufrière of Guadeloupe volcano.
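The resolving-kernel idea can be illustrated with a toy linear inverse problem (generic least squares, not the paper's muon and gravity acquisition kernels; the matrices below are random stand-ins): for data d = G m, the least-squares estimate is m_hat = G⁺ d = (G⁺ G) m, so R = G⁺ G is the resolving kernel relating the true model to the recovered one, and stacking two acquisitions sharpens R toward the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
m_true = np.zeros(8)
m_true[3] = 1.0                                   # a point density anomaly

G_muon = rng.normal(size=(3, 8))                  # a few ray-integral rows (stand-ins)
G_grav = rng.normal(size=(5, 8))                  # volume-integral rows (stand-ins)

def resolving_kernel(G):
    """R = G^+ G of the least-squares (pseudoinverse) solution."""
    return np.linalg.pinv(G) @ G

# The recovered model is the resolving kernel applied to the truth: m_hat = R @ m_true
err_muon = np.linalg.norm(resolving_kernel(G_muon) @ m_true - m_true)
err_joint = np.linalg.norm(
    resolving_kernel(np.vstack([G_muon, G_grav])) @ m_true - m_true)
```

With the joint system full rank, R becomes the identity and the anomaly is perfectly resolved; the muon-only system leaves a large null-space error.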
Spectral methods in machine learning and new strategies for very large datasets
Belabbas, Mohamed-Ali; Wolfe, Patrick J.
2009-01-01
Spectral methods are of fundamental importance in statistics and machine learning, because they underlie algorithms from classical principal components analysis to more recent approaches that exploit manifold structure. In most cases, the core technical problem can be reduced to computing a low-rank approximation to a positive-definite kernel. For the growing number of applications dealing with very large or high-dimensional datasets, however, the optimal approximation afforded by an exact spectral decomposition is too costly, because its complexity scales as the cube of either the number of training examples or their dimensionality. Motivated by such applications, we present here two new algorithms for the approximation of positive-semidefinite kernels, together with error bounds that improve on results in the literature. We approach this problem by seeking to determine, in an efficient manner, the most informative subset of our data relative to the kernel approximation task at hand. This leads to two new strategies based on the Nyström method that are directly applicable to massive datasets. The first of these—based on sampling—leads to a randomized algorithm whereupon the kernel induces a probability distribution on its set of partitions, whereas the second—based on sorting—provides for the selection of a partition in a deterministic way. We detail their numerical implementation and provide simulation results for a variety of representative problems in statistical data analysis, each of which demonstrates the improved performance of our approach relative to existing methods. PMID:19129490
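A minimal sketch of the underlying Nyström approximation (with a naive uniform landmark choice; the paper's sampling- and sorting-based selection strategies are not reproduced here, and the RBF kernel and data are invented for the example):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom(X, idx, gamma=0.1):
    """Nystrom low-rank approximation K ~ C W^+ C^T from landmark rows idx."""
    C = rbf_kernel(X, X[idx], gamma)              # n x m cross-kernel block
    W = rbf_kernel(X[idx], X[idx], gamma)         # m x m landmark block
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
K = rbf_kernel(X, X)                              # exact kernel, O(n^2) storage
K_hat = nystrom(X, np.arange(10))                 # 10 landmarks instead of all 40
rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

Only the n x m and m x m blocks are ever formed, which is what makes the method applicable to massive datasets; using all rows as landmarks recovers K exactly.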
Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)
NASA Astrophysics Data System (ADS)
Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.
2016-08-01
Anthraquinones (AQS) are a group of secondary metabolic products that occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by the enzymatic browning reaction in Chinese chestnut kernels. To find out whether non-enzymatic browning in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and their contents were determined. High-performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained a high amount of AQS. Thus, we confirmed that AQS can be produced during both the enzymatic and the non-enzymatic browning process. Rhein and emodin were the main components of AQS in the browned kernels.
Locally Dependent Latent Trait Model and the Dutch Identity Revisited.
ERIC Educational Resources Information Center
Ip, Edward H.
2002-01-01
Proposes a class of locally dependent latent trait models for responses to psychological and educational tests. Focuses on models based on a family of conditional distributions, or kernel, that describes joint multiple item responses as a function of student latent trait, not assuming conditional independence. Also proposes an EM algorithm for…
Left ventricle segmentation via graph cut distribution matching.
Ben Ayed, Ismail; Punithakumar, Kumaradevan; Li, Shuo; Islam, Ali; Chong, Jaron
2009-01-01
We present a discrete kernel density matching energy for segmenting the left ventricle cavity in cardiac magnetic resonance sequences. The energy and its graph cut optimization based on an original first-order approximation of the Bhattacharyya measure have not been proposed previously, and yield competitive results in nearly real-time. The algorithm seeks a region within each frame by optimization of two priors, one geometric (distance-based) and the other photometric, each measuring a distribution similarity between the region and a model learned from the first frame. Based on global rather than pixelwise information, the proposed algorithm does not require complex training and optimization with respect to geometric transformations. Unlike related active contour methods, it does not compute iterative updates of computationally expensive kernel densities. Furthermore, the proposed first-order analysis can be used for other intractable energies and, therefore, can lead to segmentation algorithms which share the flexibility of active contours and computational advantages of graph cuts. Quantitative evaluations over 2280 images acquired from 20 subjects demonstrated that the results correlate well with independent manual segmentations by an expert.
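The distribution-matching term can be illustrated with a plain Bhattacharyya coefficient between intensity histograms (a bare histogram version for intuition; the paper's graph-cut optimization and first-order approximation of the measure are not shown, and the bin settings are invented):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two discrete distributions (1 = identical)."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    return float(np.sqrt(p * q).sum())

def region_similarity(region, model, bins=16, span=(0.0, 1.0)):
    """Compare a candidate region's intensity histogram to a learned model's."""
    hp, _ = np.histogram(region, bins=bins, range=span)
    hq, _ = np.histogram(model, bins=bins, range=span)
    return bhattacharyya(hp + 1e-9, hq + 1e-9)    # small floor avoids empty bins
```

A segmentation energy of this kind rewards candidate regions whose photometric (and, analogously, distance) histograms score near 1 against the model learned from the first frame.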
Harnessing AIA Diffraction Patterns to Determine Flare Footpoint Temperatures
NASA Astrophysics Data System (ADS)
Bain, H. M.; Schwartz, R. A.; Torre, G.; Krucker, S.; Raftery, C. L.
2014-12-01
In the "Standard Flare Model" energy from accelerated electrons is deposited at the footpoints of newly reconnected flare loops, heating the surrounding plasma. Understanding the relation between the multi-thermal nature of the footpoints and the energy flux from accelerated electrons is therefore fundamental to flare physics. Extreme ultraviolet (EUV) images of bright flare kernels, obtained from the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory, are often saturated despite the implementation of automatic exposure control. These kernels produce diffraction patterns often seen in AIA images during the most energetic flares. We implement an automated image reconstruction procedure, which utilizes diffraction pattern artifacts, to de-saturate AIA images and reconstruct the flare brightness in saturated pixels. Applying this technique to recover the footpoint brightness in each of the AIA EUV passbands, we investigate the footpoint temperature distribution. Using observations from the Ramaty High Energy Solar Spectroscopic Imager (RHESSI), we will characterize the footpoint accelerated electron distribution of the flare. By combining these techniques, we investigate the relation between the nonthermal electron energy flux and the temperature response of the flare footpoints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krisman, Alex; Hawkes, Evatt R.; Talei, Mohsen
In diesel engines, combustion is initiated by a two-staged autoignition that includes both low- and high-temperature chemistry. The location and timing of both stages of autoignition are important parameters that influence the development and stabilisation of the flame. In this study, a two-dimensional direct numerical simulation (DNS) is conducted to provide a fully resolved description of ignition at diesel engine-relevant conditions. The DNS is performed at a pressure of 40 atmospheres and at an ambient temperature of 900 K using dimethyl ether (DME) as the fuel, with a 30-species reduced chemical mechanism. At these conditions, similar to diesel fuel, DME exhibits two-stage ignition. The focus of this study is on the behaviour of the low-temperature chemistry (LTC) and the way in which it influences the high-temperature ignition. The results show that the LTC develops as a "spotty" first-stage autoignition in lean regions which transitions to a diffusively supported cool-flame and then propagates up the local mixture fraction gradient towards richer regions. The cool-flame speed is much faster than can be attributed to spatial gradients in first-stage ignition delay time in homogeneous reactors. The cool-flame causes a shortening of the second-stage ignition delay times compared to a homogeneous reactor and the shortening becomes more pronounced at richer mixtures. Multiple high-temperature ignition kernels are observed over a range of rich mixtures that are much richer than the homogeneous most reactive mixture and most kernels form much earlier than suggested by the homogeneous ignition delay time of the corresponding local mixture. Altogether, the results suggest that LTC can strongly influence both the timing and location in composition space of the high-temperature ignition.
Bayesian parameter estimation for the Wnt pathway: an infinite mixture models approach.
Koutroumpas, Konstantinos; Ballarini, Paolo; Votsi, Irene; Cournède, Paul-Henry
2016-09-01
Likelihood-free methods, like Approximate Bayesian Computation (ABC), have been extensively used in model-based statistical inference with intractable likelihood functions. When combined with Sequential Monte Carlo (SMC) algorithms they constitute a powerful approach for parameter estimation and model selection of mathematical models of complex biological systems. A crucial step in the ABC-SMC algorithms, significantly affecting their performance, is the propagation of a set of parameter vectors through a sequence of intermediate distributions using Markov kernels. In this article, we employ Dirichlet process mixtures (DPMs) to design optimal transition kernels and we present an ABC-SMC algorithm with DPM kernels. We illustrate the use of the proposed methodology using real data for the canonical Wnt signaling pathway. A multi-compartment model of the pathway is developed and compared to an existing model. The results indicate that DPMs are more efficient in the exploration of the parameter space and can significantly improve ABC-SMC performance. In comparison to commonly used alternative sampling schemes, the proposed approach can bring potential benefits in the estimation of complex multimodal distributions. The method is used to estimate the parameters and the initial state of two models of the Wnt pathway, and it is shown that the multi-compartment model fits the experimental data better. Python scripts for the Dirichlet Process Gaussian Mixture model and the Gibbs sampler are available at https://sites.google.com/site/kkoutroumpas/software konstantinos.koutroumpas@ecp.fr. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
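The tolerance-shrinking structure of ABC-SMC can be sketched on a toy simulator (a simple Gaussian perturbation stands in for the proposed DPM transition kernels; the model, prior, summary statistic, and schedule below are all invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, n=50):
    """Toy simulator standing in for the Wnt model: data with mean theta."""
    return rng.normal(theta, 1.0, size=n)

observed = rng.normal(1.5, 1.0, size=50)          # "experimental" data, true mean 1.5
obs_stat = observed.mean()                        # summary statistic

particles = rng.uniform(-5.0, 5.0, size=2000)     # initial draws from a flat prior
for eps in (1.0, 0.5, 0.2):                       # shrinking tolerance schedule
    stats = np.array([simulate(t).mean() for t in particles])
    accepted = particles[np.abs(stats - obs_stat) < eps]
    # Resample survivors and perturb them with a transition kernel
    # (ABC-SMC with DPM kernels would adapt this step to the particle cloud).
    particles = rng.choice(accepted, size=2000) + rng.normal(0.0, 0.1, size=2000)

posterior_mean = particles.mean()
```

The design choice the paper addresses is exactly the perturbation line: a fixed-width Gaussian kernel wastes proposals when the intermediate distribution is multimodal, which is where an adaptive DPM kernel helps.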
Palmprint and Face Multi-Modal Biometric Recognition Based on SDA-GSVD and Its Kernelization
Jing, Xiao-Yuan; Li, Sheng; Li, Wen-Qian; Yao, Yong-Fang; Lan, Chao; Lu, Jia-Sen; Yang, Jing-Yu
2012-01-01
When extracting discriminative features from multimodal data, current methods rarely concern themselves with the data distribution. In this paper, we present an assumption that is consistent with the viewpoint of discrimination, that is, a person's overall biometric data should be regarded as one class in the input space, and his different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multimodal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person's different bio-data are treated as different subclasses of one class, and a transformed space is calculated where the difference among subclasses belonging to different persons is maximized and the difference within each subclass is minimized. Then, the obtained multimodal features are used for classification. Two solutions are presented to overcome the singularity problem encountered in the calculation, namely using PCA preprocessing and employing the generalized singular value decomposition (GSVD) technique, respectively. Further, we provide nonlinear extensions of SDA-based multimodal feature extraction, that is, feature fusion based on KPCA-SDA and KSDA-GSVD. In KPCA-SDA, we first apply kernel PCA to each single modality before performing SDA. In KSDA-GSVD, we directly perform kernel SDA to fuse multimodal data, applying GSVD to avoid the singularity problem. For simplicity, two typical types of biometric data are considered in this paper, i.e., palmprint data and face data. Compared with several representative multimodal biometrics recognition methods, experimental results show that our approaches outperform related multimodal recognition methods and that KSDA-GSVD achieves the best recognition performance. PMID:22778600
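The subclass scatter computation at the core of SDA can be sketched as follows (a bare linear version without the PCA/GSVD singularity handling or the kernelized variants; the 2-D feature layout, subclass centers, and noise level are invented for the sketch):

```python
import numpy as np

def sda_direction(X, person, modality):
    """Leading direction of Sw^+ Sb, where each (person, modality) pair is one
    subclass: between-subclass scatter up, within-subclass scatter down."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for p in np.unique(person):
        for m in np.unique(modality):
            sub = X[(person == p) & (modality == m)]
            dev = (sub.mean(axis=0) - mu)[:, None]
            Sb += len(sub) * (dev @ dev.T)
            Sw += (sub - sub.mean(axis=0)).T @ (sub - sub.mean(axis=0))
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    return np.real(vecs[:, np.argmax(np.real(vals))])

# Hypothetical 2-D features: persons separated along axis 0,
# modalities (e.g. palmprint vs face) offset slightly along axis 1.
rng = np.random.default_rng(3)
centers = {(0, 0): (-3, -0.3), (0, 1): (-3, 0.3),
           (1, 0): (3, -0.3), (1, 1): (3, 0.3)}
X = np.vstack([c + 0.1 * rng.normal(size=(20, 2))
               for c in map(np.array, centers.values())])
person = np.repeat([0, 0, 1, 1], 20)
modality = np.repeat([0, 1, 0, 1], 20)
w = sda_direction(X, person, modality)
```

The recovered direction should align with the person-separating axis rather than the modality offset, which is the discriminative behavior the paper builds on.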
NASA Astrophysics Data System (ADS)
Wen, Gezheng; Markey, Mia K.
2015-03-01
It is resource-intensive to conduct human studies for task-based assessment of medical image quality and system optimization. Thus, numerical model observers have been developed as a surrogate for human observers. The Hotelling observer (HO) is the optimal linear observer for signal-detection tasks, but the high dimensionality of imaging data results in a heavy computational burden. Channelization is often used to approximate the HO through a dimensionality reduction step, but how to produce channelized images without losing significant image information remains a key challenge. Kernel local Fisher discriminant analysis (KLFDA) uses kernel techniques to perform supervised dimensionality reduction, which finds an embedding transformation that maximizes between-class separability and preserves within-class local structure in the low-dimensional manifold. It is powerful for classification tasks, especially when the distribution of a class is multimodal. Such multimodality could be observed in many practical clinical tasks. For example, primary and metastatic lesions may both appear in medical imaging studies, but the distributions of their typical characteristics (e.g., size) may be very different. In this study, we propose to use KLFDA as a novel channelization method. The dimension of the embedded manifold (i.e., the result of KLFDA) is a counterpart to the number of channels in the state-of-the-art linear channelization. We present a simulation study to demonstrate the potential usefulness of KLFDA for building the channelized HOs (CHOs) and generating reliable decision statistics for clinical tasks. We show that the performance of the CHO with KLFDA channels is comparable to that of the benchmark CHOs.
Broken rice kernels and the kinetics of rice hydration and texture during cooking.
Saleh, Mohammed; Meullenet, Jean-Francois
2013-05-01
During rice milling and processing, broken kernels are inevitably present, although it has been unclear how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work studied the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels to form treatments of 0, 40, 150, 350 or 1000 g kg(-1) broken kernel ratios. Rice samples were then cooked, and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, the rice sample texture became increasingly softer (P < 0.05) but the unbroken kernels became significantly harder. Moisture content and moisture uptake rate were positively correlated, and cooked rice hardness was negatively correlated, with the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.
NASA Astrophysics Data System (ADS)
Lan, Bo; Lowe, Michael J. S.; Dunne, Fionn P. E.
2015-10-01
A new spherical convolution approach is presented which couples the HCP single-crystal wave speed (the kernel function) with the polycrystal c-axis pole distribution function to give the resultant polycrystal wave speed response. The three functions are expressed as spherical harmonic expansions, enabling application of the de-convolution technique so that any one of the three can be determined from knowledge of the other two. Hence, the forward problem of determining the polycrystal wave speed from knowledge of the single-crystal wave speed response and the polycrystal pole distribution has been solved for a broad range of experimentally representative HCP polycrystal textures. The technique provides a near-perfect representation of the sensitivity of wave speed to polycrystal texture as well as quantitative prediction of polycrystal wave speed. More importantly, a solution to the inverse problem is presented in which texture, as a c-axis distribution function, is determined from knowledge of the kernel function and the polycrystal wave speed response. It is also explained why it has been widely reported in the literature that only texture coefficients up to 4th degree may be obtained from ultrasonic measurements. Finally, the de-convolution approach presented provides the potential for the measurement of polycrystal texture from ultrasonic wave speed measurements.
Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin
2016-01-01
In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios, as measured by the log-likelihood-ratio cost (Cllr), in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
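The two-level likelihood ratio can be sketched by Monte Carlo integration over a between-source model (a 1-D KDF version for contrast; the paper's contribution, replacing the KDF with a GMM, is not reproduced here, and the background data and spreads are invented):

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(4)

# Background database of source means (between-source variation) and an
# assumed constant within-source spread, as in the multivariate kernel LR.
source_means = rng.normal(0.0, 2.0, size=200)
within_sd = 0.3

between = gaussian_kde(source_means)              # KDF model of between-source variation
theta = between.resample(5000)[0]                 # Monte Carlo draws of source means

def likelihood_ratio(y_crime, y_suspect):
    """LR = P(both measurements | same source) / P(each | independent sources)."""
    f_c = norm.pdf(y_crime, loc=theta, scale=within_sd)
    f_s = norm.pdf(y_suspect, loc=theta, scale=within_sd)
    return float(np.mean(f_c * f_s) / (np.mean(f_c) * np.mean(f_s)))
```

A same-source pair such as (1.0, 1.1) yields LR > 1, while a distant pair such as (2.0, -2.0) yields LR < 1; swapping `between` for a fitted mixture model changes only how `theta` is drawn.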
75 FR 51032 - National Fuel Gas Distribution Corporation; Notice of Baseline Filing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-18
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. PR10-79-000] National Fuel Gas Distribution Corporation; Notice of Baseline Filing August 12, 2010. Take notice that on August 10, 2010, National Fuel Gas Distribution Corporation submitted a baseline filing of its Statement of...
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
NASA Astrophysics Data System (ADS)
Troudi, Molka; Alimi, Adel M.; Saoudi, Samir
2008-12-01
The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly on a functional of the second-order derivative of the pdf. As we introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
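For orientation, the simplest closed-form member of this bandwidth family is the Gaussian-reference (Silverman) rule, which evaluates the second-derivative functional under a normal reference; this is not the authors' faster plug-in algorithm, just the baseline such methods improve on:

```python
import numpy as np

def gaussian_reference_bandwidth(x):
    """AMISE-optimal h when the R(f'') functional is taken under a Gaussian
    reference: h = (4 / (3 n)) ** (1/5) * sigma  (Silverman's rule of thumb)."""
    return (4.0 / (3.0 * len(x))) ** 0.2 * np.std(x, ddof=1)

def kde(grid, data, h):
    """Gaussian-kernel density estimate evaluated on grid."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(6)
data = rng.normal(0.0, 1.0, size=500)
h = gaussian_reference_bandwidth(data)
grid = np.linspace(-4.0, 4.0, 201)
f_hat = kde(grid, data, h)
```

The plug-in method replaces the Gaussian-reference shortcut with an iterative estimate of the derivative functional, which is exactly the step the paper's analytical approximation accelerates.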
Using Mosix for Wide-Area Computational Resources
Maddox, Brian G.
2004-01-01
One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.
A message passing kernel for the hypercluster parallel processing test bed
NASA Technical Reports Server (NTRS)
Blech, Richard A.; Quealy, Angela; Cole, Gary L.
1989-01-01
A Message-Passing Kernel (MPK) for the Hypercluster parallel-processing test bed is described. The Hypercluster is being developed at the NASA Lewis Research Center to support investigations of parallel algorithms and architectures for computational fluid and structural mechanics applications. The Hypercluster resembles the hypercube architecture except that each node consists of multiple processors communicating through shared memory. The MPK efficiently routes information through the Hypercluster, using a message-passing protocol when necessary and faster shared-memory communication whenever possible. The MPK also interfaces all of the processors with the Hypercluster operating system (HYCLOPS), which runs on a Front-End Processor (FEP). This approach distributes many of the I/O tasks to the Hypercluster processors and eliminates the need for a separate I/O support program on the FEP.
Modular fuel-cell stack assembly
Patel, Pinakin [Danbury, CT]; Urko, Willam [West Granby, CT]
2008-01-29
A modular multi-stack fuel-cell assembly in which the fuel-cell stacks are situated within a containment structure and in which a gas distributor is provided in the structure and distributes received fuel and oxidant gases to the stacks and receives exhausted fuel and oxidant gas from the stacks so as to realize a desired gas flow distribution and gas pressure differential through the stacks. The gas distributor is centrally and symmetrically arranged relative to the stacks so that it itself promotes realization of the desired gas flow distribution and pressure differential.
Catalytic Microtube Rocket Igniter
NASA Technical Reports Server (NTRS)
Schneider, Steven J.; Deans, Matthew C.
2011-01-01
Devices that generate both high energy and high temperature are required to reliably ignite the propellant mixtures in combustion chambers like those present in rockets and other combustion systems. This catalytic microtube rocket igniter generates these conditions with a small, catalysis-based torch. While traditional spark plug systems can require anywhere from 50 W to multiple kW of power in different applications, this system has demonstrated ignition at less than 25 W. Reactants are fed to the igniter from the same tanks that feed the reactants to the rest of the rocket or combustion system. While this specific igniter was originally designed for liquid methane and liquid oxygen rockets, it can be easily operated with gaseous propellants or modified for hydrogen use in commercial combustion devices. For the present cryogenic propellant rocket case, the main propellant tanks (liquid oxygen and liquid methane, respectively) are regulated and split into different systems for the individual stages of the rocket and the igniter. As the catalyst requires a gas phase for reaction, either the stored boil-off of the tanks can be used directly or one stream each of fuel and oxidizer can go through a heat exchanger/vaporizer that turns the liquid propellants into gaseous form. For commercial applications, where the reactants are stored as gases, the system is simplified. The resulting gas-phase streams of fuel and oxidizer are then further divided for the individual components of the igniter. One stream each of the fuel and oxidizer is introduced to a mixing bottle/apparatus where they are mixed to a fuel-rich composition with an O/F mass-based mixture ratio of under 1.0. This premixed flow then feeds into the catalytic microtube device. The total flow is on the order of 0.01 g/s. The microtube device is composed of a pair of sub-millimeter diameter platinum tubes connected only at the outlet so that the two outlet flows are parallel to each other.
The tubes are each approximately 10 cm long and are heated via direct electric resistive heating. This heating brings the gases to their minimum required ignition temperature, which is lower than the auto-thermal ignition temperature, and causes the onset of both surface and gas-phase ignition, producing high temperatures and a highly reacting flame. The combustion products from the catalytic tubes, which are below the melting point of platinum, are injected into the center of another combustion stage, called the primary augmenter. The reactants for this combustion stage come from the same source, but the flows of non-premixed methane and oxygen gas are split off to a secondary mixing apparatus and can be mixed in a near-stoichiometric to highly lean mixture ratio. The primary augmenter is a component that has channels venting this mixed gas to impinge on each other in the center of the augmenter, perpendicular to the flow from the catalyst. The total cross-sectional area of these channels is of a similar order as that of the catalyst. The augmenter has internal channels that act as a manifold to distribute the gas equally to the inward-venting channels. This stage creates a stable flame kernel as its flows, which are on the order of 0.01 g/s, are ignited by the combustion products of the catalyst. This stage is designed to produce combustion products in the flame kernel that exceed the autothermal ignition temperature of oxygen and methane.
NASA Technical Reports Server (NTRS)
Medan, R. T.; Ray, K. S.
1974-01-01
A description and users manual are presented for a USA FORTRAN IV computer program which evaluates spanwise and chordwise loading distributions, lift coefficient, pitching moment coefficient, and other stability derivatives for thin wings in linearized, steady, subsonic flow. The program is based on a kernel function method lifting surface theory and is applicable to a large class of planforms, including asymmetrical ones and ones with mixed straight and curved edges.
NASA Technical Reports Server (NTRS)
Watkins, Charles E.; Woolston, Donald S.; Cunningham, Herbert J.
1959-01-01
Details are given of a numerical solution of the integral equation which relates oscillatory or steady lift and downwash distributions in subsonic flow. The procedure has been programmed for the IBM 704 electronic data processing machine and yields the pressure distribution and some of its integrated properties for a given Mach number and frequency and for several modes of oscillation in from 3 to 4 minutes. Results of several applications are presented.
Imaging Through Random Discrete-Scatterer Dispersive Media
2015-08-27
... to that of a conventional, continuous, linear-frequency-modulated chirped signal [3]. Chirped train signals are a particular realization of a class of ... continuous chirp signals, characterized by linear frequency modulation [3]; we assume the time instances t_n to be given by t_n = τ_g (1 − β_g n / (2 N_g)) ... sinc_N(z) is related to the Dirichlet kernel D_n(z) [9] by sinc_N(z) = (N + 1)^(−1) D_(N/2)(2π z / N).
The Design and Emulation of a System Kernel for X-Tree,
1979-03-30
Approved for public release; distribution unlimited. ... level of the tree. Many different schemes for these additional interconnections have been proposed; no final selection has yet been made. Pictured ... completion, (b) removes the process name from the hash table, (c) puts the PCB back on the FREEPCB queue for later reuse, (d) goes back to sleep.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kassa, Mateos; Hall, Carrie; Ickes, Andrew
Advanced internal combustion engines, although generally more efficient than conventional combustion engines, often encounter limitations in multi-cylinder applications due to variations in the combustion process encountered across cylinders and between cycles. This study leverages experimental data from an inline 6-cylinder heavy-duty dual fuel engine equipped with exhaust gas recirculation (EGR), a variable geometry turbocharger, and a fully-flexible variable intake valve actuation system to study cylinder-to-cylinder variations in power production and the underlying uneven fuel distribution that causes these variations. The engine is operated with late intake valve closure timings in a dual-fuel combustion mode in which a high reactivity fuel is directly injected into the cylinders and a low reactivity fuel is port injected into the cylinders. Both dual fuel implementation and late intake valve closing (IVC) timings have been shown to improve thermal efficiency. However, experimental data from this study reveal that when late IVC timings are used on a multi-cylinder dual fuel engine, a significant variation in IMEP across cylinders results, leading to efficiency losses. The difference in IMEP between the different cylinders ranges from 9% at an IVC of 570°ATDC to 38% at an IVC of 610°ATDC and indicates an increasingly uneven fuel distribution. These experimental observations, along with engine simulation models developed using GT-Power, have been used to better understand the distribution of the port injected fuel across cylinders under various operating conditions on such dual fuel engines. This study revealed that the fuel distribution across cylinders in this dual fuel application is significantly affected by changes in the effective compression ratio as determined by the intake valve close timing, as well as by the design of the intake system (specifically the length of the intake runners).
Late intake valve closures allow a portion of the trapped air and port injected fuel to flow back out of the cylinders into the intake manifold. The fuel that is pushed back into the intake manifold is then unevenly redistributed across the cylinders, largely due to the dominating direction of the flow in the intake manifold. The effects of IVC as well as the impact of intake runner length on fuel distribution were quantitatively analyzed, and a model was developed that can be used to accurately predict the distribution of the port injected fuel at different operating conditions with an average estimation error of 1.5% in cylinder-specific fuel flow.
Nonlinear Deep Kernel Learning for Image Annotation.
Jiu, Mingyuan; Sahbi, Hichem
2017-02-08
Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised, and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show a clear gain compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database, and the Banana dataset validate the effectiveness of the proposed method.
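The recursive construction described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the elementary kernels, the exponential activation, and all weight values below are assumptions (nonnegative weights and an exponential activation are one standard way to keep the combination positive semi-definite).

```python
import math

def elementary_kernels(x, z):
    # three standard elementary kernels: linear, RBF, and quadratic polynomial
    dot = sum(a * b for a, b in zip(x, z))
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return [dot, math.exp(-0.5 * sq_dist), (dot + 1.0) ** 2]

def deep_kernel(x, z, w_hidden, w_out):
    # layer 1: nonnegative combinations of elementary kernels passed through
    # an exponential activation (PSD-preserving for PSD inputs)
    base = elementary_kernels(x, z)
    hidden = [math.exp(sum(w * k for w, k in zip(row, base))) for row in w_hidden]
    # layer 2: nonnegative combination of the intermediate kernels
    return sum(w * k for w, k in zip(w_out, hidden))
```

Symmetry follows directly from the symmetry of the elementary kernels; in the paper the weights are learned (supervised, unsupervised, or semi-supervised) rather than fixed as here.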
Multineuron spike train analysis with R-convolution linear combination kernel.
Tezuka, Taro
2018-06-01
A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
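The linear-combination construction can be sketched as follows. The Gaussian-sum base kernel and the weight values are illustrative assumptions; the framework in the paper accepts any single-neuron spike train kernel in its place.

```python
import math

def base_spike_kernel(s, t, tau=1.0):
    # an illustrative single-neuron spike train kernel:
    # sum of Gaussian similarities over all pairs of spike times
    return sum(math.exp(-((a - b) ** 2) / (2 * tau ** 2)) for a in s for b in t)

def linear_combination_kernel(S, T, weights, tau=1.0):
    # R-convolution linear combination: a weighted sum of the base kernel
    # applied neuron-by-neuron to the multineuron spike trains S and T
    return sum(w * base_spike_kernel(s, t, tau) for w, s, t in zip(weights, S, T))
```

With all weights equal this reduces to the sum kernel mentioned in the abstract; the smaller parameter count of such subclasses is what keeps optimization tractable on limited data.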
NASA Astrophysics Data System (ADS)
Haryanto, B.; Bukit, R. Br; Situmeang, E. M.; Christina, E. P.; Pandiangan, F.
2018-02-01
The purpose of this study was to determine the performance, productivity, and feasibility of operating a palm kernel processing plant based on the Energy Productivity Ratio (EPR). EPR is expressed as the ratio of output energy plus by-product value to input energy. A palm kernel plant processes palm kernels into palm kernel oil. The procedure started from collecting the data needed as energy input, such as palm kernel prices, energy demand, and depreciation of the factory. The energy output and its by-products comprise the whole production value, such as the palm kernel oil price and the prices of the remaining products such as shells and pulp. The energy equivalent of palm kernel oil was calculated to analyze the value of the EPR based on processing capacity per year. The investigation was carried out at Kernel Oil Processing Plant PT-X at a Sumatera Utara plantation. The value of EPR was 1.54 (EPR > 1), which indicated that processing palm kernels into palm kernel oil is feasible to operate based on energy productivity.
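The EPR computation itself reduces to a ratio of monetized energy flows. A minimal sketch, with placeholder figures chosen only to reproduce the reported ratio (they are not the plant data from the study):

```python
def energy_productivity_ratio(output_value, byproduct_value, input_value):
    # EPR = (value of main output energy + value of by-products) / value of energy input
    return (output_value + byproduct_value) / input_value

# illustrative figures only; an EPR above 1 indicates a feasible operation
epr = energy_productivity_ratio(output_value=120.0, byproduct_value=34.0, input_value=100.0)
print(round(epr, 2))  # → 1.54
```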
Karthick, P A; Ghosh, Diptasree Maitra; Ramakrishnan, S
2018-02-01
Surface electromyography (sEMG) based muscle fatigue research is widely preferred in sports science and occupational/rehabilitation studies due to its noninvasiveness. However, these signals are complex, multicomponent, and highly nonstationary with large inter-subject variations, particularly during dynamic contractions. Hence, time-frequency based machine learning methodologies can improve the design of automated systems for these signals. In this work, analyses based on high-resolution time-frequency methods, namely, the Stockwell transform (S-transform), B-distribution (BD), and extended modified B-distribution (EMBD), are proposed to differentiate dynamic muscle nonfatigue and fatigue conditions. The nonfatigue and fatigue segments of sEMG signals recorded from the biceps brachii of 52 healthy volunteers are preprocessed and subjected to the S-transform, BD, and EMBD. Twelve features are extracted from each method and prominent features are selected using a genetic algorithm (GA) and binary particle swarm optimization (BPSO). Five machine learning algorithms, namely, naïve Bayes, support vector machine (SVM) with polynomial and radial basis kernels, random forest, and rotation forest, are used for the classification. The results show that all the proposed time-frequency distributions (TFDs) are able to show the nonstationary variations of sEMG signals. Most of the features exhibit statistically significant differences between the muscle fatigue and nonfatigue conditions. The largest feature reduction (66%) is achieved by GA and BPSO for the EMBD and BD TFDs, respectively. The combination of the EMBD and a polynomial kernel based SVM is found to be most accurate (91% accuracy) in classifying the conditions with the features selected using GA. The proposed methods are found to be capable of handling the nonstationary and multicomponent variations of sEMG signals recorded in dynamic fatiguing contractions.
In particular, the combination of the EMBD and a polynomial kernel based SVM could be used to detect dynamic muscle fatigue conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
Higher-order phase transitions on financial markets
NASA Astrophysics Data System (ADS)
Kasprzak, A.; Kutner, R.; Perelló, J.; Masoliver, J.
2010-08-01
Statistical and thermodynamic properties of the anomalous multifractal structure of random interevent (or intertransaction) times were thoroughly studied by using the extended continuous-time random walk (CTRW) formalism of Montroll, Weiss, Scher, and Lax. Although this formalism is quite general (and can be applied to any interhuman communication with nontrivial priority), we consider it in the context of a financial market where heterogeneous agent activities can occur within a wide spectrum of time scales. As the main general consequence, we found (by additionally using the saddle-point approximation) the scaling or power-dependent form of the partition function, Z(q'). It diverges for any negative scaling power q' (which justifies the name anomalous), while for positive ones it shows scaling with the general exponent τ(q'). This exponent is a nonanalytic (singular) or noninteger power of q', which is one of the pillars of higher-order phase transitions. In the definition of the partition function we used the pausing-time distribution (PTD) as the central one, which takes the form of a convolution (or superstatistics used, e.g., for describing turbulence as well as the financial market). Its integral kernel is given by the stretched exponential distribution (often used in disordered systems). This kernel extends both the exponential distribution assumed in the original version of the CTRW formalism (for description of the transient photocurrent measured in amorphous glassy material) as well as the Gaussian one sometimes used in this context (e.g., for diffusion of hydrogen in amorphous metals or for aging effects in glasses). Our most important finding is the third- and higher-order phase transitions, which can be roughly interpreted as transitions between the phase where high frequency trading is most visible and the phase defined by low frequency trading.
The specific order of the phase transition directly depends upon the shape exponent α defining the stretched exponential integral kernel. On this basis a simple practical hint for investors was formulated.
1984-02-01
Table of contents excerpts: Rarefaction Wave Eliminator Considerations; FLIP Calculations; A Passive/Active RWE; Distributed Fuel Air Explosives; References. ... conventional and distributed-charge fuel-air explosive charges used in a study of the utility of distributed-charge FAE systems for blast simulation. ... a limited investigation of distributed-charge fuel-air explosive configurations for blast simulator applications was conducted during the course of this study.
2013-01-01
Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
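For binary-coded markers, the two kernels compared above can be written down directly. A sketch assuming the Kondor-Lafferty diffusion kernel on a product of two-state graphs (the genotype coding and the bandwidth values are illustrative assumptions):

```python
import math

def gaussian_kernel(x, z, theta=0.5):
    # standard Gaussian (RBF) kernel on marker vectors
    return math.exp(-theta * sum((a - b) ** 2 for a, b in zip(x, z)))

def diffusion_kernel(x, z, beta=0.5):
    # diffusion kernel on the hypercube: the per-locus factor depends only on
    # whether the two genotypes agree, which discretizes the Gaussian kernel
    same = (1.0 + math.exp(-2.0 * beta)) / 2.0
    diff = (1.0 - math.exp(-2.0 * beta)) / 2.0
    k = 1.0
    for a, b in zip(x, z):
        k *= same if a == b else diff
    return k
```

Either Gram matrix can then be plugged into kernel ridge regression unchanged, which is why the two choices are directly comparable in prediction experiments.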
Alumina Concentration Detection Based on the Kernel Extreme Learning Machine.
Zhang, Sen; Zhang, Tao; Yin, Yixin; Xiao, Wendong
2017-09-01
The concentration of alumina in the electrolyte is of great significance during the production of aluminum. An improper alumina concentration may lead to unbalanced material distribution and low production efficiency, and may affect the stability of the aluminum reduction cell and the current efficiency. The existing methods cannot meet the needs for online measurement because industrial aluminum electrolysis has the characteristics of high temperature, strong magnetic field, coupled parameters, and high nonlinearity. Currently, there are no sensors or equipment that can detect the alumina concentration online. Most companies acquire the alumina concentration from electrolyte samples that are analyzed with an X-ray fluorescence spectrometer. To solve this problem, the paper proposes a soft sensing model based on a kernel extreme learning machine algorithm, which incorporates a kernel function into the extreme learning machine. K-fold cross validation is used to estimate the generalization error. The proposed soft sensing algorithm can detect the alumina concentration from electrical signals such as the voltages and currents of the anode rods. The predicted results show that the proposed approach gives more accurate estimations of alumina concentration with faster learning speed compared with other methods such as the basic ELM, BP, and SVM.
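The kernel extreme learning machine regression step reduces to solving (K + I/C)β = y for a kernel Gram matrix K. A minimal sketch, assuming an RBF kernel and toy data; the actual inputs in the paper are anode-rod voltages and currents, and the hyperparameter values here are placeholders:

```python
import math

def rbf(x, z, gamma=1.0):
    # Gaussian (RBF) kernel between two feature vectors
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def kelm_fit(X, y, C=100.0, gamma=1.0):
    # KELM training: beta = (K + I/C)^{-1} y, with ridge term I/C
    n = len(X)
    K = [[rbf(X[i], X[j], gamma) + (1.0 / C if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, y)

def kelm_predict(X, beta, x, gamma=1.0):
    # prediction: f(x) = sum_i beta_i * k(x_i, x)
    return sum(b * rbf(xi, x, gamma) for b, xi in zip(beta, X))
```

The closed-form solve is what gives the method its fast training relative to iteratively trained networks such as BP.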
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krueger, Jens; Micikevicius, Paulius; Williams, Samuel
Reverse Time Migration (RTM) is one of the main approaches in the seismic processing industry for imaging the subsurface structure of the Earth. While RTM provides qualitative advantages over its predecessors, it has a high computational cost warranting implementation on HPC architectures. We focus on three progressively more complex kernels extracted from RTM: for isotropic (ISO), vertical transverse isotropic (VTI) and tilted transverse isotropic (TTI) media. In this work, we examine performance optimization of forward wave modeling, which describes the computational kernels used in RTM, on emerging multi- and manycore processors and introduce a novel common subexpression elimination optimization for TTI kernels. We compare attained performance and energy efficiency in both the single-node and distributed memory environments in order to satisfy industry's demands for fidelity, performance, and energy efficiency. Moreover, we discuss the interplay between architecture (chip and system) and optimizations (both on-node computation) highlighting the importance of NUMA-aware approaches to MPI communication. Ultimately, our results show we can improve CPU energy efficiency by more than 10× on Magny Cours nodes while acceleration via multiple GPUs can surpass the energy-efficient Intel Sandy Bridge by as much as 3.6×.
Small-scale modification to the lensing kernel
NASA Astrophysics Data System (ADS)
Hadzhiyska, Boryana; Spergel, David; Dunkley, Joanna
2018-02-01
Calculations of the cosmic microwave background (CMB) lensing power implemented into the standard cosmological codes such as camb and class usually treat the surface of last scatter as an infinitely thin screen. However, since the CMB anisotropies are smoothed out on scales smaller than the diffusion length due to the effect of Silk damping, the photons which carry information about the small-scale density distribution come from slightly earlier times than the standard recombination time. The dominant effect is the scale dependence of the mean redshift associated with the fluctuations during recombination. We find that fluctuations at k =0.01 Mpc-1 come from a characteristic redshift of z ≈1090 , while fluctuations at k =0.3 Mpc-1 come from a characteristic redshift of z ≈1130 . We then estimate the corrections to the lensing kernel and the related power spectra due to this effect. We conclude that neglecting it would result in a deviation from the true value of the lensing kernel at the half percent level at small CMB scales. For an all-sky, noise-free experiment, this corresponds to a ˜0.1 σ shift in the observed temperature power spectrum on small scales (2500 ≲l ≲4000 ).
Guo, Pingping; Wang, Junsong; Dong, Ge; Wei, Dandan; Li, Minghui; Yang, Minghua; Kong, Lingyi
2014-07-29
Ricin, a large, water soluble toxic glycoprotein, is distributed mainly in the kernels of castor beans (the seeds of Ricinus communis L.) and has been used in traditional Chinese medicine (TCM) and other folk remedies throughout the world. The toxicity of crude ricin (CR) from castor bean kernels was investigated for the first time using an NMR-based metabolomic approach complemented with histopathological inspection and clinical chemistry. The chronic administration of CR could cause kidney and lung impairment, spleen and thymus dysfunction, and diminished nutrient intake in rats. An orthogonal signal correction partial least-squares discriminant analysis (OSC-PLSDA) of metabolomic profiles of rat biofluids highlighted a number of metabolic disturbances induced by CR. Long-term CR treatment produced perturbations of energy metabolism, nitrogen metabolism, amino acid metabolism, and the kynurenine pathway, and evoked oxidative stress. These findings could well explain the CR-induced nephrotoxicity and pulmonary toxicity, and provide several potential biomarkers for diagnostics of these toxicities. Such a (1)H NMR based metabolomics approach showed its ability to give a systematic and holistic view of the response of an organism to drugs and is suitable for dynamic studies on the toxicological effects of TCM.
Baczewski, Andrew D; Bond, Stephen D
2013-07-28
Generalized Langevin dynamics (GLD) arise in the modeling of a number of systems, ranging from structured fluids that exhibit a viscoelastic mechanical response, to biological systems, and other media that exhibit anomalous diffusive phenomena. Molecular dynamics (MD) simulations that include GLD in conjunction with external and/or pairwise forces require the development of numerical integrators that are efficient, stable, and have known convergence properties. In this article, we derive a family of extended variable integrators for the Generalized Langevin equation with a positive Prony series memory kernel. Using stability and error analysis, we identify a superlative choice of parameters and implement the corresponding numerical algorithm in the LAMMPS MD software package. Salient features of the algorithm include exact conservation of the first and second moments of the equilibrium velocity distribution in some important cases, stable behavior in the limit of conventional Langevin dynamics, and the use of a convolution-free formalism that obviates the need for explicit storage of the time history of particle velocities. Capability is demonstrated with respect to accuracy in numerous canonical examples, stability in certain limits, and an exemplary application in which the effect of a harmonic confining potential is mapped onto a memory kernel.
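The convolution-free formalism mentioned above can be illustrated with one auxiliary variable per Prony term. This is a deterministic (noise-free) forward-Euler sketch under assumed kernel parameters, not the LAMMPS integrator itself:

```python
def gle_step(v, z, dt, taus, cs, force=0.0, mass=1.0):
    """One Euler step of the extended-variable GLE.

    The memory integral with kernel K(t) = sum_k c_k * exp(-t / tau_k)
    is replaced by auxiliary variables z_k obeying
        dz_k/dt = -z_k / tau_k - c_k * v,
    so no explicit time history of the velocity needs to be stored.
    """
    v_new = v + dt * (force + sum(z)) / mass
    z_new = [zk + dt * (-zk / tau - c * v) for zk, tau, c in zip(z, taus, cs)]
    return v_new, z_new

# a free particle with an initial velocity relaxes under the memory friction
v, z = 1.0, [0.0]
for _ in range(2000):
    v, z = gle_step(v, z, dt=0.01, taus=[1.0], cs=[1.0])
```

The production algorithm in the paper additionally includes the fluctuating force consistent with the memory kernel and uses a more careful discretization than plain Euler.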
Annular feed air breathing fuel cell stack
Wilson, Mahlon S.; Neutzler, Jay K.
1997-01-01
A stack of polymer electrolyte fuel cells is formed from a plurality of unit cells where each unit cell includes fuel cell components defining a periphery and distributed along a common axis, where the fuel cell components include a polymer electrolyte membrane, an anode and a cathode contacting opposite sides of the membrane, and fuel and oxygen flow fields contacting the anode and the cathode, respectively, wherein the components define an annular region therethrough along the axis. A fuel distribution manifold within the annular region is connected to deliver fuel to the fuel flow field in each of the unit cells. The fuel distribution manifold is formed from a hydrophilic-like material to redistribute water produced by fuel and oxygen reacting at the cathode. In a particular embodiment, a single bolt through the annular region clamps the unit cells together. In another embodiment, separator plates between individual unit cells have an extended radial dimension to function as cooling fins for maintaining the operating temperature of the fuel cell stack.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...
An SVM model with hybrid kernels for hydrological time series
NASA Astrophysics Data System (ADS)
Wang, C.; Wang, H.; Zhao, X.; Xie, Q.
2017-12-01
Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
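The hybrid kernel itself is a convex combination of the two base kernels. A minimal sketch; the mixing weight, RBF width, and polynomial degree below are illustrative assumptions, not the values tuned in the study:

```python
import math

def hybrid_kernel(x, z, w=0.5, gamma=0.5, degree=2, coef0=1.0):
    # convex combination of an RBF kernel and a polynomial kernel;
    # the result is positive semi-definite for 0 <= w <= 1
    rbf = math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))
    poly = (sum(a * b for a, b in zip(x, z)) + coef0) ** degree
    return w * rbf + (1.0 - w) * poly
```

Such a callable can be supplied as a user-defined kernel to SVM implementations that accept custom kernel functions, so the forecasting pipeline is otherwise unchanged from the single-kernel case.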
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
Code of Federal Regulations, 2010 CFR
2010-07-01
... for California diesel distributed within the State of California? 80.616 Section 80.616 Protection of... ADDITIVES Motor Vehicle Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Violation Provisions § 80.616 What are the enforcement exemptions for California diesel distributed within...
Code of Federal Regulations, 2011 CFR
2011-07-01
... for California diesel distributed within the State of California? 80.616 Section 80.616 Protection of... ADDITIVES Motor Vehicle Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Violation Provisions § 80.616 What are the enforcement exemptions for California diesel distributed within...
Multiple kernels learning-based biological entity relationship extraction method.
Dongliang, Xu; Jingchang, Pan; Bailing, Wang
2017-09-20
Automatically extracting protein entity interaction information from the biomedical literature can help to build protein relation networks and design new drugs. There are more than 20 million literature abstracts in MEDLINE, the most authoritative textual database in the field of biomedicine, and their number follows an exponential growth over time. This frantic expansion of the biomedical literature can be difficult to absorb or analyze manually, so efficient and automated search engines based on text mining techniques are necessary to explore the biomedical literature. The P, R, and F values of the tag graph method on the AIMed corpus are 50.82, 69.76, and 58.61%, respectively. The P, R, and F values of the tag graph kernel method on the other four evaluation corpora are 2-5% higher than those of the all-paths graph kernel. The P, R, and F values of the two methods fusing the feature kernel with the tag graph kernel are 53.43, 71.62, and 61.30% and 55.47, 70.29, and 60.37%, respectively, indicating that the performance of both kernel fusion methods is better than that of a single kernel. In comparison with the all-paths graph kernel method, the tag graph kernel method is superior in terms of overall performance. Experiments show that the performance of the multi-kernel method is better than that of the three separate single-kernel methods and the dual-mutually fused kernel methods used hereof on five corpus sets.
THE CANADA-FRANCE ECLIPTIC PLANE SURVEY-FULL DATA RELEASE: THE ORBITAL STRUCTURE OF THE KUIPER BELT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petit, J.-M.; Rousselot, P.; Mousis, O.
2011-10-15
We report the orbital distribution of the trans-Neptunian objects (TNOs) discovered during the Canada-France Ecliptic Plane Survey (CFEPS), whose discovery phase ran from early 2003 until early 2007. The follow-up observations started just after the first discoveries and extended until late 2009. We obtained characterized observations of 321 deg{sup 2} of sky to depths in the range g {approx} 23.5-24.4 AB mag. We provide a database of 169 TNOs with high-precision dynamical classification and known discovery efficiency. Using this database, we find that the classical belt is a complex region with sub-structures that go beyond the usual splitting of inner (interior to 3:2 mean-motion resonance [MMR]), main (between 3:2 and 2:1 MMR), and outer (exterior to 2:1 MMR). The main classical belt (a = 40-47 AU) needs to be modeled with at least three components: the 'hot' component with a wide inclination distribution and two 'cold' components (stirred and kernel) with much narrower inclination distributions. The hot component must have a significantly shallower absolute magnitude (H{sub g}) distribution than the other two components. With 95% confidence, there are 8000{sup +1800}{sub -1600} objects in the main belt with H{sub g} {<=} 8.0, of which 50% are from the hot component, 40% from the stirred component, and 10% from the kernel; the hot component's fraction drops rapidly with increasing H{sub g}. Because of this, the apparent population fractions depend on the depth and ecliptic latitude of a trans-Neptunian survey. The stirred and kernel components are limited to only a portion of the main belt, while we find that the hot component is consistent with a smooth extension throughout the inner, main, and outer regions of the classical belt; in fact, the inner and outer belts are consistent with containing only hot-component objects.
The H{sub g} {<=} 8.0 TNO population estimates are 400 for the inner belt and 10,000 for the outer belt to within a factor of two (95% confidence). We show how the CFEPS Survey Simulator can be used to compare a cosmogonic model for the orbital element distribution to the real Kuiper Belt.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...
7 CFR 810.206 - Grades and grade requirements for barley.
Code of Federal Regulations, 2010 CFR
2010-01-01
... weight per bushel (pounds) Sound barley (percent) Maximum Limits of— Damaged kernels 1 (percent) Heat damaged kernels (percent) Foreign material (percent) Broken kernels (percent) Thin barley (percent) U.S... or otherwise of distinctly low quality. 1 Includes heat-damaged kernels. Injured-by-frost kernels and...
Critical review of carbon monoxide pressure measurements in the uranium carbon oxygen ternary system
NASA Astrophysics Data System (ADS)
Gossé, S.; Guéneau, C.; Chatillon, C.; Chatain, S.
2006-06-01
For high temperature reactors (HTR), the high fuel operating temperatures in normal and accident conditions require predicting the possible chemical interactions between the fuel components. Among the concerns for the thermomechanical behavior of the TRISO fuel particle, it is necessary to better understand the carbon monoxide formation due to chemical interactions at the interface between the UO2 kernel and the graphite buffer. In a first step, the thermodynamic properties of the U-C-O system have to be assessed. The experimental data from the literature on equilibrium CO gas pressure measurements in the UO2-UC2-C ternary section of the U-C-O system are critically reviewed. Discrepancies between the different determinations can be explained (i) by the different gaseous flow regimes in the experiments and (ii) by the location of the measuring pressure gauge away from the reaction site. Experimental values are corrected (i) for the gaseous flow type (molecular, transition, or viscous) defined by the Knudsen number and (ii) for the thermomolecular effect due to the temperature gradient inside the experimental vessels. Taking the selected and corrected values into account greatly improves the consistency of the original set of measurements.
Chevallier, Laure; Bauer, Alexander; Cavaliere, Sara; Hui, Rob; Rozière, Jacques; Jones, Deborah J
2012-03-01
Crystalline microspheres of Nb-doped TiO(2) with a high specific surface area were synthesized using a templating method exploiting ionic interactions between nascent inorganic components and an ionomer template. The microspheres exhibit a porosity gradient, with a meso-macroporous kernel and a mesoporous shell. The material has been investigated as a cathode electrocatalyst support for polymer electrolyte membrane (PEM) fuel cells. A uniform dispersion of Pt particles on the Nb-doped TiO(2) support was obtained using a microwave method, and the electrochemical properties were assessed by cyclic voltammetry. Nb-TiO(2)-supported Pt demonstrated very high stability: after 1000 voltammetric cycles, 85% of the electroactive Pt area remained, compared to 47% in the case of commercial Pt on carbon. For the oxygen reduction reaction (ORR), which takes place at the cathode, the highest stability was again obtained with the Nb-doped titania-based material, even though the mass activity calculated at 0.9 V vs RHE was slightly lower. The structured, mesoporous Nb-doped TiO(2) microspheres are an alternative support to carbon for PEM fuel cells. © 2012 American Chemical Society
Optimized formulas for the gravitational field of a tesseroid
NASA Astrophysics Data System (ADS)
Grombein, Thomas; Seitz, Kurt; Heck, Bernhard
2013-07-01
Various tasks in geodesy, geophysics, and related geosciences require precise information on the impact of mass distributions on gravity field-related quantities, such as the gravitational potential and its partial derivatives. Using forward modeling based on Newton's integral, mass distributions are generally decomposed into regular elementary bodies. In classical approaches, prisms or point mass approximations are mostly utilized. Considering the effect of the sphericity of the Earth, alternative mass modeling methods based on tesseroid bodies (spherical prisms) should be taken into account, particularly in regional and global applications. Expressions for the gravitational field of a point mass are relatively simple when formulated in Cartesian coordinates. In the case of integrating over a tesseroid volume bounded by geocentric spherical coordinates, it will be shown that it is also beneficial to represent the integral kernel in terms of Cartesian coordinates. This considerably simplifies the determination of the tesseroid's potential derivatives in comparison with previously published methodologies that make use of integral kernels expressed in spherical coordinates. Based on this idea, optimized formulas for the gravitational potential of a homogeneous tesseroid and its derivatives up to second-order are elaborated in this paper. These new formulas do not suffer from the polar singularity of the spherical coordinate system and can, therefore, be evaluated for any position on the globe. Since integrals over tesseroid volumes cannot be solved analytically, the numerical evaluation is achieved by means of expanding the integral kernel in a Taylor series with fourth-order error in the spatial coordinates of the integration point. As the structure of the Cartesian integral kernel is substantially simplified, Taylor coefficients can be represented in a compact and computationally attractive form. 
Thus, the use of the optimized tesseroid formulas particularly benefits from a significant decrease in computation time by about 45 % compared to previously used algorithms. In order to show the computational efficiency and to validate the mathematical derivations, the new tesseroid formulas are applied to two realistic numerical experiments and are compared to previously published tesseroid methods and the conventional prism approach.
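The advantage of evaluating the integral kernel in Cartesian coordinates can be illustrated with a minimal numerical sketch. This is not the paper's optimized tesseroid formulas: it evaluates Newton's integral for a homogeneous rectangular body by midpoint-rule quadrature, with the kernel 1/l expressed in Cartesian coordinates, and checks that far from the body the result approaches the point-mass potential G*M/r. All dimensions and the density are illustrative assumptions.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def cube_potential(rho, a, n, obs):
    """Midpoint-rule quadrature of Newton's integral V = G*rho * sum(dV / l)
    over a homogeneous cube of side a centered at the origin, with the
    kernel 1/l evaluated directly in Cartesian coordinates."""
    h = a / n
    dV = h ** 3
    V = 0.0
    for i in range(n):
        x = -a / 2 + (i + 0.5) * h
        for j in range(n):
            y = -a / 2 + (j + 0.5) * h
            for k in range(n):
                z = -a / 2 + (k + 0.5) * h
                V += dV / math.dist(obs, (x, y, z))
    return G * rho * V

# A 100 m cube of crustal-rock density seen from 5 km away is, to high
# accuracy, a point mass: V -> G*M/r.
rho, a, r = 2670.0, 100.0, 5000.0
V = cube_potential(rho, a, 8, (0.0, 0.0, r))
V_point = G * rho * a ** 3 / r
```

The tesseroid case replaces the flat-geometry volume element with the spherical one, but the Cartesian form of the kernel keeps the integrand, and hence the Taylor coefficients, simple.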
Code of Federal Regulations, 2014 CFR
2014-01-01
...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...
Code of Federal Regulations, 2013 CFR
2013-01-01
...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will not...
7 CFR 51.2296 - Three-fourths half kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...
The Classification of Diabetes Mellitus Using Kernel k-means
NASA Astrophysics Data System (ADS)
Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.
2018-01-01
Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is a development of the k-means algorithm that uses kernel learning to handle data that are not linearly separable, which distinguishes it from standard k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means performs well, and considerably better than SOM.
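The algorithm can be sketched as follows. This is a generic kernel k-means expressed in terms of a precomputed Gram matrix, not the authors' implementation; the toy data and RBF bandwidth are illustrative.

```python
import numpy as np

def kernel_kmeans(K, init_labels, iters=100):
    """Kernel k-means over a precomputed Gram matrix K. The squared distance
    of point x to the (implicit) mean of cluster c in feature space is
      K(x,x) - (2/m) * sum_{j in c} K(x,j) + (1/m^2) * sum_{j,l in c} K(j,l),
    so no explicit feature vectors are ever needed."""
    labels = np.asarray(init_labels).copy()
    k = labels.max() + 1
    n = K.shape[0]
    for _ in range(iters):
        D = np.full((n, k), np.inf)
        for c in range(k):
            mask = labels == c
            m = mask.sum()
            if m == 0:
                continue
            D[:, c] = (np.diag(K)
                       - 2.0 * K[:, mask].sum(axis=1) / m
                       + K[np.ix_(mask, mask)].sum() / m ** 2)
        new = D.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# Two 1-D blobs under a Gaussian (RBF) kernel: even a deliberately bad
# alternating initialization is untangled by the feature-space distances.
x = np.array([0.0, 0.1, 0.2, 10.0, 10.1, 10.2])
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
labels = kernel_kmeans(K, [0, 1, 0, 1, 0, 1])
```

Replacing the Gaussian kernel with any other positive semidefinite kernel changes only how K is built, not the clustering loop.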
UNICOS Kernel Internals Application Development
NASA Technical Reports Server (NTRS)
Caredo, Nicholas; Craw, James M. (Technical Monitor)
1995-01-01
Having an understanding of UNICOS kernel internals is valuable; however, the knowledge itself is only half the value. The other half comes from knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that draw on kernel information. In addition, algorithms, logic, and code for accessing kernel information are discussed. Code segments are provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information are also discussed.
Detection of maize kernels breakage rate based on K-means clustering
NASA Astrophysics Data System (ADS)
Yang, Liang; Wang, Zhuo; Gao, Lei; Bai, Xiaoping
2017-04-01
In order to optimize the recognition accuracy of maize kernel breakage detection and improve detection efficiency, this paper applies computer vision technology to detect maize kernel breakage based on the K-means clustering algorithm. First, the collected RGB images are converted into Lab images; then the clarity of the original images is evaluated with an 8-direction Sobel gradient energy function. Finally, maize kernel breakage is detected using different pixel-acquisition devices and different shooting angles. In this paper, broken maize kernels are identified by the color difference between intact and broken kernels. The image clarity evaluation and the different shooting angles are used to verify that the clarity and shooting angle of the images directly influence feature extraction. The results show that the K-means clustering algorithm can distinguish broken maize kernels effectively.
Modeling adaptive kernels from probabilistic phylogenetic trees.
Nicotra, Luca; Micheli, Alessio
2009-01-01
Modeling phylogenetic interactions is an open issue in many computational biology problems. In the context of gene function prediction, we introduce a class of kernels for structured data that leverages a hierarchical probabilistic modeling of phylogeny among species. We derive three kernels belonging to this setting: a sufficient statistics kernel, a Fisher kernel, and a probability product kernel. The new kernels are used in the context of support vector machine learning. The kernels' adaptivity is obtained through the estimation of the parameters of a tree-structured model of evolution, using as observed data phylogenetic profiles encoding the presence or absence of specific genes in a set of fully sequenced genomes. We report results obtained in the prediction of the functional class of the proteins of the budding yeast Saccharomyces cerevisiae, which compare favorably to a standard vector-based kernel and to a non-adaptive tree kernel function. A further comparative analysis is performed in order to assess the impact of the different components of the proposed approach. We show that the key features of the proposed kernels are their adaptivity to the input domain and their ability to deal with structured data interpreted through a graphical model representation.
Aflatoxin and nutrient contents of peanut collected from local market and their processed foods
NASA Astrophysics Data System (ADS)
Ginting, E.; Rahmianna, A. A.; Yusnawan, E.
2018-01-01
Peanut is susceptible to aflatoxin contamination, and the source of the peanuts as well as the processing method considerably affect the aflatoxin content of the products. Therefore, a study of the aflatoxin and nutrient contents of peanuts collected from a local market and their processed foods was performed. Good kernels were prepared into fried peanut, pressed-fried peanut, peanut sauce, peanut press cake, fermented peanut press cake (tempe) and fried tempe, while blended kernels (good and poor kernels) were processed into peanut sauce and tempe, and poor kernels were only processed into tempe. The results showed that good and blended kernels, which had high proportions of sound/intact kernels (82.46% and 62.09%), contained 9.8-9.9 ppb of aflatoxin B1, while a slightly higher level was seen in poor kernels (12.1 ppb). However, the moisture, ash, protein, and fat contents of the kernels were similar, as were those of the products. Peanut tempe and fried tempe showed the highest increase in protein content, while decreased fat contents were seen in all products. The increase in aflatoxin B1 of peanut tempe was greatest when prepared from poor kernels, followed by blended kernels and good kernels; however, it decreased by 61.2% on average after deep-frying. Excluding peanut tempe and fried tempe, aflatoxin B1 levels in all products derived from good kernels were below the permitted level (15 ppb). This suggests that sorting peanut kernels as ingredients, followed by heat processing, would decrease the aflatoxin content of the products.
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
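A minimal sketch of the Fourier-domain idea, under simplifying assumptions (a noiseless circular blur, a magnitude threshold on the kernel spectrum standing in for the paper's partial map, and a Wiener-type inverse in place of the wavelet- and learning-based models):

```python
import numpy as np

def partial_wiener_deconv(blurred, psf, tau=1e-2, eps=1e-6):
    """Fourier entries where the estimated kernel spectrum is weak (|H| <= tau)
    are treated as unreliable: there the observation is kept as-is, while a
    Wiener-type inverse is applied elsewhere. eps regularizes the inversion
    and would be matched to the noise level in practice."""
    H = np.fft.fft2(psf, s=blurred.shape)
    B = np.fft.fft2(blurred)
    reliable = np.abs(H) > tau                 # the "partial map"
    W = np.conj(H) / (np.abs(H) ** 2 + eps)    # Wiener-type inverse filter
    X = np.where(reliable, W * B, B)
    return np.real(np.fft.ifft2(X))

# Noiseless demo: a box-blurred square is restored far more accurately than
# the blurred observation itself.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
psf = np.full((3, 3), 1.0 / 9.0)
H = np.fft.fft2(psf, s=img.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))  # circular convolution
rec = partial_wiener_deconv(blurred, psf)
```

Excluding the near-zero spectral entries is what prevents the inversion from amplifying kernel-estimation error into ringing artifacts.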
NASA Astrophysics Data System (ADS)
Cao, M.-H.; Jiang, H.-K.; Chin, J.-S.
1982-04-01
An improved flat-fan spray model is used for the semi-empirical analysis of liquid fuel distribution downstream of a plain orifice injector under cross-stream air flow. The model assumes that, due to the aerodynamic force of the high-velocity cross air flow, the injected fuel immediately forms a flat-fan liquid sheet perpendicular to the cross flow. Once the droplets have been formed, the trajectories of individual droplets determine fuel distribution downstream. Comparison with test data shows that the proposed model accurately predicts liquid fuel distribution at any point downstream of a plain orifice injector under high-velocity, low-temperature uniform cross-stream air flow over a wide range of conditions.
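The droplet-trajectory step can be sketched with a simplified model. This is not the paper's flat-fan sheet model: it integrates a single droplet under quasi-steady Stokes drag (valid only at low droplet Reynolds number; a practical model would use an empirical drag coefficient), with all parameter values illustrative.

```python
import math

def droplet_state(d, rho_l, u_air, v0, mu=1.8e-5, dt=1e-5, t_end=0.05):
    """Explicit-Euler integration of one droplet in a uniform cross-stream air
    flow with quasi-steady Stokes drag.
    d: droplet diameter [m]; rho_l: liquid density [kg/m^3];
    u_air: cross-stream air velocity [m/s]; v0: injection velocity [m/s]."""
    tau = rho_l * d ** 2 / (18.0 * mu)  # Stokes relaxation time
    x = y = 0.0
    vx, vy = 0.0, v0                    # injected perpendicular to the air flow
    t = 0.0
    while t < t_end:
        vx += (u_air - vx) / tau * dt   # drag pulls droplet toward air velocity
        vy += -vy / tau * dt            # injection velocity decays
        x += vx * dt
        y += vy * dt
        t += dt
    return x, y, vx, vy

# A 50-micron droplet (rho ~ 800 kg/m^3) injected at 20 m/s into a 100 m/s
# cross flow: after many relaxation times it travels with the air, and its
# cross-stream penetration saturates near v0 * tau.
x, y, vx, vy = droplet_state(5e-5, 800.0, 100.0, 20.0)
```

Summing such trajectories over the droplet size distribution produced by the sheet breakup yields the downstream fuel distribution.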
Bueso, Francisco; Sosa, Italo; Chun, Roldan; Pineda, Renan
2016-01-01
Jatropha curcas L. (Jatropha) is believed to have originated from Mexico and Central America. So far, characterization efforts have focused on Asia, Africa and Mexico. Non-toxic, low phorbol ester (PE) varieties have been found only in Mexico. Differences in PE content in seeds and their structural components, crude oil and cake from Jatropha provenances cultivated in Central and South America were evaluated. Seeds were dehulled, and kernels were separated into tegmen, cotyledons and embryo for PE quantitation by RP-HPLC. Crude oil and cake PE content was also measured. No phenotypic departures in seed size and structure were observed among Jatropha cultivated in Central and South America compared to provenances from Mexico, Asia and Africa. Cotyledons comprised 96.2-97.5 %, tegmen 1.6-2.4 % and embryo 0.9-1.4 % of the dehulled kernel. The total PE content of all nine provenances categorized them as toxic. Significant differences in kernel PE content were observed among provenances from Mexico, Central and South America (P < 0.01), with the Mexican provenance the highest (7.6 mg/g) and Cabo Verde the lowest (2.57 mg/g). All accessions had >95 % of PEs concentrated in the cotyledons, 0.5-3 % in the tegmen and 0.5-1 % in the embryo. Over 60 % of the total PE in dehulled kernels accumulated in the crude oil, while 35-40 % remained in the cake after extraction. Low phenotypic variability in seed physical and structural traits and PE content was observed among provenances from Latin America. Very high-PE provenances with potential as biopesticide were found in Central America. No PE-free, edible Jatropha that could be used for human consumption and feedstock was found among provenances currently cultivated in Central America and Brazil. Furthermore, the dehulled kernel structural parts as well as the crude oil and cake contained toxic PE levels.
NASA Astrophysics Data System (ADS)
Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier
2015-12-01
Random walk particle tracking methods are a computationally efficient family of methods to solve reactive transport problems. While the number of particles in most realistic applications is on the order of 10⁶-10⁹, the number of reactive molecules even in diluted systems might be on the order of fractions of the Avogadro number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in loss of accuracy, but also in an improper reproduction of the mixing process, limited by diffusion. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that allows obtaining the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the location of the molecules represented by that particle, rigorously represented by a kernel density function. The probability of reaction can be obtained by combining the kernels associated with two potentially reactive particles. We demonstrate that the observed deviation in the reaction vs. time curves in numerical experiments reported in the literature could be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process.
Our results show that KDEs are powerful tools to improve computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in diluted systems should be modeled with alternative mechanistic models rather than with a limited number of particles.
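The two kernel operations described above, estimating a concentration from particle samples and combining two particles' kernels into a reaction probability density, can be sketched in one dimension (a minimal illustration with a fixed Gaussian bandwidth, not the authors' time-evolving kernel):

```python
import math

def kde_concentration(particles, h, x):
    """Concentration estimate at x: each tracked particle is a sample from the
    cloud of molecules it represents, so it contributes a Gaussian kernel of
    bandwidth h rather than a point mass."""
    norm = len(particles) * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in particles) / norm

def reaction_kernel(p1, p2, h):
    """Overlap of the kernels of two potentially reactive particles: the
    integral of the product of two Gaussians of bandwidth h is a Gaussian in
    their separation with variance 2*h**2."""
    s = math.sqrt(2.0) * h
    return math.exp(-0.5 * ((p1 - p2) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

# 200 particles representing a uniform unit concentration on [0, 1]:
particles = [(i + 0.5) / 200.0 for i in range(200)]
c_mid = kde_concentration(particles, 0.05, 0.5)  # close to 1.0 away from edges
```

With point-mass (fixed-support) particles the same 200 samples would give a spiky concentration field; the kernel support is what recovers the well-mixed limit.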
Characterizing crown fuel distribution for conifers in the interior western United States
Seth Ex; Frederick W. Smith; Tara Keyser
2015-01-01
Canopy fire hazard evaluation is essential for prioritizing fuel treatments and for assessing potential risk to firefighters during suppression activities. Fire hazard is usually expressed as predicted potential fire behavior, which is sensitive to the methodology used to quantitatively describe fuel profiles: methodologies that assume that fuel is distributed...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kornilov, Oleg; Toennies, J. Peter
The size distribution of para-H₂ (pH₂) clusters produced in free jet expansions at a source temperature of T₀ = 29.5 K and pressures of P₀ = 0.9-1.96 bars is reported and analyzed according to a cluster growth model based on the Smoluchowski theory with kernel scaling. Good overall agreement is found between the measured and predicted shape of the distribution, N_k = A k^a e^(-bk). The fit yields values for A and b for values of a derived from simple collision models. The small remaining deviations between measured abundances and theory imply a (pH₂)_k magic number cluster of k = 13, as has been observed previously by Raman spectroscopy. The predicted linear dependence of b^(-(a+1)) on source gas pressure was verified and used to determine the value of the basic effective agglomeration reaction rate constant. A comparison of the corresponding effective growth cross sections σ₁₁ with results from a similar analysis of He cluster size distributions indicates that the latter are much larger, by a factor of 6-10. An analysis of the three-body recombination rates, the geometric sizes, and the fact that the He clusters are liquid independent of their size can explain the larger cross sections found for He.
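The scaling form of the distribution can be explored numerically; the parameter values below are illustrative, not the paper's fitted values. For a = 2, the continuum limit of the number-weighted mean cluster size is (a + 1)/b:

```python
import math

def cluster_abundance(k, A, a, b):
    """Kernel-scaling form of the cluster size distribution: N_k = A*k^a*e^(-b*k)."""
    return A * k ** a * math.exp(-b * k)

def mean_cluster_size(a, b, kmax=5000):
    """Number-weighted mean cluster size of the distribution (A cancels out).
    In the continuum limit, <k> = (a + 1) / b."""
    ks = range(1, kmax + 1)
    Ns = [cluster_abundance(k, 1.0, a, b) for k in ks]
    return sum(k * N for k, N in zip(ks, Ns)) / sum(Ns)

mean = mean_cluster_size(2.0, 0.1)  # continuum estimate: (2 + 1) / 0.1 = 30
```

A magic-number cluster such as k = 13 would show up as an excess abundance above this smooth scaling curve.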
The influence of sub-grid scale motions on particle collision in homogeneous isotropic turbulence
NASA Astrophysics Data System (ADS)
Xiong, Yan; Li, Jing; Liu, Zhaohui; Zheng, Chuguang
2018-02-01
The absence of sub-grid scale (SGS) motions leads to severe errors in particle pair dynamics, which represents a great challenge for large eddy simulation of particle-laden turbulent flow. In order to address this issue, data from direct numerical simulation (DNS) of homogeneous isotropic turbulence coupled with Lagrangian particle tracking are used as a benchmark to evaluate the corresponding results of filtered DNS (FDNS). It is found that the filtering in FDNS leads to a non-monotonic variation of the particle collision statistics, including the radial distribution function, the radial relative velocity, and the collision kernel. The peak of the radial distribution function shifts to the large-inertia region due to the lack of SGS motions, and analysis of the local flow-structure characteristic variable at particle positions indicates that the most effective interaction scale between particles and fluid eddies is increased in FDNS. Moreover, this scale shifting has an obvious effect on the odd-order moments of the probability density function of the radial relative velocity, i.e. the skewness, which exhibits a strong correlation with the variance of the radial distribution function in FDNS. As a whole, the radial distribution function, together with the radial relative velocity, can compensate for the SGS effects on the collision kernel in FDNS when the Stokes number based on the Kolmogorov time scale is greater than 3.0. However, considerable errors remain for St_k < 3.0.
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...
7 CFR 51.1450 - Serious damage.
Code of Federal Regulations, 2010 CFR
2010-01-01
...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...
7 CFR 51.1450 - Serious damage.
Code of Federal Regulations, 2011 CFR
2011-01-01
...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...
7 CFR 51.1450 - Serious damage.
Code of Federal Regulations, 2012 CFR
2012-01-01
...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...
Spark ignited turbulent flame kernel growth. Annual report, January--December 1991
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santavicca, D.A.
1994-06-01
An experimental study of the effect of spark power on the growth rate of spark-ignited flame kernels was conducted in a turbulent flow system at 1 atm, 300 K conditions. All measurements were made with premixed propane-air at a fuel/air equivalence ratio of 0.93, with 0%, 8% or 14% dilution. Two flow conditions were studied: a low turbulence intensity case with a mean velocity of 1.25 m/sec and a turbulence intensity of 0.33 m/sec, and a high turbulence intensity case with a mean velocity of 1.04 m/sec and a turbulence intensity of 0.88 m/sec. The growth of the spark-ignited flame kernel was recorded over a time interval from 83 μsec to 20 msec following the start of ignition using high speed laser shadowgraphy. In order to evaluate the effect of ignition spark power, tests were conducted with a long duration (ca 4 msec) inductive discharge ignition system with an average spark power of ca 14 watts and two short duration (ca 100 nsec) breakdown ignition systems with average spark powers of ca 6 × 10⁴ and ca 6 × 10⁵ watts. The results showed that increased spark power resulted in an increased growth rate, where the effect of short duration breakdown sparks was found to persist for times of the order of milliseconds. The effectiveness of increased spark power was found to be less at high turbulence and high dilution conditions. Increased spark power had a greater effect on the 0-5 mm burn time than on the 5-13 mm burn time, in part because of the effect of breakdown energy on the initial size of the flame kernel. Finally, when spark power was increased by shortening the spark duration while keeping the effective energy the same, there was a significant increase in the misfire rate; however, when the spark power was further increased by increasing the breakdown energy, the misfire rate dropped to zero.
NASA Astrophysics Data System (ADS)
Du, Peijun; Tan, Kun; Xing, Xiaoshi
2010-12-01
Combining Support Vector Machines (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in a Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which is time-consuming and can result in low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Implications for semiparametric estimation are also presented in this paper. Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing imagery with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to evaluate the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. In contrast with traditional classifiers, including Spectral Angle Mapping (SAM), Minimum Distance Classification (MDC), and an SVM classifier using the Radial Basis Function kernel, the proposed wavelet SVM classifier using a wavelet kernel function in a Reproducing Kernel Hilbert Space noticeably improves classification accuracy.
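One commonly used translation-invariant wavelet kernel (a Morlet-type mother wavelet, as in the wavelet-SVM literature; the exact kernel and dilation parameter used in the study may differ) can be sketched and checked for symmetry and positive semidefiniteness:

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    """Translation-invariant wavelet kernel with a Morlet-type mother wavelet:
    K(x, y) = prod_i cos(1.75*(x_i - y_i)/a) * exp(-(x_i - y_i)**2 / (2*a**2)).
    Its Fourier transform is nonnegative, so the kernel is admissible (PSD)."""
    D = X[:, None, :] - Y[None, :, :]
    return np.prod(np.cos(1.75 * D / a) * np.exp(-D ** 2 / (2.0 * a ** 2)),
                   axis=2)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))     # stand-in for 20 pixels with 5 spectral bands
K = wavelet_kernel(X, X)
eigs = np.linalg.eigvalsh(K)     # numerical check of admissibility
```

Once the Gram matrix K is built, it can be passed to any kernel-SVM solver in place of the RBF kernel.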
A trace ratio maximization approach to multiple kernel-based dimensionality reduction.
Jiang, Wenhao; Chung, Fu-lai
2014-01-01
Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
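The trace ratio maximization step can be sketched with the standard iterative scheme (solving a sequence of eigenproblems for A − λB; a generic sketch, not necessarily the exact MKL-TR update, and the matrices below are illustrative):

```python
import numpy as np

def trace_ratio(A, B, d, iters=100, tol=1e-12):
    """Maximize tr(V.T A V) / tr(V.T B V) over orthonormal V (n x d) by the
    iterative scheme: given the current ratio lam, take V as the top-d
    eigenvectors of A - lam*B, then update lam; repeat until lam is stable."""
    lam = 0.0
    V = None
    for _ in range(iters):
        w, U = np.linalg.eigh(A - lam * B)   # eigenvalues in ascending order
        V = U[:, -d:]                        # top-d eigenvectors
        new = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
        done = abs(new - lam) < tol
        lam = new
        if done:
            break
    return V, lam

# Toy problem: with B = I the optimum picks A's largest eigendirection(s),
# so for d = 1 the converged ratio equals A's top eigenvalue.
A = np.diag([4.0, 1.0, 0.1])
B = np.eye(3)
V, lam = trace_ratio(A, B, 1)
```

In the MKL setting, A and B would be built from the regularized combination of base kernels, with the kernel weights updated alternately with V.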
Hadamard Kernel SVM with applications for breast cancer outcome predictions.
Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong
2017-12-21
Breast cancer is one of the leading causes of death for women. It is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM has attracted a lot of attention for its discriminative power in dealing with small-sample pattern recognition problems, but how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (the Hadamard kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard kernel outperforms the classical kernels and the correlation kernel in terms of Area Under the ROC Curve (AUC) values on a number of real-world data sets adopted to test the performance of the different methods. Hadamard kernel SVM is effective for breast cancer predictions, in terms of both prognosis and diagnosis, and may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.
[Vertical distribution of fuels in Pinus yunnanensis forest and related affecting factors].
Wang, San; Niu, Shu-Kui; Li, De; Wang, Jing-Hua; Chen, Feng; Sun, Wu
2013-02-01
In order to understand the effects of the spatial distribution of fuel loadings on forest fire types and behaviors, the canopy fuels and floor fuels of Pinus yunnanensis forests in Southwest China with different canopy densities, diameters at breast height (DBH), tree heights and stand ages, and at different altitudes, slope grades, slope positions and aspects, were taken as test objects. The fuel loadings and their spatial distribution characteristics at different vertical layers were compared, and the fire behaviors in different stands were analyzed. The relationships between the fuel loadings and the environmental factors were also analyzed by canonical correspondence analysis (CCA). The vertical distribution of fuels differed significantly among stands. Pinus yunnanensis-oak-Syzygium aromaticum, Pinus yunnanensis-oak, and pure Pinus yunnanensis forests were prone to surface fires but not crown fires, while in Pinus yunnanensis-Platycladus orientalis, Pinus yunnanensis-Keteleeria fortunei, and Keteleeria fortunei-Pinus yunnanensis stands, surface fires were not only likely to occur but could also easily develop into crown fires. The crown fuels were mainly affected by stand age, altitude, DBH, and tree height, while the floor fuels were mainly affected by canopy density, slope grade, altitude, and stand age.
Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds
NASA Astrophysics Data System (ADS)
Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.
In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the platforms most widely available to scientists are clusters, grids, and cloud systems. Such infrastructures are currently undergoing revolutionary change due to the integration of many-core technologies, which provide orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability, together with issues including data distribution, software heterogeneity, and ad hoc hardware availability, commonly forces scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.
A framework for optimal kernel-based manifold embedding of medical image data.
Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma
2015-04-01
Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
Evaluating the Gradient of the Thin Wire Kernel
NASA Technical Reports Server (NTRS)
Wilton, Donald R.; Champagne, Nathan J.
2008-01-01
Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.
Cao, Peng; Liu, Xiaoli; Yang, Jinzhu; Zhao, Dazhe; Huang, Min; Zhang, Jian; Zaiane, Osmar
2017-12-01
Alzheimer's disease (AD) is not only a substantial financial burden on the health care system but also an emotional burden on patients and their families. Making an accurate diagnosis of AD based on brain magnetic resonance imaging (MRI) is becoming more and more critical, with particular emphasis on the earliest stages. However, high dimensionality and imbalanced data are two major challenges in the study of computer-aided AD diagnosis. The greatest limitation of existing dimensionality reduction and over-sampling methods is that they assume a linear relationship between the MRI features (predictor) and the disease status (response). To better capture the complicated but more flexible relationship, we propose multi-kernel based dimensionality reduction and over-sampling approaches. We combined Marginal Fisher Analysis with ℓ2,1-norm based multi-kernel learning (MKMFA) to achieve sparsity over regions-of-interest (ROIs), which simultaneously selects a subset of the relevant brain regions and learns a dimensionality transformation. Meanwhile, multi-kernel over-sampling (MKOS) was developed to generate synthetic instances in the optimal kernel space induced by MKMFA, so as to compensate for the class-imbalanced distribution. We comprehensively evaluate the proposed models for diagnostic classification (binary and multi-class) including all subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The experimental results not only demonstrate that the proposed method has superior performance over multiple comparable methods, but also identify relevant imaging biomarkers that are consistent with prior medical knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jourde, K.; Gibert, D.; Marteau, J.
2015-08-01
This paper examines how the resolution of small-scale geological density models is improved through the fusion of information provided by gravity measurements and density muon radiographies. Muon radiography aims at determining the density of geological bodies by measuring their screening effect on the natural flux of cosmic muons. Muon radiography essentially works like a medical X-ray scan and integrates density information along elongated narrow conical volumes. Gravity measurements are linked to density by a 3-D integration encompassing the whole studied domain. We establish the mathematical expressions of these integration formulas - called acquisition kernels - and derive the resolving kernels, spatial filters that relate the true unknown density structure to the density distribution actually recovered from the available data. The resolving kernel approach allows one to quantitatively describe the improvement in the resolution of the density models achieved by merging gravity data and muon radiographies. The method developed in this paper may be used to optimally design the geometry of the field measurements to be performed in order to obtain a given spatial resolution pattern of the density model to be constructed. The resolving kernels derived in the joint muon-gravimetry case indicate that gravity data are almost useless for constraining the density structure in regions sampled by more than two muon tomography acquisitions. Interestingly, the resolution in deeper regions not sampled by muon tomography is significantly improved by combining the two techniques. The method is illustrated with examples for the La Soufrière volcano of Guadeloupe.
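A minimal sketch of the resolving-kernel idea for a linear(ized) damped least-squares inversion, with a hypothetical random acquisition matrix G standing in for the gravity and muon acquisition kernels:

```python
import numpy as np

def resolving_kernels(G, eps):
    """Damped least-squares generalized inverse Gg = (G^T G + eps^2 I)^-1 G^T
    and resolution matrix R = Gg @ G; row i of R is the resolving kernel
    relating the true density model to the i-th recovered parameter."""
    n = G.shape[1]
    Gg = np.linalg.solve(G.T @ G + eps ** 2 * np.eye(n), G.T)
    return Gg @ G
```

Stacking rows from two acquisition geometries (narrow muon cones, broad gravity integrals) into G and inspecting how close R gets to the identity is one way to quantify the resolution gained by merging the datasets.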
SOMKE: kernel density estimation over data streams by sequences of self-organizing maps.
Cao, Yuan; He, Haibo; Man, Hong
2012-08-01
In this paper, we propose a novel method, SOMKE, for kernel density estimation (KDE) over data streams based on sequences of self-organizing maps (SOMs). In many stream data mining applications, traditional KDE methods are infeasible because of the high computational cost, processing time, and memory requirement. To reduce the time and space complexity, we propose a SOM structure to obtain well-defined data clusters for estimating the underlying probability distributions of incoming data streams. The main idea is to build a series of SOMs over the data streams via two operations: creating and merging SOM sequences. The creation phase produces the SOM sequence entries for windows of the data, which captures clustering information about the incoming data streams. The size of the SOM sequences can be further reduced by combining consecutive entries in the sequence based on the Kullback-Leibler divergence. Finally, the probability density functions over arbitrary time periods along the data streams can be estimated using such SOM sequences. We compare SOMKE with two other KDE methods for data streams, the M-kernel approach and the cluster kernel approach, in terms of accuracy and processing time for various stationary data streams. Furthermore, we also investigate the use of SOMKE over nonstationary (evolving) data streams, including a synthetic nonstationary data stream, a real-world financial data stream and a group of network traffic data streams. The simulation results illustrate the effectiveness and efficiency of the proposed approach.
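The core idea of estimating a density from SOM prototypes rather than from the raw stream can be sketched as a weighted Gaussian KDE. A minimal 1-D illustration (the SOM training and sequence-merging steps are omitted; prototype centers and hit counts are assumed given):

```python
import numpy as np

def prototype_kde(x, centers, counts, h):
    """Gaussian KDE evaluated at points x, built not from raw stream data
    but from weighted prototypes (stand-ins for SOM codebook vectors
    summarizing a window of the stream)."""
    w = np.asarray(counts, float)
    w = w / w.sum()                                  # normalize to a density
    z = (np.asarray(x)[:, None] - np.asarray(centers)[None, :]) / h
    return (w * np.exp(-0.5 * z ** 2) / (h * np.sqrt(2 * np.pi))).sum(axis=1)
```

Because only the prototypes and their counts are stored, memory stays bounded no matter how long the stream runs.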
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besmann, Theodore M; Shin, Dongwon
TRISO coated particle fuel is envisioned as a next-generation replacement for current urania pellet fuel in LWR applications. To obtain adequate fissile loading, the kernel of the TRISO particle will need to be UN. In support of the fuel development effort, an assessment of phase regions of interest in the U-C-N system was undertaken, as the fuel will be prepared by carbothermic reduction of the oxide and will be in equilibrium with carbon within the TRISO particle. The phase equilibria and thermochemistry of the U-C-N system are reviewed, including nitrogen pressure measurements above various phase fields. Selected measurements were used to fit a first-order model of the UC(1-x)Nx phase, represented by the inter-solution of UN and UC. The fit to the data was significantly improved by also adjusting the heat of formation of UN by ~12 kJ/mol, and the phase equilibria were best reproduced by also adjusting the heat for U2N3 by +XXX. The determined interaction parameters yielded a slightly positive deviation from ideality, which agrees with lattice parameter measurements that show positive deviation from Vegard's law. The resultant model, together with reported values for other phases in the system, was used to generate isothermal sections of the U-C-N phase diagram. Nitrogen partial pressures were also computed for regions of interest.
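The "first-order model" of the UC(1-x)Nx phase described above, an inter-solution of UN and UC with a single interaction parameter, suggests a regular-solution Gibbs energy of the form below. This is a hedged reconstruction for illustration only; the symbols (Ω for the interaction parameter, G° for the end-member energies) are generic CALPHAD notation, not taken from the paper:

```latex
G_{\mathrm{UC}_{1-x}\mathrm{N}_x} = (1-x)\,G^{\circ}_{\mathrm{UC}} + x\,G^{\circ}_{\mathrm{UN}}
  + RT\bigl[x\ln x + (1-x)\ln(1-x)\bigr] + \Omega\,x(1-x)
```

A positive Ω would give the slightly positive deviation from ideality noted in the abstract.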
Kernel Machine SNP-set Testing under Multiple Candidate Kernels
Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.
2013-01-01
Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
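Constructing a composite kernel from candidate kernels can be sketched simply: normalize each candidate Gram matrix and take a convex combination, which preserves positive semi-definiteness. A minimal numpy illustration of that construction (the perturbation-based testing procedure itself is not shown):

```python
import numpy as np

def cosine_normalize(K):
    # scale so every candidate contributes on a comparable scale (diag = 1)
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

def composite_kernel(kernels, weights=None):
    """Composite kernel: convex combination of normalized candidates.
    Equal weights by default; any nonnegative weights keep it PSD."""
    if weights is None:
        weights = np.ones(len(kernels))
    w = np.asarray(weights, float)
    w = w / w.sum()
    return sum(wi * cosine_normalize(K) for wi, K in zip(w, kernels))
```

Since each candidate Gram matrix is PSD, the convex combination is a valid kernel and can be dropped straight into the KM test statistic.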
Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki
2014-01-01
The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enabled diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels above 100 Hounsfield units (HU) in both the brain and bone images took the CT values of the bone images, while all other pixels took the CT values of the brain images. Three radiologists compared the improved multi-kernel images with the bone images. The improved multi-kernel images and the brain images were identically displayed on the brain window settings. All three radiologists agreed that the improved multi-kernel images on the bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and fewer images to store can be expected.
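The pixel-combination rule described above (bone-kernel values where both reconstructions exceed 100 HU, brain-kernel values elsewhere) is a one-line masking operation. A minimal numpy sketch; the function name is illustrative:

```python
import numpy as np

def merge_multikernel(brain_hu, bone_hu, threshold=100.0):
    """Combine a low-pass (brain) and a high-pass (bone) reconstruction:
    pixels above the threshold in BOTH images take the bone-kernel CT
    value; all other pixels take the brain-kernel CT value."""
    mask = (brain_hu > threshold) & (bone_hu > threshold)
    return np.where(mask, bone_hu, brain_hu)
```

The high-density (bone) structures thus keep their sharp high-pass reconstruction while soft tissue keeps the low-noise low-pass one.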
DOT National Transportation Integrated Search
2017-04-30
Coastal communities are vulnerable to disruptions in their fuel distribution networks due to : tropical storms, hurricanes and associated flooding. These disruptions impact communities by : limiting fueling in the days following the storm potentially...
Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.
Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui
2018-03-01
Changing the metric on the data may change the data distribution; hence, a good distance metric can improve the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by the multiple kernel representation. By this approach, we project the data into a high-dimensional space, where the data can be well represented by linear ML. Then, we reformulate linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.
7 CFR 810.202 - Definition of other terms.
Code of Federal Regulations, 2014 CFR
2014-01-01
... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...
7 CFR 810.202 - Definition of other terms.
Code of Federal Regulations, 2013 CFR
2013-01-01
... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...
7 CFR 810.202 - Definition of other terms.
Code of Federal Regulations, 2012 CFR
2012-01-01
... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...
graphkernels: R and Python packages for graph comparison
Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten
2018-01-01
Summary: Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. Availability and implementation: The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary information: Supplementary data are available online at Bioinformatics. PMID:29028902
Aflatoxin variability in pistachios.
Mahoney, N E; Rodriguez, S B
1996-01-01
Pistachio fruit components, including hulls (mesocarps and epicarps), seed coats (testas), and kernels (seeds), all contribute to variable aflatoxin content in pistachios. Fresh pistachio kernels were individually inoculated with Aspergillus flavus and incubated 7 or 10 days. Hulled, shelled kernels were either left intact or wounded prior to inoculation. Wounded kernels, with or without the seed coat, were readily colonized by A. flavus and after 10 days of incubation contained 37 times more aflatoxin than similarly treated unwounded kernels. The aflatoxin levels in the individual wounded pistachios were highly variable. Neither fungal colonization nor aflatoxin was detected in intact kernels without seed coats. Intact kernels with seed coats had limited fungal colonization and low aflatoxin concentrations compared with their wounded counterparts. Despite substantial fungal colonization of wounded hulls, aflatoxin was not detected in hulls. Aflatoxin levels were significantly lower in wounded kernels with hulls than in kernels of hulled pistachios. Both the seed coat and a water-soluble extract of hulls suppressed aflatoxin production by A. flavus. PMID:8919781
graphkernels: R and Python packages for graph comparison.
Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten
2018-02-01
Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. © The Author(s) 2017. Published by Oxford University Press.
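One of the baseline kernels mentioned, the label histogram kernel, is simple enough to sketch directly: represent each graph by its vector of node-label counts and take inner products. A minimal numpy illustration of the idea (not the package's C++ implementation):

```python
import numpy as np

def label_histogram_kernel(graph_labels, n_labels):
    """Baseline graph kernel: K[i, j] = <h_i, h_j>, where h_g counts the
    node labels occurring in graph g (a linear kernel on label histograms)."""
    H = np.zeros((len(graph_labels), n_labels))
    for i, labels in enumerate(graph_labels):
        for lab in labels:
            H[i, lab] += 1
    return H @ H.T
```

The resulting Gram matrix can be fed to any kernel method, exactly as with the matrices the package computes.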
Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.
2013-01-01
Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). 
For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K
2015-05-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
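The weighted eigenfunction expansion can be illustrated on a simple domain: take the eigenfunctions of a graph Laplacian (a 1-D path standing in for a surface mesh) and damp each spectral coefficient with the heat-kernel weight exp(-λt). A minimal numpy sketch of that expansion, not the authors' surface implementation:

```python
import numpy as np

def path_laplacian(n):
    # combinatorial Laplacian of a path graph, a 1-D stand-in for the
    # Laplace-Beltrami operator on a surface mesh
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.diag(A.sum(axis=1)) - A

def heat_kernel_smooth(f, L, t):
    """Expand f in the Laplacian eigenfunctions psi_k and damp each
    spectral coefficient by exp(-lam_k * t), the heat-kernel weights."""
    lam, psi = np.linalg.eigh(L)
    return psi @ (np.exp(-t * lam) * (psi.T @ f))
```

At t = 0 the expansion reproduces f exactly; increasing t suppresses high-frequency modes, which is the sense in which the method is equivalent to isotropic heat diffusion.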
Ni, Xinzhi; Wilson, Jeffrey P; Toews, Michael D; Buntin, G David; Lee, R Dewey; Li, Xin; Lei, Zhongren; He, Kanglai; Xu, Wenwei; Li, Xianchun; Huffaker, Alisa; Schmelz, Eric A
2014-10-01
Spatial and temporal patterns of insect damage in relation to aflatoxin contamination in a corn field with plants of uniform genetic background are not well understood. After previous examination of spatial patterns of insect damage and aflatoxin in pre-harvest corn fields, we further examined both spatial and temporal patterns of cob- and kernel-feeding insect damage, and aflatoxin level with two samplings at pre-harvest in 2008 and 2009. The feeding damage by each of the ear/kernel-feeding insects (i.e., corn earworm/fall armyworm damage on the silk/cob, and discoloration of corn kernels by stink bugs) and maize weevil population were assessed at each grid point with five ears. Sampling data showed a field edge effect in both insect damage and aflatoxin contamination in both years. Maize weevils tended toward an aggregated distribution more frequently than either corn earworm or stink bug damage in both years. The frequency of detecting aggregated distribution for aflatoxin level was less than any of the insect damage assessments. Stink bug damage and maize weevil number were more closely associated with aflatoxin level than was corn earworm damage. In addition, the indices of spatial-temporal association (χ) demonstrated that the number of maize weevils was associated between the first (4 weeks pre-harvest) and second (1 week pre-harvest) samplings in both years on all fields. In contrast, corn earworm damage between the first and second samplings from the field on the Belflower Farm, and aflatoxin level and corn earworm damage from the field on the Lang Farm were dissociated in 2009. Published 2012. This article is a U.S. Government work and is in the public domain in the USA.
Kinetic behaviours of aggregate growth driven by time-dependent migration, birth and death
NASA Astrophysics Data System (ADS)
Zhu, Sheng-Qing; Yang, Shun-You; Ke, Jianhong; Lin, Zhenquan
2008-12-01
We propose a dynamic growth model to mimic some social phenomena, such as the evolution of cities' populations, in which monomer migrations occur between any two aggregates and monomer birth/death can simultaneously occur in each aggregate. Considering the fact that the rate kernels of the migration, birth and death processes may change with time, we assume that the migration rate kernel is ij f(t), and that the self-birth and death rate kernels are i g_1(t) and i g_2(t), respectively. Based on the mean-field rate equation, we obtain the exact solution of this model and then discuss semi-quantitatively the scaling behaviour of the aggregate size distribution at large times. The results show that in the long-time limit, (i) if ∫_0^t g_1(t') dt' / ∫_0^t g_2(t') dt' ≥ 1 or exp{∫_0^t [g_2(t') - g_1(t')] dt'} / ∫_0^t f(t') dt' → 0, the aggregate size distribution a_k(t) can obey a generalized scaling form; (ii) if ∫_0^t g_1(t') dt' / ∫_0^t g_2(t') dt' → 0 and exp{∫_0^t [g_2(t') - g_1(t')] dt'} / ∫_0^t f(t') dt' → ∞, a_k(t) can take a scale-free form and decay exponentially in the size k; (iii) a_k(t) satisfies a modified scaling law in the remaining cases. Moreover, the total mass of aggregates depends strongly on the net birth rate g_1(t) - g_2(t) and evolves exponentially as exp{∫_0^t [g_1(t') - g_2(t')] dt'}, which is in qualitative agreement with the evolution of the total population of a country in the real world.
Suspended liquid particle disturbance on laser-induced blast wave and low density distribution
NASA Astrophysics Data System (ADS)
Ukai, Takahiro; Zare-Behtash, Hossein; Kontis, Konstantinos
2017-12-01
The impurity effect of suspended liquid particles on the laser-induced gas breakdown was experimentally investigated in quiescent gas. The focus of this study is the investigation of the influence of the impurities on the shock wave structure as well as the low density distribution. A 532 nm Nd:YAG laser beam with an 188 mJ/pulse was focused on the chamber filled with suspended liquid particles 0.9 ± 0.63 μm in diameter. Several shock waves are generated by multiple gas breakdowns along the beam path in the breakdown with particles. Four types of shock wave structures can be observed: (1) the dual blast waves with a similar shock radius, (2) the dual blast waves with a large shock radius at the lower breakdown, (3) the dual blast waves with a large shock radius at the upper breakdown, and (4) the triple blast waves. The independent blast waves interact with each other and enhance the shock strength behind the shock front in the lateral direction. The triple blast waves lead to the strongest shock wave in all cases. The shock wave front that propagates toward the opposite laser focal spot impinges on one another, and thereafter a transmitted shock wave (TSW) appears. The TSW interacts with the low density core called a kernel; the kernel then longitudinally expands quickly due to a Richtmyer-Meshkov-like instability. The laser-particle interaction causes an increase in the kernel volume which is approximately five times as large as that in the gas breakdown without particles. In addition, the laser-particle interaction can improve the laser energy efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Jie; Wang Yuming; Liu Yang, E-mail: jzhang7@gmu.ed
We have developed a computational software system to automate the process of identifying solar active regions (ARs) and quantifying their physical properties based on high-resolution synoptic magnetograms constructed from Michelson Doppler Imager (MDI; on board the SOHO spacecraft) images from 1996 to 2008. The system, based on morphological analysis and intensity thresholding, has four functional modules: (1) intensity segmentation to obtain kernel pixels, (2) a morphological opening operation to erase small kernels, which effectively removes ephemeral regions and magnetic fragments in decayed ARs, (3) region growing to extend kernels to full AR size, and (4) a morphological closing operation to merge/group regions with a small spatial gap. We calculate the basic physical parameters of the 1730 ARs identified by the automated system. The mean and maximum magnetic flux of individual ARs are 1.67 × 10^22 Mx and 1.97 × 10^23 Mx, while those per Carrington rotation are 1.83 × 10^23 Mx and 6.96 × 10^23 Mx, respectively. The frequency distributions of ARs with respect to both area and magnetic flux follow a log-normal function. However, when we decrease the detection thresholds and thus increase the number of detected ARs, the frequency distribution largely follows a power-law function. We also find that the equatorward drifting motion of the AR bands with the solar cycle can be described by a linear function superposed with intermittent reverse driftings. The average drifting speed over one solar cycle is 1.83° ± 0.04° yr^-1, or 0.708 ± 0.015 m s^-1.
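The four modules map naturally onto standard morphological image operations. A hedged sketch with scipy.ndimage, using illustrative thresholds and structuring-element sizes rather than the authors' calibrated values:

```python
import numpy as np
from scipy import ndimage

def detect_regions(mag, kernel_thresh, grow_thresh, size=3):
    """Four-step detection mirroring the modules described above:
    (1) intensity segmentation to kernel pixels, (2) opening to erase
    small kernels, (3) region growing out to a weaker threshold,
    (4) closing to merge regions separated by small gaps."""
    footprint = np.ones((size, size))
    kernels = np.abs(mag) > kernel_thresh                   # (1) segmentation
    kernels = ndimage.binary_opening(kernels, footprint)    # (2) erase small kernels
    weak = np.abs(mag) > grow_thresh
    lbl, _ = ndimage.label(weak)                            # (3) grow: keep weak
    keep = np.unique(lbl[kernels])                          #     components seeded
    grown = np.isin(lbl, keep[keep > 0])                    #     by surviving kernels
    return ndimage.binary_closing(grown, footprint)         # (4) merge small gaps
```

Region growing is realized here as connected-component selection: a weak-field component survives only if it contains at least one kernel pixel that survived the opening.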
NASA Astrophysics Data System (ADS)
Walrand, Stephan; Hanin, François-Xavier; Pauwels, Stanislas; Jamar, François
2012-07-01
Clinical trials of 177Lu-90Y therapy have used empirical activity ratios. Radionuclides (RN) with a larger maximal beta range could favourably replace 90Y. Our aim is to provide RN dose-deposition kernels and to compare the tumour control probability (TCP) of RN combinations. Dose kernels were derived by integration of the mono-energetic beta-ray dose distributions (computed using Monte Carlo) weighted by their respective beta spectra. Nine homogeneous spherical tumours (1-25 mm in diameter) and four spherical tumours including a lattice of cold, but alive, spheres (1, 3, 5, 7 mm in diameter) were modelled. The TCPs for 93Y, 90Y and 125Sn in combination with 177Lu in variable proportions (keeping the renal cortex biological effective dose constant) were derived by 3D dose kernel convolution. For a mean tumour-absorbed dose of 180 Gy, 2 mm homogeneous tumours and tumours including 3 mm diameter cold alive spheres were both well controlled (TCP > 0.9) using a 75-25% combination of 177Lu and 90Y activity. However, 125Sn-177Lu achieved a significantly better result by controlling 1 mm homogeneous tumours simultaneously with tumours including 5 mm diameter cold alive spheres. Clinical trials using RN combinations should use RN proportions tuned to the patient dosimetry. 125Sn production and its coupling to a somatostatin analogue appear feasible. Assuming similar pharmacokinetics, 125Sn is the best RN for combination with 177Lu in peptide receptor radiotherapy, justifying pharmacokinetic studies in rodents of 125Sn-labelled somatostatin analogues.
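The 3D dose-kernel convolution step used to derive the TCPs can be sketched as follows. This is a minimal illustration with a synthetic isotropic kernel and a point source, not the paper's Monte Carlo-derived beta kernels; all array sizes and the kernel fall-off are arbitrary assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

def absorbed_dose(activity, kernel):
    """Convolve a 3-D cumulated-activity map with a dose-deposition
    kernel; returns the absorbed-dose distribution on the same grid."""
    return fftconvolve(activity, kernel, mode="same")

# Synthetic stand-ins: an isotropic kernel falling off with radius, and
# a unit point source at the grid centre.
n = 33
c = n // 2
z, y, x = np.mgrid[:n, :n, :n]
r = np.sqrt((x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2)
kernel = np.exp(-r / 3.0)
kernel /= kernel.sum()           # normalise: unit total energy deposited

activity = np.zeros((n, n, n))
activity[c, c, c] = 1.0          # unit point source

dose = absorbed_dose(activity, kernel)
```

For a point source the result reproduces the kernel itself; for an extended tumour model the same call distributes the deposited energy around every voxel of activity.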
Code of Federal Regulations, 2010 CFR
2010-01-01
...— Damaged kernels 1 (percent) Foreign material (percent) Other grains (percent) Skinned and broken kernels....0 10.0 15.0 1 Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered against sound barley. Notes: Malting barley shall not be infested in accordance with...
Code of Federal Regulations, 2013 CFR
2013-01-01
... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...
Code of Federal Regulations, 2014 CFR
2014-01-01
... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...
7 CFR 810.205 - Grades and grade requirements for Two-rowed Malting barley.
Code of Federal Regulations, 2010 CFR
2010-01-01
... (percent) Maximum limits of— Wild oats (percent) Foreign material (percent) Skinned and broken kernels... Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered...
Lee, Chi-Yuan; Chan, Pin-Cheng; Lee, Chung-Ju
2010-01-01
Temperature, voltage and fuel flow distribution all contribute considerably to fuel cell performance. Conventional methods cannot accurately determine parameter changes inside a fuel cell. This investigation developed flexible, multi-functional micro sensors on a 40 μm-thick stainless steel foil substrate using micro-electro-mechanical systems (MEMS) techniques and embedded them in a proton exchange membrane fuel cell (PEMFC) to measure temperature, voltage and flow. Users can monitor and control the temperature, voltage and fuel flow distribution in the cell in situ, thereby increasing both fuel cell performance and lifetime.
NASA Astrophysics Data System (ADS)
Neamţu, Mihaela; Stoian, Dana; Navolan, Dan Bogdan
2014-12-01
In the present paper we provide a mathematical model that describes the hypothalamus-pituitary-thyroid axis in autoimmune (Hashimoto's) thyroiditis. Since there is a spatial separation between the thyroid and the pituitary gland in the body, time is needed for the transport of thyrotropin and thyroxine between the glands. Thus, distributed time delays are considered with both weak and Dirac kernels. The delayed model is analyzed with regard to its stability and bifurcation behavior. The last part contains numerical simulations that illustrate the effectiveness of our results and conclusions.
Spatial frequency performance limitations of radiation dose optimization and beam positioning
NASA Astrophysics Data System (ADS)
Stewart, James M. P.; Stapleton, Shawn; Chaudary, Naz; Lindsay, Patricia E.; Jaffray, David A.
2018-06-01
The flexibility and sophistication of modern radiotherapy treatment planning and delivery methods have advanced techniques to improve the therapeutic ratio. Contemporary dose optimization and calculation algorithms facilitate radiotherapy plans which closely conform the three-dimensional dose distribution to the target, with beam shaping devices and image guided field targeting ensuring the fidelity and accuracy of treatment delivery. Ultimately, dose distribution conformity is limited by the maximum deliverable dose gradient; shallow dose gradients challenge techniques to deliver a tumoricidal radiation dose while minimizing dose to surrounding tissue. In this work, this ‘dose delivery resolution’ observation is rigorously formalized for a general dose delivery model based on the superposition of dose kernel primitives. It is proven that the spatial resolution of a delivered dose is bounded by the spatial frequency content of the underlying dose kernel, which in turn defines a lower bound in the minimization of a dose optimization objective function. In addition, it is shown that this optimization is penalized by a dose deposition strategy which enforces a constant relative phase (or constant spacing) between individual radiation beams. These results are further refined to provide a direct, analytic method to estimate the dose distribution arising from the minimization of such an optimization function. The efficacy of the overall framework is demonstrated on an image guided small animal microirradiator for a set of two-dimensional hypoxia guided dose prescriptions.
Detection of ochratoxin A contamination in stored wheat using near-infrared hyperspectral imaging
NASA Astrophysics Data System (ADS)
Senthilkumar, T.; Jayas, D. S.; White, N. D. G.; Fields, P. G.; Gräfenhan, T.
2017-03-01
A near-infrared (NIR) hyperspectral imaging system was used to detect five concentration levels of ochratoxin A (OTA) in contaminated wheat kernels. Wheat kernels artificially inoculated with two different OTA-producing Penicillium verrucosum strains, two different non-toxigenic P. verrucosum strains, and sterile control wheat kernels were subjected to NIR hyperspectral imaging. The acquired three-dimensional data were reshaped into readable two-dimensional data. Principal Component Analysis (PCA) was applied to the two-dimensional data to identify the key wavelengths most significant for detecting OTA contamination in wheat. Statistical and histogram features extracted at the key wavelengths were used in linear, quadratic and Mahalanobis statistical discriminant models to differentiate between sterile controls, the five concentration levels of OTA contamination in wheat kernels, and five infection levels of non-OTA-producing P. verrucosum inoculated wheat kernels. The classification models differentiated sterile control samples from OTA-contaminated wheat kernels and non-OTA-producing P. verrucosum inoculated wheat kernels with 100% accuracy. The classification models also differentiated between the five concentration levels of OTA-contaminated wheat kernels, and between the five infection levels of non-OTA-producing P. verrucosum inoculated wheat kernels, with correct classification of more than 98%. The non-OTA-producing P. verrucosum inoculated wheat kernels and OTA-contaminated wheat kernels subjected to hyperspectral imaging showed different spectral patterns.
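PCA-based key-wavelength selection of the kind described above can be sketched roughly as below. The data cube is a random stand-in and the selection criterion (largest first-PC loading magnitudes) is one common choice, not necessarily the authors' exact procedure:

```python
import numpy as np

def key_wavelengths(cube, n_bands=5):
    """Pick the spectral bands with the largest loading magnitude on the
    first principal component of a (pixels x bands) hyperspectral matrix.
    `cube` has shape (rows, cols, bands); returns sorted band indices."""
    X = cube.reshape(-1, cube.shape[-1]).astype(float)
    X -= X.mean(axis=0)                      # centre each band
    # PCA via SVD of the centred data matrix
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    loadings = np.abs(vt[0])                 # first-PC loadings
    return np.sort(np.argsort(loadings)[-n_bands:])

# Synthetic cube in which band 7 carries extra variance (a stand-in for
# a wavelength that discriminates contaminated kernels).
rng = np.random.default_rng(0)
cube = rng.random((8, 8, 20))
cube[..., 7] += 5 * rng.random((8, 8))

bands = key_wavelengths(cube, n_bands=3)
```

Features extracted at the selected bands would then feed the discriminant models, as in the study.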
Application of kernel method in fluorescence molecular tomography
NASA Astrophysics Data System (ADS)
Zhao, Yue; Baikejiang, Reheman; Li, Changqing
2017-02-01
Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance can improve FMT reconstruction substantially. We have developed a kernel method to introduce anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. The fluorophore concentration at each node is then represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process, and the FMT reconstruction problem is converted into a kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance is obtained directly from the anatomical image and included in the forward modeling; one advantage is that we do not need to segment the anatomical image into targets and background.
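The kernelized forward model can be sketched in a few lines. Everything here is a synthetic stand-in (random sensitivity matrix, random anatomical features, a Gaussian kernel, a least-squares solve instead of the paper's iterative reconstruction); it only illustrates the algebra of reconstructing kernel coefficients and mapping them back to nodal concentrations:

```python
import numpy as np

def kernel_matrix(features, sigma=0.3):
    """Gaussian kernel between anatomical feature vectors of mesh nodes."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(1)
n_nodes, n_meas = 30, 20
features = rng.random((n_nodes, 3))   # stand-in for micro-CT-derived features
A = rng.random((n_meas, n_nodes))     # stand-in sensitivity matrix
K = kernel_matrix(features)

x_true = rng.random(n_nodes)          # "true" fluorophore concentration
b = A @ x_true                        # simulated measurements

# New system matrix A @ K; solve for the kernel coefficient vector alpha,
# then map back to nodal concentrations x = K @ alpha.
alpha, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
x_rec = K @ alpha
```

The point of the substitution is that `x_rec = K @ alpha` is forced to vary smoothly across nodes with similar anatomical features, which is how the anatomical prior enters without segmentation.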
NASA Technical Reports Server (NTRS)
Santavicca, D. A.; Steinberger, R. L.; Gibbons, K. A.; Citeno, J. V.; Mills, S.
1993-01-01
Results are presented from an experimental study of the effect of incomplete fuel-air mixing on the lean limit and emissions characteristics of a lean, prevaporized, premixed (LPP), coaxial mixing tube combustor. Two-dimensional exciplex fluorescence was used to characterize the degree of fuel vaporization and mixing at the combustor inlet under non-combusting conditions. These tests were conducted at a pressure of 4 atm, a temperature of 400 C, a mixer tube velocity of 100 m/sec and an equivalence ratio of 0.8, using a mixture of tetradecane, 1-methylnaphthalene and TMPD as a fuel simulant. Fuel-air mixtures with two distinct spatial distributions were studied. The exciplex measurements showed that there was a significant amount of unvaporized fuel at the combustor entrance in both cases. One case, however, exhibited a very non-uniform distribution of fuel liquid and vapor at the combustor entrance, i.e., with most of the fuel in the upper half of the combustor tube, while in the other case both the fuel liquid and vapor were much more uniformly distributed across the width of the combustor entrance. The lean limit and emissions measurements were all made at a pressure of 4 atm and a mixer tube velocity of 100 m/sec, using Jet A fuel and both fuel-air mixture distributions. Contrary to expectations, the better-mixed case was found to have a substantially leaner operating limit. The two mixture distributions also unexpectedly produced comparable NO(x) emissions for a given equivalence ratio and inlet temperature; however, lower NO(x) emissions were possible in the better-mixed case because of its leaner operating limit.
Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.
2014-01-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
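The core idea, damping each Laplacian eigencomponent of a surface signal with a heat-kernel weight exp(-λt), can be illustrated with a discrete graph analogue. A 1-D path-graph Laplacian stands in for the Laplace-Beltrami operator of a mesh; this is a generic sketch, not the authors' implementation:

```python
import numpy as np

def heat_kernel_smooth(L, f, t=1.0):
    """Smooth a signal f on a mesh/graph with Laplacian L by damping each
    Laplacian eigencomponent with the heat-kernel weight exp(-lam * t)."""
    lam, psi = np.linalg.eigh(L)          # eigenpairs of the symmetric L
    coeffs = psi.T @ f                    # expand f in the eigenbasis
    return psi @ (np.exp(-lam * t) * coeffs)

# Path-graph Laplacian as a 1-D stand-in for a surface mesh
n = 50
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0                 # Neumann-style boundary rows

rng = np.random.default_rng(2)
f = np.sin(np.linspace(0, np.pi, n)) + 0.3 * rng.standard_normal(n)
g = heat_kernel_smooth(L, f, t=2.0)
```

Because the solution is expressed analytically in the eigenbasis, no time-stepping of a diffusion PDE is needed, which is the numerical advantage the abstract highlights.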
Yao, H; Hruska, Z; Kincaid, R; Brown, R; Cleveland, T; Bhatnagar, D
2010-05-01
The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. Aflatoxin contamination in corn has been a long-standing problem plaguing the grain industry, with potentially devastating consequences for corn growers. In this study, aflatoxin-contaminated corn kernels were produced through artificial inoculation of corn ears in the field with toxigenic A. flavus spores. The kernel fluorescence emission data were taken with a fluorescence hyperspectral imaging system while corn kernels were excited with ultraviolet light. Raw fluorescence image data were preprocessed, and regions of interest in each image were created for all kernels. The regions of interest were used to extract spectral signatures and statistical information. The aflatoxin contamination level of single corn kernels was then chemically measured using affinity column chromatography. A fluorescence peak shift phenomenon was noted among groups of kernels with different aflatoxin contamination levels: the peak moved toward longer wavelengths in the blue region for the highly contaminated kernels and toward shorter wavelengths for the clean kernels. Highly contaminated kernels also had a lower fluorescence peak magnitude than less contaminated kernels. A general negative correlation was noted between measured aflatoxin and the fluorescence image bands in the blue and green regions. The coefficient of determination, r^2, was 0.72 for the multiple linear regression model. The multivariate analysis of variance found that the fluorescence means of the four aflatoxin groups, <1, 1-20, 20-100, and >=100 ng g^-1 (parts per billion), were significantly different from each other at the alpha = 0.01 level.
Classification accuracy under a two-class schema ranged from 0.84 to 0.91 when a threshold of either 20 or 100 ng g(-1) was used. Overall, the results indicate that fluorescence hyperspectral imaging may be applicable in estimating aflatoxin content in individual corn kernels.
Characterization of Transient Plasma Ignition Flame Kernel Growth for Varying Inlet Conditions
2009-12-01
Distribution unlimited. Pulse detonation engines (PDEs) have the... [garbled report front matter; abbreviations: NPS - Naval Postgraduate School; PDC - Pulse Detonation Combustor; PDE - Pulse Detonation Engine] ...produced little to no new chemical propulsion developments, only improvements to existing architectures. The Pulse Detonation Engine (PDE) is a
Sparsity-based image monitoring of crystal size distribution during crystallization
NASA Astrophysics Data System (ADS)
Liu, Tao; Huo, Yan; Ma, Cai Y.; Wang, Xue Z.
2017-07-01
To facilitate monitoring crystal size distribution (CSD) during a crystallization process by using an in-situ imaging system, a sparsity-based image analysis method is proposed for real-time implementation. To cope with image degradation arising from in-situ measurement subject to particle motion, solution turbulence, and uneven illumination background in the crystallizer, sparse representation of a real-time captured crystal image is developed based on using an in-situ image dictionary established in advance, such that the noise components in the captured image can be efficiently removed. Subsequently, the edges of a crystal shape in a captured image are determined in terms of the salience information defined from the denoised crystal images. These edges are used to derive a blur kernel for reconstruction of a denoised image. A non-blind deconvolution algorithm is given for the real-time reconstruction. Consequently, image segmentation can be easily performed for evaluation of CSD. The crystal image dictionary and blur kernels are timely updated in terms of the imaging conditions to improve the restoration efficiency. An experimental study on the cooling crystallization of α-type L-glutamic acid (LGA) is shown to demonstrate the effectiveness and merit of the proposed method.
Tebuconazole and Azoxystrobin Residue Behaviors and Distribution in Field and Cooked Peanut.
Hou, Fan; Teng, Peipei; Liu, Fengmao; Wang, Wenzhuo
2017-06-07
Residue behaviors of tebuconazole and azoxystrobin under field conditions and the variation of their residue levels during boiling were evaluated. Terminal residues in peanut kernels were determined using a modified QuEChERS (quick, easy, cheap, effective, rugged, and safe) method with a novel optimized purification procedure based on multiwalled carbon nanotubes (MWCNTs) and Fe3O4 magnetic nanoparticles (Fe3O4-MNPs) in the presence of an external magnetic field; the terminal residues were all at trace levels at harvest time. Residues in shells were detected as well to investigate the distribution within peanuts. Tebuconazole and azoxystrobin residue levels varied before and after boiling in kernels and shells to different degrees, owing to factors such as the modes of action and physicochemical properties of the pesticides. Residues transferred from peanut into the infusion during boiling, with a higher percentage for azoxystrobin because of its lower log Kow. The processing factors (PFs) for tebuconazole and azoxystrobin after processing were <1, indicating that home cooking as studied here can reduce residue levels in peanut. Risk assessment showed no health risk for consumers.
Classification of Phylogenetic Profiles for Protein Function Prediction: An SVM Approach
NASA Astrophysics Data System (ADS)
Kotaru, Appala Raju; Joshi, Ramesh C.
Predicting the function of an uncharacterized protein is a major challenge in the post-genomic era due to the problem's complexity and scale. Knowledge of protein function is a crucial link in the development of new drugs, better crops, and even biochemicals such as biofuels. Recently, numerous high-throughput experimental procedures have been invented to investigate the mechanisms leading to the accomplishment of a protein's function, and the phylogenetic profile is one of them. A phylogenetic profile is a representation of a protein that encodes its evolutionary history. In this paper we propose a method for classifying phylogenetic profiles using supervised machine learning, namely support vector machine (SVM) classification with a radial basis function (RBF) kernel, to identify functionally linked proteins. We experimentally evaluated the performance of the classifier with linear and polynomial kernels and compared the results with the existing tree kernel. In our study we used proteins of the budding yeast Saccharomyces cerevisiae genome: we generated the phylogenetic profiles of 2465 yeast genes and used the functional annotations available in the MIPS database. Our experiments show that the RBF kernel performs similarly to the polynomial kernel in some functional classes, both are better than the linear and tree kernels, and overall the RBF kernel outperforms the polynomial, linear, and tree kernels. In analyzing these results we show that it is feasible to use an SVM classifier with an RBF kernel to predict gene function from phylogenetic profiles.
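A kernel-comparison experiment of this shape can be sketched with scikit-learn. The data below are synthetic stand-ins for phylogenetic profiles (presence/absence vectors over genomes, with one class sharing a block of genomes to mimic co-evolving, functionally linked proteins); the real 2465-gene profiles and MIPS annotations are not used:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Toy stand-in for phylogenetic profiles.
rng = np.random.default_rng(3)
n_prot, n_genomes = 200, 40
y = rng.integers(0, 2, n_prot)
X = rng.random((n_prot, n_genomes)) < 0.3            # background presence
X[y == 1, :10] = rng.random((int((y == 1).sum()), 10)) < 0.8
X = X.astype(float)

# Cross-validated accuracy for each kernel, as in the paper's comparison
# (the tree kernel is omitted; it is not built into scikit-learn).
scores = {}
for kern in ("linear", "poly", "rbf"):
    scores[kern] = cross_val_score(SVC(kernel=kern), X, y, cv=5).mean()
```

On real profiles the relative ordering of the kernels, not the absolute accuracy on this toy data, is what the study measures.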
Steckel, S; Stewart, S D
2015-06-01
Ear-feeding larvae, such as corn earworm, Helicoverpa zea Boddie (Lepidoptera: Noctuidae), can be important insect pests of field corn, Zea mays L., by feeding on kernels. Recently introduced, stacked Bacillus thuringiensis (Bt) traits provide improved protection from ear-feeding larvae. Thus, our objective was to evaluate how injury to kernels in the ear tip might affect yield when this injury was inflicted at the blister and milk stages. In 2010, simulated corn earworm injury reduced total kernel weight (i.e., yield) at both the blister and milk stage. In 2011, injury to ear tips at the milk stage affected total kernel weight. No differences in total kernel weight were found in 2013, regardless of when or how much injury was inflicted. Our data suggested that kernels within the same ear could compensate for injury to ear tips by increasing in size, but this increase was not always statistically significant or sufficient to overcome high levels of kernel injury. For naturally occurring injury observed on multiple corn hybrids during 2011 and 2012, our analyses showed either no or a minimal relationship between number of kernels injured by ear-feeding larvae and the total number of kernels per ear, total kernel weight, or the size of individual kernels. The results indicate that intraear compensation for kernel injury to ear tips can occur under at least some conditions. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
THETRIS: A MICRO-SCALE TEMPERATURE AND GAS RELEASE MODEL FOR TRISO FUEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Ortensi; A.M. Ougouag
2011-12-01
The dominating mechanism in the passive safety of gas-cooled, graphite-moderated, high-temperature reactors (HTRs) is the Doppler feedback effect. These reactor designs are fueled with sub-millimeter-sized kernels formed into TRISO particles that are embedded in a graphite matrix. The best spatial and temporal representation of the feedback effect is obtained from an accurate approximation of the fuel temperature. Most accident scenarios in HTRs are characterized by large time constants and slow changes in the fuel and moderator temperature fields. In these situations a meso-scale (pebble- and compact-scale) solution provides a good approximation of the fuel temperature. Micro-scale models are necessary in order to obtain accurate predictions in faster transients or when parameters internal to the TRISO are needed. Since these coated particles constitute one of the fundamental design barriers against the release of fission products, it is important to understand the transient behavior inside this containment system. An explicit TRISO fuel temperature model named THETRIS has been developed and incorporated into the CYNOD-THERMIX-KONVEK suite of coupled codes. The code includes gas release models that provide a simple predictive capability for the internal pressure during transients. The new model yields results similar to those obtained with other micro-scale fuel models, but with the added capability to analyze gas release, internal pressure buildup, and the effects of a gap in the TRISO. The analyses show the instances in which micro-scale models improve the predictions of the fuel temperature and Doppler feedback. In addition, a sensitivity study of the potential effects of a gap on the transient behavior of high-temperature reactors is included. Although the formation of a gap occurs under special conditions, its consequences for the dynamic behavior of the reactor can cause unexpected responses during fast transients.
Nevertheless, the strong Doppler feedback forces the reactor to stabilize quickly.
Finite-frequency structural sensitivities of short-period compressional body waves
NASA Astrophysics Data System (ADS)
Fuji, Nobuaki; Chevrot, Sébastien; Zhao, Li; Geller, Robert J.; Kawai, Kenji
2012-07-01
We present an extension of the method recently introduced by Zhao & Chevrot for calculating Fréchet kernels from a precomputed database of strain Green's tensors by normal mode summation. The extension involves two aspects: (1) we compute the strain Green's tensors using the Direct Solution Method, which allows us to go up to frequencies as high as 1 Hz; and (2) we develop a spatial interpolation scheme so that the Green's tensors can be computed with a relatively coarse grid, thus improving the efficiency in the computation of the sensitivity kernels. The only requirement is that the Green's tensors be computed with a fine enough spatial sampling rate to avoid spatial aliasing. The Green's tensors can then be interpolated to any location inside the Earth, avoiding the need to store and retrieve strain Green's tensors for a fine sampling grid. The interpolation scheme not only significantly reduces the CPU time required to calculate the Green's tensor database and the disk space to store it, but also enhances the efficiency in computing the kernels by reducing the number of I/O operations needed to retrieve the Green's tensors. Our new implementation allows us to calculate sensitivity kernels for high-frequency teleseismic body waves with very modest computational resources such as a laptop. We illustrate the potential of our approach for seismic tomography by computing traveltime and amplitude sensitivity kernels for high frequency P, PKP and Pdiff phases. A comparison of our PKP kernels with those computed by asymptotic ray theory clearly shows the limits of the latter. With ray theory, it is not possible to model waves diffracted by internal discontinuities such as the core-mantle boundary, and it is also difficult to compute amplitudes for paths close to the B-caustic of the PKP phase. We also compute waveform partial derivatives for different parts of the seismic wavefield, a key ingredient for high resolution imaging by waveform inversion. 
Our computations of partial derivatives in the time window where PcP precursors are commonly observed show that the distribution of sensitivity is complex and counter-intuitive, with a large contribution from the mid-mantle region. This clearly emphasizes the need to use accurate and complete partial derivatives in waveform inversion.
Evidence-based Kernels: Fundamental Units of Behavioral Influence
Biglan, Anthony
2008-01-01
This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600
Integrating the Gradient of the Thin Wire Kernel
NASA Technical Reports Server (NTRS)
Champagne, Nathan J.; Wilton, Donald R.
2008-01-01
A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel itself, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the gradient of the wire kernel must be integrated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form
Ranking Support Vector Machine with Kernel Approximation
Dou, Yong
2017-01-01
Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problem. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. Primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms. PMID:28293256
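One of the two kernel approximations explored above, random Fourier features, can be sketched generically (the Rahimi-Recht construction for an RBF kernel; parameters and data are arbitrary, and this is not the authors' code):

```python
import numpy as np

def random_fourier_features(X, n_features=200, gamma=1.0, seed=0):
    """Map X to z(X) so that z(x) . z(y) approximates the RBF kernel
    exp(-gamma * ||x - y||^2) (random Fourier features)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.random((50, 5))
Z = random_fourier_features(X, n_features=2000, gamma=0.5)

# Compare the approximate kernel Z Z^T with the exact RBF kernel.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * d2)
K_approx = Z @ Z.T
err = np.abs(K_exact - K_approx).max()
```

After this mapping, a linear ranking model trained on `Z` (e.g. with a primal truncated Newton method, as in the paper) approximates the nonlinear kernel RankSVM without ever forming the full kernel matrix.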
Code of Federal Regulations, 2011 CFR
2011-04-01
... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...
Code of Federal Regulations, 2013 CFR
2013-04-01
... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...
Code of Federal Regulations, 2012 CFR
2012-04-01
... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...
Wigner functions defined with Laplace transform kernels.
Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George
2011-10-24
We propose a new Wigner-type phase-space function using Laplace transform kernels, the Laplace kernel Wigner function. Whereas the momentum variables of the traditional Wigner function are real, those of the Laplace kernel Wigner function may be complex. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase space. We show that the Laplace kernel Wigner function exhibits marginal properties similar to those of the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America
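For orientation, the traditional Wigner function and the Laplace-kernel variant differ only in the transform kernel. In one common optics convention (the exact normalization and sign conventions used by the authors may differ):

```latex
% Traditional Wigner function: real frequency (momentum) variable u
W(x, u) = \int f\!\left(x + \tfrac{x'}{2}\right)
          f^{*}\!\left(x - \tfrac{x'}{2}\right)
          e^{-i 2\pi u x'}\, \mathrm{d}x'

% Laplace kernel Wigner function: the Fourier kernel is replaced by a
% Laplace kernel with complex variable s, admitting e.g. evanescent waves
W_{L}(x, s) = \int f\!\left(x + \tfrac{x'}{2}\right)
              f^{*}\!\left(x - \tfrac{x'}{2}\right)
              e^{-s x'}\, \mathrm{d}x'
```

Setting s = i 2\pi u recovers the traditional Wigner function, which is why the marginal properties carry over.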
Online learning control using adaptive critic designs with sparse kernel machines.
Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo
2013-05-01
In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent variants, have been widely studied for online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to handle continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need improvement. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve both the generalization capability and the computational efficiency of the kernel machines, a sparsification method based on approximate linear dependence (ALD) analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Owing to the representation-learning and generalization capability of sparse kernel machines, KHDP and KDHP obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results on two nonlinear control problems, a continuous-action inverted pendulum and a ball-and-plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
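The ALD-based sparsification the abstract refers to admits a compact sketch: a sample joins the kernel dictionary only if its feature-space image cannot be approximated, within a tolerance, by a linear combination of the current dictionary. The kernel choice, threshold, and names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def ald_dictionary(samples, nu=0.01, gamma=1.0):
    """Greedy approximate-linear-dependence (ALD) sparsification.
    Keep a sample only if the squared residual of projecting its
    kernel feature onto the dictionary's span exceeds nu."""
    dictionary = [samples[0]]
    for x in samples[1:]:
        K = np.array([[rbf(a, b, gamma) for b in dictionary]
                      for a in dictionary])
        k_vec = np.array([rbf(d, x, gamma) for d in dictionary])
        # Projection coefficients; small jitter keeps K invertible
        coeffs = np.linalg.solve(K + 1e-10 * np.eye(len(K)), k_vec)
        delta = rbf(x, x, gamma) - k_vec @ coeffs  # residual error
        if delta > nu:
            dictionary.append(x)
    return np.array(dictionary)
```

Because the critic's weights then live on this small dictionary rather than on every observed state, both memory and per-step computation stay bounded during online learning.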
Influence of wheat kernel physical properties on the pulverizing process.
Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula
2014-10-01
The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar protein content (11.2-12.8 % w.b.), obtained from an organic farming system, were used for the analysis. The kernels (moisture content 10 % w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ·kg(-1) to 159 kJ·kg(-1). Many significant correlations (p < 0.05) were found between the physical properties of wheat kernels and the pulverizing process; in particular, the kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. Based on the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.
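The final regression step can be sketched generically with ordinary least squares. The data below are synthetic stand-ins (the paper's measured values, chosen regressors, and coefficients are not reproduced), so the fitted model is purely illustrative.

```python
import numpy as np

# Synthetic stand-in data for 19 cultivars; hypothetical columns:
# hardness index, vitreousness (%), ash content (%)
rng = np.random.default_rng(42)
X = np.column_stack([
    rng.uniform(20, 90, 19),    # hardness index
    rng.uniform(30, 95, 19),    # vitreousness
    rng.uniform(1.4, 2.1, 19),  # ash content
])
# Hypothetical linear response (average particle size, mm) plus noise
y = (0.12 + 0.002 * X[:, 0] + 0.001 * X[:, 1] + 0.05 * X[:, 2]
     + rng.normal(0, 0.005, 19))

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

With only 19 samples and several correlated regressors, such a fit is best read alongside the correlation analysis rather than as a standalone predictive model.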
Transfer Kernel Common Spatial Patterns for Motor Imagery Brain-Computer Interface Classification.
Dai, Mengxi; Zheng, Dezhi; Liu, Shucong; Zhang, Pengju
2018-01-01
Motor-imagery-based brain-computer interfaces (BCIs) commonly use the common spatial pattern (CSP) as a preprocessing step before classification. CSP is a supervised algorithm and therefore needs a large amount of time-consuming training data to build a model. To address this issue, one promising approach is transfer learning, which generalizes a learning model by extracting discriminative information from other subjects for the target classification task. To this end, we propose a transfer kernel CSP (TKCSP) approach that learns a domain-invariant kernel by directly matching the distributions of source subjects and target subjects. Dataset IVa of BCI Competition III is used to demonstrate the validity of the proposed method. In the experiments, we compare the classification performance of TKCSP against CSP, CSP for subject-to-subject transfer (CSP SJ-to-SJ), regularized CSP (RCSP), stationary-subspace CSP (ssCSP), multitask CSP (mtCSP), and the combined mtCSP and ssCSP (ss + mtCSP) method. The results indicate that TKCSP achieves a superior mean classification accuracy of 81.14%, especially when the source subjects have fewer training samples. Comprehensive experimental evidence on the dataset verifies the effectiveness and efficiency of the proposed TKCSP approach over several state-of-the-art methods.
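For context, the baseline CSP step that TKCSP builds on can be sketched as follows. This is a standard whitening-based CSP computation, not the authors' TKCSP implementation, and all names are illustrative.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_filters=2):
    """Common spatial patterns: spatial filters that maximize variance
    for one class while minimizing it for the other.
    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns rows of spatial filters, n_filters from each extreme."""
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whiten the composite covariance Cc = Ca + Cb
    d, U = np.linalg.eigh(Ca + Cb)
    P = U @ np.diag(1.0 / np.sqrt(d)) @ U.T
    # Eigenvectors of the whitened class-A covariance give the filters,
    # sorted by eigenvalue ascending (small = class B, large = class A)
    lam, V = np.linalg.eigh(P @ Ca @ P.T)
    W = (P @ V).T
    picks = np.concatenate([np.arange(n_filters),
                            np.arange(len(lam) - n_filters, len(lam))])
    return W[picks]
```

Filters are applied as `W @ x` per trial; log-variances of the filtered signals then serve as classification features. TKCSP's contribution, per the abstract, is learning the kernel across subjects rather than changing this spatial-filtering step itself.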