Science.gov

Sample records for heat kernel expansion

  1. Heat kernel asymptotic expansions for the Heisenberg sub-Laplacian and the Grushin operator

    PubMed Central

    Chang, Der-Chen; Li, Yutian

    2015-01-01

    The sub-Laplacian on the Heisenberg group and the Grushin operator are typical examples of sub-elliptic operators. Their heat kernels are both given in the form of Laplace-type integrals. By using Laplace's method, the method of stationary phase and the method of steepest descent, we derive the small-time asymptotic expansions for these heat kernels, which are related to the geodesic structure of the induced geometries. PMID:25792966
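
    For orientation, small-time heat kernel expansions of the kind derived in this work have (away from cut loci, and with sub-elliptic modifications of the prefactor in the Heisenberg/Grushin cases) the familiar Minakshisundaram-Pleijel structure; the generic elliptic-case statement is sketched below purely as a reminder, with d the geodesic (or Carnot-Caratheodory) distance:

      \[
        p_t(x,y) \;\sim\; \frac{e^{-d(x,y)^2/4t}}{(4\pi t)^{n/2}} \sum_{k \ge 0} a_k(x,y)\, t^{k},
        \qquad t \to 0^{+}.
      \]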

  2. Heat-kernel expansion on noncompact domains and a generalized zeta-function regularization procedure

    SciTech Connect

    Cognola, Guido; Elizalde, Emilio; Zerbini, Sergio

    2006-08-15

    Heat-kernel expansion and zeta function regularization are discussed for Laplace-type operators with discrete spectrum in noncompact domains. Since a general theory is lacking, the heat-kernel expansion is investigated by means of several examples. It is pointed out that for a class of exponential (analytic) interactions, generically the noncompactness of the domain gives rise to logarithmic terms in the heat-kernel expansion. Then, a meromorphic continuation of the associated zeta function is investigated. A simple model is considered, for which the analytic continuation of the zeta function is not regular at the origin, displaying a pole of higher order. For a physically meaningful evaluation of the related functional determinant, a generalized zeta function regularization procedure is proposed.
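
    As a reminder of the standard machinery involved (definitions only, not results specific to the models above): the zeta function is the Mellin transform of the heat trace, and deviations from the usual power-law small-t expansion, such as the logarithmic terms mentioned here, translate into higher-order poles of ζ(s), in particular at s = 0:

      \[
        \zeta(s) \;=\; \frac{1}{\Gamma(s)} \int_{0}^{\infty} t^{\,s-1}\, \operatorname{Tr} e^{-tA}\, dt,
        \qquad
        \operatorname{Tr} e^{-tA} \;\sim\; \sum_{n \ge 0} A_n\, t^{(n-d)/2} \;+\; \text{(possible $t^{\alpha}\ln t$ terms)}, \quad t \to 0^{+}.
      \]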

  3. An Irreducible Form for the Asymptotic Expansion Coefficients of the Heat Kernel of Fermions

    NASA Astrophysics Data System (ADS)

    Yajima, S.; Fukuda, M.; Tokuo, S.; Kubota, S.-I.; Higashida, Y.; Kamo, Y.

    2008-09-01

    We consider the asymptotic coefficients of the heat kernel for a spin-1/2 fermion interacting with all types of non-abelian boson fields, i.e. totally antisymmetric tensor fields, in even-dimensional Riemannian space. The coefficients are decomposed into irreducible matrices, which are the totally antisymmetric products of the γ-matrices. The form of the coefficients given by our method is useful for evaluating some fermionic anomalies.
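
    The decomposition referred to above is the standard expansion of Clifford-algebra-valued heat kernel coefficients in the basis of totally antisymmetrized products of γ-matrices; schematically, in even dimension d,

      \[
        a_q(x,x') \;=\; \sum_{k=0}^{d} \frac{1}{k!}\, a^{(k)}_{\mu_1\cdots\mu_k}(x,x')\, \gamma^{\mu_1\cdots\mu_k},
        \qquad
        \gamma^{\mu_1\cdots\mu_k} \equiv \gamma^{[\mu_1}\gamma^{\mu_2}\cdots\gamma^{\mu_k]},
      \]

    so that traces against this irreducible basis project out the individual tensor components.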

  4. On the asymptotic expansion of the Bergman kernel

    NASA Astrophysics Data System (ADS)

    Seto, Shoo

    Let (L, h) → (M, ω) be a polarized Kähler manifold. We define the Bergman kernel for H0(M, Lk), the space of holomorphic sections of high tensor powers of the line bundle L. In this thesis, we study the asymptotic expansion of the Bergman kernel. We consider the on-diagonal, near-diagonal and far off-diagonal cases, using L2 estimates to show the existence of the asymptotic expansion and to compute the coefficients in the on- and near-diagonal cases, and a heat kernel approach to show the exponential decay of the off-diagonal Bergman kernel for noncompact manifolds, assuming only a lower bound on Ricci curvature and C2 regularity of the metric.

  5. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template.
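
    A minimal numerical sketch of the weighted eigenfunction expansion described above, assuming the Laplace-Beltrami eigenvalues and eigenfunctions at the mesh vertices have already been computed; the bandwidth t, the placeholder basis and the data vector below are illustrative, not values from the paper.

      import numpy as np

      def heat_kernel_smooth(y, evals, evecs, t):
          """Heat kernel smoothing of vertex data y as a weighted eigenfunction expansion.

          y     : (n,) scalar signal on n mesh vertices
          evals : (m,) Laplace-Beltrami eigenvalues (ascending)
          evecs : (n, m) eigenfunctions sampled at the vertices (orthonormal columns)
          t     : diffusion time (kernel bandwidth)
          """
          beta = evecs.T @ y               # expansion coefficients of the data
          weights = np.exp(-evals * t)     # heat kernel weights exp(-lambda_j * t)
          return evecs @ (weights * beta)  # sum_j exp(-lambda_j t) beta_j psi_j

      # Illustrative call with random stand-ins for a real mesh spectrum:
      rng = np.random.default_rng(0)
      evecs, _ = np.linalg.qr(rng.standard_normal((1000, 200)))
      evals = np.linspace(0.0, 5.0, 200)
      y_smooth = heat_kernel_smooth(rng.standard_normal(1000), evals, evecs, t=1.0)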

  6. Heat kernel methods for Lifshitz theories

    NASA Astrophysics Data System (ADS)

    Barvinsky, Andrei O.; Blas, Diego; Herrero-Valea, Mario; Nesterov, Dmitry V.; Pérez-Nadal, Guillem; Steinwachs, Christian F.

    2017-06-01

    We study the one-loop covariant effective action of Lifshitz theories using the heat kernel technique. The characteristic feature of Lifshitz theories is an anisotropic scaling between space and time. This is enforced by the existence of a preferred foliation of space-time, which breaks Lorentz invariance. In contrast to the relativistic case, covariant Lifshitz theories are only invariant under diffeomorphisms preserving the foliation structure. We develop a systematic method to reduce the calculation of the effective action for a generic Lifshitz operator to an algorithm acting on known results for relativistic operators. In addition, we present techniques that drastically simplify the calculation for operators with special properties. We demonstrate the efficiency of these methods by explicit applications.
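
    For concreteness, the anisotropic scaling that defines a Lifshitz theory (with dynamical critical exponent z) and a typical free-field operator with that scaling are, schematically (an illustration only, not the general Lifshitz operators treated in the paper):

      \[
        t \to \lambda^{z}\, t, \qquad x^{i} \to \lambda\, x^{i},
        \qquad
        S \;=\; \frac{1}{2}\int dt\, d^{d}x \,\Big[\dot\phi^{2} - \phi\,(-\Delta)^{z}\phi\Big].
      \]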

  7. A Closed Formula for the Asymptotic Expansion of the Bergman Kernel

    NASA Astrophysics Data System (ADS)

    Xu, Hao

    2012-09-01

    We prove a graph theoretic closed formula for coefficients in the Tian-Yau-Zelditch asymptotic expansion of the Bergman kernel. The formula is expressed in terms of the characteristic polynomial of the directed graphs representing Weyl invariants. The proof relies on a combinatorial interpretation of a recursive formula due to M. Engliš and A. Loi.
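
    For reference, the Tian-Yau-Zelditch expansion whose coefficients are treated here has the on-diagonal form (a standard statement; ρ_k denotes the Bergman density function for L^k on an n-dimensional polarized Kähler manifold, and S the scalar curvature):

      \[
        \rho_k(x) \;\sim\; k^{n}\Big(1 + \frac{a_1(x)}{k} + \frac{a_2(x)}{k^{2}} + \cdots\Big),
        \qquad a_1 = \tfrac{1}{2}\,S .
      \]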

  8. Frostless heat pump having thermal expansion valves

    DOEpatents

    Chen, Fang C [Knoxville, TN; Mei, Viung C [Oak Ridge, TN

    2002-10-22

    A heat pump system having an operable relationship for transferring heat between an exterior atmosphere and an interior atmosphere via a fluid refrigerant and further having a compressor, an interior heat exchanger, an exterior heat exchanger, a heat pump reversing valve, an accumulator, a thermal expansion valve having a remote sensing bulb disposed in heat transferable contact with the refrigerant piping section between said accumulator and said reversing valve, an outdoor temperature sensor, and a first means for heating said remote sensing bulb in response to said outdoor temperature sensor thereby opening said thermal expansion valve to raise suction pressure in order to mitigate defrosting of said exterior heat exchanger wherein said heat pump continues to operate in a heating mode.

  9. Heat treatments of low expansion alloys

    SciTech Connect

    Smith, D.F. Jr.; Clatworthy, E.F.

    1984-05-01

    This patent is directed to an overaging heat treatment applied to age-hardenable nickel-cobalt-iron controlled-expansion alloys so as to impart high notch strength at temperatures on the order of about 1000 °F.

  10. Nondiagonal Values of the Heat Kernel for Scalars in a Constant Electromagnetic Field

    NASA Astrophysics Data System (ADS)

    Kalinichenko, I. S.; Kazinski, P. O.

    2017-03-01

    An original method for finding the nondiagonal values of the heat kernel associated with the wave operator Fourier-transformed in time is proposed for the case of a constant external electromagnetic field. The connection of the trace of such a heat kernel to the one-loop correction to the grand thermodynamic potential is indicated. The structure of its singularities is analyzed.
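
    For orientation only, the classic Schwinger proper-time result gives a closed form for the coincidence (diagonal) limit of the scalar heat kernel in a purely magnetic constant field; the nondiagonal, time-Fourier-transformed kernel studied above has no such elementary form and is not reproduced here. In four Euclidean dimensions, for a scalar of mass m in a constant magnetic field B,

      \[
        K(s;x,x) \;=\; \frac{1}{(4\pi s)^{2}}\,\frac{eBs}{\sinh(eBs)}\, e^{-m^{2}s}.
      \]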

  11. Functional properties of raw and heat processed cashew nut (Anacardium occidentale, L.) kernel protein isolates.

    PubMed

    Neto, V Q; Narain, N; Silva, J B; Bora, P S

    2001-08-01

    The functional properties viz. solubility, water and oil absorption, emulsifying and foaming capacities of the protein isolates prepared from raw and heat processed cashew nut kernels were evaluated. Protein solubility vs. pH profile showed the isoelectric point at pH 5 for both isolates. The isolate prepared from raw cashew nuts showed superior solubility at and above isoelectric point pH. The water and oil absorption capacities of the proteins were slightly improved by heat treatment of cashew nut kernels. The emulsifying capacity of the isolates showed solubility dependent behavior and was better for raw cashew nut protein isolate at pH 5 and above. However, heat treated cashew nut protein isolate presented better foaming capacity at pH 7 and 8 but both isolates showed extremely low foam stability as compared to that of egg albumin.

  12. Heat capacity and thermal expansion of water and helium

    NASA Astrophysics Data System (ADS)

    Putintsev, N. M.; Putintsev, D. N.

    2017-04-01

    Original expressions were established for the heat capacity CV and its components, and for the vibrational and configurational components of the thermal expansion coefficient. The values of CV, Cvib, Cconf, αvib and αconf were calculated for water and helium (4He).

  13. A Novel Cortical Thickness Estimation Method based on Volumetric Laplace-Beltrami Operator and Heat Kernel

    PubMed Central

    Wang, Gang; Zhang, Xiaofeng; Su, Qingtang; Shi, Jie; Caselli, Richard J.; Wang, Yalin

    2015-01-01

    Cortical thickness estimation in magnetic resonance imaging (MRI) is an important technique for research on brain development and neurodegenerative diseases. This paper presents a heat kernel based cortical thickness estimation algorithm, which is driven by the graph spectrum and the heat kernel theory, to capture the grey matter geometry information from the in vivo brain magnetic resonance (MR) images. First, we construct a tetrahedral mesh that matches the MR images and reflects the inherent geometric characteristics. Second, the harmonic field is computed by the volumetric Laplace-Beltrami operator and the direction of the streamline is obtained by tracing the maximum heat transfer probability based on the heat kernel diffusion. Thereby we can calculate the cortical thickness between corresponding points on the pial and white matter surfaces. The new method relies on intrinsic brain geometry structure and the computation is robust and accurate. To validate our algorithm, we apply it to study the thickness differences associated with Alzheimer’s disease (AD) and mild cognitive impairment (MCI) on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. Our preliminary experimental results on 151 subjects (51 AD, 45 MCI, 55 controls) show that the new algorithm can detect statistically significant differences among AD patients, MCI patients and healthy control subjects. Our computational framework is efficient and very general. It has the potential to be used for thickness estimation on any biological structures with clearly defined inner and outer surfaces. PMID:25700360
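
    A rough sketch of the heat kernel diffusion step described above, on a generic graph Laplacian standing in for the authors' tetrahedral mesh; the Laplacian construction, boundary handling and diffusion time here are placeholders, not the published implementation.

      import numpy as np
      from scipy.linalg import expm

      def max_transfer_step(L, sources, t=0.1):
          """From each source vertex, step to the vertex receiving the maximum
          heat transfer probability (one streamline-tracing step).

          L       : (n, n) graph Laplacian of a placeholder mesh
          sources : indices of vertices on the inner boundary surface
          t       : diffusion time of the heat kernel
          """
          H = expm(-t * L)              # heat kernel matrix exp(-t L)
          steps = {}
          for s in sources:
              p = H[s].copy()           # heat received from vertex s
              p[s] = -np.inf            # exclude staying at the source
              steps[s] = int(np.argmax(p))
          return steps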

  14. A novel cortical thickness estimation method based on volumetric Laplace-Beltrami operator and heat kernel.

    PubMed

    Wang, Gang; Zhang, Xiaofeng; Su, Qingtang; Shi, Jie; Caselli, Richard J; Wang, Yalin

    2015-05-01

    Cortical thickness estimation in magnetic resonance imaging (MRI) is an important technique for research on brain development and neurodegenerative diseases. This paper presents a heat kernel based cortical thickness estimation algorithm, which is driven by the graph spectrum and the heat kernel theory, to capture the gray matter geometry information from the in vivo brain magnetic resonance (MR) images. First, we construct a tetrahedral mesh that matches the MR images and reflects the inherent geometric characteristics. Second, the harmonic field is computed by the volumetric Laplace-Beltrami operator and the direction of the streamline is obtained by tracing the maximum heat transfer probability based on the heat kernel diffusion. Thereby we can calculate the cortical thickness between corresponding points on the pial and white matter surfaces. The new method relies on intrinsic brain geometry structure and the computation is robust and accurate. To validate our algorithm, we apply it to study the thickness differences associated with Alzheimer's disease (AD) and mild cognitive impairment (MCI) on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Our preliminary experimental results on 151 subjects (51 AD, 45 MCI, 55 controls) show that the new algorithm can detect statistically significant differences among AD patients, MCI patients and healthy control subjects. Our computational framework is efficient and very general. It has the potential to be used for thickness estimation on any biological structures with clearly defined inner and outer surfaces.

  15. Direct expansion solar collector and heat pump

    NASA Astrophysics Data System (ADS)

    1982-05-01

    A hybrid heat pump/solar collector combination in which solar collectors replace the outside air heat exchanger found in conventional air-to-air heat pump systems is discussed. The solar panels ordinarily operate at or below ambient temperature, eliminating the need to install the collector panels in a glazed and insulated enclosure. The collectors simply consist of a flat plate with a centrally located tube running longitudinally. Solar energy absorbed by exposed panels directly vaporizes the refrigerant fluid. The resulting vapor is compressed to higher temperature and pressure; then, it is condensed to release the heat absorbed during the vaporization process. Control and monitoring of the demonstration system are addressed, and the tests conducted with the demonstration system are described. The entire heat pump system is modelled, including predicted performance and costs, and economic comparisons are made with conventional flat-plate collector systems.

  16. Heat Pumps With Direct Expansion Solar Collectors

    NASA Astrophysics Data System (ADS)

    Ito, Sadasuke

    In this paper, studies of heat pump systems using solar collectors as the evaporators, carried out by researchers to date, are reviewed. Usually, a solar collector without any cover is preferable to one with a cover because of the need to absorb heat from the ambient air when the intensity of the solar energy on the collector is insufficient. The performance of the collector depends on its area and the intensity of the convective heat transfer at the surface. Fins are fixed on the back side of the collector surface or on the tube in which the refrigerant flows in order to increase the convective heat transfer. For the purpose of using a heat pump efficiently throughout the year, a compressor with variable capacity is applied. The solar-assisted heat pump can also be used for air conditioning at night during the summer, although only a few groups have studied cooling with solar-assisted heat pump systems. In Japan, one company has produced a system for hot water supply commercially, and another company has installed systems for air conditioning in buildings commercially.

  17. The heat kernel for two Aharonov-Bohm solenoids in a uniform magnetic field

    NASA Astrophysics Data System (ADS)

    Šťovíček, Pavel

    2017-01-01

    A non-relativistic quantum model is considered with a point particle carrying a charge e and moving in the plane pierced by two infinitesimally thin Aharonov-Bohm solenoids and subjected to a perpendicular uniform magnetic field of magnitude B. Relying on a technique, originally due to Schulman, Laidlaw and DeWitt, which is applicable to Schrödinger operators on multiply connected configuration manifolds, a formula is derived for the corresponding heat kernel. As an application of the heat kernel formula, approximate asymptotic expressions are derived for the lowest eigenvalue lying above the first Landau level and for the corresponding eigenfunction, while assuming that |eB|R2/(ħc) is large, where R is the distance between the two solenoids.

  18. Plasma heating via adiabatic magnetic compression-expansion cycle

    SciTech Connect

    Avinash, K.; Sengupta, M.; Ganesh, R.

    2016-06-15

    Heating of collisionless plasmas in a closed adiabatic magnetic cycle, comprising a quasi-static compression followed by a non-quasi-static constrained expansion against a constant external pressure, is proposed. Thermodynamic constraints are derived to show that the plasma always gains heat in cycles having at least one non-quasi-static process. The turbulent relaxation of the plasma to the equilibrium state at the end of the non-quasi-static expansion is discussed and verified via 1D Particle-in-Cell (PIC) simulations. Applications of this scheme to heating plasmas in open configurations (mirror machines) and closed configurations (tokamaks, reversed field pinches) are discussed.

  19. Asymptotic expansions of the kernel functions for line formation with continuous absorption

    NASA Technical Reports Server (NTRS)

    Hummer, D. G.

    1991-01-01

    Asymptotic expressions are obtained for the kernel functions M2(tau, alpha, beta) and K2(tau, alpha, beta) appearing in the theory of line formation with complete redistribution over a Voigt profile with damping parameter a, in the presence of a source of continuous opacity parameterized by beta. For a greater than 0, each coefficient in the asymptotic series is expressed as the product of analytic functions of a and eta. For Doppler broadening, only the leading term can be evaluated analytically.

  1. Investigation of direct expansion in ground source heat pumps

    NASA Astrophysics Data System (ADS)

    Kalman, M. D.

    A fully instrumented subscale ground-coupled heat pump system was developed, built, and used to test and obtain data on three different earth heat exchanger configurations under heating conditions (ground cooling). Various refrigerant flow control and compressor protection devices were tested for their applicability to the direct expansion system. Undisturbed earth temperature data were acquired at various depths. The problem of oil return at low evaporator temperatures and low refrigerant velocities was addressed. An analysis was performed to determine theoretically what evaporator temperature can be expected with an isolated ground pipe configuration of given length, pipe size, soil conditions and constant heat load. Technical accomplishments to date are summarized.

  2. Heat damage and in vitro starch digestibility of puffed wheat kernels.

    PubMed

    Cattaneo, Stefano; Hidalgo, Alyssa; Masotti, Fabio; Stuknytė, Milda; Brandolini, Andrea; De Noni, Ivano

    2015-12-01

    The effect of processing conditions on heat damage, starch digestibility, release of advanced glycation end products (AGEs) and antioxidant capacity of puffed cereals was studied. The determination of several markers arising from the Maillard reaction proved pyrraline (PYR) and hydroxymethylfurfural (HMF) to be the most reliable indices of the heat load applied during puffing. The considerable heat load was evidenced by the high levels of both PYR (57.6-153.4 mg kg(-1) dry matter) and HMF (13-51.2 mg kg(-1) dry matter). For cost and simplicity, HMF appeared to be the most appropriate index for puffed cereals. Puffing influenced in vitro starch digestibility, with most of the starch (81-93%) hydrolyzed to maltotriose, maltose and glucose, whereas only limited amounts of AGEs were released. The relevant antioxidant capacity revealed by digested puffed kernels can be ascribed both to the newly formed Maillard reaction products and to the conditions adopted during in vitro digestion.

  3. Towards a Holistic Cortical Thickness Descriptor: Heat Kernel-Based Grey Matter Morphology Signatures.

    PubMed

    Wang, Gang; Wang, Yalin

    2017-02-15

    In this paper, we propose a heat kernel based regional shape descriptor that may be capable of better exploiting volumetric morphological information than other available methods, thereby improving statistical power on brain magnetic resonance imaging (MRI) analysis. The mechanism of our analysis is driven by the graph spectrum and the heat kernel theory, to capture the volumetric geometry information in the constructed tetrahedral meshes. In order to capture profound brain grey matter shape changes, we first use the volumetric Laplace-Beltrami operator to determine the point pair correspondence between white-grey matter and CSF-grey matter boundary surfaces by computing the streamlines in a tetrahedral mesh. Secondly, we propose multi-scale grey matter morphology signatures to describe the transition probability by random walk between the point pairs, which reflects the inherent geometric characteristics. Thirdly, a point distribution model is applied to reduce the dimensionality of the grey matter morphology signatures and generate the internal structure features. With the sparse linear discriminant analysis, we select a concise morphology feature set with improved classification accuracies. In our experiments, the proposed work outperformed the cortical thickness features computed by FreeSurfer software in the classification of Alzheimer's disease and its prodromal stage, i.e., mild cognitive impairment, on publicly available data from the Alzheimer's Disease Neuroimaging Initiative. The multi-scale and physics based volumetric structure feature may bring stronger statistical power than some traditional methods for MRI-based grey matter morphology analysis. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Density-Aware Clustering Based on Aggregated Heat Kernel and Its Transformation

    DOE PAGES

    Huang, Hao; Yoo, Shinjae; Yu, Dantong; ...

    2015-06-01

    Current spectral clustering algorithms suffer from sensitivity to existing noise and parameter scaling, and may not be aware of different density distributions across clusters. If these problems are left untreated, the consequent clustering results cannot accurately represent true data patterns, in particular for complex real-world datasets with heterogeneous densities. This paper aims to solve these problems by proposing a diffusion-based Aggregated Heat Kernel (AHK) to improve clustering stability, and a Local Density Affinity Transformation (LDAT) to correct the bias originating from different cluster densities. AHK statistically models the heat diffusion traces along the entire time scale, so it ensures robustness during the clustering process, while LDAT probabilistically reveals the local density of each instance and suppresses the local density bias in the affinity matrix. Our proposed framework integrates these two techniques systematically. As a result, not only does it provide an advanced noise-resisting and density-aware spectral mapping of the original dataset, but it also demonstrates stability when tuning the scaling parameter (which usually controls the range of the neighborhood). Furthermore, our framework works well with the majority of similarity kernels, which ensures its applicability to many types of data and problem domains. Systematic experiments on different applications show that our proposed algorithms outperform state-of-the-art clustering algorithms for data with heterogeneous density distributions, and achieve robust clustering performance with respect to tuning the scaling parameter and handling various levels and types of noise.

  5. Density-Aware Clustering Based on Aggregated Heat Kernel and Its Transformation

    SciTech Connect

    Huang, Hao; Yoo, Shinjae; Yu, Dantong; Qin, Hong

    2015-06-01

    Current spectral clustering algorithms suffer from sensitivity to existing noise and parameter scaling, and may not be aware of different density distributions across clusters. If these problems are left untreated, the consequent clustering results cannot accurately represent true data patterns, in particular for complex real-world datasets with heterogeneous densities. This paper aims to solve these problems by proposing a diffusion-based Aggregated Heat Kernel (AHK) to improve clustering stability, and a Local Density Affinity Transformation (LDAT) to correct the bias originating from different cluster densities. AHK statistically models the heat diffusion traces along the entire time scale, so it ensures robustness during the clustering process, while LDAT probabilistically reveals the local density of each instance and suppresses the local density bias in the affinity matrix. Our proposed framework integrates these two techniques systematically. As a result, not only does it provide an advanced noise-resisting and density-aware spectral mapping of the original dataset, but it also demonstrates stability when tuning the scaling parameter (which usually controls the range of the neighborhood). Furthermore, our framework works well with the majority of similarity kernels, which ensures its applicability to many types of data and problem domains. Systematic experiments on different applications show that our proposed algorithms outperform state-of-the-art clustering algorithms for data with heterogeneous density distributions, and achieve robust clustering performance with respect to tuning the scaling parameter and handling various levels and types of noise.
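
    A rough sketch of the aggregated heat kernel idea described above, assuming a symmetric normalized graph Laplacian has already been built from the data; the aggregation below simply averages heat kernels over a grid of diffusion times, the clustering step uses scikit-learn's KMeans, and the local density transformation (LDAT) is omitted. The time grid and cluster count are illustrative.

      import numpy as np
      from sklearn.cluster import KMeans

      def ahk_cluster(L, n_clusters, times=np.linspace(0.1, 5.0, 20)):
          """Spectral clustering with a heat kernel affinity aggregated over time."""
          evals, evecs = np.linalg.eigh(L)
          # Aggregate exp(-t * lambda) over the whole time grid (mean as a crude integral).
          w = np.mean(np.exp(-np.outer(times, evals)), axis=0)
          # Embed with the most heavily weighted eigenvectors, then cluster.
          top = np.argsort(w)[::-1][:n_clusters]
          embedding = evecs[:, top] * w[top]
          return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embedding)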

  6. An Irreducible Form of Gamma Matrices for HMDS Coefficients of the Heat Kernel in Higher Dimensions

    NASA Astrophysics Data System (ADS)

    Fukuda, M.; Yajima, S.; Higashida, Y.; Kubota, S.; Tokuo, S.; Kamo, Y.

    2009-05-01

    The heat kernel method is used to calculate 1-loop corrections for a fermion interacting with general background fields. To apply the Hadamard-Minakshisundaram-DeWitt-Seeley (HMDS) coefficients a_q(x,x') of the heat kernel to the calculation of these corrections, it is useful to decompose the coefficients into tensorial components with irreducible matrices, which are the totally antisymmetric products of γ matrices. We present formulae for the tensorial forms of the γ-matrix-valued quantities X and Λ̃μν, and of their product and covariant derivative, in terms of the irreducible matrices in higher dimensions. The concrete forms of the HMDS coefficients obtained by repeated application of the formulae simplify the derivation of the loop corrections after the trace calculations, because each term in the coefficients contains one of the irreducible matrices and some of the terms are expressed through commutators and anticommutators with the generators of non-abelian gauge groups. The form of the third HMDS coefficient is useful for evaluating some of the fermionic anomalies in 6-dimensional curved space. We show that the new formulae appear in the chiral U(1) anomaly when the vector and the third-order tensor gauge fields do not commute.

  7. Weighted Riemannian 1-manifolds for classical orthogonal polynomials and their heat kernel

    NASA Astrophysics Data System (ADS)

    Crasmareanu, Mircea

    2015-12-01

    Through the eigenvalue problem we associate to the classical orthogonal polynomials two classes of weighted Riemannian 1-manifolds having the coordinate x. For the first class the eigenvalues contain x and the metric is fixed to be the Euclidean one, while for the second class the eigenvalues are independent of this variable and the metric and weight function are determined. The Hermite polynomials give the only case which generates the same manifold. The geometry of the second class of weighted manifolds is studied from several points of view: geodesics, distance and exponential map, harmonic functions and their energy density, volume, zeta function, and heat kernel. A partial heat equation is studied for these metrics and for the Poincaré ball model of hyperbolic geometry.

  8. Sustained and generalized extracellular fluid expansion following heat acclimation

    PubMed Central

    Patterson, Mark J; Stocks, Jodie M; Taylor, Nigel A S

    2004-01-01

    We measured intra- and extravascular body-fluid compartments in 12 resting males before (day 1; control), during (day 8) and after (day 22) a 3-week, exercise–heat acclimation protocol to investigate plasma volume (PV) changes. Our specific focus was upon the selective nature of the acclimation-induced PV expansion, and the possibility that this expansion could be sustained during prolonged acclimation. Acclimation was induced by cycling in the heat, and involved 16 treatment days (controlled hyperthermia (90 min); core temperature = 38.5°C) and three experimental exposures (40 min rest, 96.9 min (s.d. 9.5 min) cycling), each preceded by a rest day. The environmental conditions were a temperature of 39.8°C (s.d. 0.5°C) and relative humidity of 59.2% (s.d. 0.8%). On days 8 and 22, PV was expanded and maintained relative to control values (day 1: 44.0 ± 1.8; day 8: 48.8 ± 1.7; day 22: 48.8 ± 2.0 ml kg−1; P < 0.05). The extracellular fluid compartment (ECF) was equivalently expanded from control values on days 8 (279.6 ± 14.2 versus 318.6 ± 14.3 ml kg−1; n = 8; P < 0.05) and 22 (287.5 ± 10.6 versus 308.4 ± 14.8 ml kg−1; n = 12; P < 0.05). Plasma electrolyte, total protein and albumin concentrations were unaltered following heat acclimation (P > 0.05), although the total plasma content of these constituents was elevated (P < 0.05). The PV and interstitial fluid (ISF) compartments exhibited similar relative expansions on days 8 (15.0 ± 2.2% versus 14.7 ± 4.1%; P > 0.05) and 22 (14.4 ± 3.6% versus 6.4 ± 2.2%; P = 0.10). It is concluded that the acclimation-induced PV expansion can be maintained following prolonged heat acclimation. In addition, this PV expansion was not selective, but represented a ubiquitous expansion of the extracellular compartment. PMID:15218070

  9. Degradation of aflatoxins in peanut kernels/flour by gaseous ozonation and mild heat treatment.

    PubMed

    Proctor, A D; Ahmedna, M; Kumar, J V; Goktepe, I

    2004-08-01

    Aflatoxins occur naturally in many agricultural crops, causing health hazards and economic losses. Despite improved handling, processing and storage, they remain a problem in the peanut industry. Therefore, new ways to detoxify contaminated products are needed to limit economic/health impacts and add value to the peanut industry. The study was conducted (1) to evaluate the effectiveness of ozonation and mild heat in breaking down aflatoxins in peanut kernels and flour, and (2) to quantify aflatoxin destruction compared with untreated samples. Peanut samples were inoculated with known concentrations of aflatoxins B1, B2, G1 and G2. Samples were subjected to gaseous ozonation at various temperatures (25, 50 and 75 °C) and exposure times (5, 10 and 15 min). Ozonated and non-ozonated samples were extracted in acetonitrile/water, derivatized in a Kobra cell and quantified by high-performance liquid chromatography. Ozonation efficiency increased with higher temperatures and longer treatment times. Regardless of treatment combination, aflatoxins B1 and G1 exhibited the highest degradation levels. Higher levels of toxin degradation were achieved in peanut kernels than in flour. The temperature effect lessened as the exposure time increased, suggesting that ozonation at room temperature for 10-15 min could yield degradation levels similar to those achieved at higher temperatures while being more economical.

  10. Para-hydrogen and helium cluster size distributions in free jet expansions based on Smoluchowski theory with kernel scaling

    SciTech Connect

    Kornilov, Oleg; Toennies, J. Peter

    2015-02-21

    The size distribution of para-H2 (pH2) clusters produced in free jet expansions at a source temperature of T0 = 29.5 K and pressures of P0 = 0.9-1.96 bars is reported and analyzed according to a cluster growth model based on the Smoluchowski theory with kernel scaling. Good overall agreement is found between the measured distribution and the predicted shape, Nk = A k^a e^(-bk). The fit yields values for A and b for values of a derived from simple collision models. The small remaining deviations between measured abundances and theory imply a (pH2)k magic-number cluster at k = 13, as has been observed previously by Raman spectroscopy. The predicted linear dependence of b^-(a+1) on source gas pressure was verified and used to determine the value of the basic effective agglomeration reaction rate constant. A comparison of the corresponding effective growth cross sections σ11 with results from a similar analysis of He cluster size distributions indicates that the latter are much larger, by a factor of 6-10. An analysis of the three-body recombination rates, the geometric sizes, and the fact that the He clusters are liquid independent of their size can explain the larger cross sections found for He.
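
    A tiny sketch of how the kernel-scaling form quoted above can be fitted to measured abundances; the synthetic numbers below are placeholders, not the reported data, and the final comment records the pressure dependence stated in the abstract.

      import numpy as np
      from scipy.optimize import curve_fit

      def cluster_distribution(k, A, a, b):
          """Smoluchowski kernel-scaling form N_k = A * k**a * exp(-b*k)."""
          return A * k**a * np.exp(-b * k)

      k = np.arange(1.0, 40.0)
      # Placeholder "measured" abundances with a little noise, for illustration only.
      N_obs = cluster_distribution(k, 100.0, 1.0, 0.15)
      N_obs *= np.random.default_rng(1).normal(1.0, 0.05, k.size)
      (A_fit, a_fit, b_fit), _ = curve_fit(cluster_distribution, k, N_obs, p0=(50.0, 1.0, 0.1))
      # The growth model predicts b**-(a+1) to scale linearly with the source pressure P0.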

  11. Shape-Based Image Matching Using Heat Kernels and Diffusion Maps

    NASA Astrophysics Data System (ADS)

    Vizilter, Yu. V.; Gorbatsevich, V. S.; Rubis, A. Yu.; Zheltov, S. Yu.

    2014-08-01

    The 2D image matching problem is often stated as an image-to-shape or shape-to-shape matching problem. Such shape-based matching techniques should provide the matching of scene image fragments registered in various lighting, weather and season conditions or in different spectral bands. The most popular shape-to-shape matching technique is based on the mutual information approach. Another well-known approach is the morphological image-to-shape matching proposed by Pytiev. In this paper we propose a new image-to-shape matching technique based on heat kernels and diffusion maps. The corresponding Diffusion Morphology is proposed as a new generalization of the Pytiev morphological scheme. A fast implementation of morphological diffusion filtering is described. An experimental comparison of the new and the aforementioned shape-based matching techniques is reported for the TV and IR image matching problem.
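
    A compact sketch of the diffusion map construction named above, for a generic nonnegative symmetric affinity matrix between image fragments; the affinity construction, diffusion time and embedding dimension are placeholders rather than the authors' choices.

      import numpy as np

      def diffusion_map(W, t=1, dim=2):
          """Diffusion map embedding from an affinity matrix W (symmetric, nonnegative).

          Euclidean distance in the returned (n, dim) embedding approximates the
          diffusion distance at (integer) time t.
          """
          d = W.sum(axis=1)
          # Symmetrized diffusion operator D^{-1/2} W D^{-1/2}, same spectrum as the random walk.
          S = W / np.sqrt(np.outer(d, d))
          evals, evecs = np.linalg.eigh(S)
          order = np.argsort(evals)[::-1]
          evals, evecs = evals[order], evecs[:, order]
          psi = evecs / np.sqrt(d)[:, None]        # right eigenvectors of the random walk
          # Drop the trivial constant eigenvector; weight coordinates by eigenvalue**t.
          return psi[:, 1:dim + 1] * (evals[1:dim + 1] ** t)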

  12. Rotational Relaxation in Nonequilibrium Freejet Expansions of Heated Nitrogen

    NASA Technical Reports Server (NTRS)

    Gochberg, Lawrence A.; Hurlbut, Franklin C.; Arnold, James O. (Technical Monitor)

    1994-01-01

    Rotational temperatures have been measured in rarefied, nonequilibrium, heated freejet expansions of nitrogen using the electron beam fluorescence technique at the University of California at Berkeley Low Density Wind Tunnel facility. Spectroscopic measurements of the (0,0) band of the first negative system of nitrogen reveal the nonequilibrium behavior in the flowfield upstream of, and through, the Mach disk, which forms as the freejet expands into a region of finite back pressure. Results compare well with previous freejet expansion data and computations regarding location of the Mach disk and terminal rotational temperature in the expansion. Measurements are also presented for shock thickness based on the rotational temperature changes in the flow. Thickening shock layers, departures of rotational temperature from equilibrium in the expansion region, and downstream rotational temperature recovery much below that of an isentropic normal shock provide indications of the rarefied, nonequilibrium flow behavior. The data are analyzed to infer constant values of the rotational-relaxation collision number from 2.2 to 6.5 for the various flow conditions. Collision numbers are also calculated in a consistent manner for data from other investigations, for which a qualitative increase with increasing temperature is seen. Rotational-relaxation collision numbers are seen as not fully descriptive of the rarefied freejet flows. This may be due to the high degree of nonequilibrium in the flowfields, and/or to the use of a temperature-insensitive rotational-relaxation collision number model in the data analyses.

  13. Quantum elasticity of graphene: Thermal expansion coefficient and specific heat

    NASA Astrophysics Data System (ADS)

    Burmistrov, I. S.; Gornyi, I. V.; Kachorovskii, V. Yu.; Katsnelson, M. I.; Mirlin, A. D.

    2016-11-01

    We explore the thermodynamics of a quantum membrane, with a particular application to suspended graphene and a particular focus on the thermal expansion coefficient. We show that an interplay between quantum and classical anharmonicity-controlled fluctuations leads to unusual elastic properties of the membrane. The effect of quantum fluctuations is governed by the dimensionless coupling constant g0 ≪ 1, which vanishes in the classical limit (ℏ → 0) and is equal to ≃ 0.05 for graphene. We demonstrate that the thermal expansion coefficient αT of the membrane is negative and remains nearly constant down to extremely low temperatures, T0 ∝ exp(−2/g0). We also find that αT diverges in the classical limit: αT ∝ −ln(1/g0) for g0 → 0. For graphene parameters, we estimate the value of the thermal expansion coefficient as αT ≃ −0.23 eV−1, which applies below the temperature Tuv ~ g0ϰ0 ~ 500 K (where ϰ0 ~ 1 eV is the bending rigidity) down to T0 ~ 10−14 K. For T below T0, the thermal expansion coefficient slowly (logarithmically) approaches zero with decreasing temperature. This behavior is surprising since typically the thermal expansion coefficient goes to zero as a power-law function. We discuss possible experimental consequences of this anomaly. We also evaluate the classical and quantum contributions to the specific heat of the membrane and investigate the behavior of the Grüneisen parameter.

  14. Energy recovery during expansion of compressed gas using power plant low-quality heat sources

    DOEpatents

    Ochs, Thomas L.; O'Connor, William K.

    2006-03-07

    A method of recovering energy from a cool compressed gas, compressed liquid, vapor, or supercritical fluid is disclosed which includes incrementally expanding the compressed gas, compressed liquid, vapor, or supercritical fluid through a plurality of expansion engines and heating the gas, vapor, compressed liquid, or supercritical fluid entering at least one of the expansion engines with a low quality heat source. Expansion engines such as turbines and multiple expansions with heating are disclosed.
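
    A back-of-the-envelope illustration of the idea in this abstract: expanding in several stages and reheating between stages with a low-quality heat source recovers more work than a single expansion. Ideal-gas behavior, isentropic stages and round-number temperatures and pressures are assumed purely for illustration.

      # Specific work from staged isentropic expansion of an ideal gas with interstage reheat.
      cp, gamma = 1005.0, 1.4          # J/(kg K) and heat capacity ratio (air-like gas, assumed)
      T_in, T_reheat = 300.0, 350.0    # K; inlet and low-quality-heat reheat temperatures (assumed)
      p_ratio_total = 20.0             # overall expansion pressure ratio (assumed)

      def staged_work(n_stages):
          r = p_ratio_total ** (1.0 / n_stages)          # equal pressure ratio per stage
          work, T = 0.0, T_in
          for _ in range(n_stages):
              T_out = T * r ** (-(gamma - 1.0) / gamma)  # isentropic temperature drop
              work += cp * (T - T_out)                   # specific work in J/kg
              T = T_reheat                               # reheat before the next stage
          return work

      for n in (1, 2, 4):
          print(n, "stage(s):", round(staged_work(n) / 1000.0, 1), "kJ/kg")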

  15. Hypervelocity Heat-Transfer Measurements in an Expansion Tube

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Perkins, John N.

    1996-01-01

    A series of experiments has been conducted in the NASA HYPULSE Expansion Tube, in both CO2 and air test gases, in order to obtain data for comparison with computational results and to assess the capability for performing hypervelocity heat-transfer studies in this facility. Heat-transfer measurements were made in both test gases on 70 deg sphere-cone models and on hemisphere models of various radii. HYPULSE freestream flow conditions in these test gases were found to be repeatable to within 3-10%, and aerothermodynamic test times of 150 microsec in CO2 and 125 microsec in air were identified. Heat-transfer measurement uncertainty was estimated to be 10-15%. Comparisons were made with computational results from the non-equilibrium Navier-Stokes solver NEQ2D. Measured and computed heat-transfer rates agreed to within 10% on the hemispheres and on the sphere-cone forebodies, and to within 10% in CO2 and 25% in air on the afterbodies and stings of the sphere-cone models.

  16. Bergman Kernel from Path Integral

    NASA Astrophysics Data System (ADS)

    Douglas, Michael R.; Klevtsov, Semyon

    2010-01-01

    We rederive the expansion of the Bergman kernel on Kähler manifolds developed by Tian, Yau, Zelditch, Lu and Catlin, using path integral and perturbation theory, and generalize it to supersymmetric quantum mechanics. One physics interpretation of this result is as an expansion of the projector of wave functions on the lowest Landau level, in the special case that the magnetic field is proportional to the Kähler form. This is relevant for the quantum Hall effect in curved space, and for its higher dimensional generalizations. Other applications include the theory of coherent states, the study of balanced metrics, noncommutative field theory, and a conjecture on metrics in black hole backgrounds discussed in [24]. We give a short overview of these various topics. From a conceptual point of view, this expansion is noteworthy as it is a geometric expansion, somewhat similar to the DeWitt-Seeley-Gilkey et al short time expansion for the heat kernel, but in this case describing the long time limit, without depending on supersymmetry.

  17. Investigation of contact resistance for fin-tube heat exchanger by means of tube expansion

    NASA Astrophysics Data System (ADS)

    Hing, Yau Kar; Raghavan, Vijay R.; Meng, Chin Wai

    2012-06-01

    An experimental study on the heat transfer performance of a fin-tube heat exchanger after mechanical expansion of the tubes by bullets is reported in this paper. The manufacture of a fin-tube heat exchanger commonly involves inserting copper tubes into a stack of aluminium fins and expanding the tubes mechanically. The mechanical expansion is achieved by inserting a steel bullet through the tube. The steel bullet has a larger diameter than the tube, and the expansion provides a firm surface contact between fins and tubes. Five bullet expansion ratios (from 1.045 to 1.059) were used in the study to expand 9.52 mm diameter tubes in a fin-tube heat exchanger. The study was conducted on a water-to-water loop experimental rig under steady-state conditions. In addition, the effects of fin hardness and fin pitch were investigated. The results indicate that the optimum heat transfer occurred at a bullet expansion ratio ranging from 1.049 to 1.052. It is also observed that larger fin pitches require larger bullet expansion ratios, especially with lower fin hardness. As the fin pitch increases, both fin hardness tempers (H22 and H24) exhibit an increasing heat transfer rate per fin (W/fin). With the H22 temper, the increase is as much as 11%, while H24 increases by 1.2%.

  18. Acute volume expansion preserves orthostatic tolerance during whole-body heat stress in humans

    PubMed Central

    Keller, David M; Low, David A; Wingo, Jonathan E; Brothers, R Matthew; Hastings, Jeff; Davis, Scott L; Crandall, Craig G

    2009-01-01

    Whole-body heat stress reduces orthostatic tolerance via a yet to be identified mechanism(s). The reduction in central blood volume that accompanies heat stress may contribute to this phenomenon. The purpose of this study was to test the hypothesis that acute volume expansion prior to the application of an orthostatic challenge attenuates heat stress-induced reductions in orthostatic tolerance. In seven normotensive subjects (age, 40 ± 10 years: mean ±s.d.), orthostatic tolerance was assessed using graded lower-body negative pressure (LBNP) until the onset of symptoms associated with ensuing syncope. Orthostatic tolerance (expressed in cumulative stress index units, CSI) was determined on each of 3 days, with each day having a unique experimental condition: normothermia, whole-body heating, and whole-body heating + acute volume expansion. For the whole-body heating + acute volume expansion experimental day, dextran 40 was rapidly infused prior to LBNP sufficient to return central venous pressure to pre-heat stress values. Whole-body heat stress alone reduced orthostatic tolerance by ∼80% compared to normothermia (938 ± 152 versus 182 ± 57 CSI; mean ±s.e.m., P < 0.001). Acute volume expansion during whole-body heating completely ameliorated the heat stress-induced reduction in orthostatic tolerance (1110 ± 69 CSI, P < 0.001). Although heat stress results in many cardiovascular and neural responses that directionally challenge blood pressure regulation, reduced central blood volume appears to be an underlying mechanism responsible for impaired orthostatic tolerance in the heat-stressed human. PMID:19139044

  19. Heat pump systems with direct expansion ground coils

    NASA Astrophysics Data System (ADS)

    Svec, O. J.; Baxter, V. D.

    This paper is a summary of an international research project organized within the framework of the International Energy Agency (IEA) Implementing Agreement on Heat Pumps. This cooperative project, based on a task-sharing principle, was proposed by the Canadian team and joined by the national teams of the United States of America, Japan and Austria. The Institute for Research in Construction (IRC) of the National Research Council of Canada (NRCC) has been acting as the Operating Agent for this project, known as Annex XV. The need for this research work is based on the state of the art of Ground Source Heat Pump (GSHP) technology, which can be summarized by the following two statements: (1) GSHP technology is the most successful of all renewable technologies in North American and northern European countries; and (2) the installation cost of GSHP systems is currently too high for a meaningful worldwide penetration into the heating/cooling market.

  20. Eigenvalue Expansion Approach to Study Bio-Heat Equation

    NASA Astrophysics Data System (ADS)

    Khanday, M. A.; Nazir, Khalid

    2016-07-01

    A mathematical model based on the Pennes bio-heat equation was formulated to estimate temperature profiles at peripheral regions of the human body. The heat processes due to diffusion, perfusion and metabolic pathways were considered to establish the second-order partial differential equation together with initial and boundary conditions. The model was solved using the eigenvalue method, and numerical values of the physiological parameters were used to understand the thermal disturbance in the biological tissues. The results are illustrated at atmospheric temperatures TA = 10 °C and 20 °C.
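
    For reference, the Pennes bio-heat equation underlying the model has the standard form (ρ, c: tissue density and specific heat; k: thermal conductivity; ρ_b, c_b, ω_b: blood density, specific heat and perfusion rate; T_a: arterial temperature; q_m: metabolic heat generation), solved here with the stated initial and boundary conditions by an eigenvalue (eigenfunction) expansion:

      \[
        \rho c\, \frac{\partial T}{\partial t}
        \;=\; k\, \nabla^{2} T
        \;+\; \rho_b c_b\, \omega_b\, (T_a - T)
        \;+\; q_m .
      \]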

  1. The Effect of Homogenization Heat Treatment on Thermal Expansion Coefficient and Dimensional Stability of Low Thermal Expansion Cast Irons

    NASA Astrophysics Data System (ADS)

    Chen, Li-Hao; Liu, Zong-Pei; Pan, Yung-Ning

    2016-08-01

    In this paper, the effect of homogenization heat treatment on the α value [coefficient of thermal expansion (10-6 K-1)] of low thermal expansion cast irons was studied. In addition, constrained thermal cyclic tests were conducted to evaluate the dimensional stability of the low thermal expansion cast irons under various heat treatment conditions. The results indicate that when the alloys were homogenized at a relatively low temperature, e.g., 1023 K (750 °C), the elimination of Ni segregation was not very effective, but the C concentration in the matrix was moderately reduced. On the other hand, if the alloys were homogenized at a relatively high temperature, e.g., 1473 K (1200 °C), the opposite results were obtained. Consequently, not much improvement (reduction) in α value was achieved in either case. Therefore, a compound homogenization heat treatment procedure was designed, namely 1473 K (1200 °C)/4 hours/FC/1023 K (750 °C)/2 hours/WQ, in which a relatively high homogenization temperature of 1473 K (1200 °C) can effectively eliminate the Ni segregation, and a subsequent holding stage at 1023 K (750 °C) can reduce the C content in the matrix. As a result, very low α values of around (1 to 2) × 10-6 K-1 were obtained. Regarding the constrained thermal cyclic testing between 303 K and 473 K (30 °C and 200 °C), the results indicate that regardless of heat treatment condition, low thermal expansion cast irons exhibit far higher dimensional stability than either regular ductile cast iron or 304 stainless steel. Furthermore, a positive correlation exists between the α value over 303 K to 473 K and the amount of shape change after the thermal cyclic testing. Among the alloys investigated, Heat I-T3B (1473 K (1200 °C)/4 hours/FC/1023 K (750 °C)/2 hours/WQ) exhibits the lowest α value over 303 K to 473 K (1.72 × 10-6 K-1), and hence has the least shape change (7.41 μm), i.e., the best dimensional stability.

  2. The Statistical Interpretation of Classical Thermodynamic Heating and Expansion Processes

    ERIC Educational Resources Information Center

    Cartier, Stephen F.

    2011-01-01

    A statistical model has been developed and applied to interpret thermodynamic processes typically presented from the macroscopic, classical perspective. Through this model, students learn and apply the concepts of statistical mechanics, quantum mechanics, and classical thermodynamics in the analysis of the (i) constant volume heating, (ii)…

  4. Learning with Box Kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-04-12

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, since the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given which dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.

  5. Learning with box kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-11-01

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, because the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given that dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.

  6. Effects of City Expansion on Heat Stress under Climate Change Conditions

    PubMed Central

    Argüeso, Daniel; Evans, Jason P.; Pitman, Andrew J.; Di Luca, Alejandro

    2015-01-01

    We examine the joint contribution of urban expansion and climate change to heat stress over the Sydney region. A Regional Climate Model was used to downscale present (1990–2009) and future (2040–2059) simulations from a Global Climate Model. The effects of urban surfaces on local temperature and vapor pressure were included. The role of urban expansion in modulating the climate change signal at local scales was investigated using a human heat-stress index combining temperature and vapor pressure. Urban expansion and climate change lead to an increased risk of heat-stress conditions in the Sydney region, with substantially more frequent adverse conditions in urban areas. Impacts are particularly obvious in extreme values; daytime heat-stress impacts are more noticeable in the higher percentiles than in the mean values and the impact at night is more obvious in the lower percentiles than in the mean. Urban expansion enhances heat-stress increases due to climate change at night, but partly compensates its effects during the day. These differences are due to a stronger contribution from vapor pressure deficit during the day and from temperature increases during the night induced by urban surfaces. Our results highlight the inappropriateness of assessing human comfort using temperature changes alone and point to the likelihood that impacts of climate change assessed using models that lack urban surfaces probably underestimate future changes in terms of human comfort. PMID:25668390

  7. Effects of city expansion on heat stress under climate change conditions.

    PubMed

    Argüeso, Daniel; Evans, Jason P; Pitman, Andrew J; Di Luca, Alejandro

    2015-01-01

    We examine the joint contribution of urban expansion and climate change to heat stress over the Sydney region. A Regional Climate Model was used to downscale present (1990-2009) and future (2040-2059) simulations from a Global Climate Model. The effects of urban surfaces on local temperature and vapor pressure were included. The role of urban expansion in modulating the climate change signal at local scales was investigated using a human heat-stress index combining temperature and vapor pressure. Urban expansion and climate change lead to an increased risk of heat-stress conditions in the Sydney region, with substantially more frequent adverse conditions in urban areas. Impacts are particularly obvious in extreme values; daytime heat-stress impacts are more noticeable in the higher percentiles than in the mean values and the impact at night is more obvious in the lower percentiles than in the mean. Urban expansion enhances heat-stress increases due to climate change at night, but partly compensates its effects during the day. These differences are due to a stronger contribution from vapor pressure deficit during the day and from temperature increases during the night induced by urban surfaces. Our results highlight the inappropriateness of assessing human comfort using temperature changes alone and point to the likelihood that impacts of climate change assessed using models that lack urban surfaces probably underestimate future changes in terms of human comfort.

  8. Mixed convection in a horizontal porous duct with a sudden expansion and local heating from below

    SciTech Connect

    Yokoyama, Y.; Mahajan, R.L.; Kulacki, F.A.

    1999-08-01

    Results are reported for an experimental and numerical study of forced and mixed convective heat transfer in a liquid-saturated horizontal porous duct. The cross section of the duct has a sudden expansion, with a heated region on the lower surface downstream of and adjacent to the expansion. Within the framework of Darcy's formulation, the calculated and measured Nusselt numbers for 0.1 < Pe < 100 and 50 < Ra < 500 are in excellent agreement. Further, the calculated Nusselt numbers are very close to those for the bottom-heated flat duct. This finding has important implications for convective heat and mass transfer in geophysical systems and porous-matrix heat exchangers. Calculations were also carried out for glass-bead packed beds saturated with water using a non-Darcy formulation. The streamlines in forced convection indicate that, even with non-Darcy effects included, recirculation is not observed downstream of the expansion, and the heat transfer rate is decreased only marginally.

  9. Experimental analysis of direct-expansion ground-coupled heat pump systems

    NASA Astrophysics Data System (ADS)

    Mei, V. C.; Baxter, V. D.

    1991-09-01

    Direct-expansion ground-coil-coupled (DXGC) heat pump systems have certain energy efficiency advantages over conventional ground-coupled heat pump (GCHP) systems. Principal among these advantages are that the secondary heat transfer fluid heat exchanger and circulating pump are eliminated. While the DXGC concept can produce higher efficiencies, it also produces more system design and environmental problems (e.g., compressor starting, oil return, possible ground pollution, and more refrigerant charging). Furthermore, general design guidelines for DXGC systems are not well documented. A two-pronged approach was adopted for this study: (1) a literature survey, and (2) a laboratory study of a DXGC heat pump system with R-22 as the refrigerant, for both heating and cooling mode tests done in parallel and series tube connections. The results of each task are described in this paper. A set of general design guidelines was derived from the test results and is also presented.

  10. Pressurized heat treatment of glass-ceramic to control thermal expansion

    DOEpatents

    Kramer, Daniel P.

    1985-01-01

    A method of producing a glass-ceramic having a specified thermal expansion value is disclosed. The method includes the step of pressurizing the parent glass material to a predetermined pressure during heat treatment so that the glass-ceramic produced has a specified thermal expansion value. Preferably, the glass-ceramic material is isostatically pressed. A method for forming a strong glass-ceramic to metal seal is also disclosed in which the glass-ceramic is fabricated to have a thermal expansion value equal to that of the metal. The determination of the thermal expansion value of a parent glass material placed in a high-temperature environment is also used to determine the pressure in the environment.

  11. Bergman kernel, balanced metrics and black holes

    NASA Astrophysics Data System (ADS)

    Klevtsov, Semyon

    In this thesis we explore the connections between the Kahler geometry and Landau levels on compact manifolds. We rederive the expansion of the Bergman kernel on Kahler manifolds developed by Tian, Yau, Zelditch, Lu and Catlin, using path integral and perturbation theory. The physics interpretation of this result is as an expansion of the projector of wavefunctions on the lowest Landau level, in the special case that the magnetic field is proportional to the Kahler form. This is a geometric expansion, somewhat similar to the DeWitt-Seeley-Gilkey short time expansion for the heat kernel, but in this case describing the long time limit, without depending on supersymmetry. We also generalize this expansion to supersymmetric quantum mechanics and more general magnetic fields, and explore its applications. These include the quantum Hall effect in curved space, the balanced metrics and Kahler gravity. In particular, we conjecture that for a probe in a BPS black hole in type II strings compactified on Calabi-Yau manifolds, the moduli space metric is the balanced metric.

  12. Green Synthesis of Silicon Carbide Nanowhiskers by Microwave Heating of Blends of Palm Kernel Shell and Silica

    NASA Astrophysics Data System (ADS)

    Voon, C. H.; Lim, B. Y.; Gopinath, S. C. B.; Tan, H. S.; Tony, V. C. S.; Arshad, M. K. Md; Foo, K. L.; Hashim, U.

    2016-11-01

    Silicon carbide nanomaterials, especially silicon carbide nanowhiskers (SiCNWs), are known for excellent properties such as high thermal stability, good chemical inertness and excellent electronic properties. In this paper, a green synthesis of SiCNWs by microwave heating of blends of palm kernel shell (PKS) and silica was presented. The effect of the ratio of PKS to silica on the synthesis process was also studied and reported. Blends of PKS and silica in different ratios were mixed homogeneously in an ultrasonic bath for 2 hours using ethanol as the liquid medium. The blends were then dried on a hotplate to remove the ethanol and compressed into pellet form. Synthesis was conducted in a 2.45 GHz multimode cavity at 1400 °C for 40 minutes. X-ray diffraction revealed that β-SiC was detected for samples synthesized from blends with PKS-to-silica ratios of 5:1 and 7:1. FESEM images also show that SiCNWs with an average diameter of 70 nm were successfully formed from blends with PKS-to-silica ratios of 5:1 and 7:1. A vapour-liquid-solid (VLS) mechanism was proposed to explain the growth of SiCNWs from blends of PKS and silica.

  13. Debye temperature, thermal expansion, and heat capacity of TcC up to 100 GPa

    SciTech Connect

    Song, T.; Ma, Q.; Tian, J.H.; Liu, X.B.; Ouyang, Y.H.; Zhang, C.L.; Su, W.F.

    2015-01-15

    Highlights: • A number of thermodynamic properties of rocksalt TcC are investigated for the first time. • The quasi-harmonic Debye model is applied to take into account the thermal effect. • Pressure and temperature are considered up to about 100 GPa and 3000 K, respectively. Abstract: Debye temperature, thermal expansion coefficient, and heat capacity of ideal stoichiometric TcC in the rocksalt structure have been studied systematically by using the ab initio plane-wave pseudopotential density functional theory method within the generalized gradient approximation. Through the quasi-harmonic Debye model, in which the phononic effects are considered, the dependences of the Debye temperature, thermal expansion coefficient, constant-volume heat capacity, and constant-pressure heat capacity on pressure and temperature are successfully predicted. All the thermodynamic properties of TcC in the rocksalt phase have been predicted in the entire temperature range from 300 to 3000 K and at pressures up to 100 GPa.

  14. Negative thermal expansion and anomalies of heat capacity of LuB50 at low temperatures.

    PubMed

    Novikov, V V; Zhemoedov, N A; Matovnikov, A V; Mitroshenkov, N V; Kuznetsov, S V; Bud'ko, S L

    2015-09-28

    Heat capacity and thermal expansion of LuB50 boride were experimentally studied in the 2-300 K temperature range. The data reveal an anomalous contribution to the heat capacity at low temperatures. The value of this contribution is linearly proportional to temperature. This anomaly in heat capacity was identified as being caused by disorder in the LuB50 crystalline structure, and it can be described within the soft atomic potential (SAP) model. The parameters of the approximation were determined. The temperature dependence of the LuB50 heat capacity in the whole temperature range was approximated by the sum of the SAP contribution, a Debye component and two Einstein components. The parameters of the SAP contribution for LuB50 were compared to the corresponding values for LuB66, which was studied earlier. Negative thermal expansion at low temperatures was experimentally observed for LuB50. The analysis of the experimental temperature dependence of the Grüneisen parameter of LuB50 suggested that the low-frequency oscillations described by the SAP model are responsible for the negative thermal expansion. Thus, the glasslike character of the behavior of LuB50 thermal characteristics at low temperatures was confirmed.
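
    As a reading aid, here is a minimal sketch of how a heat-capacity model of the kind described above (a linear, SAP-like term plus one Debye and two Einstein components) can be assembled; the functional forms are the textbook ones, and all parameter values below are placeholders rather than the fitted values from the paper.

        import numpy as np
        from scipy.integrate import quad

        R = 8.314  # gas constant, J/(mol K)

        def debye_c(T, theta_D, n):
            """Debye heat capacity for n atoms per formula unit."""
            integrand = lambda x: x**4 * np.exp(x) / (np.exp(x) - 1.0)**2
            integral, _ = quad(integrand, 0.0, theta_D / T)
            return 9.0 * n * R * (T / theta_D)**3 * integral

        def einstein_c(T, theta_E, n):
            """Einstein heat capacity for n oscillators per formula unit."""
            y = theta_E / T
            return 3.0 * n * R * y**2 * np.exp(y) / (np.exp(y) - 1.0)**2

        def model_c(T, a_sap, theta_D, n_D, theta_E1, n_E1, theta_E2, n_E2):
            """Linear (SAP-like) term plus Debye and two Einstein components."""
            return (a_sap * T
                    + debye_c(T, theta_D, n_D)
                    + einstein_c(T, theta_E1, n_E1)
                    + einstein_c(T, theta_E2, n_E2))

        # Placeholder parameters only, chosen for illustration:
        print(model_c(50.0, a_sap=0.01, theta_D=1100.0, n_D=48,
                      theta_E1=150.0, n_E1=1, theta_E2=400.0, n_E2=2))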

  15. Negative thermal expansion and anomalies of heat capacity of LuB50 at low temperatures

    DOE PAGES

    Novikov, V. V.; Zhemoedov, N. A.; Matovnikov, A. V.; ...

    2015-07-20

    Heat capacity and thermal expansion of LuB50 boride were experimentally studied in the 2–300 K temperature range. The data reveal an anomalous contribution to the heat capacity at low temperatures. The value of this contribution is linearly proportional to temperature. This anomaly in heat capacity was identified as being caused by disorder in the LuB50 crystalline structure, and it can be described within the soft atomic potential (SAP) model. The parameters of the approximation were determined. The temperature dependence of the LuB50 heat capacity in the whole temperature range was approximated by the sum of the SAP contribution, a Debye component and two Einstein components. The parameters of the SAP contribution for LuB50 were compared to the corresponding values for LuB66, which was studied earlier. Negative thermal expansion at low temperatures was experimentally observed for LuB50. The analysis of the experimental temperature dependence of the Grüneisen parameter of LuB50 suggested that the low-frequency oscillations described by the SAP model are responsible for the negative thermal expansion. As a result, the glasslike character of the behavior of LuB50 thermal characteristics at low temperatures was confirmed.

  16. Collisional heating and adiabatic expansion of warm dense matter with intense relativistic electrons

    NASA Astrophysics Data System (ADS)

    Coleman, J. E.; Colgan, J.

    2017-07-01

    Adiabatic expansion of a warm dense Ti plasma has been observed after isochoric heating of a 100-μm-thick Ti foil with an ~100-ns-long intense relativistic electron bunch at an energy of 19.8 MeV and a current of 1.7 kA. The expansion fits well with the analytical point-source solution. After 10 J is deposited and the plasma rapidly expands out of the warm dense phase, a stable degenerate plasma (T ~ 1.2 eV) with n_e > 10^17 cm^-3 is measured for >100 ns. This is the first temporal measurement of the generation and adiabatic expansion of a large volume (3 × 10^-4 cm^3) of warm dense plasma isochorically heated by intense monochromatic electrons. The suite of diagnostics is presented, which includes time-resolved plasma plume expansion measurements on a single shot, visible spectroscopy measurements of the emission and absorption spectrum, measurements of the beam distribution, and plans for the future.

  17. Was plio-pleistocene hominid brain expansion a pleiotropic effect of adaptation to heat stress?

    PubMed

    Eckhardt, R B

    1987-09-01

    This paper examines the hypothesis (Fiałkowski 1978, 1986) that hominid brain expansion was largely a side effect of an evolutionary response to increased heat stress under conditions of primitive hunting, with reduction in reliability of brain components due to a rise in temperature having been offset by increases in the number of cerebral sub-units and interconnections among them. Fiałkowski's hypothesis is shown here to be based on measurements that are seriously inaccurate, and the explanatory mechanism to be contradicted by existing data on response to heat stress by smaller-brained nonhuman primates.

  18. Heat capacity and thermal expansion of icosahedral lutetium boride LuB66

    SciTech Connect

    Novikov, V V; Avdashchenko, D V; Matovnikov, A V; Mitroshenkov, N V; Bud’ko, S L

    2014-01-07

    The experimental values of heat capacity and thermal expansion for lutetium boride LuB66 in the temperature range of 2-300 K were analysed in the Debye-Einstein approximation. It was found that the vibration of the boron sub-lattice can be considered within the Debye model with high characteristic temperatures; low-frequency vibration of weakly connected metal atoms is described by the Einstein model.

  19. A thermodynamic view on latent heat transport, expansion work of water vapor and irreversible moist processes.

    NASA Astrophysics Data System (ADS)

    Pauluis, O.

    2001-05-01

    Three aspects of moist convection are discussed here: the latent heat transport from the Earth's surface to the regions where water vapor condenses, the expansion work performed by water vapor during its ascent, and the irreversible entropy production due to diffusion of water vapor and phase changes. A thermodynamic relationship between these three aspects of moist convection, referred here to as the entropy budget of the water substance, is derived. This relationship is similar to the entropy budget of an imperfect heat engine that produces less work than the corresponding Carnot cycle because of the irreversibility associated with diffusion of water vapor and irreversible phase changes. In addition to behaving as a heat engine, moist convection also acts as an atmospheric dehumidifier that removes water from the atmosphere through condensation and precipitation. In statistical equilibrium, this dehumidification is balanced by a continuous moistening of dry air, associated at the microphysical scales with diffusion of water vapor and irreversible phase changes. The irreversible entropy production due to these moist processes can thus be viewed as the irreversible counterpart to the atmospheric dehumidification. The entropy budget of the water substance thus indicates that there is a competition between how much the latent heat transport behaves as an atmospheric dehumidifier, and how much it behaves as a heat engine. Scaling arguments show that for conditions typical of the tropical atmosphere, the expansion work of water vapor accounts for about one third of the work that would be performed by a corresponding Carnot cycle. This implies that the latent heat transport acts more as an atmospheric dehumidifier than as a heat engine. This also implies that the amount of work performed by moist convection should be much weaker than what has been predicted by earlier theories that assume that convection behaves mostly as a perfect heat engine.
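
    The "imperfect heat engine" analogy above can be written compactly; this schematic identity is standard thermodynamics and is not quoted from the paper:

        W = Q_{in}\left(1 - \frac{T_{out}}{T_{in}}\right) - T_{out}\,\Delta S_{irr}, \qquad \Delta S_{irr} \ge 0,

    so the mechanical work W falls short of the Carnot value by T_out ΔS_irr, where here the irreversible entropy production ΔS_irr is dominated by diffusion of water vapor and irreversible phase changes.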

  20. Combining Lactic Acid Spray with Near-Infrared Radiation Heating To Inactivate Salmonella enterica Serovar Enteritidis on Almond and Pine Nut Kernels

    PubMed Central

    Ha, Jae-Won

    2015-01-01

    The aim of this study was to investigate the efficacy of near-infrared radiation (NIR) heating combined with lactic acid (LA) sprays for inactivating Salmonella enterica serovar Enteritidis on almond and pine nut kernels and to elucidate the mechanisms of the lethal effect of the NIR-LA combined treatment. Also, the effect of the combination treatment on product quality was determined. Separately prepared S. Enteritidis phage type (PT) 30 and non-PT 30 S. Enteritidis cocktails were inoculated onto almond and pine nut kernels, respectively, followed by treatments with NIR or 2% LA spray alone, NIR with distilled water spray (NIR-DW), and NIR with 2% LA spray (NIR-LA). Although surface temperatures of nuts treated with NIR were higher than those subjected to NIR-DW or NIR-LA treatment, more S. Enteritidis survived after NIR treatment alone. The effectiveness of NIR-DW and NIR-LA was similar, but significantly more sublethally injured cells were recovered from NIR-DW-treated samples. We confirmed that the enhanced bactericidal effect of the NIR-LA combination may not be attributable to cell membrane damage per se. NIR heat treatment might allow S. Enteritidis cells to become permeable to applied LA solution. The NIR-LA treatment (5 min) did not significantly (P > 0.05) cause changes in the lipid peroxidation parameters, total phenolic contents, color values, moisture contents, and sensory attributes of nut kernels. Given the results of the present study, NIR-LA treatment may be a potential intervention for controlling food-borne pathogens on nut kernel products. PMID:25911473

  1. Combining Lactic Acid Spray with Near-Infrared Radiation Heating To Inactivate Salmonella enterica Serovar Enteritidis on Almond and Pine Nut Kernels.

    PubMed

    Ha, Jae-Won; Kang, Dong-Hyun

    2015-07-01

    The aim of this study was to investigate the efficacy of near-infrared radiation (NIR) heating combined with lactic acid (LA) sprays for inactivating Salmonella enterica serovar Enteritidis on almond and pine nut kernels and to elucidate the mechanisms of the lethal effect of the NIR-LA combined treatment. Also, the effect of the combination treatment on product quality was determined. Separately prepared S. Enteritidis phage type (PT) 30 and non-PT 30 S. Enteritidis cocktails were inoculated onto almond and pine nut kernels, respectively, followed by treatments with NIR or 2% LA spray alone, NIR with distilled water spray (NIR-DW), and NIR with 2% LA spray (NIR-LA). Although surface temperatures of nuts treated with NIR were higher than those subjected to NIR-DW or NIR-LA treatment, more S. Enteritidis survived after NIR treatment alone. The effectiveness of NIR-DW and NIR-LA was similar, but significantly more sublethally injured cells were recovered from NIR-DW-treated samples. We confirmed that the enhanced bactericidal effect of the NIR-LA combination may not be attributable to cell membrane damage per se. NIR heat treatment might allow S. Enteritidis cells to become permeable to applied LA solution. The NIR-LA treatment (5 min) did not significantly (P > 0.05) cause changes in the lipid peroxidation parameters, total phenolic contents, color values, moisture contents, and sensory attributes of nut kernels. Given the results of the present study, NIR-LA treatment may be a potential intervention for controlling food-borne pathogens on nut kernel products. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  2. High efficiency, quasi-instantaneous steam expansion device utilizing fossil or nuclear fuel as the heat source

    SciTech Connect

    Claudio Filippone, Ph.D.

    1999-06-01

    Thermal-hydraulic analysis of a specially designed steam expansion device (heat cavity) was performed to prove the feasibility of steam expansions at elevated rates for power generation with higher efficiency. The steam expansion process inside the heat cavity greatly depends on the gap within which the steam expands and accelerates. This system can be seen as a miniaturized boiler integrated inside the expander where steam (or the proper fluid) is generated almost instantaneously prior to its expansion in the work-producing unit. Relatively cold water is pulsed inside the heat cavity, where the heat transferred causes the water to flash to steam, thereby increasing its specific volume by a large factor. The gap inside the heat cavity forms a special nozzle-shaped system in which the fluid expands rapidly, accelerating toward the system outlet. The expansion phenomenon is the cause of ever-increasing fluid speed inside the cavity system, eliminating the need for moving parts (pumps, valves, etc.). In fact, the subsequent velocity induced by the sudden fluid expansion causes turbulent conditions, forcing accelerating Reynolds and Nusselt numbers which, in turn, increase the convective heat transfer coefficient. When the combustion of fossil fuels constitutes the heat source, the heat cavity concept can be applied directly inside the stator of conventional turbines, thereby greatly increasing the overall system efficiency.

  4. Heat Transfer and Fluid Dynamics Measurements in the Expansion Space of a Stirling Cycle Engine

    NASA Technical Reports Server (NTRS)

    Jiang, Nan; Simon, Terrence W.

    2006-01-01

    The heater (or acceptor) of a Stirling engine, where most of the thermal energy is accepted into the engine by heat transfer, is the hottest part of the engine. Almost as hot is the adjacent expansion space of the engine. In the expansion space, the flow is oscillatory, impinging on a two-dimensional concavely-curved surface. Knowing the heat transfer on the inside surface of the engine head is critical to the engine design for efficiency and reliability. However, the flow in this region is not well understood and support is required to develop the CFD codes needed to design modern Stirling engines of high efficiency and power output. The present project is to experimentally investigate the flow and heat transfer in the heater head region. Flow fields and heat transfer coefficients are measured to characterize the oscillatory flow as well as to supply experimental validation for the CFD Stirling engine design codes. Presented also is a discussion of how these results might be used for heater head and acceptor region design calculations.

  5. The effects of volcanic eruptions on simulated ocean heat content and thermal expansion

    NASA Astrophysics Data System (ADS)

    Gleckler, P.; Achutarao, K.; Barnett, T.; Gregory, J.; Pierce, D.; Santer, B.; Taylor, K.; Wigley, T.

    2006-12-01

    We examine the ocean heat content in a recent suite of coupled ocean-atmosphere model simulations of the 20th Century. Our results suggest that 20th Century increases in ocean heat content and sea-level (via thermal expansion) were substantially reduced by the 1883 eruption of Krakatoa. The volcanically-induced cooling of the ocean surface is subducted into deeper ocean layers, where it persists for decades. Temporary reductions in ocean heat content associated with the comparable eruptions of El Chichon (1982) and Pinatubo (1991) were much shorter lived because they occurred relative to a non-stationary background of large, anthropogenically-forced ocean warming. To understand the response of these simulations to volcanic loadings, we focus on multiple realizations of the 20th Century experiment with three models (NCAR CCSM3, GFDL 2.0, and GISS HYCOM). By comparing these runs to control simulations of each model, we track the three dimensional oceanic response to Krakatoa using S/N analysis. Inter-model differences in the oceanic thermal response to Krakatoa are large and arise from differences in external forcing, model physics, and experimental design. Our results suggest that inclusion of the effects of Krakatoa (and perhaps even earlier eruptions) is important for reliable simulation of 20th century ocean heat uptake and thermal expansion. Systematic experimentation will be required to quantify the relative importance of these factors.

  6. Thermal Expansion, Specific Heat and Magnetostriction Measurements on R-Copper

    NASA Astrophysics Data System (ADS)

    Chien, Teh-Shih

    The RCu (R = Gd, Tb, Dy and Ho) and R2In (R = Gd and Tb) alloys have been systematically studied by thermal expansion, specific heat and magnetostriction measurements in order to investigate their magnetic and physical behaviors. GdCu and TbCu alloys undergo martensitic transformations at high and low temperatures. The Neel temperature of the GdCu alloy is 141.3 K from thermal expansion measurements. The Neel temperature T_N and martensitic transformation temperature M_s are 113.6 K and 116 K, respectively, for the TbCu alloy. This is the first study to distinguish T_N from M_s using thermal expansion and specific heat measurements as well as a large thermal hysteresis. Both GdCu and TbCu alloys have a first-order structural transformation and a second-order magnetic phase transition. The DyCu alloy has T_N = 60.5 K. The magnetic specific heat, C_m, is a function of T^3, which obeys spin-wave theory. The HoCu alloy has T_N = 26 K and a spin reorientation at 14.1 K. The YCu alloy has a Debye temperature of 230 K and C_e = 0.002T J/(mol K). The Debye temperature is 160 K for all RCu alloys except for the DyCu alloy, which has θ = 150 K. The Gd2In alloy has T_N = 97 K and T_C = 190.3 K, which are associated with the antiferromagnetic and ferromagnetic transitions, respectively, from thermal expansion and magnetostriction measurements. Gd2In is a metamagnet with a critical magnetic field H = 8 kOe. The volume magnetostriction, ω_V, is a function of H^(2/3) in the ferromagnetic state. ω_V is a function of H^2, as expected, in the antiferromagnetic and paramagnetic states. The Curie temperature is 167.5 K for Tb2In, as given by the thermal expansion and specific heat measurements. ω_V is a function of H in the ferromagnetic state and a function of H^2, as expected, in the paramagnetic state.

  7. Vocational-Technical Physics Project. Thermometers: I. Temperature and Heat, II. Expansion Thermometers, III. Electrical Thermometers. Field Test Edition.

    ERIC Educational Resources Information Center

    Forsyth Technical Inst., Winston-Salem, NC.

    This vocational physics individualized student instructional module on thermometers consists of the three units: Temperature and heat, expansion thermometers, and electrical thermometers. Designed with a laboratory orientation, experiments are included on linear expansion; making a bimetallic thermometer, a liquid-in-glass thermometer, and a gas…

  9. Thermal expansion, heat capacity and magnetostriction of RAl3 (R = Tm, Yb, Lu) single crystals

    SciTech Connect

    Bud'ko, S.; Frenerick, J.; Mun, E.; Canfield, P.; Schmiedeshoff, G.

    2007-12-13

    We present thermal expansion and longitudinal magnetostriction data for cubic RAl3 (R = Tm, Yb, Lu) single crystals. The thermal expansion coefficient for YbAl3 is consistent with an intermediate valence of the Yb ion, whereas the data for TmAl3 show crystal electric field contributions and have strong magnetic field dependences. de Haas-van Alphen-like oscillations were observed in the magnetostriction data for YbAl3 and LuAl3, several new extreme orbits were measured and their effective masses were estimated. Specific heat data taken at 0 and 140 kOe for both LuAl3 and TmAl3 for T ≤ 200 K allow for the determination of a crystal electric field splitting scheme for TmAl3.

  10. Effects of compression and expansion ramp fuel injector configuration on scramjet combustion and heat transfer

    NASA Technical Reports Server (NTRS)

    Stouffer, Scott D.; Baker, N. R.; Capriotti, D. P.; Northam, G. B.

    1993-01-01

    A scramjet combustor with four wall-ramp injectors containing Mach-1.7 fuel jets in the base of the ramps was investigated experimentally. During the test program, two swept ramp injector designs were evaluated. One swept-ramp model had 10-deg compression-ramps and the other had 10-deg expansion cavities between flush wall ramps. The scramjet combustor model was instrumented with pressure taps and heat-flux gages. The pressure measurements indicated that both injector configurations were effective in promoting mixing and combustion. Autoignition occurred for the compression-ramp injectors, and the fuel began to burn immediately downstream of the injectors. In tests of the expansion ramps, a pilot was required to ignite the fuel, and the fuel did not burn for a distance of at least two gaps downstream of the injectors. Once initiated, combustion was rapid in this configuration. Heat transfer measurements showed that the heat flux differed greatly both across the width of the combustor and along the length of the combustor.

  11. Krakatoa lives: The effect of volcanic eruptions on ocean heat content and thermal expansion

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.; AchutaRao, K.; Gregory, J. M.; Santer, B. D.; Taylor, K. E.; Wigley, T. M. L.

    2006-09-01

    A suite of climate model experiments indicates that 20th Century increases in ocean heat content and sea-level (via thermal expansion) were substantially reduced by the 1883 eruption of Krakatoa. The volcanically-induced cooling of the ocean surface is subducted into deeper ocean layers, where it persists for decades. Temporary reductions in ocean heat content associated with the comparable eruptions of El Chichón (1982) and Pinatubo (1991) were much shorter lived because they occurred relative to a non-stationary background of large, anthropogenically-forced ocean warming. Our results suggest that inclusion of the effects of Krakatoa (and perhaps even earlier eruptions) is important for reliable simulation of 20th century ocean heat uptake and thermal expansion. Inter-model differences in the oceanic thermal response to Krakatoa are large and arise from differences in external forcing, model physics, and experimental design. Systematic experimentation is required to quantify the relative importance of these factors. The next generation of historical forcing experiments may require more careful treatment of pre-industrial volcanic aerosol loadings.

  12. Dependence of divertor heat flux widths on heating power, flux expansion, and plasma current in the NSTX

    SciTech Connect

    Maingi, Rajesh; Soukhanovskii, V. A.; Ahn, J.W.

    2011-01-01

    We report the dependence of the lower divertor surface heat flux profiles, measured from infrared thermography and mapped magnetically to the mid-plane, on loss power into the scrape-off layer (P_LOSS), plasma current (I_p), and magnetic flux expansion (f_exp), as well as initial results with lithium wall conditioning in NSTX. Here we extend previous studies [R. Maingi et al., J. Nucl. Mater. 363-365 (2007) 196-200] to higher triangularity ~0.7 and higher I_p ≤ 1.2 MA. First we note that the heat flux width mapped to the mid-plane, λ_q^mid, is largely independent of P_LOSS for P_LOSS ≥ 4 MW. λ_q^mid is also found to be relatively independent of f_exp; the peak heat flux is strongly reduced as f_exp is increased, as expected. Finally, λ_q^mid is shown to strongly contract with increasing I_p such that λ_q^mid ∝ I_p^-1.6, with a peak divertor heat flux of q_div,peak ~ 15 MW/m^2 when I_p = 1.2 MA and P_LOSS ~ 6 MW. These relationships are then used to predict the divertor heat flux for the planned NSTX-Upgrade, with heating power between 10 and 15 MW, B_t = 1.01 and I_p = 2.0 MA for 5 s.
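
    A minimal sketch of how the quoted scaling λ_q^mid ∝ I_p^-1.6 could be used for the NSTX-Upgrade extrapolation is given below; the reference width of 6 mm is a hypothetical placeholder, not a value taken from the paper.

        def extrapolate_lambda_q(lambda_q_ref_mm, ip_ref_ma, ip_new_ma, exponent=-1.6):
            """Scale a reference mid-plane heat flux width to a new plasma current
            using the power-law dependence quoted in the abstract above."""
            return lambda_q_ref_mm * (ip_new_ma / ip_ref_ma) ** exponent

        # Hypothetical baseline of 6 mm at I_p = 1.2 MA, extrapolated to 2.0 MA:
        print(extrapolate_lambda_q(6.0, 1.2, 2.0))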

  13. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.
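
    For context, the exact (cylindrical-surface) thin wire kernel being discussed is conventionally written as the azimuthal average of the free-space Green's function; this standard definition is given here for orientation and is not quoted from the report:

        K(z - z') = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{e^{-jkR}}{4\pi R}\,\mathrm{d}\phi', \qquad
        R = \sqrt{(z - z')^{2} + 4a^{2}\sin^{2}(\phi'/2)},

    where a is the wire radius; the integrand becomes singular as z → z' and φ' → 0, which is the singularity the change of variable mentioned above is designed to cancel.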

  14. Akermanite: phase transitions in heat capacity and thermal expansion, and revised thermodynamic data.

    USGS Publications Warehouse

    Hemingway, B.S.; Evans, H.T.; Nord, G.L.; Haselton, H.T.; Robie, R.A.; McGee, J.J.

    1986-01-01

    A small but sharp anomaly in the heat capacity of akermanite at 357.9 K, and a discontinuity in its thermal expansion at 693 K, as determined by XRD, have been found. The enthalpy and entropy assigned to the heat-capacity anomaly, for the purpose of tabulation, are 679 J/mol and 1.9 J/(mol·K), respectively. They were determined from the difference between the measured values of the heat capacity in the T interval 320-365 K and that obtained from an equation which fits the heat-capacity and heat-content data for akermanite from 290 to 1731 K. Heat-capacity measurements are reported for the T range from 9 to 995 K. The entropy and enthalpy of formation of akermanite at 298.15 K and 1 bar are 212.5 ± 0.4 J/(mol·K) and -3864.5 ± 4.0 kJ/mol, respectively. Weak satellite reflections have been observed in hk0 single-crystal X-ray precession photographs and electron-diffraction patterns of this material at room T. With in situ heating by TEM, the satellite reflections decreased significantly in intensity above 358 K and disappeared at about 580 K and, on cooling, reappeared. These observations suggest that the anomalies in the thermal behaviour of akermanite are associated with local displacements of Ca ions from the mirror plane (space group P421m) and accompanying distortion of the MgSi2O7 framework.-L.C.C.
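
    For context, the enthalpy and entropy assigned to such a heat-capacity anomaly follow from the usual excess-heat-capacity integrals over the anomaly interval (here roughly 320-365 K):

        \Delta H = \int_{T_1}^{T_2} \Delta C_p\,\mathrm{d}T, \qquad
        \Delta S = \int_{T_1}^{T_2} \frac{\Delta C_p}{T}\,\mathrm{d}T,

    where ΔC_p is the difference between the measured heat capacity and the smooth baseline fitted to the data outside the anomaly.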

  15. High Enthalpy Studies of Capsule Heating in an Expansion Tunnel Facility

    NASA Technical Reports Server (NTRS)

    Dufrene, Aaron; MacLean, Matthew; Holden, Michael

    2012-01-01

    Measurements were made on an Orion heat shield model to demonstrate the capability of the new LENS-XX expansion tunnel facility to make high quality measurements of heat transfer distributions at flow velocities from 3 km/s (h_0 = 5 MJ/kg) to 8.4 km/s (h_0 = 36 MJ/kg). Thirty-nine heat transfer gauges, including both thin-film and thermocouple instruments, as well as four pressure gauges, and high-speed Schlieren were used to assess the aerothermal environment on the capsule heat shield. Only results from laminar boundary layer runs are reported. A major finding of this test series is that the high enthalpy, low-density flows displayed surface heating behavior that is observed to be consistent with some finite-rate recombination process occurring on the surface of the model. It is too early to speculate on the nature of the mechanism, but the response of the gages on the surface seems generally repeatable and consistent for a range of conditions. This result is an important milestone in developing and proving a capability to make measurements in a ground test environment and extrapolate them to flight for conditions with extreme non-equilibrium effects. Additionally, no significant, isolated stagnation point augmentation ("bump") was observed in the tests in this facility. Cases at higher Reynolds number seemed to show the greatest amount of overall increase in heating on the windward side of the model, which may in part be due to small-scale particulate.

  16. Are heat waves susceptible to mitigate the expansion of a species progressing with global warming?

    PubMed Central

    Robinet, Christelle; Rousselet, Jérôme; Pineau, Patrick; Miard, Florie; Roques, Alain

    2013-01-01

    A number of organisms, especially insects, are extending their range in response to the increasing trend of warmer temperatures. However, the effects of more frequent climatic anomalies on these species are not clearly known. The pine processionary moth, Thaumetopoea pityocampa, is a forest pest that is currently extending its geographical distribution in Europe in response to climate warming. However, its population density largely decreased in its northern expansion range (near Paris, France) the year following the 2003 heat wave. In this study, we tested whether the 2003 heat wave could have killed a large part of egg masses. First, the local heat wave intensity was determined. Then, an outdoor experiment was conducted to measure the deviation between the temperatures recorded by weather stations and those observed within sun-exposed egg masses. A second experiment was conducted under laboratory conditions to simulate heat wave conditions (with night/day temperatures of 20/32°C and 20/40°C compared to the control treatment 13/20°C) and measure the potential effects of this heat wave on egg masses. No effects were noticed on egg development. Then, larvae hatched from these egg masses were reared under mild conditions until the third instar and no delayed effects on the development of larvae were found. Instead of eggs, the 2003 heat wave had probably affected directly or indirectly the young larvae that were already hatched when it occurred. Our results suggest that the effects of extreme climatic anomalies occurring over narrow time windows are difficult to determine because they strongly depend on the life stage of the species exposed to these anomalies. However, these effects could potentially reduce or enhance the average warming effects. As extreme weather conditions are predicted to become more frequent in the future, it is necessary to disentangle the effects of the warming trend from the effects of climatic anomalies when predicting the response of a

  17. Are heat waves susceptible to mitigate the expansion of a species progressing with global warming?

    PubMed

    Robinet, Christelle; Rousselet, Jérôme; Pineau, Patrick; Miard, Florie; Roques, Alain

    2013-09-01

    A number of organisms, especially insects, are extending their range in response to the increasing trend of warmer temperatures. However, the effects of more frequent climatic anomalies on these species are not clearly known. The pine processionary moth, Thaumetopoea pityocampa, is a forest pest that is currently extending its geographical distribution in Europe in response to climate warming. However, its population density largely decreased in its northern expansion range (near Paris, France) the year following the 2003 heat wave. In this study, we tested whether the 2003 heat wave could have killed a large part of egg masses. First, the local heat wave intensity was determined. Then, an outdoor experiment was conducted to measure the deviation between the temperatures recorded by weather stations and those observed within sun-exposed egg masses. A second experiment was conducted under laboratory conditions to simulate heat wave conditions (with night/day temperatures of 20/32°C and 20/40°C compared to the control treatment 13/20°C) and measure the potential effects of this heat wave on egg masses. No effects were noticed on egg development. Then, larvae hatched from these egg masses were reared under mild conditions until the third instar and no delayed effects on the development of larvae were found. Instead of eggs, the 2003 heat wave had probably affected directly or indirectly the young larvae that were already hatched when it occurred. Our results suggest that the effects of extreme climatic anomalies occurring over narrow time windows are difficult to determine because they strongly depend on the life stage of the species exposed to these anomalies. However, these effects could potentially reduce or enhance the average warming effects. As extreme weather conditions are predicted to become more frequent in the future, it is necessary to disentangle the effects of the warming trend from the effects of climatic anomalies when predicting the response of a

  18. Bergman kernel from the lowest Landau level

    NASA Astrophysics Data System (ADS)

    Klevtsov, S.

    2009-07-01

    We use path integral representation for the density matrix, projected on the lowest Landau level, to generalize the expansion of the Bergman kernel on Kähler manifold to the case of arbitrary magnetic field.

  19. Boundary-layer computational model for predicting the flow and heat transfer in sudden expansions

    NASA Technical Reports Server (NTRS)

    Lewis, J. P.; Pletcher, R. H.

    1986-01-01

    Fully developed turbulent and laminar flows through symmetric planar and axisymmetric expansions with heat transfer were modeled using a finite-difference discretization of the boundary-layer equations. By using the boundary-layer equations to model separated flow in place of the Navier-Stokes equations, computational effort was reduced permitting turbulence modelling studies to be economically carried out. For laminar flow, the reattachment length was well predicted for Reynolds numbers as low as 20 and the details of the trapped eddy were well predicted for Reynolds numbers above 200. For turbulent flows, the Boussinesq assumption was used to express the Reynolds stresses in terms of a turbulent viscosity. Near-wall algebraic turbulence models based on Prandtl's-mixing-length model and the maximum Reynolds shear stress were compared.

  20. Heat kernels on cone of AdS2 and k-wound circular Wilson loop in AdS5 × S5 superstring

    NASA Astrophysics Data System (ADS)

    Bergamin, R.; Tseytlin, A. A.

    2016-04-01

    We compute the one-loop world-sheet correction to partition function of AdS5 × S5 superstring that should be representing k-fundamental circular Wilson loop in planar limit. The 2d metric of the minimal surface ending on k-wound circle at the boundary is that of a cone of AdS2 with deficit 2π(1 − k). We compute the determinants of 2d fluctuation operators by first constructing heat kernels of scalar and spinor Laplacians on the cone using the Sommerfeld formula. The final expression for the k-dependent part of the one-loop correction has simple integral representation but is different from earlier results.

  1. The effects of compressor speed and electronic expansion valve opening on the optimum design of inverter heat pump at various heating loads

    SciTech Connect

    Hwang, Y.; Kim, Y.; Park, J.; Kim, C.

    1999-07-01

    Experiments to determine the optimum operating point of an inverter heat pump were performed by varying compressor speed and expansion valve opening for various heating loads. At indoor temperatures of -5 to 15 °C and outdoor temperatures of -10 to 25 °C, the compressor driving frequency was varied from 10 to 120 Hz and the expansion valve opening from 80 to 200 pulses, while the speeds of the indoor and outdoor fans were fixed. The results show that an optimum combination of compressor driving frequency and expansion valve opening exists for given indoor and outdoor temperatures, although the operating point shifts according to which factor is preferred among capacity, comfort and power saving.

  2. Quasilinear diffusion coefficients in a finite Larmor radius expansion for ion cyclotron heated plasmas

    DOE PAGES

    Lee, Jungpyo; Wright, John; Bertelli, Nicola; ...

    2017-04-24

    In this study, a reduced model of quasilinear velocity diffusion based on a small Larmor radius approximation is derived to couple Maxwell's equations and the Fokker-Planck equation self-consistently for the ion cyclotron range of frequency waves in a tokamak. The reduced model ensures the important properties of the full model by Kennel-Engelmann diffusion, such as diffusion directions, wave polarizations, and the H-theorem. The kinetic energy change (Ẇ) is used to derive the reduced model diffusion coefficients for the fundamental damping (n = 1) and the second harmonic damping (n = 2) to the lowest order of the finite Larmor radius expansion. The quasilinear diffusion coefficients are implemented in a coupled code (TORIC-CQL3D) with the equivalent reduced model of the dielectric tensor. We also present the simulations of the ITER minority heating scenario, in which the reduced model is verified within the allowable errors from the full model results.

  3. Negative thermal expansion and anomalies of heat capacity of LuB50 at low temperatures

    SciTech Connect

    Novikov, V. V.; Zhemoedov, N. A.; Matovnikov, A. V.; Mitroshenkov, N. V.; Kuznetsov, S. V.; Bud'ko, S. L.

    2015-07-20

    Heat capacity and thermal expansion of LuB50 boride were experimentally studied in the 2–300 K temperature range. The data reveal an anomalous contribution to the heat capacity at low temperatures. The value of this contribution is linearly proportional to temperature. This anomaly in heat capacity was identified as being caused by disorder in the LuB50 crystalline structure, and it can be described within the soft atomic potential (SAP) model. The parameters of the approximation were determined. The temperature dependence of the LuB50 heat capacity in the whole temperature range was approximated by the sum of the SAP contribution, a Debye component and two Einstein components. The parameters of the SAP contribution for LuB50 were compared to the corresponding values for LuB66, which was studied earlier. Negative thermal expansion at low temperatures was experimentally observed for LuB50. The analysis of the experimental temperature dependence of the Grüneisen parameter of LuB50 suggested that the low-frequency oscillations described by the SAP model are responsible for the negative thermal expansion. As a result, the glasslike character of the behavior of LuB50 thermal characteristics at low temperatures was confirmed.

  4. Numerical simulation of heat transfer to separation tio2/water nanofluids flow in an asymmetric abrupt expansion

    NASA Astrophysics Data System (ADS)

    Oon, Cheen Sean; Nee Yew, Sin; Chew, Bee Teng; Salim Newaz, Kazi Md; Al-Shamma'a, Ahmed; Shaw, Andy; Amiri, Ahmad

    2015-05-01

    Flow separation and reattachment of a 0.2% TiO2 nanofluid in an asymmetric abrupt expansion is studied in this paper. Such flows occur in various engineering and heat transfer applications. A computational fluid dynamics package (FLUENT) is used to investigate turbulent nanofluid flow in the horizontal double-tube heat exchanger. The mesh of this model consists of 43383 nodes and 74891 elements. Only a quarter of the annular pipe is developed and simulated, as it has symmetrical geometry. The standard k-epsilon model is applied with a second-order implicit, pressure-based solver. Reynolds numbers between 17050 and 44545, step height ratios of 1 and 1.82, and a constant heat flux of 49050 W/m^2 were used in the simulation. Water was used as a working fluid to benchmark the study of the heat transfer enhancement in this case. Numerical simulation results show that an increase in the Reynolds number increases the heat transfer coefficient and Nusselt number of the flowing fluid. Moreover, the surface temperature drops to its lowest value after the expansion and then gradually increases along the pipe. Finally, the chaotic movement and higher thermal conductivity of the TiO2 nanoparticles contribute to the overall heat transfer enhancement of the nanofluid compared to water.

  5. Effect of dynamic and thermal prehistory on aerodynamic characteristics and heat transfer behind a sudden expansion in a round tube

    NASA Astrophysics Data System (ADS)

    Terekhov, V. I.; Bogatko, T. V.

    2017-03-01

    The results of a numerical study of the influence of the thicknesses of the dynamic and thermal boundary layers on turbulent separation and heat transfer in a tube with sudden expansion are presented. The first part of this work studies the influence of the thickness of the dynamic boundary layer, which was varied by changing the length of the stabilization area within the maximal extent possible: from zero to half of the tube diameter. In the second part of the study, the flow before separation was hydrodynamically stabilized and the thermal layer before the expansion could simultaneously change its thickness from 0 to D1/2. The Reynolds number was varied in the range Re_D1 = 6.7·10^3 to 1.33·10^5, and the degree of tube expansion remained constant at ER = (D2/D1)^2 = 1.78. A significant effect of the thickness of the separated boundary layer on both the dynamic and thermal characteristics of the flow is shown. In particular, it was found that with an increase in the thickness of the boundary layer the recirculation zone increases and the maximal Nusselt number decreases. It was determined that the growth of the thermal layer thickness does not affect the hydrodynamic characteristics of the flow after separation but does lead to a reduction of heat transfer intensity in the separation region and a shift of the location of maximal heat transfer away from the point of tube expansion. A generalizing dependence for the maximal Nusselt number at various thermal layer thicknesses is given. Comparison with experimental data confirmed the main trends in the behavior of heat and mass transfer processes in separated flows behind a step with different thermal prehistories.

  6. Weighted Bergman kernels and virtual Bergman kernels

    NASA Astrophysics Data System (ADS)

    Roos, Guy

    2005-12-01

    We introduce the notion of "virtual Bergman kernel" and apply it to the computation of the Bergman kernel of "domains inflated by Hermitian balls", in particular when the base domain is a bounded symmetric domain.

  7. Crystalline electric field and lattice contributions to thermodynamic properties of PrGaO3: specific heat and thermal expansion

    NASA Astrophysics Data System (ADS)

    Senyshyn, A.; Schnelle, W.; Vasylechko, L.; Ehrenberg, H.; Berkowski, M.

    2007-04-01

    The low-temperature heat capacity of perovskite-type PrGaO3 has been measured in the temperature range from 2 to 320 K. Thermodynamic standard values at 298.15 K are reported. An initial Debye temperature θD(0) = (480 ± 10) K was determined by fitting the calculated lattice heat capacity. The entropy of the derived Debye temperature functions agrees well with values calculated from thermal displacement parameters and from atomistic simulations. The thermal expansion and the Grüneisen parameter, arising from a coupling of crystal field states of Pr3+ ion and phonon modes at low temperature, were analysed.
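
    For reference, an initial Debye temperature θD(0) of the kind quoted above is conventionally fixed by the low-temperature limit of the Debye lattice heat capacity,

        C_{lattice}(T) \;\approx\; \frac{12\pi^4}{5}\, n R \left(\frac{T}{\theta_D(0)}\right)^{3} \quad (T \ll \theta_D),

    with n the number of atoms per formula unit and R the gas constant; fitting the T^3 regime of the measured lattice contribution yields θD(0).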

  8. The Adiabatic Expansion of Gases and the Determination of Heat Capacity Ratios: A Physical Chemistry Experiment.

    ERIC Educational Resources Information Center

    Moore, William M.

    1984-01-01

    Describes the procedures and equipment for an experiment on the adiabatic expansion of gases suitable for demonstration and discussion in the physical chemistry laboratory. The expansion produced shows how the process can change temperature and still return to a different location on an isotherm. (JN)
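
    The relations behind such an experiment, for a reversible adiabatic expansion of an ideal gas, are the standard ones (stated here generically, not taken from the module itself):

        P V^{\gamma} = \text{const}, \qquad T V^{\gamma - 1} = \text{const}, \qquad
        \frac{T_2}{T_1} = \left(\frac{P_2}{P_1}\right)^{(\gamma-1)/\gamma}, \qquad \gamma = \frac{C_p}{C_v},

    so measuring the pressure or temperature change across the expansion yields the heat capacity ratio γ.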

  9. Differential response of cell-cycle and cell-expansion regulators to heat stress in apple (Malus domestica) fruitlets.

    PubMed

    Flaishman, Moshe A; Peles, Yuval; Dahan, Yardena; Milo-Cochavi, Shira; Frieman, Aviad; Naor, Amos

    2015-04-01

    Temperature is one of the most significant factors affecting physiological and biochemical aspects of fruit development. Current and progressing global warming is expected to change climate in the traditional deciduous fruit tree cultivation regions. In this study, 'Golden Delicious' trees, grown in a controlled environment or commercial orchard, were exposed to different periods of heat treatment. Early fruitlet development was documented by evaluating cell number, cell size and fruit diameter for 5-70 days after full bloom. Normal activities of molecular developmental and growth processes in apple fruitlets were disrupted under daytime air temperatures of 29°C and higher as a result of significant temporary declines in cell-production and cell-expansion rates, respectively. Expression screening of selected cell cycle and cell expansion genes revealed the influence of high temperature on genetic regulation of apple fruitlet development. Several core cell-cycle and cell-expansion genes were differentially expressed under high temperatures. While expression levels of B-type cyclin-dependent kinases and A- and B-type cyclins declined moderately in response to elevated temperatures, expression of several cell-cycle inhibitors, such as Mdwee1, Mdrbr and Mdkrps was sharply enhanced as the temperature rose, blocking the cell-cycle cascade at the G1/S and G2/M transition points. Moreover, expression of several expansin genes was associated with high temperatures, making them potentially useful as molecular platforms to enhance cell-expansion processes under high-temperature regimes. Understanding the molecular mechanisms of heat tolerance associated with genes controlling cell cycle and cell expansion may lead to the development of novel strategies for improving apple fruit productivity under global warming. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. Non-Markovian expansion in quantum Brownian motion

    NASA Astrophysics Data System (ADS)

    Fraga, Eduardo S.; Krein, Gastão; Palhares, Letícia F.

    2014-01-01

    We consider the non-Markovian Langevin evolution of a dissipative dynamical system in quantum mechanics in the path integral formalism. After discussing the role of the frequency cutoff for the interaction of the system with the heat bath and the kernel and noise correlator that follow from the most common choices, we derive an analytic expansion for the exact non-Markovian dissipation kernel and the corresponding colored noise in the general case that is consistent with the fluctuation-dissipation theorem and incorporates systematically non-local corrections. We illustrate the modifications to results obtained using the traditional (Markovian) Langevin approach in the case of the exponential kernel and analyze the case of the non-Markovian Brownian motion. We present detailed results for the free and the quadratic cases, which can be compared to exact solutions to test the convergence of the method, and discuss potentials of a general nonlinear form.
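
    The non-Markovian Langevin setting referred to above has, in its standard classical form (a generic statement, not the paper's exact equations),

        m\,\ddot{x}(t) + \int_{0}^{t} \mathrm{d}t'\,K(t-t')\,\dot{x}(t') + V'(x) = \xi(t), \qquad
        \langle \xi(t)\,\xi(t') \rangle = k_B T\, K(t-t'),

    where the memory kernel K and the colored noise ξ are tied together by the fluctuation-dissipation theorem; the Markovian limit corresponds to K(t-t') = 2γ δ(t-t'), for which the friction is local and the noise is white.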

  11. Description and initial operating performance of the Langley 6-inch expansion tube using heated helium driver gas

    NASA Technical Reports Server (NTRS)

    Moore, J. A.

    1975-01-01

    A general description of the Langley 6-inch expansion tube is presented along with discussion of the basic components, internal resistance heater, arc-discharge assemblies, instrumentation, and operating procedure. Preliminary results using unheated and resistance-heated helium as the driver gas are presented. The driver-gas pressure ranged from approximately 17 to 59 MPa and its temperature ranged from 300 to 510 K. Interface velocities of approximately 3.8 to 6.7 km/sec were generated between the test gas and the acceleration gas using air as the test gas and helium as the acceleration gas. Test flow quality and comparison of measured and predicted expansion-tube flow quantities are discussed.

  12. The Heat Resistance of Microbial Cells Represented by D Values Can be Estimated by the Transition Temperature and the Coefficient of Linear Expansion.

    PubMed

    Nakanishi, Koichi; Kogure, Akinori; Deuchi, Keiji; Kuwana, Ritsuko; Takamatsu, Hiromu; Ito, Kiyoshi

    2015-01-01

    We previously developed a method for evaluating the heat resistance of microorganisms by measuring the transition temperature at which the coefficient of linear expansion of a cell changes. Here, we performed heat resistance measurements using a scanning probe microscope with a nano thermal analysis system. The microorganisms studied included six strains of the genus Bacillus or related genera, one strain each of the thermophilic obligate anaerobic bacterial genera Thermoanaerobacter and Moorella, two strains of heat-resistant mold, two strains of non-sporulating bacteria, and one strain of yeast. Both vegetative cells and spores were evaluated. The transition temperature at which the coefficient of linear expansion due to heating changed from a positive value to a negative value correlated strongly with the heat resistance of the microorganism as estimated from the D value. The microorganisms with greater heat resistance exhibited higher transition temperatures. There was also a strong negative correlation between the coefficient of linear expansion and heat resistance in bacteria and yeast, such that microorganisms with greater heat resistance showed lower coefficients of linear expansion. These findings suggest that our method could be useful for evaluating the heat resistance of microorganisms.
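
    For readers unfamiliar with the D value used above as the heat-resistance benchmark: it is the decimal reduction time, the holding time at a fixed temperature that reduces the viable population tenfold,

        N(t) = N_0\,10^{-t/D}, \qquad D = \frac{t}{\log_{10}\!\left(N_0/N(t)\right)},

    so a larger D value at a given temperature means a more heat-resistant microorganism.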

  13. Thermal expansion of UO2+x nuclear fuel rods from a model coupling heat transfer and oxygen diffusion

    SciTech Connect

    Mihaila, Bogden; Zubelewicz, Aleksander; Stan, Marius; Ramirez, Juan

    2008-01-01

    We study the thermal expansion of a UO2+x nuclear fuel rod in the context of a model coupling heat transfer and oxygen diffusion discussed previously by J.C. Ramirez, M. Stan and P. Cristea [J. Nucl. Mat. 359 (2006) 174]. We report results of simulations performed for steady-state and time-dependent regimes in one-dimensional configurations. A variety of initial- and boundary-value scenarios are considered. We use material properties obtained from previously published correlations or from analysis of previously published data. All simulations were performed using the commercial code COMSOL Multiphysics™ and are readily extendable to include multidimensional effects.

  14. Regular expansion solutions for small Peclet number heat or mass transfer in concentrated two-phase particulate systems

    NASA Technical Reports Server (NTRS)

    Yaron, I.

    1974-01-01

    Steady state heat or mass transfer in concentrated ensembles of drops, bubbles or solid spheres in uniform, slow viscous motion, is investigated. Convective effects at small Peclet numbers are taken into account by expanding the nondimensional temperature or concentration in powers of the Peclet number. Uniformly valid solutions are obtained, which reflect the effects of dispersed phase content and rate of internal circulation within the fluid particles. The dependence of the range of Peclet and Reynolds numbers, for which regular expansions are valid, on particle concentration is discussed.

  15. Study of the causes and identification of the dominant mechanisms of failure of bellows expansion joints used in district heating system pipelines at MOEK

    NASA Astrophysics Data System (ADS)

    Tomarov, G. V.; Nikolaev, A. E.; Semenov, V. N.; Shipkov, A. A.; Shepelev, S. V.

    2015-06-01

    The results of laboratory studies of material properties and of numerical and analytical investigations to assess the stress-strain state of the metal of the bellows expansion joints used in the district heating system pipelines at MOEK subjected to corrosion failure are presented. The main causes and the dominant mechanisms of failure of the expansion joints have been identified. The influence of the initial crevice defects and the operating conditions on the features and intensity of destruction processes in expansion joints used in the district heating system pipelines at MOEK has been established.

  16. The Kernel Energy Method: Construction of 3 & 4 tuple Kernels from a List of Double Kernel Interactions

    PubMed Central

    Huang, Lulu; Massa, Lou

    2010-01-01

    The Kernel Energy Method (KEM) provides a way to calculate the ab-initio energy of very large biological molecules. The results are accurate, and the computational time is reduced. However, by use of a list of double-kernel interactions a significant additional reduction of computational effort may be achieved while still retaining ab-initio accuracy. A numerical comparison of the indices that name the known double interactions allows one to list higher-order interactions having the property of topological continuity within the full molecule of interest. When that list of interactions is unpacked as a kernel expansion, which weights the relative importance of each kernel in an expression for the total molecular energy, high accuracy and a further significant reduction in computational effort result. A KEM molecular energy calculation based upon the HF/STO-3G chemical model is applied to the protein insulin as an illustration. PMID:21243065
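
    For reference, the double-kernel energy expansion that the KEM literature builds on has the schematic form below (a sketch of the standard formula, not a quotation from this record); E_ij denotes the ab initio energy of the joined double kernel ij and E_i that of single kernel i in a molecule partitioned into n kernels.

```latex
E_{\text{total}} \;\approx\; \sum_{i=1}^{n-1}\sum_{j>i}^{n} E_{ij} \;-\; (n-2)\sum_{i=1}^{n} E_{i}
```

    Restricting the double sum to a curated list of interacting pairs, as described above, is what yields the further reduction in computational effort.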

  17. Loop expansion and the bosonic representation of loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Bianchi, E.; Guglielmon, J.; Hackl, L.; Yokomizo, N.

    2016-10-01

    We introduce a new loop expansion that provides a resolution of the identity in the Hilbert space of loop quantum gravity on a fixed graph. We work in the bosonic representation obtained by the canonical quantization of the spinorial formalism. The resolution of the identity gives a tool for implementing the projection of states in the full bosonic representation onto the space of solutions to the Gauss and area matching constraints of loop quantum gravity. This procedure is particularly efficient in the semiclassical regime, leading to explicit expressions for the loop expansions of coherent, heat kernel and squeezed states.

  18. Exact Analytical Solution for 3D Time-Dependent Heat Conduction in a Multilayer Sphere with Heat Sources Using Eigenfunction Expansion Method.

    PubMed

    Dalir, Nemat

    2014-01-01

    An exact analytical solution is obtained for the problem of three-dimensional transient heat conduction in the multilayered sphere. The sphere has multiple layers in the radial direction and, in each layer, time-dependent and spatially nonuniform volumetric internal heat sources are considered. To obtain the temperature distribution, the eigenfunction expansion method is used. An arbitrary combination of homogeneous boundary conditions of the first or second kind can be applied in the angular and azimuthal directions. Nevertheless, the solution is valid for nonhomogeneous boundary conditions of the third kind (convection) in the radial direction. A case study problem for the three-layer quarter-spherical region is solved and the results are discussed.
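
    As a schematic illustration of the eigenfunction expansion method (not the paper's exact solution), the temperature in layer i of a multilayer sphere is expanded over products of radial eigenfunctions and angular harmonics, with time-dependent coefficients obtained by projecting the sources and the initial condition onto the orthogonal eigenfunctions:

```latex
T_i(r,\theta,\varphi,t) \;=\; \sum_{n,m,\lambda} C_{nm\lambda}(t)\,
R_{i,n\lambda}(r)\, P_n^{m}(\cos\theta)\,\bigl\{\cos m\varphi,\ \sin m\varphi\bigr\}
```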

  19. Exact Analytical Solution for 3D Time-Dependent Heat Conduction in a Multilayer Sphere with Heat Sources Using Eigenfunction Expansion Method

    PubMed Central

    Dalir, Nemat

    2014-01-01

    An exact analytical solution is obtained for the problem of three-dimensional transient heat conduction in the multilayered sphere. The sphere has multiple layers in the radial direction and, in each layer, time-dependent and spatially nonuniform volumetric internal heat sources are considered. To obtain the temperature distribution, the eigenfunction expansion method is used. An arbitrary combination of homogeneous boundary conditions of the first or second kind can be applied in the angular and azimuthal directions. Nevertheless, the solution is valid for nonhomogeneous boundary conditions of the third kind (convection) in the radial direction. A case study problem for the three-layer quarter-spherical region is solved and the results are discussed. PMID:27433511

  20. Anomalous components of supercooled water expansivity, compressibility, and heat capacity (Cp and Cv) from binary formamide+water solution studies

    NASA Astrophysics Data System (ADS)

    Oguni, M.; Angell, C. A.

    1983-06-01

    Recently reported heat capacity studies of N2H4+H2O and H2O2+H2O solutions, from which an anomalous component of the pure water behavior could be extracted by extrapolation, have been extended to the system NH2CHO+H2O, which has the chemical stability needed to permit expansivity and compressibility measurements as well. Data accurate to ±2% for each of these properties as well as for the heat capacity are reported. The expansivity data support almost quantitatively an earlier speculative separation of the bulk and supercooled water expansivity into a "normal" (or "background") part and an "anomalous" part, the latter part fitting a critical law α_anom = A(T/T_s − 1)^(−γ) with exponent γ = 1.0. According to the present analysis, the anomalous part of the expansivity, which is always negative, yields T_s in the range 225-228 K and γ in the range 1.28-1.0, depending on the choice of background extrapolation function. The normal contribution to the heat capacity obtained from the present work is intermediate in character between those from the previous two systems and leads to similar equation parameters. The normal contribution to the compressibility, on the other hand, is very different from that speculated earlier by Kanno and Angell and approximately verified by Conde et al. for ethanol-water solutions. The background component from the present analysis is ~50% larger, with the result that the anomalous component, at least when values above 0 °C are included in the analysis, cannot be sensibly fitted to the critical-point equation. The possible origin and significance of these differences are discussed. Combination of the new thermodynamic data permits estimation of Cv values for the solution and, by extrapolation, a normal Cv component for water. The anomalous component of Cv for pure water obtained by difference has the form of a Schottky anomaly, in contrast with the corresponding Cp component, which diverges.

  1. Modification of cassava starch using combination process lactic acid hydrolysis and micro wave heating to increase coated peanut expansion quality

    NASA Astrophysics Data System (ADS)

    Sumardiono, Siswo; Pudjihastuti, Isti; Jos, Bakti; Taufani, Muhammad; Yahya, Faad

    2017-05-01

    Modified cassava starch is a very promising product in the food industry. The main motivations for this study are the increasing volume of imported wheat and the demand from the modified cassava starch industry. The purpose of this study is to assess the impact of lactic acid hydrolysis and microwave heating on the physicochemical and rheological properties of modified cassava starch, and to test the application of the modified starch to coated peanut expansion quality. Experimental variables include the concentration of lactic acid (0.5% w/w, 1% w/w, 2% w/w), the hydrolysis time (15, 30, 45 minutes), and the microwave heating time (1, 2, 3 hours). The procedure consists of dissolving lactic acid in distilled water in a stirred-tank reactor and then adding cassava starch. The hydrolysed cassava starch was then heated by microwave. The physicochemical and rheological properties of the modified cassava starch were determined by solubility, swelling power, and congestion tests. The optimum results indicate solubility, swelling power, and congestion test values of 19.75%, 24.25%, and 826.10%, respectively, for hydrolysis for 15 minutes with 1% w/w lactic acid and microwave heating for 3 hours. The physicochemical and rheological properties of the modified cassava starch changed significantly compared to the native cassava starch. Furthermore, this modified cassava starch is expected to be used as a substitute in food products.

  2. Compressibility, thermal expansion coefficient and heat capacity of CH4 and CO2 hydrate mixtures using molecular dynamics simulations.

    PubMed

    Ning, F L; Glavatskiy, K; Ji, Z; Kjelstrup, S; H Vlugt, T J

    2015-01-28

    Understanding the thermal and mechanical properties of CH4 and CO2 hydrates is essential for the replacement of CH4 with CO2 in natural hydrate deposits as well as for CO2 sequestration and storage. In this work, we present isothermal compressibility, isobaric thermal expansion coefficient and specific heat capacity of fully occupied single-crystal sI-CH4 hydrates, CO2 hydrates and hydrates of their mixture using molecular dynamics simulations. Eight rigid/nonpolarisable water interaction models and three CH4 and CO2 interaction potentials were selected to examine the atomic interactions in the sI hydrate structure. The TIP4P/2005 water model combined with the DACNIS united-atom CH4 potential and TraPPE CO2 rigid potential were found to be suitable molecular interaction models. Using these molecular models, the results indicate that both the lattice parameters and the compressibility of the sI hydrates agree with those from experimental measurements. The calculated bulk modulus for any mixture ratio of CH4 and CO2 hydrates varies between 8.5 GPa and 10.4 GPa at 271.15 K between 10 and 100 MPa. The calculated thermal expansion and specific heat capacities of CH4 hydrates are also comparable with experimental values above approximately 260 K. The compressibility and expansion coefficient of guest gas mixture hydrates increase with an increasing ratio of CO2-to-CH4, while the bulk modulus and specific heat capacity exhibit the opposite trend. The presented results for the specific heat capacities of 2220-2699.0 J kg(-1) K(-1) for any mixture ratio of CH4 and CO2 hydrates are the first reported so far. These computational results provide a useful database for practical natural gas recovery from CH4 hydrates in deep oceans where CO2 is considered to replace CH4, as well as for phase equilibrium and mechanical stability of gas hydrate-bearing sediments. The computational schemes also provide an appropriate balance between computational accuracy and cost for predicting
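
    For context, the response functions reported here have the standard thermodynamic definitions below (textbook relations, not the paper's specific estimators, which would be evaluated from the simulated volume and enthalpy along isotherms and isobars):

```latex
\kappa_T = -\frac{1}{V}\!\left(\frac{\partial V}{\partial p}\right)_{T},\qquad
\alpha_p = \frac{1}{V}\!\left(\frac{\partial V}{\partial T}\right)_{p},\qquad
c_p = \frac{1}{m}\!\left(\frac{\partial H}{\partial T}\right)_{p},\qquad
K = \kappa_T^{-1}
```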

  3. An Implementation of Multiprogramming and Process Management for a Security Kernel Operating System.

    DTIC Science & Technology

    1980-06-01

    This implementation employs a processor multiplexing technique for a distributed kernel and presents a virtual interrupt mechanism. Its structure is loop free to permit future expansion, and it coordinates the asynchronous interaction of system processes.

  4. Asymptotic expansions of solutions of the heat conduction equation in internally bounded cylindrical geometry

    USGS Publications Warehouse

    Ritchie, R.H.; Sakakura, A.Y.

    1956-01-01

    The formal solutions of problems involving transient heat conduction in infinite internally bounded cylindrical solids may be obtained by the Laplace transform method. Asymptotic series representing the solutions for large values of time are given in terms of functions related to the derivatives of the reciprocal gamma function. The results are applied to the case of the internally bounded infinite cylindrical medium with (a) the boundary held at constant temperature, (b) constant heat flow over the boundary, and (c) the "radiation" boundary condition. A problem in the flow of gas through a porous medium is considered in detail.

  5. Semisupervised kernel matrix learning by kernel propagation.

    PubMed

    Hu, Enliang; Chen, Songcan; Zhang, Daoqiang; Yin, Xuesong

    2010-11-01

    The goal of semisupervised kernel matrix learning (SS-KML) is to learn a kernel matrix on all the given samples on which just a little supervised information, such as class labels or pairwise constraints, is provided. Despite extensive research, the performance of SS-KML still leaves some space for improvement in terms of effectiveness and efficiency. For example, a recent pairwise constraints propagation (PCP) algorithm has formulated SS-KML into a semidefinite programming (SDP) problem, but its computation is very expensive, which undoubtedly restricts PCP's scalability in practice. In this paper, a novel algorithm, called kernel propagation (KP), is proposed to improve the comprehensive performance in SS-KML. The main idea of KP is first to learn a small-sized sub-kernel matrix (named the seed-kernel matrix) and then propagate it into a larger-sized full-kernel matrix. Specifically, the implementation of KP consists of three stages: 1) separate the supervised sample (sub)set X(l) from the full sample set X; 2) learn a seed-kernel matrix on X(l) through solving a small-scale SDP problem; and 3) propagate the learnt seed-kernel matrix into a full-kernel matrix on X. Furthermore, following the idea in KP, we naturally develop two conveniently realizable out-of-sample extensions for KML: one is a batch-style extension, and the other is an online-style extension. The experiments demonstrate that KP is encouraging in both effectiveness and efficiency compared with three state-of-the-art algorithms, and its related out-of-sample extensions are promising too.
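
    The propagation idea can be illustrated with a simple Nyström-style extension of a learned seed kernel to all samples. This is only a schematic stand-in for the paper's SDP-based propagation step, not the authors' algorithm; the RBF base similarity, parameter values, and helper names below are illustrative assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Base RBF similarity used to relate all points to the seed (labeled) set."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def propagate_seed_kernel(X, labeled_idx, K_seed, gamma=1.0, eps=1e-8):
    """Nystrom-style extension of a learned seed kernel (on the labeled subset)
    to a full PSD kernel on all samples; a schematic stand-in for the paper's
    kernel-propagation step."""
    C = rbf(X, X[labeled_idx], gamma)            # n x l cross-similarities
    L = C[labeled_idx]                           # l x l block on the seed set
    Li = np.linalg.pinv(L + eps * np.eye(len(labeled_idx)))
    W = C @ Li                                   # n x l extension weights
    return W @ K_seed @ W.T                      # full n x n PSD kernel

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    labeled = np.arange(10)
    K_seed = rbf(X[labeled], X[labeled], gamma=0.5)  # stand-in for a learned seed kernel
    K_full = propagate_seed_kernel(X, labeled, K_seed)
    print(K_full.shape, np.allclose(K_full, K_full.T))
```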

  6. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision compared with related approximate clustering approaches.

  7. Experimental and theoretical study of heat transfer in constant-volume and compression-expansion systems including the effects of flame propagation

    SciTech Connect

    Woodard, J.B.

    1982-03-01

    The study of the heat transfer to the cylinder and piston surfaces in an internal combustion engine is of both practical and fundamental interest. Heat transfer processes are critical to the design and development of engines with respect to emissions, engine efficiency, and thermal stress in engine materials. In addition, experimental heat transfer data along with flame propagation results are needed to study and appraise analyses of the processes in engines. In this work, two studies of heat transfer phenomena are made. First, an analysis of an observed reversal in the direction of the wall heat flux is presented. Then, an experimental study is reported in which simultaneous wall heat flux, pressure, and flame propagation data are obtained. In particular, the interaction of the flame propagation and wall heat transfer is discussed. An analytical solution for the reversal in direction of the wall heat flux during compression and expansion is derived. The results are in good agreement with experimental measurements. Temperature profiles were calculated. From these temperature profiles and the analytical solution of the wall heat flux, a physical explanation for the heat transfer reversal is determined. Simultaneous measurements of pressure, wall temperature, wall heat flux, and flame location and shape were obtained for combustion in constant volume, expansion, and compression-expansion systems. The measurements showed that the wall temperature and heat flux rise rapidly when the flame passes. The flame speed, and correspondingly the time of the sharp rise in wall temperature and heat flux, were found to vary considerably with equivalence ratio. For lean constant volume combustion, buoyancy was found to have a significant effect on the shape of the flame and on the corresponding wall temperature and heat flux variations.

  8. A heat wave during leaf expansion severely reduces productivity and modifies seasonal growth patterns in a northern hardwood forest.

    PubMed

    Stangler, Dominik Florian; Hamann, Andreas; Kahle, Hans-Peter; Spiecker, Heinrich; Mäkelä, Annikki

    2017-01-31

    A useful approach to monitor tree response to climate change and environmental extremes is the recording of long-term time series of stem radial variations obtained with precision dendrometers. Here, we study the impact of environmental stress on seasonal growth dynamics and productivity of yellow birch (Betula alleghaniensis Britton) and sugar maple (Acer saccharum Marsh.) in the Great Lakes, St Lawrence forest region of Ontario. Specifically, we research the effects of a spring heat wave in 2010, and a summer drought in 2012 that occurred during the 2005–14 study period. We evaluated both growth phenology (onset, cessation, duration of radial growth, time of maximum daily growth rate) and productivity (monthly and seasonal average growth rates, maximum daily growth rate, tree-ring width) and tested for differences and interactions among species and years. Productivity of sugar maple was drastically compromised by a 3-day spring heat wave in 2010 as indicated by low growth rates, very early growth cessation and a lagged growth onset in the following year. Sugar maple also responded more sensitively than yellow birch to a prolonged drought period in July 2012, but final tree-ring width was not significantly reduced due to positive responses to above-average temperatures in the preceding spring. We conclude that sugar maple, a species that currently dominates northern hardwood forests, is vulnerable to heat wave disturbances during leaf expansion, which might occur more frequently under anticipated climate change.

  9. The role of turbulence in coronal heating and solar wind expansion

    PubMed Central

    Cranmer, Steven R.; Asgari-Targhi, Mahboubeh; Miralles, Mari Paz; Raymond, John C.; Strachan, Leonard; Tian, Hui; Woolsey, Lauren N.

    2015-01-01

    Plasma in the Sun's hot corona expands into the heliosphere as a supersonic and highly magnetized solar wind. This paper provides an overview of our current understanding of how the corona is heated and how the solar wind is accelerated. Recent models of magnetohydrodynamic turbulence have progressed to the point of successfully predicting many observed properties of this complex, multi-scale system. However, it is not clear whether the heating in open-field regions comes mainly from the dissipation of turbulent fluctuations that are launched from the solar surface, or whether the chaotic ‘magnetic carpet’ in the low corona energizes the system via magnetic reconnection. To help pin down the physics, we also review some key observational results from ultraviolet spectroscopy of the collisionless outer corona. PMID:25848083

  10. The role of turbulence in coronal heating and solar wind expansion.

    PubMed

    Cranmer, Steven R; Asgari-Targhi, Mahboubeh; Miralles, Mari Paz; Raymond, John C; Strachan, Leonard; Tian, Hui; Woolsey, Lauren N

    2015-05-13

    Plasma in the Sun's hot corona expands into the heliosphere as a supersonic and highly magnetized solar wind. This paper provides an overview of our current understanding of how the corona is heated and how the solar wind is accelerated. Recent models of magnetohydrodynamic turbulence have progressed to the point of successfully predicting many observed properties of this complex, multi-scale system. However, it is not clear whether the heating in open-field regions comes mainly from the dissipation of turbulent fluctuations that are launched from the solar surface, or whether the chaotic 'magnetic carpet' in the low corona energizes the system via magnetic reconnection. To help pin down the physics, we also review some key observational results from ultraviolet spectroscopy of the collisionless outer corona.

  11. Anharmonic phonon quasiparticle theory of zero-point and thermal shifts in insulators: Heat capacity, bulk modulus, and thermal expansion

    NASA Astrophysics Data System (ADS)

    Allen, Philip B.

    2015-08-01

    The quasiharmonic (QH) approximation uses harmonic vibrational frequencies ω_{Q,H}(V) computed at volumes V near V_0, where the Born-Oppenheimer (BO) energy E_el(V) is minimum. When this is used in the harmonic free energy, the QH approximation gives a good zeroth-order theory of thermal expansion and first-order theory of the bulk modulus, where nth-order means smaller than the leading term by ε^n, with ε = ħω_vib/E_el or k_B T/E_el, and E_el an electronic energy scale, typically 2 to 10 eV. Experiment often shows evidence for next-order corrections. When such corrections are needed, anharmonic interactions must be included. The most accessible measure of anharmonicity is the quasiparticle (QP) energy ω_Q(V,T) seen experimentally by vibrational spectroscopy. However, this cannot just be inserted into the harmonic free energy F_H. In this paper, a free energy is found that corrects the double-counting of anharmonic interactions that is made when F is approximated by F_H(ω_Q(V,T)). The term "QP thermodynamics" is used for this way of treating anharmonicity. It enables (n+1)-order corrections if QH theory is accurate to order n. This procedure is used to give corrections to the specific heat and volume thermal expansion. The QH formulas for the isothermal (B_T) and adiabatic (B_S) bulk moduli are clarified, and the route to higher-order corrections is indicated.
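
    The quasiharmonic starting point referred to above is the textbook free energy below (the paper's quasiparticle correction terms are not reproduced here); thermal expansion follows from minimizing F over V at each T, and B_T from the second volume derivative.

```latex
F_{\mathrm{QH}}(V,T) \;=\; E_{\mathrm{el}}(V) \;+\; \sum_{Q}\left[
\tfrac{1}{2}\hbar\,\omega_{Q,H}(V) \;+\; k_B T\,
\ln\!\left(1 - e^{-\hbar\omega_{Q,H}(V)/k_B T}\right)\right]
```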

  12. Azadiradione ameliorates polyglutamine expansion disease in Drosophila by potentiating DNA binding activity of heat shock factor 1

    PubMed Central

    Dutta, Naibedya; Ghosh, Suvranil; Jana, Manas; Ganguli, Arnab; Komarov, Andrei; Paul, Soumyadip; Dwivedi, Vibha; Chatterjee, Subhrangsu; Jana, Nihar R.; Lakhotia, Subhash C.; Chakrabarti, Gopal; Misra, Anup K.; Mandal, Subhash C.; Pal, Mahadeb

    2016-01-01

    Aggregation of proteins with the expansion of polyglutamine tracts in the brain underlies progressive genetic neurodegenerative diseases (NDs) like Huntington's disease and spinocerebellar ataxias (SCA). An insensitive cellular proteotoxic stress response to non-native protein oligomers is common in such conditions. Indeed, upregulation of heat shock factor 1 (HSF1) function and its target protein chaperone expression has shown promising results in animal models of NDs. Using an HSF1 sensitive cell based reporter screening, we have isolated azadiradione (AZD) from the methanolic extract of seeds of Azadirachta indica, a plant known for its multifarious medicinal properties. We show that AZD ameliorates toxicity due to protein aggregation in cell and fly models of polyglutamine expansion diseases to a great extent. All these effects are correlated with activation of HSF1 function and expression of its target protein chaperone genes. Notably, HSF1 activation by AZD is independent of cellular HSP90 or proteasome function. Furthermore, we show that AZD directly interacts with purified human HSF1 with high specificity, and facilitates binding of HSF1 to its recognition sequence with higher affinity. These unique findings qualify AZD as an ideal lead molecule for consideration for drug development against NDs that affect millions worldwide. PMID:27835876

  13. MATERIALS THAT SHRINK ON HEATING: PRESSURE-INDUCED PHASE TRANSITIONS IN NEGATIVE THERMAL EXPANSION MATERIALS, AND THEIR ENERGETICS

    SciTech Connect

    Varga, Tamas

    2011-09-01

    Despite the fact that all chemical bonds expand on heating, a small class of materials shrinks when heated. These, so called negative thermal expansion (NTE) materials, are a unique class of materials with some exotic properties. The present chapter offers insight into the structural aspects of pressure- (or temperature-) induced phase transformations, and the energetics of those changes in these fascinating materials, in particular NTE compound cubic ZrW2O8, orthorhombic Sc2W3O12 and Sc2Mo3O12, as well as other members of the 'scandium tungstate family'. In subsequent sections, (i) combined in situ high-pressure synchrotron XRD and XAS studies of NTE material ZrW2O8; (ii) an in situ high-pressure synchrotron XRD study of Sc2W3O12, Sc2Mo3O12, and Al2W3O12; and (iii) thermochemical studies of the above materials are presented and discussed. In all of these studies, chemical bonds change, sometimes break and new ones form. Correlations between structure, chemistry, and energetics are revealed. It is also shown that (iv) NTE materials are good candidates as precursors to make novel solid state materials, such as the conducting Sc0.67WO4, using high-pressure, high-temperature synthesis, through modification of bonding and electronic structure, and thus provide vast opportunities for scientific exploration.

  14. On the calculation of turbulent heat transport downstream from an abrupt pipe expansion

    NASA Technical Reports Server (NTRS)

    Chieng, C. C.; Launder, B. E.

    1980-01-01

    A numerical study of flow and heat transfer in the separated flow region produced by an abrupt pipe expansion is reported, with emphasis on the region in the immediate vicinity of the wall where turbulent transport gives way to molecular conduction and diffusion. The analysis is based on a modified TEACH-2E program with the standard k-epsilon model of turbulence. Predictions of the experimental data of Zemanick and Dougall (1970) for a diameter ratio of 0.54 show generally encouraging agreement with experiment. At a diameter ratio of 0.43 different trends are discernible between measurement and calculation, though this appears to be due to effects unconnected with the wall region studied here.

  15. Internal Thermal Control System Hose Heat Transfer Fluid Thermal Expansion Evaluation Test Report

    NASA Technical Reports Server (NTRS)

    Wieland, P. O.; Hawk, H. D.

    2001-01-01

    During assembly of the International Space Station, the Internal Thermal Control Systems in adjacent modules are connected by jumper hoses referred to as integrated hose assemblies (IHAs). A test of an IHA has been performed at the Marshall Space Flight Center to determine whether the pressure in an IHA filled with heat transfer fluid would exceed the maximum design pressure when subjected to elevated temperatures (up to 60 C (140 F)) that may be experienced during storage or transportation. The results of the test show that the pressure in the IHA remains below 227 kPa (33 psia) (well below the 689 kPa (100 psia) maximum design pressure) even at a temperature of 71 C (160 F), with no indication of leakage or damage to the hose. Therefore, based on the results of this test, the IHA can safely be filled with coolant prior to launch. The test and results are documented in this Technical Memorandum.

  16. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2016-02-25

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
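
    A minimal sketch of plain KECA is given below to make the "sorting by entropy" idea concrete: kernel eigenpairs are ranked by their contribution λ_i (1ᵀe_i)² to the Renyi entropy estimate rather than by λ_i alone. The OKECA extension described above adds an optimized rotation and a kernel-parameter selection rule that this sketch does not include; the RBF kernel and parameter values are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def keca(X, n_components=2, sigma=1.0):
    """Kernel entropy component analysis, schematically: rank kernel eigenpairs
    by their entropy contribution lambda_i * (1^T e_i)^2 and project the data
    onto the top-ranked ones."""
    K = rbf_kernel(X, sigma)
    lam, E = np.linalg.eigh(K)                    # eigenvalues in ascending order
    contrib = lam * (E.sum(axis=0) ** 2)          # entropy contribution per eigenpair
    order = np.argsort(contrib)[::-1][:n_components]
    # Kernel-space projections onto the selected components.
    return E[:, order] * np.sqrt(np.clip(lam[order], 0.0, None))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(5, 1, (30, 2))])
    Z = keca(X, n_components=2, sigma=2.0)
    print(Z.shape)
```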

  17. The Effects of Kernel Feeding by Halyomorpha halys (Hemiptera: Pentatomidae) on Commercial Hazelnuts.

    PubMed

    Hedstrom, C S; Shearer, P W; Miller, J C; Walton, V M

    2014-10-01

    Halyomorpha halys Stål, the brown marmorated stink bug (Hemiptera: Pentatomidae), is an invasive pest with established populations in Oregon. The generalist feeding habits of H. halys suggest it has the potential to be a pest of many specialty crops grown in Oregon, including hazelnuts, Corylus avellana L. The objectives of this study were to: 1) characterize the damage to developing hazelnut kernels resulting from feeding by H. halys adults, 2) determine how the timing of feeding during kernel development influences damage to kernels, and 3) determine if hazelnut shell thickness has an effect on feeding frequency on kernels. Adult brown marmorated stink bugs were allowed to feed on developing nuts for 1-wk periods from initial kernel development (spring) until harvest (fall). Developing nuts not exposed to feeding by H. halys served as a control treatment. The degree of damage and diagnostic symptoms corresponded with the hazelnut kernels' physiological development. Our results demonstrated that when H. halys fed on hazelnuts before kernel expansion, development of the kernels could cease, resulting in empty shells. When stink bugs fed during kernel expansion, kernels appeared malformed. When stink bugs fed on mature nuts the kernels exhibited corky, necrotic areas. Although significant differences in shell thickness were observed among the cultivars, no significant differences occurred in the proportions of damaged kernels based on field tests and laboratory choice tests. The results of these studies demonstrated that commercial hazelnuts are susceptible to damage caused by the feeding of H. halys throughout the entire period of kernel development.

  18. Nonlinear expansion and heating of a nonneutral electron plasma due to elastic collisions with background neutral gas

    SciTech Connect

    Davidson, R.C.; Chao, E.H.

    1996-07-01

    This paper investigates theoretically the heating and nonlinear expansion of a nonneutral electron plasma due to elastic collisions with constant collision frequency ν_en between the plasma electrons and a background neutral gas. The model treats the electrons as a strongly magnetized fluid (ω_pe²/ω_ce² < 1) immersed in a uniform magnetic field B_0 ê_z. The model also assumes an axisymmetric plasma column (∂/∂θ = 0) with negligible axial variation (∂/∂z = 0), and that the process of heat conduction is sufficiently fast that the electrons have relaxed through electron-electron collisions to a quasi-equilibrium state with scalar pressure P(r,t) = n(r,t)T and isothermal temperature T. Assuming that the electrons undergo elastic collisions with infinitely massive background gas atoms, global energy conservation is used to calculate the electron heating rate, dT(t)/dt, as the plasma column expands on a time scale τ_diff ~ (ω_pe² ν_en/ω_ce²)⁻¹ and the electrostatic potential energy decreases. Coupled dynamical equations that describe the nonlinear evolution of the mean-square column radius r_0²(t) and electron temperature T(t) are derived and solved numerically. © 1996 American Institute of Physics.

  19. Modeling the relative roles of the foehn wind and urban expansion in the 2002 Beijing heat wave and possible mitigation by high reflective roofs

    NASA Astrophysics Data System (ADS)

    Ma, Hongyun; Shao, Haiyan; Song, Jie

    2014-02-01

    Rapid urbanization has intensified summer heat waves in recent decades in Beijing, China. In this study, effectiveness of applying high-reflectance roofs on mitigating the warming effects caused by urban expansion and foehn wind was simulated for a record-breaking heat wave occurred in Beijing during July 13-15, 2002. Simulation experiments were performed using the Weather Research and Forecast (WRF version 3.0) model coupled with an urban canopy model. The modeled diurnal air temperatures were compared well with station observations in the city and the wind convergence caused by urban heat island (UHI) effect could be simulated clearly. By increasing urban roof albedo, the simulated UHI effect was reduced due to decreased net radiation, and the simulated wind convergence in the urban area was weakened. Using WRF3.0 model, the warming effects caused by urban expansion and foehn wind were quantified separately, and were compared with the cooling effect due to the increased roof albedo. Results illustrated that the foehn warming effect under the northwesterly wind contributed greatly to this heat wave event in Beijing, while contribution from urban expansion accompanied by anthropogenic heating was secondary, and was mostly evident at night. Increasing roof albedo could reduce air temperature both in the day and at night, and could more than offset the urban expansion effect. The combined warming caused by the urban expansion and the foehn wind could be potentially offset with high-reflectance roofs by 58.8 % or cooled by 1.4 °C in the early afternoon on July 14, 2002, the hottest day during the heat wave.

  20. Built Expansion and Global Climate Change Drive Projected Urban Heat: Relative Magnitudes, Interactions, and Mitigation

    NASA Astrophysics Data System (ADS)

    Krayenhoff, E. S.; Georgescu, M.; Moustaoui, M.

    2016-12-01

    Surface climates are projected to warm due to global climate change over the course of the 21st century, and demographic projections suggest urban areas in the United States will continue to expand and develop, with associated local climate outcomes. Interactions between these two drivers of urban heat have not been robustly quantified to date. Here, simulations with the Weather Research and Forecasting model (coupled to a Single-Layer Urban Canopy Model) are performed at 20 km resolution over the continental U.S. for two 10-year periods: contemporary (2000-2009) and end-of-century (2090-2099). Present and end of century urban land use are derived from the Environmental Protection Agency's Integrated Climate and Land-Use Scenarios. Modelled effects on urban climates are evaluated regionally. Sensitivity to climate projection (Community Climate System Model 4.0, RCP 4.5 vs. RCP 8.5) and associated urban development scenarios are assessed. Effects on near-surface urban air temperature of RCP8.5 climate change are greater than those attributable to the corresponding urban development in many regions. Interaction effects vary by region, and while of lesser magnitude, are not negligible. Moreover, urban development and its interactions with RCP8.5 climate change modify the distribution of convective precipitation over the eastern US. Interaction effects result from the different meteorological effects of urban areas under current and future climate. Finally, the potential for design implementations such as green roofs and high albedo roofs to offset the projected warming is considered. Impacts of these implementations on precipitation are also assessed.

  1. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
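
    Schematically, moving kernel learning to the first level of inference amounts to a joint optimization of the form below (a paraphrase of the setup, not the paper's exact LS-SVM criterion), leaving only the two regularization parameters λ and μ for model selection:

```latex
\min_{\alpha,\,b,\,\theta}\;\sum_{i=1}^{N}\Bigl(y_i - \textstyle\sum_{j=1}^{N}\alpha_j\,k_{\theta}(x_i,x_j) - b\Bigr)^{2}
\;+\;\lambda\,\alpha^{\top} K_{\theta}\,\alpha\;+\;\mu\,\Omega(\theta)
```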

  2. Iterative software kernels

    SciTech Connect

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.

  3. Feasibility of near infrared spectroscopy for analyzing corn kernel damage and viability of soybean and corn kernels

    USDA-ARS?s Scientific Manuscript database

    The current US corn grading system accounts for the portion of damaged kernels, which is measured by time-consuming and inaccurate visual inspection. Near infrared spectroscopy (NIRS), a non-destructive and fast analytical method, was tested as a tool for discriminating corn kernels with heat and f...

  4. Kernel Affine Projection Algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Weifeng; Príncipe, José C.

    2008-12-01

    The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
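
    To make the family concrete, here is a minimal sketch of kernel least-mean-square (KLMS), the simpler algorithm on which KAPA builds (roughly speaking, KAPA reduces gradient noise by reusing several recent inputs per update). The Gaussian kernel, step size, and toy data are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gauss_k(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

class KLMS:
    """Kernel least-mean-square filter: each incoming sample adds one kernel
    unit whose coefficient is the step size times the instantaneous error."""
    def __init__(self, step=0.5, sigma=1.0):
        self.step, self.sigma = step, sigma
        self.centers, self.coefs = [], []

    def predict(self, x):
        return sum(a * gauss_k(x, c, self.sigma)
                   for a, c in zip(self.coefs, self.centers))

    def update(self, x, y):
        e = y - self.predict(x)          # prediction error on the new sample
        self.centers.append(np.asarray(x, dtype=float))
        self.coefs.append(self.step * e)
        return e

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    f = KLMS(step=0.5, sigma=0.5)
    for _ in range(200):                 # learn y = sin(3x) online
        x = rng.uniform(-1, 1, size=1)
        f.update(x, np.sin(3 * x[0]))
    print(round(f.predict(np.array([0.3])), 3), round(float(np.sin(0.9)), 3))
```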

  5. Implications of Thermal Diffusivity being Inversely Proportional to Temperature Times Thermal Expansivity on Lower Mantle Heat Transport

    NASA Astrophysics Data System (ADS)

    Hofmeister, A.

    2010-12-01

    Many measurements and models of heat transport in lower mantle candidate phases contain systematic errors: (1) conventional methods of insulators involve thermal losses that are pressure (P) and temperature (T) dependent due to physical contact with metal thermocouples, (2) measurements frequently contain unwanted ballistic radiative transfer which hugely increases with T, (3) spectroscopic measurements of dense samples in diamond anvil cells involve strong refraction, which has not been accounted for in analyzing transmission data, (4) the role of grain boundary scattering in impeding heat and light transfer has largely been overlooked, and (5) essentially harmonic physical properties have been used to predict anharmonic behavior. Improving our understanding of the physics of heat transport requires accurate data, especially as a function of temperature, where anharmonicity is the key factor. My laboratory provides thermal diffusivity (D) at T from laser flash analysis, which lacks the above experimental errors. Measuring a plethora of chemical compositions in diverse dense structures (most recently, perovskites, B1, B2, and glasses) as a function of temperature provides a firm basis for understanding microscopic behavior. Given accurate measurements for all quantities: (1) D is inversely proportional to [T x alpha(T)] from ~0 K to melting, where alpha is thermal expansivity, and (2) the damped harmonic oscillator model matches measured D(T), using only two parameters (average infrared dielectric peak width and compressional velocity), both acquired at temperature. These discoveries pertain to the anharmonic aspects of heat transport. I have previously discussed the easily understood quasi-harmonic pressure dependence of D. Universal behavior makes application to the Earth straightforward: due to the stiffness and slow motions of the plates and interior, and present-day, slow planetary cooling rates, Earth can be approximated as being in quasi
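
    Written out, the reported scaling and its use in heat transport are (schematic; the proportionality constant is material-dependent, and k, ρ, C_P denote thermal conductivity, density, and isobaric heat capacity):

```latex
D(T)\;\propto\;\frac{1}{T\,\alpha(T)},\qquad\qquad k(T)\;=\;\rho(T)\,C_P(T)\,D(T)
```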

  6. Anomalies in thermal expansion and heat capacity of TmB50 at low temperatures: magnetic phase transition and crystal electric field effect.

    PubMed

    Novikov, V V; Zhemoedov, N A; Mitroshenkov, N V; Matovnikov, A V

    2016-11-01

    We experimentally study the heat capacity and thermal expansion of thulium boride (TmB50) at temperatures of 2-300 K. The wide temperature range (2-180 K) of boride negative expansion was revealed. We found the anomalies in C(T) heat capacity temperature dependence, attributed to the Schottky contribution (i.e. the influence of the crystal electric field: CEF), as well as the magnetic phase transition. CEF-splitting of the f-levels of the Tm(3+) ion was described by the Schottky function of heat capacity with a quasi-quartet in the ground state. Excited multiplets are separated from the ground state by energy gaps δ1 = 100 K, and δ2 ≈ 350 K. The heat capacity maximum at Tmax ≈ 2.4 K may be attributed to the possible magnetic transition in TmB50. Other possible causes of the low-temperature maximum of C(T) dependence are the nonspherical surroundings of rare earth atoms due to the boron atoms in the crystal lattice of the boride and the emergence of two-level systems, as well as the splitting of the ground multiplet due to local magnetic fields of the neighboring ions of thulium. Anomalies in heat capacity are mapped with the thermal expansion features of boride. It is found that the TmB50 thermal expansion characteristic features are due to the influence of the CEF, as well as the asymmetry of the spatial arrangement of boron atoms around the rare earth atoms in the crystal lattice of RB50. The Grüneisen parameters, corresponding to the excitation of different multiplets of CEF-splitting, were determined. A satisfactory accordance between the experimental and estimated temperature dependencies of the boride thermal expansion coefficient was achieved.

  7. Search for the Enhancement of the Thermal Expansion Coefficient of Superfluid ^4He near T_λ by a Heat Current

    NASA Astrophysics Data System (ADS)

    Liu, Yuanming; Israelsson, Ulf E.; Larson, Melora

    2001-03-01

    The superfluid transition in ^4He in the presence of a heat current (Q) provides an ideal system for the study of phase transitions under non-equilibrium, dynamical conditions. Many physical properties become nonlinear and Q-dependent near the transition temperature, T_λ. For instance, the heat capacity enhancement by a heat current was predicted theoretically (R. Haussmann and V. Dohm, Phys. Rev. Lett. 72, 3060 (1994); T.C.P. Chui et al., Phys. Rev. Lett. 77, 1793 (1996)) and observed experimentally (A.W. Harter et al., Phys. Rev. Lett. 84, 2195 (2000)). Because the thermal expansion coefficient is a linear function of the specific heat near T_λ, both exhibit similar critical behaviors under equilibrium conditions. An enhancement of the thermal expansion coefficient is also expected if a similar relationship exists under non-equilibrium conditions. We report our experimental search for the enhancement of the thermal expansion of superfluid ^4He by a heat current (0 <= Q <= 100 μW/cm^2). We conducted the measurements in a thermal conductivity cell at sample pressures of SVP and 21.2 bar. The measurements were also performed in a reduced-gravity environment of 0.01g provided by the low-gravity simulator we have developed at JPL.

  8. Monitoring ground-surface heating during expansion of the Casa Diablo production well field at Mammoth Lakes, California

    USGS Publications Warehouse

    Bergfeld, D.; Vaughan, R. Greg; Evans, William C.; Olsen, Eric

    2015-01-01

    The Long Valley hydrothermal system supports geothermal power production from 3 binary plants (Casa Diablo) near the town of Mammoth Lakes, California. Development and growth of thermal ground at sites west of Casa Diablo have created concerns over planned expansion of a new well field and the associated increases in geothermal fluid production. To ensure that all areas of ground heating are identified prior to new geothermal development, we obtained high-resolution aerial thermal infrared imagery across the region. The imagery covers the existing and proposed well fields and part of the town of Mammoth Lakes. Imagery results from a predawn flight on Oct. 9, 2014 readily identified the Shady Rest thermal area (SRST), one of two large areas of ground heating west of Casa Diablo, as well as other known thermal areas smaller in size. Maximum surface temperatures at 3 thermal areas were 26–28 °C. Numerous small areas with ground temperatures >16 °C were also identified and slated for field investigations in summer 2015. Some thermal anomalies in the town of Mammoth Lakes clearly reflect human activity.Previously established projects to monitor impacts from geothermal power production include yearly surveys of soil temperatures and diffuse CO2 emissions at SRST, and less regular surveys to collect samples from fumaroles and gas vents across the region. Soil temperatures at 20 cm depth at SRST are well correlated with diffuse CO2 flux, and both parameters show little variation during the 2011–14 field surveys. Maximum temperatures were between 55–67 °C and associated CO2 discharge was around 12–18 tonnes per day. The carbon isotope composition of CO2 is fairly uniform across the area ranging between –3.7 to –4.4 ‰. The gas composition of the Shady Rest fumarole however has varied with time, and H2S concentrations in the gas have been increasing since 2009.

  9. Tandem Duplication Events in the Expansion of the Small Heat Shock Protein Gene Family in Solanum lycopersicum (cv. Heinz 1706).

    PubMed

    Krsticevic, Flavia J; Arce, Débora P; Ezpeleta, Joaquín; Tapia, Elizabeth

    2016-10-13

    In plants, fruit maturation and oxidative stress can induce small heat shock protein (sHSP) synthesis to maintain cellular homeostasis. Although the tomato reference genome was published in 2012, the actual number and functionality of sHSP genes remain unknown. Using a transcriptomic (RNA-seq) and evolutionary genomic approach, putative sHSP genes in the Solanum lycopersicum (cv. Heinz 1706) genome were investigated. A sHSP gene family of 33 members was established. Remarkably, roughly half of the members of this family can be explained by nine independent tandem duplication events that determined, evolutionarily, their functional fates. Within a mitochondrial class subfamily, only one duplicated member, Solyc08g078700, retained its ancestral chaperone function, while the others, Solyc08g078710 and Solyc08g078720, likely degenerated under neutrality and lack ancestral chaperone function. Functional conservation occurred within a cytosolic class I subfamily, whose four members, Solyc06g076570, Solyc06g076560, Solyc06g076540, and Solyc06g076520, support ∼57% of the total sHSP RNAm in the red ripe fruit. Subfunctionalization occurred within a new subfamily, whose two members, Solyc04g082720 and Solyc04g082740, show heterogeneous differential expression profiles during fruit ripening. These findings, involving the birth/death of some genes or the preferential/plastic expression of some others during fruit ripening, highlight the importance of tandem duplication events in the expansion of the sHSP gene family in the tomato genome. Despite its evolutionary diversity, the sHSP gene family in the tomato genome seems to be endowed with a core set of four homeostasis genes: Solyc05g014280, Solyc03g082420, Solyc11g020330, and Solyc06g076560, which appear to provide a baseline protection during both fruit ripening and heat shock stress in different tomato tissues.

  10. Tandem Duplication Events in the Expansion of the Small Heat Shock Protein Gene Family in Solanum lycopersicum (cv. Heinz 1706)

    PubMed Central

    Krsticevic, Flavia J.; Arce, Débora P.; Ezpeleta, Joaquín; Tapia, Elizabeth

    2016-01-01

    In plants, fruit maturation and oxidative stress can induce small heat shock protein (sHSP) synthesis to maintain cellular homeostasis. Although the tomato reference genome was published in 2012, the actual number and functionality of sHSP genes remain unknown. Using a transcriptomic (RNA-seq) and evolutionary genomic approach, putative sHSP genes in the Solanum lycopersicum (cv. Heinz 1706) genome were investigated. A sHSP gene family of 33 members was established. Remarkably, roughly half of the members of this family can be explained by nine independent tandem duplication events that determined, evolutionarily, their functional fates. Within a mitochondrial class subfamily, only one duplicated member, Solyc08g078700, retained its ancestral chaperone function, while the others, Solyc08g078710 and Solyc08g078720, likely degenerated under neutrality and lack ancestral chaperone function. Functional conservation occurred within a cytosolic class I subfamily, whose four members, Solyc06g076570, Solyc06g076560, Solyc06g076540, and Solyc06g076520, support ∼57% of the total sHSP RNAm in the red ripe fruit. Subfunctionalization occurred within a new subfamily, whose two members, Solyc04g082720 and Solyc04g082740, show heterogeneous differential expression profiles during fruit ripening. These findings, involving the birth/death of some genes or the preferential/plastic expression of some others during fruit ripening, highlight the importance of tandem duplication events in the expansion of the sHSP gene family in the tomato genome. Despite its evolutionary diversity, the sHSP gene family in the tomato genome seems to be endowed with a core set of four homeostasis genes: Solyc05g014280, Solyc03g082420, Solyc11g020330, and Solyc06g076560, which appear to provide a baseline protection during both fruit ripening and heat shock stress in different tomato tissues. PMID:27565886

  11. Online Sequential Extreme Learning Machine With Kernels.

    PubMed

    Scardapane, Simone; Comminiello, Danilo; Scarpiniti, Michele; Uncini, Aurelio

    2015-09-01

    The extreme learning machine (ELM) was recently proposed as a unifying framework for different families of learning algorithms. The classical ELM model consists of a linear combination of a fixed number of nonlinear expansions of the input vector. Learning in ELM is hence equivalent to finding the optimal weights that minimize the error on a dataset. The update works in batch mode, either with explicit feature mappings or with implicit mappings defined by kernels. Although an online version has been proposed for the former, no work has been done up to this point for the latter, and whether an efficient learning algorithm for online kernel-based ELM exists remains an open problem. By explicating some connections between nonlinear adaptive filtering and ELM theory, in this brief, we present an algorithm for this task. In particular, we propose a straightforward extension of the well-known kernel recursive least-squares, belonging to the kernel adaptive filtering (KAF) family, to the ELM framework. We call the resulting algorithm the kernel online sequential ELM (KOS-ELM). Moreover, we consider two different criteria used in the KAF field to obtain sparse filters and extend them to our context. We show that KOS-ELM, with their integration, can result in a highly efficient algorithm, both in terms of obtained generalization error and training time. Empirical evaluations demonstrate interesting results on some benchmarking datasets.
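
    For contrast with the recursive algorithms discussed above, the sketch below re-solves the full regularized kernel system from scratch after every new sample; it only illustrates the regression problem that KRLS and KOS-ELM address, whereas their contribution is updating the solution recursively (and sparsely) instead of refitting. The RBF kernel and parameter values are illustrative assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class OnlineKernelRidge:
    """Naive online kernel regression: refit the regularized kernel system after
    each new (x, y).  Efficient online algorithms (KRLS, KOS-ELM) avoid this
    O(t^3) refit by updating the previous solution."""
    def __init__(self, gamma=1.0, reg=1e-2):
        self.gamma, self.reg = gamma, reg
        self.X, self.y, self.alpha = None, None, None

    def update(self, x, y):
        x = np.atleast_2d(x)
        self.X = x if self.X is None else np.vstack([self.X, x])
        self.y = np.array([y]) if self.y is None else np.append(self.y, y)
        K = rbf(self.X, self.X, self.gamma)
        self.alpha = np.linalg.solve(K + self.reg * np.eye(len(self.y)), self.y)

    def predict(self, x):
        return float((rbf(np.atleast_2d(x), self.X, self.gamma) @ self.alpha)[0])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    model = OnlineKernelRidge(gamma=5.0, reg=1e-2)
    for _ in range(100):                 # learn y = x1 * x2 online
        x = rng.uniform(-1, 1, size=2)
        model.update(x, x[0] * x[1])
    print(round(model.predict(np.array([0.5, -0.4])), 3))
```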

  12. Multiple collaborative kernel tracking.

    PubMed

    Fan, Zhimin; Yang, Ming; Wu, Ying

    2007-07-01

    Those motion parameters that cannot be recovered from image measurements are unobservable in the visual dynamic system. This paper studies this important issue of singularity in the context of kernel-based tracking and presents a novel approach that is based on a motion field representation which employs redundant but sparsely correlated local motion parameters instead of compact but uncorrelated global ones. This approach makes it easy to design fully observable kernel-based motion estimators. This paper shows that these high-dimensional motion fields can be estimated efficiently by the collaboration among a set of simpler local kernel-based motion estimators, which makes the new approach very practical.

  13. Robotic Intelligence Kernel: Communications

    SciTech Connect

    Walton, Mike C.

    2009-09-16

    The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.

  14. Oil point pressure of Indian almond kernels

    NASA Astrophysics Data System (ADS)

    Aregbesola, O.; Olatunde, G.; Esuola, S.; Owolarafe, O.

    2012-07-01

    The effect of preprocessing conditions such as moisture content, heating temperature, heating time, and particle size on the oil point pressure of Indian almond kernel was investigated. Results showed that oil point pressure was significantly (P < 0.05) affected by the above-mentioned parameters. It was also observed that oil point pressure decreased with increasing heating temperature and heating time for both coarse and fine particles. Furthermore, an increase in moisture content resulted in increased oil point pressure for coarse particles, while oil point pressure decreased with increasing moisture content for fine particles.

  15. Robotic Intelligence Kernel: Driver

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.

  16. Thermal expansion, heat capacity and Grüneisen parameter of iridium phosphide Ir2P from quasi-harmonic Debye model

    NASA Astrophysics Data System (ADS)

    Liu, Z. J.; Song, T.; Sun, X. W.; Ma, Q.; Wang, T.; Guo, Y.

    2017-03-01

    The thermal expansion coefficient, heat capacity, and Grüneisen parameter of iridium phosphide Ir2P are reported by means of the quasi-harmonic Debye model for the first time in the current study. This model is combined with first-principles calculations within the generalized gradient approximation, using pseudopotentials and a plane-wave basis in the framework of density functional theory, and it takes into account the phononic effects within the quasi-harmonic approximation. The Debye temperature as a function of volume, as well as the Grüneisen parameter, thermal expansion coefficient, constant-volume and constant-pressure heat capacities, and entropy as functions of temperature T, are also successfully obtained. All the thermodynamic properties of Ir2P over the whole pressure range from 0 to 100 GPa and temperature range from 0 to 3000 K are summarized and discussed in detail.
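
    For reference, the quasi-harmonic Debye relations connecting the quantities named above are standard (textbook formulas, not values taken from this paper): the Grüneisen parameter follows from the volume dependence of the Debye temperature, and the thermal expansion coefficient follows from it together with the heat capacity and the isothermal bulk modulus.

```latex
\gamma = -\frac{\mathrm{d}\ln\Theta_D(V)}{\mathrm{d}\ln V}, \qquad
C_V = 9 n k_B \left(\frac{T}{\Theta_D}\right)^{3}
      \int_{0}^{\Theta_D/T} \frac{x^{4} e^{x}}{(e^{x}-1)^{2}}\,\mathrm{d}x, \qquad
\alpha = \frac{\gamma\, C_V}{B_T V}, \qquad
C_P = C_V\,(1 + \alpha \gamma T).
```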

  17. Similarity and Boubaker Polynomials Expansion Scheme BPES comparative solutions to the heat transfer equation for incompressible non-Newtonian fluids: case of laminar boundary energy equation

    NASA Astrophysics Data System (ADS)

    Zheng, L. C.; Zhang, X. X.; Boubaker, K.; Yücel, U.; Gargouri-Ellouze, E.; Yıldırım, A.

    2011-08-01

    In this paper, a new model is proposed for the heat transfer characteristics of power law non-Newtonian fluids. The effects of power law viscosity on the temperature field were taken into account by assuming that the temperature field is similar to the velocity field, with a modified Fourier's law of heat conduction for power law fluid media. The solutions obtained using the Boubaker Polynomials Expansion Scheme (BPES) technique are compared with those of a recent related similarity method in the literature, and the good agreement verifies the exactness of the protocol.

  18. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    An understanding of UNICOS kernel internals is valuable. However, having the knowledge is only half the value; the second half comes from knowing how to use this information and applying it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that utilize kernel information. In addition, algorithms, logic, and code will be discussed for accessing kernel information. Code segments will be provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information will also be discussed.

  19. Kernel mucking in top

    SciTech Connect

    LeFebvre, W.

    1994-08-01

    For many years, the popular program top has aided system administrators in the examination of process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also with what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.

  20. Weighted Bergman Kernels and Quantization}

    NASA Astrophysics Data System (ADS)

    Engliš, Miroslav

    Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that -log ψ, -log φ are plurisubharmonic, and z ∈ Ω a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion for x, y near z, where φ(x, y) is an almost-analytic extension of φ(x) (so that φ(x, x) = φ(x)), and similarly for ψ. If in addition Ω is of finite type, φ, ψ behave reasonably at the boundary, and -log φ, -log ψ are strictly plurisubharmonic on Ω, we obtain also an analogous asymptotic expansion for the Berezin transform and give applications to the Berezin quantization. Finally, for Ω smoothly bounded and strictly pseudoconvex and φ a smooth strictly plurisubharmonic defining function for Ω, we also obtain results on the Berezin-Toeplitz quantization.

  1. Heating rate measurements over 30 deg and 40 deg (half angle) blunt cones in air and helium in the Langley expansion tube facility

    NASA Technical Reports Server (NTRS)

    Reddy, N. M.

    1980-01-01

    Convective heat transfer measurements made on the conical portion of spherically blunted cones (30 deg and 40 deg half angle) in an expansion tube are discussed. The test gases used were helium and air; flow velocities were about 6.8 km/sec for helium and about 5.1 km/sec for air. The measured heating rates are compared with calculated results using a viscous shock layer computer code. For air, various techniques to determine flow velocity yielded identical results, but for helium, the flow velocity varied by as much as eight percent depending on which technique was used. The measured heating rates are in satisfactory agreement with calculation; for helium, assuming the lower flow velocity, the measurements are significantly greater than theory, and the discrepancy increased with increasing distance along the cone.

  2. Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.

    PubMed

    Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I

    2016-03-01

    The effects of amygdaline from apricot kernel added to fodder on the growth of transplanted LYO-1 and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation produced a pronounced antitumor effect. Heat-processed apricot kernels given in 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of bacterial genome in the tumor.

  3. Heat flow in anharmonic crystals with internal and external stochastic baths: a convergent polymer expansion for a model with discrete time and long range interparticle interaction

    NASA Astrophysics Data System (ADS)

    Pereira, Emmanuel; Mendonça, Mateus S.; Lemos, Humberto C. F.

    2015-09-01

    We investigate a chain of oscillators with anharmonic on-site potentials, with long range interparticle interactions, and coupled both to external and internal stochastic thermal reservoirs of Ornstein-Uhlenbeck type. We develop an integral representation, à la Feynman-Kac, for the correlations and the heat current. We assume the approximation of discrete times in the integral formalism (together with a simplification in a subdominant part of the harmonic interaction) and develop a suitable polymer expansion for the model. In the regime of strong anharmonicity, strong harmonic pinning, and for interparticle interactions with integrable polynomial decay, we prove the convergence of the polymer expansion uniformly in volume (number of sites and time). We also show that the two-point correlation decays in space in the same way as the interparticle interaction. The existence of a convergent polymer expansion is of practical interest: it provides rigorous support for a perturbative analysis of the heat flow problem and for the computation of the thermal conductivity in related anharmonic crystals, including those with inhomogeneous potentials and long range interparticle interactions. To show the usefulness and trustworthiness of our approach, we compute the thermal conductivity of a specific anharmonic chain and make a comparison with related numerical results presented in the literature.

  4. Stochastic subset selection for learning with kernel machines.

    PubMed

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales superlinearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm uses a stochastic indexing technique to select a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.
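
    The abstract does not spell out the stochastic indexing scheme, so the sketch below only illustrates the idea it decouples from training: evaluate the kernel expansion over a randomly drawn budget of the stored support vectors rather than over all of them. The function names and the RBF kernel are assumptions made for illustration.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    # Pairwise Gaussian kernel between the rows of X and the rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def predict_with_random_budget(x_query, X_sv, alpha, budget, rng):
    """Evaluate f(x) = sum_{i in S} alpha_i k(x_i, x) over a stochastically
    selected subset S of at most `budget` stored support vectors."""
    n = len(X_sv)
    idx = rng.choice(n, size=min(budget, n), replace=False)
    K = rbf(np.atleast_2d(x_query), X_sv[idx])
    return float(K @ alpha[idx])

# Toy usage with synthetic support vectors and coefficients.
rng = np.random.default_rng(0)
X_sv, alpha = rng.standard_normal((500, 3)), rng.standard_normal(500)
y_hat = predict_with_random_budget(rng.standard_normal(3), X_sv, alpha, budget=50, rng=rng)
```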

  5. Robotic Intelligence Kernel: Visualization

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.

  6. Robotic Intelligence Kernel: Architecture

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  7. Resummed memory kernels in generalized system-bath master equations.

    PubMed

    Mavros, Michael G; Van Voorhis, Troy

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
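
    Writing the kernel as a perturbation series in the diabatic coupling, K(t) = λ^2 K^(2)(t) + λ^4 K^(4)(t) + ..., the two resummations compared above take the following forms (my rendering of the standard [1/1] Padé and exponential resummations, not a quotation of the paper):

```latex
K_{\mathrm{Pad\acute e}}(t) \approx
  \frac{\lambda^{2} K^{(2)}(t)}{1-\lambda^{2} K^{(4)}(t)/K^{(2)}(t)},
\qquad
K_{\mathrm{exp}}(t) \approx
  \lambda^{2} K^{(2)}(t)\,
  \exp\!\left[\lambda^{2}\,\frac{K^{(4)}(t)}{K^{(2)}(t)}\right].
```

    Both agree with the series through fourth order, but the Padé form has a pole wherever λ^2 K^(4)(t) = K^(2)(t), which is the singularity blamed above for divergent populations at strong electronic coupling, while the exponential (Landau-Zener-type) form stays finite.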

  8. Resummed memory kernels in generalized system-bath master equations

    NASA Astrophysics Data System (ADS)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  9. Resummed memory kernels in generalized system-bath master equations

    SciTech Connect

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  10. A small concentration expansion for the effective heat conductivity of a random disperse two-component material; an assessment of Batchelor's renormalization method

    NASA Astrophysics Data System (ADS)

    Vanbeek, P.

    1987-11-01

    The difficulty, in expanding the effective properties of random disperse media in powers of the volume concentration c of the disperse phase, presented by the divergence of certain integrals that average two-particle approximations is considered. The random heat conduction problem analyzed by Jeffrey (1974) is treated using Batchelor's (1974) renormalization method. Batchelor's two-particle equation is extended to a hierarchical set of n-particle equations for arbitrary n. The solution of the hierarchy is seen to consist of a sequence of two-, three-, and more-particle terms. The two- and three-particle terms are calculated. It is proved that all i-particle terms (i ≥ 2) can be averaged convergently, showing that the hierarchical approach yields a well-defined expansion of the effective conductivity in integer powers of c. It follows that Jeffrey's expression for the effective conductivity is O(c²)-accurate.
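
    For orientation (standard dilute-limit results, not figures taken from this paper): for spheres of conductivity k2 dispersed at volume concentration c in a matrix of conductivity k1, the expansion whose O(c²) term Jeffrey evaluated reads

```latex
\frac{k_{\mathrm{eff}}}{k_{1}} = 1 + 3\beta c + O(c^{2}),
\qquad
\beta = \frac{k_{2}-k_{1}}{k_{2}+2 k_{1}},
```

    and it is the convergent evaluation of that O(c²) coefficient, free of divergent two-particle averages, that the hierarchical approach described here puts on a rigorous footing.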

  11. Development of a single kernel analysis method for detection of 2-acetyl-1-pyrroline in aromatic rice germplasm

    USDA-ARS?s Scientific Manuscript database

    Solid-phase microextraction (SPME) in conjunction with GC/MS was used to distinguish non-aromatic rice (Oryza sativa, L.) kernels from aromatic rice kernels. In this method, single kernels along with 10 µl of 0.1 ng 2,4,6-Trimethylpyridine (TMP) were placed in sealed vials and heated to 80 °C for 18...

  12. Multiple Kernel Point Set Registration.

    PubMed

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2015-12-22

    The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavy-tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels: after parameter learning, the kernel saliencies of the irrelevant kernels go to zero. This makes the choice of kernels less crucial and makes it easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.

  13. Multiple Kernel Point Set Registration.

    PubMed

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2016-06-01

    The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavy-tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels: after parameter learning, the kernel saliencies of the irrelevant kernels go to zero. This makes the choice of kernels less crucial and makes it easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.

  14. Kernel Optimization in Discriminant Analysis

    PubMed Central

    You, Di; Hamsici, Onur C.; Martinez, Aleix M.

    2011-01-01

    Kernel mapping is one of the most used approaches to intrinsically derive nonlinear classifiers. The idea is to use a kernel function which maps the original nonlinearly separable problem to a space of intrinsically larger dimensionality where the classes are linearly separable. A major problem in the design of kernel methods is to find the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation where the Bayes classifier becomes linear. We illustrate how this result can be successfully applied in several kernel discriminant analysis algorithms. Experimental results using a large number of databases and classifiers demonstrate the utility of the proposed approach. The paper also shows (theoretically and experimentally) that a kernel version of Subclass Discriminant Analysis yields the highest recognition rates. PMID:20820072

  15. Decoupled scheme based on the Hermite expansion to construct lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio.

    PubMed

    Hu, Kainan; Zhang, Hongwu; Geng, Shaojuan

    2016-10-01

    A decoupled scheme based on the Hermite expansion to construct lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio is proposed. The local equilibrium distribution function, including the rotational velocity of the particle, is decoupled into two parts, i.e., the local equilibrium distribution function of the translational velocity of the particle and that of the rotational velocity of the particle. From these two local equilibrium functions, two lattice Boltzmann models are derived via the Hermite expansion, namely, one related to the translational velocity and the other to the rotational velocity. Accordingly, the distribution function is also decoupled. After this, the evolution equation is decoupled into the evolution equation of the translational velocity and that of the rotational velocity. The two evolution equations evolve separately. The lattice Boltzmann models used in the proposed scheme are constructed via the Hermite expansion, so it is easy to construct new schemes of higher-order accuracy. To validate the proposed scheme, a one-dimensional shock tube simulation is performed. The numerical results agree with the analytical solutions very well.

  16. Thermal Expansion "Paradox."

    ERIC Educational Resources Information Center

    Fakhruddin, Hasan

    1993-01-01

    Describes a paradox in the equation for thermal expansion. If the calculations for heating a rod and subsequently cooling a rod are determined, the new length of the cool rod is shorter than expected. (PR)
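
    The arithmetic behind the paradox (my illustration, not quoted from the article) is that applying the linear expansion formula twice uses two different reference lengths:

```latex
L_{\mathrm{hot}} = L_{0}\,(1+\alpha\,\Delta T), \qquad
L_{\mathrm{final}} = L_{\mathrm{hot}}\,(1-\alpha\,\Delta T)
                   = L_{0}\,\bigl(1-\alpha^{2}\,\Delta T^{2}\bigr) < L_{0},
```

    so naively cooling the rod back by the same ΔT does not return it to L0; the discrepancy is second order in αΔT and disappears only in the limit of infinitesimal temperature steps.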

  17. Thermal Expansion "Paradox."

    ERIC Educational Resources Information Center

    Fakhruddin, Hasan

    1993-01-01

    Describes a paradox in the equation for thermal expansion. If the calculations for heating a rod and subsequently cooling a rod are determined, the new length of the cool rod is shorter than expected. (PR)

  18. Thermal expansion and specific heat of a superior IR-SOFC cathode material Sr1-xCexCoO3-δ

    NASA Astrophysics Data System (ADS)

    Srivastava, Archana; Thakur, Rasna; Gaur, N. K.

    2017-05-01

    We present the specific heat (Cv) and thermal expansion (α) of lightly doped Sr1-xCexCoO3-δ (x = 0.0-0.15) using the Modified Rigid Ion Model (MRIM) and a novel atomistic approach based on Atoms in Molecules (AIM) theory. We partially replaced the A-site strontium cation with another element (cerium) of different size, valence, and mass. The effect of cerium doping on the lattice specific heat (Cv)lat and thermal expansion (α) of Sr1-xCexCoO3-δ (x = 0.0-0.15) as a function of temperature (20 K ≤ T ≤ 1000 K) is reported, probably for the first time. The results indicate better thermal compatibility of Sr0.95Ce0.05CoO3 with the samaria-doped ceria (SDC) electrolyte than for the other studied compounds. The Debye temperatures of these perovskite materials as cathodes for intermediate-range solid oxide fuel cells (IR-SOFC) are also predicted.

  19. Partially ionized gas flow and heat transfer in the separation, reattachment, and redevelopment regions downstream of an abrupt circular channel expansion.

    NASA Technical Reports Server (NTRS)

    Back, L. H.; Massier, P. F.; Roschke, E. J.

    1972-01-01

    Heat transfer and pressure measurements obtained in the separation, reattachment, and redevelopment regions along a tube and nozzle located downstream of an abrupt channel expansion are presented for a very high enthalpy flow of argon. The ionization energy fraction extended up to 0.6 at the tube inlet just downstream of the arc heater. Reattachment resulted from the growth of an instability in the vortex sheet-like shear layer between the central jet that discharged into the tube and the reverse flow along the wall at the lower Reynolds numbers, as indicated by water flow visualization studies which were found to dynamically model the high-temperature gas flow. A reasonably good prediction of the heat transfer in the reattachment region where the highest heat transfer occurred and in the redevelopment region downstream can be made by using existing laminar boundary layer theory for a partially ionized gas. In the experiments as much as 90 per cent of the inlet energy was lost by heat transfer to the tube and the nozzle wall.

  20. Kernel machine SNP-set testing under multiple candidate kernels.

    PubMed

    Wu, Michael C; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M; Harmon, Quaker E; Lin, Xinyi; Engel, Stephanie M; Molldrem, Jeffrey J; Armistead, Paul M

    2013-04-01

    Joint testing for the cumulative effect of multiple single-nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large-scale genetic association studies. The kernel machine (KM)-testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori because this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest P-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power vs. using the best candidate kernel.
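
    One of the proposed strategies is to combine the candidate kernels into a composite kernel. A minimal sketch of that construction, as a plain convex combination of candidate kernel matrices with hypothetical kernels and equal default weights, is:

```python
import numpy as np

def composite_kernel(kernels, weights=None):
    """Convex combination K = sum_m w_m K_m of candidate kernel matrices
    (all n x n and positive semidefinite), weights normalized to sum to 1."""
    kernels = [np.asarray(K, dtype=float) for K in kernels]
    if weights is None:
        weights = np.ones(len(kernels))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * K for w, K in zip(weights, kernels))

# Example: linear and quadratic kernels built from the same genotype matrix G.
G = np.random.default_rng(1).integers(0, 3, size=(20, 50)).astype(float)
K_lin = G @ G.T
K_quad = (1.0 + G @ G.T) ** 2
K_comp = composite_kernel([K_lin, K_quad])
```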

  1. Kernel component analysis using an epsilon-insensitive robust loss function.

    PubMed

    Alzate, Carlos; Suykens, Johan A K

    2008-09-01

    Kernel principal component analysis (PCA) is a technique to perform feature extraction in a high-dimensional feature space, which is nonlinearly related to the original input space. The kernel PCA formulation corresponds to an eigendecomposition of the kernel matrix: eigenvectors with large eigenvalues correspond to the principal components in the feature space. Starting from the least squares support vector machine (LS-SVM) formulation of kernel PCA, we extend it to a generalized form of kernel component analysis (KCA) with a general underlying loss function made explicit. For classical kernel PCA, the underlying loss function is L2. In this generalized form, one can also plug in other loss functions. In the context of robust statistics, it is known that the L2 loss function is not robust because its influence function is not bounded. Therefore, outliers can skew the solution from the desired one. Another issue with kernel PCA is the lack of sparseness: the principal components are dense expansions in terms of kernel functions. In this paper, we introduce robustness and sparseness into kernel component analysis by using an epsilon-insensitive robust loss function. We propose two different algorithms. The first method solves a set of nonlinear equations with kernel PCA as starting points. The second method uses a simplified iterative weighting procedure that leads to solving a sequence of generalized eigenvalue problems. Simulations with toy and real-life data show improvements in terms of robustness together with a sparse representation.
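
    As a reference point for the robust variants developed in this paper, classical kernel PCA (the L2 starting point described above) reduces to an eigendecomposition of the doubly centered kernel matrix; a minimal sketch, with all names my own, is:

```python
import numpy as np

def kernel_pca_scores(K, n_components=2):
    """Classical kernel PCA: double-center the kernel matrix and return the
    training-set scores along the leading principal components."""
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # double centering
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]       # sort descending
    # Score of point i on component j is sqrt(lambda_j) * v_ij.
    return vecs[:, :n_components] * np.sqrt(np.maximum(vals[:n_components], 0.0))

# Toy usage with an RBF kernel on random 2-D data.
X = np.random.default_rng(0).standard_normal((100, 2))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
scores = kernel_pca_scores(np.exp(-0.5 * d2))
```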

  2. Thermal conductivity and thermal linear expansion measurements on molten salts for assessing their behaviour as heat transport fluid in thermodynamics solar systems

    NASA Astrophysics Data System (ADS)

    Coppa, P.; Bovesecchi, G.; Fabrizi, F.

    2010-08-01

    Molten salts (sodium and potassium nitrates) are going to be used in many different plants as heat transfer fluids, e.g., concentrating solar plants, nuclear power plants, etc. In fact they present many important advantages: their absolute safety and non-toxicity, availability, and low cost. But their use, e.g., in the energy-receiving pipe at the focus of the parabolic mirror concentrator of a solar thermodynamic plant, requires accurate knowledge of the thermophysical properties, above all thermal conductivity, viscosity, specific heat, and thermal linear expansion, in the temperature range 200 °C to 600 °C. In the new laboratory at ENEA Casaccia (SolTerm Department), all these properties are going to be measured. Thermal conductivity is measured with the standard probe method (a linear heat source inserted into the material), using a special probe manufactured for the foreseen temperature range (190-550 °C). The probe is made of a ceramic quadrifilar pipe containing, in different holes, the heater (Ni wire) and the thermometer (type J thermocouple). The thermal linear expansion will be measured by a special system designed and built to this end, measuring the sample dilatation via the reflection of a laser beam from the bottom of the meniscus at the liquid-solid interface. The viscosity will be evaluated by detecting the onset of natural convection in the same experiment used to measure thermal conductivity. In the paper, the construction of the devices, the results of preliminary tests, and an evaluation of the obtainable accuracy are reported.

  3. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or...

  4. Kernel phase and kernel amplitude in Fizeau imaging

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin J. S.

    2016-12-01

    Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.
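
    In the linearized matrix formulation referred to here, the measured Fourier phases Φ are modeled as the intrinsic object phases Φ0 plus a linear map A of the pupil-plane aberrations φ; choosing K in the left null space of A makes the kernel phases KΦ self-calibrating (my summary of the standard construction, not a quotation of this paper):

```latex
\Phi \approx \Phi_{0} + \mathbf{A}\,\varphi,
\qquad
\mathbf{K}\mathbf{A} = 0
\;\Rightarrow\;
\mathbf{K}\Phi \approx \mathbf{K}\Phi_{0}.
```

    The kernel amplitudes proposed here play the analogous role for throughput and scintillation errors on the visibility amplitudes.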

  5. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including pieces and particles, regardless of whether edible or inedible, contained in any lot of almonds...

  6. A direct approach to Bergman kernel asymptotics for positive line bundles

    NASA Astrophysics Data System (ADS)

    Berman, Robert; Berndtsson, Bo; Sjöstrand, Johannes

    2008-10-01

    We give an elementary proof of the existence of an asymptotic expansion in powers of k of the Bergman kernel associated to L k , where L is a positive line bundle over a compact complex manifold. We also give an algorithm for computing the coefficients in the expansion.

  7. The Adaptive Kernel Neural Network

    DTIC Science & Technology

    1989-10-01

    A neural network architecture for clustering and classification is described. The Adaptive Kernel Neural Network (AKNN) is a density estimation...classification layer. The AKNN retains the inherent parallelism common in neural network models. Its relationship to the kernel estimator allows the network to

  8. Atmosphere expansion and mass loss of close-orbit giant exoplanets heated by stellar XUV. I. Modeling of hydrodynamic escape of upper atmospheric material

    SciTech Connect

    Shaikhislamov, I. F.; Khodachenko, M. L.; Sasunov, Yu. L.; Lammer, H.; Kislyakova, K. G.; Erkaev, N. V.

    2014-11-10

    In the present series of papers we propose a consistent description of the mass loss process. To study in a comprehensive way the effects of the intrinsic magnetic field of a close-orbit giant exoplanet (a so-called hot Jupiter) on atmospheric material escape and the formation of a planetary inner magnetosphere, we start with a hydrodynamic model of an upper atmosphere expansion in this paper. While considering a simple hydrogen atmosphere model, we focus on the self-consistent inclusion of the effects of radiative heating and ionization of the atmospheric gas with its consequent expansion in the outer space. Primary attention is paid to an investigation of the role of the specific conditions at the inner and outer boundaries of the simulation domain, under which different regimes of material escape (free and restricted flow) are formed. A comparative study is performed of different processes, such as X-ray and ultraviolet (XUV) heating, material ionization and recombination, H{sub 3}{sup +} cooling, adiabatic and Lyα cooling, and Lyα reabsorption. We confirm the basic consistency of the outcomes of our modeling with the results of other hydrodynamic models of expanding planetary atmospheres. In particular, we determine that, under the typical conditions of an orbital distance of 0.05 AU around a Sun-type star, a hot Jupiter plasma envelope may reach maximum temperatures up to ∼9000 K with a hydrodynamic escape speed of ∼9 km s{sup –1}, resulting in mass loss rates of ∼(4-7) · 10{sup 10} g s{sup –1}. In the range of the considered stellar-planetary parameters and XUV fluxes, that is close to the mass loss in the energy-limited case. The inclusion of planetary intrinsic magnetic fields in the model is a subject of the follow-up paper (Paper II).

  9. Robotic intelligence kernel

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between operator intervention and robot initiative, and may include multiple levels, with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.

  10. Flexible Kernel Memory

    PubMed Central

    Nowicki, Dimitri; Siegelmann, Hava

    2010-01-01

    This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, is in feature space, analogous to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on series of morphed faces. PMID:20552013

  11. Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Theodoridis, Sergios

    2008-12-01

    Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.

  12. Popping the Kernel Modeling the States of Matter

    ERIC Educational Resources Information Center

    Hitt, Austin; White, Orvil; Hanson, Debbie

    2005-01-01

    This article discusses how to use popcorn to engage students in model building and to teach them about the nature of matter. Popping kernels is a simple and effective method to connect the concepts of heat, motion, and volume with the different phases of matter. Before proceeding with the activity the class should discuss the nature of scientific…

  13. Popping the Kernel Modeling the States of Matter

    ERIC Educational Resources Information Center

    Hitt, Austin; White, Orvil; Hanson, Debbie

    2005-01-01

    This article discusses how to use popcorn to engage students in model building and to teach them about the nature of matter. Popping kernels is a simple and effective method to connect the concepts of heat, motion, and volume with the different phases of matter. Before proceeding with the activity the class should discuss the nature of scientific…

  14. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle...

  15. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off....

  16. The surface urban heat island response to urban expansion: A panel analysis for the conterminous United States.

    PubMed

    Li, Xiaoma; Zhou, Yuyu; Asrar, Ghassem R; Imhoff, Marc; Li, Xuecao

    2017-12-15

    Urban heat island (UHI), the phenomenon that urban areas experience higher temperatures than their surrounding rural areas, has significant socioeconomic and environmental impacts. With current and anticipated rapid urbanization, improved understanding of the response of UHI to urbanization is important for developing effective adaptation measures and mitigation strategies. Current studies mainly focus on a single city or a few big cities, and knowledge of the response of UHI to urbanization over large areas is limited. As a major indicator of urbanization, urban area size lends itself well to representation in prognostic models. However, we have little knowledge of how UHI responds to increases in urban area size, and of its spatial and temporal variation over large areas. In this study, we investigated the relationship between surface UHI (SUHI) and urban area size in the climate and ecological context, and its spatial and temporal variations, based on a panel analysis of about 5000 urban areas of 10 km² or larger in the conterminous U.S. We found a statistically significant positive relationship between SUHI and urban area size, and doubling the urban area size led to a SUHI increase as high as 0.7°C. The response of SUHI to the increase of urban area size shows spatial and temporal variations, with a stronger SUHI increase in the northern U.S. and during daytime and summer. Urban area size alone can explain as much as 87% of the variance of SUHI among the cities studied, but with large spatial and temporal variations. Urban area size shows a higher association with SUHI in regions where the thermal characteristics of the land cover surrounding the urban area are more homogeneous, such as in the eastern U.S., and in the summer months. This study provides a practical approach for large-scale assessment and modeling of the impact of urbanization on SUHI, both spatially and temporally.

  17. Source identity and kernel functions for Inozemtsev-type systems

    NASA Astrophysics Data System (ADS)

    Langmann, Edwin; Takemura, Kouichi

    2012-08-01

    The Inozemtsev Hamiltonian is an elliptic generalization of the differential operator defining the BCN trigonometric quantum Calogero-Sutherland model, and its eigenvalue equation is a natural many-variable generalization of the Heun differential equation. We present kernel functions for Inozemtsev Hamiltonians and Chalykh-Feigin-Veselov-Sergeev-type deformations thereof. Our main result is a solution of a heat-type equation for a generalized Inozemtsev Hamiltonian which is the source of all these kernel functions. Applications are given, including a derivation of simple exact eigenfunctions and eigenvalues of the Inozemtsev Hamiltonian.

  18. FUV Continuum in Flare Kernels Observed by IRIS

    NASA Astrophysics Data System (ADS)

    Daw, Adrian N.; Kowalski, Adam; Allred, Joel C.; Cauzzi, Gianna

    2016-05-01

    Fits to Interface Region Imaging Spectrograph (IRIS) spectra observed from bright kernels during the impulsive phase of solar flares are providing long-sought constraints on the UV/white-light continuum emission. Results of fits of continua plus numerous atomic and molecular emission lines to IRIS far ultraviolet (FUV) spectra of bright kernels are presented. Constraints on beam energy and cross sectional area are provided by cotemporaneous RHESSI, FERMI, ROSA/DST, IRIS slit-jaw and SDO/AIA observations, allowing for comparison of the observed IRIS continuum to calculations of non-thermal electron beam heating using the RADYN radiative-hydrodynamic loop model.

  19. Use of Meixner functions in estimation of Volterra kernels of nonlinear systems with delay.

    PubMed

    Asyali, Musa H; Juusola, Mikko

    2005-02-01

    The Volterra series representation of nonlinear systems is a mathematical analysis tool that has been successfully applied in many areas of the biological sciences, especially in modeling of the hemodynamic response. In this study, we explored the possibility of using discrete-time Meixner basis functions (MBFs) in estimating Volterra kernels of nonlinear systems. The problem of estimating Volterra kernels can be formulated as a multiple regression problem and solved using least squares estimation. By expanding system kernels with suitable basis functions, it is possible to reduce the number of parameters to be estimated and obtain better kernel estimates. Thus far, Laguerre basis functions have been widely used in this framework. However, research in signal processing indicates that when the kernels have a slow initial onset or delay, Meixner functions, which can be made to have a slow start, are more suitable in terms of providing a more accurate approximation to the kernels. We therefore compared the performance of Meixner functions in kernel estimation to that of Laguerre functions, in some test cases that we constructed and in a real experimental case where we studied the responses of photoreceptor cells of adult fruit flies (Drosophila melanogaster). Our results indicate that when there is a slow initial onset or delay, the MBF expansion provides better kernel estimates.
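
    The regression formulation described above fits in a few lines. Constructing true Meixner functions takes more care, so the snippet below substitutes a generic orthonormal decaying basis (QR-orthonormalized damped polynomials) purely to show the mechanics: filter the input through each basis function, solve least squares for the expansion coefficients, and reconstruct the first-order kernel. The basis, names, and toy data are all illustrative assumptions, not the authors' code.

```python
import numpy as np

def decaying_basis(M, J, a=0.8):
    """Orthonormal columns spanning the damped polynomials a**n * n**j;
    a stand-in for Laguerre/Meixner bases, for illustration only."""
    n = np.arange(M)
    B = np.column_stack([(a ** n) * n ** j for j in range(J)])
    Q, _ = np.linalg.qr(B)
    return Q  # shape (M, J)

def estimate_first_order_kernel(u, y, M=50, J=5):
    """Estimate a first-order Volterra kernel of length M from input u and
    output y by expanding the kernel in J basis functions."""
    B = decaying_basis(M, J)
    # Regressors: the input filtered by each basis function.
    V = np.column_stack([np.convolve(u, B[:, j])[: len(y)] for j in range(J)])
    coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
    return B @ coeffs  # reconstructed kernel estimate

# Toy check: recover a known decaying kernel from noisy input/output data.
rng = np.random.default_rng(0)
h_true = 0.9 ** np.arange(50) * np.sin(0.3 * np.arange(50))
u = rng.standard_normal(2000)
y = np.convolve(u, h_true)[:2000] + 0.01 * rng.standard_normal(2000)
h_hat = estimate_first_order_kernel(u, y)
```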

  20. Many Molecular Properties from One Kernel in Chemical Space

    SciTech Connect

    Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole

    2015-01-01

    We introduce property-independent kernels for machine learning modeling of arbitrarily many molecular properties. The kernels encode molecular structures for training sets of varying size, as well as similarity measures sufficiently diffuse in chemical space to sample over all training molecules. Provided the corresponding molecular reference properties, they enable the instantaneous generation of ML models which can be systematically improved through the addition of more data. This idea is exemplified for single-kernel-based modeling of internal energy, enthalpy, free energy, heat capacity, polarizability, electronic spread, zero-point vibrational energy, energies of frontier orbitals, HOMO-LUMO gap, and the highest fundamental vibrational wavenumber. Models of these properties are trained and tested using 112,000 organic molecules of similar size. The resulting models are discussed, as well as the kernels' use for generating and using other property models.
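
    The "one kernel, many properties" idea maps directly onto kernel ridge regression with a matrix of targets: the kernel matrix is built and factorized once and then reused for every property column. The sketch below uses a synthetic descriptor and random targets as placeholders; it is not the representation, kernel choice, or dataset of the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=10.0):
    # Pairwise Gaussian kernel between descriptor vectors (rows).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_many_properties(X_train, Y_train, lam=1e-6, sigma=10.0):
    """One kernel, many targets: solve (K + lam*I) A = Y for a coefficient
    matrix A with one column per molecular property."""
    K = gaussian_kernel(X_train, X_train, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X_train)), Y_train)

def predict_many_properties(X_test, X_train, A, sigma=10.0):
    return gaussian_kernel(X_test, X_train, sigma) @ A

# Synthetic stand-in: 200 "molecules" with 30-dimensional descriptors and
# 5 property columns (e.g., energy, polarizability, gap, ...).
rng = np.random.default_rng(2)
X, Y = rng.standard_normal((200, 30)), rng.standard_normal((200, 5))
A = fit_many_properties(X[:150], Y[:150])
Y_pred = predict_many_properties(X[150:], X[:150], A)
```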

  1. Force Field Benchmark of Organic Liquids: Density, Enthalpy of Vaporization, Heat Capacities, Surface Tension, Isothermal Compressibility, Volumetric Expansion Coefficient, and Dielectric Constant.

    PubMed

    Caleman, Carl; van Maaren, Paul J; Hong, Minyan; Hub, Jochen S; Costa, Luciano T; van der Spoel, David

    2012-01-10

    The chemical composition of small organic molecules is often very similar to amino acid side chains or the bases in nucleic acids, and hence there is no a priori reason why a molecular mechanics force field could not describe both organic liquids and biomolecules with a single parameter set. Here, we devise a benchmark for force fields in order to test the ability of existing force fields to reproduce some key properties of organic liquids, namely, the density, enthalpy of vaporization, the surface tension, the heat capacity at constant volume and pressure, the isothermal compressibility, the volumetric expansion coefficient, and the static dielectric constant. Well over 1200 experimental measurements were used for comparison to the simulations of 146 organic liquids. Novel polynomial interpolations of the dielectric constant (32 molecules), heat capacity at constant pressure (three molecules), and the isothermal compressibility (53 molecules) as a function of the temperature have been made, based on experimental data, in order to be able to compare simulation results to them. To compute the heat capacities, we applied the two phase thermodynamics method (Lin et al. J. Chem. Phys.2003, 119, 11792), which allows one to compute thermodynamic properties on the basis of the density of states as derived from the velocity autocorrelation function. The method is implemented in a new utility within the GROMACS molecular simulation package, named g_dos, and a detailed exposé of the underlying equations is presented. The purpose of this work is to establish the state of the art of two popular force fields, OPLS/AA (all-atom optimized potential for liquid simulation) and GAFF (generalized Amber force field), to find common bottlenecks, i.e., particularly difficult molecules, and to serve as a reference point for future force field development. To make for a fair playing field, all molecules were evaluated with the same parameter settings, such as thermostats and barostats

  2. Force Field Benchmark of Organic Liquids: Density, Enthalpy of Vaporization, Heat Capacities, Surface Tension, Isothermal Compressibility, Volumetric Expansion Coefficient, and Dielectric Constant

    PubMed Central

    2011-01-01

    The chemical composition of small organic molecules is often very similar to amino acid side chains or the bases in nucleic acids, and hence there is no a priori reason why a molecular mechanics force field could not describe both organic liquids and biomolecules with a single parameter set. Here, we devise a benchmark for force fields in order to test the ability of existing force fields to reproduce some key properties of organic liquids, namely, the density, enthalpy of vaporization, the surface tension, the heat capacity at constant volume and pressure, the isothermal compressibility, the volumetric expansion coefficient, and the static dielectric constant. Well over 1200 experimental measurements were used for comparison to the simulations of 146 organic liquids. Novel polynomial interpolations of the dielectric constant (32 molecules), heat capacity at constant pressure (three molecules), and the isothermal compressibility (53 molecules) as a function of the temperature have been made, based on experimental data, in order to be able to compare simulation results to them. To compute the heat capacities, we applied the two phase thermodynamics method (Lin et al. J. Chem. Phys.2003, 119, 11792), which allows one to compute thermodynamic properties on the basis of the density of states as derived from the velocity autocorrelation function. The method is implemented in a new utility within the GROMACS molecular simulation package, named g_dos, and a detailed exposé of the underlying equations is presented. The purpose of this work is to establish the state of the art of two popular force fields, OPLS/AA (all-atom optimized potential for liquid simulation) and GAFF (generalized Amber force field), to find common bottlenecks, i.e., particularly difficult molecules, and to serve as a reference point for future force field development. To make for a fair playing field, all molecules were evaluated with the same parameter settings, such as thermostats and barostats

  3. Lattice Constant, Resistivity, Specific Heat, and Thermal Expansion Studies in the Mixed Valent-Kondo System CeIn3-xSnx

    NASA Astrophysics Data System (ADS)

    Maury, Alvaro

    This thesis focuses on a study of the CeIn3-xSnx system, in which the terminal compound CeIn3 was known to be trivalent and exhibit Kondo behavior, while CeSn3 was thought to be weakly mixed valent. The object of the study was primarily to determine how the thermodynamic and transport properties evolve as we go from the mixed valent behavior of CeSn3 into the trivalent behavior of CeIn3, by alloying. From room temperature x-ray measurements, the lattice constants of the CeIn3-xSnx system follow a linear behavior for x < 1.8, indicative of a stable trivalent character of Ce. The lattice constants depart from the linear behavior at x = 1.8, suggesting a mixed valent region for 1.8 < x < 3.0. The resistivity measurements yield a maximum in the magnetic resistivity of CeIn3-xSnx that increases in the mixed valent region (i.e., 1.8 < x < 3.0) as the mixed valent-trivalent boundary is approached, that is, as x decreases; the maximum magnetic resistivity peaks at the transition region (i.e., x ≈ 1.8) and drops precipitously on the trivalent side. The coefficient of the electronic specific heat as measured by us and other workers, as well as the very low temperature values of the magnetic susceptibility, also increase as x decreases in the mixed valent region, peak at x ≈ 1.8, and decrease in the trivalent region (i.e., 0 < x < 1.8). The behavior of the three quantities (resistivity, electronic coefficient of specific heat, and zero temperature susceptibility) can be fitted to what a Fermi liquid theory of mixed valence predicts, but only if the valence of CeSn3 is taken to be 3.6 at T = 0 K and not 3.1 as previously thought. From thermal expansion measurements we obtain a behavior of the valence of CeSn3 that is well fitted by the same Fermi liquid theory of mixed valence. The temperature dependence of the magnetic resistivity of samples in the mixed valent region near the mixed valent-trivalent boundary is

  4. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
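
    The computational appeal of circulant structure mentioned above can be illustrated with a one-dimensional toy: for a stationary kernel evaluated on an equally spaced grid, the kernel matrix is Toeplitz and can be approximated by a circulant matrix that the FFT diagonalizes, giving O(n log n) matrix-vector products. The sketch below (plain NumPy, a Gaussian kernel, and a naive first-column circulant embedding) only illustrates that idea; it is not the multilevel algorithm of the paper.

        import numpy as np

        n = 512
        x = np.linspace(0.0, 1.0, n)                         # equally spaced inputs
        gamma = 50.0
        K = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)  # Gaussian kernel matrix (Toeplitz here)

        # circulant approximation: use the first column as the generating vector
        c = K[:, 0]
        lam = np.fft.fft(c)                                  # eigenvalues of the circulant matrix

        def circ_matvec(v):
            # O(n log n) product of the circulant approximation with a vector
            return np.real(np.fft.ifft(lam * np.fft.fft(v)))

        v = np.random.default_rng(0).normal(size=n)
        exact = K @ v
        approx = circ_matvec(v)
        print('relative error of circulant mat-vec:',
              np.linalg.norm(exact - approx) / np.linalg.norm(exact))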

  5. Relating dispersal and range expansion of California sea otters.

    PubMed

    Krkosek, Martin; Lauzon-Guay, Jean-Sébastien; Lewis, Mark A

    2007-06-01

    Linking dispersal and range expansion of invasive species has long challenged theoretical and quantitative ecologists. Subtle differences in dispersal can yield large differences in geographic spread, with speeds ranging from constant to rapidly increasing. We developed a stage-structured integrodifference equation (IDE) model of the California sea otter range expansion that occurred between 1914 and 1986. The non-spatial model, a linear matrix population model, was coupled to a suite of candidate dispersal kernels to form stage-structured IDEs. Demographic and dispersal parameters were estimated independent of range expansion data. Using a single dispersal parameter, alpha, we examined how well these stage-structured IDEs related small scale demographic and dispersal processes with geographic population expansion. The parameter alpha was estimated by fitting the kernels to dispersal data and by fitting the IDE model to range expansion data. For all kernels, the alpha estimate from range expansion data fell within the 95% confidence intervals of the alpha estimate from dispersal data. The IDE models with exponentially bounded kernels predicted invasion velocities that were captured within the 95% confidence bounds on the observed northbound invasion velocity. However, the exponentially bounded kernels yielded range expansions that were in poor qualitative agreement with range expansion data. An IDE model with fat (exponentially unbounded) tails and accelerating spatial spread yielded the best qualitative match. This model explained 94% and 97% of the variation in northbound and southbound range expansions when fit to range expansion data. These otters may have been fat-tailed accelerating invaders or they may have followed a piece-wise linear spread first over kelp forests and then over sandy habitats. Further, habitat-specific dispersal data could resolve these explanations.
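
    As a rough illustration of how an integrodifference equation couples growth and a dispersal kernel, the sketch below iterates a scalar (not stage-structured) IDE with a Laplace kernel and tracks the position of the invasion front. The growth rate, kernel scale, grid, and detection threshold are made-up values, not the sea otter estimates from the paper.

        import numpy as np

        dx = 0.5
        x = np.arange(-400.0, 400.0 + dx, dx)                # spatial grid in km
        alpha = 5.0                                          # dispersal scale parameter (assumed)
        R0 = 1.2                                             # per-step geometric growth rate (assumed)

        kern = np.exp(-np.abs(x) / alpha) / (2.0 * alpha)    # Laplace dispersal kernel, integrates to 1

        n = np.where(np.abs(x) < 5.0, 1.0, 0.0)              # initial local population
        fronts = []
        for t in range(40):
            # growth followed by dispersal: n_{t+1}(x) = integral of k(x - y) * R0 * n_t(y) dy
            n = np.convolve(R0 * n, kern, mode='same') * dx
            occupied = x[n > 1e-3]                           # detection threshold (assumed)
            fronts.append(occupied.max() if occupied.size else 0.0)

        speeds = np.diff(fronts)
        print('front position, last steps (km):', np.round(fronts[-5:], 1))
        print('late-time spread per step (km): ', np.round(speeds[-5:], 2))

    With an exponentially bounded kernel such as this one, the printed spread per step settles to a constant, which is exactly the behavior the paper contrasts with the accelerating spread produced by fat-tailed kernels.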

  6. RTOS kernel in portable electrocardiograph

    NASA Astrophysics Data System (ADS)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which a uC/OS-II RTOS can be embedded. The decision to use the kernel is based on its benefits: the license for educational use and its intrinsic time control and peripheral management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of the resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is carried out, resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on an RTOS.

  7. Volterra series truncation and kernel estimation of nonlinear systems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Billings, S. A.

    2017-02-01

    The Volterra series model is a direct generalisation of the linear convolution integral and is capable of displaying the intrinsic features of a nonlinear system in a simple and easy-to-apply way. Nonlinear system analysis using Volterra series is normally based on the analysis of its frequency-domain kernels and a truncated description. But the estimation of Volterra kernels and the truncation of Volterra series are coupled with each other. In this paper, a novel complex-valued orthogonal least squares algorithm is developed. The new algorithm provides a powerful tool to determine which terms should be included in the Volterra series expansion and to estimate the kernels, and thus solves the two problems together. The estimated results are compared with those determined using the analytical expressions of the kernels to validate the method. To further evaluate the effectiveness of the method, the physical parameters of the system are also extracted from the measured kernels. Simulation studies demonstrate that the new approach not only can truncate the Volterra series expansion and estimate the kernels of a weakly nonlinear system, but also can indicate the applicability of the Volterra series analysis in a severely nonlinear system case.
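
    To make the term-selection idea concrete, here is a heavily simplified, real-valued forward-selection sketch: a toy second-order Volterra-type system is simulated, a dictionary of candidate linear and quadratic lagged terms is built, and terms are added greedily by their correlation with the current residual. The complex-valued, frequency-domain orthogonal least squares algorithm of the paper is considerably more involved; everything below (system, dictionary, number of selected terms) is an assumption made for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 500
        u = rng.normal(size=N)
        y = np.zeros(N)
        for k in range(2, N):
            # toy weakly nonlinear system with first- and second-order kernels
            y[k] = 0.8 * u[k - 1] - 0.3 * u[k - 2] + 0.4 * u[k - 1] ** 2 + 0.02 * rng.normal()

        # candidate regressors: lagged linear and quadratic terms
        cands = {
            'u[k-1]':       u[1:-1],
            'u[k-2]':       u[:-2],
            'u[k-1]^2':     u[1:-1] ** 2,
            'u[k-2]^2':     u[:-2] ** 2,
            'u[k-1]u[k-2]': u[1:-1] * u[:-2],
        }
        target = y[2:]

        selected, residual = [], target.copy()
        for _ in range(3):                       # pick the three most useful terms
            best = max(cands, key=lambda name: np.abs(np.dot(cands[name], residual))
                                               / np.linalg.norm(cands[name]))
            selected.append(best)
            X = np.column_stack([cands[s] for s in selected])
            coef, *_ = np.linalg.lstsq(X, target, rcond=None)
            residual = target - X @ coef
        print('selected terms:', selected)
        print('coefficients:  ', np.round(coef, 3))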

  8. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

  9. Universal Expansion.

    ERIC Educational Resources Information Center

    McArdle, Heather K.

    1997-01-01

    Describes a week-long activity for general to honors-level students that addresses Hubble's law and the universal expansion theory. Uses a discrepant event-type activity to lead up to the abstract principles of the universal expansion theory. (JRH)

  10. Universal Expansion.

    ERIC Educational Resources Information Center

    McArdle, Heather K.

    1997-01-01

    Describes a week-long activity for general to honors-level students that addresses Hubble's law and the universal expansion theory. Uses a discrepant event-type activity to lead up to the abstract principles of the universal expansion theory. (JRH)

  11. New analytical TEMOM solutions for a class of collision kernels in the theory of Brownian coagulation

    NASA Astrophysics Data System (ADS)

    He, Qing; Shchekin, Alexander K.; Xie, Ming-Liang

    2015-06-01

    New analytical solutions in the theory of Brownian coagulation with a wide class of collision kernels have been found by using the Taylor-series expansion method of moments (TEMOM). It has been shown, for different power exponents in this class of collision kernels and for arbitrary initial conditions, that the relative rates of change of the zeroth and second moments of the particle volume distribution have the same long-time behavior with power exponent -1, while the dimensionless particle moment related to the geometric standard deviation tends to a constant value equal to 2. The power exponent in the collision kernel of the class studied affects the time of approach to the self-preserving distribution: the smaller the value of the exponent, the longer the time. It has also been shown that a constant collision kernel gives results for the moments in Brownian coagulation that are very close to those in the continuum regime.

  12. Low-temperature heat capacities of CaAl2SiO6 glass and pyroxene and thermal expansion of CaAl2SiO6 pyroxene.

    USGS Publications Warehouse

    Haselton, H.T.; Hemingway, B.S.; Robie, R.A.

    1984-01-01

    Low-T heat capacities (5-380 K) have been measured by adiabatic calorimetry for synthetic CaAl2SiO6 glass and pyroxene. High-T unit cell parameters were measured for CaAl2SiO6 pyroxene by means of a Nonius Guinier-Lenne powder camera in order to determine the mean coefficient of thermal expansion in the T range 25-1200 °C. -J.A.Z.

  13. Travel-Time and Amplitude Sensitivity Kernels

    DTIC Science & Technology

    2011-09-01

    amplitude sensitivity kernels shown in the lower panels concentrate about the corresponding eigenrays. Each 3D kernel exhibits a broad negative...in 2 and 3 dimensions have similar shapes to corresponding travel-time sensitivity kernels (TSKs), centered about the respective eigenrays

  14. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  15. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  16. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  17. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  18. Polar lipids from oat kernels

    USDA-ARS?s Scientific Manuscript database

    Oat (Avena sativa L.) kernels appear to contain much higher polar lipid concentrations than other plant tissues. We have extracted, identified, and quantified polar lipids from 18 oat genotypes grown in replicated plots in three environments in order to determine genotypic or environmental variation...

  19. Adaptive wiener image restoration kernel

    DOEpatents

    Yuan, Ding

    2007-06-05

    A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system Optical Transfer Function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image using a Wiener restoration kernel.
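
    The record above describes the classical frequency-domain Wiener restoration step. A minimal NumPy sketch of that step is given below, assuming a known Gaussian blur as the point spread function and a single scalar noise-to-signal ratio; the adaptive, locally varying behavior of the patented filter is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 128
        yy, xx = np.mgrid[0:n, 0:n]
        scene = ((xx // 16 + yy // 16) % 2).astype(float)       # synthetic checkerboard object

        # Gaussian point spread function and its optical transfer function (OTF)
        sigma = 2.0
        psf = np.exp(-(((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * sigma ** 2)))
        psf /= psf.sum()
        H = np.fft.fft2(np.fft.ifftshift(psf))                  # OTF: centered PSF shifted to origin

        blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
        noisy = blurred + 0.01 * rng.normal(size=scene.shape)

        # Wiener restoration kernel: conj(H) / (|H|^2 + NSR), with NSR an assumed constant
        nsr = 1e-3
        G = np.fft.fft2(noisy)
        restored = np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + nsr)))

        err = lambda a: np.sqrt(np.mean((a - scene) ** 2))
        print('RMS error blurred: %.4f  restored: %.4f' % (err(noisy), err(restored)))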

  20. Diffusion Kernels on Statistical Manifolds

    DTIC Science & Technology

    2004-01-16

    International Press, 1994. Michael Spivak. Differential Geometry, volume 1. Publish or Perish, 1979. Chengxiang Zhai and John Lafferty. A study of smoothing...construction of information diffusion kernels, since these concepts are not widely used in machine learning. We refer to Spivak (1979) for details and further

  1. Identification of quantitative trait loci for popping traits and kernel characteristics in sorghum grain

    USDA-ARS?s Scientific Manuscript database

    Popped grain sorghum has developed a niche among specialty snack-food consumers. In contrast to popcorn, sorghum has not benefited from persistent selective breeding for popping efficiency and kernel expansion ratio. While recent studies have already demonstrated that popping characteristics are h...

  2. Trajectory, Development, and Temperature of Spark Kernels Exiting into Quiescent Air (Preprint)

    DTIC Science & Technology

    2012-04-01

    measurements of the Hencken burner flames. Jay Gore is the academic advisor for the corresponding author. His tutelage is gratefully acknowledged...165-184. Au, S., Haley, R., Smy, P., “The Influence of the Igniter-Induced Blast Wave Upon the Initial Volume and Expansion of the Flame Kernel

  3. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    SciTech Connect

    Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
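
    For readers unfamiliar with the kernels named above, the snippet below gives a plain reference implementation of the sparse matrix-vector multiply (SpMV) on CSR data in Python. It is meant only to show the operation that the auto-tuner optimizes, not the tuned, platform-specific variants (register blocking, cache blocking, SIMD) that the paper's code generators produce.

        import numpy as np

        def spmv_csr(data, indices, indptr, x):
            """Reference y = A @ x for a matrix stored in CSR form (data, indices, indptr)."""
            n_rows = len(indptr) - 1
            y = np.zeros(n_rows)
            for i in range(n_rows):
                # multiply the nonzeros of row i against the gathered entries of x
                start, end = indptr[i], indptr[i + 1]
                y[i] = np.dot(data[start:end], x[indices[start:end]])
            return y

        # tiny example: a 1-D Laplacian stencil matrix stored in CSR form
        n = 6
        rows, cols, vals = [], [], []
        for i in range(n):
            for j, v in ((i - 1, -1.0), (i, 2.0), (i + 1, -1.0)):
                if 0 <= j < n:
                    rows.append(i); cols.append(j); vals.append(v)
        indptr = np.searchsorted(rows, np.arange(n + 1))
        data, indices = np.array(vals), np.array(cols)

        x = np.arange(1.0, n + 1.0)
        print(spmv_csr(data, indices, indptr, x))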

  4. The flare kernel in the impulsive phase

    NASA Technical Reports Server (NTRS)

    Dejager, C.

    1986-01-01

    The impulsive phase of a flare is characterized by impulsive bursts of X-ray and microwave radiation, related to impulsive footpoint heating up to 50 or 60 MK, by upward gas velocities (150 to 400 km/sec) and by a gradual increase of the flare's thermal energy content. These phenomena, as well as non-thermal effects, are all related to the impulsive energy injection into the flare. The available observations are also quantitatively consistent with a model in which energy is injected into the flare by beams of energetic electrons, causing ablation of chromospheric gas, followed by convective rise of gas. Thus, a hole is burned into the chromosphere; at the end of the impulsive phase of an average flare the lower part of that hole is situated about 1800 km above the photosphere. H alpha and other optical and UV line emission is radiated by a thin layer (approx. 20 km) at the bottom of the flare kernel. The upward rising and outward streaming gas cools down by conduction in about 45 s. The non-thermal effects in the initial phase are due to curtailing of the energy distribution function by escape of energetic electrons. The single flux tube model of a flare does not fit with these observations; instead we propose the spaghetti-bundle model. Microwave and gamma-ray observations suggest the occurrence of dense flare knots of approx. 800 km diameter, and of high temperature. Future observations should concentrate on locating the microwave/gamma-ray sources, and on determining the kernel's fine structure and the related multi-loop structure of the flaring area.

  5. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised, and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show a clear gain compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
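
    A toy flavour of the "multi-layered combination of kernels" idea can be written in a few lines: combine elementary kernels with non-negative weights, then apply an elementwise exponential as the nonlinearity, which provably keeps the Gram matrix positive semi-definite. The paper's own activation functions and learned weights differ; the weights, kernels, and data below are placeholders chosen for illustration.

        import numpy as np

        def rbf(X, Y, gamma=1.0):
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def poly(X, Y, degree=2):
            return (X @ Y.T + 1.0) ** degree

        def deep_kernel(X, Y, w1=0.7, w2=0.3):
            # layer 1: non-negative combination of elementary kernels (PSD)
            K1 = w1 * rbf(X, Y) + w2 * poly(X, Y)
            # layer 2: elementwise exp as the activation; exp of a kernel is a kernel,
            # being a limit of non-negative combinations of Hadamard powers
            return np.exp(K1)

        X = np.random.default_rng(3).normal(size=(40, 5)) * 0.5
        K = deep_kernel(X, X)
        print('min eigenvalue (should be >= 0 up to rounding):',
              np.linalg.eigvalsh(K).min())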

  6. Optimization of numerical orbitals using the Helmholtz kernel.

    PubMed

    Solala, Eelis; Losilla, Sergio A; Sundholm, Dage; Xu, Wenhua; Parkkinen, Pauli

    2017-02-28

    We present an integration scheme for optimizing the orbitals in numerical electronic structure calculations on general molecules. The orbital optimization is performed by integrating the Helmholtz kernel in the double bubble and cube basis, where bubbles represent the steep part of the functions in the vicinity of the nuclei, whereas the remaining cube part is expanded on an equidistant three-dimensional grid. The bubbles' part is treated by using one-center expansions of the Helmholtz kernel in spherical harmonics multiplied with modified spherical Bessel functions of the first and second kinds. The angular part of the bubble functions can be integrated analytically, whereas the radial part is integrated numerically. The cube part is integrated using a similar method as we previously implemented for numerically integrating two-electron potentials. The behavior of the integrand of the auxiliary dimension introduced by the integral transformation of the Helmholtz kernel has also been investigated. The correctness of the implementation has been checked by performing Hartree-Fock self-consistent-field calculations on H2, H2O, and CO. The obtained energies are compared with reference values in the literature showing that an accuracy of 10⁻⁴ to 10⁻⁷ Eh can be obtained with our approach.

  7. Optimization of numerical orbitals using the Helmholtz kernel

    NASA Astrophysics Data System (ADS)

    Solala, Eelis; Losilla, Sergio A.; Sundholm, Dage; Xu, Wenhua; Parkkinen, Pauli

    2017-02-01

    We present an integration scheme for optimizing the orbitals in numerical electronic structure calculations on general molecules. The orbital optimization is performed by integrating the Helmholtz kernel in the double bubble and cube basis, where bubbles represent the steep part of the functions in the vicinity of the nuclei, whereas the remaining cube part is expanded on an equidistant three-dimensional grid. The bubbles' part is treated by using one-center expansions of the Helmholtz kernel in spherical harmonics multiplied with modified spherical Bessel functions of the first and second kinds. The angular part of the bubble functions can be integrated analytically, whereas the radial part is integrated numerically. The cube part is integrated using a similar method as we previously implemented for numerically integrating two-electron potentials. The behavior of the integrand of the auxiliary dimension introduced by the integral transformation of the Helmholtz kernel has also been investigated. The correctness of the implementation has been checked by performing Hartree-Fock self-consistent-field calculations on H2, H2O, and CO. The obtained energies are compared with reference values in the literature showing that an accuracy of 10⁻⁴ to 10⁻⁷ Eh can be obtained with our approach.

  8. A reduced volumetric expansion factor plot

    NASA Technical Reports Server (NTRS)

    Hendricks, R. C.

    1979-01-01

    A reduced volumetric expansion factor plot was constructed for simple fluids which is suitable for engineering computations in heat transfer. Volumetric expansion factors were found useful in correlating heat transfer data over a wide range of operating conditions including liquids, gases and the near critical region.

  9. A reduced volumetric expansion factor plot

    NASA Technical Reports Server (NTRS)

    Hendricks, R. C.

    1979-01-01

    A reduced volumetric expansion factor plot has been constructed for simple fluids which is suitable for engineering computations in heat transfer. Volumetric expansion factors have been found useful in correlating heat transfer data over a wide range of operating conditions including liquids, gases and the near critical region.

  10. Nonlinear projection trick in kernel methods: an alternative to the kernel trick.

    PubMed

    Kwak, Nojun

    2013-12-01

    In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so-called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named the nonlinear projection trick in contrast to the kernel trick. With this technique, the applicability of the kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses L1-norm instead of L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach.
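
    Because the abstract states the construction explicitly (an eigenvalue decomposition of the kernel matrix giving an explicit reduced-dimensional map), a direct NumPy sketch is easy to give. The data and kernel choice below are arbitrary, and details such as the eigenvalue truncation threshold follow common practice rather than the paper verbatim.

        import numpy as np

        rng = np.random.default_rng(4)
        X = rng.normal(size=(60, 3))
        gamma = 0.5
        k = lambda A, B: np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

        K = k(X, X)                                   # training kernel matrix
        w, U = np.linalg.eigh(K)
        keep = w > 1e-10                              # drop numerically zero directions
        w, U = w[keep], U[:, keep]

        # explicit map of the training data: the columns of Y reproduce the kernel, Y.T @ Y == K
        Y = np.diag(np.sqrt(w)) @ U.T
        print('max |Y^T Y - K|:', np.abs(Y.T @ Y - K).max())

        # a new point x maps to Lambda^(-1/2) U^T k(X, x), so dot products equal kernel values
        x_new = rng.normal(size=(1, 3))
        y_new = np.diag(1.0 / np.sqrt(w)) @ U.T @ k(X, x_new)
        print('max |Y^T y_new - k(X, x_new)|:', np.abs(Y.T @ y_new - k(X, x_new)).max())

    Any algorithm that only needs dot products can then be run directly on the columns of Y, which is the point of the trick.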

  11. QTL mapping of 1000-kernel weight, kernel length, and kernel width in bread wheat (Triticum aestivum L.).

    PubMed

    Ramya, P; Chaubal, A; Kulkarni, K; Gupta, L; Kadoo, N; Dhaliwal, H S; Chhuneja, P; Lagu, M; Gupta, V

    2010-01-01

    Kernel size and morphology influence the market value and milling yield of bread wheat (Triticum aestivum L.). The objective of this study was to identify quantitative trait loci (QTLs) controlling kernel traits in hexaploid wheat. We recorded 1000-kernel weight, kernel length, and kernel width for 185 recombinant inbred lines from the cross Rye Selection 111 × Chinese Spring grown in 2 agro-climatic regions in India for many years. Composite interval mapping (CIM) was employed for QTL detection using a linkage map with 169 simple sequence repeat (SSR) markers. For 1000-kernel weight, 10 QTLs were identified on wheat chromosomes 1A, 1D, 2B, 2D, 4B, 5B, and 6B, whereas 6 QTLs for kernel length were detected on 1A, 2B, 2D, 5A, 5B and 5D. Chromosomes 1D, 2B, 2D, 4B, 5B and 5D had 9 QTLs for kernel width. Chromosomal regions with QTLs detected consistently for multiple year-location combinations were identified for each trait. Pleiotropic QTLs were found on chromosomes 2B, 2D, 4B, and 5B. The identified genomic regions controlling wheat kernel size and shape can be targeted during further studies for their genetic dissection.

  12. Filters, reproducing kernel, and adaptive meshfree method

    NASA Astrophysics Data System (ADS)

    You, Y.; Chen, J.-S.; Lu, H.

    Reproducing kernel, with its intrinsic feature of moving averaging, can be utilized as a low-pass filter with scale decomposition capability. The discrete convolution of two nth order reproducing kernels with arbitrary support size in each kernel results in a filtered reproducing kernel function that has the same reproducing order. This property is utilized to separate the numerical solution into an unfiltered lower order portion and a filtered higher order portion. As such, the corresponding high-pass filter of this reproducing kernel filter can be used to identify the locations of high gradient, and consequently serves as an operator for error indication in meshfree analysis. In conjunction with the naturally conforming property of the reproducing kernel approximation, a meshfree adaptivity method is also proposed.

  13. Image texture analysis of crushed wheat kernels

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Martin, C. R.; Steele, James L.; Dempster, Richard E.

    1992-03-01

    The development of new approaches for wheat hardness assessment may impact the grain industry in marketing, milling, and breeding. This study used image texture features for wheat hardness evaluation. Application of digital imaging to grain for grading purposes is principally based on morphometrical (shape and size) characteristics of the kernels. A composite sample of 320 kernels for 17 wheat varieties was collected after testing and crushing with a single kernel hardness characterization meter. Six wheat classes were represented: HRW, HRS, SRW, SWW, Durum, and Club. In this study, parameters which characterize texture or spatial distribution of gray levels of an image were determined and used to classify images of crushed wheat kernels. The texture parameters of crushed wheat kernel images were different depending on class, hardness and variety of the wheat. Image texture analysis of crushed wheat kernels showed promise for use in class, hardness, milling quality, and variety discrimination.

  14. Solving the homogeneous Boltzmann equation with arbitrary scattering kernel

    NASA Astrophysics Data System (ADS)

    Hohenegger, A.

    2009-03-01

    With applications in astroparticle physics in mind, we generalize a method for the solution of the nonlinear, space-homogeneous Boltzmann equation with an isotropic distribution function to arbitrary matrix elements. The method is based on the expansion of the scattering kernel in terms of two cosines of the “scattering angles.” The scattering functions used by previous authors in particle physics for matrix elements in the Fermi approximation are retrieved as lowest order results in this expansion. The method is designed for the unified treatment of reactive mixtures of particles obeying different scattering laws, including the quantum statistical terms for blocking or stimulated emission, in possibly large networks of Boltzmann equations. Although our notation is the relativistic one, as it is used in astroparticle physics, the results can also be applied in the classical case.

  15. A method for computing the kernel of the downwash integral equation for arbitrary complex frequencies

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.; Rowe, W. S.

    1984-01-01

    For the design of active controls to stabilize flight vehicles, which requires the use of unsteady aerodynamics that are valid for arbitrary complex frequencies, algorithms are derived for evaluating the nonelementary part of the kernel of the integral equation that relates unsteady pressure to downwash. This part of the kernel is separated into an infinite limit integral that is evaluated using Bessel and Struve functions and into a finite limit integral that is expanded in series and integrated termwise in closed form. The developed series expansions gave reliable answers for all complex reduced frequencies and executed faster than exponential approximations for many pressure stations.

  16. Several new kernel estimators for population abundance

    NASA Astrophysics Data System (ADS)

    Albadareen, Baker; Ismail, Noriszura

    2017-04-01

    The parameter f(0) is crucial in line transect sampling, which is regularly used for computing population abundance in wildlife. The usual kernel estimator of f(0) has a high negative bias. Our study proposes several new estimators which are shown to be more efficient than the usual kernel estimator. A simulation technique is adopted to compare the performance of the proposed estimators with the classical kernel estimator. An application of the new estimators to a real data set is discussed.
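
    For context, the classical estimator that the proposed variants aim to improve can be written in one line: with perpendicular distances reflected about zero and a Gaussian kernel, f(0) is estimated as (2/(n·h))·sum of K(x_i/h). The sketch below is that textbook estimator on simulated half-normal distances, together with the standard line-transect density estimate D = n·f(0)/(2L); it is not one of the new estimators of the paper, and the bandwidth rule and transect length are assumptions.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(5)
        sigma_true = 20.0
        d = np.abs(rng.normal(scale=sigma_true, size=200))   # perpendicular detection distances

        def f0_kernel(dists, h):
            # reflection about 0 with a Gaussian kernel K = standard normal pdf
            n = len(dists)
            return 2.0 / (n * h) * np.sum(norm.pdf(dists / h))

        h = 1.06 * d.std(ddof=1) * len(d) ** (-0.2)          # a rule-of-thumb bandwidth (assumed)
        f0 = f0_kernel(d, h)
        L = 10_000.0                                          # total transect length, same units (assumed)
        D_hat = len(d) * f0 / (2.0 * L)
        print('f(0) estimate: %.4f  (true half-normal value %.4f)'
              % (f0, 2.0 / (np.sqrt(2.0 * np.pi) * sigma_true)))
        print('density estimate per unit area:', D_hat)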

  17. Diffusion Map Kernel Analysis for Target Classification

    DTIC Science & Technology

    2010-06-01

    Gaussian and Polynomial kernels are most familiar from support vector machines. The Laplacian and Rayleigh were introduced previously in [7]. ...Cancer • Clev. Heart: Heart Disease Data Set, Cleveland • Wisc. BC: Wisconsin Breast Cancer Original • Sonar2: Shallow Water Acoustic Toolset [9...the Rayleigh kernel captures the embedding with an average PC of 77.3% and a slightly higher PFA than the Gaussian kernel. For the Wisc. BC

  18. Numerical simulations on influence of urban land cover expansion and anthropogenic heat release on urban meteorological environment in Pearl River Delta

    NASA Astrophysics Data System (ADS)

    Zhang, Ning; Wang, Xuemei; Chen, Yan; Dai, Wei; Wang, Xueyuan

    2016-11-01

    Urbanization is an extreme way in which human beings change the land use/land cover of the Earth's surface, and anthropogenic heat release occurs at the same time. In this paper, the anthropogenic heat release parameterization scheme in the Weather Research and Forecasting model is modified to consider the spatial heterogeneity of the release, and the impacts of land use change and anthropogenic heat release on urban boundary layer structure in the Pearl River Delta, China, are studied with a series of numerical experiments. The results show that the anthropogenic heat release contributes nearly 75% to the urban heat island intensity in our studied period. The impact of anthropogenic heat release on near-surface specific humidity is very weak, but that on relative humidity is apparent due to the near-surface air temperature change. The near-surface wind speed decreases after the local land use is changed to urban type due to the increased land surface roughness, but the anthropogenic heat release leads to increases in the low-level wind speed and decreases above it in the urban boundary layer because the anthropogenic heat release reduces the boundary layer stability and enhances the vertical mixing.

  19. Kernel earth mover's distance for EEG classification.

    PubMed

    Daliri, Mohammad Reza

    2013-07-01

    Here, we propose a new kernel approach based on the earth mover's distance (EMD) for electroencephalography (EEG) signal classification. The EEG time series are first transformed into histograms in this approach. The distance between these histograms is then computed using the EMD in a pair-wise manner. We bring the distances into a kernel form called kernel EMD. The support vector classifier can then be used for the classification of EEG signals. The experimental results on the real EEG data show that the new kernel method is very effective, and can classify the data with higher accuracy than traditional methods.
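
    A bare-bones version of the pipeline described above (signals, their value distributions, pairwise EMD, and an exponentiated distance used as a precomputed SVM kernel) can be sketched with SciPy and scikit-learn on synthetic "EEG-like" segments. Note that exp(-EMD/sigma) is not guaranteed to be positive semi-definite for arbitrary data, and the feature construction here is a toy stand-in for the paper's histogram step.

        import numpy as np
        from scipy.stats import wasserstein_distance
        from sklearn.svm import SVC

        rng = np.random.default_rng(6)
        # synthetic two-class "EEG" segments: the classes differ in amplitude distribution
        n_per_class, length = 30, 256
        segs = np.vstack([rng.normal(0, 1.0, size=(n_per_class, length)),
                          rng.normal(0, 1.6, size=(n_per_class, length))])
        labels = np.array([0] * n_per_class + [1] * n_per_class)

        n = len(segs)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                D[i, j] = D[j, i] = wasserstein_distance(segs[i], segs[j])

        sigma = np.median(D[D > 0])
        K = np.exp(-D / sigma)                        # "kernel EMD"-style Gram matrix

        train = np.arange(n) % 2 == 0                 # simple even/odd split
        clf = SVC(kernel='precomputed').fit(K[np.ix_(train, train)], labels[train])
        acc = clf.score(K[np.ix_(~train, train)], labels[~train])
        print('held-out accuracy:', acc)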

  20. Modeling an Operating System Kernel

    NASA Astrophysics Data System (ADS)

    Börger, Egon; Craig, Iain

    We define a high-level model of an operating system (OS) kernel which can be refined to concrete systems in various ways, reflecting alternative design decisions. We aim at an exposition practitioners and lecturers can use effectively to communicate (document and teach) design ideas for operating system functionality at a conceptual level. The operational and rigorous nature of our definition provides a basis for the practitioner to validate and verify precisely stated system properties of interest, thus helping to make OS code reliable. As a by-product we introduce a novel combination of parallel and interruptible sequential Abstract State Machine steps.

  1. Molecular Hydrodynamics from Memory Kernels

    NASA Astrophysics Data System (ADS)

    Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin

    2016-04-01

    The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^(-3/2). We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius.

  2. Thermophysical properties of ilvaite CaFe2(2+)Fe(3+)Si2O7O(OH); heat capacity from 7 to 920 K and thermal expansion between 298 and 856 K

    USGS Publications Warehouse

    Robie, R.A.; Evans, H.T.; Hemingway, B.S.

    1988-01-01

    The heat capacity of ilvaite from Seriphos, Greece was measured by adiabatic shield calorimetry (6.4 to 380.7 K) and by differential scanning calorimetry (340 to 950 K). The thermal expansion of ilvaite was also investigated, by X-ray methods, between 308 and 853 K. At 298.15 K the standard molar heat capacity and entropy for ilvaite are 298.9±0.6 and 292.3±0.6 J/(mol·K), respectively. Between 333 and 343 K ilvaite changes from monoclinic to orthorhombic. The antiferromagnetic transition is shown by a hump in Cp° with a Néel temperature of 121.9±0.5 K. A rounded hump in Cp° between 330 and 400 K may possibly arise from the thermally activated electron delocalization (hopping) known to take place in this temperature region. © 1988 Springer-Verlag.

  3. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  4. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...

  5. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...

  6. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  7. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  8. ELECTROMAGNETISM, OPTICS, ACOUSTICS, HEAT TRANSFER, CLASSICAL MECHANICS, AND FLUID DYNAMICS: Double Symplectic Eigenfunction Expansion Method of Free Vibration of Rectangular Thin Plates

    NASA Astrophysics Data System (ADS)

    Wang, Hua; Alatancang; Huang, Jun-Jie

    2009-12-01

    The free vibration problem of rectangular thin plates is rewritten as a new upper triangular matrix differential system. For the associated operator matrix, we find that the two diagonal block operators are Hamiltonian. Moreover, the existence and completeness of normed symplectic orthogonal eigenfunction systems of these two block operators are demonstrated. Based on the completeness, the general solution of the free vibration of rectangular thin plates is given by the double symplectic eigenfunction expansion method.

  9. Kernel current source density method.

    PubMed

    Potworowski, Jan; Jakuczun, Wit; Lȩski, Szymon; Wójcik, Daniel

    2012-02-01

    Local field potentials (LFP), the low-frequency part of extracellular electrical recordings, are a measure of the neural activity reflecting dendritic processing of synaptic inputs to neuronal populations. To localize synaptic dynamics, it is convenient, whenever possible, to estimate the density of transmembrane current sources (CSD) generating the LFP. In this work, we propose a new framework, the kernel current source density method (kCSD), for nonparametric estimation of CSD from LFP recorded from arbitrarily distributed electrodes using kernel methods. We test specific implementations of this framework on model data measured with one-, two-, and three-dimensional multielectrode setups. We compare these methods with the traditional approach through numerical approximation of the Laplacian and with the recently developed inverse current source density methods (iCSD). We show that iCSD is a special case of kCSD. The proposed method opens up new experimental possibilities for CSD analysis from existing or new recordings on arbitrarily distributed electrodes (not necessarily on a grid), which can be obtained in extracellular recordings of single unit activity with multiple electrodes.

  10. KERNEL PHASE IN FIZEAU INTERFEROMETRY

    SciTech Connect

    Martinache, Frantz

    2010-11-20

    The detection of high contrast companions at small angular separation appears feasible in conventional direct images using the self-calibration properties of interferometric observable quantities. The friendly notion of closure phase, which is key to the recent observational successes of non-redundant aperture masking interferometry used with adaptive optics, appears to be one example of a wide family of observable quantities that are not contaminated by phase noise. In the high-Strehl regime, soon to be available thanks to the coming generation of extreme adaptive optics systems on ground-based telescopes, and already available from space, closure phase like information can be extracted from any direct image, even taken with a redundant aperture. These new phase-noise immune observable quantities, called kernel phases, are determined a priori from the knowledge of the geometry of the pupil only. Re-analysis of archive data acquired with the Hubble Space Telescope NICMOS instrument using this new kernel-phase algorithm demonstrates the power of the method as it clearly detects and locates with milliarcsecond precision a known companion to a star at angular separation less than the diffraction limit.

  11. Improving the Bandwidth Selection in Kernel Equating

    ERIC Educational Resources Information Center

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
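
    Silverman's rule of thumb referred to above has a closed form; for a Gaussian kernel it is commonly written h = 0.9·min(s, IQR/1.349)·n^(-1/5). A small generic helper (not the kernel-equating-specific adaptation proposed in the article) is:

        import numpy as np

        def silverman_bandwidth(x):
            """Rule-of-thumb bandwidth for a Gaussian kernel density estimate."""
            x = np.asarray(x, dtype=float)
            n = x.size
            s = x.std(ddof=1)
            iqr = np.subtract(*np.percentile(x, [75, 25]))
            return 0.9 * min(s, iqr / 1.349) * n ** (-0.2)

        scores = np.random.default_rng(7).normal(loc=50, scale=10, size=400)
        print('Silverman bandwidth:', round(silverman_bandwidth(scores), 3))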

  12. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms. PMID:28293256

  13. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.
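
    One of the two approximations named in these abstracts, random Fourier features, is compact enough to sketch directly: for the RBF kernel k(x, y) = exp(-gamma*||x-y||^2), draw frequencies from N(0, 2*gamma*I) and random phases, and the inner product of the cosine features approximates the kernel. The dimensions and gamma below are arbitrary test values, not those of the paper.

        import numpy as np

        rng = np.random.default_rng(8)
        n, d, D, gamma = 200, 10, 2000, 0.5
        X = rng.normal(size=(n, d))

        # random Fourier feature map z(x) = sqrt(2/D) * cos(W x + b)
        W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(D, d))
        b = rng.uniform(0.0, 2.0 * np.pi, size=D)
        Z = np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

        K_exact = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        K_approx = Z @ Z.T
        print('mean absolute kernel error:', np.abs(K_exact - K_approx).mean())

    A linear ranking model trained on the features Z then stands in for the kernelized model, which is what makes the training fast.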

  14. Improving the Bandwidth Selection in Kernel Equating

    ERIC Educational Resources Information Center

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…

  15. Fission gas retention and axial expansion of irradiated metallic fuel

    SciTech Connect

    Fenske, G.R.; Emerson, J.E.; Savoie, F.E.; Johanson, E.W.

    1986-05-01

    Out-of-reactor experiments utilizing direct electrical heating and infrared heating techniques were performed on irradiated metallic fuel. The results indicate accelerated expansion can occur during thermal transients and that the accelerated expansion is driven by retained fission gases. The results also demonstrate gas retention and, hence, expansion behavior is a function of axial position within the pin.

  16. Kernel method for corrections to scaling.

    PubMed

    Harada, Kenji

    2015-07-01

    Scaling analysis, in which one infers scaling exponents and a scaling function in a scaling law from given data, is a powerful tool for determining universal properties of critical phenomena in many fields of science. However, there are corrections to scaling in many cases, and then the inference problem becomes ill-posed by an uncontrollable irrelevant scaling variable. We propose a new kernel method based on Gaussian process regression to fix this problem generally. We test the performance of the new kernel method for some example cases. In all cases, when the precision of the example data increases, inference results of the new kernel method correctly converge. Because there is no limitation in the new kernel method for the scaling function even with corrections to scaling, unlike in the conventional method, the new kernel method can be widely applied to real data in critical phenomena.

  17. The context-tree kernel for strings.

    PubMed

    Cuturi, Marco; Vert, Jean-Philippe

    2005-10-01

    We propose a new kernel for strings which borrows ideas and techniques from information theory and data compression. This kernel can be used in combination with any kernel method, in particular Support Vector Machines for string classification, with notable applications in proteomics. By using a Bayesian averaging framework with conjugate priors on a class of Markovian models known as probabilistic suffix trees or context-trees, we compute the value of this kernel in linear time and space while only using the information contained in the spectrum of the considered strings. This is ensured through an adaptation of a compression method known as the context-tree weighting algorithm. Encouraging classification results are reported on a standard protein homology detection experiment, showing that the context-tree kernel performs well with respect to other state-of-the-art methods while using no biological prior knowledge.

  18. Functional diversity among seed dispersal kernels generated by carnivorous mammals.

    PubMed

    González-Varo, Juan P; López-Bao, José V; Guitián, José

    2013-05-01

    1. Knowledge of the spatial scale of the dispersal service provided by important seed dispersers (i.e. common and/or keystone species) is essential to our understanding of their role in plant ecology, ecosystem functioning and, ultimately, biodiversity conservation. 2. Carnivores are the main mammalian frugivores and seed dispersers in temperate climate regions. However, information on the seed dispersal distances they generate is still very limited. We focused on two common temperate carnivores differing in body size and spatial ecology - red fox (Vulpes vulpes) and European pine marten (Martes martes) - for evaluating possible functional diversity in their seed dispersal kernels. 3. We measured dispersal distances using colour-coded seed mimics embedded in experimental fruits that were offered to the carnivores in feeding stations (simulating source trees). The exclusive colour code of each simulated tree allowed us to assign the exact origin of seed mimics found later in carnivore faeces. We further designed an explicit sampling strategy aiming to detect the longest dispersal events; as far as we know, the most robust sampling scheme followed for tracking carnivore-dispersed seeds. 4. We found a marked functional heterogeneity between the two species in their seed dispersal kernels according to their home range size: multimodality and long-distance dispersal in the case of the fox and unimodality and short-distance dispersal in the case of the marten (maximum distances = 2846 and 1233 m, respectively). As a consequence, emergent kernels at the guild level (overall and in two different years) were highly dependent on the relative contribution of each carnivore species. 5. Our results provide the first empirical evidence of functional diversity among seed dispersal kernels generated by carnivorous mammals. Moreover, they illustrate for the first time how seed dispersal kernels strongly depend on the relative contribution of different disperser species, thus on the

  19. Heat capacity and entropy at the temperatures 5 K to 720 K and thermal expansion from the temperatures 298 K to 573 K of synthetic enargite (Cu3AsS4)

    USGS Publications Warehouse

    Seal, R.R.; Robie, R.A.; Hemingway, B.S.; Evans, H.T.

    1996-01-01

    The heat capacity of synthetic Cu3AsS4 (enargite) was measured by quasi-adiabatic calorimetry from the temperatures 5 K to 355 K and by differential scanning calorimetry from T = 339 K to T = 720 K. Heat-capacity anomalies were observed at T = (58.5 ± 0.5) K (ΔtrsH°m = 1.4·R·K; ΔtrsS°m = 0.02·R) and at T = (66.5 ± 0.5) K (ΔtrsH°m = 4.6·R·K; ΔtrsS°m = 0.08·R), where R = 8.31451 J·K⁻¹·mol⁻¹. The causes of the anomalies are unknown. At T = 298.15 K, C°p,m and S°m(T) are (190.4 ± 0.2) J·K⁻¹·mol⁻¹ and (257.6 ± 0.6) J·K⁻¹·mol⁻¹, respectively. The superambient heat capacities are described from T = 298.15 K to T = 944 K by the least-squares regression equation: C°p,m/(J·K⁻¹·mol⁻¹) = (196.7 ± 1.2) + (0.0499 ± 0.0016)·(T/K) - (1 918 000 ± 84 000)·(T/K)⁻². The thermal expansion of synthetic enargite was measured from T = 298.15 K to T = 573 K by powder X-ray diffraction. The thermal expansion of the unit-cell volume (Z = 2) is described from T = 298.15 K to T = 573 K by the least-squares equation: V/pm³ = 10⁶·(288.2 ± 0.2) + 10⁴·(1.49 ± 0.04)·(T/K). © 1996 Academic Press Limited.

  20. Fast algorithms for Quadrature by Expansion I: Globally valid expansions

    NASA Astrophysics Data System (ADS)

    Rachh, Manas; Klöckner, Andreas; O'Neil, Michael

    2017-09-01

    The use of integral equation methods for the efficient numerical solution of PDE boundary value problems requires two main tools: quadrature rules for the evaluation of layer potential integral operators with singular kernels, and fast algorithms for solving the resulting dense linear systems. Classically, these tools were developed separately. In this work, we present a unified numerical scheme based on coupling Quadrature by Expansion, a recent quadrature method, to a customized Fast Multipole Method (FMM) for the Helmholtz equation in two dimensions. The method allows the evaluation of layer potentials in linear-time complexity, anywhere in space, with a uniform, user-chosen level of accuracy as a black-box computational method. Providing this capability requires geometric and algorithmic considerations beyond the needs of standard FMMs as well as careful consideration of the accuracy of multipole translations. We illustrate the speed and accuracy of our method with various numerical examples.

  1. Bayesian Kernel Mixtures for Counts.

    PubMed

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
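
    To see what a "rounded continuous kernel" looks like in the simplest case, the sketch below evaluates the probability mass function obtained by rounding a Gaussian onto the non-negative integers, using one common threshold choice (all latent values below 1 map to the count 0); the article's own threshold convention and the full mixture machinery may differ, so treat this purely as an illustrative assumption.

        import numpy as np
        from scipy.stats import norm

        def rounded_gaussian_pmf(j, mu, sigma):
            """P(y = j) when y = g(z), z ~ N(mu, sigma^2), with thresholds
            a_0 = -inf and a_j = j for j >= 1 (one possible rounding convention)."""
            j = np.asarray(j)
            upper = norm.cdf(j + 1, loc=mu, scale=sigma)
            lower = np.where(j == 0, 0.0, norm.cdf(j, loc=mu, scale=sigma))
            return upper - lower

        js = np.arange(0, 12)
        pmf = rounded_gaussian_pmf(js, mu=3.2, sigma=1.1)
        print(np.round(pmf, 4), 'sum =', round(pmf.sum(), 4))

    With a small sigma the resulting count distribution concentrates on a few values, so its variance can fall below its mean, which is precisely the underdispersed behavior the abstract notes that Poisson mixtures cannot capture.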

  2. MULTIVARIATE KERNEL PARTITION PROCESS MIXTURES

    PubMed Central

    Dunson, David B.

    2013-01-01

    Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate random effects distributions, this article proposes a kernel partition process (KPP) in which the cluster allocation varies for different parameters. The KPP is shown to be the driving measure for a multivariate ordered Chinese restaurant process that induces a highly-flexible dependence structure in local clustering. This structure allows the relative locations of the random effects to inform the clustering process, with spatially-proximal random effects likely to be assigned the same cluster index. An exact block Gibbs sampler is developed for posterior computation, avoiding truncation of the infinite measure. The methods are applied to hormone curve data, and a dependent KPP is proposed for classification from functional predictors. PMID:24478563

  3. Load regulating expansion fixture

    DOEpatents

    Wagner, L.M.; Strum, M.J.

    1998-12-15

    A free standing self contained device for bonding ultra thin metallic films, such as 0.001 inch beryllium foils is disclosed. The device will regulate to a predetermined load for solid state bonding when heated to a bonding temperature. The device includes a load regulating feature, whereby the expansion stresses generated for bonding are regulated and self adjusting. The load regulator comprises a pair of friction isolators with a plurality of annealed copper members located therebetween. The device, with the load regulator, will adjust to and maintain a stress level needed to successfully and economically complete a leak tight bond without damaging thin foils or other delicate components. 1 fig.

  4. Load regulating expansion fixture

    DOEpatents

    Wagner, Lawrence M.; Strum, Michael J.

    1998-01-01

    A free standing self contained device for bonding ultra thin metallic films, such as 0.001 inch beryllium foils. The device will regulate to a predetermined load for solid state bonding when heated to a bonding temperature. The device includes a load regulating feature, whereby the expansion stresses generated for bonding are regulated and self adjusting. The load regulator comprises a pair of friction isolators with a plurality of annealed copper members located therebetween. The device, with the load regulator, will adjust to and maintain a stress level needed to successfully and economically complete a leak tight bond without damaging thin foils or other delicate components.

  5. Dielectric properties of almond kernels associated with radio frequency and microwave pasteurization

    NASA Astrophysics Data System (ADS)

    Li, Rui; Zhang, Shuang; Kou, Xiaoxi; Ling, Bo; Wang, Shaojin

    2017-02-01

    To develop advanced pasteurization treatments based on radio frequency (RF) or microwave (MW) energy, dielectric properties of almond kernels were measured by using an open-ended coaxial-line probe and impedance analyzer at frequencies between 10 and 3000 MHz, moisture contents between 4.2% and 19.6% w.b. and temperatures between 20 and 90 °C. The results showed that both dielectric constant and loss factor of the almond kernels decreased sharply with increasing frequency over the RF range (10-300 MHz), but gradually over the measured MW range (300-3000 MHz). Both dielectric constant and loss factor of almond kernels increased with increasing temperature and moisture content, and were greatly enhanced at higher temperature and moisture levels. Quadratic polynomial equations were developed to best fit the relationship between dielectric constant or loss factor at 27, 40, 915 or 2450 MHz and sample temperature/moisture content with R2 greater than 0.967. Penetration depth of electromagnetic wave into samples decreased with increasing frequency (27-2450 MHz), moisture content (4.2-19.6% w.b.) and temperature (20-90 °C). The temperature profiles of RF heated almond kernels under three moisture levels were obtained by experiment and by computer simulation based on the measured dielectric properties. Based on the results of this study, RF treatment has potential to be practically used for pasteurization of almond kernels with acceptable heating uniformity.

  6. Dielectric properties of almond kernels associated with radio frequency and microwave pasteurization.

    PubMed

    Li, Rui; Zhang, Shuang; Kou, Xiaoxi; Ling, Bo; Wang, Shaojin

    2017-02-10

    To develop advanced pasteurization treatments based on radio frequency (RF) or microwave (MW) energy, dielectric properties of almond kernels were measured by using an open-ended coaxial-line probe and impedance analyzer at frequencies between 10 and 3000 MHz, moisture contents between 4.2% and 19.6% w.b. and temperatures between 20 and 90 °C. The results showed that both dielectric constant and loss factor of the almond kernels decreased sharply with increasing frequency over the RF range (10-300 MHz), but gradually over the measured MW range (300-3000 MHz). Both dielectric constant and loss factor of almond kernels increased with increasing temperature and moisture content, and were greatly enhanced at higher temperature and moisture levels. Quadratic polynomial equations were developed to best fit the relationship between dielectric constant or loss factor at 27, 40, 915 or 2450 MHz and sample temperature/moisture content with R2 greater than 0.967. Penetration depth of electromagnetic wave into samples decreased with increasing frequency (27-2450 MHz), moisture content (4.2-19.6% w.b.) and temperature (20-90 °C). The temperature profiles of RF heated almond kernels under three moisture levels were obtained by experiment and by computer simulation based on the measured dielectric properties. Based on the results of this study, RF treatment has potential to be practically used for pasteurization of almond kernels with acceptable heating uniformity.

  7. Search for the enhancement of the thermal expansion coefficient of superfluid 4He near T_lambda by a heat current

    NASA Technical Reports Server (NTRS)

    Liu, Y.; Israelsson, U.; Larson, M.

    2001-01-01

    Presentation on the transition in 4He in the presence of a heat current (Q), which provides an ideal system for the study of phase transitions under non-equilibrium, dynamical conditions. Many physical properties become nonlinear and Q-dependent near the transition temperature, T_lambda.

  9. Perturbed kernel approximation on homogeneous manifolds

    NASA Astrophysics Data System (ADS)

    Levesley, J.; Sun, X.

    2007-02-01

    Current methods for interpolation and approximation within a native space rely heavily on the strict positive-definiteness of the underlying kernels. If the domains of approximation are the unit spheres in Euclidean spaces, then zonal kernels (kernels that are invariant under the orthogonal group action) are strongly favored. In the implementation of these methods to handle real-world problems, however, some or all of the symmetries and positive-definiteness may be lost in digitization due to small random errors that occur unpredictably during various stages of the execution. Perturbation analysis is therefore needed to address the stability problem encountered. In this paper we study two kinds of perturbations of positive-definite kernels: small random perturbations and perturbations by Dunkl's intertwining operators [C. Dunkl, Y. Xu, Orthogonal polynomials of several variables, Encyclopedia of Mathematics and Its Applications, vol. 81, Cambridge University Press, Cambridge, 2001]. We show that, under some reasonable assumptions, a small random perturbation of a strictly positive-definite kernel can still provide a vehicle for interpolation and enjoy the same error estimates. We examine the actions of the Dunkl intertwining operators on zonal (strictly) positive-definite kernels on spheres. We show that the resulting kernels are (strictly) positive-definite on spheres of lower dimensions.

  10. Putting Priors in Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.

  11. Expansion Microscopy

    PubMed Central

    Chen, Fei; Tillberg, Paul W.; Boyden, Edward S.

    2014-01-01

    In optical microscopy, fine structural details are resolved by using refraction to magnify images of a specimen. Here we report the discovery that, by synthesizing a swellable polymer network within a specimen, it can be physically expanded, resulting in physical magnification. By covalently anchoring specific labels located within the specimen directly to the polymer network, labels spaced closer than the optical diffraction limit can be isotropically separated and optically resolved, a process we call expansion microscopy (ExM). Thus, this process can be used to perform scalable super-resolution microscopy with diffraction-limited microscopes. We demonstrate ExM with effective ~70 nm lateral resolution in both cultured cells and brain tissue, performing three-color super-resolution imaging of ~10⁷ μm³ of the mouse hippocampus with a conventional confocal microscope. PMID:25592419

  12. Approximating W projection as a separable kernel

    NASA Astrophysics Data System (ADS)

    Merry, Bruce

    2016-02-01

    W projection is a commonly used approach to allow interferometric imaging to be accelerated by fast Fourier transforms, but it can require a huge amount of storage for convolution kernels. The kernels are not separable, but we show that they can be closely approximated by separable kernels. The error scales with the fourth power of the field of view, and so is small enough to be ignored at mid- to high frequencies. We also show that hybrid imaging algorithms combining W projection with either faceting, snapshotting, or W stacking allow the error to be made arbitrarily small, making the approximation suitable even for high-resolution wide-field instruments.
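
    A rough illustration of the separability idea (a sketch, not the paper's construction): the best separable approximation of a 2-D kernel is its rank-1 truncated SVD, and the relative error of that truncation measures how close to separable the kernel is. The kernel below is synthetic, not an actual W-projection kernel.

        import numpy as np

        n = 65
        x = np.linspace(-3.0, 3.0, n)
        # Synthetic 2-D kernel; the small cross term makes it only approximately separable.
        K = np.exp(-0.5 * (x[:, None] ** 2 + x[None, :] ** 2 + 0.1 * x[:, None] * x[None, :]))

        # Best separable (rank-1) approximation K ~= outer(u, v) from the SVD.
        U, s, Vt = np.linalg.svd(K)
        K_sep = s[0] * np.outer(U[:, 0], Vt[0, :])

        rel_err = np.linalg.norm(K - K_sep) / np.linalg.norm(K)
        print("relative error of the separable approximation:", rel_err)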

  13. Invariance kernel of biological regulatory networks.

    PubMed

    Ahmad, Jamil; Roux, Olivier

    2010-01-01

    The analysis of a Biological Regulatory Network (BRN) leads to the computation of the set of possible behaviours of the biological components. These behaviours are seen as trajectories, and we are specifically interested in cyclic trajectories since they stand for stability. The set of cycles is given by the so-called invariance kernel of a BRN. This paper presents a method for deriving symbolic formulae for the length, volume and diameter of a cylindrical invariance kernel. These formulae are expressed in terms of the delay parameters; they establish the existence of an invariance kernel and give a hint of the number of cyclic trajectories.

  14. Induction of expression and co-localization of heat shock polypeptides with the polyalanine expansion mutant of poly(A)-binding protein N1 after chemical stress

    SciTech Connect

    Wang Qishan Bag, Jnanankur

    2008-05-23

    Formation of nuclear inclusions consisting of aggregates of a polyalanine expansion mutant of nuclear poly(A)-binding protein (PABPN1) is the hallmark of oculopharyngeal muscular dystrophy (OPMD). OPMD is a late-onset autosomal dominant disease. Patients with this disorder exhibit progressive swallowing difficulty and drooping of their eyelids, which starts around the age of 50. Previously we have shown that treatment of cells expressing the mutant PABPN1 with a number of chemicals such as ibuprofen, indomethacin, ZnSO₄, and 8-hydroxy-quinoline induces HSP70 expression and reduces PABPN1 aggregation. In these studies we have shown that expression of additional HSPs including HSP27, HSP40, and HSP105 was induced in mutant PABPN1-expressing cells following exposure to the chemicals mentioned above. Furthermore, all three additional HSPs were translocated to the nucleus and probably helped to properly fold the mutant PABPN1 by co-localizing with this protein.

  15. Using TOPEX Satellite El Niño Altimetry Data to Introduce Thermal Expansion and Heat Capacity Concepts in Chemistry Courses

    NASA Astrophysics Data System (ADS)

    Blanck, Harvey F.

    1999-12-01

    draw and is a reasonable visual representation of the way in which the thermocline is depressed by warm water along a warm-water ridge. Discussion Various factors must be taken into account to modify the raw TOPEX radar altimeter data to obtain meaningful information. For example, as mentioned at JPL's TOPEX Web site, radar propagation speed is altered slightly by variations in water vapor in the atmosphere, and therefore atmospheric water vapor content must be determined by the satellite to correct the radar altimeter data. Studies of heat storage using direct temperature measurements have been conducted (5), and comparison of TOPEX altimetry data with actual temperature measurements shows them to be in reasonably good agreement (6). Low-profile hills and valleys on the ocean are generated or influenced by a variety of factors other than thermal energy. Ocean dynamics are complex indeed. Comparisons of thermal energy (steric effect) and wind-induced surface changes have been examined in relation to TOPEX data (7). The calculations of thermal energy excess in warm-water ocean bumps from radar altimetry data alone, while not unreasonable, must be understood to be a simplification for an extremely complex system. The Gaussian model proposed for the cross section of a warm-water ridge requires more study, but it is a useful visual model of the warm-water bump above the normal surface and its subsurface warm-water wedge. I believe students will enjoy these relevant calculations and learn a bit about density, thermal expansion, and heat capacity in the process. I have tried to present sufficient data and detail to allow teachers to pick and choose calculations appropriate to the level of their students. It is evident that dimensional analysis is a distinct advantage in using these equations. I have also tried to include enough descriptive detail of the TOPEX data and El Niño to answer many of the questions students may ask. The Web sites mentioned are very informative with
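
    A back-of-the-envelope sketch in the spirit of the calculations discussed above, relating a warm-water bump seen by the altimeter to thermal expansion and stored heat; the seawater properties and warm-layer parameters are rough illustrative numbers, not TOPEX data.

        # Steric (thermal-expansion) sea-surface height anomaly and stored heat per unit area
        # for a warm layer of thickness d with temperature anomaly dT.
        rho = 1025.0     # seawater density, kg/m^3 (approximate)
        c_p = 4.0e3      # seawater specific heat, J/(kg K) (approximate)
        alpha = 2.5e-4   # volumetric thermal expansion coefficient, 1/K (approximate)

        d = 100.0        # warm-layer thickness, m (illustrative)
        dT = 2.0         # temperature anomaly, K (illustrative)

        dh = alpha * dT * d        # surface height anomaly, m
        q = rho * c_p * dT * d     # excess heat per unit area, J/m^2

        print("height anomaly:", dh * 100, "cm")   # a few cm, the scale the altimeter resolves
        print("excess heat:", q, "J/m^2")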

  16. A simple method for computing the relativistic Compton scattering kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Prasad, M. K.; Kershaw, D. S.; Beason, J. D.

    1986-01-01

    Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.

  17. Kernel map compression for speeding the execution of kernel-based methods.

    PubMed

    Arif, Omar; Vela, Patricio A

    2011-06-01

    The use of Mercer kernel methods in statistical learning theory provides for strong learning capabilities, as seen in kernel principal component analysis and support vector machines. Unfortunately, after learning, the computational complexity of execution through a kernel is of the order of the size of the training set, which is quite large for many applications. This paper proposes a two-step procedure for arriving at a compact and computationally efficient execution procedure. After learning in the kernel space, the proposed extension exploits the universal approximation capabilities of generalized radial basis function neural networks to efficiently approximate and replace the projections onto the empirical kernel map used during execution. Sample applications demonstrate significant compression of the kernel representation with graceful performance loss.
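
    A toy sketch of the general compression idea, replacing a full kernel expansion with a smaller one refitted by least squares on the training sample; this illustrates the goal rather than the paper's RBF-network construction, and all data and coefficients below are synthetic.

        import numpy as np

        def rbf(A, B, gamma=0.5):
            d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
            return np.exp(-gamma * d2)

        rng = np.random.default_rng(6)
        X = rng.normal(size=(500, 3))      # training points
        alpha = rng.normal(size=500)       # coefficients of a learned expansion f(x) = sum_i alpha_i k(x_i, x)

        # Compress: keep m centers and refit coefficients so the small expansion
        # matches the full one on the training sample (least squares).
        m = 50
        centers = X[rng.choice(500, size=m, replace=False)]
        f_train = rbf(X, X) @ alpha                 # full expansion evaluated at training points
        beta, *_ = np.linalg.lstsq(rbf(X, centers), f_train, rcond=None)

        # Compare full vs compressed expansion on fresh points.
        Xt = rng.normal(size=(200, 3))
        f_full = rbf(Xt, X) @ alpha
        f_small = rbf(Xt, centers) @ beta
        print("relative error:", np.linalg.norm(f_full - f_small) / np.linalg.norm(f_full))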

  18. Relationship between cyanogenic compounds in kernels, leaves, and roots of sweet and bitter kernelled almonds.

    PubMed

    Dicenta, F; Martínez-Gómez, P; Grané, N; Martín, M L; León, A; Cánovas, J A; Berenguer, V

    2002-03-27

    The relationship between the levels of cyanogenic compounds (amygdalin and prunasin) in kernels, leaves, and roots of 5 sweet-, 5 slightly bitter-, and 5 bitter-kernelled almond trees was determined. Variability was observed among the genotypes for these compounds. Prunasin was found only in the vegetative parts (roots and leaves) for all genotypes tested. Amygdalin was detected only in the kernels, mainly in bitter genotypes. In general, bitter-kernelled genotypes had higher levels of prunasin in their roots than nonbitter ones, but the correlation between cyanogenic compounds in the different parts of the plants was not high. While prunasin seems to be present in most almond roots (with a variable concentration), only bitter-kernelled genotypes are able to transform it into amygdalin in the kernel. Breeding for prunasin-based resistance to the buprestid beetle Capnodis tenebrionis L. is discussed.

  19. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more...

  20. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...

  1. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a lot...

  2. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  3. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  4. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a lot...

  5. Discriminant Kernel Assignment for Image Coding.

    PubMed

    Deng, Yue; Zhao, Yanyu; Ren, Zhiquan; Kong, Youyong; Bao, Feng; Dai, Qionghai

    2017-06-01

    This paper proposes discriminant kernel assignment (DKA) in the bag-of-features framework for image representation. DKA slightly modifies existing kernel assignment to learn width-variant Gaussian kernel functions that perform discriminant local feature assignment. When the gradient-descent method is applied directly to solve DKA, the optimization may require multiple time-consuming reassignment steps in each iteration. Accordingly, we introduce a more practical way to locally linearize the DKA objective, so that the difficult task is cast as a sequence of easier ones. Since DKA only focuses on the feature assignment part, it seamlessly collaborates with other discriminative learning approaches, e.g., discriminant dictionary learning or multiple kernel learning, for even better performance. Experimental evaluations on multiple benchmark datasets verify that DKA outperforms other image assignment approaches and exhibits significant efficiency in feature coding.

  6. Kernel-Based Equiprobabilistic Topographic Map Formation.

    PubMed

    Van Hulle MM

    1998-09-15

    We introduce a new unsupervised competitive learning rule, the kernel-based maximum entropy learning rule (kMER), which performs equiprobabilistic topographic map formation in regular, fixed-topology lattices, for use with nonparametric density estimation as well as nonparametric regression analysis. The receptive fields of the formal neurons are overlapping radially symmetric kernels, compatible with radial basis functions (RBFs); but unlike other learning schemes, the radii of these kernels do not have to be chosen in an ad hoc manner: the radii are adapted to the local input density, together with the weight vectors that define the kernel centers, so as to produce maps of which the neurons have an equal probability to be active (equiprobabilistic maps). Both an "online" and a "batch" version of the learning rule are introduced, which are applied to nonparametric density estimation and regression, respectively. The application envisaged is blind source separation (BSS) from nonlinear, noisy mixtures.

  7. Quantum kernel applications in medicinal chemistry.

    PubMed

    Huang, Lulu; Massa, Lou

    2012-07-01

    Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.

  8. KITTEN Lightweight Kernel 0.1 Beta

    SciTech Connect

    Pedretti, Kevin; Levenhagen, Michael; Kelly, Suzanne; VanDyke, John; Hudson, Trammell

    2007-12-12

    The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general-purpose OS kernels.

  9. Thermal Expansion

    NASA Astrophysics Data System (ADS)

    Ventura, Guglielmo; Perfetti, Mauro

    All solid materials, when cooled to low temperatures, experience a change in physical dimensions called "thermal contraction", which is typically less than 1% in volume over the 4-300 K temperature range. Although the effect is small, it can have a heavy impact on the design of cryogenic devices. The thermal contraction of different materials may vary by as much as an order of magnitude: since cryogenic devices are constructed at room temperature from many different materials, one of the major concerns is the effect of differential thermal contraction and the resulting thermal stress that may occur when two dissimilar materials are bonded together. In this chapter, the theory of thermal contraction is presented in Sect. 1.2. Section 1.3 is devoted to the phenomenon of negative thermal expansion and its applications.
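
    A small worked sketch of why differential contraction matters when dissimilar materials are bonded and cooled; the integrated-contraction and modulus values are illustrative placeholders, not figures from the chapter.

        # Differential thermal contraction of two bonded parts cooled from 300 K to 4 K.
        # Integrated contractions dL/L are placeholder values for illustration only.
        contraction = {              # fractional length change, 300 K -> 4 K (illustrative)
            "metal A": 3.3e-3,
            "polymer B": 11.0e-3,
        }

        L = 1.0  # bonded length at room temperature, m
        mismatch = (contraction["polymer B"] - contraction["metal A"]) * L
        print("length mismatch if unconstrained:", mismatch * 1e3, "mm")

        # If the parts are rigidly bonded, the mismatch becomes an elastic strain;
        # the resulting stress is roughly E * strain (E is an illustrative modulus).
        E = 3.0e9   # Pa, illustrative
        print("rough stress estimate:", E * (mismatch / L) / 1e6, "MPa")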

  10. TICK: Transparent Incremental Checkpointing at Kernel Level

    SciTech Connect

    Petrini, Fabrizio; Gioiosa, Roberto

    2004-10-25

    TICK is a software package implemented in Linux 2.6 that allows user processes to be saved and restored without any change to the user code or binary. With TICK a process can be suspended by the Linux kernel upon receiving an interrupt and saved to a file. This file can later be restored on another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module, in Linux version 2.6.5.

  11. RKF-PCA: robust kernel fuzzy PCA.

    PubMed

    Heo, Gyeongyong; Gader, Paul; Frigui, Hichem

    2009-01-01

    Principal component analysis (PCA) is a mathematical method that reduces the dimensionality of the data while retaining most of the variation in the data. Although PCA has been applied in many areas successfully, it suffers from sensitivity to noise and is limited to linear principal components. The noise sensitivity problem comes from the least-squares measure used in PCA, and the limitation to linear components originates from the fact that PCA uses an affine transform defined by eigenvectors of the covariance matrix and the mean of the data. In this paper, a robust kernel PCA method that extends the kernel PCA and uses fuzzy memberships is introduced to tackle the two problems simultaneously. We first introduce an iterative method to find robust principal components, called Robust Fuzzy PCA (RF-PCA), which has a connection with robust statistics and entropy regularization. The RF-PCA method is then extended to a non-linear one, Robust Kernel Fuzzy PCA (RKF-PCA), using kernels. The modified kernel used in the RKF-PCA satisfies Mercer's condition, which means that the derivation of the K-PCA is also valid for the RKF-PCA. Formal analyses and experimental results suggest that the RKF-PCA is an efficient non-linear dimension reduction method and is more noise-robust than the original kernel PCA.

  12. Transient laminar opposing mixed convection in a symmetrically heated duct with a plane symmetric sudden contraction-expansion: Buoyancy and inclination effects

    NASA Astrophysics Data System (ADS)

    Martínez-Suástegui, Lorenzo; Barreto, Enrique; Treviño, César

    2015-11-01

    Transient laminar opposing mixed convection is studied experimentally in an open vertical rectangular channel with two discrete protruded heat sources subjected to uniform heat flux, simulating electronic components. Experiments are performed for a Reynolds number of Re = 700, a Prandtl number of Pr = 7, inclination angles with respect to the horizontal of γ = 0°, 45° and 90°, and different values of buoyancy strength or modified Richardson number, Ri* = Gr*/Re². From the experimental measurements, the space-averaged surface temperatures, overall Nusselt number of each simulated electronic chip, phase-space plots of the self-oscillatory system, characteristic times of temperature oscillations and spectral distribution of the fluctuating energy have been obtained. Results show that when a threshold in the buoyancy parameter is reached, strong three-dimensional secondary flow oscillations develop in the axial and spanwise directions. This research was supported by the Consejo Nacional de Ciencia y Tecnología (CONACYT), Grant number 167474, and by the Secretaría de Investigación y Posgrado del IPN, Grant number SIP 20141309.

  13. On the formation of new ignition kernels in the chemically active dispersed mixtures

    NASA Astrophysics Data System (ADS)

    Ivanov, M. F.; Kiverin, A. D.

    2015-11-01

    The specific features of combustion waves propagating through channels filled with a chemically active gaseous mixture and non-uniformly suspended micro-particles are studied numerically. It is shown that the heat radiated by the hot products, absorbed by the micro-particles and then transferred to the surrounding fresh mixture, can be the source of new ignition kernels in regions where the particles cluster. The spatial distribution of the particles then determines the combustion regimes arising in these kernels. Notable cases include multi-kernel ignition in polydisperse mixtures and the ignition of combustion regimes with shock and detonation formation in mixtures with pronounced gradients of micro-particle concentration.

  14. Negative thermal expansion materials: technological key for control of thermal expansion.

    PubMed

    Takenaka, Koshi

    2012-02-01

    Most materials expand upon heating. However, although rare, some materials contract upon heating. Such negative thermal expansion (NTE) materials have enormous industrial merit because they can control the thermal expansion of materials. Recent progress in materials research enables us to obtain materials exhibiting negative coefficients of linear thermal expansion over -30 ppm K(-1). Such giant NTE is opening a new phase of control of thermal expansion in composites. Specifically examining practical aspects, this review briefly summarizes materials and mechanisms of NTE as well as composites containing NTE materials, based mainly on activities of the last decade.
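
    A minimal sketch of how an NTE filler can be used to tune a composite's overall expansion, using a simple stiffness-weighted rule-of-mixtures (Turner-type) estimate; all material numbers are illustrative placeholders, not values from the review.

        def composite_cte(alpha, E, vol_frac):
            # Stiffness-weighted rule-of-mixtures estimate of a composite's CTE.
            num = sum(a * e * v for a, e, v in zip(alpha, E, vol_frac))
            den = sum(e * v for e, v in zip(E, vol_frac))
            return num / den

        # Matrix with positive CTE plus an NTE filler (illustrative values, 1/K and GPa):
        alpha = [50e-6, -30e-6]     # matrix, NTE filler
        E     = [3.0,   100.0]      # Young's moduli, GPa
        for f in (0.0, 0.2, 0.4, 0.6):
            print("filler fraction", f, "->", composite_cte(alpha, E, [1 - f, f]) * 1e6, "ppm/K")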

  15. Negative thermal expansion materials: technological key for control of thermal expansion

    PubMed Central

    Takenaka, Koshi

    2012-01-01

    Most materials expand upon heating. However, although rare, some materials contract upon heating. Such negative thermal expansion (NTE) materials have enormous industrial merit because they can control the thermal expansion of materials. Recent progress in materials research enables us to obtain materials exhibiting negative coefficients of linear thermal expansion over −30 ppm K−1. Such giant NTE is opening a new phase of control of thermal expansion in composites. Specifically examining practical aspects, this review briefly summarizes materials and mechanisms of NTE as well as composites containing NTE materials, based mainly on activities of the last decade. PMID:27877465

  16. Thermal expansion, thermal conductivity, and heat capacity measurements for boreholes UE25 NRG-4, UE25 NRG-5, USW NRG-6, and USW NRG-7/7A

    SciTech Connect

    Brodsky, N.S.; Riggins, M.; Connolly, J.; Ricci, P.

    1997-09-01

    Specimens were tested from four thermal-mechanical units, namely Tiva Canyon (TCw), Paintbrush Tuff (PTn), and two Topopah Spring units (TSw1 and TSw2), and from two lithologies, i.e., welded devitrified (TCw, TSw1, TSw2) and nonwelded vitric tuff (PTn). Thermal conductivities in W·m⁻¹·K⁻¹, averaged over all boreholes, ranged (depending upon temperature and saturation state) from 1.2 to 1.9 for TCw, from 0.4 to 0.9 for PTn, from 1.0 to 1.7 for TSw1, and from 1.5 to 2.3 for TSw2. Mean coefficients of thermal expansion were highly temperature dependent, and values averaged over all boreholes ranged (depending upon temperature and saturation state) from 6.6 × 10⁻⁶ to 49 × 10⁻⁶ °C⁻¹ for TCw, from the negative range to 16 × 10⁻⁶ °C⁻¹ for PTn, from 6.3 × 10⁻⁶ to 44 × 10⁻⁶ °C⁻¹ for TSw1, and from 6.7 × 10⁻⁶ to 37 × 10⁻⁶ °C⁻¹ for TSw2. Mean values of thermal capacitance in J·cm⁻³·K⁻¹ (averaged over all specimens) ranged from 1.6 to 2.1 for TSw1 and from 1.8 to 2.5 for TSw2. In general, the lithostratigraphic classifications of rock assigned by the USGS are consistent with the mineralogical data presented in this report.

  17. Moisture Sorption Isotherms and Properties of Sorbed Water of Neem ( Azadirichta indica A. Juss) Kernels

    NASA Astrophysics Data System (ADS)

    Ngono Mbarga, M. C.; Bup Nde, D.; Mohagir, A.; Kapseu, C.; Elambo Nkeng, G.

    2017-01-01

    The neem tree, which grows abundantly in India as well as in some regions of Asia and Africa, gives fruits whose kernels contain about 40-50% oil. This oil has high therapeutic and cosmetic value and has recently been projected to be an important raw material for the production of biodiesel. The seed is harvested at high moisture contents, which leads to high post-harvest losses. In this paper, the sorption isotherms are determined by the static gravimetric method at 40, 50, and 60°C to establish a database useful in defining drying and storage conditions of neem kernels. Five different equations are validated for modeling the sorption isotherms of neem kernels. The properties of sorbed water, such as the monolayer moisture content, surface area of the adsorbent, number of adsorbed monolayers, and the percentage of bound water, are also determined. The critical moisture content necessary for the safe storage of dried neem kernels is shown to range from 5 to 10% dry basis, which can be obtained at a relative humidity of less than 65%. The isosteric heats of sorption at 5% moisture content are 7.40 and 22.5 kJ/kg for the adsorption and desorption processes, respectively. This work is the first, to the best of our knowledge, to give the important parameters necessary for drying and storage of neem kernels, a potential raw material for the production of oil to be used in pharmaceutics, cosmetics, and biodiesel manufacturing.
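
    A sketch of the kind of isotherm-model fitting referred to above, using the common GAB equation as an example model; the data points are synthetic placeholders, not the neem kernel measurements, and the GAB form is only one of many candidate equations.

        import numpy as np
        from scipy.optimize import curve_fit

        def gab(aw, m0, c, k):
            # GAB sorption isotherm: equilibrium moisture content vs water activity.
            return m0 * c * k * aw / ((1 - k * aw) * (1 - k * aw + c * k * aw))

        # Synthetic illustration data (water activity, % dry-basis moisture) -- not real measurements.
        aw = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
        m  = np.array([2.1, 3.0, 3.8, 4.7, 5.8, 7.3, 9.6, 13.5])

        popt, _ = curve_fit(gab, aw, m, p0=[4.0, 10.0, 0.8], maxfev=10000)
        m0, c, k = popt
        print("monolayer moisture content m0 =", m0, "% d.b.")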

  18. Kernel-Based Reconstruction of Graph Signals

    NASA Astrophysics Data System (ADS)

    Romero, Daniel; Ma, Meng; Giannakis, Georgios B.

    2017-02-01

    A number of applications in engineering, social sciences, physics, and biology involve inference over networks. In this context, graph signals are widely encountered as descriptors of vertex attributes or features in graph-structured data. Estimating such signals in all vertices given noisy observations of their values on a subset of vertices has been extensively analyzed in the literature of signal processing on graphs (SPoG). This paper advocates kernel regression as a framework generalizing popular SPoG modeling and reconstruction and expanding their capabilities. Formulating signal reconstruction as a regression task on reproducing kernel Hilbert spaces of graph signals permeates benefits from statistical learning, offers fresh insights, and allows for estimators to leverage richer forms of prior information than existing alternatives. A number of SPoG notions such as bandlimitedness, graph filters, and the graph Fourier transform are naturally accommodated in the kernel framework. Additionally, this paper capitalizes on the so-called representer theorem to devise simpler versions of existing Tikhonov-regularized estimators, and offers a novel probabilistic interpretation of kernel methods on graphs based on graphical models. Motivated by the challenges of selecting the bandwidth parameter in SPoG estimators or the kernel map in kernel-based methods, the present paper further proposes two multi-kernel approaches with complementary strengths. Whereas the first enables estimation of the unknown bandwidth of bandlimited signals, the second allows for efficient graph filter selection. Numerical tests with synthetic as well as real data demonstrate the merits of the proposed methods relative to state-of-the-art alternatives.
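
    A compact sketch of the general recipe described here (not the paper's specific estimators): build a graph kernel from the Laplacian and reconstruct the signal on all vertices from a sampled subset via the representer theorem. The graph and signal are synthetic, and the diffusion kernel and ridge parameter are arbitrary illustrative choices.

        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(0)

        # Small synthetic undirected graph and its combinatorial Laplacian.
        n = 30
        A = (rng.random((n, n)) < 0.15).astype(float)
        A = np.triu(A, 1); A = A + A.T
        L = np.diag(A.sum(1)) - A

        # Diffusion kernel on the graph (one common choice of graph kernel).
        K = expm(-0.5 * L)

        # Smooth synthetic signal observed with noise on a subset S of vertices.
        f_true = K @ rng.normal(size=n)
        S = rng.choice(n, size=10, replace=False)
        y = f_true[S] + 0.01 * rng.normal(size=S.size)

        # Kernel ridge regression (representer theorem): f_hat = K[:, S] (K[S, S] + lam I)^-1 y
        lam = 1e-3
        alpha = np.linalg.solve(K[np.ix_(S, S)] + lam * np.eye(S.size), y)
        f_hat = K[:, S] @ alpha
        print("relative reconstruction error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))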

  19. Oecophylla longinoda (Hymenoptera: Formicidae) Lead to Increased Cashew Kernel Size and Kernel Quality.

    PubMed

    Anato, F M; Sinzogan, A A C; Offenberg, J; Adandonon, A; Wargui, R B; Deguenon, J M; Ayelo, P M; Vayssières, J-F; Kossou, D K

    2017-03-03

    Weaver ants, Oecophylla spp., are known to positively affect cashew, Anacardium occidentale L., raw nut yield, but their effects on the kernels have not been reported. We compared nut size and the proportion of marketable kernels between raw nuts collected from trees with and without ants. Raw nuts collected from trees with weaver ants were 2.9% larger than nuts from control trees (i.e., without weaver ants), leading to 14% higher proportion of marketable kernels. On trees with ants, the kernel: raw nut ratio from nuts damaged by formic acid was 4.8% lower compared with nondamaged nuts from the same trees. Weaver ants provided three benefits to cashew production by increasing yields, yielding larger nuts, and by producing greater proportions of marketable kernel mass.

  20. A new Mercer sigmoid kernel for clinical data classification.

    PubMed

    Carrington, André M; Fieguth, Paul W; Chen, Helen H

    2014-01-01

    In classification with Support Vector Machines, only Mercer kernels, i.e. valid kernels, such as the Gaussian RBF kernel, are widely accepted and thus suitable for clinical data. Practitioners would also like to use the sigmoid kernel, a non-Mercer kernel, but its range of validity is difficult to determine, and even within range its validity is in dispute. Despite these shortcomings the sigmoid kernel is used by some, and two kernels in the literature attempt to emulate and improve upon it. We propose the first Mercer sigmoid kernel, which is therefore trustworthy for the classification of clinical data. We show the similarity between the Mercer sigmoid kernel and the sigmoid kernel and, in the process, identify a normalization technique that improves the classification accuracy of the latter. The Mercer sigmoid kernel achieves the best mean accuracy on three clinical data sets, detecting melanoma in skin lesions better than the most popular kernels; while with non-clinical data sets it has no significant difference in median accuracy as compared with the Gaussian RBF kernel. It consistently classifies some points correctly that the Gaussian RBF kernel does not and vice versa.
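
    A quick sketch of the validity issue mentioned above: the classical sigmoid kernel tanh(a<x,y> + b) can produce an indefinite Gram matrix, which is easy to detect from its eigenvalues. This illustrates the problem with the non-Mercer kernel, not the proposed Mercer sigmoid kernel itself; the data and parameters are arbitrary.

        import numpy as np

        def sigmoid_kernel(X, a=1.0, b=-1.0):
            # Classical (non-Mercer) sigmoid kernel Gram matrix.
            return np.tanh(a * X @ X.T + b)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(50, 5))
        G = sigmoid_kernel(X)

        eigvals = np.linalg.eigvalsh(G)
        # A negative smallest eigenvalue means the Gram matrix is not positive semidefinite,
        # i.e. the sigmoid kernel is not acting as a valid Mercer kernel for this data/parameters.
        print("smallest eigenvalue:", eigvals.min())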

  1. Analog forecasting with dynamics-adapted kernels

    NASA Astrophysics Data System (ADS)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
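
    A bare-bones sketch of kernel-weighted analog forecasting with delay-coordinate features, a simplified caricature of the approach rather than the authors' full method; the data are a synthetic noisy oscillator and the kernel bandwidth is arbitrary.

        import numpy as np

        def delay_embed(x, q):
            # Takens delay-coordinate map: row t is (x_t, x_{t-1}, ..., x_{t-q+1}).
            return np.stack([x[q - 1 - j:len(x) - j] for j in range(q)], axis=1)

        def kernel_analog_forecast(history, query, horizon, q=5, eps=0.5):
            X = delay_embed(history, q)               # historical delay vectors
            targets = history[q - 1 + horizon:]       # their values `horizon` steps later
            X = X[:len(targets)]
            d2 = np.sum((X - query) ** 2, axis=1)     # squared distance to the current state
            w = np.exp(-d2 / eps)                     # Gaussian similarity kernel
            w /= w.sum()
            return np.sum(w * targets)                # kernel-weighted ensemble of analogs

        t = np.arange(2000) * 0.1
        x = np.sin(t) + 0.05 * np.random.default_rng(2).normal(size=t.size)
        query = x[-5:][::-1]                          # current delay-coordinate state
        print("10-step-ahead forecast:", kernel_analog_forecast(x, query, horizon=10))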

  2. Kernel bandwidth optimization in spike rate estimation.

    PubMed

    Shimazaki, Hideaki; Shinomoto, Shigeru

    2010-08-01

    Kernel smoother and a time-histogram are classical tools for estimating an instantaneous rate of spike occurrences. We recently established a method for selecting the bin width of the time-histogram, based on the principle of minimizing the mean integrated square error (MISE) between the estimated rate and unknown underlying rate. Here we apply the same optimization principle to the kernel density estimation in selecting the width or "bandwidth" of the kernel, and further extend the algorithm to allow a variable bandwidth, in conformity with data. The variable kernel has the potential to accurately grasp non-stationary phenomena, such as abrupt changes in the firing rate, which we often encounter in neuroscience. In order to avoid possible overfitting that may take place due to excessive freedom, we introduced a stiffness constant for bandwidth variability. Our method automatically adjusts the stiffness constant, thereby adapting to the entire set of spike data. It is revealed that the classical kernel smoother may exhibit goodness-of-fit comparable to, or even better than, that of modern sophisticated rate estimation methods, provided that the bandwidth is selected properly for a given set of spike data, according to the optimization methods presented here.
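
    A simplified sketch of fixed-bandwidth kernel rate estimation with a basic leave-one-out cross-validation bandwidth search; this is a generic stand-in, not the MISE-based or variable-bandwidth algorithm of the paper, and the spike times are synthetic.

        import numpy as np

        def rate_estimate(spikes, t_grid, w):
            # Gaussian-kernel estimate of the instantaneous firing rate (spikes/s).
            d = t_grid[:, None] - spikes[None, :]
            return np.sum(np.exp(-0.5 * (d / w) ** 2), axis=1) / (w * np.sqrt(2 * np.pi))

        def loo_log_likelihood(spikes, w):
            # Leave-one-out log-likelihood of the spike times under the kernel density.
            d = spikes[:, None] - spikes[None, :]
            K = np.exp(-0.5 * (d / w) ** 2) / (w * np.sqrt(2 * np.pi))
            np.fill_diagonal(K, 0.0)
            dens = K.sum(axis=1) / (len(spikes) - 1)
            return np.sum(np.log(dens + 1e-300))

        rng = np.random.default_rng(3)
        spikes = np.sort(np.concatenate([rng.uniform(0, 10, 200),      # background spikes
                                         rng.normal(5.0, 0.3, 150)]))  # a burst of activity

        widths = np.logspace(-2, 0, 30)
        best_w = max(widths, key=lambda w: loo_log_likelihood(spikes, w))
        t_grid = np.linspace(0, 10, 500)
        rate = rate_estimate(spikes, t_grid, best_w)
        print("selected bandwidth:", best_w, "s; peak rate:", rate.max(), "spikes/s")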

  3. The connection between regularization operators and support vector kernels.

    PubMed

    Smola, Alex J.; Schölkopf, Bernhard; Müller, Klaus Robert

    1998-06-01

    In this paper a correspondence is derived between regularization operators used in regularization networks and support vector kernels. We prove that the Green's functions associated with regularization operators are suitable support vector kernels with equivalent regularization properties. Moreover, the paper provides an analysis of currently used support vector kernels from the viewpoint of regularization theory and of the corresponding operators associated with the classes of both polynomial kernels and translation-invariant kernels. The latter are also analyzed on periodic domains. As a by-product we show that a large number of radial basis functions, namely conditionally positive definite functions, may be used as support vector kernels.

  4. Fusion and kernel type selection in adaptive image retrieval

    NASA Astrophysics Data System (ADS)

    Doloc-Mihu, Anca; Raghavan, Vijay V.

    2007-04-01

    In this work we investigate the relationships between features representing images, fusion schemes for these features, and kernel types used in a Web-based Adaptive Image Retrieval System. Using the Kernel Rocchio learning method, several kernels having polynomial and Gaussian forms are applied to general images represented by annotations and by color histograms in RGB and HSV color spaces. We propose different fusion schemes, which incorporate kernel selector component(s). We perform experiments to study the relationships between a concatenated vector and several kernel types. Experimental results show that an appropriate kernel could significantly improve the performance of the retrieval system.

  5. Heat pump system

    DOEpatents

    Swenson, Paul F.; Moore, Paul B.

    1979-01-01

    An air heating and cooling system for a building includes an expansion-type refrigeration circuit and a heat engine. The refrigeration circuit includes two heat exchangers, one of which is communicated with a source of indoor air from the building and the other of which is communicated with a source of air from outside the building. The heat engine includes a heat rejection circuit having a source of rejected heat and a primary heat exchanger connected to the source of rejected heat. The heat rejection circuit also includes an evaporator in heat exchange relation with the primary heat exchanger, a heat engine indoor heat exchanger, and a heat engine outdoor heat exchanger. The indoor heat exchangers are disposed in series air flow relationship, with the heat engine indoor heat exchanger being disposed downstream from the refrigeration circuit indoor heat exchanger. The outdoor heat exchangers are also disposed in series air flow relationship, with the heat engine outdoor heat exchanger disposed downstream from the refrigeration circuit outdoor heat exchanger. A common fluid is used in both of the indoor heat exchangers and in both of the outdoor heat exchangers. In a first embodiment, the heat engine is a Rankine cycle engine. In a second embodiment, the heat engine is a non-Rankine cycle engine.

  6. Heat pump system

    DOEpatents

    Swenson, Paul F.; Moore, Paul B.

    1982-01-01

    An air heating and cooling system for a building includes an expansion-type refrigeration circuit and a heat engine. The refrigeration circuit includes two heat exchangers, one of which is communicated with a source of indoor air from the building and the other of which is communicated with a source of air from outside the building. The heat engine includes a heat rejection circuit having a source of rejected heat and a primary heat exchanger connected to the source of rejected heat. The heat rejection circuit also includes an evaporator in heat exchange relation with the primary heat exchanger, a heat engine indoor heat exchanger, and a heat engine outdoor heat exchanger. The indoor heat exchangers are disposed in series air flow relationship, with the heat engine indoor heat exchanger being disposed downstream from the refrigeration circuit indoor heat exchanger. The outdoor heat exchangers are also disposed in series air flow relationship, with the heat engine outdoor heat exchanger disposed downstream from the refrigeration circuit outdoor heat exchanger. A common fluid is used in both of the indoor heat exchangers and in both of the outdoor heat exchangers. In a first embodiment, the heat engine is a Rankine cycle engine. In a second embodiment, the heat engine is a non-Rankine cycle engine.

  7. Heat exchanger

    DOEpatents

    Wolowodiuk, Walter

    1976-01-06

    A heat exchanger of the straight tube type in which different rates of thermal expansion between the straight tubes and the supply pipes furnishing fluid to those tubes do not result in tube failures. The supply pipes each contain a section which is of helical configuration.

  8. Robust C-Loss Kernel Classifiers.

    PubMed

    Xu, Guibiao; Hu, Bao-Gang; Principe, Jose C

    2016-12-29

    The correntropy-induced loss (C-loss) function has the nice property of being robust to outliers. In this paper, we study the C-loss kernel classifier with the Tikhonov regularization term, which is used to avoid overfitting. After using the half-quadratic optimization algorithm, which converges much faster than the gradient optimization algorithm, we find that the resulting C-loss kernel classifier is equivalent to an iteratively weighted least-squares support vector machine (LS-SVM). This relationship helps explain the robustness of the iteratively weighted LS-SVM from the correntropy and density-estimation perspectives. On large-scale data sets that have low-rank Gram matrices, we suggest using incomplete Cholesky decomposition to speed up the training process. Moreover, we use the representer theorem to improve the sparseness of the resulting C-loss kernel classifier. Experimental results confirm that our methods are more robust to outliers than the existing common classifiers.

  9. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
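
    A small sketch of quadratic (Renyi order-2) entropy estimation from a Gaussian kernel density, using the closed-form "information potential" that exists for Gaussian kernels; the data are synthetic and the bandwidth is fixed rather than optimized as in the paper.

        import numpy as np

        def quadratic_entropy(x, sigma):
            # Renyi order-2 entropy estimate H2 = -log( integral of the squared KDE ).
            # For Gaussian kernels, the integral reduces to pairwise Gaussians with variance 2*sigma^2.
            d = x[:, None] - x[None, :]
            s2 = 2.0 * sigma ** 2
            ip = np.mean(np.exp(-d ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2))
            return -np.log(ip)

        rng = np.random.default_rng(4)
        gaussian = rng.normal(size=1000)
        uniform = rng.uniform(-np.sqrt(3), np.sqrt(3), size=1000)   # same variance as N(0,1)
        print("H2(gaussian):", quadratic_entropy(gaussian, 0.2))
        print("H2(uniform): ", quadratic_entropy(uniform, 0.2))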

  10. Fast generation of sparse random kernel graphs

    SciTech Connect

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
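
    For contrast with the paper's fast sampler, a naive quadratic-time generator for an inhomogeneous random (kernel) graph, in which each edge {i, j} appears independently with probability min(1, kappa(x_i, x_j)/n); the kernel chosen below is an arbitrary illustrative one, not taken from the paper.

        import numpy as np

        def naive_kernel_graph(n, kappa, rng):
            # O(n^2) sampler: edge {i,j} present with prob min(1, kappa(x_i, x_j)/n),
            # with vertex types x_i drawn uniformly on (0, 1].
            x = rng.uniform(1e-9, 1.0, size=n)
            edges = []
            for i in range(n):
                for j in range(i + 1, n):
                    if rng.random() < min(1.0, kappa(x[i], x[j]) / n):
                        edges.append((i, j))
            return x, edges

        # Illustrative kernel (tends to produce heavy-tailed degrees).
        kappa = lambda u, v: 0.2 / np.sqrt(u * v)

        rng = np.random.default_rng(5)
        x, edges = naive_kernel_graph(1000, kappa, rng)
        print("vertices:", 1000, "edges:", len(edges))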

  11. Fast generation of sparse random kernel graphs

    DOE PAGES

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.

  12. Kernel bandwidth estimation for nonparametric modeling.

    PubMed

    Bors, Adrian G; Nasios, Nikolaos

    2009-12-01

    Kernel density estimation is a nonparametric procedure for probability density modeling, which has found several applications in various fields. The smoothness and modeling ability of the functional approximation are controlled by the kernel bandwidth. In this paper, we describe a Bayesian estimation method for finding the bandwidth from a given data set. The proposed bandwidth estimation method is applied in three different computational-intelligence methods that rely on kernel density estimation: 1) scale space; 2) mean shift; and 3) quantum clustering. The third method is a novel approach that relies on the principles of quantum mechanics. This method is based on the analogy between data samples and quantum particles and uses the Schrödinger potential as a cost function. The proposed methodology is used for blind-source separation of modulated signals and for terrain segmentation based on topography information.

  13. Phenolic constituents of shea (Vitellaria paradoxa) kernels.

    PubMed

    Maranz, Steven; Wiesman, Zeev; Garti, Nissim

    2003-10-08

    Analysis of the phenolic constituents of shea (Vitellaria paradoxa) kernels by LC-MS revealed eight catechin compounds (gallic acid, catechin, epicatechin, epicatechin gallate, gallocatechin, epigallocatechin, gallocatechin gallate, and epigallocatechin gallate) as well as quercetin and trans-cinnamic acid. The mean kernel content of the eight catechin compounds was 4000 ppm (0.4% of kernel dry weight), with a range of 2100-9500 ppm. Comparison of the profiles of the six major catechins from 40 Vitellaria provenances from 10 African countries showed that the relative proportions of these compounds varied from region to region. Gallic acid was the major phenolic compound, comprising an average of 27% of the measured total phenols and exceeding 70% in some populations. Colorimetric analysis (101 samples) of total polyphenols extracted from shea butter into hexane gave an average of 97 ppm, with the values for different provenances varying between 62 and 135 ppm of total polyphenols.

  14. Fractal Weyl law for Linux Kernel architecture

    NASA Astrophysics Data System (ADS)

    Ermann, L.; Chepelianskii, A. D.; Shepelyansky, D. L.

    2011-01-01

    We study the properties of spectrum and eigenstates of the Google matrix of a directed network formed by the procedure calls in the Linux Kernel. Our results obtained for various versions of the Linux Kernel show that the spectrum is characterized by the fractal Weyl law established recently for systems of quantum chaotic scattering and the Perron-Frobenius operators of dynamical maps. The fractal Weyl exponent is found to be ν ≈ 0.65 that corresponds to the fractal dimension of the network d ≈ 1.3. An independent computation of the fractal dimension by the cluster growing method, generalized for directed networks, gives a close value d ≈ 1.4. The eigenmodes of the Google matrix of Linux Kernel are localized on certain principal nodes. We argue that the fractal Weyl law should be generic for directed networks with the fractal dimension d < 2.

  15. Tile-Compressed FITS Kernel for IRAF

    NASA Astrophysics Data System (ADS)

    Seaman, R.

    2011-07-01

    The Flexible Image Transport System (FITS) is a ubiquitously supported standard of the astronomical community. Similarly, the Image Reduction and Analysis Facility (IRAF), developed by the National Optical Astronomy Observatory, is a widely used astronomical data reduction package. IRAF supplies compatibility with FITS format data through numerous tools and interfaces. The most integrated of these is IRAF's FITS image kernel that provides access to FITS from any IRAF task that uses the basic IMIO interface. The original FITS kernel is a complex interface of purpose-built procedures that presents growing maintenance issues and lacks recent FITS innovations. A new FITS kernel is being developed at NOAO that is layered on the CFITSIO library from the NASA Goddard Space Flight Center. The simplified interface will minimize maintenance headaches as well as add important new features such as support for the FITS tile-compressed (fpack) format.

  16. A kernel-based approach for biomedical named entity recognition.

    PubMed

    Patra, Rakesh; Saha, Sujan Kumar

    2013-01-01

    Support vector machine (SVM) is one of the popular machine learning techniques used in various text processing tasks including named entity recognition (NER). The performance of the SVM classifier largely depends on the appropriateness of the kernel function. In the last few years a number of task-specific kernel functions have been proposed and used in various text processing tasks, for example, string kernel, graph kernel, tree kernel and so on. So far very few efforts have been devoted to the development of NER task specific kernel. In the literature we found that the tree kernel has been used in NER task only for entity boundary detection or reannotation. The conventional tree kernel is unable to execute the complete NER task on its own. In this paper we have proposed a kernel function, motivated by the tree kernel, which is able to perform the complete NER task. To examine the effectiveness of the proposed kernel, we have applied the kernel function on the openly available JNLPBA 2004 data. Our kernel executes the complete NER task and achieves reasonable accuracy.

  17. A dynamic kernel modifier for linux

    SciTech Connect

    Minnich, R. G.

    2002-09-03

    Dynamic Kernel Modifier, or DKM, is a kernel module for Linux that allows user-mode programs to modify the execution of functions in the kernel without recompiling or modifying the kernel source in any way. Functions may be traced, either function entry only or function entry and exit; nullified; or replaced with some other function. For the tracing case, function execution results in the activation of a watchpoint. When the watchpoint is activated, the address of the function is logged in a FIFO buffer that is readable by external applications. The watchpoints are time-stamped with the resolution of the processor high-resolution timers, which on most modern processors are accurate to a single processor tick. DKM is very similar to earlier systems such as the SunOS trace device or Linux TT. Unlike these two systems, and other similar systems, DKM requires no kernel modifications. DKM allows users to do initial probing of the kernel to look for performance problems, or even to resolve potential problems by turning functions off or replacing them. DKM watchpoints are not without cost: it takes about 200 nanoseconds to make a log entry on an 800 MHz Pentium-III. The overhead numbers are actually competitive with other hardware-based trace systems, although DKM has less accuracy than an In-Circuit Emulator such as the American Arium. Once the user has zeroed in on a problem, other mechanisms with a higher degree of accuracy can be used.

  18. Experimental study of turbulent flame kernel propagation

    SciTech Connect

    Mansour, Mohy; Peters, Norbert; Schrader, Lars-Uve

    2008-07-15

    Flame kernels in spark-ignited combustion systems dominate the flame propagation and combustion stability and performance. They are likely controlled by the spark energy, flow field and mixing field. The aim of the present work is to experimentally investigate the structure and propagation of the flame kernel in turbulent premixed methane flow using advanced laser-based techniques. The spark is generated using a pulsed Nd:YAG laser with 20 mJ pulse energy in order to avoid the effect of the electrodes on the flame kernel structure and the variation of spark energy from shot to shot. Four flames have been investigated at equivalence ratios, φj, of 0.8 and 1.0 and jet velocities, Uj, of 6 and 12 m/s. A combined two-dimensional Rayleigh and LIPF-OH technique has been applied. The flame kernel structure has been collected at several time intervals from the laser ignition between 10 μs and 2 ms. The data show that the flame kernel structure starts with a spherical shape and changes gradually to peanut-like, then to mushroom-like, and is finally disturbed by the turbulence. The mushroom-like structure lasts longer in the stoichiometric and slower jet velocity cases. The growth rate of the average flame kernel radius is divided into two linear relations; the first one, during the first 100 μs, is almost three times faster than that at the later stage between 100 and 2000 μs. The flame propagation is slightly faster in leaner flames. The trends of the flame propagation, flame radius, flame cross-sectional area and mean flame temperature are related to the jet velocity and equivalence ratio. The relations obtained in the present work allow the prediction of any of these parameters at different conditions.

  19. Kernel abortion in maize. II. Distribution of ¹⁴C among kernel carbohydrates

    SciTech Connect

    Hanft, J.M.; Jones, R.J.

    1986-06-01

    This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [¹⁴C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [¹⁴C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose.

  20. Full Waveform Inversion Using Waveform Sensitivity Kernels

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    2013-04-01

    We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory, which for unit material perturbations are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995) in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned), and some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be performed before inverting any data. In the context of active seismic experiments, this property may be used to investigate the optimal acquisition geometry and the expectable resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties compared with strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver

  1. Volatile compound formation during argan kernel roasting.

    PubMed

    El Monfalouti, Hanae; Charrouf, Zoubida; Giordano, Manuela; Guillaume, Dominique; Kartah, Badreddine; Harhar, Hicham; Gharby, Saïd; Denhez, Clément; Zeppa, Giuseppe

    2013-01-01

    Virgin edible argan oil is prepared by cold-pressing argan kernels previously roasted at 110 degrees C for up to 25 minutes. The concentration of 40 volatile compounds in virgin edible argan oil was determined as a function of argan kernel roasting time. Most of the volatile compounds begin to be formed after 15 to 25 minutes of roasting. This suggests that a strictly controlled roasting time should allow the modulation of argan oil taste and thus satisfy different types of consumers. This could be of major importance considering the present booming use of edible argan oil.

  2. Reduced multiple empirical kernel learning machine.

    PubMed

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs a high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and requires less storage space, especially during testing. Finally, the experimental results show that RMEKLM is both efficient and effective in terms of complexity and classification performance. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3

  3. Regularization techniques for PSF-matching kernels - I. Choice of kernel basis

    NASA Astrophysics Data System (ADS)

    Becker, A. C.; Homrighausen, D.; Connolly, A. J.; Genovese, C. R.; Owen, R.; Bickerton, S. J.; Lupton, R. H.

    2012-09-01

    We review current methods for building point spread function (PSF)-matching kernels for the purposes of image subtraction or co-addition. Such methods use a linear decomposition of the kernel on a series of basis functions. The correct choice of these basis functions is fundamental to the efficiency and effectiveness of the matching - the chosen bases should represent the underlying signal using a reasonably small number of shapes, and/or have a minimum number of user-adjustable tuning parameters. We examine methods whose bases comprise multiple Gauss-Hermite polynomials, as well as a form-free basis composed of delta-functions. Kernels derived from delta-functions are unsurprisingly shown to be more expressive; they are able to take more general shapes and perform better in situations where sum-of-Gaussian methods are known to fail. However, due to its many degrees of freedom (the maximum number allowed by the kernel size) this basis tends to overfit the problem and yields noisy kernels having large variance. We introduce a new technique to regularize these delta-function kernel solutions, which bridges the gap between the generality of delta-function kernels and the compactness of sum-of-Gaussian kernels. Through this regularization we are able to create general kernel solutions that represent the intrinsic shape of the PSF-matching kernel with only one degree of freedom, the strength of the regularization λ. The role of λ is effectively to exchange variance in the resulting difference image with variance in the kernel itself. We examine considerations in choosing the value of λ, including statistical risk estimators and the ability of the solution to predict solutions for adjacent areas. Both of these suggest moderate strengths of λ between 0.1 and 1.0, although this optimization is likely data set dependent. This model allows for flexible representations of the convolution kernel that have significant predictive ability and will prove useful in implementing

  4. Accuracy of Reduced and Extended Thin-Wire Kernels

    SciTech Connect

    Burke, G J

    2008-11-24

    Results are presented comparing the accuracy of the reduced thin-wire kernel and an extended kernel with exact integration of the 1/R term of the Green's function, with examples shown for simple wire structures.

  5. Integrated heat pump system

    SciTech Connect

    Reedy, W.R.

    1988-03-01

    An integrated heat pump and hot water system is described that includes: a heat pump having an indoor heat exchanger and an outdoor heat exchanger that are selectively connected to the suction line and the discharge line, respectively, of a compressor by a flow reversing means, and to each other by a liquid line having an expansion device mounted therein, whereby heating and cooling are provided to an indoor comfort zone by cycling the flow reversing means; a refrigerant-to-water heat exchanger having a hot water flow circuit in heat transfer relation with a first refrigerant condensing circuit and a second refrigerant evaporating circuit; a connection mounted in the liquid line between the indoor heat exchanger and the expansion device; and control means for regulating the flow of refrigerant through the refrigerant-to-water heat exchanger to selectively transfer heat into and out of the hot water flow circuit.

  6. Analysis of maize ( Zea mays ) kernel density and volume using microcomputed tomography and single-kernel near-infrared spectroscopy.

    PubMed

    Gustin, Jeffery L; Jackson, Sean; Williams, Chekeria; Patel, Anokhee; Armstrong, Paul; Peter, Gary F; Settles, A Mark

    2013-11-20

    Maize kernel density affects milling quality of the grain. Kernel density of bulk samples can be predicted by near-infrared reflectance (NIR) spectroscopy, but no accurate method to measure individual kernel density has been reported. This study demonstrates that individual kernel density and volume are accurately measured using X-ray microcomputed tomography (μCT). Kernel density was significantly correlated with kernel volume, air space within the kernel, and protein content. Embryo density and volume did not influence overall kernel density. Partial least-squares (PLS) regression of μCT traits with single-kernel NIR spectra gave stable predictive models for kernel density (R² = 0.78, SEP = 0.034 g/cm³) and volume (R² = 0.86, SEP = 2.88 cm³). Density and volume predictions were accurate for data collected over 10 months based on kernel weights calculated from predicted density and volume (R² = 0.83, SEP = 24.78 mg). Kernel density was significantly correlated with bulk test weight (r = 0.80), suggesting that selection of dense kernels can translate to improved agronomic performance.
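
    The prediction step described above can be illustrated with a minimal partial least-squares sketch in Python; the use of scikit-learn's PLSRegression, the synthetic spectra, and the chosen number of latent components are assumptions for illustration, not the authors' actual pipeline.

        # Illustrative sketch (not the authors' code): predict a kernel trait from
        # single-kernel NIR spectra with partial least-squares regression.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_kernels, n_wavelengths = 200, 300            # hypothetical sample and band counts
        spectra = rng.normal(size=(n_kernels, n_wavelengths))
        density = spectra[:, :5].sum(axis=1) * 0.01 + 1.25 + rng.normal(scale=0.02, size=n_kernels)

        X_train, X_test, y_train, y_test = train_test_split(spectra, density, random_state=0)
        pls = PLSRegression(n_components=10)           # number of latent variables is a tuning choice
        pls.fit(X_train, y_train)
        print("R^2 on held-out kernels:", pls.score(X_test, y_test))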

  7. Rock expansion caused by ultrasound

    NASA Astrophysics Data System (ADS)

    Hedberg, C.; Gray, A.

    2013-12-01

    It has been reported over many years that the elastic modulus of materials decreases when they are exposed to influences such as mechanical impacts, ultrasound, magnetic fields, electricity and even humidity. Non-perfect atomic structures such as rocks, concrete, or damaged metals exhibit a larger effect. This softening has most often been recorded by wave resonance measurements. The motion towards equilibrium is slow, often taking hours or days, which is why the effect is called Slow Dynamics [1]. The question had been raised whether a material expansion also occurs: 'The most fundamental parameter to consider is the volume expansion predicted to occur when positive hole charge carriers become activated, causing a decrease of the electron density in the O²⁻ sublattice of the rock-forming minerals. This decrease of electron density should affect essentially all physical parameters, including the volume.' [2]. A new type of configuration has measured the expansion of a rock subjected to ultrasound. A PZT was used as a pressure sensor while the combined thickness of the rock sample and the PZT sensor was held fixed. The expansion increased the stress in both the rock and the PZT, which gave an output voltage from the PZT. Knowing its material properties then made it possible to calculate the rock expansion. The equivalent strain caused by the ultrasound was approximately 3 × 10⁻⁵. The temperature was monitored and accounted for during the tests; for the maximum expansion the increase was 0.7 °C, which means the expansion is at least to some degree caused by heating of the material by the ultrasound. The fraction of bonds activated by ultrasound was estimated to be around 10⁻⁵. References: [1] Guyer, R.A., Johnson, P.A.: Nonlinear Mesoscopic Elasticity: The Complex Behaviour of Rocks, Soils, Concrete. Wiley-VCH 2009 [2] M.M. Freund, F.F. Freund, Manipulating the Toughness of Rocks through Electric Potentials, Final Report CIF 2011 Award NNX11AJ84A, NAS Ames 2012.

  8. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  9. Kernel maximum autocorrelation factor and minimum noise fraction transformations.

    PubMed

    Nielsen, Allan Aasbjerg

    2011-03-01

    This paper introduces kernel versions of maximum autocorrelation factor (MAF) analysis and minimum noise fraction (MNF) analysis. The kernel versions are based upon a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version, the inner products of the original data are replaced by inner products between nonlinear mappings into a higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function, and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA), kernel MAF, and kernel MNF analyses handle nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. Three examples show the very successful application of kernel MAF/MNF analysis to: 1) change detection in DLR 3K camera data recorded 0.7 s apart over a busy motorway, 2) change detection in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt to even abruptly varying multi- and hypervariate backgrounds and focus on extreme observations.
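
    The kernel-trick construction described above can be sketched compactly; the snippet below performs a kernel PCA on a centred Gaussian-kernel Gram matrix, with all computations expressed through the Gram matrix only. The kernel width and toy data are illustrative assumptions, and kernel MAF/MNF would replace the plain eigen-decomposition with the corresponding generalized eigenproblem.

        # Minimal kernel-trick sketch: data enter only through a Gram matrix,
        # which is then analysed linearly (here, kernel PCA).
        import numpy as np

        def gaussian_gram(X, sigma=1.0):
            sq = np.sum(X**2, axis=1)
            d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
            return np.exp(-d2 / (2.0 * sigma**2))

        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 3))
        K = gaussian_gram(X)

        # Centre the Gram matrix in feature space, then do a linear eigen-analysis on it.
        n = K.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        Kc = J @ K @ J
        eigval, eigvec = np.linalg.eigh(Kc)

        # Leading nonlinear components (projections of the implicitly mapped data).
        order = np.argsort(eigval)[::-1][:2]
        scores = eigvec[:, order] * np.sqrt(np.clip(eigval[order], 0, None))
        print(scores.shape)  # (100, 2)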

  10. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    SciTech Connect

    Charles Barnes; Clay Richardson; Scott Nagley; John Hunn; Eric Shaber

    2010-10-01

    Babcock and Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistently high quality and have also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small-scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing the capacity of the current fabrication line for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full-scale fuel fabrication facility.

  11. End-use quality of soft kernel durum wheat

    USDA-ARS?s Scientific Manuscript database

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  12. Reduction of complex signaling networks to a representative kernel.

    PubMed

    Kim, Jeong-Rae; Kim, Junil; Kwon, Yung-Keun; Lee, Hwang-Yeol; Heslop-Harrison, Pat; Cho, Kwang-Hyun

    2011-05-31

    The network of biomolecular interactions that occurs within cells is large and complex. When such a network is analyzed, it can be helpful to reduce the complexity of the network to a "kernel" that maintains the essential regulatory functions for the output under consideration. We developed an algorithm to identify such a kernel and showed that the resultant kernel preserves the network dynamics. Using an integrated network of all of the human signaling pathways retrieved from the KEGG (Kyoto Encyclopedia of Genes and Genomes) database, we identified this network's kernel and compared the properties of the kernel to those of the original network. We found that about 10% of the genes encoding nodes outside of the kernel were essential, whereas ~32% of the genes encoding nodes within the kernel were essential. In addition, we found that 95% of the kernel nodes corresponded to Mendelian disease genes and that 93% of synthetic lethal pairs associated with the network were contained in the kernel. Genes corresponding to nodes in the kernel had low evolutionary rates, were ubiquitously expressed in various tissues, and were well conserved between species. Furthermore, kernel genes included many drug targets, suggesting that other kernel nodes may be potential drug targets. Owing to the simplification of the entire network, the efficient modeling of a large-scale signaling network and an understanding of the core structure within a complex framework become possible.

  13. NIRS method for precise identification of Fusarium damaged wheat kernels

    USDA-ARS?s Scientific Manuscript database

    Development of scab resistant wheat varieties may be enhanced by non-destructive evaluation of kernels for Fusarium damaged kernels (FDKs) and deoxynivalenol (DON) levels. Fusarium infection generally affects kernel appearance, but insect damage and other fungi can cause similar symptoms. Also, some...

  14. Thermomechanical property of rice kernels studied by DMA

    USDA-ARS?s Scientific Manuscript database

    The thermomechanical property of the rice kernels was investigated using a dynamic mechanical analyzer (DMA). The length change of rice kernel with a loaded constant force along the major axis direction was detected during temperature scanning. The thermomechanical transition occurred in rice kernel...

  15. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...

  16. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...

  17. Multiple spectral kernel learning and a gaussian complexity computation.

    PubMed

    Reyhani, Nima

    2013-07-01

    Multiple kernel learning (MKL) partially solves the kernel selection problem in support vector machines and similar classifiers by minimizing the empirical risk over a subset of the linear combination of given kernel matrices. For large sample sets, the size of the kernel matrices becomes a numerical issue. In many cases, the kernel matrix has an effectively low rank. However, the low-rank property is not efficiently utilized in MKL algorithms. Here, we suggest multiple spectral kernel learning, which efficiently uses the low-rank property by finding a kernel matrix from a set of Gram matrices of a few eigenvectors from all given kernel matrices, called a spectral kernel set. We provide a new bound for the Gaussian complexity of the proposed kernel set, which depends on both the geometry of the kernel set and the number of Gram matrices. This characterization of the complexity implies that in an MKL setting, adding more kernels may not monotonically increase the complexity, while previous bounds show otherwise.

  18. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  19. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a..., packaging, transporting, or holding food, subject to the provisions of this section. (a) Tamarind seed...

  20. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 3 2012-04-01 2012-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  1. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 3 2011-04-01 2011-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  2. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 3 2013-04-01 2013-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  3. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains, up to two orders of magnitude, are obtained with respect to kernels computed assuming Gaussian or circularised PSFs. Software to compute these kernels is available at https://github.com/aboucaud/pypher
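
    The homogenisation step described above can be sketched as follows; this is a simplified Wiener-type construction with an ad hoc regularisation constant mu, not the exact pypher algorithm.

        # Hedged sketch: build a PSF-homogenisation kernel in Fourier space so that
        # convolving an image taken with `psf_a` approximates one taken with the
        # broader target `psf_b`. Arrays are assumed to share the same shape/grid.
        import numpy as np

        def matching_kernel(psf_a, psf_b, mu=1e-4):
            A = np.fft.fft2(np.fft.ifftshift(psf_a))
            B = np.fft.fft2(np.fft.ifftshift(psf_b))
            # Wiener-style inversion: damp frequencies where psf_a has little power.
            K = B * np.conj(A) / (np.abs(A) ** 2 + mu)
            k = np.real(np.fft.fftshift(np.fft.ifft2(K)))
            return k / k.sum()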

  4. Arbitrary-resolution global sensitivity kernels

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Fournier, A.; Dahlen, F.

    2007-12-01

    Extracting observables out of any part of a seismogram (including, e.g., diffracted phases such as Pdiff) necessitates knowledge of the 3-D time-space wavefields for the Green functions that form the backbone of Fréchet sensitivity kernels. While known for a while, this idea is still computationally intractable in 3-D, facing major simulation and storage issues when high-frequency wavefields are considered at the global scale. We recently developed a new "collapsed-dimension" spectral-element method that solves the 3-D system of elastodynamic equations in a 2-D space, based on exploiting symmetry considerations of the seismic-wave radiation patterns. We will present the technical background on the computation of waveform kernels, various examples of time- and frequency-dependent sensitivity kernels and subsequently extracted time-window kernels (e.g. banana-doughnuts). Given the computationally lightweight 2-D nature of the method, we will explore crucial parameters such as excitation type, source time functions, frequency, azimuth, discontinuity locations, and phase type, i.e. an a priori view into how, when, and where seismograms carry 3-D Earth signature. A once-and-for-all database of 2-D waveforms for various source depths shall then serve as a complete set of global time-space sensitivities for a given spherically symmetric background model, thereby allowing for tomographic inversions with arbitrary frequencies, observables, and phases.

  5. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... washing: Provided, That the presence of web or frass shall not be considered serious damage for the...

  6. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 8 2014-01-01 2014-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... washing: Provided, That the presence of web or frass shall not be considered serious damage for the...

  7. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 8 2011-01-01 2011-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... washing: Provided, That the presence of web or frass shall not be considered serious damage for the...

  8. Kernel Temporal Differences for Neural Decoding

    PubMed Central

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
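
    As a rough illustration of the kernel temporal-difference idea (not the authors' KTD(λ) implementation), the sketch below maintains a growing dictionary of kernel centres and applies a TD(0)-style update; the Gaussian kernel, learning rate, and unbounded dictionary growth are simplifying assumptions.

        # Minimal kernel temporal-difference value estimator (TD(0) variant, for illustration).
        import numpy as np

        class KernelTD:
            def __init__(self, gamma=0.9, eta=0.1, sigma=1.0):
                self.gamma, self.eta, self.sigma = gamma, eta, sigma
                self.centers, self.weights = [], []

            def _k(self, x, c):
                return np.exp(-np.sum((x - c) ** 2) / (2 * self.sigma ** 2))

            def value(self, x):
                return sum(w * self._k(x, c) for w, c in zip(self.weights, self.centers))

            def update(self, x, reward, x_next):
                # The temporal-difference error sets the weight of a new kernel centre at x.
                td_error = reward + self.gamma * self.value(x_next) - self.value(x)
                self.centers.append(np.asarray(x, dtype=float))
                self.weights.append(self.eta * td_error)
                return td_error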

  9. Kernel temporal differences for neural decoding.

    PubMed

    Bae, Jihye; Sanchez Giraldo, Luis G; Pohlmeyer, Eric A; Francis, Joseph T; Sanchez, Justin C; Príncipe, José C

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces.

  10. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  11. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  12. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  13. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  14. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  15. Symbol recognition with kernel density matching.

    PubMed

    Zhang, Wan; Wenyin, Liu; Zhang, Kun

    2006-12-01

    We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
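
    A minimal sketch of the similarity measure described above is given below, assuming SciPy's gaussian_kde and a fixed evaluation grid; the bandwidth choice and normalisation are illustrative simplifications.

        # Compare two 2-D point sets (e.g. symbol strokes) via kernel densities and KL divergence.
        import numpy as np
        from scipy.stats import gaussian_kde

        def kl_divergence_2d(points_p, points_q, grid_size=64):
            kde_p = gaussian_kde(points_p.T)          # expects shape (dims, n_points)
            kde_q = gaussian_kde(points_q.T)
            xs = np.linspace(-2, 2, grid_size)
            xx, yy = np.meshgrid(xs, xs)
            grid = np.vstack([xx.ravel(), yy.ravel()])
            p = kde_p(grid) + 1e-12                   # small floor avoids log(0)
            q = kde_q(grid) + 1e-12
            p /= p.sum()
            q /= q.sum()
            return float(np.sum(p * np.log(p / q)))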

  16. Dielectric properties of almond kernels associated with radio frequency and microwave pasteurization

    PubMed Central

    Li, Rui; Zhang, Shuang; Kou, Xiaoxi; Ling, Bo; Wang, Shaojin

    2017-01-01

    To develop advanced pasteurization treatments based on radio frequency (RF) or microwave (MW) energy, the dielectric properties of almond kernels were measured using an open-ended coaxial-line probe and impedance analyzer at frequencies between 10 and 3000 MHz, moisture contents between 4.2% and 19.6% w.b. and temperatures between 20 and 90 °C. The results showed that both the dielectric constant and the loss factor of the almond kernels decreased sharply with increasing frequency over the RF range (10–300 MHz), but gradually over the measured MW range (300–3000 MHz). Both the dielectric constant and the loss factor of almond kernels increased with increasing temperature and moisture content, and the increase was larger at higher temperature and moisture levels. Quadratic polynomial equations were developed to best fit the relationship between the dielectric constant or loss factor at 27, 40, 915 or 2450 MHz and sample temperature/moisture content, with R² greater than 0.967. The penetration depth of the electromagnetic wave into the samples decreased with increasing frequency (27–2450 MHz), moisture content (4.2–19.6% w.b.) and temperature (20–90 °C). The temperature profiles of RF-heated almond kernels at three moisture levels were obtained by experiment and by computer simulation based on the measured dielectric properties. Based on the results of this study, RF treatment has the potential to be used in practice for pasteurization of almond kernels with acceptable heating uniformity. PMID:28186149
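
    The quadratic-polynomial fitting step mentioned above can be illustrated as follows; the design matrix, placeholder data, and R² computation are assumptions for the sketch, not the authors' measured values.

        # Fit a dielectric property as a full second-order polynomial in temperature and moisture.
        import numpy as np

        def fit_quadratic(temp, moisture, prop):
            # Design matrix for: a0 + a1*T + a2*M + a3*T^2 + a4*M^2 + a5*T*M
            X = np.column_stack([np.ones_like(temp), temp, moisture,
                                 temp**2, moisture**2, temp * moisture])
            coef, *_ = np.linalg.lstsq(X, prop, rcond=None)
            pred = X @ coef
            ss_res = np.sum((prop - pred) ** 2)
            ss_tot = np.sum((prop - prop.mean()) ** 2)
            return coef, 1.0 - ss_res / ss_tot      # coefficients and R^2

        # Placeholder measurements (made-up values, for illustration only).
        temp = np.array([20, 40, 60, 90, 20, 40, 60, 90, 55], dtype=float)
        moisture = np.array([4.2, 4.2, 10.0, 10.0, 19.6, 19.6, 19.6, 10.0, 4.2])
        loss_factor = np.array([0.9, 1.3, 2.4, 3.5, 2.1, 2.9, 4.0, 3.2, 1.6])
        print(fit_quadratic(temp, moisture, loss_factor))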

  17. Spark Ignited Turbulent Flame Kernel Growth

    SciTech Connect

    Santavicca, D.A.

    1995-06-01

    An experimental study of the effects of spark power and of incomplete fuel-air mixing on spark-ignited flame kernel growth was conducted in turbulent propane-air mixtures at 1 atm, 300 K conditions. The results showed that increased spark power resulted in an increased growth rate, where the effect of short-duration breakdown sparks was found to persist for times of the order of milliseconds. The effectiveness of increased spark power was found to be less at high turbulence and high dilution conditions. Increased spark power had a greater effect on the 0-5 mm burn time than on the 5-13 mm burn time, in part because of the effect of breakdown energy on the initial size of the flame kernel. Finally, when spark power was increased by shortening the spark duration while keeping the effective energy the same, there was a significant increase in the misfire rate; however, when the spark power was further increased by increasing the breakdown energy, the misfire rate dropped to zero. The results also showed that fluctuations in local mixture strength due to incomplete fuel-air mixing cause the flame kernel surface to become wrinkled and distorted, and that the amount of wrinkling increases as the degree of incomplete fuel-air mixing increases. Incomplete fuel-air mixing was also found to result in a significant increase in cyclic variations in flame kernel growth. The average flame kernel growth rates for the premixed and the incompletely mixed cases were found to be within the experimental uncertainty, except for the 33%-RMS-fluctuation case, where the growth rate was significantly lower. The premixed and 6%-RMS-fluctuation cases had a 0% misfire rate. The misfire rates were 1% and 2% for the 13%-RMS-fluctuation and 24%-RMS-fluctuation cases, respectively; however, the misfire rate drastically increased to 23% in the 33%-RMS-fluctuation case.

  18. Kernel weights optimization for error diffusion halftoning method

    NASA Astrophysics Data System (ADS)

    Fedoseev, Victor

    2015-02-01

    This paper describes a study to find the best error diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of values. As an objective measure of quality, WSNR was used. The problem of multidimensional optimization was solved numerically using several well-known algorithms: Nelder-Mead, BFGS, and others. The study found a kernel that provides a quality gain of about 5% in comparison with the best of the commonly used kernels, introduced by Floyd and Steinberg. Other kernels obtained allow the computational complexity of the halftoning process to be significantly reduced without loss of quality.
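
    For reference, the sketch below implements error-diffusion halftoning with a parameterisable kernel, defaulting to the classic Floyd-Steinberg weights; substituting optimised weights of the kind found in the study would only change the `kernel` argument.

        # Error-diffusion halftoning of a grayscale image in [0, 1].
        import numpy as np

        # (dy, dx, weight) for the pixels that receive the quantisation error.
        FLOYD_STEINBERG = [(0, 1, 7/16), (1, -1, 3/16), (1, 0, 5/16), (1, 1, 1/16)]

        def error_diffuse(image, kernel=FLOYD_STEINBERG):
            img = image.astype(float).copy()
            out = np.zeros_like(img)
            h, w = img.shape
            for y in range(h):
                for x in range(w):
                    old = img[y, x]
                    new = 1.0 if old >= 0.5 else 0.0
                    out[y, x] = new
                    err = old - new
                    # Push the quantisation error onto not-yet-processed neighbours.
                    for dy, dx, wgt in kernel:
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w:
                            img[yy, xx] += err * wgt
            return out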

  19. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  20. The pre-image problem in kernel methods.

    PubMed

    Kwok, James Tin-yau; Tsang, Ivor Wai-hung

    2004-11-01

    In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method, which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.

  1. Selection and properties of alternative forming fluids for TRISO fuel kernel production

    NASA Astrophysics Data System (ADS)

    Baker, M. P.; King, J. C.; Gorman, B. P.; Marshall, D. W.

    2013-01-01

    Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ˜10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.

  2. Selection and properties of alternative forming fluids for TRISO fuel kernel production

    SciTech Connect

    Baker, M. P.; King, J. C.; Gorman, B. P.; Marshall, Doug W.

    2013-01-01

    Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1- bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.

  3. Difference image analysis: automatic kernel design using information criteria

    NASA Astrophysics Data System (ADS)

    Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.

    2016-03-01

    We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.
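
    A highly simplified sketch of the unregularised delta-function kernel fit with information-criterion model selection is given below; the shift-based design matrix, the AIC form, and the candidate sizes are assumptions for illustration rather than the paper's full implementation.

        # Fit a spatially invariant convolution kernel of delta basis functions by
        # linear least squares, and compare candidate kernel sizes with AIC.
        import numpy as np

        def fit_delta_kernel(ref, target, half_size):
            k = 2 * half_size + 1
            # Each column of the design matrix is the reference image shifted by one offset,
            # so each kernel pixel is a free linear parameter (edges wrap for simplicity).
            design = []
            for dy in range(-half_size, half_size + 1):
                for dx in range(-half_size, half_size + 1):
                    design.append(np.roll(np.roll(ref, dy, axis=0), dx, axis=1).ravel())
            A = np.array(design).T
            b = target.ravel()
            coef, *_ = np.linalg.lstsq(A, b, rcond=None)
            rss = float(np.sum((A @ coef - b) ** 2))
            n = b.size
            aic = n * np.log(rss / n) + 2 * coef.size   # Akaike information criterion
            return coef.reshape(k, k), aic

        # Example selection rule: pick the half-size with the smallest AIC.
        # best_aic, best_size = min(
        #     (fit_delta_kernel(ref, target, s)[1], s) for s in (1, 2, 3))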

  4. Learning bounds for kernel regression using effective data dimensionality.

    PubMed

    Zhang, Tong

    2005-09-01

    Kernel methods can embed finite-dimensional data into infinite-dimensional feature spaces. In spite of the large underlying feature dimensionality, kernel methods can achieve good generalization ability. This observation is often wrongly interpreted, and it has been used to argue that kernel learning can magically avoid the "curse-of-dimensionality" phenomenon encountered in statistical estimation problems. This letter shows that although using kernel representation, one can embed data into an infinite-dimensional feature space; the effective dimensionality of this embedding, which determines the learning complexity of the underlying kernel machine, is usually small. In particular, we introduce an algebraic definition of a scale-sensitive effective dimension associated with a kernel representation. Based on this quantity, we derive upper bounds on the generalization performance of some kernel regression methods. Moreover, we show that the resulting convergent rates are optimal under various circumstances.

  5. Efficient χ² Kernel Linearization via Random Feature Maps.

    PubMed

    Yuan, Xiao-Tong; Wang, Zhenzhen; Deng, Jiankang; Liu, Qingshan

    2016-11-01

    Explicit feature mapping is an appealing way to linearize additive kernels, such as the χ² kernel, for training large-scale support vector machines (SVMs). Although accurate in approximation, feature mapping can pose computational challenges in high-dimensional settings, as it expands the original features to a higher dimensional space. To handle this issue in the context of χ² kernel SVM learning, we introduce a simple yet efficient method to approximately linearize the χ² kernel through random feature maps. The main idea is to use sparse random projection to reduce the dimensionality of the feature maps while preserving their approximation capability with respect to the original kernel. We provide an approximation error bound for the proposed method. Furthermore, we extend our method to χ² multiple kernel SVM learning. Extensive experiments on large-scale image classification tasks confirm that the proposed approach is able to significantly speed up the training process of χ² kernel SVMs at almost no cost in testing accuracy.
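
    The two-step idea of an explicit additive-χ² feature map followed by sparse random projection can be sketched with scikit-learn building blocks as stand-ins; the specific estimators, hyperparameters, and toy data below are assumptions, not the authors' method.

        # Approximate chi^2 kernel SVM: explicit additive feature map, then a sparse
        # random projection to keep the expanded dimensionality manageable, then a linear SVM.
        import numpy as np
        from sklearn.kernel_approximation import AdditiveChi2Sampler
        from sklearn.random_projection import SparseRandomProjection
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.random((500, 256))                      # chi^2 kernels expect non-negative features
        y = (X[:, :8].sum(axis=1) > 4).astype(int)      # synthetic labels for the toy example

        model = make_pipeline(
            AdditiveChi2Sampler(sample_steps=2),        # explicit (approximate) chi^2 feature map
            SparseRandomProjection(n_components=128, random_state=0),
            LinearSVC(C=1.0),
        )
        model.fit(X, y)
        print("training accuracy:", model.score(X, y))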

  6. A fast integral equation model with a dedicated Green's kernel for eddy-current inspection of fastener holes

    NASA Astrophysics Data System (ADS)

    Pipis, Konstantinos; Skarlatos, Anastassios; Lesselier, Dominique; Theodoulidis, Theodoros

    2015-03-01

    A fast integral equation model for the eddy-current signal calculation of thin cracks breaking the wall of a borehole in a conducting plate is presented. The model is based on the surface integral equation formalism using a dedicated Green's function, which takes into account the effect of the borehole. The approach followed for the construction of the Green's kernel and the primary field is the Truncated Region Eigenfunction Expansion (TREE) method.

  7. A Novel Framework for Learning Geometry-Aware Kernels.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Xu, Chen; Chen, Bo

    2016-05-01

    Data from the real world usually have a nonlinear geometric structure and are often assumed to lie on or close to a low-dimensional manifold in a high-dimensional space. Detecting this nonlinear geometric structure of the data is important for learning algorithms. Recently, there has been a surge of interest in utilizing kernels to exploit the manifold structure of the data. Such kernels are called geometry-aware kernels and are widely used in machine learning algorithms. The performance of these algorithms relies critically on the choice of the geometry-aware kernels. Intuitively, a good geometry-aware kernel should utilize additional information other than the geometric information. In many applications, it is required to compute the out-of-sample data directly. However, most of the geometry-aware kernel methods are restricted to the available data given beforehand, with no straightforward extension to out-of-sample data. In this paper, we propose a framework for more general geometry-aware kernel learning. The proposed framework integrates multiple sources of information and enables us to develop flexible and effective kernel matrices. We then show theoretically how the learned kernel matrices are extended to the corresponding kernel functions, with which the out-of-sample data can be computed directly. Under our framework, a novel family of geometry-aware kernels is developed. In particular, some existing geometry-aware kernels can be viewed as instances of our framework. The performance of the kernels is evaluated on dimensionality reduction, classification, and clustering tasks. The empirical results show that our kernels significantly improve the performance.

  8. Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets.

    PubMed

    Wang, Shitong; Wang, Jun; Chung, Fu-lai

    2014-01-01

    Kernel methods such as the standard support vector machine and support vector regression trainings take O(N³) time and O(N²) space complexities in their naïve implementations, where N is the training set size. It is thus computationally infeasible to apply them to large data sets, and a replacement of the naive method for finding the quadratic programming (QP) solutions is highly desirable. By observing that many kernel methods can be linked with kernel density estimation (KDE), which can be efficiently implemented by some approximation techniques, a new learning method called fast KDE (FastKDE) is proposed to scale up kernel methods. It is based on establishing a connection between KDE and the QP problems formulated for kernel methods using an entropy-based integrated-squared-error criterion. As a result, FastKDE approximation methods can be applied to solve these QP problems. In this paper, the latest advance in fast data reduction via KDE is exploited. With just a simple sampling strategy, the resulting FastKDE method can be used to scale up various kernel methods with a theoretical guarantee that their performance does not degrade much. It has a time complexity of O(m³), where m is the number of data points sampled from the training set. Experiments on different benchmarking data sets demonstrate that the proposed method has comparable performance with the state-of-the-art methods and that it is effective for a wide range of kernel methods in achieving fast learning on large data sets.
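
    The data-reduction idea can be illustrated with a toy one-dimensional example: estimate a Gaussian KDE from a random subsample of size m so that downstream computations scale with m rather than N. The subsample size and bandwidth rule below are illustrative assumptions, not the FastKDE algorithm itself.

        # Gaussian KDE built from a small random subsample of a large data set.
        import numpy as np

        def sampled_kde(data, m=500, bandwidth=None, seed=None):
            rng = np.random.default_rng(seed)
            sample = data[rng.choice(len(data), size=min(m, len(data)), replace=False)]
            if bandwidth is None:
                # Silverman-style rule of thumb, computed on the subsample.
                bandwidth = 1.06 * sample.std() * len(sample) ** (-1 / 5)

            def density(x):
                z = (np.asarray(x)[:, None] - sample[None, :]) / bandwidth
                return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(sample) * bandwidth * np.sqrt(2 * np.pi))

            return density

        big = np.random.default_rng(0).normal(size=1_000_000)   # N = 1e6 points
        f = sampled_kde(big, m=500)                             # cost now scales with m
        print(f(np.array([0.0, 1.0, 2.0])))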

  9. Reaction Kernel Structure of a Slot Jet Diffusion Flame in Microgravity

    NASA Technical Reports Server (NTRS)

    Takahashi, F.; Katta, V. R.

    2001-01-01

    Diffusion flame stabilization in normal earth gravity (1 g) has long been a fundamental research subject in combustion. Local flame-flow phenomena, including heat and species transport and chemical reactions, around the flame base in the vicinity of condensed surfaces control flame stabilization and fire spreading processes. Therefore, gravity plays an important role in this topic because buoyancy induces flow in the flame zone, thus increasing the convective (and diffusive) oxygen transport into the flame zone and, in turn, the reaction rates. Recent computations show that a peak reactivity (heat-release or oxygen-consumption rate) spot, or reaction kernel, is formed in the flame base by back-diffusion and reactions of radical species in the incoming oxygen-abundant flow at relatively low temperatures (about 1550 K). Quasi-linear correlations were found between the peak heat-release or oxygen-consumption rate and the velocity at the reaction kernel for cases including both jet and flat-plate diffusion flames in airflow. The reaction kernel provides a stationary ignition source to incoming reactants, sustains combustion, and thus stabilizes the trailing diffusion flame. In a quiescent microgravity environment, no buoyancy-induced flow exists and thus purely diffusive transport controls the reaction rates. Flame stabilization mechanisms in such a purely diffusion-controlled regime remain largely unstudied. Therefore, it will be a rigorous test for the reaction kernel correlation if it can be extended toward zero-velocity conditions in the purely diffusion-controlled regime. The objectives of this study are to reveal the structure of the flame-stabilizing region of a two-dimensional (2D) laminar jet diffusion flame in microgravity and to develop a unified diffusion flame stabilization mechanism. This paper reports the recent progress in the computation and experiment performed in microgravity.

  10. Inverse of the string theory KLT kernel

    NASA Astrophysics Data System (ADS)

    Mizera, Sebastian

    2017-06-01

    The field theory Kawai-Lewellen-Tye (KLT) kernel, which relates scattering amplitudes of gravitons and gluons, turns out to be the inverse of a matrix whose components are bi-adjoint scalar partial amplitudes. In this note we propose an analogous construction for the string theory KLT kernel. We present simple diagrammatic rules for the computation of the α'-corrected bi-adjoint scalar amplitudes that are exact in α'. We find compact expressions in terms of graphs, where the standard Feynman propagators 1/p² are replaced by either 1/sin(πα'p²/2) or 1/tan(πα'p²/2), as determined by a recursive procedure. We demonstrate how the same object can be used to conveniently expand open string partial amplitudes in a BCJ basis.
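
    In the notation of the abstract, the propagator replacement and its field-theory limit read as follows (normalisation conventions here are assumed for illustration):

        \[
          \frac{1}{p^{2}} \;\longrightarrow\;
          \frac{1}{\sin\!\left(\pi\alpha' p^{2}/2\right)}
          \quad\text{or}\quad
          \frac{1}{\tan\!\left(\pi\alpha' p^{2}/2\right)},
          \qquad
          \frac{1}{\sin\!\left(\pi\alpha' p^{2}/2\right)}
          \;\xrightarrow{\;\alpha'\to 0\;}\;
          \frac{2}{\pi\alpha' p^{2}} .
        \]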

  11. Motion Blur Kernel Estimation via Deep Learning.

    PubMed

    Xu, Xiangyu; Pan, Jinshan; Zhang, Yu-Jin; Yang, Ming-Hsuan

    2017-09-18

    The success of state-of-the-art deblurring methods mainly depends on the restoration of sharp edges in a coarse-to-fine kernel estimation process. In this paper, we propose to learn a deep convolutional neural network for extracting sharp edges from blurred images. Motivated by the success of the existing filtering-based deblurring methods, the proposed model consists of two stages: suppressing extraneous details and enhancing sharp edges. We show that the two-stage model simplifies the learning process and effectively restores sharp edges. Facilitated by the learned sharp edges, the proposed deblurring algorithm does not require any coarse-to-fine strategy or edge selection, thereby significantly simplifying kernel estimation and reducing the computational load. Extensive experimental results on challenging blurry images demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of visual quality and run-time.

  12. Wilson Dslash Kernel From Lattice QCD Optimization

    SciTech Connect

    Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.; Vaidyanathan, Karthikeyan

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in theoretical nuclear and high energy physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal for illustrating several optimization techniques. In this chapter we detail our work in optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the technique gives excellent performance on the regular Xeon architecture as well.

  13. Bergman kernel and complex singularity exponent

    NASA Astrophysics Data System (ADS)

    Chen, Boyong; Lee, Hanjin

    2009-12-01

    We give a precise estimate of the Bergman kernel for the model domain defined by $\Omega_F=\{(z,w)\in \mathbb{C}^{n+1} : \mathrm{Im}\,w-|F(z)|^2>0\}$, where $F=(f_1,\dots,f_m)$ is a holomorphic map from $\mathbb{C}^n$ to $\mathbb{C}^m$, in terms of the complex singularity exponent of $F$.

  14. Control Transfer in Operating System Kernels

    DTIC Science & Technology

    1994-05-13

    Increased modularity in operating systems only increases the importance of control transfer. My thesis is that a programming language abstraction ... continuations provide allows the kernel designer, when necessary, to choose implementation performance over convenience, without affecting the design of ...

  15. FABRICATION OF URANIUM OXYCARBIDE KERNELS AND COMPACTS FOR HTR FUEL

    SciTech Connect

    Dr. Jeffrey A. Phillips; Eric L. Shaber; Scott G. Nagley

    2012-10-01

    As part of the program to demonstrate tristructural isotropic (TRISO)-coated fuel for the Next Generation Nuclear Plant (NGNP), Advanced Gas Reactor (AGR) fuel is being irradiation tested in the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL). This testing has led to improved kernel fabrication techniques, the formation of TRISO fuel particles, and upgrades to the overcoating, compaction, and heat treatment processes. Combined, these improvements provide a fuel manufacturing process that meets the stringent requirements associated with testing in the AGR experimentation program. Researchers at INL are working in conjunction with a team from Babcock and Wilcox (B&W) and Oak Ridge National Laboratory (ORNL) to (a) improve the quality of uranium oxycarbide (UCO) fuel kernels, (b) deposit TRISO layers to produce a fuel that meets or exceeds the standard developed by German researchers in the 1980s, and (c) develop a process to overcoat TRISO particles with the same matrix material, but apply it with water using equipment previously and successfully employed in the pharmaceutical industry. A primary goal of this work is to simplify the process, making it more robust and repeatable while relying less on operator technique than prior overcoating efforts. A secondary goal is to improve first-pass yields to greater than 95% through the use of established technology and equipment. In the first test, called “AGR-1,” graphite compacts containing approximately 300,000 coated particles were irradiated from December 2006 to November 2009. The AGR-1 fuel was designed to closely replicate many of the properties of German TRISO-coated particles thought to be important for good fuel performance. No release of gaseous fission products, indicative of particle coating failure, was detected in the nearly 3-year irradiation to a peak burnup of 19.6% at a time-average temperature of 1038–1121°C. Before fabricating AGR-2 fuel, each …

  16. The Palomar kernel-phase experiment: testing kernel phase interferometry for ground-based astronomical observations

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin; Tuthill, Peter; Hinkley, Sasha; Ireland, Michael J.; Greenbaum, Alexandra; Latyshev, Alexey; Monnier, John D.; Martinache, Frantz

    2016-01-01

    At present, the principal limitation on the resolution and contrast of astronomical imaging instruments comes from aberrations in the optical path, which may be imposed by the Earth's turbulent atmosphere or by variations in the alignment and shape of the telescope optics. These errors can be corrected physically, with active and adaptive optics, and in post-processing of the resulting image. A recently developed adaptive optics post-processing technique, called kernel-phase interferometry, uses linear combinations of phases that are self-calibrating with respect to small errors, with the goal of constructing observables that are robust against the residual optical aberrations in otherwise well-corrected imaging systems. Here, we present a direct comparison between kernel phase and the more established competing techniques, aperture masking interferometry, point spread function (PSF) fitting and bispectral analysis. We resolve the α Ophiuchi binary system near periastron, using the Palomar 200-Inch Telescope. This is the first case in which kernel phase has been used with a full aperture to resolve a system close to the diffraction limit with ground-based extreme adaptive optics observations. Excellent agreement in astrometric quantities is found between kernel phase and masking, and kernel phase significantly outperforms PSF fitting and bispectral analysis, demonstrating its viability as an alternative to conventional non-redundant masking under appropriate conditions.

  17. Kernel methods for phenotyping complex plant architecture.

    PubMed

    Kawamura, Koji; Hibrand-Saint Oyant, Laurence; Foucher, Fabrice; Thouroude, Tatiana; Loustau, Sébastien

    2014-02-07

    Quantitative trait loci (QTL) mapping of plant architecture is a critical step for understanding the genetic determinism of plant architecture. Previous studies adopted simple measurements, such as plant height, stem diameter, and branching intensity, for QTL mapping of plant architecture. Many of these quantitative traits were generally correlated with each other, which gives rise to statistical problems in the detection of QTL. We aim to test the applicability of kernel methods to phenotyping inflorescence architecture and its QTL mapping. We first test kernel principal component analysis (KPCA) and support vector machines (SVM) on an artificial dataset of simulated inflorescences with different types of flower distribution, coded as a sequence of flower numbers per node along a shoot. The ability of SVM and KPCA to discriminate the different inflorescence types is illustrated. We then apply the KPCA representation to a real dataset of rose inflorescence shoots (n=1460) obtained from a 98 F1 hybrid mapping population. We find kernel principal components with high heritability (>0.7), and the QTL analysis identifies a new QTL that was not detected by a trait-by-trait analysis of simple architectural measurements. The main tools developed in this paper could be used to tackle the general problem of QTL mapping of complex (sequences, 3D structures, graphs) phenotypic traits.
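
    The two analyses described above can be sketched with scikit-learn as follows: kernel principal components are extracted from simulated flower-count sequences and, separately, an SVM discriminates the two inflorescence types. The simulated shoots, the RBF kernel choice, and all parameters below are illustrative assumptions, not the kernels designed in the paper.

        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Toy stand-in for simulated inflorescences: each shoot is a sequence of
        # flower counts per node; the two architectures peak at different nodes.
        n, length = 200, 30
        labels = rng.integers(0, 2, n)
        pos = np.arange(length)
        shoots = np.array([
            rng.poisson(3 * np.exp(-0.5 * ((pos - (8 if y == 0 else 20)) / 4) ** 2))
            for y in labels
        ])

        # RBF kernel on the raw count sequences (purely for illustration).
        kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.05)
        features = kpca.fit_transform(shoots)

        clf = SVC(kernel="rbf", gamma=0.05).fit(shoots[:150], labels[:150])
        print("held-out accuracy:", clf.score(shoots[150:], labels[150:]))
        print("first kernel PC of shoot 0:", features[0, 0])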

  18. Balancing continuous covariates based on Kernel densities.

    PubMed

    Ma, Zhenjun; Hu, Feifang

    2013-03-01

    The balance of important baseline covariates is essential for convincing treatment comparisons. Stratified permuted block design and minimization are the two most commonly used balancing strategies, both of which require the covariates to be discrete. Continuous covariates are typically discretized in order to be included in the randomization scheme. But breaking continuous covariates into subcategories often changes the nature of the covariates and makes distributional balance unattainable. In this article, we propose to balance continuous covariates based on kernel density estimation, which preserves the continuity of the covariates. Simulation studies show that the proposed Kernel-Minimization can achieve distributional balance of both continuous and categorical covariates, while also keeping the group sizes well balanced. It is also shown that Kernel-Minimization is less predictable than stratified permuted block design and minimization. Finally, we apply the proposed method to redesign the NINDS trial, which has been a source of controversy due to imbalance of continuous baseline covariates. Simulation shows that imbalances such as those observed in the NINDS trial can generally be avoided through the implementation of the new method. Copyright © 2012 Elsevier Inc. All rights reserved.
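
    As a rough illustration of why kernel density estimates are a natural tool here, the sketch below measures distributional imbalance of a continuous covariate between two arms as the integrated absolute difference of their kernel density estimates; this metric and the simulated data are assumptions made for illustration, not the exact Kernel-Minimization criterion.

        import numpy as np
        from scipy.stats import gaussian_kde

        def kde_imbalance(x_a, x_b, grid=None):
            # Integrated absolute difference between the kernel density estimates of a
            # continuous covariate in two arms (illustrative imbalance metric only).
            if grid is None:
                lo = min(x_a.min(), x_b.min())
                hi = max(x_a.max(), x_b.max())
                grid = np.linspace(lo, hi, 512)
            f_a, f_b = gaussian_kde(x_a)(grid), gaussian_kde(x_b)(grid)
            return np.trapz(np.abs(f_a - f_b), grid)

        rng = np.random.default_rng(1)
        age_a = rng.normal(60, 10, 80)          # arm A
        age_b = rng.normal(60, 10, 80)          # arm B, similar distribution
        age_c = rng.normal(70, 10, 80)          # arm C, shifted distribution
        print(kde_imbalance(age_a, age_b))      # small value: good balance
        print(kde_imbalance(age_a, age_c))      # larger value: worse balance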

  19. Kernel Non-Rigid Structure from Motion

    PubMed Central

    Gotardo, Paulo F. U.; Martinez, Aleix M.

    2013-01-01

    Non-rigid structure from motion (NRSFM) is a difficult, underconstrained problem in computer vision. The standard approach in NRSFM constrains 3D shape deformation using a linear combination of K basis shapes; the solution is then obtained as the low-rank factorization of an input observation matrix. An important but overlooked problem with this approach is that non-linear deformations are often observed; these deformations lead to a weakened low-rank constraint due to the need to use additional basis shapes to linearly model points that move along curves. Here, we demonstrate how the kernel trick can be applied in standard NRSFM. As a result, we model complex, deformable 3D shapes as the outputs of a non-linear mapping whose inputs are points within a low-dimensional shape space. This approach is flexible and can use different kernels to build different non-linear models. Using the kernel trick, our model complements the low-rank constraint by capturing non-linear relationships in the shape coefficients of the linear model. The net effect can be seen as using non-linear dimensionality reduction to further compress the (shape) space of possible solutions. PMID:24002226

  20. A Fast Reduced Kernel Extreme Learning Machine.

    PubMed

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to work on the Support Vector Machine (SVM) or Least Square SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately provided sufficiently many support vectors are used. Experimental results on a wide variety of real-world small and large instance size applications in the context of binary classification, multi-class problems, and regression are then reported to show that RKELM can achieve a level of generalization performance competitive with SVM/LS-SVM at only a fraction of the computational effort.
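
    A minimal sketch of the reduced-kernel idea described above, randomly choosing mapping samples and solving a single regularized least-squares problem for the output weights, might look as follows; the RBF kernel, regularization constant, and toy regression data are illustrative assumptions.

        import numpy as np

        def rbf(A, B, gamma=0.5):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def rkelm_fit(X, y, n_support=50, C=100.0, seed=0):
            # Pick a random subset of training samples as mapping samples, then solve a
            # ridge-regularized least-squares problem for the output weights.
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(X), size=n_support, replace=False)
            S = X[idx]
            H = rbf(X, S)                                   # n x m kernel feature map
            beta = np.linalg.solve(H.T @ H + np.eye(n_support) / C, H.T @ y)
            return S, beta

        def rkelm_predict(X, S, beta):
            return rbf(X, S) @ beta

        rng = np.random.default_rng(2)
        X = rng.uniform(-3, 3, size=(500, 1))
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
        S, beta = rkelm_fit(X[:400], y[:400])
        pred = rkelm_predict(X[400:], S, beta)
        print("test RMSE:", np.sqrt(np.mean((pred - y[400:]) ** 2)))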

  1. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.

  2. Abrasion resistant heat pipe

    DOEpatents

    Ernst, D.M.

    1984-10-23

    A specially constructed heat pipe is described for use in fluidized bed combustors. Two distinct coatings are spray coated onto a heat pipe casing constructed of low thermal expansion metal, each coating serving a different purpose. The first coating forms aluminum oxide to prevent hydrogen permeation into the heat pipe casing, and the second coating contains stabilized zirconium oxide to provide abrasion resistance while not substantially affecting the heat transfer characteristics of the system.

  3. Abrasion resistant heat pipe

    DOEpatents

    Ernst, Donald M.

    1984-10-23

    A specially constructed heat pipe for use in fluidized bed combustors. Two distinct coatings are spray coated onto a heat pipe casing constructed of low thermal expansion metal, each coating serving a different purpose. The first coating forms aluminum oxide to prevent hydrogen permeation into the heat pipe casing, and the second coating contains stabilized zirconium oxide to provide abrasion resistance while not substantially affecting the heat transfer characteristics of the system.

  4. Towards smart energy systems: application of kernel machine regression for medium term electricity load forecasting.

    PubMed

    Alamaniotis, Miltiadis; Bargiotas, Dimitrios; Tsoukalas, Lefteri H

    2016-01-01

    Integration of energy systems with information technologies has facilitated the realization of smart energy systems that utilize information to optimize system operation. To that end, accurate, ahead-of-time forecasting of load demand is crucial for optimizing energy system operation. In particular, load forecasting allows planning of system expansion, and decision making for enhancing system safety and reliability. In this paper, the application of two types of kernel machines for medium term load forecasting (MTLF) is presented and their performance is evaluated on a set of historical electricity load demand data. The two kernel machine models, namely Gaussian process regression (GPR) and relevance vector regression (RVR), are utilized for making predictions of future load demand. Both models are equipped with a Gaussian kernel and are tested on daily predictions for a 30-day-ahead horizon using data from the New England area. Furthermore, their performance is compared to the ARMA(2,2) model with respect to mean absolute percentage error and squared correlation coefficient. Results demonstrate the superiority of RVR over the other forecasting models in performing MTLF.
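
    A bare-bones version of the GPR half of this comparison can be put together with scikit-learn as below; the synthetic weekly load pattern, kernel settings, and 30-day holdout are stand-ins for the New England data and tuning used in the paper.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Synthetic daily load series with a weekly cycle standing in for real data.
        rng = np.random.default_rng(3)
        days = np.arange(365.0)
        load = 1000 + 150 * np.sin(2 * np.pi * days / 7) + 20 * rng.normal(size=days.size)

        kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=1.0)
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gpr.fit(days[:-30, None], load[:-30])

        # 30-day-ahead daily predictions, as in the medium-term forecasting setup.
        mean, std = gpr.predict(days[-30:, None], return_std=True)
        mape = 100 * np.mean(np.abs((mean - load[-30:]) / load[-30:]))
        print(f"30-day MAPE: {mape:.2f}%")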

  5. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…

  6. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.

  7. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
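
    A rough way to see the trade-off discussed above is to build the unconstrained Wiener filter for a known blur and noise level and then truncate its impulse response to a small spatial support. This crop is only a stand-in for the paper's constrained mean-square-optimal derivation, and the blur width and noise-to-signal ratio below are assumptions.

        import numpy as np

        # Unconstrained Wiener restoration filter for a Gaussian blur plus noise,
        # followed by truncation of its impulse response to a small 5x5 kernel.
        N = 64
        x = np.arange(N) - N // 2
        X, Y = np.meshgrid(x, x)
        psf = np.exp(-(X**2 + Y**2) / (2 * 1.5**2))
        psf /= psf.sum()

        H = np.fft.fft2(np.fft.ifftshift(psf))
        nsr = 0.01                                    # assumed noise-to-signal power ratio
        wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)  # unconstrained Wiener filter

        # Small spatially constrained kernel: crop the filter's impulse response.
        impulse = np.real(np.fft.fftshift(np.fft.ifft2(wiener)))
        c = N // 2
        small_kernel = impulse[c - 2:c + 3, c - 2:c + 3]
        print(small_kernel.shape, small_kernel.sum())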

  8. Fine tuning of process parameters for improving briquette production from palm kernel shell gasification waste.

    PubMed

    Bazargan, Alireza; Rough, Sarah L; McKay, Gordon

    2017-05-05

    Palm kernel shell biochars (PKSB) ejected as residues from a gasifier have been used for solid fuel briquette production. With this approach, palm kernel shells can be used for energy production twice: first, by producing rich syngas during gasification; second, by compacting the leftover residues from gasification into high calorific value briquettes. Herein, the process parameters for the manufacture of PKSB biomass briquettes via compaction are optimized. Two possible optimum process scenarios are considered. In the first, the compaction speed is increased from 0.5 to 10 mm/s, the compaction pressure is decreased from 80 MPa to 40 MPa, the retention time is reduced from 10 s to zero, and the starch binder content of the briquette is halved from 0.1 to 0.05 kg/kg. With these adjustments, the briquette production rate increases by more than 20-fold; hence capital and operational costs can be reduced and the service life of compaction equipment can be increased. The resulting product satisfactorily passes tensile (compressive) crushing strength and impact resistance tests. The second scenario involves reducing the starch weight content to 0.03 kg/kg, while reducing the compaction pressure to a value no lower than 60 MPa. Overall, in both cases, the PKSB biomass briquettes show excellent potential as a solid fuel, with calorific values on par with good-quality coal. CHNS: carbon, hydrogen, nitrogen, sulfur; FFB: fresh fruit bunch(es); HHV: higher heating value [J/kg]; LHV: lower heating value [J/kg]; PKS: palm kernel shell(s); PKSB: palm kernel shell biochar(s); POME: palm oil mill effluent; RDF: refuse-derived fuel; TGA: thermogravimetric analysis.

  9. Edgeworth expansions of stochastic trading time

    NASA Astrophysics Data System (ADS)

    Decamps, Marc; De Schepper, Ann

    2010-08-01

    Under most local and stochastic volatility models the underlying forward is assumed to be a positive function of a time-changed Brownian motion. This nicely relates the implied volatility smile to the so-called activity rate in the market. Following Young and DeWitt-Morette (1986) [8], we propose to apply the Duru-Kleinert process-cum-time transformation in the path integral to formulate the transition density of the forward. The method leads to asymptotic expansions of the transition density around a Gaussian kernel corresponding to the average activity in the market conditional on the forward value. The approximation is numerically illustrated by pricing vanilla options under the CEV model and the popular normal SABR model. The asymptotics can also be used for Monte Carlo simulations or backward integration schemes.

  10. Microscale Regenerative Heat Exchanger

    NASA Technical Reports Server (NTRS)

    Moran, Matthew E.; Stelter, Stephan; Stelter, Manfred

    2006-01-01

    The device described herein is designed primarily for use as a regenerative heat exchanger in a miniature Stirling engine or Stirling-cycle heat pump. A regenerative heat exchanger (sometimes called, simply, a "regenerator" in the Stirling-engine art) is basically a thermal capacitor: Its role in the Stirling cycle is to alternately accept heat from, then deliver heat to, an oscillating flow of a working fluid between compression and expansion volumes, without introducing an excessive pressure drop. These volumes are at different temperatures, and conduction of heat between these volumes is undesirable because it reduces the energy-conversion efficiency of the Stirling cycle.

  11. Convolution kernel design and efficient algorithm for sampling density correction.

    PubMed

    Johnson, Kenneth O; Pipe, James G

    2009-02-01

    Sampling density compensation is an important step in non-Cartesian image reconstruction. One of the common techniques to determine weights that compensate for differences in sampling density involves a convolution. A new convolution kernel is designed for sampling density compensation, with the aim of minimizing the error in the fully reconstructed image. The resulting weights obtained using this new kernel are compared with those of various previous methods, showing a reduction in reconstruction error. A computationally efficient algorithm is also presented that facilitates the calculation of the convolution of finite kernels. Both the kernel and the algorithm are extended to 3D. Copyright 2009 Wiley-Liss, Inc.
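
    The convolution-based weighting idea can be illustrated with the classic iterative density-compensation scheme sketched below for 1-D nonuniform samples (w updated as w divided by the convolution of w with a kernel); the Gaussian kernel and its width are assumptions here, whereas the paper designs a purpose-built kernel and a more efficient algorithm.

        import numpy as np

        def density_weights(t, width=0.05, n_iter=20):
            # Iterative convolution-based density compensation: w <- w / (C @ w),
            # where C holds the kernel evaluated between all pairs of sample locations.
            diffs = t[:, None] - t[None, :]
            C = np.exp(-(diffs / width) ** 2)
            w = np.ones_like(t)
            for _ in range(n_iter):
                w = w / (C @ w)
            return w

        # Samples cluster near the origin, so weights there should come out smaller
        # than at the sparsely sampled edges.
        u = np.linspace(-1, 1, 101)
        t = np.sign(u) * u ** 2
        w = density_weights(t)
        print(w[50], w[0])   # center weight << edge weight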

  12. A Kernel-based Account of Bibliometric Measures

    NASA Astrophysics Data System (ADS)

    Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji

    The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on the three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of 'relativity', or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.
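
    One generic member of such a kernel family on a citation graph is the truncated von Neumann diffusion kernel sketched below, whose decay parameter plays the role of the relatedness-versus-importance bias described above; the toy graph and parameter value are illustrative and not taken from the paper.

        import numpy as np

        def neumann_kernel(A, alpha, n_terms=10):
            # Truncated von Neumann diffusion kernel sum_{k>=1} alpha^k A^k.
            # Small alpha emphasizes direct relatedness; larger alpha (still below
            # 1/spectral radius of A) lets longer citation paths dominate.
            K = np.zeros_like(A, dtype=float)
            P = np.eye(len(A))
            for k in range(1, n_terms + 1):
                P = P @ A                 # P = A^k
                K = K + alpha**k * P
            return K

        # Tiny citation graph: edge i -> j means document i cites document j.
        A = np.array([[0, 1, 1, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1],
                      [0, 0, 0, 0]], dtype=float)
        K = neumann_kernel(A, alpha=0.3)
        print(np.round(K, 3))   # entry (i, j): kernel score of document j relative to i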

  13. Isolation of bacterial endophytes from germinated maize kernels.

    PubMed

    Rijavec, Tomaz; Lapanje, Ales; Dermastia, Marina; Rupnik, Maja

    2007-06-01

    The germination of surface-sterilized maize kernels under aseptic conditions proved to be a suitable method for isolation of kernel-associated bacterial endophytes. Bacterial strains identified by partial 16S rRNA gene sequencing as Pantoea sp., Microbacterium sp., Frigoribacterium sp., Bacillus sp., Paenibacillus sp., and Sphingomonas sp. were isolated from kernels of 4 different maize cultivars. Genus Pantoea was associated with a specific maize cultivar. The kernels of this cultivar were often overgrown with the fungus Lecanicillium aphanocladii; however, those exhibiting Pantoea growth were never colonized with it. Furthermore, the isolated bacterium strain inhibited fungal growth in vitro.

  14. Geometric tree kernels: classification of COPD from airway tree geometry.

    PubMed

    Feragen, Aasa; Petersen, Jens; Grimm, Dominik; Dirksen, Asger; Pedersen, Jesper Holst; Borgwardt, Karsten; de Bruijne, Marleen

    2013-01-01

    Methodological contributions: This paper introduces a family of kernels for analyzing (anatomical) trees endowed with vector valued measurements made along the tree. While state-of-the-art graph and tree kernels use combinatorial tree/graph structure with discrete node and edge labels, the kernels presented in this paper can include geometric information such as branch shape, branch radius or other vector valued properties. In addition to being flexible in their ability to model different types of attributes, the presented kernels are computationally efficient and some of them can easily be computed for large datasets (N ~ 10,000) of trees with 30-600 branches. Combining the kernels with standard machine learning tools enables us to analyze the relation between disease and anatomical tree structure and geometry. Experimental results: The kernels are used to compare airway trees segmented from low-dose CT, endowed with branch shape descriptors and airway wall area percentage measurements made along the tree. Using kernelized hypothesis testing we show that the geometric airway trees are significantly differently distributed in patients with Chronic Obstructive Pulmonary Disease (COPD) than in healthy individuals. The geometric tree kernels also give a significant increase in the classification accuracy of COPD from geometric tree structure endowed with airway wall thickness measurements in comparison with state-of-the-art methods, giving further insight into the relationship between airway wall thickness and COPD. Software: Software for computing kernels and statistical tests is available at http://image.diku.dk/aasa/software.php.

  15. Influence of wheat kernel physical properties on the pulverizing process.

    PubMed

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for the analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ kg⁻¹ to 159 kJ kg⁻¹. On the basis of the data obtained, many significant correlations (p < 0.05) were found between wheat kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness were significantly and positively correlated with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.

  16. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance will be sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We consider that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training sample can be used in our method. We use noised face images to obtain virtual face samples. The noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work provides a simple and feasible way to obtain virtual face samples: Gaussian noise (or other types of noise) is imposed on the original training samples to obtain possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.

  17. Model-based online learning with kernels.

    PubMed

    Li, Guoqi; Wen, Changyun; Li, Zheng Guo; Zhang, Aimin; Yang, Feng; Mao, Kezhi

    2013-03-01

    New optimization models and algorithms for online learning with kernels (OLK) in classification, regression, and novelty detection are proposed in a reproducing kernel Hilbert space. Unlike the stochastic gradient descent algorithm, called the naive online regularized risk minimization algorithm (NORMA), OLK algorithms are obtained by solving a constrained optimization problem based on the proposed models. By exploiting the techniques of the Lagrange dual problem, as in Vapnik's support vector machine (SVM), the solution of the optimization problem can be obtained iteratively, and the iteration process is similar to that of NORMA. This further strengthens the foundation of OLK and enriches the research area of SVM. We also apply the obtained OLK algorithms to problems in classification, regression, and novelty detection, including real-time background subtraction, to show their effectiveness. The experimental results for both classification and regression illustrate that the accuracy of the OLK algorithms is comparable with that of traditional SVM-based algorithms, such as SVM and least square SVM (LS-SVM), and with state-of-the-art algorithms such as the kernel recursive least squares (KRLS) method and the projectron method, while it is slightly higher than that of NORMA. On the other hand, the computational cost of the OLK algorithms is comparable with, or slightly lower than, that of existing online methods, such as the above-mentioned NORMA, KRLS, and projectron methods, but much lower than that of SVM-based algorithms. In addition, unlike SVM and LS-SVM, OLK algorithms can be applied to non-stationary problems. The applicability of OLK to novelty detection is also illustrated by simulation results.
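
    For orientation, the baseline NORMA-style update that the proposed OLK algorithms are compared against can be sketched as below for online kernel classification with the hinge loss; the kernel, step size, and toy data are assumptions, and this is the baseline stochastic-gradient method rather than the constrained-optimization OLK algorithms themselves.

        import numpy as np

        def rbf(x, z, gamma=1.0):
            return np.exp(-gamma * np.sum((x - z) ** 2))

        def norma_online(stream, eta=0.2, lam=0.01, gamma=1.0):
            # NORMA-style online learning: shrink existing coefficients by (1 - eta*lam)
            # and, on a hinge-loss margin violation, add the example as a new expansion point.
            centers, coefs = [], []
            errors = 0
            for x, y in stream:
                f = sum(a * rbf(c, x, gamma) for c, a in zip(centers, coefs))
                if y * f <= 0:
                    errors += 1
                coefs = [(1 - eta * lam) * a for a in coefs]
                if y * f < 1:                       # margin violated
                    centers.append(x)
                    coefs.append(eta * y)
            return errors

        rng = np.random.default_rng(4)
        X = rng.normal(size=(300, 2))
        y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)  # nonlinearly separable labels
        print("online mistakes:", norma_online(zip(X, y)))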

  18. Neutron scattering kernel for solid deuterium

    NASA Astrophysics Data System (ADS)

    Granada, J. R.

    2009-06-01

    A new scattering kernel to describe the interaction of slow neutrons with solid deuterium was developed. The main characteristics of that system are contained in the formalism, including the lattice's density of states, the Young-Koppel quantum treatment of the rotations, and the internal molecular vibrations. The elastic processes involving coherent and incoherent contributions are fully described, as well as the spin-correlation effects. The results from the new model are compared with the best available experimental data, showing very good agreement.

  19. Verification of Chare-kernel programs

    SciTech Connect

    Bhansali, S.; Kale, L.V. )

    1989-01-01

    Experience with concurrent programming has shown that concurrent programs can conceal bugs even after extensive testing. Thus, there is a need for practical techniques which can establish the correctness of parallel programs. This paper proposes a method for proving the partial correctness of programs written in the Chare-kernel language, which is a language designed to support the parallel execution of computations with irregular structures. The proof is based on the lattice proof technique and is divided into two parts. The first part is concerned with the program behavior within a single chare instance, whereas the second part captures the inter-chare interaction.

  20. Fixed kernel regression for voltammogram feature extraction

    NASA Astrophysics Data System (ADS)

    Acevedo Rodriguez, F. J.; López-Sastre, R. J.; Gil-Jiménez, P.; Ruiz-Reyes, N.; Maldonado Bascón, S.

    2009-12-01

    Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals.
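
    One simple way to realize this kind of fixed-kernel feature extraction is to project each voltammogram onto a fixed set of Gaussian kernels by least squares and use the fitted coefficients as features, as sketched below; the number of centers, kernel width, and synthetic trace are illustrative assumptions rather than the settings used in the paper.

        import numpy as np

        def fixed_kernel_features(signal, potentials, n_centers=12, width=0.08):
            # Least-squares fit of the signal onto fixed Gaussian kernels; the fitted
            # coefficients serve as a low-dimensional feature vector.
            centers = np.linspace(potentials.min(), potentials.max(), n_centers)
            Phi = np.exp(-((potentials[:, None] - centers[None, :]) / width) ** 2)
            coefs, *_ = np.linalg.lstsq(Phi, signal, rcond=None)
            return coefs

        # Synthetic voltammogram-like trace with two oxidation peaks plus noise.
        E = np.linspace(-0.5, 0.5, 400)
        current = (np.exp(-((E - 0.1) / 0.05) ** 2)
                   + 0.6 * np.exp(-((E + 0.2) / 0.07) ** 2)
                   + 0.02 * np.random.default_rng(5).normal(size=E.size))
        features = fixed_kernel_features(current, E)
        print(features.shape)   # (12,) coefficients instead of 400 raw samples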

  1. Prediction of kernel density of corn using single-kernel near infrared spectroscopy

    USDA-ARS?s Scientific Manuscript database

    Corn hardness is an important property for dry- and wet-millers, food processors, and corn breeders developing hybrids for specific markets. Of the several methods used to measure hardness, kernel density measurements are one of the more repeatable methods to quantify hardness. Near infrared spec...

  2. Thermal Expansion of AuIn2

    SciTech Connect

    Saw, C K; Siekhaus, W J

    2004-07-12

    The thermal expansion of AuIn{sub 2} is of great interest in soldering technology. Indium-containing solders have been used to make gold wire interconnects at low soldering temperatures, and over time AuIn{sub 2} is formed between the gold wire and the solder due to the high heat of formation and the high inter-metallic diffusion of indium. Hence, the thermal expansion of the AuIn{sub 2} alloy in comparison with that of the gold wire and the indium-containing solder is critical in determining the integrity of the connection. We present the results of x-ray diffraction measurements of the coefficient of linear expansion of AuIn{sub 2}, as well as the bulk expansion and density changes, over the temperature range of 30 to 500 °C.

  3. Femtosecond dynamics of cluster expansion

    NASA Astrophysics Data System (ADS)

    Gao, Xiaohui; Wang, Xiaoming; Shim, Bonggu; Arefiev, Alexey; Tushentsov, Mikhail; Breizman, Boris; Downer, Mike

    2010-03-01

    Noble gas clusters irradiated by intense ultrafast laser pulses expand quickly and become typical plasmas on a picosecond time scale. During the expansion, the clustered plasma exhibits unique optical properties such as strong absorption and a positive contribution to the refractive index. Here we study cluster expansion dynamics by femtosecond time-resolved refractive index and absorption measurements in cluster gas jets after ionization and heating by an intense pump pulse. The refractive index measured by frequency domain interferometry (FDI) shows the transient positive peak of the refractive index due to the clustered plasma. By separating it from the negative contribution of the monomer plasma, we are able to determine the cluster fraction. The absorption measured by a delayed probe shows the contribution from clusters of various sizes. The plasma resonances in the clusters explain the enhancement of the absorption in our isothermal expanding cluster model. The cluster size distribution can be determined. A complete understanding of the femtosecond dynamics of cluster expansion is essential for the accurate interpretation and control of laser-cluster experiments such as phase-matched harmonic generation in cluster media.

  4. Analysis of maize (Zea mays) kernel density and volume using micro-computed tomography and single-kernel near infrared spectroscopy

    USDA-ARS?s Scientific Manuscript database

    Maize kernel density impacts milling quality of the grain due to kernel hardness. Harder kernels are correlated with higher test weight and are more resistant to breakage during harvest and transport. Softer kernels, in addition to being susceptible to mechanical damage, are also prone to pathogen ...

  5. Introductory heat-transfer

    NASA Technical Reports Server (NTRS)

    Widener, Edward L.

    1992-01-01

    The objective is to introduce some concepts of thermodynamics in existing heat-treating experiments using available items. The specific objectives are to define the thermal properties of materials and to visualize expansivity, conductivity, heat capacity, and the melting point of common metals. The experimental procedures are described.

  6. Kernel polynomial representation for imaginary-time Green’s functions in continuous-time quantum Monte Carlo impurity solver

    NASA Astrophysics Data System (ADS)

    Huang, Li

    2016-11-01

    Inspired by the recently proposed Legendre orthogonal polynomial representation for imaginary-time Green’s functions G(τ), we develop an alternative and superior representation for G(τ) and implement it in the hybridization expansion continuous-time quantum Monte Carlo impurity solver. This representation is based on the kernel polynomial method, which introduces integral kernel functions to filter the numerical fluctuations caused by the explicit truncation of the polynomial expansion series and can improve the computational precision significantly. As an illustration of the new representation, we re-examine the imaginary-time Green’s functions of the single-band Hubbard model in the framework of dynamical mean-field theory. The calculated results suggest that, with carefully chosen integral kernel functions, whether the system is metallic or insulating, the Gibbs oscillations found in the previous Legendre orthogonal polynomial representation are vastly suppressed and remarkable corrections to the measured Green’s functions are obtained. Project supported by the National Natural Science Foundation of China (Grant No. 11504340).
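
    The kernel-polynomial mechanism referred to above, namely damping the truncated Chebyshev moments with an integral kernel such as the Jackson kernel to suppress Gibbs oscillations, can be illustrated in a few lines; the step-function target and numerically computed moments below are generic stand-ins, not the G(τ) measurement inside the quantum Monte Carlo solver.

        import numpy as np

        def jackson_kernel(n_moments):
            # Jackson damping factors g_n; multiplying the Chebyshev moments by g_n
            # suppresses the Gibbs oscillations caused by truncating the series.
            N = n_moments
            n = np.arange(N)
            return ((N - n + 1) * np.cos(np.pi * n / (N + 1))
                    + np.sin(np.pi * n / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)

        def kpm_reconstruct(moments, x):
            # f(x) ~ [g_0*mu_0 + 2*sum_n g_n*mu_n*T_n(x)] / (pi*sqrt(1-x^2))
            g = jackson_kernel(len(moments))
            acc = g[0] * moments[0] * np.ones_like(x)
            for n in range(1, len(moments)):
                acc += 2.0 * g[n] * moments[n] * np.cos(n * np.arccos(x))
            return acc / (np.pi * np.sqrt(1.0 - x**2))

        # Step function as a stand-in for a sharply featured target; the Chebyshev
        # moments mu_n = integral f(x) T_n(x) dx are computed numerically here.
        xs = np.linspace(-0.99, 0.99, 2001)
        f = (xs > 0).astype(float)
        moments = np.array([np.trapz(f * np.cos(n * np.arccos(xs)), xs) for n in range(40)])
        recon = kpm_reconstruct(moments, xs)
        print(recon[200], recon[1800])   # ~0 on the left plateau, ~1 on the right, no ringing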

  7. Delimiting areas of endemism through kernel interpolation.

    PubMed

    Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The new approach is based on estimating the overlap between the distributions of species through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.

  8. Delimiting Areas of Endemism through Kernel Interpolation

    PubMed Central

    Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The new approach is based on estimating the overlap between the distributions of species through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
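
    A stripped-down sketch of the centroid-plus-radius interpolation described above is given below: each species contributes a Gaussian kernel centred at the centroid of its records, with a bandwidth set by the distance to its farthest record, and peaks of the summed surface suggest candidate areas of endemism. The Gaussian form, the scaling, and the toy occurrence data are assumptions for illustration.

        import numpy as np

        def gie_surface(occurrences, grid_x, grid_y):
            # For each species, place a Gaussian kernel at the centroid of its records,
            # with bandwidth set by the distance from the centroid to the farthest record;
            # the summed surface is a grid-free endemism map.
            GX, GY = np.meshgrid(grid_x, grid_y)
            surface = np.zeros_like(GX)
            for pts in occurrences:                       # pts: (n_records, 2) per species
                centroid = pts.mean(axis=0)
                radius = np.max(np.linalg.norm(pts - centroid, axis=1)) + 1e-9
                d2 = (GX - centroid[0]) ** 2 + (GY - centroid[1]) ** 2
                surface += np.exp(-d2 / (2 * radius ** 2))
            return surface

        rng = np.random.default_rng(6)
        # Three co-occurring (synendemic) species plus one widespread species.
        species = [rng.normal([2, 2], 0.3, (15, 2)) for _ in range(3)]
        species.append(rng.uniform(0, 10, (15, 2)))
        grid = np.linspace(0, 10, 200)
        S = gie_surface(species, grid, grid)
        peak = np.unravel_index(S.argmax(), S.shape)
        print("peak near:", grid[peak[1]], grid[peak[0]])   # approximately (2, 2)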

  9. Generalized Langevin equation with tempered memory kernel

    NASA Astrophysics Data System (ADS)

    Liemert, André; Sandev, Trifce; Kantz, Holger

    2017-01-01

    We study a generalized Langevin equation for a free particle in the presence of a truncated power-law and Mittag-Leffler memory kernel. It is shown that, in the presence of truncation, the particle turns from subdiffusive behavior in the short-time limit to normal diffusion in the long-time limit. The case of a harmonic oscillator is considered as well, and the relaxation functions and the normalized displacement correlation function are given in exact form. By considering an external time-dependent periodic force, we obtain resonant behavior even in the case of a free particle due to the influence of the environment on the particle's motion. Additionally, the double-peak phenomenon in the imaginary part of the complex susceptibility is observed. The truncation parameter is found to have a strong influence on the behavior of these quantities, and it is shown how the truncation parameter changes the critical frequencies. The normalized displacement correlation function for a fractional generalized Langevin equation is investigated as well. All the results are exact and given in terms of the three-parameter Mittag-Leffler function and the Prabhakar generalized integral operator, whose kernel contains a three-parameter Mittag-Leffler function. Such truncated Langevin equation motion can be highly relevant for the description of the lateral diffusion of lipids and proteins in cell membranes.

  10. Transcriptome analysis of Ginkgo biloba kernels

    PubMed Central

    He, Bing; Gu, Yincong; Xu, Meng; Wang, Jianwen; Cao, Fuliang; Xu, Li-an

    2015-01-01

    Ginkgo biloba is a dioecious species native to China with medicinally and phylogenetically important characteristics; however, genomic resources for this species are limited. In this study, we performed the first transcriptome sequencing for Ginkgo kernels at five time points using Illumina paired-end sequencing. Approximately 25.08 Gb of clean reads were obtained, and 68,547 unigenes with an average length of 870 bp were generated by de novo assembly. Of these unigenes, 29,987 (43.74%) were annotated in publicly available plant protein databases. A total of 3,869 genes were identified as significantly differentially expressed, and enrichment analysis was conducted at different time points. Furthermore, metabolic pathway analysis revealed that 66 unigenes were responsible for terpenoid backbone biosynthesis, with up to 12 up-regulated unigenes involved in the biosynthesis of ginkgolide and bilobalide. Differential gene expression analysis together with real-time PCR experiments indicated that the synthesis of bilobalide may have interfered with the ginkgolide synthesis process in the kernel. These data can remarkably expand the existing transcriptome resources of Ginkgo, and provide a valuable platform to reveal more about the developmental and metabolic mechanisms of this species. PMID:26500663

  11. Scientific Computing Kernels on the Cell Processor

    SciTech Connect

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  12. End-use quality of soft kernel durum wheat

    USDA-ARS?s Scientific Manuscript database

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat is known for its very hard texture, which influences how it is milled and for what products it is well suited. We developed soft kernel durum wheat lines via Ph1b-mediated homoeologous recombination with Dr. Leonard Joppa...

  13. Ambered kernels in stenospermocarpic fruit of eastern black walnut

    Treesearch

    Michele R. Warmund; J.W. Van Sambeek

    2014-01-01

    "Ambers" is a term used to describe poorly filled, shriveled eastern black walnut (Juglans nigra L.) kernels with a dark brown or black-colored pellicle that are unmarketable. Studies were conducted to determine the incidence of ambered black walnut kernels and to ascertain when symptoms were apparent in specific tissues. The occurrence of...

  14. Parametric kernel-driven active contours for image segmentation

    NASA Astrophysics Data System (ADS)

    Wu, Qiongzhi; Fang, Jiangxiong

    2012-10-01

    We investigated a parametric kernel-driven active contour (PKAC) model, which implicitly combines kernel mapping and piecewise constant modeling of the image data via a kernel function. The proposed model consists of a curve evolution functional with three terms: global and local kernel-driven terms, which evaluate the deviation of the mapped image data within each region from the piecewise constant model, and a regularization term expressed as the length of the evolution curves. Through the local kernel-driven term, the proposed model can effectively segment images with intensity inhomogeneity by incorporating local image information. By balancing the weight between the global and local kernel-driven terms, the proposed model can segment images with either intensity homogeneity or intensity inhomogeneity. To ensure the smoothness of the level set function and reduce the computational cost, a distance regularizing term is applied to penalize the deviation of the level set function and eliminate the requirement of re-initialization. Compared with the local image fitting model and the local binary fitting model, experimental results show the advantages of the proposed method in terms of computational efficiency and accuracy.

  15. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  16. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  17. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  18. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined kernel...

  19. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  20. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined kernel...

  1. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  2. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined kernel...

  3. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined kernel...

  4. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  5. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined kernel...

  6. Evidence-Based Kernels: Fundamental Units of Behavioral Influence

    ERIC Educational Resources Information Center

    Embry, Dennis D.; Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of…

  7. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integration of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form …

  8. Sugar uptake into kernels of tunicate tassel-seed maize

    SciTech Connect

    Thomas, P.A.; Felker, F.C.; Crawford, C.G. )

    1990-05-01

    A maize (Zea mays L.) strain expressing both the tassel-seed (Ts-5) and tunicate (Tu) characters was developed which produces glume-covered kernels on the tassel, often borne on 7-10 mm pedicels. Vigorous plants produce up to 100 such kernels interspersed with additional sessile kernels. This floral unit provides a potentially valuable experimental system for studying sugar uptake into developing maize seeds. When detached kernels (with glumes and pedicel intact) are placed in an incubation solution, fluid flows up the pedicel and into the glumes, entering the pedicel apoplast near the kernel base. The unusual anatomical features of this maize strain permit experimental access to the pedicel apoplast with much less possibility of kernel base tissue damage than with kernels excised from the cob. ({sup 14}C)Fructose incorporation into soluble and insoluble fractions of the endosperm increased for 8 days. Endosperm uptake of sucrose, fructose, and D-glucose was significantly greater than that of L-glucose. Fructose uptake was significantly inhibited by CCCP, DNP, and PCMBS. These results suggest the presence of an active, non-diffusion component of sugar transport in maize kernels.

  9. Evidence-based Kernels: Fundamental Units of Behavioral Influence

    PubMed Central

    Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600

  10. High speed sorting of Fusarium-damaged wheat kernels

    USDA-ARS?s Scientific Manuscript database

    Recent studies have found that resistance to Fusarium fungal infection can be inherited in wheat from one generation to another. However, there is not yet available a cost effective method to separate Fusarium-damaged wheat kernels from undamaged kernels so that wheat breeders can take advantage of...

  11. Comparison of Kernel Equating and Item Response Theory Equating Methods

    ERIC Educational Resources Information Center

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  12. Integral Transform Methods: A Critical Review of Various Kernels

    NASA Astrophysics Data System (ADS)

    Orlandini, Giuseppina; Turro, Francesco

    2017-03-01

    Some general remarks about integral transform approaches to response functions are made. Their advantage for calculating cross sections at energies in the continuum is stressed. In particular we discuss the class of kernels that allow calculations of the transform by matrix diagonalization. A particular set of such kernels, namely the wavelets, is tested in a model study.

  13. Evidence-Based Kernels: Fundamental Units of Behavioral Influence

    ERIC Educational Resources Information Center

    Embry, Dennis D.; Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of…

  15. Computing the roots of complex orthogonal and kernel polynomials

    SciTech Connect

    Saylor, P.E.; Smolarski, D.C.

    1988-01-01

    A method is presented to compute the roots of complex orthogonal and kernel polynomials. An important application of complex kernel polynomials is the acceleration of iterative methods for the solution of nonsymmetric linear equations. In the real case, the roots of orthogonal polynomials coincide with the eigenvalues of the Jacobi matrix, a symmetric tridiagonal matrix obtained from the defining three-term recurrence relationship for the orthogonal polynomials. In the real case kernel polynomials are orthogonal. The Stieltjes procedure is an algorithm to compute the roots of orthogonal and kernel polynomials based on these facts. In the complex case, the Jacobi matrix generalizes to a Hessenberg matrix, the eigenvalues of which are roots of either orthogonal or kernel polynomials. The resulting algorithm generalizes the Stieltjes procedure. It may not be defined in the case of kernel polynomials, a consequence of the fact that they are orthogonal with respect to a nonpositive bilinear form. (Another consequence is that kernel polynomials need not be of exact degree.) A second algorithm that is always defined is presented for kernel polynomials. Numerical examples are described.
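
    As a minimal numerical sketch of the real, symmetric case described above (not the complex Hessenberg algorithm developed in the paper), the roots of a real orthogonal polynomial can be obtained as the eigenvalues of the Jacobi matrix assembled from its three-term recurrence; the degree and the Hermite family below are arbitrary choices for illustration.

```python
import numpy as np

# Roots of the degree-n (physicists') Hermite polynomial as eigenvalues of the
# symmetric tridiagonal Jacobi matrix built from the monic three-term recurrence
#   p_{k+1}(x) = x p_k(x) - (k/2) p_{k-1}(x)   =>   a_k = 0, b_k = k/2.
n = 8
b = np.sqrt(np.arange(1, n) / 2.0)       # off-diagonal entries sqrt(b_k)
J = np.diag(b, 1) + np.diag(b, -1)       # Jacobi matrix (diagonal a_k = 0)
roots = np.linalg.eigvalsh(J)

# Cross-check against the Gauss-Hermite nodes supplied by NumPy.
nodes, _ = np.polynomial.hermite.hermgauss(n)
print(np.allclose(np.sort(roots), np.sort(nodes)))   # True
```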

  16. PROPERTIES OF A SOLAR FLARE KERNEL OBSERVED BY HINODE AND SDO

    SciTech Connect

    Young, P. R.; Doschek, G. A.; Warren, H. P.; Hara, H.

    2013-04-01

    Flare kernels are compact features located in the solar chromosphere that are the sites of rapid heating and plasma upflow during the rise phase of flares. An example is presented from an M1.1 class flare in active region AR 11158 observed on 2011 February 16 07:44 UT for which the location of the upflow region seen by the EUV Imaging Spectrometer (EIS) can be precisely aligned to high spatial resolution images obtained by the Atmospheric Imaging Assembly (AIA) and Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO). A string of bright flare kernels is found to be aligned with a ridge of strong magnetic field, and one kernel site is highlighted for which an upflow speed of ≈400 km s⁻¹ is measured in lines formed at 10-30 MK. The line-of-sight magnetic field strength at this location is ≈1000 G. Emission over a continuous range of temperatures down to the chromosphere is found, and the kernels have a similar morphology at all temperatures and are spatially coincident with sizes at the resolution limit of the AIA instrument (≲400 km). For temperatures of 0.3-3.0 MK the EIS emission lines show multiple velocity components, with the dominant component becoming more blueshifted with temperature from a redshift of 35 km s⁻¹ at 0.3 MK to a blueshift of 60 km s⁻¹ at 3.0 MK. Emission lines from 1.5-3.0 MK show a weak redshifted component at around 60-70 km s⁻¹ implying multi-directional flows at the kernel site. Significant non-thermal broadening corresponding to velocities of ≈120 km s⁻¹ is found at 10-30 MK, and the electron density in the kernel, measured at 2 MK, is 3.4 × 10¹⁰ cm⁻³. Finally, the Fe XXIV λ192.03/λ255.11 ratio suggests that the EIS calibration has changed since launch, with the long wavelength channel less sensitive than the short wavelength channel by around a factor of two.

  17. Water-heating dehumidifier

    DOEpatents

    Tomlinson, John J.

    2006-04-18

    A water-heating dehumidifier includes a refrigerant loop including a compressor, at least one condenser, an expansion device and an evaporator including an evaporator fan. The condenser includes a water inlet and a water outlet for flowing water therethrough or proximate thereto, or is affixed to the tank or immersed into the tank to effect water heating without flowing water. The immersed condenser design includes a self-insulated capillary tube expansion device for simplicity and high efficiency. In a water heating mode air is drawn by the evaporator fan across the evaporator to produce cooled and dehumidified air, and heat taken from the air is absorbed by the refrigerant at the evaporator and is pumped to the condenser, where water is heated. When the tank of the water heater is full of hot water or a humidistat set point is reached, the water-heating dehumidifier can switch to run as a dehumidifier.

  18. Giant negative thermal expansion in magnetic nanocrystals.

    PubMed

    Zheng, X G; Kubozono, H; Yamada, H; Kato, K; Ishiwata, Y; Xu, C N

    2008-12-01

    Most solids expand when they are heated, but a property known as negative thermal expansion has been observed in a number of materials, including the oxide ZrW2O8 (ref. 1) and the framework material ZnxCd1-x(CN)2 (refs 2,3). This unusual behaviour can be understood in terms of low-energy phonons, while the colossal values of both positive and negative thermal expansion recently observed in another framework material, Ag3[Co(CN)6], have been explained in terms of the geometric flexibility of its metal-cyanide-metal linkages. Thermal expansion can also be stopped in some magnetic transition metal alloys below their magnetic ordering temperature, a phenomenon known as the Invar effect, and the possibility of exploiting materials with tuneable positive or negative thermal expansion in industrial applications has led to intense interest in both the Invar effect and negative thermal expansion. Here we report the results of thermal expansion experiments on three magnetic nanocrystals (CuO, MnF2 and NiO) and find evidence for negative thermal expansion in both CuO and MnF2 below their magnetic ordering temperatures, but not in NiO. Larger particles of CuO and MnF2 also show prominent magnetostriction (that is, they change shape in response to an applied magnetic field), which results in significantly reduced thermal expansion below their magnetic ordering temperatures; this behaviour is not observed in NiO. We propose that the negative thermal expansion effect in CuO (which is four times larger than that observed in ZrW2O8) and MnF2 is a general property of nanoparticles in which there is strong coupling between magnetism and the crystal lattice.

  19. Building kernels from binary strings for image matching.

    PubMed

    Odone, Francesca; Barla, Annalisa; Verri, Alessandro

    2005-02-01

    In the statistical learning framework, the use of appropriate kernels may be the key for substantial improvement in solving a given problem. In essence, a kernel is a similarity measure between input points satisfying some mathematical requirements and possibly capturing the domain knowledge. In this paper, we focus on kernels for images: we represent the image information content with binary strings and discuss various bitwise manipulations obtained using logical operators and convolution with nonbinary stencils. In the theoretical contribution of our work, we show that histogram intersection is a Mercer's kernel and we determine the modifications under which a similarity measure based on the notion of Hausdorff distance is also a Mercer's kernel. In both cases, we determine explicitly the mapping from input to feature space. The presented experimental results support the relevance of our analysis for developing effective trainable systems.
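
    A minimal sketch of the histogram intersection kernel discussed above, assuming histograms are supplied as fixed-length vectors; the positive semidefiniteness check at the end simply illustrates the Mercer property numerically on toy data.

```python
import numpy as np

def histogram_intersection_kernel(H1, H2):
    """Histogram intersection k(h, h') = sum_i min(h_i, h'_i) (a Mercer kernel).

    H1 : (n1, d) array of histograms, H2 : (n2, d) array of histograms.
    Returns the (n1, n2) Gram matrix, usable as a precomputed or callable
    kernel in a kernel-based classifier such as an SVM."""
    return np.minimum(H1[:, None, :], H2[None, :, :]).sum(axis=2)

# Toy check on random 16-bin histograms: the Gram matrix is symmetric and PSD.
rng = np.random.default_rng(1)
H = rng.random((10, 16))
K = histogram_intersection_kernel(H, H)
print(np.allclose(K, K.T), np.linalg.eigvalsh(K).min() >= -1e-10)
```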

  20. OSKI: A Library of Automatically Tuned Sparse Matrix Kernels

    SciTech Connect

    Vuduc, R; Demmel, J W; Yelick, K A

    2005-07-19

    The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.
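
    For context, the reference semantics of the central OSKI kernel, sparse matrix-vector multiply in compressed sparse row (CSR) form, can be sketched as below; OSKI's contribution is to replace this plain loop with automatically tuned (e.g. register-blocked) variants chosen for the user's matrix and machine, which this sketch does not attempt.

```python
import numpy as np

def csr_matvec(indptr, indices, data, x):
    """Reference y = A @ x for a sparse matrix A stored in CSR form."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        start, end = indptr[i], indptr[i + 1]       # nonzeros of row i
        y[i] = np.dot(data[start:end], x[indices[start:end]])
    return y

# 3x3 example: [[4, 0, 1], [0, 2, 0], [3, 0, 5]]
indptr  = np.array([0, 2, 3, 5])
indices = np.array([0, 2, 1, 0, 2])
data    = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
print(csr_matvec(indptr, indices, data, np.array([1.0, 1.0, 1.0])))  # [5. 2. 8.]
```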

  1. Direct Measurement of Wave Kernels in Time-Distance Helioseismology

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.

    2006-01-01

    Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains similar features to a theoretical damping kernel but not for a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.

  3. A Robustness Testing Campaign for IMA-SP Partitioning Kernels

    NASA Astrophysics Data System (ADS)

    Grixti, Stephen; Lopez Trecastro, Jorge; Sammut, Nicholas; Zammit-Mangion, David

    2015-09-01

    With time and space partitioned architectures becoming increasingly appealing to the European space sector, the dependability of partitioning kernel technology is a key factor to its applicability in European Space Agency projects. This paper explores the potential of the data type fault model, which injects faults through the Application Program Interface, in partitioning kernel robustness testing. This fault injection methodology has been tailored to investigate its relevance in uncovering vulnerabilities within partitioning kernels and potentially contributing towards fault removal campaigns within this domain. This is demonstrated through a robustness testing case study of the XtratuM partitioning kernel for SPARC LEON3 processors. The robustness campaign exposed a number of vulnerabilities in XtratuM, exhibiting the potential benefits of using such a methodology for the robustness assessment of partitioning kernels.

  4. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
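
    A minimal sketch of the kernelized ML-EM idea described above, under simplifying assumptions (a dense system matrix A, no randoms or scatter, and a kernel matrix K already built from anatomical features); the update runs on the kernel coefficients and the image is recovered as x = K·α.

```python
import numpy as np

def kernel_mlem(A, K, y, n_iter=50, eps=1e-12):
    """ML-EM on kernel coefficients alpha, with image x = K @ alpha.

    A : (n_bins, n_pix) system matrix, K : (n_pix, n_pix) anatomical kernel
    matrix, y : (n_bins,) measured counts.  Simplified sketch: no randoms,
    scatter, or attenuation terms."""
    F = A @ K                        # effective forward model acting on alpha
    sens = F.T @ np.ones_like(y)     # sensitivity (normalization) term
    alpha = np.ones(K.shape[1])
    for _ in range(n_iter):
        ybar = F @ alpha + eps       # expected counts
        alpha *= (F.T @ (y / ybar)) / (sens + eps)
    return K @ alpha                 # reconstructed image
```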

  5. An edge-adapting Laplacian kernel for nonlinear diffusion filters.

    PubMed

    Hajiaboli, Mohammad Reza; Ahmad, M Omair; Wang, Chunyan

    2012-04-01

    In this paper, first, a new Laplacian kernel is developed to integrate into it the anisotropic behavior to control the process of forward diffusion in horizontal and vertical directions. It is shown that, although the new kernel reduces the process of edge distortion, it nonetheless produces artifacts in the processed image. After examining the source of this problem, an analytical scheme is devised to obtain a spatially varying kernel that adapts itself to the diffusivity function. The proposed spatially varying Laplacian kernel is then used in various nonlinear diffusion filters starting from the classical Perona-Malik filter to the more recent ones. The effectiveness of the new kernel in terms of quantitative and qualitative measures is demonstrated by applying it to noisy images.
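
    To situate the proposal, one explicit step of the classical Perona-Malik filter, the baseline into which the paper's spatially varying Laplacian kernel is substituted, can be written with the standard fixed four-neighbour stencil; this sketch uses that fixed stencil, not the edge-adapting kernel derived in the paper.

```python
import numpy as np

def perona_malik_step(u, kappa=10.0, dt=0.2):
    """One explicit Perona-Malik update on image u with the fixed 4-neighbour
    stencil (periodic boundaries via np.roll, for brevity).

    Diffusivity g(d) = 1 / (1 + (d / kappa)^2) suppresses diffusion across
    strong edges; dt <= 0.25 keeps the explicit scheme stable."""
    n = np.roll(u, -1, axis=0) - u
    s = np.roll(u, 1, axis=0) - u
    e = np.roll(u, -1, axis=1) - u
    w = np.roll(u, 1, axis=1) - u
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    return u + dt * (g(np.abs(n)) * n + g(np.abs(s)) * s +
                     g(np.abs(e)) * e + g(np.abs(w)) * w)
```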

  6. Learning kernels from biological networks by maximizing entropy.

    PubMed

    Tsuda, Koji; Noble, William Stafford

    2004-08-04

    The diffusion kernel is a general method for computing pairwise distances among all nodes in a graph, based on the sum of weighted paths between each pair of nodes. This technique has been used successfully, in conjunction with kernel-based learning methods, to draw inferences from several types of biological networks. We show that computing the diffusion kernel is equivalent to maximizing the von Neumann entropy, subject to a global constraint on the sum of the Euclidean distances between nodes. This global constraint allows for high variance in the pairwise distances. Accordingly, we propose an alternative, locally constrained diffusion kernel, and we demonstrate that the resulting kernel allows for more accurate support vector machine prediction of protein functional classifications from metabolic and protein-protein interaction networks. Supplementary results and data are available at noble.gs.washington.edu/proj/maxent
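
    For reference, the diffusion kernel referred to above is the matrix exponential of (the negative of) the graph Laplacian; a small sketch with SciPy, using an arbitrary toy graph and bandwidth.

```python
import numpy as np
from scipy.linalg import expm

# Small undirected graph given by its adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A      # graph Laplacian L = D - A
beta = 0.5                          # diffusion time / bandwidth
K = expm(-beta * L)                 # diffusion kernel

# K is symmetric positive definite, so it is a valid kernel Gram matrix.
print(np.allclose(K, K.T), np.linalg.eigvalsh(K).min() > 0)
```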

  7. OSKI: A library of automatically tuned sparse matrix kernels

    NASA Astrophysics Data System (ADS)

    Vuduc, Richard; Demmel, James W.; Yelick, Katherine A.

    2005-01-01

    The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.

  8. Triso coating development progress for uranium nitride kernels

    SciTech Connect

    Jolly, Brian C.; Lindemer, Terrence; Terrani, Kurt A.

    2015-08-01

    In support of fully ceramic matrix (FCM) fuel development [1-2], coating development work is ongoing at the Oak Ridge National Laboratory (ORNL) to produce tri-structural isotropic (TRISO) coated fuel particles with UN kernels [3]. The nitride kernels are used to increase fissile density in these SiC-matrix fuel pellets, with details described elsewhere [4]. The advanced gas reactor (AGR) program at ORNL used fluidized bed chemical vapor deposition (FBCVD) techniques for TRISO coating of UCO (two-phase mixture of UO2 and UCx) kernels [5]. Similar techniques were employed for coating of the UN kernels; however, significant changes in processing conditions were required to maintain acceptable coating properties due to physical property and dimensional differences between the UCO and UN kernels (Table 1).

  9. A novel extended kernel recursive least squares algorithm.

    PubMed

    Zhu, Pingping; Chen, Badong; Príncipe, José C

    2012-08-01

    In this paper, a novel extended kernel recursive least squares algorithm is proposed combining the kernel recursive least squares algorithm and the Kalman filter or its extensions to estimate or predict signals. Unlike the extended kernel recursive least squares (Ex-KRLS) algorithm proposed by Liu, the state model of our algorithm is still constructed in the original state space and the hidden state is estimated using the Kalman filter. The measurement model used in hidden state estimation is learned by the kernel recursive least squares algorithm (KRLS) in reproducing kernel Hilbert space (RKHS). The novel algorithm has more flexible state and noise models. We apply this algorithm to vehicle tracking and the nonlinear Rayleigh fading channel tracking, and compare the tracking performances with other existing algorithms.
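
    The following is a bare-bones sketch of kernel regularized least squares updated online in a reproducing kernel Hilbert space, the ingredient that the proposed algorithm combines with a Kalman filter; it is not the Ex-KRLS of Liu nor the paper's extended algorithm, and it re-solves the full system at every step rather than updating recursively or sparsifying the dictionary.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

class NaiveOnlineKRLS:
    """Naive online kernel regularized least squares: each update re-solves
    (K + lam*I) alpha = y over all samples seen so far (no sparsification)."""
    def __init__(self, lam=1e-2, gamma=1.0):
        self.lam, self.gamma = lam, gamma
        self.X, self.y, self.alpha = [], [], None

    def update(self, x, target):
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(float(target))
        K = np.array([[rbf(a, b, self.gamma) for b in self.X] for a in self.X])
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(self.X)),
                                     np.array(self.y))

    def predict(self, x):
        k = np.array([rbf(x, c, self.gamma) for c in self.X])
        return float(k @ self.alpha)
```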

  10. Preliminary thermal expansion screening data for tuffs

    SciTech Connect

    Lappin, A.R.

    1980-03-01

    A major variable in evaluating the potential of silicic tuffs for use in geologic disposal of heat-producing nuclear wastes is thermal expansion. Results of ambient-pressure linear expansion measurements on a group of tuffs that vary greatly in porosity and mineralogy are presented here. Thermal expansion of devitrified welded tuffs is generally linear with increasing temperature and independent of both porosity and heating rate. Mineralogic factors affecting behavior of these tuffs are limited to the presence or absence of cristobalite and altered biotite. The presence of cristobalite results in markedly nonlinear expansion above 200 °C. If biotite in biotite-bearing rocks alters even slightly to expandable clays, the behavior of these tuffs near the boiling point of water can be dominated by contraction of the expandable phase. Expansion of both high- and low-porosity tuffs containing hydrated silicic glass and/or expandable clays is complex. The behavior of these rocks appears to be completely dominated by dehydration of hydrous phases and, hence, should be critically dependent on fluid pressure. Valid extrapolation of the ambient-pressure results presented here to depths of interest for construction of a nuclear-waste repository will depend on a good understanding of the interaction of dehydration rates and fluid pressures, and of the effects of both micro- and macrofractures on the response of tuff masses.

  11. Kernel descriptors for chest x-ray analysis

    NASA Astrophysics Data System (ADS)

    Orbán, Gergely Gy.; Horváth, Gábor

    2017-03-01

    In this study, we address the problem of lesion classification in radiographic scans. We adapt image kernel functions to be applicable to high-resolution, grayscale images to improve the classification accuracy of a support vector machine. We take existing kernel functions inspired by the histogram of oriented gradients, and derive an approximation that can be evaluated in linear time in the image size instead of the original quadratic complexity, enabling high-resolution input. Moreover, we propose a new variant inspired by the matched filter, to better utilize intensity space. The new kernels are improved to be scale-invariant and combined with a Gaussian kernel built from handcrafted image features. We introduce a simple multiple kernel learning framework that is robust when one of the kernels, in the current case the image feature kernel, dominates the others. The combined kernel is input to a support vector classifier. We tested our method on lesion classification both in chest radiographs and digital tomosynthesis scans. The radiographs originated from a database including 364 patients with lung nodules and 150 healthy cases. The digital tomosynthesis scans were obtained by simulation using 91 CT scans from the LIDC-IDRI database as input. The new kernels showed good separation capability: ROC AuC was in [0.827, 0.853] for the radiograph database and 0.763 for the tomosynthesis scans. Adding the new kernels to the image-feature-based classifier significantly improved accuracy: AuC increased from 0.958 to 0.967 and from 0.788 to 0.801 for the two applications.

  12. 3-D sensitivity kernels of the Rayleigh wave ellipticity

    NASA Astrophysics Data System (ADS)

    Maupin, Valérie

    2017-10-01

    The ellipticity of the Rayleigh wave at the surface depends on the seismic structure beneath and in the vicinity of the seismological station where it is measured. We derive here the expression and compute the 3-D kernels that describe this dependence with respect to S-wave velocity, P-wave velocity and density. Near-field terms as well as coupling to Love waves are included in the expressions. We show that the ellipticity kernels are the difference between the amplitude kernels of the radial and vertical components of motion. They show maximum values close to the station, but with a complex pattern, even when smoothing in a finite-frequency range is used to remove the oscillatory pattern present in mono-frequency kernels. In order to follow the usual data processing flow, we also compute and analyse the kernels of the ellipticity averaged over incoming wave backazimuth. The kernel with respect to P-wave velocity has the simplest lateral variation and is in good agreement with commonly used 1-D kernels. The kernels with respect to S-wave velocity and density are more complex and we have not been able to find a good correlation between the 3-D and 1-D kernels. Although it is clear that the ellipticity is mostly sensitive to the structure within half a wavelength of the station, the complexity of the kernels within this zone prevents simple approximations, such as a depth dependence times a lateral variation, from being useful in the inversion of the ellipticity.

  13. Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.

    PubMed

    Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan

    2016-11-01

    In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat ( L.) and maize ( L.) data sets. For single-environment analyses of wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% superiority over the corresponding single environment for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, more complex marker main effects and marker-specific interaction effects.
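
    As a concrete illustration of the nonlinear kernels discussed above, a Gaussian genomic kernel can be built directly from the marker matrix; the distance scaling and the example bandwidths below are common choices for illustration, not the paper's exact settings.

```python
import numpy as np

def gaussian_genomic_kernel(M, h):
    """Gaussian (RBF) kernel K_ij = exp(-h * d_ij^2) from a marker matrix M
    of shape (n_lines, n_markers); squared distances are scaled by their mean
    so that h on the order of 1 is a reasonable starting bandwidth."""
    sq = np.sum(M ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * (M @ M.T), 0.0)
    d2 /= d2.mean()
    return np.exp(-h * d2)

# Kernel averaging (KA) then amounts to supplying several bandwidths at once,
# e.g. [gaussian_genomic_kernel(M, h) for h in (0.2, 1.0, 5.0)], and letting
# the model weight them; the single-h version corresponds to an estimated
# (e.g. empirical-Bayes) bandwidth.
```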

  14. Image quality of mixed convolution kernel in thoracic computed tomography

    PubMed Central

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-01-01

    Abstract The mixed convolution kernel alters its properties geographically according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernel. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman Test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium sized pulmonary vessels and abdomen (P < 0.004) but a lower image quality for trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT. PMID:27858910

  15. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties geographically according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernel. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman Test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium sized pulmonary vessels and abdomen (P < 0.004) but a lower image quality for trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  16. Oil extraction from sheanut (Vitellaria paradoxa Gaertn C.F.) kernels assisted by microwaves.

    PubMed

    Nde, Divine B; Boldor, Dorin; Astete, Carlos; Muley, Pranjali; Xu, Zhimin

    2016-03-01

    Shea butter is in high demand in cosmetics, pharmaceuticals, chocolates and biodiesel formulations. Microwave assisted extraction (MAE) of butter from sheanut kernels was carried out using the Doehlert experimental design. Factors studied were microwave heating time, temperature and solvent/solute ratio, while the responses were the quantity of oil extracted and the acid number. Second-order models were established to describe the influence of experimental parameters on the responses studied. Under optimum MAE conditions of heating time 23 min, temperature 75 °C and solvent/solute ratio 4:1, more than 88 % of the oil, with a free fatty acid (FFA) value less than 2, was extracted, compared to the 10 h and solvent/solute ratio of 10:1 required for Soxhlet extraction. Scanning electron microscopy was used to elucidate the effect of microwave heating on the kernels' microstructure. Substantial reduction in extraction time and volumes of solvent used, and oil of suitable quality, are the main benefits derived from the MAE process.

  17. A visualization tool for the kernel-driven model with improved ability in data analysis and kernel assessment

    NASA Astrophysics Data System (ADS)

    Dong, Yadong; Jiao, Ziti; Zhang, Hu; Bai, Dongni; Zhang, Xiaoning; Li, Yang; He, Dandan

    2016-10-01

    The semi-empirical, kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model has been widely used for many aspects of remote sensing. With the development of the kernel-driven model, there is a need to further assess the performance of newly developed kernels. The use of visualization tools can facilitate the analysis of model results and the assessment of newly developed kernels. However, the current version of the kernel-driven model does not contain a visualization function. In this study, a user-friendly visualization tool, named MaKeMAT, was developed specifically for the kernel-driven model. The POLDER-3 and CAR BRDF datasets were used to demonstrate the applicability of MaKeMAT. The visualization of inputted multi-angle measurements enhances understanding of multi-angle measurements and allows the choice of measurements with good representativeness. The visualization of modeling results facilitates the assessment of newly developed kernels. The study shows that the visualization tool MaKeMAT can promote the widespread application of the kernel-driven model.
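
    For context, the kernel-driven BRDF model visualized by the tool is linear in its kernel weights, R ≈ f_iso + f_vol*K_vol + f_geo*K_geo, so fitting the weights to multi-angle reflectances is an ordinary least-squares problem once the kernel values are known; in the sketch below the kernel values (e.g. RossThick and LiSparse) are assumed to be precomputed elsewhere from the sun/view geometry.

```python
import numpy as np

def fit_kernel_weights(refl, k_vol, k_geo):
    """Least-squares fit of (f_iso, f_vol, f_geo) in the linear kernel-driven
    BRDF model  refl ~ f_iso + f_vol * k_vol + f_geo * k_geo.

    refl, k_vol, k_geo : 1-D arrays over the multi-angle observations; the
    kernel values themselves are assumed to be computed separately."""
    X = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    weights, *_ = np.linalg.lstsq(X, refl, rcond=None)
    return weights  # (f_iso, f_vol, f_geo)
```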

  18. Air expansion in a water rocket

    NASA Astrophysics Data System (ADS)

    Romanelli, Alejandro; Bove, Italo; González Madina, Federico

    2013-10-01

    We study the thermodynamics of a water rocket in the thrust phase, taking into account the expansion of the air with water vapor, vapor condensation, and the corresponding latent heat. We set up a simple experimental device with a stationary bottle and verify that the gas expansion in the bottle is well approximated by a polytropic process PV^β = constant, where the parameter β depends on the initial conditions. We find an analytical expression for β that depends only on the thermodynamic initial conditions and is in good agreement with the experimental results.
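
    As a small worked example of the polytropic relation PV^β = constant, the exponent β can be recovered from pressure-volume samples by a straight-line fit in log-log coordinates, since log P = log C - β log V; the data below are synthetic, not the paper's measurements.

```python
import numpy as np

# Synthetic P-V data obeying P * V**beta = C (here beta = 1.25, C = 2.0e5).
beta_true, C = 1.25, 2.0e5
V = np.linspace(1.0e-3, 2.0e-3, 20)      # volumes, m^3
P = C * V ** (-beta_true)                # pressures, Pa

# log P = log C - beta * log V, so the slope of the fit is -beta.
slope, intercept = np.polyfit(np.log(V), np.log(P), 1)
print(-slope)    # ~1.25, recovering the polytropic exponent
```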

  19. Privacy preserving RBF kernel support vector machine.

    PubMed

    Li, Haoran; Xiong, Li; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2014-01-01

    Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed but they did not consider the characteristics of biomedical data and make full use of the available information. This often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid privacy-preserving differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs that are built from private data. Our method demonstrated very close performance metrics compared to nonprivate SVMs trained on the private data.

  20. Privacy Preserving RBF Kernel Support Vector Machine

    PubMed Central

    Xiong, Li; Ohno-Machado, Lucila

    2014-01-01

    Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed but they did not consider the characteristics of biomedical data and make full use of the available information. This often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid privacy-preserving differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs that are built from private data. Our method demonstrated very close performance metrics compared to nonprivate SVMs trained on the private data. PMID:25013805

  1. On the Kernelization Complexity of Colorful Motifs

    NASA Astrophysics Data System (ADS)

    Ambalath, Abhimanyu M.; Balasundaram, Radheshyam; Rao H., Chintan; Koppula, Venkata; Misra, Neeldhara; Philip, Geevarghese; Ramanujan, M. S.

    The Colorful Motif problem asks if, given a vertex-colored graph G, there exists a subset S of vertices of G such that the graph induced by G on S is connected and contains every color in the graph exactly once. The problem is motivated by applications in computational biology and is also well-studied from the theoretical point of view. In particular, it is known to be NP-complete even on trees of maximum degree three [Fellows et al, ICALP 2007]. In their pioneering paper that introduced the color-coding technique, Alon et al. [STOC 1995] show, inter alia, that the problem is FPT on general graphs. More recently, Cygan et al. [WG 2010] showed that Colorful Motif is NP-complete on comb graphs, a special subclass of the set of trees of maximum degree three. They also showed that the problem is not likely to admit polynomial kernels on forests.

  2. Kernel density estimation using graphical processing unit

    NASA Astrophysics Data System (ADS)

    Sunarko, Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution, by assigning the calculation for each equally-spaced node point to a scalar processor in the GPU. The number of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
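
    For reference, the quantity being parallelized is a Gaussian kernel density estimate evaluated at a grid of node points (one node per GPU thread in the paper's CUDA-C setup); a plain NumPy sketch of the same computation, with arbitrary toy parameters, is given below.

```python
import numpy as np

def kde2d_on_grid(points, grid_xy, bandwidth):
    """Gaussian KDE for 2-D particles evaluated at the given grid nodes.

    points  : (N, 2) particle positions
    grid_xy : (M, 2) node positions (e.g. a flattened regular grid)
    Returns an (M,) array of density estimates."""
    N = len(points)
    h2 = bandwidth ** 2
    norm = 1.0 / (N * 2.0 * np.pi * h2)
    # (M, N) squared distances between every node and every particle
    d2 = ((grid_xy[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return norm * np.exp(-0.5 * d2 / h2).sum(axis=1)

# Example: bivariate-normal particles, 32x32 grid of evaluation nodes.
rng = np.random.default_rng(0)
pts = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=2000)
gx, gy = np.meshgrid(np.linspace(-3, 3, 32), np.linspace(-3, 3, 32))
density = kde2d_on_grid(pts, np.column_stack([gx.ravel(), gy.ravel()]), 0.3)
```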

  3. Context quantization by kernel Fisher discriminant.

    PubMed

    Xu, Mantao; Wu, Xiaolin; Fränti, Pasi

    2006-01-01

    Optimal context quantizers for minimum conditional entropy can be constructed by dynamic programming in the probability simplex space. The main difficulty, operationally, is the resulting complex quantizer mapping function in the context space, in which the conditional entropy coding is conducted. To overcome this difficulty, we propose new algorithms for designing context quantizers in the context space based on the multiclass Fisher discriminant and the kernel Fisher discriminant (KFD). In particular, the KFD can describe linearly nonseparable quantizer cells by projecting input context vectors onto a high-dimensional curve, in which these cells become better separable. The new algorithms outperform the previous linear Fisher discriminant method for context quantization. They approach the minimum empirical conditional entropy context quantizer designed in the probability simplex space, but with a practical implementation that employs a simple scalar quantizer mapping function rather than a large lookup table.

  4. Learning molecular energies using localized graph kernels.

    PubMed

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-21

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
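
    A minimal sketch of a geometric random-walk kernel between two graphs via the direct-product (Kronecker) adjacency matrix; this is a simplification of the GRAPE construction, which additionally encodes atomic species and interatomic distances in the adjacency matrices.

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    """Geometric random-walk kernel between two graphs with adjacency matrices
    A1, A2:  k = 1^T (I - lam * Ax)^{-1} 1,  with  Ax = kron(A1, A2).

    lam must be smaller than 1 / spectral_radius(Ax) for the underlying
    geometric series over walk lengths to converge."""
    Ax = np.kron(A1, A2)
    n = Ax.shape[0]
    ones = np.ones(n)
    return float(ones @ np.linalg.solve(np.eye(n) - lam * Ax, ones))

# Two small toy "environments": a triangle and a 3-node path graph.
A_tri  = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
A_path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(random_walk_kernel(A_tri, A_tri), random_walk_kernel(A_tri, A_path))
```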

  5. Learning molecular energies using localized graph kernels

    NASA Astrophysics Data System (ADS)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.

  6. Development of low-expansion ceramics for diesel engine applications

    SciTech Connect

    Brown, J.J. Jr. . Center for Advanced Ceramic Materials)

    1992-04-01

    The need for stable, fabricable, low-thermal-expansion ceramics for use in advanced heat engines was first recognized in the Department of Energy Advanced Gas Turbine (AGT) technology programs. More recently, the need for ceramic materials having low thermal expansion for use in components of advanced low heat rejection diesel engines has also been recognized. This investigation concentrated on (1) synthesis, (2) property characterization, and (3) fabrication of candidate low thermal expansion ceramics from four systems based upon aluminum phosphate, silica, mullite, and zircon. The NZP (zircon - NaZr2(PO4)3) structures clearly represent a new class of high-melting, thermal shock-resistant ceramics.

  7. Labeled Graph Kernel for Behavior Analysis

    PubMed Central

    Zhao, Ruiqi; Martinez, Aleix M.

    2016-01-01

    Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data. PMID:26415154

  8. Labeled Graph Kernel for Behavior Analysis.

    PubMed

    Zhao, Ruiqi; Martinez, Aleix M

    2016-08-01

    Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data.

  9. Computed tomography coronary stent imaging with iterative reconstruction: a trade-off study between medium kernel and sharp kernel.

    PubMed

    Zhou, Qijing; Jiang, Biao; Dong, Fei; Huang, Peiyu; Liu, Hongtao; Zhang, Minming

    2014-01-01

    To evaluate the improvement offered by the iterative reconstruction in image space (IRIS) technique in computed tomographic (CT) coronary stent imaging with a sharp kernel, and to make a trade-off analysis. Fifty-six patients with 105 stents were examined by 128-slice dual-source CT coronary angiography (CTCA). Images were reconstructed using standard filtered back projection (FBP) and IRIS, with both medium and sharp kernels applied. Image noise and the stent diameter were investigated. Image noise was measured both in the background vessel and in the in-stent lumen as objective image evaluation. An image noise score and a stent score were used as subjective image evaluation. The CTCA images reconstructed with IRIS showed significant noise reduction compared with the CTCA images reconstructed using the FBP technique, in both the background vessel and the in-stent lumen (the background noise decreased by approximately 25.4% ± 8.2% with the medium kernel). Images reconstructed with the sharp kernel showed better visualization of the stent struts and in-stent lumen than those with the medium kernel. Iterative reconstruction in image space can effectively reduce image noise and improve image quality. The sharp kernel images reconstructed with iterative reconstruction are considered the optimal images for observing coronary stents in this study.

  10. Kolkhoung (Pistacia khinjuk) Hull Oil and Kernel Oil as Antioxidative Vegetable Oils with High Oxidative Stability 
and Nutritional Value

    PubMed Central

    Asnaashari, Maryam; Mehr, Hamed Mahdavian; Yousefabad, Seyed Hossein Asadi

    2015-01-01

    Summary In this study, in order to introduce a natural antioxidative vegetable oil into the food industry, kolkhoung hull oil and kernel oil were extracted. To evaluate their antioxidant efficiency, gas chromatography analysis of the composition of kolkhoung hull and kernel oil fatty acids and high-performance liquid chromatography analysis of tocopherols were done. Also, the oxidative stability of the oil was considered based on the peroxide value and anisidine value during heating at 100, 110 and 120 °C. Gas chromatography analysis showed that oleic acid was the major fatty acid of both types of oil (hull and kernel) and, based on a low content of saturated fatty acids, a high content of monounsaturated fatty acids, and the ratio of ω-6 and ω-3 polyunsaturated fatty acids, they were nutritionally well-balanced. Moreover, both hull and kernel oil showed high oxidative stability during heating, which can be attributed to a high content of tocotrienols. Based on the results, kolkhoung hull oil acted slightly better than its kernel oil. However, both of them can be added to oxidation-sensitive oils to improve their shelf life. PMID:27904335

  11. Kolkhoung (Pistacia khinjuk) Hull Oil and Kernel Oil as Antioxidative Vegetable Oils with High Oxidative Stability 
and Nutritional Value.

    PubMed

    Asnaashari, Maryam; Hashemi, Seyed Mohammad Bagher; Mehr, Hamed Mahdavian; Yousefabad, Seyed Hossein Asadi

    2015-03-01

    In this study, in order to introduce a natural antioxidative vegetable oil into the food industry, kolkhoung hull oil and kernel oil were extracted. To evaluate their antioxidant efficiency, gas chromatography analysis of the composition of kolkhoung hull and kernel oil fatty acids and high-performance liquid chromatography analysis of tocopherols were done. Also, the oxidative stability of the oil was considered based on the peroxide value and anisidine value during heating at 100, 110 and 120 °C. Gas chromatography analysis showed that oleic acid was the major fatty acid of both types of oil (hull and kernel) and, based on a low content of saturated fatty acids, a high content of monounsaturated fatty acids, and the ratio of ω-6 and ω-3 polyunsaturated fatty acids, they were nutritionally well-balanced. Moreover, both hull and kernel oil showed high oxidative stability during heating, which can be attributed to a high content of tocotrienols. Based on the results, kolkhoung hull oil acted slightly better than its kernel oil. However, both of them can be added to oxidation-sensitive oils to improve their shelf life.

  12. Equivalence of kernel machine regression and kernel distance covariance for multidimensional phenotype association studies.

    PubMed

    Hua, Wen-Yu; Ghosh, Debashis

    2015-09-01

    Associating genetic markers with a multidimensional phenotype is an important yet challenging problem. In this work, we establish the equivalence between two popular methods: kernel-machine regression (KMR), and kernel distance covariance (KDC). KMR is a semiparametric regression framework that models covariate effects parametrically and genetic markers non-parametrically, while KDC represents a class of methods that include distance covariance (DC) and the Hilbert-Schmidt independence criterion (HSIC), which are nonparametric tests of independence. We show that the equivalence between the score test of KMR and the KDC statistic under certain conditions can lead to a novel generalization of the KDC test that incorporates covariates. Our contributions are 3-fold: (1) establishing the equivalence between KMR and KDC; (2) showing that the principles of KMR can be applied to the interpretation of KDC; (3) developing a broader class of KDC statistics, where the class members are statistics corresponding to different kernel combinations. Finally, we perform simulation studies and an analysis of real data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. The ADNI analysis suggests that SNPs of FLJ16124 exhibit pairwise interaction effects that are strongly correlated with the changes of brain region volumes. © 2015, The International Biometric Society.
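
    For concreteness, the (biased) HSIC statistic mentioned above is a simple function of the centered Gram matrices of the two variable sets; a sketch with Gaussian kernels and arbitrary bandwidths is given below.

```python
import numpy as np

def gram_rbf(X, gamma=1.0):
    """Gaussian Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * (X @ X.T), 0.0)
    return np.exp(-gamma * d2)

def hsic(X, Y, gamma_x=1.0, gamma_y=1.0):
    """Biased HSIC estimate: trace(K H L H) / (n - 1)^2 with H = I - 11^T/n."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = gram_rbf(X, gamma_x), gram_rbf(Y, gamma_y)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Independent X and Y give values near 0; a dependent Y (e.g. Y = f(X) + noise)
# gives a markedly larger statistic.
```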

  13. Influence of Initial Correlations on Evolution of a Subsystem in a Heat Bath and Polaron Mobility

    NASA Astrophysics Data System (ADS)

    Los, Victor F.

    2017-08-01

    A regular approach to accounting for initial correlations, which allows one to go beyond the unrealistic random phase (initial product state) approximation in deriving the evolution equations, is suggested. Exact homogeneous (time-convolution and time-convolutionless) equations for a relevant part of the two-time equilibrium correlation function for the dynamic variables of a subsystem interacting with a boson field (heat bath) are obtained. No conventional approximation like the RPA or Bogoliubov's principle of weakening of initial correlations is used. The obtained equations take into account the initial correlations in the kernel governing their evolution. The solution to these equations is found in the second order of the kernel expansion in the electron-phonon interaction, which demonstrates that generally the initial correlations influence the correlation function's evolution in time. It is explicitly shown that this influence vanishes on a large timescale (actually at t → ∞) and the evolution process enters an irreversible kinetic regime. The developed approach is applied to the Fröhlich polaron, and the low-temperature polaron mobility (which has long been under debate) is found with a correction due to initial correlations.

  14. Pen Branch delta expansion

    SciTech Connect

    Nelson, E.A.; Christensen, E.J.; Mackey, H.E.; Sharitz, R.R.; Jensen, J.R.; Hodgson, M.E.

    1984-02-01

    Since 1954, cooling water discharges from K Reactor (average flow of 370 cfs at 59 °C) to Pen Branch have altered vegetation and deposited sediment in the Savannah River Swamp, forming the Pen Branch delta. Currently, the delta covers over 300 acres and continues to expand at a rate of about 16 acres/yr. Examination of delta expansion can provide important information on environmental impacts to wetlands exposed to elevated temperature and flow conditions. To assess the current status and predict future expansion of the Pen Branch delta, historic aerial photographs were analyzed using both basic photo interpretation and computer techniques to provide the following information: (1) past and current expansion rates; (2) location and changes of impacted areas; (3) total acreage presently affected. Delta acreage changes were then compared to historic reactor discharge temperature and flow data to see if expansion rate variations could be related to reactor operations.

  15. Weakly relativistic plasma expansion

    SciTech Connect

    Fermous, Rachid; Djebli, Mourad

    2015-04-15

    Plasma expansion is an important physical process that takes place in laser interactions with solid targets. Within a self-similar model for the hydrodynamical multi-fluid equations, we investigated the expansion of both dense and under-dense plasmas. The weakly relativistic electrons are produced by ultra-intense laser pulses, while ions are assumed to be in a non-relativistic regime. Numerical investigations have shown that relativistic effects are important for under-dense plasma and are characterized by a finite ion front velocity. Dense plasma expansion is found to be governed mainly by quantum contributions in the fluid equations that originate from the degenerate pressure in addition to the nonlinear contributions from exchange and correlation potentials. The quantum degeneracy parameter profile provides clues to set the limit between under-dense and dense relativistic plasma expansions at a given density and temperature.

  16. Thermal-Expansion Measurement

    NASA Technical Reports Server (NTRS)

    Davis, J. H.; Rives, C.

    1985-01-01

    Precise stable laser system determines coefficients of thermal expansion. Dual-beam interferometer arrangement monitors changes in sample length as a function of temperature by following changes in optical path lengths.

  17. Optimal Electric Utility Expansion

    SciTech Connect

    1989-10-10

    SAGE-WASP is designed to find the optimal generation expansion policy for an electrical utility system. New units can be automatically selected from a user-supplied list of expansion candidates which can include hydroelectric and pumped storage projects. The existing system is modeled. The calculational procedure takes into account user restrictions to limit generation configurations to an area of economic interest. The optimization program reports whether the restrictions acted as a constraint on the solution. All expansion configurations considered are required to pass a user supplied reliability criterion. The discount rate and escalation rate are treated separately for each expansion candidate and for each fuel type. All expenditures are separated into local and foreign accounts, and a weighting factor can be applied to foreign expenditures.

  18. High-Temperature Expansions for Frenkel-Kontorova Model

    NASA Astrophysics Data System (ADS)

    Takahashi, K.; Mannari, I.; Ishii, T.

    1995-02-01

    Two high-temperature series expansions of the Frenkel-Kontorova (FK) model are investigated: the high-temperature approximation of Schneider-Stoll is extended to the FK model with density ρ ≠ 1, and an alternative series expansion in terms of the modified Bessel function is examined. The terms of both free-energy expansions are obtained explicitly up to sixth order and compared with Ishii's approximation of the transfer-integral method. The specific heat based on the expansions is discussed by comparison with the transfer-integral method and Monte Carlo simulation.

  19. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    PubMed

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machine (SVM)-based localized multiple kernel learning (LMKL) using alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization, developed from state-of-the-art MKL, is the SVM-tied overall complexity and the simultaneous optimization of both the kernel weights and the classifier. Unfortunately, in LMKL the sample-specific character makes the updating of kernel weights a difficult nonconvex quadratic problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely sample-wise objectives. Then the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be obtained independently by solving their exclusive sample-wise objectives, either by linear programming (for the l1-norm) or in closed form (for lp-norms). At test time, the kernel weights learnt for the training data are deployed using the nearest-neighbor rule. Hence, to ensure that these weights generalize to the test data, we introduce neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  20. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.

  1. Relaxation and diffusion models with non-singular kernels

    NASA Astrophysics Data System (ADS)

    Sun, HongGuang; Hao, Xiaoxiao; Zhang, Yong; Baleanu, Dumitru

    2017-02-01

    Anomalous relaxation and diffusion processes have been widely quantified by fractional derivative models, where the definition of the fractional-order derivative remains a historical debate due to its limitation in describing different kinds of non-exponential decays (e.g. stretched exponential decay). Meanwhile, many efforts by mathematicians and engineers have been made to overcome the singularity of power function kernel in its definition. This study first explores physical properties of relaxation and diffusion models where the temporal derivative was defined recently using an exponential kernel. Analytical analysis shows that the Caputo type derivative model with an exponential kernel cannot characterize non-exponential dynamics well-documented in anomalous relaxation and diffusion. A legitimate extension of the previous derivative is then proposed by replacing the exponential kernel with a stretched exponential kernel. Numerical tests show that the Caputo type derivative model with the stretched exponential kernel can describe a much wider range of anomalous diffusion than the exponential kernel, implying the potential applicability of the new derivative in quantifying real-world, anomalous relaxation and diffusion processes.
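
    For reference, a minimal LaTeX sketch of the exponential-kernel (Caputo-Fabrizio-type) derivative discussed above, together with a schematic form of the stretched-exponential replacement; the normalization M(α) and the operator symbols are generic placeholders rather than the paper's exact notation.

```latex
% Caputo-type derivative with a non-singular exponential kernel
{}^{CF}\!D_t^{\alpha} f(t) = \frac{M(\alpha)}{1-\alpha}
    \int_0^{t} f'(\tau)\,
    \exp\!\Big(-\frac{\alpha\,(t-\tau)}{1-\alpha}\Big)\,\mathrm{d}\tau,
    \qquad 0<\alpha<1 .

% Schematic stretched-exponential generalization (\beta = 1 recovers the
% exponential kernel above; 0<\beta<1 broadens the class of relaxation laws)
D_t^{\alpha,\beta} f(t) \propto
    \int_0^{t} f'(\tau)\,
    \exp\!\Big[-\Big(\frac{\alpha\,(t-\tau)}{1-\alpha}\Big)^{\!\beta}\Big]\,
    \mathrm{d}\tau .
```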

  2. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    PubMed

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels, with a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.

  3. Gaussian kernel based anatomically-aided diffuse optical tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Baikejiang, Reheman; Zhang, Wei; Li, Changqing

    2017-02-01

    Image reconstruction in diffuse optical tomography (DOT) is challenging because its inverse problem is nonlinear, ill-posed and ill-conditioned. Anatomical guidance from high spatial resolution imaging modalities can substantially improve the quality of reconstructed DOT images. In this paper, inspired by the kernel methods in machine learning, we propose a kernel method to introduce anatomical information into the DOT image reconstruction algorithm. In this kernel method, the optical absorption coefficient at each finite element node is represented as a function of a set of features obtained from anatomical images such as computed tomography (CT). The kernel based image model is directly incorporated into the forward model of DOT, which exploits the sparseness of the image in the feature space. Compared with Laplacian approaches to including structural priors, the proposed method does not require segmentation of the image into distinct regions. The proposed kernel method is validated with numerical simulations of 3D DOT reconstruction using synthetic CT data. We added 15% Gaussian noise to both the numerical DOT measurements and the simulated CT image. We have also validated the proposed method with an agar phantom experiment using anatomical guidance from a CT scan. We have studied the effects of the voxel size and of the nearest-neighborhood size used in the kernel method on the reconstructed DOT images. Our results indicate that the spatial resolution and the accuracy of the reconstructed DOT images improve substantially after applying the anatomical guidance with the proposed kernel method.
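
    As an illustration of the kernelized image model described above, a minimal Python sketch follows; the function name, neighborhood size and Gaussian width are our own assumptions, not the paper's implementation. The optical image is written as x = Kα, where K is built from anatomical feature vectors, so the DOT forward model y = Ax becomes y = (AK)α and one solves for the kernel coefficients α.

```python
import numpy as np

def anatomical_kernel_matrix(features, n_neighbors=50, sigma=1.0):
    """Gaussian kernel matrix built from anatomical feature vectors.

    features : (n_nodes, n_features) array, e.g. CT-derived feature vectors
               at the finite element nodes.
    Each node keeps only its n_neighbors nearest neighbours in feature space;
    everything here is a simplified illustration for small problems.
    """
    n = features.shape[0]
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    K = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[:n_neighbors]       # nearest neighbours of node i
        K[i, idx] = np.exp(-d2[i, idx] / (2.0 * sigma ** 2))
    return K

# Kernelized image model: the absorption image is x = K @ alpha, so the DOT
# forward model y = A @ x becomes y = (A @ K) @ alpha, and the reconstruction
# estimates the coefficient vector alpha instead of the nodal values directly.
```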

  4. Widely Linear Complex-Valued Kernel Methods for Regression

    NASA Astrophysics Data System (ADS)

    Boloix-Tortosa, Rafael; Murillo-Fuentes, Juan Jose; Santos, Irene; Perez-Cruz, Fernando

    2017-10-01

    Usually, complex-valued RKHS are presented as a straightforward application of the real-valued case. In this paper we prove that this procedure yields a limited solution for regression. We show that another kernel, here denoted the pseudo-kernel, is needed to learn any function in complex-valued fields. Accordingly, we derive a novel RKHS to include it, the widely RKHS (WRKHS). When the pseudo-kernel cancels, WRKHS reduces to the complex-valued RKHS of previous approaches. We address the kernel and pseudo-kernel design, paying attention to the case where the kernel and the pseudo-kernel are complex-valued. In the included experiments we report remarkable improvements in simple scenarios where the real and imaginary parts have different similarity relations for given inputs, or where the real and imaginary parts are correlated. In the context of these novel results we revisit the problem of non-linear channel equalization, to show that the WRKHS helps to design more efficient solutions.
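
    Schematically, and in our own notation (a sketch of the idea rather than the paper's exact formulation), the widely linear kernel regression estimate combines the kernel and the pseudo-kernel:

```latex
f(\mathbf{x}) \;=\; \sum_{i=1}^{N} \alpha_i\, k(\mathbf{x}_i,\mathbf{x})
              \;+\; \sum_{i=1}^{N} \beta_i\, \tilde{k}(\mathbf{x}_i,\mathbf{x}),
\qquad \alpha_i,\beta_i \in \mathbb{C},
```

    where k is the kernel and \tilde{k} the pseudo-kernel; when the pseudo-kernel vanishes the second sum drops out and the usual complex-valued RKHS estimate is recovered, consistent with the reduction of WRKHS described above.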

  5. Spectrum-based kernel length estimation for Gaussian process classification.

    PubMed

    Wang, Liang; Li, Chuan

    2014-06-01

    Recent studies have shown that Gaussian process (GP) classification, a discriminative supervised learning approach, has achieved competitive performance in real applications compared with most state-of-the-art supervised learning methods. However, the problem of automatic model selection in GP classification, involving the kernel function form and the corresponding parameter values (which are unknown in advance), remains a challenge. To make GP classification a more practical tool, this paper presents a novel spectrum analysis-based approach for model selection by refining the GP kernel function to match the given input data. Specifically, we target the problem of GP kernel length scale estimation. Spectra are first calculated analytically from the kernel function itself using the autocorrelation theorem, as well as being estimated numerically from the training data themselves. Then, the kernel length scale is automatically estimated by equating the two spectrum values, i.e., the kernel function spectrum equals the estimated training data spectrum. Compared with the classical Bayesian method for kernel length scale estimation via maximizing the marginal likelihood (which is time consuming and can suffer from multiple local optima), extensive experimental results on various data sets show that our proposed method is both efficient and accurate.
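
    As a one-dimensional illustration of the idea (our own simplification, using a unit-variance squared-exponential kernel), the analytic kernel spectrum follows from the Fourier transform of the kernel, and the length scale is obtained by equating it to the spectrum estimated from the training data at a chosen frequency:

```latex
k(r) = \exp\!\Big(-\frac{r^2}{2\ell^2}\Big)
\;\Longrightarrow\;
S_k(\omega) = \int_{-\infty}^{\infty} k(r)\, e^{-i\omega r}\,\mathrm{d}r
            = \sqrt{2\pi}\,\ell\, e^{-\ell^2\omega^2/2},
\qquad
S_k(\omega_0) \overset{!}{=} \hat{S}_{\mathrm{data}}(\omega_0)
\;\Rightarrow\; \hat{\ell}.
```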

  6. Training Lp norm multiple kernel learning in the primal.

    PubMed

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved by the alternating optimization method, where one alternately solves SVMs in the dual and updates the kernel weights. Since the dual and primal optimization achieve the same aim, it is worth exploring how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal, where we resort to the alternating optimization method: one cycle solves SVMs in the primal using the preconditioned conjugate gradient method, and the other cycle learns the kernel weights. It is interesting to note that the kernel weights in our method admit analytical solutions. Most importantly, the proposed method is well suited for the manifold regularization framework in the primal, since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we carry out a theoretical analysis of multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity yields a particular type of kernel weights. Experiments on several datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method.
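
    For context, the analytical kernel-weight update mentioned above typically takes the following closed form, the standard solution of the Lp-norm-constrained subproblem given the per-kernel function norms ‖w_m‖ (shown in our own notation as a sketch, not necessarily the paper's exact expression):

```latex
\min_{\theta \ge 0,\ \|\theta\|_p \le 1}\ \sum_{m=1}^{M} \frac{\|w_m\|_2^{2}}{\theta_m}
\;\Longrightarrow\;
\theta_m \;=\; \frac{\|w_m\|_2^{2/(p+1)}}
                    {\Big(\sum_{m'=1}^{M} \|w_{m'}\|_2^{2p/(p+1)}\Big)^{1/p}} .
```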

  7. Gaussian kernel width optimization for sparse Bayesian learning.

    PubMed

    Mohsenzadeh, Yalda; Sheikhzadeh, Hamid

    2015-04-01

    Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on the commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters.

  8. Dropping macadamia nuts-in-shell reduces kernel roasting quality.

    PubMed

    Walton, David A; Wallace, Helen M

    2010-10-01

    Macadamia nuts ('nuts-in-shell') are subjected to many impacts from dropping during postharvest handling, resulting in damage to the raw kernel. The effect of dropping on roasted kernel quality is unknown. Macadamia nuts-in-shell were dropped in various combinations of moisture content, number of drops and receiving surface in three experiments. After dropping, samples from each treatment and undropped controls were dry oven-roasted for 20 min at 130 °C, and kernels were assessed for colour, mottled colour and surface damage. Dropping nuts-in-shell onto a bed of nuts-in-shell at 3% moisture content or 20% moisture content increased the percentage of dark roasted kernels. Kernels from nuts dropped first at 20%, then 10% moisture content, onto a metal plate had increased mottled colour. Dropping nuts-in-shell at 3% moisture content onto nuts-in-shell significantly increased surface damage. Similarly, surface damage increased for kernels dropped onto a metal plate at 20%, then at 10% moisture content. Postharvest dropping of macadamia nuts-in-shell causes concealed cellular damage to kernels, the effects not evident until roasting. This damage provides the reagents needed for non-enzymatic browning reactions. Improvements in handling, such as reducing the number of drops and improving handling equipment, will reduce cellular damage and after-roast darkening. Copyright © 2010 Society of Chemical Industry.

  9. Bounding the heat trace of a Calabi-Yau manifold

    NASA Astrophysics Data System (ADS)

    Fiset, Marc-Antoine; Walcher, Johannes

    2015-09-01

    The SCHOK bound states that the number of marginal deformations of certain two-dimensional conformal field theories is bounded linearly from above by the number of relevant operators. In conformal field theories defined via sigma models into Calabi-Yau manifolds, relevant operators can be estimated, in the point-particle approximation, by the low-lying spectrum of the scalar Laplacian on the manifold. In the strict large volume limit, the standard asymptotic expansion of Weyl and Minakshisundaram-Pleijel diverges with the higher-order curvature invariants. We propose that it would be sufficient to find an a priori uniform bound on the trace of the heat kernel for large but finite volume. As a first step in this direction, we then study the heat trace asymptotics, as well as the actual spectrum of the scalar Laplacian, in the vicinity of a conifold singularity. The eigenfunctions can be written in terms of confluent Heun functions, the analysis of which gives evidence that regions of large curvature will not prevent the existence of a bound of this type. This is also in line with general mathematical expectations about spectral continuity for manifolds with conical singularities. A sharper version of our results could, in combination with the SCHOK bound, provide a basis for a global restriction on the dimension of the moduli space of Calabi-Yau manifolds.
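
    For reference, the Weyl / Minakshisundaram-Pleijel small-time expansion referred to above reads, for the scalar Laplacian on a closed n-dimensional manifold (sign and normalization conventions vary):

```latex
\operatorname{Tr} e^{-t\Delta} \;\sim\; (4\pi t)^{-n/2} \sum_{k\ge 0} a_k\, t^{k},
\qquad t \to 0^{+},
\qquad a_0 = \operatorname{Vol}(M), \quad a_1 = \tfrac{1}{6}\int_M R \,\mathrm{d}V ,
```

    with the higher a_k built from curvature invariants of increasing order; on a Ricci-flat (Calabi-Yau) metric a_1 vanishes, and it is the growth of these higher-order invariants that obstructs the naive use of the expansion at large but finite volume.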

  10. Machine learning algorithms for damage detection: Kernel-based approaches

    NASA Astrophysics Data System (ADS)

    Santos, Adam; Figueiredo, Eloi; Silva, M. F. M.; Sales, C. S.; Costa, J. C. W. A.

    2016-02-01

    This paper presents four kernel-based algorithms for damage detection under varying operational and environmental conditions, based on the one-class support vector machine, support vector data description, kernel principal component analysis and greedy kernel principal component analysis. Acceleration time-series from an array of accelerometers were obtained from a laboratory structure and used for performance comparison. The main contribution of this study is the applicability of the proposed algorithms to damage detection, as well as the comparison of their classification performance with that of four other algorithms already considered reliable approaches in the literature. All the proposed algorithms showed better classification performance than the previous ones.
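
    A minimal Python sketch of two of the kernel-based detectors named above, using scikit-learn; the feature matrices, sizes, thresholds and hyper-parameters are placeholders rather than the study's data or settings.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical feature matrices: rows = test events, columns = features
# extracted from the acceleration time-series (e.g. AR model coefficients).
X_baseline = rng.normal(size=(200, 8))                    # undamaged condition
X_test = np.vstack([rng.normal(size=(50, 8)),             # undamaged
                    rng.normal(loc=1.5, size=(50, 8))])   # "damaged"

scaler = StandardScaler().fit(X_baseline)
Xb, Xt = scaler.transform(X_baseline), scaler.transform(X_test)

# 1) One-class SVM: learns a boundary around the baseline condition.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(Xb)
damage_flag_svm = ocsvm.predict(Xt) == -1                 # -1 = outlier = damage

# 2) Kernel PCA: damage indicator = reconstruction error in input space.
kpca = KernelPCA(n_components=4, kernel="rbf", fit_inverse_transform=True).fit(Xb)
recon_err = lambda Z: np.linalg.norm(Z - kpca.inverse_transform(kpca.transform(Z)), axis=1)
threshold = np.percentile(recon_err(Xb), 95)
damage_flag_kpca = recon_err(Xt) > threshold
```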

  11. Biologic fluorescence decay characteristics: determination by Laguerre expansion technique

    NASA Astrophysics Data System (ADS)

    Snyder, Wendy J.; Maarek, Jean-Michel I.; Papaioannou, Thanassis; Marmarelis, Vasilis Z.; Grundfest, Warren S.

    1996-04-01

    Fluorescence decay characteristics are used to identify biologic fluorophores and to characterize interactions with the fluorophore environment. In many studies, fluorescence lifetimes are assessed by iterative reconvolution techniques. We investigated the use of a new approach: the Laguerre expansion of kernels technique (Marmarelis, V.Z., Ann. Biomed. Eng. 1993; 21, 573-589), which yields the fluorescence impulse response function by least-squares fitting of a discrete-time Laguerre function expansion. Nitrogen (4 ns FWHM) and excimer (120 ns FWHM) laser pulses were used to excite the fluorescence of anthracene and of type II collagen powder. After filtering (monochromator) and detection (MCP-PMT), the fluorescence response was digitized (digital storage oscilloscope) and transferred to a personal computer. Input and output data were deconvolved by the Laguerre expansion technique to compute the impulse response function, which was then fitted to a multiexponential function for determination of the decay constants. A single exponential (time constant: 4.24 ns) best approximated the fluorescence decay of anthracene, whereas the type II collagen response was best approximated by a double exponential (time constants: 2.24 and 9.92 ns), in agreement with previously reported data. The results of the Laguerre expansion technique were compared to the least-squares iterative reconvolution technique. The Laguerre expansion technique appeared computationally efficient and robust to experimental noise in the data. Furthermore, the proposed method does not impose a set multiexponential form on the decay.
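
    A simplified Python sketch of the expansion idea: the impulse response is expanded on a Laguerre-type basis, each basis function is convolved with the measured excitation, and the expansion coefficients are obtained by ordinary least squares. Sampled continuous Laguerre functions exp(-t/2τ)L_j(t/τ) are used here as a stand-in for the discrete-time Laguerre basis of Marmarelis; the decay parameter, order and array sizes are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

def laguerre_basis(n_samples, dt, tau, order):
    """Sampled Laguerre-type functions phi_j(t) = exp(-t/(2*tau)) * L_j(t/tau)."""
    t = np.arange(n_samples) * dt
    basis = []
    for j in range(order):
        coeffs = np.zeros(j + 1)
        coeffs[j] = 1.0                      # selects the j-th Laguerre polynomial
        basis.append(np.exp(-t / (2.0 * tau)) * lagval(t / tau, coeffs))
    return np.array(basis)                   # shape (order, n_samples)

def fit_impulse_response(x, y, dt, tau=2.0, order=6, kernel_len=256):
    """Least-squares Laguerre-expansion estimate of the impulse response h(t)
    from the excitation x (laser pulse) and the measured fluorescence y."""
    B = laguerre_basis(kernel_len, dt, tau, order)
    # Convolve each basis function with the input; these are the regressors.
    V = np.array([np.convolve(x, b)[:len(y)] for b in B]).T   # (len(y), order)
    c, *_ = np.linalg.lstsq(V, y, rcond=None)
    return B.T @ c        # estimated impulse response; fit exponentials to it next
```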

  12. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    PubMed

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis, wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls the type I error while maintaining high power across settings: MK-SKAT loses some power compared with the best kernel for a particular scenario, but has much greater power than poor choices.
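
    To make the kernel-choice point concrete, here is a minimal Python sketch of a SKAT-type variance-component statistic Q = r'Kr with a weighted linear kernel; the null-model adjustment is reduced to simple mean-centring, and the weight vector is exactly the kind of choice MK-SKAT searches over (all names and defaults here are our own simplifications).

```python
import numpy as np

def skat_statistic(y, G, weights=None):
    """SKAT-type score statistic Q = r' K r with a weighted linear kernel.

    y       : (n,) continuous trait values
    G       : (n, m) rare-variant genotype matrix (0/1/2 minor-allele counts)
    weights : (m,) per-variant weights; different weights (or kernels)
              correspond to the different tests that MK-SKAT combines
    """
    if weights is None:
        weights = np.ones(G.shape[1])
    r = y - y.mean()          # residuals under a trivial intercept-only null model
    Gw = G * weights          # column-scaled genotypes
    K = Gw @ Gw.T             # weighted linear kernel between subjects
    return float(r @ K @ r)
```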

  13. Discriminant power analyses of non-linear dimension expansion methods

    NASA Astrophysics Data System (ADS)

    Woo, Seongyoun; Lee, Chulhee

    2016-05-01

    Most non-linear classification methods can be viewed as non-linear dimension expansion methods followed by a linear classifier. For example, the support vector machine (SVM) expands the dimensions of the original data using various kernels and classifies the data in the expanded space using a linear SVM. In the case of extreme learning machines or neural networks, the dimensions are expanded by hidden neurons and the final layer performs the linear classification. In this paper, we analyze the discriminant powers of various non-linear classifiers. Some analyses of the discriminating powers of non-linear dimension expansion methods are presented, along with a suggestion of how to improve separability in non-linear classifiers.
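
    A toy Python illustration of the "non-linear dimension expansion followed by a linear classifier" view, using an extreme-learning-machine-style random expansion; the dataset, layer size and activation are arbitrary choices for the sketch.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_hidden = 200
W = rng.normal(size=(X.shape[1], n_hidden))       # fixed random projection
b = rng.normal(size=n_hidden)
expand = lambda Z: np.tanh(Z @ W + b)             # non-linear dimension expansion

# A plain linear classifier in the expanded space separates the non-linear data.
clf = LogisticRegression(max_iter=1000).fit(expand(Xtr), ytr)
print("accuracy in the expanded space:", clf.score(expand(Xte), yte))
```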

  14. Micro-injection by thermal expansion.

    PubMed

    Zalokar, M

    1981-05-01

    A micropipette (diameter 5 to 20 micron) sealed near the orifice to provide a small closed reservoir is described. The reservoir is filled with oil and can be heated with a tiny electric resistance wire loop. Thermal expansion and contraction of the oil in the reservoir allows liquid to be expelled or aspirated. The flow of the liquid can be controlled accurately by varying electric current. Detailed instructions are given for fabricating the micropipette and the heating assembly. A plan for a handy micropipette puller is given. The technique has proved to be valuable in nuclear transplantation and injection of fluid volumes between 1 and 100 picoliters into Drosophila eggs.

  15. The great human expansion.

    PubMed

    Henn, Brenna M; Cavalli-Sforza, L L; Feldman, Marcus W

    2012-10-30

    Genetic and paleoanthropological evidence agrees that today's human population is the result of a great demic (demographic and geographic) expansion that began approximately 45,000 to 60,000 y ago in Africa and rapidly resulted in human occupation of almost all of the Earth's habitable regions. Genomic data from contemporary humans suggest that this expansion was accompanied by a continuous loss of genetic diversity, a result of what is called the "serial founder effect." In addition to genomic data, the serial founder effect model is now supported by the genetics of human parasites, morphology, and linguistics. This particular population history gave rise to the two defining features of genetic variation in humans: genomes from the substructured populations of Africa retain an exceptional number of unique variants, and there is a dramatic reduction in genetic diversity within populations living outside of Africa. These two patterns are relevant for medical genetic studies mapping genotypes to phenotypes and for inferring the power of natural selection in human history. It should be appreciated that the initial expansion and subsequent serial founder effect were determined by demographic and sociocultural factors associated with hunter-gatherer populations. How do we reconcile this major demic expansion with the population stability that followed for thousands of years until the invention of agriculture? We review advances in understanding the genetic diversity within Africa and the great human expansion out of Africa, and offer hypotheses that can help to establish a more synthetic view of modern human evolution.

  16. Virial Expansion Bounds

    NASA Astrophysics Data System (ADS)

    Tate, Stephen James

    2013-10-01

    In the 1960s, the technique of using cluster expansion bounds in order to achieve bounds on the virial expansion was developed by Lebowitz and Penrose (J. Math. Phys. 5:841, 1964) and Ruelle (Statistical Mechanics: Rigorous Results. Benjamin, Elmsford, 1969). This technique is generalised to more recent cluster expansion bounds by Poghosyan and Ueltschi (J. Math. Phys. 50:053509, 2009), which are related to the work of Procacci (J. Stat. Phys. 129:171, 2007) and the tree-graph identity, detailed by Brydges (Phénomènes Critiques, Systèmes Aléatoires, Théories de Jauge. Les Houches 1984, pp. 129-183, 1986). The bounds achieved by Lebowitz and Penrose can also be sharpened by doing the actual optimisation and achieving expressions in terms of the Lambert W-function. The different bound from the cluster expansion shows some improvements for bounds on the convergence of the virial expansion in the case of positive potentials, which are allowed to have a hard core.

  17. Accelerating the loop expansion

    SciTech Connect

    Ingermanson, R.

    1986-07-29

    This thesis introduces a new non-perturbative technique into quantum field theory. To illustrate the method, I analyze the much-studied φ⁴ theory in two dimensions. As a prelude, I first show that the Hartree approximation is easy to obtain from the calculation of the one-loop effective potential by a simple modification of the propagator that does not affect the perturbative renormalization procedure. A further modification then suggests itself, which has the same nice property, and which automatically yields a convex effective potential. I then show that both of these modifications extend naturally to higher orders in the derivative expansion of the effective action and to higher orders in the loop expansion. The net effect is to re-sum the perturbation series for the effective action as a systematic "accelerated" non-perturbative expansion. Each term in the accelerated expansion corresponds to an infinite number of terms in the original series. Each term can be computed explicitly, albeit numerically. Many numerical graphs of the various approximations to the first two terms in the derivative expansion are given. I discuss the reliability of the results and the problem of spontaneous symmetry breaking, as well as some potential applications to more interesting field theories. 40 refs.

  18. A Generalized Grid-Based Fast Multipole Method for Integrating Helmholtz Kernels.

    PubMed

    Parkkinen, Pauli; Losilla, Sergio A; Solala, Eelis; Toivanen, Elias A; Xu, Wen-Hua; Sundholm, Dage

    2017-02-14

    A grid-based fast multipole method (GB-FMM) for optimizing three-dimensional (3D) numerical molecular orbitals in the bubbles and cube double basis has been developed and implemented. The present GB-FMM method is a generalization of our recently published GB-FMM approach for numerically calculating electrostatic potentials and two-electron interaction energies. The orbital optimization is performed by integrating the Helmholtz kernel in the double basis. The steep part of the functions in the vicinity of the nuclei is represented by one-center bubbles functions, whereas the remaining cube part is expanded on an equidistant 3D grid. The integration of the bubbles part is treated by using one-center expansions of the Helmholtz kernel in spherical harmonics multiplied with modified spherical Bessel functions of the first and second kind, analogously to the numerical inward and outward integration approach for calculating two-electron interaction potentials in atomic structure calculations. The expressions and algorithms for massively parallel calculations on general purpose graphics processing units (GPGPU) are described. The accuracy and correctness of the implementation have been checked by performing Hartree-Fock self-consistent-field calculations (HF-SCF) on H2, H2O, and CO. Our calculations show that an accuracy of 10⁻⁴ to 10⁻⁷ Eh can be reached in HF-SCF calculations on general molecules.
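
    For orientation, the one-center expansion of the modified Helmholtz kernel alluded to above has the following familiar form, written with the convention k_l(x) = √(π/2x) K_{l+1/2}(x) for the modified spherical Bessel function of the second kind (the overall constant changes with other normalization conventions):

```latex
\frac{e^{-\kappa\,|\mathbf{r}-\mathbf{r}'|}}{|\mathbf{r}-\mathbf{r}'|}
\;=\; 8\kappa \sum_{l=0}^{\infty} \sum_{m=-l}^{l}
      i_l(\kappa r_<)\, k_l(\kappa r_>)\,
      Y_{lm}(\hat{\mathbf{r}})\, Y_{lm}^{*}(\hat{\mathbf{r}}'),
\qquad r_< = \min(r,r'),\quad r_> = \max(r,r') ,
```

    so that the radial quadrature naturally splits into the inward (r' < r) and outward (r' > r) integrations mentioned in the abstract.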

  19. A Generalized Kernel Consensus-Based Robust Estimator

    PubMed Central

    Wang, Hanzi; Mirota, Daniel; Hager, Gregory D.

    2010-01-01

    In this paper, we present a new Adaptive-Scale Kernel Consensus (ASKC) robust estimator as a generalization of the popular and state-of-the-art robust estimators such as RANdom SAmple Consensus (RANSAC), Adaptive Scale Sample Consensus (ASSC), and Maximum Kernel Density Estimator (MKDE). The ASKC framework is grounded on and unifies these robust estimators using nonparametric kernel density estimation theory. In particular, we show that each of these methods is a special case of ASKC using a specific kernel. Like these methods, ASKC can tolerate more than 50 percent outliers, but it can also automatically estimate the scale of inliers. We apply ASKC to two important areas in computer vision, robust motion estimation and pose estimation, and show comparative results on both synthetic and real data. PMID:19926908

  20. Intelligent classification methods of grain kernels using computer vision analysis

    NASA Astrophysics Data System (ADS)

    Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo

    2011-06-01

    In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.
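
    A compact Python sketch of the pipeline described above (LDA projection of the colour/morphological features followed by a small back-propagation network), using scikit-learn with synthetic placeholder data in place of the actual image features; the 70/20/10 split mirrors the abstract, everything else is an assumption.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2800, 17))          # placeholder: 7 colour + 10 shape features
y = rng.integers(0, 7, size=2800)        # placeholder: 7 grain classes

X_trval, X_test, y_trval, y_test = train_test_split(X, y, test_size=0.10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_trval, y_trval, test_size=2/9, random_state=0)

# LDA compresses the features to at most n_classes - 1 = 6 discriminant axes;
# the MLP plays the role of the back-propagation classifier.
model = make_pipeline(
    LinearDiscriminantAnalysis(n_components=6),
    MLPClassifier(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))
print("test accuracy:", model.score(X_test, y_test))
```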

  1. Hash subgraph pairwise kernel for protein-protein interaction extraction.

    PubMed

    Zhang, Yijia; Lin, Hongfei; Yang, Zhihao; Wang, Jian; Li, Yanpeng

    2012-01-01

    Extracting protein-protein interaction (PPI) from biomedical literature is an important task in biomedical text mining (BioTM). In this paper, we propose a hash subgraph pairwise (HSP) kernel-based approach for this task. The key to the novel kernel is to use the hierarchical hash labels to express the structural information of subgraphs in a linear time. We apply the graph kernel to compute dependency graphs representing the sentence structure for protein-protein interaction extraction task, which can efficiently make use of full graph structural information, and particularly capture the contiguous topological and label information ignored before. We evaluate the proposed approach on five publicly available PPI corpora. The experimental results show that our approach significantly outperforms all-path kernel approach on all five corpora and achieves state-of-the-art performance.

  2. Kernel-based Linux emulation for Plan 9.

    SciTech Connect

    Minnich, Ronald G.

    2010-09-01

    CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss cnkemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.

  3. Inheritance of Kernel Color in Corn: Explanations and Investigations.

    ERIC Educational Resources Information Center

    Ford, Rosemary H.

    2000-01-01

    Offers a new perspective on traditional problems in genetics on kernel color in corn, including information about genetic regulation, metabolic pathways, and evolution of genes. (Contains 15 references.) (ASK)

  4. Bilinear analysis for kernel selection and nonlinear feature extraction.

    PubMed

    Yang, Shu; Yan, Shuicheng; Zhang, Chao; Tang, Xiaoou

    2007-09-01

    This paper presents a unified criterion, Fisher + kernel criterion (FKC), for feature extraction and recognition. This new criterion is intended to extract the most discriminant features in different nonlinear spaces, and then, fuse these features under a unified measurement. Thus, FKC can simultaneously achieve nonlinear discriminant analysis and kernel selection. In addition, we present an efficient algorithm Fisher + kernel analysis (FKA), which utilizes the bilinear analysis, to optimize the new criterion. This FKA algorithm can alleviate the ill-posed problem existed in traditional kernel discriminant analysis (KDA), and usually, has no singularity problem. The effectiveness of our proposed algorithm is validated by a series of face-recognition experiments on several different databases.

  6. Nonlinear hyperspectral unmixing based on constrained multiple kernel NMF

    NASA Astrophysics Data System (ADS)

    Cui, Jiantao; Li, Xiaorun; Zhao, Liaoying

    2014-05-01

    Nonlinear spectral unmixing constitutes an important field of research for hyperspectral imagery. An unsupervised nonlinear spectral unmixing algorithm, namely multiple kernel constrained nonnegative matrix factorization (MKCNMF) is proposed by coupling multiple-kernel selection with kernel NMF. Additionally, a minimum endmemberwise distance constraint and an abundance smoothness constraint are introduced to alleviate the uniqueness problem of NMF in the algorithm. In the MKCNMF, two problems of optimizing matrices and selecting the proper kernel are jointly solved. The performance of the proposed unmixing algorithm is evaluated via experiments based on synthetic and real hyperspectral data sets. The experimental results demonstrate that the proposed method outperforms some existing unmixing algorithms in terms of spectral angle distance (SAD) and abundance fractions.

  7. Heat-Shield Gap Filler

    NASA Technical Reports Server (NTRS)

    Leiser, D. B.; Stewart, D. A.; Smith, M.; Estrella, C.; Goldstein, H. E.

    1985-01-01

    Ceramic cloth strips provide flexible, easily replaceable insulating filler. Filler prevents hot gas from flowing between heat-shield tiles while allowing space for thermal expansion and contraction. Strips easily replaced when necessary.

  8. Probing the physical determinants of thermal expansion of folded proteins.

    PubMed

    Dellarole, Mariano; Kobayashi, Kei; Rouget, Jean-Baptiste; Caro, José Alfredo; Roche, Julien; Islam, Mohammad M; Garcia-Moreno E, Bertrand; Kuroda, Yutaka; Royer, Catherine A

    2013-10-24

    The magnitude and sign of the volume change upon protein unfolding are strongly dependent on temperature. This temperature dependence reflects differences in the thermal expansivity of the folded and unfolded states. The factors that determine protein molar expansivities and the large differences in thermal expansivity for proteins of similar molar volume are not well understood. Model compound studies have suggested that a major contribution is made by differences in the molar volume of water molecules as they transfer from the protein surface to the bulk upon heating. The expansion of internal solvent-excluded voids upon heating is another possible contributing factor. Here, the contribution from hydration density to the molar thermal expansivity of a protein was examined by comparing bovine pancreatic trypsin inhibitor and variants with alanine substitutions at or near the protein-water interface. Variants of two of these proteins with an additional mutation that unfolded them under native conditions were also examined. A modest decrease in thermal expansivity was observed in both the folded and unfolded states for the alanine variants compared with the parent protein, revealing that large changes can be made to the external polarity of a protein without causing large ensuing changes in thermal expansivity. This modest effect is not surprising, given the small molar volume of the alanine residue. Contributions of the expansion of the internal void volume were probed by measuring the thermal expansion for cavity-containing variants of a highly stable form of staphylococcal nuclease. Significantly larger (2-3-fold) molar expansivities were found for these cavity-containing proteins relative to the reference protein. Taken together, these results suggest that a key determinant of the thermal expansivities of folded proteins lies in the expansion of internal solvent-excluded voids.

  9. Kernel generalized neighbor discriminant embedding for SAR automatic target recognition

    NASA Astrophysics Data System (ADS)

    Huang, Yulin; Pei, Jifang; Yang, Jianyu; Wang, Tao; Yang, Haiguang; Wang, Bing

    2014-12-01

    In this paper, we propose a new supervised feature extraction algorithm in synthetic aperture radar automatic target recognition (SAR ATR), called generalized neighbor discriminant embedding (GNDE). Based on manifold learning, GNDE integrates class and neighborhood information to enhance discriminative power of extracted feature. Besides, the kernelized counterpart of this algorithm is also proposed, called kernel-GNDE (KGNDE). The experiment in this paper shows that the proposed algorithms have better recognition performance than PCA and KPCA.

  10. CADCAM 024. DOEDEF KERNEL user's guide. Version 1. 3

    SciTech Connect

    Ames, A.L.

    1986-09-01

    The Department of Energy Data Exchange Format (DOEDEF) Subgroup is developing a software environment for effective translation of CAD based product definition between dissimilar CAD systems within the DOE Weapons Complex based on the Initial Graphics Exchange Specification (IGES). The DOEDEF KERNEL is a set of callable procedures and functions which support the writing of procedures for modifying IGES-based CAD data in a RIM database. This document describes the interface to the procedures within KERNEL. 6 refs., 5 figs.

  11. The Weighted Super Bergman Kernels Over the Supermatrix Spaces

    NASA Astrophysics Data System (ADS)

    Feng, Zhiming

    2015-12-01

    The purpose of this paper is threefold. Firstly, using Howe duality for , we obtain integral formulas of the super Schur functions with respect to the super standard Gaussian distributions. Secondly, we give explicit expressions of the super Szegö kernels and the weighted super Bergman kernels for the Cartan superdomains of type I. Thirdly, combining these results, we obtain duality relations of integrals over the unitary groups and the Cartan superdomains, and the marginal distributions of the weighted measure.

  12. Nonlinear stochastic system identification of skin using volterra kernels.

    PubMed

    Chen, Yi; Hunter, Ian W

    2013-04-01

    Volterra kernel stochastic system identification is a technique that can be used to capture and model nonlinear dynamics in biological systems, including the nonlinear properties of skin during indentation. A high bandwidth and high stroke Lorentz force linear actuator system was developed and used to test the mechanical properties of bulk skin and underlying tissue in vivo using a non-white input force and measuring an output position. These short tests (5 s) were conducted in an indentation configuration normal to the skin surface and in an extension configuration tangent to the skin surface. Volterra kernel solution methods were used including a fast least squares procedure and an orthogonalization solution method. The practical modifications, such as frequency domain filtering, necessary for working with low-pass filtered inputs are also described. A simple linear stochastic system identification technique had a variance accounted for (VAF) of less than 75%. Representations using the first and second Volterra kernels had a much higher VAF (90-97%) as well as a lower Akaike information criteria (AICc) indicating that the Volterra kernel models were more efficient. The experimental second Volterra kernel matches well with results from a dynamic-parameter nonlinearity model with fixed mass as a function of depth as well as stiffness and damping that increase with depth into the skin. A study with 16 subjects showed that the kernel peak values have mean coefficients of variation (CV) that ranged from 3 to 8% and showed that the kernel principal components were correlated with location on the body, subject mass, body mass index (BMI), and gender. These fast and robust methods for Volterra kernel stochastic system identification can be applied to the characterization of biological tissues, diagnosis of skin diseases, and determination of consumer product efficacy.
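
    A minimal Python sketch of direct second-order Volterra kernel identification by least squares; the cited work instead uses a fast least-squares/orthogonalization procedure and Laguerre-type expansions to keep the problem small, so the memory length, the absence of regularization and all sizes here are our own simplifications.

```python
import numpy as np

def volterra2_identify(x, y, memory=32):
    """Estimate zeroth-, first- and second-order Volterra kernels by least squares.

    x, y   : input (force) and output (position) records, 1-D arrays of equal length
    memory : kernel memory length M in samples
    Returns (h0, h1, h2) with h1 of shape (M,) and h2 a symmetric (M, M) array.
    """
    M, N = memory, len(x)
    rows, targets = [], []
    for n in range(M, N):
        past = x[n - M + 1:n + 1][::-1]                   # x[n], x[n-1], ..., x[n-M+1]
        quad = np.outer(past, past)[np.triu_indices(M)]   # unique second-order products
        rows.append(np.concatenate(([1.0], past, quad)))
        targets.append(y[n])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    h0, h1 = theta[0], theta[1:M + 1]
    h2 = np.zeros((M, M))
    h2[np.triu_indices(M)] = theta[M + 1:]
    h2 = 0.5 * (h2 + h2.T)                                # symmetrize off-diagonal terms
    return h0, h1, h2
```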

  13. Quantum heat traces

    NASA Astrophysics Data System (ADS)

    Avramidi, Ivan G.

    2017-02-01

    We study new invariants of elliptic partial differential operators acting on sections of a vector bundle over a closed Riemannian manifold that we call the relativistic heat trace and the quantum heat traces. We obtain some reduction formulas expressing these new invariants in terms of some integral transforms of the usual classical heat trace and compute the asymptotics of these invariants. The coefficients of these asymptotic expansion are determined by the usual heat trace coefficients (which are locally computable) as well as by some new global invariants.

  14. Local Kernel for Brains Classification in Schizophrenia

    NASA Astrophysics Data System (ADS)

    Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.

    In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning-by-example method is introduced by combining local measurements with a non-linear Support Vector Machine. Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points which are characterized by local region descriptors, namely the Scale Invariant Feature Transform (SIFT). Matching is then obtained by introducing a local kernel for which the samples are represented by unordered sets of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed on a set of 54 patients with schizophrenia and 54 normal controls, for which regions of interest (ROIs) have been manually traced by experts. Preliminary results on the Dorso-lateral PreFrontal Cortex (DLPFC) region are promising, since up to 75% successful classification rate has been obtained with this technique, and the performance improved up to 85% when the subjects were stratified by sex.
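
    A small Python sketch of a kernel on unordered descriptor sets of the kind used above: a normalized summation match kernel over SIFT-like local descriptors. The discriminative weighting of feature groups described in the abstract is omitted, and the bandwidth is an arbitrary placeholder.

```python
import numpy as np

def local_match_kernel(A, B, sigma=0.5):
    """Summation match kernel between two unordered sets of local descriptors.

    A, B : arrays of shape (n_a, d) and (n_b, d), e.g. SIFT descriptors
           extracted at landmark points of a region of interest.
    Averaging Gaussian similarities over all descriptor pairs yields a valid
    positive-definite kernel on sets.
    """
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return float(np.exp(-d2 / (2.0 * sigma ** 2)).mean())

# The resulting subject-by-subject Gram matrix can be fed to a non-linear SVM,
# e.g. sklearn.svm.SVC(kernel="precomputed").
```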

  15. Kernel MAD Algorithm for Relative Radiometric Normalization

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Tang, Ping; Hu, Changmiao

    2016-06-01

    The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments with both the linear CCA and the KCCA versions of the MAD algorithm, using Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization and describes the nonlinear relationship between multi-temporal images well. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.

  16. The Dynamic Kernel Scheduler-Part 1

    NASA Astrophysics Data System (ADS)

    Adelmann, Andreas; Locans, Uldis; Suter, Andreas

    2016-10-01

    Emerging processor architectures such as GPUs and Intel MICs provide a huge performance potential for high performance computing. However developing software that uses these hardware accelerators introduces additional challenges for the developer. These challenges may include exposing increased parallelism, handling different hardware designs, and using multiple development frameworks in order to utilise devices from different vendors. The Dynamic Kernel Scheduler (DKS) is being developed in order to provide a software layer between the host application and different hardware accelerators. DKS handles the communication between the host and the device, schedules task execution, and provides a library of built-in algorithms. Algorithms available in the DKS library will be written in CUDA, OpenCL, and OpenMP. Depending on the available hardware, the DKS can select the appropriate implementation of the algorithm. The first DKS version was created using CUDA for the Nvidia GPUs and OpenMP for Intel MIC. DKS was further integrated into OPAL (Object-oriented Parallel Accelerator Library) in order to speed up a parallel FFT based Poisson solver and Monte Carlo simulations for particle-matter interaction used for proton therapy degrader modelling. DKS was also used together with Minuit2 for parameter fitting, where χ2 and max-log-likelihood functions were offloaded to the hardware accelerator. The concepts of the DKS, first results, and plans for the future will be shown in this paper.

  17. Kernel spectral clustering with memory effect

    NASA Astrophysics Data System (ADS)

    Langone, Rocco; Alzate, Carlos; Suykens, Johan A. K.

    2013-05-01

    Evolving graphs describe many natural phenomena changing over time, such as social relationships, trade markets, metabolic networks etc. In this framework, performing community detection and analyzing the cluster evolution represents a critical task. Here we propose a new model for this purpose, where the smoothness of the clustering results over time can be considered as a valid prior knowledge. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness. The latter allows the model to cluster the current data well and to be consistent with the recent history. We also propose new model selection criteria in order to carefully choose the hyper-parameters of our model, which is a crucial issue to achieve good performances. We successfully test the model on four toy problems and on a real world network. We also compare our model with Evolutionary Spectral Clustering, which is a state-of-the-art algorithm for community detection of evolving networks, illustrating that the kernel spectral clustering with memory effect can achieve better or equal performances.

  18. Learning molecular energies using localized graph kernels

    DOE PAGES

    Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos

    2017-03-21

    We report that recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
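
    To illustrate the similarity measure, here is a generic Python sketch of a geometric random-walk kernel between two adjacency matrices (not the GRAPE implementation itself; the decay parameter and the plain Kronecker-product construction are illustrative choices).

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    """Geometric random-walk kernel between two adjacency matrices.

    k(G1, G2) = 1^T (I - lam * Ax)^(-1) 1, where Ax = kron(A1, A2) is the
    adjacency matrix of the direct-product graph; lam must be small enough
    that the geometric series of walks converges (lam < 1 / ||Ax||).
    """
    Ax = np.kron(A1, A2)
    n = Ax.shape[0]
    ones = np.ones(n)
    return float(ones @ np.linalg.solve(np.eye(n) - lam * Ax, ones))
```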

  19. GKS. Minimal Graphical Kernel System C Binding

    SciTech Connect

    Simons, R.W.

    1985-10-01

    GKS (the Graphical Kernel System) is both an American National Standard (ANS) and an ISO international standard graphics package. It conforms to ANS X3.124-1985 and to the May 1985 draft proposal for the GKS C Language Binding standard under development by the X3H3 Technical Committee. This implementation includes level ma (the lowest level of the ANS) and some routines from level mb. The following graphics capabilities are supported: two-dimensional lines, markers, text, and filled areas; control over color, line type, and character height and alignment; multiple simultaneous workstations and multiple transformations; and locator and choice input. Tektronix 4014 and 4115 terminals are supported, and support for other devices may be added. Since this implementation was developed under UNIX, it uses makefiles, C shell scripts, the ar library maintainer, editor scripts, and other UNIX utilities. Therefore, implementing it under another operating system may require considerable effort. Also included with GKS is the small plot package (SPP), a direct descendant of the WEASEL plot package developed at Sandia. SPP is built on the GKS; therefore, all of the capabilities of GKS are available. It is not necessary to use GKS functions, since entire plots can be produced using only SPP functions, but the addition of GKS will give the programmer added power and flexibility. SPP provides single-call plot commands, linear and logarithmic axis commands, control for optional plotting of tick marks and tick mark labels, and permits plotting of data with or without markers and connecting lines.

  20. Protoribosome by quantum kernel energy method.

    PubMed

    Huang, Lulu; Krupkin, Miri; Bashan, Anat; Yonath, Ada; Massa, Lou

    2013-09-10

    Experimental evidence suggests the existence of an RNA molecular prebiotic entity, called by us the "protoribosome," which may have evolved in the RNA world before the evolution of the genetic code and proteins. This vestige of the RNA world, which possesses all of the capabilities required for peptide bond formation, seems to be still functioning at the heart of all contemporary ribosomes. Within the modern ribosome this remnant includes the peptidyl transferase center. Its highly conserved nucleotide sequence is suggestive of its robustness under diverse environmental conditions, and hence of its prebiotic origin. Its twofold pseudosymmetry suggests that this entity could have been a dimer of self-folding RNA units that formed a pocket within which two activated amino acids might be accommodated, similar to the binding mode of modern tRNA molecules that carry amino acids or peptidyl moieties. Using quantum mechanics and crystal coordinates, this work studies the question of whether the putative protoribosome has the properties necessary to function as an evolutionary precursor to the modern ribosome. The quantum model used in the calculations is density functional theory (B3LYP/3-21G*), implemented using the kernel energy method to make the computations practical and efficient. It turns out that the necessary conditions that would characterize a practicable protoribosome, namely (i) energetic structural stability and (ii) energetically stable attachment to substrates, are both well satisfied.