Science.gov

Sample records for heat kernel expansion

  1. Heat kernel asymptotic expansions for the Heisenberg sub-Laplacian and the Grushin operator

    PubMed Central

    Chang, Der-Chen; Li, Yutian

    2015-01-01

    The sub-Laplacian on the Heisenberg group and the Grushin operator are typical examples of sub-elliptic operators. Their heat kernels are both given in the form of Laplace-type integrals. By using Laplace's method, the method of stationary phase and the method of steepest descent, we derive the small-time asymptotic expansions for these heat kernels, which are related to the geodesic structure of the induced geometries. PMID:25792966
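For orientation, the small-time expansion referred to here generalizes the classical Minakshisundaram-Pleijel form on a compact n-dimensional Riemannian manifold (a standard statement included for context, not the paper's own formula; in the sub-Riemannian Heisenberg and Grushin settings the distance is the Carnot-Carathéodory one and the power of t differs):

```latex
p_t(x,y) \;\sim\; \frac{e^{-d(x,y)^2/4t}}{(4\pi t)^{n/2}} \sum_{k \ge 0} a_k(x,y)\, t^k, \qquad t \to 0^+ .
```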

  2. On the asymptotic expansion of the Bergman kernel

    NASA Astrophysics Data System (ADS)

    Seto, Shoo

    Let (L, h) → (M, ω) be a polarized Kähler manifold. We define the Bergman kernel for H0(M, Lk), the space of holomorphic sections of high tensor powers of the line bundle L. In this thesis, we study the asymptotic expansion of the Bergman kernel. We consider the on-diagonal, near-diagonal and far off-diagonal regimes, using L2 estimates to show the existence of the asymptotic expansion and to compute the coefficients in the on- and near-diagonal cases, and a heat kernel approach to show the exponential decay of the off-diagonal Bergman kernel for noncompact manifolds, assuming only a lower bound on the Ricci curvature and C2 regularity of the metric.

  3. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template.
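The core construction, smoothing a signal by a weighted eigenfunction expansion with heat-kernel weights, can be sketched on a small graph Laplacian (an illustrative stand-in for the Laplace-Beltrami operator; the function name and the tiny path graph are hypothetical, not from the paper):

```python
import numpy as np

def heat_kernel_smooth(L, y, t):
    """Smooth signal y with the heat kernel exp(-t L) of a Laplacian L."""
    lam, phi = np.linalg.eigh(L)        # eigenpairs of the Laplacian
    weights = np.exp(-t * lam)          # heat-kernel weights e^{-t lambda_k}
    coeffs = phi.T @ y                  # expansion coefficients <phi_k, y>
    return phi @ (weights * coeffs)     # weighted eigenfunction expansion

# Laplacian of a 5-node path graph, and a rough signal to smooth
n = 5
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0
y = np.array([0.0, 1.0, 0.0, 1.0, 0.0])
smoothed = heat_kernel_smooth(L, y, t=1.0)
```

At t = 0 the expansion reproduces the input exactly, and increasing t is mathematically equivalent to running isotropic heat diffusion for longer, which is the equivalence the abstract highlights.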

  4. A Closed Formula for the Asymptotic Expansion of the Bergman Kernel

    NASA Astrophysics Data System (ADS)

    Xu, Hao

    2012-09-01

    We prove a graph theoretic closed formula for coefficients in the Tian-Yau-Zelditch asymptotic expansion of the Bergman kernel. The formula is expressed in terms of the characteristic polynomial of the directed graphs representing Weyl invariants. The proof relies on a combinatorial interpretation of a recursive formula due to M. Engliš and A. Loi.
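For reference, the on-diagonal Tian-Yau-Zelditch expansion whose coefficients are in question has the form below (a standard statement added for context; the identification of the first coefficient with half the scalar curvature S is due to Lu):

```latex
\rho_k(x) \;\sim\; k^n \Bigl( 1 + \frac{a_1(x)}{k} + \frac{a_2(x)}{k^2} + \cdots \Bigr), \qquad a_1 = \frac{S}{2}.
```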

  5. Frostless heat pump having thermal expansion valves

    DOEpatents

    Chen, Fang C [Knoxville, TN]; Mei, Viung C [Oak Ridge, TN]

    2002-10-22

    A heat pump system having an operable relationship for transferring heat between an exterior atmosphere and an interior atmosphere via a fluid refrigerant and further having a compressor, an interior heat exchanger, an exterior heat exchanger, a heat pump reversing valve, an accumulator, a thermal expansion valve having a remote sensing bulb disposed in heat transferable contact with the refrigerant piping section between said accumulator and said reversing valve, an outdoor temperature sensor, and a first means for heating said remote sensing bulb in response to said outdoor temperature sensor thereby opening said thermal expansion valve to raise suction pressure in order to mitigate defrosting of said exterior heat exchanger wherein said heat pump continues to operate in a heating mode.

  6. Nondiagonal Values of the Heat Kernel for Scalars in a Constant Electromagnetic Field

    NASA Astrophysics Data System (ADS)

    Kalinichenko, I. S.; Kazinski, P. O.

    2017-03-01

    An original method for finding the nondiagonal values of the heat kernel associated with the wave operator Fourier-transformed in time is proposed for the case of a constant external electromagnetic field. The connection of the trace of such a heat kernel to the one-loop correction to the grand thermodynamic potential is indicated. The structure of its singularities is analyzed.

  7. A Semi-supervised Heat Kernel Pagerank MBO Algorithm for Data Classification

    DTIC Science & Technology

    2016-07-01

    computation of a different pagerank for every node, and [70] involves solving a very large matrix system. We now present a simple, efficient and accurate...20], the authors describe an algorithm solving linear systems with boundary conditions using heat kernel pagerank. The method in [21] is another...local clustering algorithm, which uses a novel way of computing the pagerank very efficiently. An interesting application of heat kernel pagerank is
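The heat kernel pagerank mentioned in these fragments is usually defined, following Chung, as rho = e^{-t} * sum_k (t^k / k!) * s P^k for a seed distribution s and random-walk matrix P. A minimal truncated-series sketch (the small example graph and function name are illustrative assumptions, not the report's code):

```python
import numpy as np

def heat_kernel_pagerank(A, s, t, K=50):
    """Truncated series e^{-t} * sum_k t^k/k! * s P^k on adjacency matrix A."""
    deg = A.sum(axis=1)
    P = A / deg[:, None]                # row-stochastic random-walk matrix
    rho = np.zeros_like(s, dtype=float)
    term = s.astype(float)              # current power s P^k, starting at k = 0
    coef = np.exp(-t)                   # e^{-t} t^k / k!, starting at k = 0
    for k in range(K):
        rho += coef * term
        term = term @ P                 # advance to s P^{k+1}
        coef *= t / (k + 1)             # advance to e^{-t} t^{k+1}/(k+1)!
    return rho

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
s = np.array([1.0, 0.0, 0.0, 0.0])      # all seed mass on node 0
rho = heat_kernel_pagerank(A, s, t=2.0)
```

The result is a probability distribution concentrated near the seed, which is what makes it usable for local clustering.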

  8. Heat capacity and thermal expansion of water and helium

    NASA Astrophysics Data System (ADS)

    Putintsev, N. M.; Putintsev, D. N.

    2017-04-01

    Original expressions were established for the heat capacity CV and its components, and for the vibrational and configurational components of the thermal expansion coefficient. The values of CV, Cvib, Cconf, αvib and αconf for water and helium 4He were calculated.

  9. Functional properties of raw and heat processed cashew nut (Anacardium occidentale, L.) kernel protein isolates.

    PubMed

    Neto, V Q; Narain, N; Silva, J B; Bora, P S

    2001-08-01

    The functional properties viz. solubility, water and oil absorption, emulsifying and foaming capacities of the protein isolates prepared from raw and heat processed cashew nut kernels were evaluated. Protein solubility vs. pH profile showed the isoelectric point at pH 5 for both isolates. The isolate prepared from raw cashew nuts showed superior solubility at and above isoelectric point pH. The water and oil absorption capacities of the proteins were slightly improved by heat treatment of cashew nut kernels. The emulsifying capacity of the isolates showed solubility dependent behavior and was better for raw cashew nut protein isolate at pH 5 and above. However, heat treated cashew nut protein isolate presented better foaming capacity at pH 7 and 8 but both isolates showed extremely low foam stability as compared to that of egg albumin.

  10. Direct expansion solar collector and heat pump

    NASA Astrophysics Data System (ADS)

    1982-05-01

    A hybrid heat pump/solar collector combination in which solar collectors replace the outside air heat exchanger found in conventional air-to-air heat pump systems is discussed. The solar panels ordinarily operate at or below ambient temperature, eliminating the need to install the collector panels in a glazed and insulated enclosure. The collectors simply consist of a flat plate with a centrally located tube running longitudinally. Solar energy absorbed by exposed panels directly vaporizes the refrigerant fluid. The resulting vapor is compressed to higher temperature and pressure; then, it is condensed to release the heat absorbed during the vaporization process. Control and monitoring of the demonstration system are addressed, and the tests conducted with the demonstration system are described. The entire heat pump system is modelled, including predicted performance and costs, and economic comparisons are made with conventional flat-plate collector systems.

  11. Heat Pumps With Direct Expansion Solar Collectors

    NASA Astrophysics Data System (ADS)

    Ito, Sadasuke

    In this paper, the studies of heat pump systems using solar collectors as the evaporators, as carried out so far by researchers, are reviewed. Usually, a solar collector without any cover is preferable to one with a cover because of the necessity of absorbing heat from the ambient air when the intensity of the solar energy on the collector is not sufficient. The performance of the collector depends on its area and on the intensity of convective heat transfer at the surface. Fins are fixed on the backside of the collector surface or on the tube in which the refrigerant flows in order to increase the convective heat transfer. For the purpose of using a heat pump efficiently throughout the year, a compressor with variable capacity is applied. The solar assisted heat pump can be used for air conditioning at night during the summer. Only a few groups have studied cooling using solar assisted heat pump systems. In Japan, one company has commercially produced a system for hot water supply, and another company has commercially installed a system for air conditioning in buildings.

  12. A Novel Cortical Thickness Estimation Method based on Volumetric Laplace-Beltrami Operator and Heat Kernel

    PubMed Central

    Wang, Gang; Zhang, Xiaofeng; Su, Qingtang; Shi, Jie; Caselli, Richard J.; Wang, Yalin

    2015-01-01

    Cortical thickness estimation in magnetic resonance imaging (MRI) is an important technique for research on brain development and neurodegenerative diseases. This paper presents a heat kernel based cortical thickness estimation algorithm, driven by the graph spectrum and heat kernel theory, to capture the grey matter geometry information from the in vivo brain magnetic resonance (MR) images. First, we construct a tetrahedral mesh that matches the MR images and reflects the inherent geometric characteristics. Second, the harmonic field is computed by the volumetric Laplace-Beltrami operator, and the direction of the streamline is obtained by tracing the maximum heat transfer probability based on heat kernel diffusion. Thereby we can calculate the cortical thickness between points on the pial and white matter surfaces. The new method relies on intrinsic brain geometry structure, and the computation is robust and accurate. To validate our algorithm, we apply it to study the thickness differences associated with Alzheimer’s disease (AD) and mild cognitive impairment (MCI) on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. Our preliminary experimental results on 151 subjects (51 AD, 45 MCI, 55 controls) show that the new algorithm may successfully detect statistically significant differences among AD, MCI and healthy control subjects. Our computational framework is efficient and very general. It has the potential to be used for thickness estimation on any biological structures with clearly defined inner and outer surfaces. PMID:25700360

  13. A novel cortical thickness estimation method based on volumetric Laplace-Beltrami operator and heat kernel.

    PubMed

    Wang, Gang; Zhang, Xiaofeng; Su, Qingtang; Shi, Jie; Caselli, Richard J; Wang, Yalin

    2015-05-01

    Cortical thickness estimation in magnetic resonance imaging (MRI) is an important technique for research on brain development and neurodegenerative diseases. This paper presents a heat kernel based cortical thickness estimation algorithm, driven by the graph spectrum and heat kernel theory, to capture the gray matter geometry information from the in vivo brain magnetic resonance (MR) images. First, we construct a tetrahedral mesh that matches the MR images and reflects the inherent geometric characteristics. Second, the harmonic field is computed by the volumetric Laplace-Beltrami operator, and the direction of the streamline is obtained by tracing the maximum heat transfer probability based on heat kernel diffusion. Thereby we can calculate the cortical thickness between points on the pial and white matter surfaces. The new method relies on intrinsic brain geometry structure, and the computation is robust and accurate. To validate our algorithm, we apply it to study the thickness differences associated with Alzheimer's disease (AD) and mild cognitive impairment (MCI) on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Our preliminary experimental results on 151 subjects (51 AD, 45 MCI, 55 controls) show that the new algorithm may successfully detect statistically significant differences among AD, MCI and healthy control subjects. Our computational framework is efficient and very general. It has the potential to be used for thickness estimation on any biological structures with clearly defined inner and outer surfaces.

  14. The heat kernel for two Aharonov-Bohm solenoids in a uniform magnetic field

    NASA Astrophysics Data System (ADS)

    Šťovíček, Pavel

    2017-01-01

    A non-relativistic quantum model is considered with a point particle carrying a charge e and moving in the plane pierced by two infinitesimally thin Aharonov-Bohm solenoids and subjected to a perpendicular uniform magnetic field of magnitude B. Relying on a technique, originally due to Schulman, Laidlaw and DeWitt, which is applicable to Schrödinger operators on multiply connected configuration manifolds, a formula is derived for the corresponding heat kernel. As an application of the heat kernel formula, approximate asymptotic expressions are derived for the lowest eigenvalue lying above the first Landau level and for the corresponding eigenfunction, while assuming that |eB|R2/(ħc) is large, where R is the distance between the two solenoids.

  15. Asymptotic expansions of the kernel functions for line formation with continuous absorption

    NASA Technical Reports Server (NTRS)

    Hummer, D. G.

    1991-01-01

    Asymptotic expressions are obtained for the kernel functions M2(tau, alpha, beta) and K2(tau, alpha, beta) appearing in the theory of line formation with complete redistribution over a Voigt profile with damping parameter alpha, in the presence of a source of continuous opacity parameterized by beta. For alpha greater than 0, each coefficient in the asymptotic series is expressed as the product of analytic functions of alpha and beta. For Doppler broadening, only the leading term can be evaluated analytically.

  16. Investigation of direct expansion in ground source heat pumps

    NASA Astrophysics Data System (ADS)

    Kalman, M. D.

    A fully instrumented subscale ground coupled heat pump system was developed, built, and used to test and obtain data on three different earth heat exchanger configurations under heating conditions (ground cooling). Various refrigerant flow control and compressor protection devices were tested for their applicability to the direct expansion system. Undisturbed earth temperature data were acquired at various depths. The problem of oil return at low evaporator temperatures and low refrigerant velocities was addressed. An analysis was performed to theoretically determine what evaporator temperature can be expected with an isolated ground pipe configuration of given length, pipe size, soil conditions and constant heat load. Technical accomplishments to date are summarized.

  17. Water expansion dynamics after pulsed IR laser heating.

    PubMed

    Hobley, Jonathan; Kuge, Yutaka; Gorelik, Sergey; Kasuya, Motohiro; Hatanaka, Koji; Kajimoto, Shinji; Fukumura, Hiroshi

    2008-09-14

    A nanosecond pulsed IR (1.9 μm) laser rapidly heated water, in an open vessel, to temperatures well below the boiling point. The subsequent dynamics of volume expansion were monitored using time-resolved interferometry in order to measure the increase in the water level in the heated area. The water expanded at less than the speed of sound, taking just less than 100 ns to increase its height by approximately 500 nm at surface temperature jumps of 20 K. The initial expansion was followed by an apparent contraction and then a re-expansion. The first expansion phase occurred more slowly than the timescale for bulk H-bond re-structuring of the water, as determined from vibrational bands in the Raman spectra, and represents the limit to the rate at which the overpressure caused by sudden heating can be released. The second phase of the expansion was caused by hydrodynamic effects and is accompanied by morphological changes resulting in light scattering as well as droplet spallation.

  18. Heat damage and in vitro starch digestibility of puffed wheat kernels.

    PubMed

    Cattaneo, Stefano; Hidalgo, Alyssa; Masotti, Fabio; Stuknytė, Milda; Brandolini, Andrea; De Noni, Ivano

    2015-12-01

    The effect of processing conditions on heat damage, starch digestibility, release of advanced glycation end products (AGEs) and antioxidant capacity of puffed cereals was studied. The determination of several markers arising from the Maillard reaction proved pyrraline (PYR) and hydroxymethylfurfural (HMF) to be the most reliable indices of the heat load applied during puffing. The considerable heat load was evidenced by the high levels of both PYR (57.6-153.4 mg kg(-1) dry matter) and HMF (13-51.2 mg kg(-1) dry matter). For cost and simplicity, HMF appeared to be the most appropriate index in puffed cereals. Puffing influenced starch in vitro digestibility, with most of the starch (81-93%) hydrolyzed to maltotriose, maltose and glucose, whereas only limited amounts of AGEs were released. The relevant antioxidant capacity revealed by digested puffed kernels can be ascribed both to the newly formed Maillard reaction products and to the conditions adopted during in vitro digestion.

  19. Density-Aware Clustering Based on Aggregated Heat Kernel and Its Transformation

    SciTech Connect

    Huang, Hao; Yoo, Shinjae; Yu, Dantong; Qin, Hong

    2015-06-01

    Current spectral clustering algorithms suffer from sensitivity to noise and to parameter scaling, and may not be aware of different density distributions across clusters. If these problems are left untreated, the resulting clusters cannot accurately represent true data patterns, in particular for complex real-world datasets with heterogeneous densities. This paper aims to solve these problems by proposing a diffusion-based Aggregated Heat Kernel (AHK) to improve clustering stability, and a Local Density Affinity Transformation (LDAT) to correct the bias originating from different cluster densities. AHK statistically models the heat diffusion traces along the entire time scale, so it ensures robustness during the clustering process, while LDAT probabilistically reveals the local density of each instance and suppresses the local density bias in the affinity matrix. Our proposed framework integrates these two techniques systematically. As a result, not only does it provide an advanced noise-resisting and density-aware spectral mapping of the original dataset, but it also demonstrates stability while tuning the scaling parameter (which usually controls the neighborhood range). Furthermore, our framework works well with the majority of similarity kernels, which ensures its applicability to many types of data and problem domains. Systematic experiments on different applications show that our proposed algorithms outperform state-of-the-art clustering algorithms on data with heterogeneous density distributions, and achieve robust clustering performance with respect to tuning the scaling parameter and handling various levels and types of noise.
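The idea of aggregating heat kernels over time scales can be sketched in a few lines (an illustrative toy, not the authors' AHK implementation; the two point groups and the Gaussian similarity are assumptions made for the example):

```python
import numpy as np

def aggregated_heat_kernel(W, ts):
    """Average the graph heat kernels exp(-t L) of affinity W over scales ts."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                      # unnormalized graph Laplacian
    lam, U = np.linalg.eigh(L)              # spectral decomposition of L
    # assemble exp(-t L) from eigenpairs and average over the time scales
    agg = sum(U @ np.diag(np.exp(-t * lam)) @ U.T for t in ts)
    return agg / len(ts)

# two well-separated 1-D groups of points, Gaussian similarity between them
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
W = np.exp(-(X - X.T) ** 2)
np.fill_diagonal(W, 0.0)
A = aggregated_heat_kernel(W, ts=[0.5, 1.0, 2.0])
```

The aggregated affinity A stays symmetric and keeps within-group entries well above cross-group ones across the whole range of scales, which is the robustness property the abstract describes.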

  20. Density-Aware Clustering Based on Aggregated Heat Kernel and Its Transformation

    DOE PAGES

    Huang, Hao; Yoo, Shinjae; Yu, Dantong; ...

    2015-06-01

    Current spectral clustering algorithms suffer from sensitivity to noise and to parameter scaling, and may not be aware of different density distributions across clusters. If these problems are left untreated, the resulting clusters cannot accurately represent true data patterns, in particular for complex real-world datasets with heterogeneous densities. This paper aims to solve these problems by proposing a diffusion-based Aggregated Heat Kernel (AHK) to improve clustering stability, and a Local Density Affinity Transformation (LDAT) to correct the bias originating from different cluster densities. AHK statistically models the heat diffusion traces along the entire time scale, so it ensures robustness during the clustering process, while LDAT probabilistically reveals the local density of each instance and suppresses the local density bias in the affinity matrix. Our proposed framework integrates these two techniques systematically. As a result, not only does it provide an advanced noise-resisting and density-aware spectral mapping of the original dataset, but it also demonstrates stability while tuning the scaling parameter (which usually controls the neighborhood range). Furthermore, our framework works well with the majority of similarity kernels, which ensures its applicability to many types of data and problem domains. Systematic experiments on different applications show that our proposed algorithms outperform state-of-the-art clustering algorithms on data with heterogeneous density distributions, and achieve robust clustering performance with respect to tuning the scaling parameter and handling various levels and types of noise.

  1. Towards a Holistic Cortical Thickness Descriptor: Heat Kernel-Based Grey Matter Morphology Signatures.

    PubMed

    Wang, Gang; Wang, Yalin

    2017-02-15

    In this paper, we propose a heat kernel based regional shape descriptor that may be capable of better exploiting volumetric morphological information than other available methods, thereby improving statistical power on brain magnetic resonance imaging (MRI) analysis. The mechanism of our analysis is driven by the graph spectrum and the heat kernel theory, to capture the volumetric geometry information in the constructed tetrahedral meshes. In order to capture profound brain grey matter shape changes, we first use the volumetric Laplace-Beltrami operator to determine the point pair correspondence between white-grey matter and CSF-grey matter boundary surfaces by computing the streamlines in a tetrahedral mesh. Secondly, we propose multi-scale grey matter morphology signatures to describe the transition probability by random walk between the point pairs, which reflects the inherent geometric characteristics. Thirdly, a point distribution model is applied to reduce the dimensionality of the grey matter morphology signatures and generate the internal structure features. With the sparse linear discriminant analysis, we select a concise morphology feature set with improved classification accuracies. In our experiments, the proposed work outperformed the cortical thickness features computed by FreeSurfer software in the classification of Alzheimer's disease and its prodromal stage, i.e., mild cognitive impairment, on publicly available data from the Alzheimer's Disease Neuroimaging Initiative. The multi-scale and physics based volumetric structure feature may bring stronger statistical power than some traditional methods for MRI-based grey matter morphology analysis.

  2. Weighted Riemannian 1-manifolds for classical orthogonal polynomials and their heat kernel

    NASA Astrophysics Data System (ADS)

    Crasmareanu, Mircea

    2015-12-01

    Through the eigenvalue problem we associate to the classical orthogonal polynomials two classes of weighted Riemannian 1-manifolds with coordinate x. For the first class the eigenvalues contain x and the metric is fixed as the Euclidean one, while for the second class the eigenvalues are independent of this variable and the metric and weight function are found. The Hermite polynomials are the only case which generates the same manifold. The geometry of the second class of weighted manifolds is studied from several points of view: geodesics, distance and exponential map, harmonic functions and their energy density, volume, zeta function, and heat kernel. A partial heat equation is studied for these metrics and for the Poincaré ball model of hyperbolic geometry.

  3. Shape-Based Image Matching Using Heat Kernels and Diffusion Maps

    NASA Astrophysics Data System (ADS)

    Vizilter, Yu. V.; Gorbatsevich, V. S.; Rubis, A. Yu.; Zheltov, S. Yu.

    2014-08-01

    The 2D image matching problem is often stated as an image-to-shape or shape-to-shape matching problem. Such shape-based matching techniques should provide the matching of scene image fragments registered in various lighting, weather and season conditions or in different spectral bands. The most popular shape-to-shape matching technique is based on the mutual information approach. Another well-known approach is the morphological image-to-shape matching proposed by Pytiev. In this paper we propose a new image-to-shape matching technique based on heat kernels and diffusion maps. The corresponding Diffusion Morphology is proposed as a new generalization of the Pytiev morphological scheme. A fast implementation of morphological diffusion filtering is described. An experimental comparison of the new and aforementioned shape-based matching techniques is reported for the TV and IR image matching problem.
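The diffusion-map machinery underlying such matching can be sketched in its most basic form (a generic illustration of diffusion maps, not the authors' pipeline; the 1-D point set and bandwidth are assumptions for the example): normalize a Gaussian affinity into a Markov matrix and embed each point by the leading nontrivial eigenvectors scaled by powers of their eigenvalues.

```python
import numpy as np

def diffusion_map(W, t=1, dim=2):
    """Embed points by the top nontrivial eigenvectors of the diffusion operator."""
    P = W / W.sum(axis=1, keepdims=True)       # row-stochastic diffusion matrix
    lam, V = np.linalg.eig(P)
    order = np.argsort(-lam.real)              # sort eigenvalues descending
    lam, V = lam.real[order], V.real[:, order]
    # skip the trivial eigenvector (eigenvalue 1); scale by lambda^t
    return V[:, 1:dim + 1] * lam[1:dim + 1] ** t

X = np.linspace(0.0, 1.0, 8)                   # points along a line
W = np.exp(-(X[:, None] - X[None, :]) ** 2 / 0.1)
emb = diffusion_map(W, t=2, dim=2)
```

For points sampled along a line, the first diffusion coordinate orders the points monotonically, recovering the underlying 1-D geometry; on images, the same construction yields the diffusion filtering used in the proposed morphology.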

  4. Quantum elasticity of graphene: Thermal expansion coefficient and specific heat

    NASA Astrophysics Data System (ADS)

    Burmistrov, I. S.; Gornyi, I. V.; Kachorovskii, V. Yu.; Katsnelson, M. I.; Mirlin, A. D.

    2016-11-01

    We explore the thermodynamics of a quantum membrane, with a particular application to a suspended graphene membrane and a particular focus on the thermal expansion coefficient. We show that an interplay between quantum and classical anharmonicity-controlled fluctuations leads to unusual elastic properties of the membrane. The effect of quantum fluctuations is governed by the dimensionless coupling constant g0 ≪ 1, which vanishes in the classical limit (ℏ → 0) and is equal to ≃0.05 for graphene. We demonstrate that the thermal expansion coefficient αT of the membrane is negative and remains nearly constant down to extremely low temperatures, T0 ∝ exp(-2/g0). We also find that αT diverges in the classical limit: αT ∝ -ln(1/g0) for g0 → 0. For graphene parameters, we estimate the value of the thermal expansion coefficient as αT ≃ -0.23 eV-1, which applies below the temperature Tuv ~ g0ϰ0 ~ 500 K (where ϰ0 ~ 1 eV is the bending rigidity) down to T0 ~ 10-14 K. For T < T0, the thermal expansion coefficient slowly (logarithmically) approaches zero with decreasing temperature. This behavior is surprising since typically the thermal expansion coefficient goes to zero as a power-law function. We discuss possible experimental consequences of this anomaly. We also evaluate the classical and quantum contributions to the specific heat of the membrane and investigate the behavior of the Grüneisen parameter.

  5. Energy recovery during expansion of compressed gas using power plant low-quality heat sources

    DOEpatents

    Ochs, Thomas L.; O'Connor, William K.

    2006-03-07

    A method of recovering energy from a cool compressed gas, compressed liquid, vapor, or supercritical fluid is disclosed which includes incrementally expanding the compressed gas, compressed liquid, vapor, or supercritical fluid through a plurality of expansion engines and heating the gas, vapor, compressed liquid, or supercritical fluid entering at least one of the expansion engines with a low quality heat source. Expansion engines such as turbines and multiple expansions with heating are disclosed.

  6. Bergman Kernel from Path Integral

    NASA Astrophysics Data System (ADS)

    Douglas, Michael R.; Klevtsov, Semyon

    2010-01-01

    We rederive the expansion of the Bergman kernel on Kähler manifolds developed by Tian, Yau, Zelditch, Lu and Catlin, using path integral and perturbation theory, and generalize it to supersymmetric quantum mechanics. One physics interpretation of this result is as an expansion of the projector of wave functions on the lowest Landau level, in the special case that the magnetic field is proportional to the Kähler form. This is relevant for the quantum Hall effect in curved space, and for its higher dimensional generalizations. Other applications include the theory of coherent states, the study of balanced metrics, noncommutative field theory, and a conjecture on metrics in black hole backgrounds discussed in [24]. We give a short overview of these various topics. From a conceptual point of view, this expansion is noteworthy as it is a geometric expansion, somewhat similar to the DeWitt-Seeley-Gilkey et al short time expansion for the heat kernel, but in this case describing the long time limit, without depending on supersymmetry.

  7. Investigation of contact resistance for fin-tube heat exchanger by means of tube expansion

    NASA Astrophysics Data System (ADS)

    Hing, Yau Kar; Raghavan, Vijay R.; Meng, Chin Wai

    2012-06-01

    An experimental study of the heat transfer performance of a fin-tube heat exchanger subject to mechanical expansion of the tubes by bullets is reported in this paper. The manufacture of a fin-tube heat exchanger commonly involves inserting copper tubes into a stack of aluminium fins and expanding the tubes mechanically. The mechanical expansion is achieved by inserting a steel bullet through the tube. The steel bullet has a larger diameter than the tube, and the expansion provides a firm surface contact between fins and tubes. Five bullet expansion ratios (1.045 to 1.059) were used in the study to expand 9.52 mm diameter tubes in a fin-tube heat exchanger. The study was conducted on a water-to-water loop experimental rig under steady state conditions. In addition, the effects of fin hardness and fin pitch were investigated. The results indicate that the optimum heat transfer occurred at a bullet expansion ratio ranging from 1.049 to 1.052. It is also observed that larger fin pitches require larger bullet expansion ratios, especially with lower fin hardness. As the fin pitch increases, both fin hardness tempers (H22 and H24) exhibit increasing heat transfer rate per fin (W/fin). With the H22 temper, the increase is as much as 11%, while H24 increases by 1.2%.

  8. Heat pump systems with direct expansion ground coils

    NASA Astrophysics Data System (ADS)

    Svec, O. J.; Baxter, V. D.

    This paper is a summary of an international research project organized within the framework of the International Energy Agency (IEA) Implementing Agreement on Heat Pumps. This cooperative project, based on a task-sharing principle, was proposed by the Canadian team and joined by the national teams of the United States of America, Japan and Austria. The Institute for Research in Construction (IRC) of the National Research Council of Canada (NRCC) has been acting as the Operating Agent for this project, known as Annex XV. The need for this research is based on recognition of the state of the art of Ground Source Heat Pump (GSHP) technology, which can be described by the following two statements: (1) GSHP technology is the most successful of all renewable technologies in North American and northern European countries; and (2) the installation cost of GSHP systems is currently too high for a meaningful worldwide penetration into the heating/cooling market.

  9. Eigenvalue Expansion Approach to Study Bio-Heat Equation

    NASA Astrophysics Data System (ADS)

    Khanday, M. A.; Nazir, Khalid

    2016-07-01

    A mathematical model based on the Pennes bio-heat equation was formulated to estimate temperature profiles in peripheral regions of the human body. The heat processes due to diffusion, perfusion and metabolic pathways were considered to establish a second-order partial differential equation together with initial and boundary conditions. The model was solved using the eigenvalue expansion method, and numerical values of the physiological parameters were used to understand the thermal disturbance in biological tissues. The results were illustrated at atmospheric temperatures TA = 10 °C and 20 °C.
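The eigenvalue-expansion technique can be illustrated on a plain 1-D heat equation with zero-temperature ends (a simplified stand-in, not the full Pennes model with perfusion and metabolic source terms; the initial profile and all names are assumptions for the example): expand the initial profile in the Laplacian eigenfunctions sin(nπx/L) and let each mode decay as exp(-(nπ/L)² t).

```python
import numpy as np

def heat_series(x, t, f, L=1.0, N=50, M=2000):
    """Eigenfunction-expansion solution of u_t = u_xx with u(0)=u(L)=0."""
    xs = (np.arange(M) + 0.5) * (L / M)          # midpoint quadrature nodes
    u = np.zeros_like(x, dtype=float)
    for n in range(1, N + 1):
        # Fourier sine coefficient b_n = (2/L) * integral of f(z) sin(n pi z / L)
        bn = (2.0 / L) * np.sum(f(xs) * np.sin(n * np.pi * xs / L)) * (L / M)
        # each eigenmode decays with its eigenvalue (n pi / L)^2
        u += bn * np.exp(-(n * np.pi / L) ** 2 * t) * np.sin(n * np.pi * x / L)
    return u

x = np.linspace(0.0, 1.0, 11)
f = lambda z: z * (1.0 - z)                      # illustrative initial profile
u0 = heat_series(x, 0.0, f)                      # series reproduces f at t = 0
u1 = heat_series(x, 0.05, f)                     # profile relaxes toward zero
```

In the bio-heat setting the same machinery applies with the perfusion term shifting the eigenvalues and the metabolic source adding a steady-state particular solution.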

  10. The Effect of Homogenization Heat Treatment on Thermal Expansion Coefficient and Dimensional Stability of Low Thermal Expansion Cast Irons

    NASA Astrophysics Data System (ADS)

    Chen, Li-Hao; Liu, Zong-Pei; Pan, Yung-Ning

    2016-08-01

    In this paper, the effect of homogenization heat treatment on the α value [coefficient of thermal expansion (10⁻⁶ K⁻¹)] of low thermal expansion cast irons was studied. In addition, constrained thermal cyclic tests were conducted to evaluate the dimensional stability of the low thermal expansion cast irons under various heat treatment conditions. The results indicate that when the alloys were homogenized at a relatively low temperature, e.g., 1023 K (750 °C), the elimination of Ni segregation was not very effective, but the C concentration in the matrix was moderately reduced. On the other hand, if the alloys were homogenized at a relatively high temperature, e.g., 1473 K (1200 °C), the opposite results were obtained. Consequently, not much improvement (reduction) in α value was achieved in either case. Therefore, a compound homogenization heat treatment procedure was designed, namely 1473 K (1200 °C)/4 hours/FC/1023 K (750 °C)/2 hours/WQ, in which the relatively high homogenization temperature of 1473 K (1200 °C) effectively eliminates the Ni segregation, and the subsequent holding stage at 1023 K (750 °C) reduces the C content in the matrix. As a result, very low α values of around (1 to 2) × 10⁻⁶ K⁻¹ were obtained. Regarding the constrained thermal cyclic testing over 303 K to 473 K (30 °C to 200 °C), the results indicate that regardless of heat treatment condition, low thermal expansion cast irons exhibit considerably higher dimensional stability than either regular ductile cast iron or 304 stainless steel. Furthermore, a positive correlation exists between the α(303 K to 473 K) value and the amount of shape change after the thermal cyclic testing. Among the alloys investigated, Heat I-T3B (1473 K (1200 °C)/4 hours/FC/1023 K (750 °C)/2 hours/WQ) exhibits the lowest α(303 K to 473 K) value (1.72 × 10⁻⁶ K⁻¹) and hence the least shape change (7.41 μm), i.e., the best dimensional stability.
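    As a quick sanity check on the numbers in this record, the free thermal strain implied by the lowest reported α value over the cited test range can be computed directly (the 100 mm gauge length below is a hypothetical illustration, not a dimension from the study):

```python
# Free thermal strain implied by the lowest reported alpha value.
alpha = 1.72e-6            # 1/K, Heat I-T3B (value quoted in the abstract)
dT = 473.0 - 303.0         # K, span of the constrained thermal cyclic test
strain = alpha * dT        # dimensionless free thermal strain, dL/L

# For a hypothetical 100 mm gauge length, the unconstrained expansion:
gauge_m = 100e-3           # m (illustrative length, not from the study)
dL_um = strain * gauge_m * 1e6   # micrometres

print(f"strain = {strain:.3e}, dL over 100 mm = {dL_um:.1f} um")
```

    Even over the full 170 K swing, the free expansion of such an alloy stays below about 30 μm per 100 mm, which is consistent with the very small shape changes the study reports.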

  11. The Statistical Interpretation of Classical Thermodynamic Heating and Expansion Processes

    ERIC Educational Resources Information Center

    Cartier, Stephen F.

    2011-01-01

    A statistical model has been developed and applied to interpret thermodynamic processes typically presented from the macroscopic, classical perspective. Through this model, students learn and apply the concepts of statistical mechanics, quantum mechanics, and classical thermodynamics in the analysis of the (i) constant volume heating, (ii)…

  12. Learning with Box Kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-04-12

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, since the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given which dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.

  13. Learning with box kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-11-01

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, because the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given that dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.

  14. Effects of City Expansion on Heat Stress under Climate Change Conditions

    PubMed Central

    Argüeso, Daniel; Evans, Jason P.; Pitman, Andrew J.; Di Luca, Alejandro

    2015-01-01

    We examine the joint contribution of urban expansion and climate change to heat stress over the Sydney region. A Regional Climate Model was used to downscale present (1990–2009) and future (2040–2059) simulations from a Global Climate Model. The effects of urban surfaces on local temperature and vapor pressure were included. The role of urban expansion in modulating the climate change signal at local scales was investigated using a human heat-stress index combining temperature and vapor pressure. Urban expansion and climate change lead to increased risk of heat-stress conditions in the Sydney region, with substantially more frequent adverse conditions in urban areas. Impacts are particularly obvious in extreme values; daytime heat-stress impacts are more noticeable in the higher percentiles than in the mean values and the impact at night is more obvious in the lower percentiles than in the mean. Urban expansion enhances heat-stress increases due to climate change at night, but partly compensates its effects during the day. These differences are due to a stronger contribution from vapor pressure deficit during the day and from temperature increases during the night induced by urban surfaces. Our results highlight the inappropriateness of assessing human comfort determined using temperature changes alone and point to the likelihood that impacts of climate change assessed using models that lack urban surfaces probably underestimate future changes in terms of human comfort. PMID:25668390

  15. Effects of city expansion on heat stress under climate change conditions.

    PubMed

    Argüeso, Daniel; Evans, Jason P; Pitman, Andrew J; Di Luca, Alejandro

    2015-01-01

    We examine the joint contribution of urban expansion and climate change to heat stress over the Sydney region. A Regional Climate Model was used to downscale present (1990-2009) and future (2040-2059) simulations from a Global Climate Model. The effects of urban surfaces on local temperature and vapor pressure were included. The role of urban expansion in modulating the climate change signal at local scales was investigated using a human heat-stress index combining temperature and vapor pressure. Urban expansion and climate change lead to increased risk of heat-stress conditions in the Sydney region, with substantially more frequent adverse conditions in urban areas. Impacts are particularly obvious in extreme values; daytime heat-stress impacts are more noticeable in the higher percentiles than in the mean values and the impact at night is more obvious in the lower percentiles than in the mean. Urban expansion enhances heat-stress increases due to climate change at night, but partly compensates its effects during the day. These differences are due to a stronger contribution from vapor pressure deficit during the day and from temperature increases during the night induced by urban surfaces. Our results highlight the inappropriateness of assessing human comfort determined using temperature changes alone and point to the likelihood that impacts of climate change assessed using models that lack urban surfaces probably underestimate future changes in terms of human comfort.

  16. Mixed convection in a horizontal porous duct with a sudden expansion and local heating from below

    SciTech Connect

    Yokoyama, Y.; Mahajan, R.L.; Kulacki, F.A.

    1999-08-01

    Results are reported for an experimental and numerical study of forced and mixed convective heat transfer in a liquid-saturated horizontal porous duct. The cross section of the duct has a sudden expansion with a heated region on the lower surface downstream and adjacent to the expansion. Within the framework of Darcy's formulation, the calculated and measured Nusselt numbers for 0.1 < Pe < 100 and 50 < Ra < 500 are in excellent agreement. Further, the calculated Nusselt numbers are very close to those for the bottom-heated flat duct. This finding has important implications for convective heat and mass transfer in geophysical systems and porous matrix heat exchangers. The calculations were also carried out for glass bead-packed beds saturated with water using a non-Darcy formulation. The streamlines in forced convection indicate that, even with non-Darcy effects included, recirculation is not observed downstream of the expansion, and the heat transfer rate is decreased, but only marginally.

  17. Experimental analysis of direct-expansion ground-coupled heat pump systems

    NASA Astrophysics Data System (ADS)

    Mei, V. C.; Baxter, V. D.

    1991-09-01

    Direct-expansion ground-coil-coupled (DXGC) heat pump systems have certain energy efficiency advantages over conventional ground-coupled heat pump (GCHP) systems. Principal among these advantages are that the secondary heat transfer fluid heat exchanger and circulating pump are eliminated. While the DXGC concept can produce higher efficiencies, it also produces more system design and environmental problems (e.g., compressor starting, oil return, possible ground pollution, and more refrigerant charging). Furthermore, general design guidelines for DXGC systems are not well documented. A two-pronged approach was adopted for this study: (1) a literature survey, and (2) a laboratory study of a DXGC heat pump system with R-22 as the refrigerant, for both heating and cooling mode tests done in parallel and series tube connections. The results of each task are described in this paper. A set of general design guidelines was derived from the test results and is also presented.

  18. Pressurized heat treatment of glass-ceramic to control thermal expansion

    DOEpatents

    Kramer, Daniel P.

    1985-01-01

    A method of producing a glass-ceramic having a specified thermal expansion value is disclosed. The method includes the step of pressurizing the parent glass material to a predetermined pressure during heat treatment so that the glass-ceramic produced has a specified thermal expansion value. Preferably, the glass-ceramic material is isostatically pressed. A method for forming a strong glass-ceramic to metal seal is also disclosed in which the glass-ceramic is fabricated to have a thermal expansion value equal to that of the metal. The determination of the thermal expansion value of a parent glass material placed in a high-temperature environment is also used to determine the pressure in the environment.

  19. Bergman kernel, balanced metrics and black holes

    NASA Astrophysics Data System (ADS)

    Klevtsov, Semyon

    In this thesis we explore the connections between the Kahler geometry and Landau levels on compact manifolds. We rederive the expansion of the Bergman kernel on Kahler manifolds developed by Tian, Yau, Zelditch, Lu and Catlin, using path integral and perturbation theory. The physics interpretation of this result is as an expansion of the projector of wavefunctions on the lowest Landau level, in the special case that the magnetic field is proportional to the Kahler form. This is a geometric expansion, somewhat similar to the DeWitt-Seeley-Gilkey short time expansion for the heat kernel, but in this case describing the long time limit, without depending on supersymmetry. We also generalize this expansion to supersymmetric quantum mechanics and more general magnetic fields, and explore its applications. These include the quantum Hall effect in curved space, the balanced metrics and Kahler gravity. In particular, we conjecture that for a probe in a BPS black hole in type II strings compactified on Calabi-Yau manifolds, the moduli space metric is the balanced metric.

  20. Debye temperature, thermal expansion, and heat capacity of TcC up to 100 GPa

    SciTech Connect

    Song, T.; Ma, Q.; Tian, J.H.; Liu, X.B.; Ouyang, Y.H.; Zhang, C.L.; Su, W.F.

    2015-01-15

    Highlights: • A number of thermodynamic properties of rocksalt TcC are investigated for the first time. • The quasi-harmonic Debye model is applied to take thermal effects into account. • The pressure and temperature reach up to about 100 GPa and 3000 K, respectively. Abstract: The Debye temperature, thermal expansion coefficient, and heat capacity of ideal stoichiometric TcC in the rocksalt structure have been studied systematically using the ab initio plane-wave pseudopotential density functional theory method within the generalized gradient approximation. Through the quasi-harmonic Debye model, in which phononic effects are considered, the dependences of the Debye temperature, thermal expansion coefficient, constant-volume heat capacity, and constant-pressure heat capacity on pressure and temperature are successfully predicted. All the thermodynamic properties of TcC in the rocksalt phase have been predicted over the entire temperature range from 300 to 3000 K and at pressures up to 100 GPa.

  1. Green Synthesis of Silicon Carbide Nanowhiskers by Microwave Heating of Blends of Palm Kernel Shell and Silica

    NASA Astrophysics Data System (ADS)

    Voon, C. H.; Lim, B. Y.; Gopinath, S. C. B.; Tan, H. S.; Tony, V. C. S.; Arshad, M. K. Md; Foo, K. L.; Hashim, U.

    2016-11-01

    Silicon carbide nanomaterials, especially silicon carbide nanowhiskers (SiCNWs), are known for excellent properties such as high thermal stability, good chemical inertness and excellent electronic properties. In this paper, a green synthesis of SiCNWs by microwave heating of blends of palm kernel shell (PKS) and silica is presented. The effect of the PKS-to-silica ratio on the synthesis process is also studied and reported. Blends of PKS and silica in different ratios were mixed homogeneously in an ultrasonic bath for 2 hours using ethanol as the liquid medium. The blends were then dried on a hotplate to remove the ethanol and compressed into pellet form. Synthesis was conducted in a 2.45 GHz multimode cavity at 1400 °C for 40 minutes. X-ray diffraction revealed that β-SiC was detected in samples synthesized from blends with PKS-to-silica ratios of 5:1 and 7:1. FESEM images also show that SiCNWs with an average diameter of 70 nm were successfully formed from blends with PKS-to-silica ratios of 5:1 and 7:1. A vapour-liquid-solid (VLS) mechanism was proposed to explain the growth of SiCNWs from blends of PKS and silica.

  2. Negative thermal expansion and anomalies of heat capacity of LuB50 at low temperatures.

    PubMed

    Novikov, V V; Zhemoedov, N A; Matovnikov, A V; Mitroshenkov, N V; Kuznetsov, S V; Bud'ko, S L

    2015-09-28

    Heat capacity and thermal expansion of LuB50 boride were experimentally studied in the 2-300 K temperature range. The data reveal an anomalous contribution to the heat capacity at low temperatures, whose value is proportional to the first power of temperature. This anomaly in the heat capacity is caused by disorder in the LuB50 crystalline structure and can be described in the soft atomic potential (SAP) model. The parameters of the approximation were determined. The temperature dependence of the LuB50 heat capacity over the whole temperature range was approximated by the sum of the SAP contribution, a Debye component and two Einstein components. The parameters of the SAP contribution for LuB50 were compared with the corresponding values for LuB66, which was studied earlier. Negative thermal expansion at low temperatures was experimentally observed for LuB50. Analysis of the experimental temperature dependence of the Grüneisen parameter of LuB50 suggests that the low-frequency oscillations described in the SAP model are responsible for the negative thermal expansion. Thus, the glasslike character of the behavior of the LuB50 thermal characteristics at low temperatures was confirmed.

  3. Negative thermal expansion and anomalies of heat capacity of LuB50 at low temperatures

    DOE PAGES

    Novikov, V. V.; Zhemoedov, N. A.; Matovnikov, A. V.; ...

    2015-07-20

    Heat capacity and thermal expansion of LuB50 boride were experimentally studied in the 2–300 K temperature range. The data reveal an anomalous contribution to the heat capacity at low temperatures, whose value is proportional to the first power of temperature. This anomaly in the heat capacity is caused by disorder in the LuB50 crystalline structure and can be described in the soft atomic potential (SAP) model. The parameters of the approximation were determined. The temperature dependence of the LuB50 heat capacity over the whole temperature range was approximated by the sum of the SAP contribution, a Debye component and two Einstein components. The parameters of the SAP contribution for LuB50 were compared with the corresponding values for LuB66, which was studied earlier. Negative thermal expansion at low temperatures was experimentally observed for LuB50. Analysis of the experimental temperature dependence of the Grüneisen parameter of LuB50 suggests that the low-frequency oscillations described in the SAP model are responsible for the negative thermal expansion. As a result, the glasslike character of the behavior of the LuB50 thermal characteristics at low temperatures was confirmed.

  4. Was plio-pleistocene hominid brain expansion a pleiotropic effect of adaptation to heat stress?

    PubMed

    Eckhardt, R B

    1987-09-01

    This paper examines the hypothesis (Fiałkowski 1978, 1986) that hominid brain expansion was largely a side effect of an evolutionary response to increased heat stress under conditions of primitive hunting, with reduction in reliability of brain components due to a rise in temperature having been offset by increases in the number of cerebral sub-units and interconnections among them. Fiałkowski's hypothesis is shown here to be based on measurements that are seriously inaccurate, and the explanatory mechanism to be contradicted by existing data on response to heat stress by smaller-brained nonhuman primates.

  5. Correlation dependence of the volumetric thermal expansion coefficient of metallic aluminum on its heat capacity

    NASA Astrophysics Data System (ADS)

    Bodryakov, V. Yu.; Bykov, A. A.

    2016-05-01

    The correlation between the volumetric thermal expansion coefficient β(T) and the heat capacity C(T) of aluminum is considered in detail. It is shown that a clear correlation is observed in a significantly wider temperature range, up to the melting temperature of the metal, along with the low-temperature range where it is linear. A significant deviation of the dependence β(C) from the low-temperature linear behavior is observed up to the point where the heat capacity reaches the classical Dulong-Petit limit of 3R (R is the universal gas constant).
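    The linear β–C correlation discussed in this record is usually rationalized through the Grüneisen relation, a standard thermodynamic identity quoted here for context rather than taken from the abstract:

```latex
\beta(T) \;=\; \frac{\gamma \, C_V(T)}{K_T \, V_m}
```

    where γ is the Grüneisen parameter, K_T the isothermal bulk modulus and V_m the molar volume. When γ and K_T vary slowly with temperature, β is directly proportional to C_V, which is exactly the low-temperature linearity the abstract reports; the deviation near the Dulong-Petit limit reflects the temperature dependence of these "constants".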

  6. Heat capacity and thermal expansion of icosahedral lutetium boride LuB66

    SciTech Connect

    Novikov, V V; Avdashchenko, D V; Matovnikov, A V; Mitroshenkov, N V; Bud’ko, S L

    2014-01-07

    The experimental values of heat capacity and thermal expansion for lutetium boride LuB66 in the temperature range of 2-300 K were analysed in the Debye-Einstein approximation. It was found that the vibrations of the boron sub-lattice can be described within the Debye model with high characteristic temperatures, while the low-frequency vibrations of weakly bound metal atoms are described by the Einstein model.
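    The Debye-Einstein decomposition used in this analysis can be sketched numerically as follows (a generic illustration; the characteristic temperatures below are placeholders, not fitted values from the paper):

```python
import math

R = 8.314462618  # J/(mol*K), universal gas constant

def debye_cv(T, theta_D, n=1.0):
    """Debye-model molar heat capacity (J/mol/K) for n atoms per formula
    unit, via a simple Riemann-sum evaluation of the Debye integral."""
    if T <= 0.0:
        return 0.0
    x_max = theta_D / T
    steps = 20000
    dx = x_max / steps
    integral = 0.0
    for i in range(1, steps + 1):
        x = i * dx
        integral += x**4 * math.exp(x) / math.expm1(x) ** 2
    integral *= dx
    return 9.0 * n * R * (T / theta_D) ** 3 * integral

def einstein_cv(T, theta_E, n=1.0):
    """Einstein-model molar heat capacity (J/mol/K) for n oscillators."""
    x = theta_E / T
    return 3.0 * n * R * x**2 * math.exp(x) / math.expm1(x) ** 2

# Both models approach the Dulong-Petit limit 3nR at high temperature
# (the characteristic temperatures here are chosen only for illustration):
print(debye_cv(3000.0, 300.0))     # close to 3R ≈ 24.94 J/mol/K
print(einstein_cv(3000.0, 200.0))  # close to 3R as well
```

    In a fit like the one described above, a stiff boron framework contributes a Debye term with a high θ_D, while loosely bound metal atoms add low-θ_E Einstein terms; the total model heat capacity is simply the sum of such components.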

  7. High efficiency, quasi-instantaneous steam expansion device utilizing fossil or nuclear fuel as the heat source

    SciTech Connect

    Claudio Filippone, Ph.D.

    1999-06-01

    Thermal-hydraulic analysis of a specially designed steam expansion device (heat cavity) was performed to prove the feasibility of steam expansions at elevated rates for power generation with higher efficiency. The steam expansion process inside the heat cavity greatly depends on the gap within which the steam expands and accelerates. This system can be seen as a miniaturized boiler integrated inside the expander where steam (or the proper fluid) is generated almost instantaneously prior to its expansion in the work-producing unit. Relatively cold water is pulsed inside the heat cavity, where the heat transferred causes the water to flash to steam, thereby increasing its specific volume by a large factor. The gap inside the heat cavity forms a special nozzle-shaped system in which the fluid expands rapidly, accelerating toward the system outlet. The expansion phenomenon is the cause of ever-increasing fluid speed inside the cavity system, eliminating the need for moving parts (pumps, valves, etc.). In fact, the subsequent velocity induced by the sudden fluid expansion causes turbulent conditions, forcing accelerating Reynolds and Nusselt numbers which, in turn, increase the convective heat transfer coefficient. When the combustion of fossil fuels constitutes the heat source, the heat cavity concept can be applied directly inside the stator of conventional turbines, thereby greatly increasing the overall system efficiency.
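    To put "increasing its specific volume by a large factor" in perspective, a rough estimate can be made with commonly tabulated saturation values at atmospheric pressure (illustrative figures from standard steam tables, not from the report):

```python
# Approximate specific volumes at 100 °C and 1 atm (standard steam tables):
v_water = 0.001043   # m^3/kg, saturated liquid water
v_steam = 1.673      # m^3/kg, saturated steam

# Flashing liquid water to atmospheric steam multiplies the volume by:
expansion_factor = v_steam / v_water
print(f"flash expansion factor ~ {expansion_factor:.0f}x")
```

    A volume increase on the order of a thousandfold is what drives the rapid acceleration of the fluid through the nozzle-shaped gap described in this record.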

  8. HIGH EFFICIENCY, QUASI-INSTANTANEOUS STEAM EXPANSION DEVICE UTILIZING FOSSIL OR NUCLEAR FUEL AS THE HEAT SOURCE

    SciTech Connect

    Claudio Filippone, Ph.D.

    1999-06-01

    Thermal-hydraulic analysis of a specially designed steam expansion device (heat cavity) was performed to prove the feasibility of steam expansions at elevated rates for power generation with higher efficiency. The steam expansion process inside the heat cavity greatly depends on the gap within which the steam expands and accelerates. This system can be seen as a miniaturized boiler integrated inside the expander where steam (or the proper fluid) is generated almost instantaneously prior to its expansion in the work-producing unit. Relatively cold water is pulsed inside the heat cavity, where the heat transferred causes the water to flash to steam, thereby increasing its specific volume by a large factor. The gap inside the heat cavity forms a special nozzle-shaped system in which the fluid expands rapidly, accelerating toward the system outlet. The expansion phenomenon is the cause of ever-increasing fluid speed inside the cavity system, eliminating the need for moving parts (pumps, valves, etc.). In fact, the subsequent velocity induced by the sudden fluid expansion causes turbulent conditions, forcing accelerating Reynolds and Nusselt numbers which, in turn, increase the convective heat transfer coefficient. When the combustion of fossil fuels constitutes the heat source, the heat cavity concept can be applied directly inside the stator of conventional turbines, thereby greatly increasing the overall system efficiency.

  9. Heat Transfer and Fluid Dynamics Measurements in the Expansion Space of a Stirling Cycle Engine

    NASA Technical Reports Server (NTRS)

    Jiang, Nan; Simon, Terrence W.

    2006-01-01

    The heater (or acceptor) of a Stirling engine, where most of the thermal energy is accepted into the engine by heat transfer, is the hottest part of the engine. Almost as hot is the adjacent expansion space of the engine. In the expansion space, the flow is oscillatory, impinging on a two-dimensional concavely-curved surface. Knowing the heat transfer on the inside surface of the engine head is critical to the engine design for efficiency and reliability. However, the flow in this region is not well understood and support is required to develop the CFD codes needed to design modern Stirling engines of high efficiency and power output. The present project is to experimentally investigate the flow and heat transfer in the heater head region. Flow fields and heat transfer coefficients are measured to characterize the oscillatory flow as well as to supply experimental validation for the CFD Stirling engine design codes. Presented also is a discussion of how these results might be used for heater head and acceptor region design calculations.

  10. The effects of volcanic eruptions on simulated ocean heat content and thermal expansion

    NASA Astrophysics Data System (ADS)

    Gleckler, P.; Achutarao, K.; Barnett, T.; Gregory, J.; Pierce, D.; Santer, B.; Taylor, K.; Wigley, T.

    2006-12-01

    We examine the ocean heat content in a recent suite of coupled ocean-atmosphere model simulations of the 20th Century. Our results suggest that 20th Century increases in ocean heat content and sea-level (via thermal expansion) were substantially reduced by the 1883 eruption of Krakatoa. The volcanically-induced cooling of the ocean surface is subducted into deeper ocean layers, where it persists for decades. Temporary reductions in ocean heat content associated with the comparable eruptions of El Chichon (1982) and Pinatubo (1991) were much shorter lived because they occurred relative to a non-stationary background of large, anthropogenically-forced ocean warming. To understand the response of these simulations to volcanic loadings, we focus on multiple realizations of the 20th Century experiment with three models (NCAR CCSM3, GFDL 2.0, and GISS HYCOM). By comparing these runs to control simulations of each model, we track the three dimensional oceanic response to Krakatoa using S/N analysis. Inter-model differences in the oceanic thermal response to Krakatoa are large and arise from differences in external forcing, model physics, and experimental design. Our results suggest that inclusion of the effects of Krakatoa (and perhaps even earlier eruptions) is important for reliable simulation of 20th century ocean heat uptake and thermal expansion. Systematic experimentation will be required to quantify the relative importance of these factors.

  11. Combining Lactic Acid Spray with Near-Infrared Radiation Heating To Inactivate Salmonella enterica Serovar Enteritidis on Almond and Pine Nut Kernels.

    PubMed

    Ha, Jae-Won; Kang, Dong-Hyun

    2015-07-01

    The aim of this study was to investigate the efficacy of near-infrared radiation (NIR) heating combined with lactic acid (LA) sprays for inactivating Salmonella enterica serovar Enteritidis on almond and pine nut kernels and to elucidate the mechanisms of the lethal effect of the NIR-LA combined treatment. Also, the effect of the combination treatment on product quality was determined. Separately prepared S. Enteritidis phage type (PT) 30 and non-PT 30 S. Enteritidis cocktails were inoculated onto almond and pine nut kernels, respectively, followed by treatments with NIR or 2% LA spray alone, NIR with distilled water spray (NIR-DW), and NIR with 2% LA spray (NIR-LA). Although surface temperatures of nuts treated with NIR were higher than those subjected to NIR-DW or NIR-LA treatment, more S. Enteritidis survived after NIR treatment alone. The effectiveness of NIR-DW and NIR-LA was similar, but significantly more sublethally injured cells were recovered from NIR-DW-treated samples. We confirmed that the enhanced bactericidal effect of the NIR-LA combination may not be attributable to cell membrane damage per se. NIR heat treatment might allow S. Enteritidis cells to become permeable to applied LA solution. The NIR-LA treatment (5 min) did not significantly (P > 0.05) cause changes in the lipid peroxidation parameters, total phenolic contents, color values, moisture contents, and sensory attributes of nut kernels. Given the results of the present study, NIR-LA treatment may be a potential intervention for controlling food-borne pathogens on nut kernel products.

  12. Combining Lactic Acid Spray with Near-Infrared Radiation Heating To Inactivate Salmonella enterica Serovar Enteritidis on Almond and Pine Nut Kernels

    PubMed Central

    Ha, Jae-Won

    2015-01-01

    The aim of this study was to investigate the efficacy of near-infrared radiation (NIR) heating combined with lactic acid (LA) sprays for inactivating Salmonella enterica serovar Enteritidis on almond and pine nut kernels and to elucidate the mechanisms of the lethal effect of the NIR-LA combined treatment. Also, the effect of the combination treatment on product quality was determined. Separately prepared S. Enteritidis phage type (PT) 30 and non-PT 30 S. Enteritidis cocktails were inoculated onto almond and pine nut kernels, respectively, followed by treatments with NIR or 2% LA spray alone, NIR with distilled water spray (NIR-DW), and NIR with 2% LA spray (NIR-LA). Although surface temperatures of nuts treated with NIR were higher than those subjected to NIR-DW or NIR-LA treatment, more S. Enteritidis survived after NIR treatment alone. The effectiveness of NIR-DW and NIR-LA was similar, but significantly more sublethally injured cells were recovered from NIR-DW-treated samples. We confirmed that the enhanced bactericidal effect of the NIR-LA combination may not be attributable to cell membrane damage per se. NIR heat treatment might allow S. Enteritidis cells to become permeable to applied LA solution. The NIR-LA treatment (5 min) did not significantly (P > 0.05) cause changes in the lipid peroxidation parameters, total phenolic contents, color values, moisture contents, and sensory attributes of nut kernels. Given the results of the present study, NIR-LA treatment may be a potential intervention for controlling food-borne pathogens on nut kernel products. PMID:25911473

  13. Vocational-Technical Physics Project. Thermometers: I. Temperature and Heat, II. Expansion Thermometers, III. Electrical Thermometers. Field Test Edition.

    ERIC Educational Resources Information Center

    Forsyth Technical Inst., Winston-Salem, NC.

    This vocational physics individualized student instructional module on thermometers consists of the three units: Temperature and heat, expansion thermometers, and electrical thermometers. Designed with a laboratory orientation, experiments are included on linear expansion; making a bimetallic thermometer, a liquid-in-gas thermometer, and a gas…

  14. Thermal expansion, heat capacity and magnetostriction of RAl3 (R = Tm, Yb, Lu) single crystals

    SciTech Connect

    Bud'ko, S.; Frenerick, J.; Mun, E.; Canfield, P.; Schmiedeshoff, G.

    2007-12-13

    We present thermal expansion and longitudinal magnetostriction data for cubic RAl3 (R = Tm, Yb, Lu) single crystals. The thermal expansion coefficient for YbAl3 is consistent with an intermediate valence of the Yb ion, whereas the data for TmAl3 show crystal electric field contributions and have strong magnetic field dependences. de Haas-van Alphen-like oscillations were observed in the magnetostriction data for YbAl3 and LuAl3; several new extremal orbits were measured and their effective masses were estimated. Specific heat data taken at 0 and 140 kOe for both LuAl3 and TmAl3 for T ≤ 200 K allow for the determination of a crystal electric field splitting scheme for TmAl3.

  15. Krakatoa lives: The effect of volcanic eruptions on ocean heat content and thermal expansion

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.; AchutaRao, K.; Gregory, J. M.; Santer, B. D.; Taylor, K. E.; Wigley, T. M. L.

    2006-09-01

    A suite of climate model experiments indicates that 20th Century increases in ocean heat content and sea-level (via thermal expansion) were substantially reduced by the 1883 eruption of Krakatoa. The volcanically-induced cooling of the ocean surface is subducted into deeper ocean layers, where it persists for decades. Temporary reductions in ocean heat content associated with the comparable eruptions of El Chichón (1982) and Pinatubo (1991) were much shorter lived because they occurred relative to a non-stationary background of large, anthropogenically-forced ocean warming. Our results suggest that inclusion of the effects of Krakatoa (and perhaps even earlier eruptions) is important for reliable simulation of 20th century ocean heat uptake and thermal expansion. Inter-model differences in the oceanic thermal response to Krakatoa are large and arise from differences in external forcing, model physics, and experimental design. Systematic experimentation is required to quantify the relative importance of these factors. The next generation of historical forcing experiments may require more careful treatment of pre-industrial volcanic aerosol loadings.

  16. Effects of compression and expansion ramp fuel injector configuration on scramjet combustion and heat transfer

    NASA Technical Reports Server (NTRS)

    Stouffer, Scott D.; Baker, N. R.; Capriotti, D. P.; Northam, G. B.

    1993-01-01

    A scramjet combustor with four wall-ramp injectors containing Mach-1.7 fuel jets in the base of the ramps was investigated experimentally. During the test program, two swept ramp injector designs were evaluated. One swept-ramp model had 10-deg compression-ramps and the other had 10-deg expansion cavities between flush wall ramps. The scramjet combustor model was instrumented with pressure taps and heat-flux gages. The pressure measurements indicated that both injector configurations were effective in promoting mixing and combustion. Autoignition occurred for the compression-ramp injectors, and the fuel began to burn immediately downstream of the injectors. In tests of the expansion ramps, a pilot was required to ignite the fuel, and the fuel did not burn for a distance of at least two gaps downstream of the injectors. Once initiated, combustion was rapid in this configuration. Heat transfer measurements showed that the heat flux differed greatly both across the width of the combustor and along the length of the combustor.

  17. Dependence of divertor heat flux widths on heating power, flux expansion, and plasma current in the NSTX

    SciTech Connect

    Maingi, Rajesh; Soukhanovskii, V. A.; Ahn, J.W.

    2011-01-01

    We report the dependence of the lower divertor surface heat flux profiles, measured with infrared thermography and mapped magnetically to the mid-plane, on loss power into the scrape-off layer (P{sub LOSS}), plasma current (I{sub p}), and magnetic flux expansion (f{sub exp}), as well as initial results with lithium wall conditioning in NSTX. Here we extend previous studies [R. Maingi et al., J. Nucl. Mater. 363-365 (2007) 196-200] to higher triangularity (~0.7) and higher I{sub p} {le} 1.2 MA. First, we note that the heat flux width mapped to the mid-plane, {lambda}{sub q}{sup mid}, is largely independent of P{sub LOSS} for P{sub LOSS} {ge} 4 MW. {lambda}{sub q}{sup mid} is also found to be relatively independent of f{sub exp}; the peak heat flux is strongly reduced as f{sub exp} is increased, as expected. Finally, {lambda}{sub q}{sup mid} is shown to contract strongly with increasing I{sub p}, scaling as {lambda}{sub q}{sup mid} proportional to I{sub p}{sup -1.6}, with a peak divertor heat flux of q{sub div,peak} ~ 15 MW/m{sup 2} when I{sub p} = 1.2 MA and P{sub LOSS} ~ 6 MW. These relationships are then used to predict the divertor heat flux for the planned NSTX-Upgrade, with heating power between 10 and 15 MW, B{sub t} = 1.0 T and I{sub p} = 2.0 MA for 5 s.
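    The reported scaling lends itself to a quick extrapolation. The sketch below assumes, purely for illustration, that the peak divertor heat flux scales as q_peak ∝ P_LOSS · I_p^1.6 (a narrowing heat flux width at fixed flux expansion and profile shape), anchored to the reported point of 15 MW/m^2 at 6 MW and 1.2 MA; the function name and the fixed-f_exp assumption are mine, not the paper's.

```python
def predict_peak_heat_flux(p_loss_mw, i_p_ma,
                           q_ref=15.0, p_ref=6.0, i_ref=1.2):
    """Illustrative extrapolation: q_peak ~ P_LOSS * I_p**1.6,
    anchored at the reported NSTX point (15 MW/m^2 at 6 MW, 1.2 MA).
    Assumes fixed flux expansion and unchanged profile shape."""
    return q_ref * (p_loss_mw / p_ref) * (i_p_ma / i_ref) ** 1.6

# NSTX-Upgrade-like parameters: 15 MW of heating power at I_p = 2.0 MA
q_upgrade = predict_peak_heat_flux(15.0, 2.0)
```

    An estimate of this kind makes clear why additional flux expansion or other mitigation becomes important at upgrade parameters.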

  18. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.

  19. Akermanite: phase transitions in heat capacity and thermal expansion, and revised thermodynamic data.

    USGS Publications Warehouse

    Hemingway, B.S.; Evans, H.T.; Nord, G.L.; Haselton, H.T.; Robie, R.A.; McGee, J.J.

    1986-01-01

    A small but sharp anomaly in the heat capacity of akermanite at 357.9 K, and a discontinuity in its thermal expansion at 693 K, as determined by XRD, have been found. The enthalpy and entropy assigned to the heat-capacity anomaly, for the purpose of tabulation, are 679 J/mol and 1.9 J/(mol.K), respectively. They were determined from the difference between the measured values of the heat capacity in the T interval 320-365 K and that obtained from an equation which fits the heat-capacity and heat-content data for akermanite from 290 to 1731 K. Heat-capacity measurements are reported for the T range from 9 to 995 K. The entropy and enthalpy of formation of akermanite at 298.15 K and 1 bar are 212.5 + or - 0.4 J/(mol.K) and -3864.5 + or - 4.0 kJ/mol, respectively. Weak satellite reflections have been observed in hk0 single-crystal X-ray precession photographs and electron-diffraction patterns of this material at room T. With in situ heating by TEM, the satellite reflections decreased significantly in intensity above 358 K and disappeared at about 580 K and, on cooling, reappeared. These observations suggest that the anomalies in the thermal behaviour of akermanite are associated with local displacements of Ca ions from the mirror plane (space group P421m) and accompanying distortion of the MgSi2O7 framework.-L.C.C.

  20. Diagnosis of Ultrafast Laser-Heated Metal Surfaces and Plasma Expansion with Absolute Displacement Interferometry

    NASA Astrophysics Data System (ADS)

    Rodriguez, G.; Clarke, S. A.; Taylor, A. J.; Forsman, A.

    2004-07-01

    We report on the development of a novel technique to measure the critical surface displacement in intense, ultrashort, laser-solid target experiments. Determination of the critical surface position is important for understanding near-solid-density plasma dynamics and transport in warm dense matter systems, and for diagnosing short-scale-length plasma expansion and hydrodynamic surface motion from short-pulse, laser-heated solid targets. Instead of inferring critical surface motion from spectral power shifts using a time-delayed probe pulse or from phase shifts using ultrafast pump-probe frequency domain interferometry (FDI), this technique directly measures surface displacement using a single ultrafast laser heating pulse. Our technique is based on an application of a Michelson stellar interferometer to microscopic rather than stellar scales, and we report plasma scale-length motion as small as 10 nm. We will present results for the motion of plasmas generated from several target materials (Au, Al, Au on CH plastic) over a laser pulse intensity range from 10^11 to 10^16 W/cm^2. Varying both the pulse duration and the pulse energy explores the dependence of the expansion mechanism on the energy deposited and on the peak intensity. Comparisons with hydrocodes reveal the applicability of hydrodynamic models.

  1. Are heat waves susceptible to mitigate the expansion of a species progressing with global warming?

    PubMed Central

    Robinet, Christelle; Rousselet, Jérôme; Pineau, Patrick; Miard, Florie; Roques, Alain

    2013-01-01

    A number of organisms, especially insects, are extending their ranges in response to the trend of warmer temperatures. However, the effects of more frequent climatic anomalies on these species are not clearly known. The pine processionary moth, Thaumetopoea pityocampa, is a forest pest that is currently extending its geographical distribution in Europe in response to climate warming. However, its population density decreased markedly in its northern expansion range (near Paris, France) in the year following the 2003 heat wave. In this study, we tested whether the 2003 heat wave could have killed a large proportion of egg masses. First, the local heat wave intensity was determined. Then, an outdoor experiment was conducted to measure the deviation between the temperatures recorded by weather stations and those observed within sun-exposed egg masses. A second experiment was conducted under laboratory conditions to simulate heat wave conditions (with night/day temperatures of 20/32°C and 20/40°C, compared to the control treatment of 13/20°C) and measure the potential effects of this heat wave on egg masses. No effects were noticed on egg development. Larvae hatched from these egg masses were then reared under mild conditions until the third instar, and no delayed effects on larval development were found. Rather than the eggs, the 2003 heat wave had probably affected, directly or indirectly, the young larvae that had already hatched when it occurred. Our results suggest that the effects of extreme climatic anomalies occurring over narrow time windows are difficult to determine because they strongly depend on the life stage of the species exposed to these anomalies. However, these effects could potentially reduce or enhance the average warming effects. As extreme weather conditions are predicted to become more frequent in the future, it is necessary to disentangle the effects of the warming trend from the effects of climatic anomalies when predicting the response of a

  2. High Enthalpy Studies of Capsule Heating in an Expansion Tunnel Facility

    NASA Technical Reports Server (NTRS)

    Dufrene, Aaron; MacLean, Matthew; Holden, Michael

    2012-01-01

    Measurements were made on an Orion heat shield model to demonstrate the capability of the new LENS-XX expansion tunnel facility to make high-quality measurements of heat transfer distributions at flow velocities from 3 km/s (h(sub 0) = 5 MJ/kg) to 8.4 km/s (h(sub 0) = 36 MJ/kg). Thirty-nine heat transfer gauges, including both thin-film and thermocouple instruments, as well as four pressure gauges and high-speed Schlieren, were used to assess the aerothermal environment on the capsule heat shield. Only results from laminar boundary layer runs are reported. A major finding of this test series is that the high-enthalpy, low-density flows displayed surface heating behavior consistent with some finite-rate recombination process occurring on the surface of the model. It is too early to speculate on the nature of the mechanism, but the response of the gauges on the surface seems generally repeatable and consistent over a range of conditions. This result is an important milestone in developing and proving a capability to make measurements in a ground test environment and extrapolate them to flight for conditions with extreme non-equilibrium effects. Additionally, no significant, isolated stagnation point augmentation ("bump") was observed in the tests in this facility. Cases at higher Reynolds number showed the greatest overall increase in heating on the windward side of the model, which may in part be due to small-scale particulate.

  3. Are heat waves susceptible to mitigate the expansion of a species progressing with global warming?

    PubMed

    Robinet, Christelle; Rousselet, Jérôme; Pineau, Patrick; Miard, Florie; Roques, Alain

    2013-09-01

    A number of organisms, especially insects, are extending their ranges in response to the trend of warmer temperatures. However, the effects of more frequent climatic anomalies on these species are not clearly known. The pine processionary moth, Thaumetopoea pityocampa, is a forest pest that is currently extending its geographical distribution in Europe in response to climate warming. However, its population density decreased markedly in its northern expansion range (near Paris, France) in the year following the 2003 heat wave. In this study, we tested whether the 2003 heat wave could have killed a large proportion of egg masses. First, the local heat wave intensity was determined. Then, an outdoor experiment was conducted to measure the deviation between the temperatures recorded by weather stations and those observed within sun-exposed egg masses. A second experiment was conducted under laboratory conditions to simulate heat wave conditions (with night/day temperatures of 20/32°C and 20/40°C, compared to the control treatment of 13/20°C) and measure the potential effects of this heat wave on egg masses. No effects were noticed on egg development. Larvae hatched from these egg masses were then reared under mild conditions until the third instar, and no delayed effects on larval development were found. Rather than the eggs, the 2003 heat wave had probably affected, directly or indirectly, the young larvae that had already hatched when it occurred. Our results suggest that the effects of extreme climatic anomalies occurring over narrow time windows are difficult to determine because they strongly depend on the life stage of the species exposed to these anomalies. However, these effects could potentially reduce or enhance the average warming effects. As extreme weather conditions are predicted to become more frequent in the future, it is necessary to disentangle the effects of the warming trend from the effects of climatic anomalies when predicting the response of a

  4. Boundary-layer computational model for predicting the flow and heat transfer in sudden expansions

    NASA Technical Reports Server (NTRS)

    Lewis, J. P.; Pletcher, R. H.

    1986-01-01

    Fully developed turbulent and laminar flows through symmetric planar and axisymmetric expansions with heat transfer were modeled using a finite-difference discretization of the boundary-layer equations. By using the boundary-layer equations to model separated flow in place of the Navier-Stokes equations, computational effort was reduced permitting turbulence modelling studies to be economically carried out. For laminar flow, the reattachment length was well predicted for Reynolds numbers as low as 20 and the details of the trapped eddy were well predicted for Reynolds numbers above 200. For turbulent flows, the Boussinesq assumption was used to express the Reynolds stresses in terms of a turbulent viscosity. Near-wall algebraic turbulence models based on Prandtl's-mixing-length model and the maximum Reynolds shear stress were compared.

  5. Bergman kernel from the lowest Landau level

    NASA Astrophysics Data System (ADS)

    Klevtsov, S.

    2009-07-01

    We use path integral representation for the density matrix, projected on the lowest Landau level, to generalize the expansion of the Bergman kernel on Kähler manifold to the case of arbitrary magnetic field.

  6. The effects of compressor speed and electronic expansion valve opening on the optimum design of inverter heat pump at various heating loads

    SciTech Connect

    Hwang, Y.; Kim, Y.; Park, J.; Kim, C.

    1999-07-01

    Experiments to determine the optimum operating point of an inverter heat pump were performed by varying compressor speed and expansion valve opening for various heating loads. At indoor temperatures of -5 to 15 °C and outdoor temperatures of -10 to 25 °C, the compressor driving frequency was varied from 10 to 120 Hz and the expansion valve opening from 80 to 200 pulses, while the speeds of the indoor and outdoor fans were fixed. The results show that an optimum combination of compressor driving frequency and expansion valve opening exists once the indoor and outdoor temperatures are fixed, although the operating point shifts according to which factor, capacity, comfort or power saving, is given priority.

  7. Negative thermal expansion and anomalies of heat capacity of LuB50 at low temperatures

    SciTech Connect

    Novikov, V. V.; Zhemoedov, N. A.; Matovnikov, A. V.; Mitroshenkov, N. V.; Kuznetsov, S. V.; Bud'ko, S. L.

    2015-07-20

    Heat capacity and thermal expansion of LuB50 boride were experimentally studied in the 2-300 K temperature range. The data reveal an anomalous contribution to the heat capacity at low temperatures whose magnitude is linear in temperature. This anomaly is caused by disorder in the LuB50 crystalline structure and can be described within the soft atomic potential (SAP) model; the parameters of the approximation were determined. The temperature dependence of the LuB50 heat capacity over the whole temperature range was approximated by the sum of the SAP contribution, a Debye term and two Einstein components. The parameters of the SAP contribution for LuB50 were compared to the corresponding values for LuB66, which was studied earlier. Negative thermal expansion at low temperatures was experimentally observed for LuB50. Analysis of the experimental temperature dependence of the Grüneisen parameter of LuB50 suggests that the low-frequency oscillations described by the SAP model are responsible for the negative thermal expansion. As a result, the glasslike character of the low-temperature thermal behaviour of LuB50 was confirmed.
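    The fitting model described above, a linear (SAP-like) term plus Debye and Einstein contributions, can be sketched numerically. The parameter values below are illustrative placeholders, not the paper's fitted coefficients; the Debye integral is evaluated with a simple trapezoidal rule.

```python
import math

R = 8.314462618  # molar gas constant, J/(mol K)

def debye_cv(t, theta_d, npts=2000):
    """Debye molar heat capacity (per mole of atoms), trapezoidal quadrature
    of 9R (T/theta_D)^3 * integral_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx."""
    x_max = theta_d / t
    h = x_max / npts
    total = 0.0
    for i in range(1, npts + 1):
        x = i * h
        f = x**4 * math.exp(x) / (math.exp(x) - 1.0) ** 2
        total += f if i < npts else 0.5 * f  # half weight at endpoint; f(0) = 0
    return 9.0 * R * (t / theta_d) ** 3 * total * h

def einstein_cv(t, theta_e):
    """Einstein molar heat capacity (per mole of oscillators)."""
    u = theta_e / t
    return 3.0 * R * u**2 * math.exp(u) / (math.exp(u) - 1.0) ** 2

def model_cv(t, a_lin=0.01, theta_d=1100.0, theta_e1=150.0, theta_e2=400.0):
    """Linear 'glasslike' term plus Debye and two Einstein components.
    All parameter values here are made-up placeholders for illustration."""
    return (a_lin * t + debye_cv(t, theta_d)
            + einstein_cv(t, theta_e1) + einstein_cv(t, theta_e2))
```

    A quick sanity check on such a fit is the high-temperature limit: both the Debye and Einstein terms approach the Dulong-Petit value of 3R per mole of oscillators.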

  8. Effect of dynamic and thermal prehistory on aerodynamic characteristics and heat transfer behind a sudden expansion in a round tube

    NASA Astrophysics Data System (ADS)

    Terekhov, V. I.; Bogatko, T. V.

    2017-03-01

    The results of a numerical study of the influence of the thicknesses of the dynamic and thermal boundary layers on turbulent separation and heat transfer in a tube with sudden expansion are presented. The first part of this work studies the influence of the thickness of the dynamic boundary layer, which was varied by changing the length of the stabilization section over the maximal extent possible: from zero to half of the tube diameter. In the second part of the study, the flow before separation was hydrodynamically stabilized, and the thermal layer before the expansion could change its thickness from 0 to D1/2. The Reynolds number was varied in the range Re_D1 = 6.7·10^3 to 1.33·10^5, and the degree of tube expansion remained constant at ER = (D2/D1)^2 = 1.78. A significant effect of the thickness of the separated boundary layer on both the dynamic and thermal characteristics of the flow is shown. In particular, it was found that with an increase in the thickness of the boundary layer the recirculation zone grows and the maximal Nusselt number decreases. Growth of the thermal layer thickness does not affect the hydrodynamic characteristics of the flow after separation, but it does reduce the heat transfer intensity in the separation region and shift the location of maximal heat transfer away from the point of tube expansion. A generalizing dependence for the maximal Nusselt number at various thermal layer thicknesses is given. Comparison with experimental data confirmed the main trends in the behavior of heat and mass transfer processes in separated flows behind a step with different thermal prehistories.

  9. Weighted Bergman kernels and virtual Bergman kernels

    NASA Astrophysics Data System (ADS)

    Roos, Guy

    2005-12-01

    We introduce the notion of "virtual Bergman kernel" and apply it to the computation of the Bergman kernel of "domains inflated by Hermitian balls", in particular when the base domain is a bounded symmetric domain.

  10. Crystalline electric field and lattice contributions to thermodynamic properties of PrGaO3: specific heat and thermal expansion

    NASA Astrophysics Data System (ADS)

    Senyshyn, A.; Schnelle, W.; Vasylechko, L.; Ehrenberg, H.; Berkowski, M.

    2007-04-01

    The low-temperature heat capacity of perovskite-type PrGaO3 has been measured in the temperature range from 2 to 320 K. Thermodynamic standard values at 298.15 K are reported. An initial Debye temperature θD(0) = (480 ± 10) K was determined by fitting the calculated lattice heat capacity. The entropy of the derived Debye temperature functions agrees well with values calculated from thermal displacement parameters and from atomistic simulations. The thermal expansion and the Grüneisen parameter, arising from a coupling of crystal field states of Pr3+ ion and phonon modes at low temperature, were analysed.

  11. The Adiabatic Expansion of Gases and the Determination of Heat Capacity Ratios: A Physical Chemistry Experiment.

    ERIC Educational Resources Information Center

    Moore, William M.

    1984-01-01

    Describes the procedures and equipment for an experiment on the adiabatic expansion of gases suitable for demonstration and discussion in the physical chemistry laboratory. The expansion produced shows how the process can change the temperature of the gas and still return it to a different location on an isotherm. (JN)
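    One classic classroom version of this experiment (the Clément-Desormes method, which may or may not be the variant described in this record) recovers the heat capacity ratio γ = Cp/Cv from three pressure readings: the initial pressure p1, the pressure p2 immediately after a rapid adiabatic expansion, and the pressure p3 after the gas re-equilibrates to the initial temperature at constant volume.

```python
import math

def gamma_from_pressures(p1, p2, p3):
    """Heat capacity ratio from Clement-Desormes pressure readings:
    gamma = ln(p1/p2) / ln(p1/p3)."""
    return math.log(p1 / p2) / math.log(p1 / p3)
```

    For an ideal gas, internally consistent readings reproduce the input ratio exactly; real measurements scatter around it because the expansion is never perfectly adiabatic.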

  12. Differential response of cell-cycle and cell-expansion regulators to heat stress in apple (Malus domestica) fruitlets.

    PubMed

    Flaishman, Moshe A; Peles, Yuval; Dahan, Yardena; Milo-Cochavi, Shira; Frieman, Aviad; Naor, Amos

    2015-04-01

    Temperature is one of the most significant factors affecting physiological and biochemical aspects of fruit development, and current and progressing global warming is expected to change the climate of traditional deciduous fruit tree cultivation regions. In this study, 'Golden Delicious' trees, grown in a controlled environment or a commercial orchard, were exposed to different periods of heat treatment. Early fruitlet development was documented by evaluating cell number, cell size and fruit diameter from 5 to 70 days after full bloom. Normal molecular developmental and growth processes in apple fruitlets were disrupted at daytime air temperatures of 29°C and higher, as reflected in significant temporary declines in cell-production and cell-expansion rates. Expression screening of selected cell-cycle and cell-expansion genes revealed the influence of high temperature on the genetic regulation of apple fruitlet development. Several core cell-cycle and cell-expansion genes were differentially expressed under high temperatures. While expression levels of B-type cyclin-dependent kinases and A- and B-type cyclins declined moderately in response to elevated temperatures, expression of several cell-cycle inhibitors, such as Mdwee1, Mdrbr and Mdkrps, was sharply enhanced as the temperature rose, blocking the cell-cycle cascade at the G1/S and G2/M transition points. Moreover, expression of several expansin genes was associated with high temperatures, making them potentially useful as molecular platforms to enhance cell-expansion processes under high-temperature regimes. Understanding the molecular mechanisms of heat tolerance associated with genes controlling cell cycle and cell expansion may lead to the development of novel strategies for improving apple fruit productivity under global warming.

  13. Non-Markovian expansion in quantum Brownian motion

    NASA Astrophysics Data System (ADS)

    Fraga, Eduardo S.; Krein, Gastão; Palhares, Letícia F.

    2014-01-01

    We consider the non-Markovian Langevin evolution of a dissipative dynamical system in quantum mechanics in the path integral formalism. After discussing the role of the frequency cutoff for the interaction of the system with the heat bath and the kernel and noise correlator that follow from the most common choices, we derive an analytic expansion for the exact non-Markovian dissipation kernel and the corresponding colored noise in the general case that is consistent with the fluctuation-dissipation theorem and incorporates systematically non-local corrections. We illustrate the modifications to results obtained using the traditional (Markovian) Langevin approach in the case of the exponential kernel and analyze the case of the non-Markovian Brownian motion. We present detailed results for the free and the quadratic cases, which can be compared to exact solutions to test the convergence of the method, and discuss potentials of a general nonlinear form.
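    For the exponential-kernel case discussed above, the colored noise consistent with the fluctuation-dissipation theorem is exponentially correlated (Ornstein-Uhlenbeck) noise, which can be sampled exactly on a time grid. The sketch below is a generic illustration of such a noise generator, not the authors' scheme; the parameter names are mine.

```python
import math
import random

def ou_noise(n, dt, tau, sigma, seed=0):
    """Stationary Ornstein-Uhlenbeck sample path with correlation
    <eta(t) eta(t')> = sigma**2 * exp(-|t - t'| / tau),
    using the exact one-step update (no time-discretization error)."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)
    b = sigma * math.sqrt(1.0 - a * a)
    eta = [rng.gauss(0.0, sigma)]  # initial value drawn from the stationary law
    for _ in range(n - 1):
        eta.append(a * eta[-1] + b * rng.gauss(0.0, 1.0))
    return eta
```

    The lag-k autocorrelation of the samples decays as exp(-k·dt/τ), which is exactly the exponential memory kernel; in the τ → 0 limit the noise becomes white and the Markovian Langevin description is recovered.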

  14. Description and initial operating performance of the Langley 6-inch expansion tube using heated helium driver gas

    NASA Technical Reports Server (NTRS)

    Moore, J. A.

    1975-01-01

    A general description of the Langley 6-inch expansion tube is presented along with discussion of the basic components, internal resistance heater, arc-discharge assemblies, instrumentation, and operating procedure. Preliminary results using unheated and resistance-heated helium as the driver gas are presented. The driver-gas pressure ranged from approximately 17 to 59 MPa and its temperature ranged from 300 to 510 K. Interface velocities of approximately 3.8 to 6.7 km/sec were generated between the test gas and the acceleration gas using air as the test gas and helium as the acceleration gas. Test flow quality and comparison of measured and predicted expansion-tube flow quantities are discussed.

  15. Regular expansion solutions for small Peclet number heat or mass transfer in concentrated two-phase particulate systems

    NASA Technical Reports Server (NTRS)

    Yaron, I.

    1974-01-01

    Steady state heat or mass transfer in concentrated ensembles of drops, bubbles or solid spheres in uniform, slow viscous motion, is investigated. Convective effects at small Peclet numbers are taken into account by expanding the nondimensional temperature or concentration in powers of the Peclet number. Uniformly valid solutions are obtained, which reflect the effects of dispersed phase content and rate of internal circulation within the fluid particles. The dependence of the range of Peclet and Reynolds numbers, for which regular expansions are valid, on particle concentration is discussed.

  16. Thermal expansion of UO2+x nuclear fuel rods from a model coupling heat transfer and oxygen diffusion

    SciTech Connect

    Mihaila, Bogden; Zubelewicz, Aleksander; Stan, Marius; Ramirez, Juan

    2008-01-01

    We study the thermal expansion of UO{sub 2+x} nuclear fuel rod in the context of a model coupling heat transfer and oxygen diffusion discussed previously by J.C. Ramirez, M. Stan and P. Cristea [J. Nucl. Mat. 359 (2006) 174]. We report results of simulations performed for steady-state and time-dependent regimes in one-dimensional configurations. A variety of initial- and boundary-value scenarios are considered. We use material properties obtained from previously published correlations or from analysis of previously published data. All simulations were performed using the commercial code COMSOL Multiphysics{sup TM} and are readily extendable to include multidimensional effects.

  17. Loop expansion and the bosonic representation of loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Bianchi, E.; Guglielmon, J.; Hackl, L.; Yokomizo, N.

    2016-10-01

    We introduce a new loop expansion that provides a resolution of the identity in the Hilbert space of loop quantum gravity on a fixed graph. We work in the bosonic representation obtained by the canonical quantization of the spinorial formalism. The resolution of the identity gives a tool for implementing the projection of states in the full bosonic representation onto the space of solutions to the Gauss and area matching constraints of loop quantum gravity. This procedure is particularly efficient in the semiclassical regime, leading to explicit expressions for the loop expansions of coherent, heat kernel and squeezed states.

  18. Anomalous components of supercooled water expansivity, compressibility, and heat capacity (Cp and Cv) from binary formamide+water solution studies

    NASA Astrophysics Data System (ADS)

    Oguni, M.; Angell, C. A.

    1983-06-01

    Recently reported heat capacity studies of N2H4+H2O and H2O2+H2O solutions, from which an anomalous component of the pure water behavior could be extracted by extrapolation, have been extended to the system NH2CHO+H2O, which has the chemical stability needed to permit expansivity and compressibility measurements as well. Data accurate to ±2% for each of these properties, as well as for the heat capacity, are reported. The expansivity data support almost quantitatively an earlier speculative separation of the bulk and supercooled water expansivity into a "normal" (or "background") part and an "anomalous" part, the latter fitting a critical law α_anom = A(T/T_s - 1)^(-γ) with exponent γ = 1.0. According to the present analysis, the anomalous part of the expansivity, which is always negative, yields T_s in the range 225-228 K and γ in the range 1.28-1.0, depending on the choice of background extrapolation function. The normal contribution to the heat capacity obtained from the present work is intermediate in character between those from the previous two systems and leads to similar equation parameters. The normal contribution to the compressibility, on the other hand, is very different from that speculated earlier by Kanno and Angell and approximately verified by Conde et al. for ethanol-water solutions. The background component from the present analysis is ~50% larger, with the result that the anomalous component, at least when values above 0 °C are included in the analysis, cannot be sensibly fitted to the critical-point equation. The possible origin and interest of these differences are discussed. Combination of the new thermodynamic data permits estimation of Cv values for the solutions and, by extrapolation, a normal Cv component for water. The anomalous component of Cv for pure water, obtained by difference, has the form of a Schottky anomaly, in contrast with the corresponding Cp component, which diverges.
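    The critical law quoted above is simple enough to evaluate directly. In the sketch below the amplitude and the singular temperature T_s = 227 K are illustrative placeholders (the record brackets T_s between 225 and 228 K); with γ = 1 the product α_anom · (T/T_s − 1) is constant, which makes a convenient sanity check.

```python
def alpha_anom(t, amp=-1.0e-4, t_s=227.0, gamma=1.0):
    """Anomalous expansivity term A * (T/T_s - 1)**(-gamma); amp < 0 because
    the anomalous component is always negative. amp and t_s are placeholders,
    not fitted values from the study."""
    eps = t / t_s - 1.0
    if eps <= 0.0:
        raise ValueError("critical law applies only for T > T_s")
    return amp * eps ** (-gamma)
```

    The magnitude grows without bound as T approaches T_s from above, mimicking the apparent divergence of the expansivity in supercooled water.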

  19. Heat capacity, thermal expansion and heat transport in the Han Blue (BaCuSi4O10): Observation of structural phase transitions

    NASA Astrophysics Data System (ADS)

    Masunaga, S. H.; Rebello, A.; Schye, A. T.; Prasai, N.; Neumeier, J. J.; Cohn, J. L.

    2015-10-01

    Structural phase transitions at 87 K and 103 K are reported for single-crystalline Han Blue (BaCuSi4O10) by means of high-resolution thermal-expansion, thermal conductivity and heat capacity measurements. The phase transition at 103 K results in differing lengths of the a and b lattice parameters, and thus a lowering of the crystallographic symmetry. Negative thermal-expansion coefficients are observed perpendicular to the c-axis over a wide temperature range (108 K < T < 350 K). The thermal conductivity is small and decreases with temperature, both of which suggest strong scattering of heat-carrying phonons. The principal Grüneisen parameters within the plane and perpendicular to it were determined to be γ1 = -1.09 and γ3 = 1.06 at room temperature; the bulk Grüneisen parameter is γ = 0.10. The results are consistent with the presence of low-energy vibrations associated with the collective motions of CuO4 and Si4O10 polyhedral subunits.

  20. Compressibility, thermal expansion coefficient and heat capacity of CH4 and CO2 hydrate mixtures using molecular dynamics simulations.

    PubMed

    Ning, F L; Glavatskiy, K; Ji, Z; Kjelstrup, S; Vlugt, T J H

    2015-01-28

    Understanding the thermal and mechanical properties of CH4 and CO2 hydrates is essential for the replacement of CH4 with CO2 in natural hydrate deposits as well as for CO2 sequestration and storage. In this work, we present the isothermal compressibility, isobaric thermal expansion coefficient and specific heat capacity of fully occupied single-crystal sI-CH4 hydrates, CO2 hydrates and hydrates of their mixture using molecular dynamics simulations. Eight rigid/nonpolarisable water interaction models and three CH4 and CO2 interaction potentials were selected to examine the atomic interactions in the sI hydrate structure. The TIP4P/2005 water model combined with the DACNIS united-atom CH4 potential and the TraPPE rigid CO2 potential were found to be suitable molecular interaction models. Using these molecular models, the results indicate that both the lattice parameters and the compressibility of the sI hydrates agree with those from experimental measurements. The calculated bulk modulus for any mixture ratio of CH4 and CO2 hydrates varies between 8.5 GPa and 10.4 GPa at 271.15 K between 10 and 100 MPa. The calculated thermal expansion and specific heat capacities of CH4 hydrates are also comparable with experimental values above approximately 260 K. The compressibility and expansion coefficient of guest gas mixture hydrates increase with an increasing ratio of CO2 to CH4, while the bulk modulus and specific heat capacity exhibit the opposite trend. The presented specific heat capacities of 2220-2699 J kg^(-1) K^(-1) for any mixture ratio of CH4 and CO2 hydrates are the first reported. These computational results provide a useful database for practical natural gas recovery from CH4 hydrates in deep oceans where CO2 is considered to replace CH4, as well as for phase equilibrium and mechanical stability of gas hydrate-bearing sediments. The computational schemes also provide an appropriate balance between computational accuracy and cost for predicting
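    Isothermal compressibilities like those above are commonly extracted from volume fluctuations in NPT molecular dynamics. A toy sketch of the textbook fluctuation estimator, on synthetic data with assumed magnitudes (not the paper's actual scheme):

```python
import numpy as np

kB = 1.380649e-23  # J/K

def isothermal_compressibility(volumes_m3, T_kelvin):
    # Textbook NPT fluctuation estimator: kappa_T = <dV^2> / (kB * T * <V>)
    v = np.asarray(volumes_m3)
    return v.var() / (kB * T_kelvin * v.mean())

# Toy trajectory: Gaussian volume fluctuations sized so kappa_T comes out near
# the reported hydrate range (bulk modulus ~10 GPa); all numbers illustrative.
rng = np.random.default_rng(1)
T, V0 = 271.15, 1.75e-27                      # K, m^3
sigma = np.sqrt(1e-10 * kB * T * V0)          # targets kappa_T = 1e-10 Pa^-1
V = V0 + sigma * rng.standard_normal(200_000)
kT = isothermal_compressibility(V, T)
print(f"kappa_T ~ {kT:.2e} Pa^-1, bulk modulus ~ {1 / kT / 1e9:.1f} GPa")
```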

  1. Laser ablation of metals: Analysis of surface-heating and plume-expansion experiments

    NASA Astrophysics Data System (ADS)

    Mele, A.; Giardini Guidoni, A.; Kelly, R.; Flamini, C.; Orlando, S.

    1997-02-01

    The thermal effects produced by laser pulses (6 or 18 ns) absorbed by a solid target have been investigated experimentally and theoretically. The energy which is absorbed serves to raise the temperature of the surface. The regimes to be considered are described by the heat-diffusion equation under conditions of what we term `normal vaporization'. Numerical solutions of the heat-diffusion equation lead to the temperature profiles produced within the target. The aim of this work is to present the results on heat flow in terms of the surface temperature and the velocity at which the surface recedes. Experimental data on the recession velocity and of the crater depth in relation to the thermophysical parameters of the metals Al, Cu, Nb, W, and Zn, are reported. The effect of the surface heating has also been examined in terms of the velocities of the plumes emitted from the targets. It is concluded that vaporization from the laser-heated targets is not the only relevant process but that one or both of laser-plume interaction and phase explosion may play a role in determining particle energies.

  2. Asymptotic expansions of solutions of the heat conduction equation in internally bounded cylindrical geometry

    USGS Publications Warehouse

    Ritchie, R.H.; Sakakura, A.Y.

    1956-01-01

    The formal solutions of problems involving transient heat conduction in infinite internally bounded cylindrical solids may be obtained by the Laplace transform method. Asymptotic series representing the solutions for large values of time are given in terms of functions related to the derivatives of the reciprocal gamma function. The results are applied to the case of the internally bounded infinite cylindrical medium with (a) the boundary held at constant temperature, (b) constant heat flow over the boundary, and (c) the "radiation" boundary condition. A problem in the flow of gas through a porous medium is considered in detail.

  3. An Implementation of Multiprogramming and Process Management for a Security Kernel Operating System.

    DTIC Science & Technology

    1980-06-01

    This implementation employs a processor multiplexing technique for a distributed kernel and presents a virtual interrupt mechanism. Its structure is loop free to permit future expansion into more... It coordinates the asynchronous interaction of system processes.

  4. A heat wave during leaf expansion severely reduces productivity and modifies seasonal growth patterns in a northern hardwood forest.

    PubMed

    Stangler, Dominik Florian; Hamann, Andreas; Kahle, Hans-Peter; Spiecker, Heinrich

    2016-10-13

    A useful approach to monitor tree response to climate change and environmental extremes is the recording of long-term time series of stem radial variations obtained with precision dendrometers. Here, we study the impact of environmental stress on seasonal growth dynamics and productivity of yellow birch (Betula alleghaniensis Britton) and sugar maple (Acer saccharum Marsh.) in the Great Lakes-St Lawrence forest region of Ontario. Specifically, we examine the effects of a spring heat wave in 2010 and a summer drought in 2012 that occurred during the 2005-14 study period. We evaluated both growth phenology (onset, cessation, duration of radial growth, time of maximum daily growth rate) and productivity (monthly and seasonal average growth rates, maximum daily growth rate, tree-ring width) and tested for differences and interactions among species and years. Productivity of sugar maple was drastically compromised by a 3-day spring heat wave in 2010, as indicated by low growth rates, very early growth cessation and a lagged growth onset in the following year. Sugar maple also responded more sensitively than yellow birch to a prolonged drought period in July 2012, but final tree-ring width was not significantly reduced due to positive responses to above-average temperatures in the preceding spring. We conclude that sugar maple, a species that currently dominates northern hardwood forests, is vulnerable to heat wave disturbances during leaf expansion, which might occur more frequently under anticipated climate change.

  5. Semisupervised kernel matrix learning by kernel propagation.

    PubMed

    Hu, Enliang; Chen, Songcan; Zhang, Daoqiang; Yin, Xuesong

    2010-11-01

    The goal of semisupervised kernel matrix learning (SS-KML) is to learn a kernel matrix on all the given samples on which just a little supervised information, such as class label or pairwise constraint, is provided. Despite extensive research, the performance of SS-KML still leaves some space for improvement in terms of effectiveness and efficiency. For example, a recent pairwise constraints propagation (PCP) algorithm has formulated SS-KML into a semidefinite programming (SDP) problem, but its computation is very expensive, which undoubtedly restricts PCP's scalability in practice. In this paper, a novel algorithm, called kernel propagation (KP), is proposed to improve the comprehensive performance in SS-KML. The main idea of KP is first to learn a small-sized sub-kernel matrix (named seed-kernel matrix) and then propagate it into a larger-sized full-kernel matrix. Specifically, the implementation of KP consists of three stages: 1) separate the supervised sample (sub)set X(l) from the full sample set X; 2) learn a seed-kernel matrix on X(l) through solving a small-scale SDP problem; and 3) propagate the learnt seed-kernel matrix into a full-kernel matrix on X. Furthermore, following the idea in KP, we naturally develop two conveniently realizable out-of-sample extensions for KML: one is batch-style extension, and the other is online-style extension. The experiments demonstrate that KP is encouraging in both effectiveness and efficiency compared with three state-of-the-art algorithms and its related out-of-sample extensions are promising too.
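    The three KP stages can be caricatured in a few lines. The propagation operator below is an assumed Nystrom-style extension through a base kernel's cross-similarities; the paper's actual SDP learning and propagation rule are more involved.

```python
import numpy as np

def rbf(A, B, s=1.0):
    # Gaussian base kernel between row sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
labelled = np.arange(8)                       # stage 1: split off the supervised subset X_l

base = rbf(X[labelled], X[labelled])
K_seed = base @ base.T                        # stage 2: PSD stand-in for the small-scale SDP

P = rbf(X, X[labelled]) @ np.linalg.inv(base + 1e-8 * np.eye(8))
K_full = P @ K_seed @ P.T                     # stage 3: propagate the seed kernel to all of X

# K_full is symmetric positive semidefinite by construction
print(K_full.shape, np.allclose(K_full, K_full.T))
```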

  6. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper, we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-paralleled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
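    The sampling idea behind AKCL can be sketched with a Nystrom feature map followed by ordinary winner-take-all competitive learning in the reduced subspace. This is an assumed simplification for illustration, not the authors' exact algorithm.

```python
import numpy as np

def rbf(A, B, s=1.0):
    # Gaussian base kernel between row sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.3, (60, 2)), rng.normal(2, 0.3, (60, 2))])

m = 10                                        # landmarks sampled instead of the full n x n kernel
landmarks = X[rng.choice(len(X), m, replace=False)]
W = rbf(landmarks, landmarks)
evals, evecs = np.linalg.eigh(W)
keep = evals > 1e-6 * evals.max()             # drop near-null directions
Z = rbf(X, landmarks) @ (evecs[:, keep] / np.sqrt(evals[keep]))  # Nystrom features

k, lr = 2, 0.1                                # winner-take-all competitive learning on Z
proto = Z[rng.choice(len(Z), k, replace=False)].copy()
for _ in range(20):
    for z in Z:
        w = ((proto - z) ** 2).sum(1).argmin()
        proto[w] += lr * (z - proto[w])
labels = ((Z[:, None, :] - proto[None]) ** 2).sum(-1).argmin(1)
print(np.bincount(labels, minlength=k))
```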

  7. The role of turbulence in coronal heating and solar wind expansion.

    PubMed

    Cranmer, Steven R; Asgari-Targhi, Mahboubeh; Miralles, Mari Paz; Raymond, John C; Strachan, Leonard; Tian, Hui; Woolsey, Lauren N

    2015-05-13

    Plasma in the Sun's hot corona expands into the heliosphere as a supersonic and highly magnetized solar wind. This paper provides an overview of our current understanding of how the corona is heated and how the solar wind is accelerated. Recent models of magnetohydrodynamic turbulence have progressed to the point of successfully predicting many observed properties of this complex, multi-scale system. However, it is not clear whether the heating in open-field regions comes mainly from the dissipation of turbulent fluctuations that are launched from the solar surface, or whether the chaotic 'magnetic carpet' in the low corona energizes the system via magnetic reconnection. To help pin down the physics, we also review some key observational results from ultraviolet spectroscopy of the collisionless outer corona.

  9. Azadiradione ameliorates polyglutamine expansion disease in Drosophila by potentiating DNA binding activity of heat shock factor 1

    PubMed Central

    Dutta, Naibedya; Ghosh, Suvranil; Jana, Manas; Ganguli, Arnab; Komarov, Andrei; Paul, Soumyadip; Dwivedi, Vibha; Chatterjee, Subhrangsu; Jana, Nihar R.; Lakhotia, Subhash C.; Chakrabarti, Gopal; Misra, Anup K.; Mandal, Subhash C.; Pal, Mahadeb

    2016-01-01

    Aggregation of proteins with the expansion of polyglutamine tracts in the brain underlies progressive genetic neurodegenerative diseases (NDs) like Huntington's disease and spinocerebellar ataxias (SCA). An insensitive cellular proteotoxic stress response to non-native protein oligomers is common in such conditions. Indeed, upregulation of heat shock factor 1 (HSF1) function and its target protein chaperone expression has shown promising results in animal models of NDs. Using an HSF1 sensitive cell based reporter screening, we have isolated azadiradione (AZD) from the methanolic extract of seeds of Azadirachta indica, a plant known for its multifarious medicinal properties. We show that AZD ameliorates toxicity due to protein aggregation in cell and fly models of polyglutamine expansion diseases to a great extent. All these effects are correlated with activation of HSF1 function and expression of its target protein chaperone genes. Notably, HSF1 activation by AZD is independent of cellular HSP90 or proteasome function. Furthermore, we show that AZD directly interacts with purified human HSF1 with high specificity, and facilitates binding of HSF1 to its recognition sequence with higher affinity. These unique findings qualify AZD as an ideal lead molecule for consideration for drug development against NDs that affect millions worldwide. PMID:27835876

  10. Anharmonic phonon quasiparticle theory of zero-point and thermal shifts in insulators: Heat capacity, bulk modulus, and thermal expansion

    NASA Astrophysics Data System (ADS)

    Allen, Philip B.

    2015-08-01

    The quasiharmonic (QH) approximation uses harmonic vibrational frequencies ω_{Q,H}(V) computed at volumes V near V_0 where the Born-Oppenheimer (BO) energy E_el(V) is minimum. When this is used in the harmonic free energy, the QH approximation gives a good zeroth-order theory of thermal expansion and first-order theory of bulk modulus, where nth-order means smaller than the leading term by ε^n, where ε = ħω_vib/E_el or k_B T/E_el, and E_el is an electronic energy scale, typically 2 to 10 eV. Experiment often shows evidence for next-order corrections. When such corrections are needed, anharmonic interactions must be included. The most accessible measure of anharmonicity is the quasiparticle (QP) energy ω_Q(V,T) seen experimentally by vibrational spectroscopy. However, this cannot just be inserted into the harmonic free energy F_H. In this paper, a free energy is found that corrects the double-counting of anharmonic interactions that is made when F is approximated by F_H(ω_Q(V,T)). The term "QP thermodynamics" is used for this way of treating anharmonicity. It enables (n+1)-order corrections if QH theory is accurate to order n. This procedure is used to give corrections to the specific heat and volume thermal expansion. The QH formulas for isothermal (B_T) and adiabatic (B_S) bulk moduli are clarified, and the route to higher-order corrections is indicated.

  11. On the calculation of turbulent heat transport downstream from an abrupt pipe expansion

    NASA Technical Reports Server (NTRS)

    Chieng, C. C.; Launder, B. E.

    1980-01-01

    A numerical study of flow and heat transfer in the separated flow region produced by an abrupt pipe expansion is reported, with emphasis on the region in the immediate vicinity of the wall where turbulent transport gives way to molecular conduction and diffusion. The analysis is based on a modified TEACH-2E program with the standard k-epsilon model of turbulence. Predictions of the experimental data of Zemanick and Dougall (1970) for a diameter ratio of 0.54 show generally encouraging agreement with experiment. At a diameter ratio of 0.43 different trends are discernible between measurement and calculation, though this appears to be due to effects unconnected with the wall region studied here.

  12. Internal Thermal Control System Hose Heat Transfer Fluid Thermal Expansion Evaluation Test Report

    NASA Technical Reports Server (NTRS)

    Wieland, P. O.; Hawk, H. D.

    2001-01-01

    During assembly of the International Space Station, the Internal Thermal Control Systems in adjacent modules are connected by jumper hoses referred to as integrated hose assemblies (IHAs). A test of an IHA has been performed at the Marshall Space Flight Center to determine whether the pressure in an IHA filled with heat transfer fluid would exceed the maximum design pressure when subjected to elevated temperatures (up to 60 °C (140 °F)) that may be experienced during storage or transportation. The results of the test show that the pressure in the IHA remains below 227 kPa (33 psia) (well below the 689 kPa (100 psia) maximum design pressure) even at a temperature of 71 °C (160 °F), with no indication of leakage or damage to the hose. Therefore, based on the results of this test, the IHA can safely be filled with coolant prior to launch. The test and results are documented in this Technical Memorandum.

  13. Coherent phonon excitation and linear thermal expansion in structural dynamics and ultrafast electron diffraction of laser-heated metals

    NASA Astrophysics Data System (ADS)

    Tang, Jau

    2008-04-01

    In this study, we examine the ultrafast structural dynamics of metals induced by a femtosecond laser-heating pulse as probed by time-resolved electron diffraction. Using the two-temperature model and the Grüneisen relationship, we calculate the electron temperature, phonon temperature, and impulsive force at each atomic site in the slab. Together with the Fermi-Pasta-Ulam anharmonic chain model, we calculate changes of bond distance and the peak shift of Bragg spots or Laue rings. A laser-heated thin slab is shown to exhibit "breathing" standing-wave behavior, with a period equal to the round-trip time of the sound wave and a wavelength equal to twice the slab thickness. The peak delay time first increases linearly with the thickness (<70 nm for aluminum and <200 nm for gold), but becomes less thickness dependent as the thickness increases further. Coherent phonon excitation and propagation from the stressed bulk atoms due to impulsive forces, as well as the linear thermal expansion due to the lattice temperature jump, are shown to contribute to the overall structural changes. Differences between these two mechanisms and their dependence on film thickness and other factors are discussed.
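    The standing-wave picture above implies a breathing period equal to the acoustic round-trip time, τ = 2d/v_sound. A quick check with textbook longitudinal sound speeds (assumed values, not taken from the paper):

```python
# Breathing-mode period tau = 2*d / v_sound for a free-standing slab of
# thickness d; sound speeds are textbook longitudinal values (assumed).
v_sound = {"Al": 6420.0, "Au": 3240.0}   # m/s
tau_ps = {m: 2 * d * 1e-9 / v_sound[m] * 1e12 for m, d in [("Al", 70.0), ("Au", 200.0)]}
for m, t in tau_ps.items():
    print(f"{m}: tau ~ {t:.1f} ps")      # tens of picoseconds, as probed by UED
```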

  14. The Effects of Kernel Feeding by Halyomorpha halys (Hemiptera: Pentatomidae) on Commercial Hazelnuts.

    PubMed

    Hedstrom, C S; Shearer, P W; Miller, J C; Walton, V M

    2014-10-01

    Halyomorpha halys Stål, the brown marmorated stink bug (Hemiptera: Pentatomidae), is an invasive pest with established populations in Oregon. The generalist feeding habits of H. halys suggest it has the potential to be a pest of many specialty crops grown in Oregon, including hazelnuts, Corylus avellana L. The objectives of this study were to: 1) characterize the damage to developing hazelnut kernels resulting from feeding by H. halys adults, 2) determine how the timing of feeding during kernel development influences damage to kernels, and 3) determine if hazelnut shell thickness has an effect on feeding frequency on kernels. Adult brown marmorated stink bugs were allowed to feed on developing nuts for 1-wk periods from initial kernel development (spring) until harvest (fall). Developing nuts not exposed to feeding by H. halys served as a control treatment. The degree of damage and diagnostic symptoms corresponded with the hazelnut kernels' physiological development. Our results demonstrated that when H. halys fed on hazelnuts before kernel expansion, development of the kernels could cease, resulting in empty shells. When stink bugs fed during kernel expansion, kernels appeared malformed. When stink bugs fed on mature nuts the kernels exhibited corky, necrotic areas. Although significant differences in shell thickness were observed among the cultivars, no significant differences occurred in the proportions of damaged kernels based on field tests and laboratory choice tests. The results of these studies demonstrated that commercial hazelnuts are susceptible to damage caused by the feeding of H. halys throughout the entire period of kernel development.

  15. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2016-02-25

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
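    KECA's entropy-based ranking can be sketched directly: eigenpairs of the kernel matrix are sorted by their Renyi-entropy contribution λ_i (1ᵀe_i)² rather than by λ_i alone, as in kernel PCA. The Gaussian kernel and the data below are illustrative only.

```python
import numpy as np

def keca_projection(X, n_components=2, s=1.0):
    # Gaussian kernel matrix on the data
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * s * s))
    evals, evecs = np.linalg.eigh(K)                 # ascending eigenvalue order
    entropy = evals * (evecs.sum(axis=0) ** 2)       # per-eigenpair entropy terms
    top = np.argsort(entropy)[::-1][:n_components]   # may differ from top-variance picks
    return evecs[:, top] * np.sqrt(np.abs(evals[top]))

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 3))
Z = keca_projection(X)
print(Z.shape)
```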

  16. Modeling the relative roles of the foehn wind and urban expansion in the 2002 Beijing heat wave and possible mitigation by high reflective roofs

    NASA Astrophysics Data System (ADS)

    Ma, Hongyun; Shao, Haiyan; Song, Jie

    2014-02-01

    Rapid urbanization has intensified summer heat waves in recent decades in Beijing, China. In this study, the effectiveness of applying high-reflectance roofs to mitigate the warming effects caused by urban expansion and foehn wind was simulated for a record-breaking heat wave that occurred in Beijing during July 13-15, 2002. Simulation experiments were performed using the Weather Research and Forecast (WRF version 3.0) model coupled with an urban canopy model. The modeled diurnal air temperatures agreed well with station observations in the city, and the wind convergence caused by the urban heat island (UHI) effect could be simulated clearly. By increasing urban roof albedo, the simulated UHI effect was reduced due to decreased net radiation, and the simulated wind convergence in the urban area was weakened. Using the WRF3.0 model, the warming effects caused by urban expansion and foehn wind were quantified separately, and were compared with the cooling effect due to the increased roof albedo. Results illustrated that the foehn warming effect under the northwesterly wind contributed greatly to this heat wave event in Beijing, while the contribution from urban expansion accompanied by anthropogenic heating was secondary, and was mostly evident at night. Increasing roof albedo could reduce air temperature both in the day and at night, and could more than offset the urban expansion effect. The combined warming caused by the urban expansion and the foehn wind could be potentially offset with high-reflectance roofs by 58.8 % or cooled by 1.4 °C in the early afternoon on July 14, 2002, the hottest day during the heat wave.

  17. Implications of Thermal Diffusivity being Inversely Proportional to Temperature Times Thermal Expansivity on Lower Mantle Heat Transport

    NASA Astrophysics Data System (ADS)

    Hofmeister, A.

    2010-12-01

    Many measurements and models of heat transport in lower mantle candidate phases contain systematic errors: (1) conventional methods for insulators involve thermal losses that are pressure (P) and temperature (T) dependent due to physical contact with metal thermocouples, (2) measurements frequently contain unwanted ballistic radiative transfer which hugely increases with T, (3) spectroscopic measurements of dense samples in diamond anvil cells involve strong refraction, which has not been accounted for in analyzing transmission data, (4) the role of grain boundary scattering in impeding heat and light transfer has largely been overlooked, and (5) essentially harmonic physical properties have been used to predict anharmonic behavior. Improving our understanding of the physics of heat transport requires accurate data, especially as a function of temperature, where anharmonicity is the key factor. My laboratory provides thermal diffusivity (D) at T from laser flash analysis, which lacks the above experimental errors. Measuring a plethora of chemical compositions in diverse dense structures (most recently, perovskites, B1, B2, and glasses) as a function of temperature provides a firm basis for understanding microscopic behavior. Given accurate measurements for all quantities: (1) D is inversely proportional to [T x alpha(T)] from ~0 K to melting, where alpha is thermal expansivity, and (2) the damped harmonic oscillator model matches measured D(T), using only two parameters (average infrared dielectric peak width and compressional velocity), both acquired at temperature. These discoveries pertain to the anharmonic aspects of heat transport. I have previously discussed the easily understood quasi-harmonic pressure dependence of D. Universal behavior makes application to the Earth straightforward: due to the stiffness and slow motions of the plates and interior, and present-day, slow planetary cooling rates, Earth can be approximated as being in quasi

  18. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense.
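    First-level kernel learning of the kind described can be sketched as joint optimisation of ARD kernel parameters against a penalised training criterion, with the LS-SVM-style coefficients obtained in closed form at each step. This is a simplified stand-in, not the authors' exact criterion; names and constants are illustrative.

```python
import numpy as np

def kernel(X, eta):
    # ARD RBF kernel; eta acts as per-feature inverse lengthscales
    d2 = (((X[:, None, :] - X[None, :, :]) * eta) ** 2).sum(-1)
    return np.exp(-d2)

def criterion(log_eta, X, y, gam=10.0, mu=0.1):
    eta = np.exp(log_eta)
    K = kernel(X, eta)
    alpha = np.linalg.solve(K + np.eye(len(y)) / gam, y)  # closed-form LS-SVM-style fit
    resid = y - K @ alpha
    return resid @ resid + mu * (eta ** 2).sum()          # training error + kernel-parameter penalty

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 3))
y = np.sign(X[:, 0])                 # only feature 0 carries label information
log_eta = np.zeros(3)
c0 = criterion(log_eta, X, y)
for _ in range(40):                  # crude finite-difference gradient descent on the kernel params
    grad = np.zeros(3)
    for j in range(3):
        e = np.zeros(3); e[j] = 1e-4
        grad[j] = (criterion(log_eta + e, X, y) - criterion(log_eta - e, X, y)) / 2e-4
    log_eta -= 0.01 * grad
print(c0, criterion(log_eta, X, y))
```

Only the two regularisation constants (gam, mu) remain for model selection, which is the structural point the abstract makes.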

  19. Anomalies in thermal expansion and heat capacity of TmB50 at low temperatures: magnetic phase transition and crystal electric field effect.

    PubMed

    Novikov, V V; Zhemoedov, N A; Mitroshenkov, N V; Matovnikov, A V

    2016-11-01

    We experimentally study the heat capacity and thermal expansion of thulium boride (TmB50) at temperatures of 2-300 K. A wide temperature range (2-180 K) of negative expansion of the boride was revealed. We found anomalies in the temperature dependence of the heat capacity C(T), attributed to the Schottky contribution (i.e. the influence of the crystal electric field: CEF), as well as the magnetic phase transition. CEF-splitting of the f-levels of the Tm(3+) ion was described by the Schottky function of heat capacity with a quasi-quartet in the ground state. Excited multiplets are separated from the ground state by energy gaps δ1 = 100 K and δ2 ≈ 350 K. The heat capacity maximum at Tmax ≈ 2.4 K may be attributed to the possible magnetic transition in TmB50. Other possible causes of the low-temperature maximum of the C(T) dependence are the nonspherical surroundings of rare earth atoms due to the boron atoms in the crystal lattice of the boride and the emergence of two-level systems, as well as the splitting of the ground multiplet due to local magnetic fields of the neighboring ions of thulium. Anomalies in heat capacity are mapped with the thermal expansion features of the boride. It is found that the TmB50 thermal expansion characteristic features are due to the influence of the CEF, as well as the asymmetry of the spatial arrangement of boron atoms around the rare earth atoms in the crystal lattice of RB50. The Grüneisen parameters, corresponding to the excitation of different multiplets of CEF-splitting, were determined. Satisfactory agreement between the experimental and estimated temperature dependences of the boride thermal expansion coefficient was achieved.
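    The Schottky contribution invoked above follows from the CEF partition function. A short numerical sketch using the abstract's level gaps (δ1 = 100 K, δ2 ≈ 350 K) with assumed degeneracies:

```python
import numpy as np

levels = np.array([0.0, 100.0, 350.0])   # CEF level scheme from the abstract, in K
g = np.array([4.0, 2.0, 2.0])            # quasi-quartet ground state; degeneracies assumed

def schottky_C(T):
    # Heat capacity (units of kB per ion) from energy fluctuations over the levels:
    # C = (<E^2> - <E>^2) / T^2 with Boltzmann weights g_i * exp(-E_i/T)
    w = g[:, None] * np.exp(-levels[:, None] / T)
    Z = w.sum(0)
    E = (levels[:, None] * w).sum(0) / Z
    E2 = ((levels[:, None] ** 2) * w).sum(0) / Z
    return (E2 - E ** 2) / T ** 2

T = np.linspace(2.0, 300.0, 600)
C = schottky_C(T)
print(f"Schottky maximum near T ~ {T[C.argmax()]:.0f} K")
```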

  20. Iterative software kernels

    SciTech Connect

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: `Current status of user level sparse BLAS`; `Current status of the sparse BLAS toolkit`; and `Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit`.

  1. Tandem Duplication Events in the Expansion of the Small Heat Shock Protein Gene Family in Solanum lycopersicum (cv. Heinz 1706).

    PubMed

    Krsticevic, Flavia J; Arce, Débora P; Ezpeleta, Joaquín; Tapia, Elizabeth

    2016-10-13

    In plants, fruit maturation and oxidative stress can induce small heat shock protein (sHSP) synthesis to maintain cellular homeostasis. Although the tomato reference genome was published in 2012, the actual number and functionality of sHSP genes remain unknown. Using a transcriptomic (RNA-seq) and evolutionary genomic approach, putative sHSP genes in the Solanum lycopersicum (cv. Heinz 1706) genome were investigated. A sHSP gene family of 33 members was established. Remarkably, roughly half of the members of this family can be explained by nine independent tandem duplication events that determined, evolutionarily, their functional fates. Within a mitochondrial class subfamily, only one duplicated member, Solyc08g078700, retained its ancestral chaperone function, while the others, Solyc08g078710 and Solyc08g078720, likely degenerated under neutrality and lack ancestral chaperone function. Functional conservation occurred within a cytosolic class I subfamily, whose four members, Solyc06g076570, Solyc06g076560, Solyc06g076540, and Solyc06g076520, support ∼57% of the total sHSP mRNA in the red ripe fruit. Subfunctionalization occurred within a new subfamily, whose two members, Solyc04g082720 and Solyc04g082740, show heterogeneous differential expression profiles during fruit ripening. These findings, involving the birth/death of some genes or the preferential/plastic expression of some others during fruit ripening, highlight the importance of tandem duplication events in the expansion of the sHSP gene family in the tomato genome. Despite its evolutionary diversity, the sHSP gene family in the tomato genome seems to be endowed with a core set of four homeostasis genes: Solyc05g014280, Solyc03g082420, Solyc11g020330, and Solyc06g076560, which appear to provide a baseline protection during both fruit ripening and heat shock stress in different tomato tissues.

  2. Tandem Duplication Events in the Expansion of the Small Heat Shock Protein Gene Family in Solanum lycopersicum (cv. Heinz 1706)

    PubMed Central

    Krsticevic, Flavia J.; Arce, Débora P.; Ezpeleta, Joaquín; Tapia, Elizabeth

    2016-01-01

    In plants, fruit maturation and oxidative stress can induce small heat shock protein (sHSP) synthesis to maintain cellular homeostasis. Although the tomato reference genome was published in 2012, the actual number and functionality of sHSP genes remain unknown. Using a transcriptomic (RNA-seq) and evolutionary genomic approach, putative sHSP genes in the Solanum lycopersicum (cv. Heinz 1706) genome were investigated. An sHSP gene family of 33 members was established. Remarkably, roughly half of the members of this family can be explained by nine independent tandem duplication events that determined, evolutionarily, their functional fates. Within a mitochondrial class subfamily, only one duplicated member, Solyc08g078700, retained its ancestral chaperone function, while the others, Solyc08g078710 and Solyc08g078720, likely degenerated under neutrality and lack ancestral chaperone function. Functional conservation occurred within a cytosolic class I subfamily, whose four members, Solyc06g076570, Solyc06g076560, Solyc06g076540, and Solyc06g076520, support ∼57% of the total sHSP mRNA in the red ripe fruit. Subfunctionalization occurred within a new subfamily, whose two members, Solyc04g082720 and Solyc04g082740, show heterogeneous differential expression profiles during fruit ripening. These findings, involving the birth/death of some genes or the preferential/plastic expression of some others during fruit ripening, highlight the importance of tandem duplication events in the expansion of the sHSP gene family in the tomato genome. Despite its evolutionary diversity, the sHSP gene family in the tomato genome seems to be endowed with a core set of four homeostasis genes: Solyc05g014280, Solyc03g082420, Solyc11g020330, and Solyc06g076560, which appear to provide a baseline protection during both fruit ripening and heat shock stress in different tomato tissues. PMID:27565886

  3. Monitoring ground-surface heating during expansion of the Casa Diablo production well field at Mammoth Lakes, California

    USGS Publications Warehouse

    Bergfeld, D.; Vaughan, R. Greg; Evans, William C.; Olsen, Eric

    2015-01-01

    The Long Valley hydrothermal system supports geothermal power production from 3 binary plants (Casa Diablo) near the town of Mammoth Lakes, California. Development and growth of thermal ground at sites west of Casa Diablo have created concerns over planned expansion of a new well field and the associated increases in geothermal fluid production. To ensure that all areas of ground heating are identified prior to new geothermal development, we obtained high-resolution aerial thermal infrared imagery across the region. The imagery covers the existing and proposed well fields and part of the town of Mammoth Lakes. Imagery results from a predawn flight on Oct. 9, 2014 readily identified the Shady Rest thermal area (SRST), one of two large areas of ground heating west of Casa Diablo, as well as other known thermal areas smaller in size. Maximum surface temperatures at 3 thermal areas were 26–28 °C. Numerous small areas with ground temperatures >16 °C were also identified and slated for field investigations in summer 2015. Some thermal anomalies in the town of Mammoth Lakes clearly reflect human activity. Previously established projects to monitor impacts from geothermal power production include yearly surveys of soil temperatures and diffuse CO2 emissions at SRST, and less regular surveys to collect samples from fumaroles and gas vents across the region. Soil temperatures at 20 cm depth at SRST are well correlated with diffuse CO2 flux, and both parameters showed little variation during the 2011–14 field surveys. Maximum temperatures were between 55 and 67 °C, and the associated CO2 discharge was around 12–18 tonnes per day. The carbon isotope composition of CO2 is fairly uniform across the area, ranging from –3.7 to –4.4‰. The gas composition of the Shady Rest fumarole, however, has varied with time, and H2S concentrations in the gas have been increasing since 2009.

  4. Kernel Affine Projection Algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Weifeng; Príncipe, José C.

    2008-12-01

    The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
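
    As context for the kernel least-mean-square building block that KAPA extends, the following is a minimal sketch of KLMS, not the authors' implementation; the Gaussian kernel width, step size, and toy data are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=0.5):
    """Gaussian (RBF) kernel between two input vectors."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def klms(X, d, eta=0.2, sigma=0.5):
    """Kernel LMS: grow a dictionary of centers online.

    The predictor is a kernel expansion over all past inputs;
    each new sample becomes a center weighted by eta * error.
    """
    centers, coeffs, errors = [], [], []
    for x, target in zip(X, d):
        # predict with the current kernel expansion
        y = sum(a * gaussian_kernel(c, x, sigma) for c, a in zip(centers, coeffs))
        e = target - y                 # instantaneous error
        centers.append(x)              # new input becomes a center
        coeffs.append(eta * e)         # LMS-style coefficient update
        errors.append(e)
    return centers, coeffs, errors

# toy nonlinear system: d = sin(3x), learned online
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
d = np.sin(3 * X[:, 0])
_, _, errors = klms(X, d)
# the error magnitude shrinks as the expansion grows
assert np.mean(np.abs(errors[:20])) > np.mean(np.abs(errors[-20:]))
```

    The gradient noise that KAPA reduces is visible here as the residual fluctuation of the late errors.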

  5. Online Sequential Extreme Learning Machine With Kernels.

    PubMed

    Scardapane, Simone; Comminiello, Danilo; Scarpiniti, Michele; Uncini, Aurelio

    2015-09-01

    The extreme learning machine (ELM) was recently proposed as a unifying framework for different families of learning algorithms. The classical ELM model consists of a linear combination of a fixed number of nonlinear expansions of the input vector. Learning in ELM is hence equivalent to finding the optimal weights that minimize the error on a dataset. The update works in batch mode, either with explicit feature mappings or with implicit mappings defined by kernels. Although an online version has been proposed for the former, no work has been done up to this point for the latter, and whether an efficient learning algorithm for online kernel-based ELM exists remains an open problem. By explicating some connections between nonlinear adaptive filtering and ELM theory, in this brief, we present an algorithm for this task. In particular, we propose a straightforward extension of the well-known kernel recursive least-squares, belonging to the kernel adaptive filtering (KAF) family, to the ELM framework. We call the resulting algorithm the kernel online sequential ELM (KOS-ELM). Moreover, we consider two different criteria used in the KAF field to obtain sparse filters and extend them to our context. We show that KOS-ELM, with their integration, can result in a highly efficient algorithm, both in terms of obtained generalization error and training time. Empirical evaluations demonstrate interesting results on some benchmarking datasets.
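
    The kernel recursive least-squares filter that KOS-ELM builds on can be sketched as follows; this is a naive, non-sparsified KRLS (every sample kept as a center, inverse kernel matrix updated by a Schur-complement step), with the kernel width, regularization, and toy data as illustrative assumptions rather than the paper's KOS-ELM:

```python
import numpy as np

def rbf(x, y, sigma=0.5):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

class KRLS:
    """Naive (non-sparsified) kernel recursive least-squares."""
    def __init__(self, sigma=0.5, lam=1e-2):
        self.sigma, self.lam = sigma, lam
        self.centers, self.targets = [], []
        self.Kinv = None  # inverse of (K + lam * I)

    def predict(self, x):
        if not self.centers:
            return 0.0
        k = np.array([rbf(c, x, self.sigma) for c in self.centers])
        alpha = self.Kinv @ np.array(self.targets)
        return float(k @ alpha)

    def update(self, x, d):
        if not self.centers:
            self.Kinv = np.array([[1.0 / (rbf(x, x, self.sigma) + self.lam)]])
        else:
            k = np.array([rbf(c, x, self.sigma) for c in self.centers])
            z = self.Kinv @ k
            # Schur complement of the grown regularized kernel matrix
            s = 1.0 / (rbf(x, x, self.sigma) + self.lam - k @ z)
            top = self.Kinv + s * np.outer(z, z)
            self.Kinv = np.block([[top, -s * z[:, None]],
                                  [-s * z[None, :], np.array([[s]])]])
        self.centers.append(x)
        self.targets.append(d)

rng = np.random.default_rng(1)
model = KRLS()
X = rng.uniform(-1, 1, size=(150, 1))
d = np.sin(3 * X[:, 0])
for x, t in zip(X, d):
    model.update(x, t)
assert abs(model.predict(np.array([0.3])) - np.sin(0.9)) < 0.1
```

    The sparsification criteria discussed in the abstract would decide, at each `update`, whether the new sample is added as a center at all.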

  6. Multiple collaborative kernel tracking.

    PubMed

    Fan, Zhimin; Yang, Ming; Wu, Ying

    2007-07-01

    Those motion parameters that cannot be recovered from image measurements are unobservable in the visual dynamic system. This paper studies this important issue of singularity in the context of kernel-based tracking and presents a novel approach that is based on a motion field representation which employs redundant but sparsely correlated local motion parameters instead of compact but uncorrelated global ones. This approach makes it easy to design fully observable kernel-based motion estimators. This paper shows that these high-dimensional motion fields can be estimated efficiently by the collaboration among a set of simpler local kernel-based motion estimators, which makes the new approach very practical.

  7. Robotic Intelligence Kernel: Communications

    SciTech Connect

    Walton, Mike C.

    2009-09-16

    The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.

  8. Oil point pressure of Indian almond kernels

    NASA Astrophysics Data System (ADS)

    Aregbesola, O.; Olatunde, G.; Esuola, S.; Owolarafe, O.

    2012-07-01

    The effect of preprocessing conditions such as moisture content, heating temperature, heating time and particle size on oil point pressure of Indian almond kernel was investigated. Results showed that oil point pressure was significantly (P < 0.05) affected by the above-mentioned parameters. It was also observed that oil point pressure decreased with increasing heating temperature and heating time for both coarse and fine particles. Furthermore, an increase in moisture content resulted in increased oil point pressure for coarse particles, while for fine particles oil point pressure decreased with increasing moisture content.

  9. Robotic Intelligence Kernel: Driver

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.

  10. Thermal expansion, heat capacity and Grüneisen parameter of iridium phosphide Ir2P from quasi-harmonic Debye model

    NASA Astrophysics Data System (ADS)

    Liu, Z. J.; Song, T.; Sun, X. W.; Ma, Q.; Wang, T.; Guo, Y.

    2017-03-01

    The thermal expansion coefficient, heat capacity, and Grüneisen parameter of iridium phosphide Ir2P are reported by means of the quasi-harmonic Debye model for the first time in the current study. The model is combined with first-principles calculations within the generalized gradient approximation, using pseudopotentials and a plane-wave basis in the framework of density functional theory, and takes phononic effects into account within the quasi-harmonic approximation. The Debye temperature as a function of volume, as well as the temperature dependence of the Grüneisen parameter, thermal expansion coefficient, constant-volume and constant-pressure heat capacities, and entropy, are also successfully obtained. All the thermodynamic properties of Ir2P over the whole pressure range from 0 to 100 GPa and temperature range from 0 to 3000 K are summarized and discussed in detail.
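
    The Debye-model heat capacity at the core of such calculations reduces to a one-dimensional integral. The following is a sketch; the Debye temperature used here is a hypothetical value for illustration, not the computed one for Ir2P:

```python
import numpy as np
from scipy.integrate import quad

K_B = 1.380649e-23  # Boltzmann constant, J/K

def debye_cv(T, theta_D, n_atoms=1):
    """Constant-volume heat capacity per formula unit (J/K)
    from the Debye model: Cv = 9 n kB (T/theta)^3 * integral."""
    if T == 0:
        return 0.0
    x_D = theta_D / T
    integral, _ = quad(lambda x: x**4 * np.exp(x) / (np.exp(x) - 1)**2,
                       0, x_D)
    return 9 * n_atoms * K_B * (T / theta_D)**3 * integral

theta = 500.0                                  # K, illustrative only
cv_high = debye_cv(3000.0, theta, n_atoms=3)   # Ir2P: 3 atoms per formula unit
dulong_petit = 3 * 3 * K_B                     # classical 3 n kB limit
# at T >> theta the Debye result approaches the Dulong-Petit limit
assert abs(cv_high - dulong_petit) / dulong_petit < 0.01
```

    The quasi-harmonic step then evaluates theta_D at each volume from the first-principles equation of state, which the sketch above does not attempt.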

  11. Similarity and Boubaker Polynomials Expansion Scheme BPES comparative solutions to the heat transfer equation for incompressible non-Newtonian fluids: case of laminar boundary energy equation

    NASA Astrophysics Data System (ADS)

    Zheng, L. C.; Zhang, X. X.; Boubaker, K.; Yücel, U.; Gargouri-Ellouze, E.; Yıldırım, A.

    2011-08-01

    In this paper, a new model is proposed for the heat transfer characteristics of power-law non-Newtonian fluids. The effects of power-law viscosity on the temperature field were taken into account by assuming that the temperature field is similar to the velocity field, with a modified Fourier's law of heat conduction for power-law fluid media. The solutions obtained using the Boubaker Polynomials Expansion Scheme (BPES) technique are compared with those of the recent related similarity method in the literature, with good agreement verifying the exactness of the protocol.

  12. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    An understanding of UNICOS kernel internals is valuable. However, having the knowledge is only half the value; the second half comes with knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that use kernel information. In addition, algorithms, logic, and code for accessing kernel information are discussed. Code segments are provided that demonstrate how to locate and read kernel structures. Types of applications that can make use of kernel information are also discussed.

  13. Heating rate measurements over 30 deg and 40 deg (half angle) blunt cones in air and helium in the Langley expansion tube facility

    NASA Technical Reports Server (NTRS)

    Reddy, N. M.

    1980-01-01

    Convective heat transfer measurements made on the conical portion of spherically blunted cones (30 deg and 40 deg half angle) in an expansion tube are discussed. The test gases used were helium and air; flow velocities were about 6.8 km/sec for helium and about 5.1 km/sec for air. The measured heating rates are compared with calculated results using a viscous shock layer computer code. For air, various techniques to determine flow velocity yielded identical results, but for helium the flow velocity varied by as much as eight percent depending on which technique was used. For helium, assuming the lower flow velocity, the measured heating rates are in satisfactory agreement with calculation; otherwise the measurements are significantly greater than theory, and the discrepancy increases with increasing distance along the cone.

  14. Weighted Bergman Kernels and Quantization

    NASA Astrophysics Data System (ADS)

    Engliš, Miroslav

    Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that −log ψ, −log φ are plurisubharmonic, and z∈Ω a point at which −log φ is smooth and strictly plurisubharmonic. We show that as k→∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion for x, y near z, where φ(x,y) is an almost-analytic extension of φ(x) = φ(x,x), and similarly for ψ. If in addition Ω is of finite type, φ, ψ behave reasonably at the boundary, and −log φ, −log ψ are strictly plurisubharmonic on Ω, we obtain also an analogous asymptotic expansion for the Berezin transform and give applications to Berezin quantization. Finally, for Ω smoothly bounded and strictly pseudoconvex and φ a smooth strictly plurisubharmonic defining function for Ω, we also obtain results on Berezin-Toeplitz quantization.
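
    On the diagonal, expansions of this kind take a common schematic shape (a generic sketch of the Tian–Yau–Zelditch-type form; the normalization and the coefficients b_j depend on the weights and the geometry):

```latex
% Schematic on-diagonal Bergman kernel expansion; the b_j(z) are
% local invariants of the weight/metric, b_0 being the leading density.
K_k(z,z) \sim k^{N}\left( b_0(z) + \frac{b_1(z)}{k} + \frac{b_2(z)}{k^{2}} + \cdots \right),
\qquad k \to \infty .
```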

  15. Kernel mucking in top

    SciTech Connect

    LeFebvre, W.

    1994-08-01

    For many years, the popular program top has aided system administrators in the examination of process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also with what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.

  16. Heat flow in anharmonic crystals with internal and external stochastic baths: a convergent polymer expansion for a model with discrete time and long range interparticle interaction

    NASA Astrophysics Data System (ADS)

    Pereira, Emmanuel; Mendonça, Mateus S.; Lemos, Humberto C. F.

    2015-09-01

    We investigate a chain of oscillators with anharmonic on-site potentials, with long-range interparticle interactions, and coupled both to external and internal stochastic thermal reservoirs of Ornstein-Uhlenbeck type. We develop an integral representation, à la Feynman-Kac, for the correlations and the heat current. We assume the approximation of discrete times in the integral formalism (together with a simplification in a subdominant part of the harmonic interaction) and develop a suitable polymer expansion for the model. In the regime of strong anharmonicity, strong harmonic pinning, and for the interparticle interaction with integrable polynomial decay, we prove the convergence of the polymer expansion uniformly in volume (number of sites and time). We also show that the two-point correlation decays in space in the same way as the interparticle interaction. The existence of a convergent polymer expansion is of practical interest: it establishes rigorous support for a perturbative analysis of the heat flow problem and for the computation of the thermal conductivity in related anharmonic crystals, including those with inhomogeneous potentials and long-range interparticle interactions. To show the usefulness and trustworthiness of our approach, we compute the thermal conductivity of a specific anharmonic chain, and make a comparison with related numerical results presented in the literature.

  17. Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.

    PubMed

    Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I

    2016-03-01

    The effects of amygdaline from apricot kernel added to fodder on the growth of transplanted LYO-1 and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation produced a pronounced antitumor effect. Heat-processed apricot kernels given in 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of bacterial genome in the tumor.

  18. A small concentration expansion for the effective heat conductivity of a random disperse two-component material; an assessment of Batchelor's renormalization method

    NASA Astrophysics Data System (ADS)

    Vanbeek, P.

    1987-11-01

    The difficulty in expanding the effective properties of random disperse media in powers of the volume concentration c of the disperse phase, presented by the divergence of certain integrals that perform averaging of two-particle approximations, is considered. The random heat conduction problem analyzed by Jeffrey (1974) is treated using Batchelor's (1974) renormalization method. Batchelor's two-particle equation is extended to a hierarchical set of n-particle equations for arbitrary n. The solution of the hierarchy is seen to consist of a sequence of two-, three-, and more-particle terms. The two- and three-particle terms are calculated. It is proved that all i-particle terms (i ≥ 2) can be averaged convergently, showing that the hierarchical approach yields a well-defined expansion of the effective conductivity in integer powers of c. It follows that Jeffrey's expression for the effective conductivity is O(c²)-accurate.

  19. Resummed memory kernels in generalized system-bath master equations.

    PubMed

    Mavros, Michael G; Van Voorhis, Troy

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
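
    The role of Padé resummation in taming a divergent truncated expansion can be illustrated on a scalar series with `scipy.interpolate.pade`; the function and truncation order below are illustrative and unrelated to the spin-boson memory kernels:

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of f(x) = ln(1+x)/x about x = 0
coeffs = [1, -1/2, 1/3, -1/4, 1/5]

x = 3.0                          # well outside the radius of convergence
truth = np.log(1 + x) / x

partial_sum = sum(c * x**k for k, c in enumerate(coeffs))
p, q = pade(coeffs, 2)           # [2/2] Pade approximant from the same data
resummed = p(x) / q(x)

# the truncated series diverges badly; the Pade resummation does not
assert abs(partial_sum - truth) > 10
assert abs(resummed - truth) < 0.01
```

    The paper's warning carries over: a Padé denominator can introduce spurious poles, which is why the non-divergent exponential (Landau-Zener) resummation is recommended for the memory kernels.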

  20. Resummed memory kernels in generalized system-bath master equations

    NASA Astrophysics Data System (ADS)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  1. Resummed memory kernels in generalized system-bath master equations

    SciTech Connect

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  2. Robotic Intelligence Kernel: Architecture

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  3. Robotic Intelligence Kernel: Visualization

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.

  4. Development of a single kernel analysis method for detection of 2-acetyl-1-pyrroline in aromatic rice germplasm

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Solid-phase microextraction (SPME) in conjunction with GC/MS was used to distinguish non-aromatic rice (Oryza sativa, L.) kernels from aromatic rice kernels. In this method, single kernels along with 10 µl of 0.1 ng 2,4,6-trimethylpyridine (TMP) were placed in sealed vials and heated to 80 °C for 18...

  5. Multiple Kernel Point Set Registration.

    PubMed

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2015-12-22

    The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavy-tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels: after parameter learning, the kernel saliencies of the irrelevant kernels go to zero. This makes the choice of kernels less crucial and makes it easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.

  6. Multiple Kernel Point Set Registration.

    PubMed

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2016-06-01

    The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavy-tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels: after parameter learning, the kernel saliencies of the irrelevant kernels go to zero. This makes the choice of kernels less crucial and makes it easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.

  7. Kernel Optimization in Discriminant Analysis

    PubMed Central

    You, Di; Hamsici, Onur C.; Martinez, Aleix M.

    2011-01-01

    Kernel mapping is one of the most used approaches to intrinsically derive nonlinear classifiers. The idea is to use a kernel function which maps the original nonlinearly separable problem to a space of intrinsically larger dimensionality where the classes are linearly separable. A major problem in the design of kernel methods is to find the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation where the Bayes classifier becomes linear. We illustrate how this result can be successfully applied in several kernel discriminant analysis algorithms. Experimental results using a large number of databases and classifiers demonstrate the utility of the proposed approach. The paper also shows (theoretically and experimentally) that a kernel version of Subclass Discriminant Analysis yields the highest recognition rates. PMID:20820072
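
    The core idea described above, mapping a nonlinearly separable problem into a space where it becomes linear, can be sketched with an explicit feature map standing in for the kernel; the two-ring data and the threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def ring(radius, n=200, noise=0.05):
    """Points scattered around a circle of the given radius."""
    angles = rng.uniform(0, 2 * np.pi, n)
    r = radius + noise * rng.standard_normal(n)
    return np.column_stack([r * np.cos(angles), r * np.sin(angles)])

inner, outer = ring(1.0), ring(2.0)     # two classes: concentric rings,
X = np.vstack([inner, outer])           # not linearly separable in the plane
y = np.array([0] * 200 + [1] * 200)

# the (polynomial-kernel-style) mapped feature z = ||x||^2 makes the
# problem one-dimensional, where a single linear threshold separates it
z = np.sum(X ** 2, axis=1)
threshold = 1.5 ** 2                    # midway between the two radii
pred = (z > threshold).astype(int)
assert np.mean(pred == y) > 0.99
```

    Finding kernel parameters under which such a linear separation exists, without writing the map explicitly, is exactly the optimization problem the paper addresses.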

  8. Thermal Expansion "Paradox."

    ERIC Educational Resources Information Center

    Fakhruddin, Hasan

    1993-01-01

    Describes a paradox in the equation for thermal expansion: if the linear expansion formula is applied to heat a rod and then cool it by the same amount, the computed final length of the cooled rod is shorter than expected.
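
    The paradox is quick to reproduce numerically; the expansion coefficient and temperature change below are illustrative values:

```python
# Applying L' = L * (1 + alpha * dT) to heat a rod and then
# L'' = L' * (1 - alpha * dT) to cool it by the same dT does not
# return the rod to its original length -- the "paradox" above.
alpha = 12e-6      # 1/K, a typical linear expansion coefficient for steel
L0 = 1.0           # m, original length
dT = 100.0         # K, temperature change

L_hot = L0 * (1 + alpha * dT)       # heated rod
L_back = L_hot * (1 - alpha * dT)   # cooled back by the same dT

# algebraically L_back = L0 * (1 - (alpha*dT)**2), slightly shorter than L0
assert L_back < L0
assert abs(L_back - L0 * (1 - (alpha * dT) ** 2)) < 1e-15
```

    The discrepancy is second order in alpha * dT, which is why the linear formula remains a good approximation for small temperature changes.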

  9. Partially ionized gas flow and heat transfer in the separation, reattachment, and redevelopment regions downstream of an abrupt circular channel expansion.

    NASA Technical Reports Server (NTRS)

    Back, L. H.; Massier, P. F.; Roschke, E. J.

    1972-01-01

    Heat transfer and pressure measurements obtained in the separation, reattachment, and redevelopment regions along a tube and nozzle located downstream of an abrupt channel expansion are presented for a very high enthalpy flow of argon. The ionization energy fraction extended up to 0.6 at the tube inlet just downstream of the arc heater. Reattachment resulted from the growth of an instability in the vortex sheet-like shear layer between the central jet that discharged into the tube and the reverse flow along the wall at the lower Reynolds numbers, as indicated by water flow visualization studies which were found to dynamically model the high-temperature gas flow. A reasonably good prediction of the heat transfer in the reattachment region where the highest heat transfer occurred and in the redevelopment region downstream can be made by using existing laminar boundary layer theory for a partially ionized gas. In the experiments as much as 90 per cent of the inlet energy was lost by heat transfer to the tube and the nozzle wall.

  10. Thermal conductivity and thermal linear expansion measurements on molten salts for assessing their behaviour as heat transport fluid in thermodynamics solar systems

    NASA Astrophysics Data System (ADS)

    Coppa, P.; Bovesecchi, G.; Fabrizi, F.

    2010-08-01

    Molten salts (sodium and potassium nitrates) are going to be used in many different plants as heat-transfer fluids, e.g. concentrating solar plants, nuclear power plants, etc. In fact they present many important advantages: their safety and non-toxicity, their availability, and their low cost. But their use, e.g. in the energy-receiving pipe at the focus of the parabolic mirror concentrator of a solar thermodynamic plant, requires accurate knowledge of the thermophysical properties, above all thermal conductivity, viscosity, specific heat, and thermal linear expansion, in the temperature range 200-600°C. In the new laboratory at ENEA Casaccia, SolTerm Department, all these properties are going to be measured. Thermal conductivity is measured with the standard probe method (a linear heat source inserted into the material), using a special probe built for the foreseen temperature range (190-550°C). The probe is made of a ceramic quadrifilar pipe containing, in separate holes, the heater (Ni wire) and the thermometer (type J thermocouple). The thermal linear expansion will be measured by a special system designed and built to this end, which measures the sample dilatation via the reflection of a laser beam off the bottom of the meniscus at the liquid-solid interface. The viscosity will be evaluated by detecting the onset of natural convection in the same experiment used to measure thermal conductivity. In the paper, the construction of the devices, the results of preliminary tests, and an evaluation of the obtainable accuracy are reported.

  11. Kernel component analysis using an epsilon-insensitive robust loss function.

    PubMed

    Alzate, Carlos; Suykens, Johan A K

    2008-09-01

    Kernel principal component analysis (PCA) is a technique to perform feature extraction in a high-dimensional feature space, which is nonlinearly related to the original input space. The kernel PCA formulation corresponds to an eigendecomposition of the kernel matrix: eigenvectors with large eigenvalues correspond to the principal components in the feature space. Starting from the least squares support vector machine (LS-SVM) formulation to kernel PCA, we extend it to a generalized form of kernel component analysis (KCA) with a general underlying loss function made explicit. For classical kernel PCA, the underlying loss function is L2. In this generalized form, one can also plug in other loss functions. In the context of robust statistics, it is known that the L2 loss function is not robust because its influence function is not bounded. Therefore, outliers can skew the solution from the desired one. Another issue with kernel PCA is the lack of sparseness: the principal components are dense expansions in terms of kernel functions. In this paper, we introduce robustness and sparseness into kernel component analysis by using an epsilon-insensitive robust loss function. We propose two different algorithms. The first method solves a set of nonlinear equations with kernel PCA as starting points. The second method uses a simplified iterative weighting procedure that leads to solving a sequence of generalized eigenvalue problems. Simulations with toy and real-life data show improvements in terms of robustness together with a sparse representation.
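The centered-kernel eigendecomposition that kernel PCA rests on can be sketched in a few lines of NumPy; the RBF kernel, the bandwidth, and the toy data are illustrative assumptions, and the robust/sparse extensions proposed in the paper are not shown:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))                  # toy data: 50 points in R^3

# RBF (Gaussian) kernel matrix; gamma is an assumed bandwidth
gamma = 0.5
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-gamma * sq_dists)

# Center the kernel matrix (equivalent to centering the data in feature space)
n = len(X)
J = np.full((n, n), 1.0 / n)
Kc = K - J @ K - K @ J + J @ K @ J

# Principal components in feature space: leading eigenvectors of Kc
eigvals, eigvecs = np.linalg.eigh(Kc)             # ascending eigenvalue order
top = np.argsort(eigvals)[::-1][:2]               # two largest eigenvalues
alphas = eigvecs[:, top] / np.sqrt(eigvals[top])  # unit-norm components
scores = Kc @ alphas                              # projections of the data
```

scikit-learn's `KernelPCA` implements the same decomposition with more options; the point of the sketch is to make the "eigendecomposition of the kernel matrix" step concrete.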

  12. Kernel machine SNP-set testing under multiple candidate kernels.

    PubMed

    Wu, Michael C; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M; Harmon, Quaker E; Lin, Xinyi; Engel, Stephanie M; Molldrem, Jeffrey J; Armistead, Paul M

    2013-04-01

    Joint testing for the cumulative effect of multiple single-nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large-scale genetic association studies. The kernel machine (KM)-testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori because this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest P-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power vs. using the best candidate kernel.
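The composite-kernel construction amounts to a weighted combination of candidate kernel matrices, which keeps the result a valid (positive semidefinite) kernel. A toy sketch, with made-up genotypes and an assumed fixed weight rather than the paper's perturbation-based procedures:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_snps = 30, 10
G = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)  # 0/1/2 genotypes

# Two candidate kernels: linear, and an IBS (identity-by-state) style similarity
K_lin = G @ G.T
K_ibs = (2 * n_snps - np.abs(G[:, None, :] - G[None, :, :]).sum(axis=-1)) / (2 * n_snps)

# Composite kernel: a convex combination of PSD candidates is again PSD,
# so it can be plugged into the KM test like any single kernel
w = 0.5
K_comp = w * K_lin + (1 - w) * K_ibs
```

In practice the weight (or the choice among candidates) is what inflates type I error if tuned naively, which is the problem the paper's composite-kernel and perturbation strategies address.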

  13. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or...

  14. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored...

  15. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel,...

  16. Kernel phase and kernel amplitude in Fizeau imaging

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin J. S.

    2016-12-01

    Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.

  17. Atmosphere expansion and mass loss of close-orbit giant exoplanets heated by stellar XUV. I. Modeling of hydrodynamic escape of upper atmospheric material

    SciTech Connect

    Shaikhislamov, I. F.; Khodachenko, M. L.; Sasunov, Yu. L.; Lammer, H.; Kislyakova, K. G.; Erkaev, N. V.

    2014-11-10

    In the present series of papers we propose a consistent description of the mass loss process. To study in a comprehensive way the effects of the intrinsic magnetic field of a close-orbit giant exoplanet (a so-called hot Jupiter) on atmospheric material escape and the formation of a planetary inner magnetosphere, we start with a hydrodynamic model of upper atmosphere expansion in this paper. While considering a simple hydrogen atmosphere model, we focus on the self-consistent inclusion of the effects of radiative heating and ionization of the atmospheric gas with its consequent expansion into outer space. Primary attention is paid to an investigation of the role of the specific conditions at the inner and outer boundaries of the simulation domain, under which different regimes of material escape (free and restricted flow) are formed. A comparative study is performed of different processes, such as X-ray and ultraviolet (XUV) heating, material ionization and recombination, H₃⁺ cooling, adiabatic and Lyα cooling, and Lyα reabsorption. We confirm the basic consistency of the outcomes of our modeling with the results of other hydrodynamic models of expanding planetary atmospheres. In particular, we determine that, under the typical conditions of an orbital distance of 0.05 AU around a Sun-type star, a hot Jupiter plasma envelope may reach maximum temperatures up to ∼9000 K with a hydrodynamic escape speed of ∼9 km s⁻¹, resulting in mass loss rates of ∼(4-7)·10¹⁰ g s⁻¹. In the range of the considered stellar-planetary parameters and XUV fluxes, this is close to the mass loss in the energy-limited case. The inclusion of planetary intrinsic magnetic fields in the model is the subject of the follow-up paper (Paper II).

  18. A direct approach to Bergman kernel asymptotics for positive line bundles

    NASA Astrophysics Data System (ADS)

    Berman, Robert; Berndtsson, Bo; Sjöstrand, Johannes

    2008-10-01

    We give an elementary proof of the existence of an asymptotic expansion in powers of k of the Bergman kernel associated to L^k, where L is a positive line bundle over a compact complex manifold. We also give an algorithm for computing the coefficients in the expansion.

  19. The Adaptive Kernel Neural Network

    DTIC Science & Technology

    1989-10-01

    A neural network architecture for clustering and classification is described. The Adaptive Kernel Neural Network (AKNN) is a density estimation...classification layer. The AKNN retains the inherent parallelism common in neural network models. Its relationship to the kernel estimator allows the network to

  20. Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Theodoridis, Sergios

    2008-12-01

    Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.

  1. Robotic intelligence kernel

    DOEpatents

    Bruemmer, David J.

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.

  2. Flexible Kernel Memory

    PubMed Central

    Nowicki, Dimitri; Siegelmann, Hava

    2010-01-01

    This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, is in feature space, analogous to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on series of morphed faces. PMID:20552013

  3. Popping the Kernel: Modeling the States of Matter

    ERIC Educational Resources Information Center

    Hitt, Austin; White, Orvil; Hanson, Debbie

    2005-01-01

    This article discusses how to use popcorn to engage students in model building and to teach them about the nature of matter. Popping kernels is a simple and effective method to connect the concepts of heat, motion, and volume with the different phases of matter. Before proceeding with the activity the class should discuss the nature of scientific…

  4. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off....

  5. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...

  6. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle...

  7. Source identity and kernel functions for Inozemtsev-type systems

    NASA Astrophysics Data System (ADS)

    Langmann, Edwin; Takemura, Kouichi

    2012-08-01

    The Inozemtsev Hamiltonian is an elliptic generalization of the differential operator defining the BCN trigonometric quantum Calogero-Sutherland model, and its eigenvalue equation is a natural many-variable generalization of the Heun differential equation. We present kernel functions for Inozemtsev Hamiltonians and Chalykh-Feigin-Veselov-Sergeev-type deformations thereof. Our main result is a solution of a heat-type equation for a generalized Inozemtsev Hamiltonian which is the source of all these kernel functions. Applications are given, including a derivation of simple exact eigenfunctions and eigenvalues of the Inozemtsev Hamiltonian.

  8. FUV Continuum in Flare Kernels Observed by IRIS

    NASA Astrophysics Data System (ADS)

    Daw, Adrian N.; Kowalski, Adam; Allred, Joel C.; Cauzzi, Gianna

    2016-05-01

    Fits to Interface Region Imaging Spectrograph (IRIS) spectra observed from bright kernels during the impulsive phase of solar flares are providing long-sought constraints on the UV/white-light continuum emission. Results of fits of continua plus numerous atomic and molecular emission lines to IRIS far ultraviolet (FUV) spectra of bright kernels are presented. Constraints on beam energy and cross sectional area are provided by cotemporaneous RHESSI, FERMI, ROSA/DST, IRIS slit-jaw and SDO/AIA observations, allowing for comparison of the observed IRIS continuum to calculations of non-thermal electron beam heating using the RADYN radiative-hydrodynamic loop model.

  9. Use of Meixner functions in estimation of Volterra kernels of nonlinear systems with delay.

    PubMed

    Asyali, Musa H; Juusola, Mikko

    2005-02-01

    Volterra series representation of nonlinear systems is a mathematical analysis tool that has been successfully applied in many areas of biological sciences, especially in the area of modeling of hemodynamic response. In this study, we explored the possibility of using discrete time Meixner basis functions (MBFs) in estimating Volterra kernels of nonlinear systems. The problem of estimation of Volterra kernels can be formulated as a multiple regression problem and solved using least squares estimation. By expanding system kernels with some suitable basis functions, it is possible to reduce the number of parameters to be estimated and obtain better kernel estimates. Thus far, Laguerre basis functions have been widely used in this framework. However, research in signal processing indicates that when the kernels have a slow initial onset or delay, Meixner functions, which can be made to have a slow start, are more suitable in terms of providing a more accurate approximation to the kernels. We, therefore, compared the performance of Meixner functions, in kernel estimation, to that of Laguerre functions in some test cases that we constructed and in a real experimental case where we studied the responses of photoreceptor cells of adult fruit flies (Drosophila melanogaster). Our results indicate that when there is a slow initial onset or delay, MBF expansion provides better kernel estimates.
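The basis-expansion idea can be illustrated on a first-order (linear) kernel: writing the kernel as a combination of a few decaying basis functions reduces the least-squares regression from one coefficient per lag to one per basis function. The polynomial-times-exponential basis below is an illustrative stand-in for the Laguerre/Meixner bases discussed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 40                                    # kernel memory length (samples)
t = np.arange(M)
h_true = t * np.exp(-t / 6.0)             # "slow onset" first-order kernel
h_true = h_true / np.abs(h_true).max()    # unit peak

x = rng.standard_normal(2000)             # white-noise input
y = np.convolve(x, h_true)[: len(x)]      # system output y[n] = sum_k h[k] x[n-k]
y = y + 0.05 * rng.standard_normal(len(y))  # measurement noise

# Basis expansion: h(t) ~ sum_j c_j * b_j(t), with 4 decaying basis functions
B = np.stack([t ** j * np.exp(-t / 6.0) for j in range(4)], axis=1)   # (M, 4)

# Regression matrix of lagged inputs, then least squares for the 4 coefficients
X = np.stack([np.concatenate([np.zeros(k), x[: len(x) - k]]) for k in range(M)], axis=1)
c, *_ = np.linalg.lstsq(X @ B, y, rcond=None)
h_est = B @ c

print(np.max(np.abs(h_est - h_true)))     # small compared with the unit peak
```

The same reduction applies lag-pair-wise to second-order kernels, which is where the parameter savings of Laguerre or Meixner expansions become essential.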

  10. Force Field Benchmark of Organic Liquids: Density, Enthalpy of Vaporization, Heat Capacities, Surface Tension, Isothermal Compressibility, Volumetric Expansion Coefficient, and Dielectric Constant.

    PubMed

    Caleman, Carl; van Maaren, Paul J; Hong, Minyan; Hub, Jochen S; Costa, Luciano T; van der Spoel, David

    2012-01-10

    The chemical composition of small organic molecules is often very similar to amino acid side chains or the bases in nucleic acids, and hence there is no a priori reason why a molecular mechanics force field could not describe both organic liquids and biomolecules with a single parameter set. Here, we devise a benchmark for force fields in order to test the ability of existing force fields to reproduce some key properties of organic liquids, namely, the density, enthalpy of vaporization, the surface tension, the heat capacity at constant volume and pressure, the isothermal compressibility, the volumetric expansion coefficient, and the static dielectric constant. Well over 1200 experimental measurements were used for comparison to the simulations of 146 organic liquids. Novel polynomial interpolations of the dielectric constant (32 molecules), heat capacity at constant pressure (three molecules), and the isothermal compressibility (53 molecules) as a function of the temperature have been made, based on experimental data, in order to be able to compare simulation results to them. To compute the heat capacities, we applied the two-phase thermodynamics method (Lin et al. J. Chem. Phys. 2003, 119, 11792), which allows one to compute thermodynamic properties on the basis of the density of states as derived from the velocity autocorrelation function. The method is implemented in a new utility within the GROMACS molecular simulation package, named g_dos, and a detailed exposé of the underlying equations is presented. The purpose of this work is to establish the state of the art of two popular force fields, OPLS/AA (all-atom optimized potential for liquid simulation) and GAFF (generalized Amber force field), to find common bottlenecks, i.e., particularly difficult molecules, and to serve as a reference point for future force field development. To make for a fair playing field, all molecules were evaluated with the same parameter settings, such as thermostats and barostats.

  11. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  12. Relating dispersal and range expansion of California sea otters.

    PubMed

    Krkosek, Martin; Lauzon-Guay, Jean-Sébastien; Lewis, Mark A

    2007-06-01

    Linking dispersal and range expansion of invasive species has long challenged theoretical and quantitative ecologists. Subtle differences in dispersal can yield large differences in geographic spread, with speeds ranging from constant to rapidly increasing. We developed a stage-structured integrodifference equation (IDE) model of the California sea otter range expansion that occurred between 1914 and 1986. The non-spatial model, a linear matrix population model, was coupled to a suite of candidate dispersal kernels to form stage-structured IDEs. Demographic and dispersal parameters were estimated independent of range expansion data. Using a single dispersal parameter, alpha, we examined how well these stage-structured IDEs related small scale demographic and dispersal processes with geographic population expansion. The parameter alpha was estimated by fitting the kernels to dispersal data and by fitting the IDE model to range expansion data. For all kernels, the alpha estimate from range expansion data fell within the 95% confidence intervals of the alpha estimate from dispersal data. The IDE models with exponentially bounded kernels predicted invasion velocities that were captured within the 95% confidence bounds on the observed northbound invasion velocity. However, the exponentially bounded kernels yielded range expansions that were in poor qualitative agreement with range expansion data. An IDE model with fat (exponentially unbounded) tails and accelerating spatial spread yielded the best qualitative match. This model explained 94% and 97% of the variation in northbound and southbound range expansions when fit to range expansion data. These otters may have been fat-tailed accelerating invaders or they may have followed a piece-wise linear spread first over kelp forests and then over sandy habitats. Further, habitat-specific dispersal data could resolve these explanations.
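A scalar (non-stage-structured) IDE generation is a growth step followed by convolution with the dispersal kernel; the Laplace kernel and logistic growth below are illustrative stand-ins for the paper's fitted stage-structured model:

```python
import numpy as np

# Spatial grid (periodic, wide enough that the front stays away from the edges)
L, n = 200.0, 1024
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]

# Laplace (exponential) dispersal kernel; alpha is an illustrative parameter
alpha = 2.0
kern = np.exp(-np.abs(x) / alpha) / (2 * alpha)
kern_hat = np.fft.fft(np.fft.ifftshift(kern))

def grow(u, r=1.5):
    """Logistic growth, a stand-in for the paper's stage-structured demography."""
    return u + r * u * (1.0 - u)

def ide_step(u):
    """One generation: n_{t+1}(x) = integral k(x - y) f(n_t(y)) dy, via FFT."""
    return np.real(np.fft.ifft(kern_hat * np.fft.fft(grow(u)))) * dx

u = np.where(np.abs(x) < 2.0, 0.5, 0.0)   # small initial local population
width0 = (u > 0.1).sum() * dx
for _ in range(10):
    u = ide_step(u)
width10 = (u > 0.1).sum() * dx             # occupied range has expanded
```

With an exponentially bounded kernel like this one the front settles to a constant speed; swapping in a fat-tailed (exponentially unbounded) kernel produces the accelerating spread that best matched the otter range data.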

  13. Universal Expansion.

    ERIC Educational Resources Information Center

    McArdle, Heather K.

    1997-01-01

    Describes a week-long activity for general to honors-level students that addresses Hubble's law and the universal expansion theory. Uses a discrepant event-type activity to lead up to the abstract principles of the universal expansion theory. (JRH)

  14. RTOS kernel in portable electrocardiograph

    NASA Astrophysics Data System (ADS)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which a µC/OS-II RTOS can be embedded. The decision to use the kernel is based on its benefits, its license for educational use, and its intrinsic time control and peripheral management. The feasibility of its use on the electrocardiograph is evaluated on the basis of the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and evaluation of the resources used by each process. After this feasibility analysis, the code is migrated from a cyclic structure to one based on separate processes or tasks able to synchronize events, resulting in an electrocardiograph running on a single Central Processing Unit (CPU) under the RTOS.

  15. Volterra series truncation and kernel estimation of nonlinear systems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Billings, S. A.

    2017-02-01

    The Volterra series model is a direct generalisation of the linear convolution integral and is capable of displaying the intrinsic features of a nonlinear system in a simple and easy to apply way. Nonlinear system analysis using Volterra series is normally based on the analysis of its frequency-domain kernels and a truncated description. But the estimation of Volterra kernels and the truncation of the Volterra series are coupled with each other. In this paper, a novel complex-valued orthogonal least squares algorithm is developed. The new algorithm provides a powerful tool to determine which terms should be included in the Volterra series expansion and to estimate the kernels, and thus solves the two problems together. The estimated results are compared with those determined using the analytical expressions of the kernels to validate the method. To further evaluate the effectiveness of the method, the physical parameters of the system are also extracted from the measured kernels. Simulation studies demonstrate that the new approach not only can truncate the Volterra series expansion and estimate the kernels of a weakly nonlinear system, but can also indicate the applicability of the Volterra series analysis in a severely nonlinear system case.

  16. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

  17. Low-temperature heat capacities of CaAl2SiO6 glass and pyroxene and thermal expansion of CaAl2SiO6 pyroxene.

    USGS Publications Warehouse

    Haselton, H.T.; Hemingway, B.S.; Robie, R.A.

    1984-01-01

    Low-T heat capacities (5-380 K) have been measured by adiabatic calorimetry for synthetic CaAl2SiO6 glass and pyroxene. High-T unit cell parameters were measured for CaAl2SiO6 pyroxene by means of a Nonius Guinier-Lenne powder camera in order to determine the mean coefficient of thermal expansion in the T range 25-1200°C. -J.A.Z.

  18. New analytical TEMOM solutions for a class of collision kernels in the theory of Brownian coagulation

    NASA Astrophysics Data System (ADS)

    He, Qing; Shchekin, Alexander K.; Xie, Ming-Liang

    2015-06-01

    New analytical solutions in the theory of Brownian coagulation with a wide class of collision kernels have been found by using the Taylor-series expansion method of moments (TEMOM). It has been shown, for the different power exponents in the collision kernels from this class and for arbitrary initial conditions, that the relative rates of change of the zeroth and second moments of the particle volume distribution have the same long-time behavior, with power exponent -1, while the dimensionless particle moment related to the geometric standard deviation tends to a constant value equal to 2. The power exponent in the collision kernel affects the time needed to approach the self-preserving distribution: the smaller the exponent, the longer the time. It has also been shown that a constant collision kernel gives results for the moments in Brownian coagulation that are very close to those in the continuum regime.
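The quoted long-time behavior can be checked directly for the simplest member of the class, a constant collision kernel, where the zeroth moment obeys dM₀/dt = -(1/2)β₀M₀² and therefore M₀(t) → 2/(β₀t), so the relative rate (1/M₀)dM₀/dt behaves like -1/t (power exponent -1). A sketch with illustrative units:

```python
# Zeroth-moment (number concentration) decay for a constant collision kernel:
# dM0/dt = -(1/2) * beta0 * M0**2, integrated with simple Euler steps.
beta0 = 1.0
M0 = 1.0
t, dt, T = 0.0, 1e-3, 100.0

while t < T:
    M0 += dt * (-0.5 * beta0 * M0 * M0)
    t += dt

# Exact solution is M0(t) = 1 / (1 + 0.5 * beta0 * t); at large t this
# approaches the asymptote 2 / (beta0 * t), i.e. power-law decay t**-1.
print(M0, 2.0 / (beta0 * T))   # both values are close to 0.02
```

The same -1 exponent governs the second moment's relative rate in the class the paper studies; only the approach time depends on the kernel's power exponent.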

  19. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  20. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  1. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  2. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  3. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  4. Travel-Time and Amplitude Sensitivity Kernels

    DTIC Science & Technology

    2011-09-01

    amplitude sensitivity kernels shown in the lower panels concentrate about the corresponding eigenrays. Each 3D kernel exhibits a broad negative...in 2 and 3 dimensions have similar shapes to corresponding travel-time sensitivity kernels (TSKs), centered about the respective eigenrays

  5. Adaptive wiener image restoration kernel

    SciTech Connect

    Yuan, Ding

    2007-06-05

    A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system's optical transfer function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image with a Wiener restoration kernel.
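    As a hedged illustration of the underlying frequency-domain step (a generic 1-D Wiener deconvolution, not the patented device's exact procedure; the signal, blur and noise level are made up), the Wiener restoration kernel W = conj(H)/(|H|² + NSR) can be sketched as:

```python
import numpy as np

# Generic 1-D Wiener restoration sketch: blur a known "object" with a box
# point-spread function, add noise, then restore with the Wiener kernel.
rng = np.random.default_rng(0)
n = 256
x = np.zeros(n); x[100:140] = 1.0            # synthetic object
psf = np.zeros(n); psf[:9] = 1.0 / 9.0       # box blur (point-spread function)
H = np.fft.fft(psf)                          # optical transfer function
g = np.real(np.fft.ifft(np.fft.fft(x) * H)) + 0.01 * rng.standard_normal(n)
nsr = 1e-2                                   # assumed noise-to-signal ratio
W = np.conj(H) / (np.abs(H)**2 + nsr)        # Wiener restoration kernel
x_hat = np.real(np.fft.ifft(np.fft.fft(g) * W))
err_blurred = np.mean((g - x)**2)            # error before restoration
err_restored = np.mean((x_hat - x)**2)       # error after restoration
```

The regularizing NSR term keeps the kernel bounded where the transfer function is small, so restoration reduces, rather than amplifies, the overall error.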

  6. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  7. Identification of quantitative trait loci for popping traits and kernel characteristics in sorghum grain

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Popped grain sorghum has developed a niche among specialty snack-food consumers. In contrast to popcorn, sorghum has not benefited from persistent selective breeding for popping efficiency and kernel expansion ratio. While recent studies have already demonstrated that popping characteristics are h...

  8. Trajectory, Development, and Temperature of Spark Kernels Exiting into Quiescent Air (Preprint)

    DTIC Science & Technology

    2012-04-01

    measurements of the Hencken burner flames. Jay Gore is the academic advisor for the corresponding author. His tutelage is gratefully acknowledged...165-184. Au, S., Haley, R., Smy, P., "The Influence of the Igniter-Induced Blast Wave Upon the Initial Volume and Expansion of the Flame Kernel

  9. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    SciTech Connect

    Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.

  10. A reduced volumetric expansion factor plot

    NASA Technical Reports Server (NTRS)

    Hendricks, R. C.

    1979-01-01

    A reduced volumetric expansion factor plot was constructed for simple fluids which is suitable for engineering computations in heat transfer. Volumetric expansion factors were found useful in correlating heat transfer data over a wide range of operating conditions including liquids, gases and the near critical region.

  11. A reduced volumetric expansion factor plot

    NASA Technical Reports Server (NTRS)

    Hendricks, R. C.

    1979-01-01

    A reduced volumetric expansion factor plot has been constructed for simple fluids which is suitable for engineering computations in heat transfer. Volumetric expansion factors have been found useful in correlating heat transfer data over a wide range of operating conditions including liquids, gases and the near critical region.

  12. The flare kernel in the impulsive phase

    NASA Technical Reports Server (NTRS)

    Dejager, C.

    1986-01-01

    The impulsive phase of a flare is characterized by impulsive bursts of X-ray and microwave radiation, related to impulsive footpoint heating up to 50 or 60 MK, by upward gas velocities (150 to 400 km/sec), and by a gradual increase of the flare's thermal energy content. These phenomena, as well as non-thermal effects, are all related to the impulsive energy injection into the flare. The available observations are also quantitatively consistent with a model in which energy is injected into the flare by beams of energetic electrons, causing ablation of chromospheric gas, followed by convective rise of gas. Thus, a hole is burned into the chromosphere; at the end of the impulsive phase of an average flare the lower part of that hole is situated about 1800 km above the photosphere. H alpha and other optical and UV line emission is radiated by a thin layer (approx. 20 km) at the bottom of the flare kernel. The upward rising and outward streaming gas cools down by conduction in about 45 s. The non-thermal effects in the initial phase are due to curtailing of the energy distribution function by the escape of energetic electrons. The single flux tube model of a flare does not fit these observations; instead we propose the spaghetti-bundle model. Microwave and gamma-ray observations suggest the occurrence of dense flare knots of approx. 800 km diameter and high temperature. Future observations should concentrate on locating the microwave/gamma-ray sources and on determining the kernel's fine structure and the related multi-loop structure of the flaring area.

  13. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each involving a combination of several elementary or intermediate kernels, and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show a clear gain compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
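    A toy two-layer combination in this spirit can be sketched as follows; the weights and the exponential activation here are illustrative assumptions, not the paper's exact architecture. A positive combination of PSD kernels passed through an elementwise exponential stays PSD, since the exponential's series expansion is a positive sum of Schur powers:

```python
import numpy as np

# Toy two-layer "deep kernel": positive combination of two elementary kernels
# (linear and RBF), passed through an exponential activation. The result is
# checked to be positive semi-definite via its smallest eigenvalue.
rng = np.random.default_rng(7)
X = rng.standard_normal((15, 4))
K_lin = X @ X.T                                    # linear kernel
sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
K_rbf = np.exp(-0.5 * sq)                          # RBF kernel
K_deep = np.exp(0.3 * K_lin + 0.7 * K_rbf)         # layer-2 activation
min_eig = np.linalg.eigvalsh(K_deep).min()         # should be >= 0
```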

  14. Optimization of numerical orbitals using the Helmholtz kernel

    NASA Astrophysics Data System (ADS)

    Solala, Eelis; Losilla, Sergio A.; Sundholm, Dage; Xu, Wenhua; Parkkinen, Pauli

    2017-02-01

    We present an integration scheme for optimizing the orbitals in numerical electronic structure calculations on general molecules. The orbital optimization is performed by integrating the Helmholtz kernel in the double bubble and cube basis, where bubbles represent the steep part of the functions in the vicinity of the nuclei, whereas the remaining cube part is expanded on an equidistant three-dimensional grid. The bubbles' part is treated by using one-center expansions of the Helmholtz kernel in spherical harmonics multiplied with modified spherical Bessel functions of the first and second kinds. The angular part of the bubble functions can be integrated analytically, whereas the radial part is integrated numerically. The cube part is integrated using a similar method as we previously implemented for numerically integrating two-electron potentials. The behavior of the integrand of the auxiliary dimension introduced by the integral transformation of the Helmholtz kernel has also been investigated. The correctness of the implementation has been checked by performing Hartree-Fock self-consistent-field calculations on H2, H2O, and CO. The obtained energies are compared with reference values in the literature, showing that an accuracy of 10⁻⁴ to 10⁻⁷ E_h can be obtained with our approach.

  15. Optimization of numerical orbitals using the Helmholtz kernel.

    PubMed

    Solala, Eelis; Losilla, Sergio A; Sundholm, Dage; Xu, Wenhua; Parkkinen, Pauli

    2017-02-28

    We present an integration scheme for optimizing the orbitals in numerical electronic structure calculations on general molecules. The orbital optimization is performed by integrating the Helmholtz kernel in the double bubble and cube basis, where bubbles represent the steep part of the functions in the vicinity of the nuclei, whereas the remaining cube part is expanded on an equidistant three-dimensional grid. The bubbles' part is treated by using one-center expansions of the Helmholtz kernel in spherical harmonics multiplied with modified spherical Bessel functions of the first and second kinds. The angular part of the bubble functions can be integrated analytically, whereas the radial part is integrated numerically. The cube part is integrated using a similar method as we previously implemented for numerically integrating two-electron potentials. The behavior of the integrand of the auxiliary dimension introduced by the integral transformation of the Helmholtz kernel has also been investigated. The correctness of the implementation has been checked by performing Hartree-Fock self-consistent-field calculations on H2, H2O, and CO. The obtained energies are compared with reference values in the literature, showing that an accuracy of 10⁻⁴ to 10⁻⁷ E_h can be obtained with our approach.

  16. Nonlinear projection trick in kernel methods: an alternative to the kernel trick.

    PubMed

    Kwak, Nojun

    2013-12-01

    In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so-called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced-dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named the nonlinear projection trick, in contrast to the kernel trick. With this technique, the applicability of the kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses the L1-norm instead of the L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach.
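    The construction admits a short sketch: eigendecompose the kernel matrix, keep the components above the numerical rank, and map each training point to explicit coordinates whose pairwise dot products reproduce the kernel values (the data and kernel choice below are illustrative assumptions):

```python
import numpy as np

# Nonlinear projection trick sketch: explicit reduced-dimensional coordinates
# from the eigendecomposition of an RBF kernel matrix. Dot products of the
# mapped points rebuild the kernel matrix up to the dropped tiny eigenvalues.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))
sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
K = np.exp(-0.5 * sq)                        # RBF kernel matrix (20 x 20)
w, V = np.linalg.eigh(K)                     # eigendecomposition, w ascending
keep = w > 1e-10                             # effective dimensionality
Y = V[:, keep] * np.sqrt(w[keep])            # explicit mapped coordinates
K_rebuilt = Y @ Y.T                          # dot products in mapped space
max_err = np.abs(K_rebuilt - K).max()
```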

  17. Numerical simulations on influence of urban land cover expansion and anthropogenic heat release on urban meteorological environment in Pearl River Delta

    NASA Astrophysics Data System (ADS)

    Zhang, Ning; Wang, Xuemei; Chen, Yan; Dai, Wei; Wang, Xueyuan

    2016-11-01

    Urbanization is an extreme way in which human beings change the land use/land cover of the earth's surface, with anthropogenic heat release occurring at the same time. In this paper, the anthropogenic heat release parameterization scheme in the Weather Research and Forecasting model is modified to account for the spatial heterogeneity of the release, and the impacts of land use change and anthropogenic heat release on urban boundary layer structure in the Pearl River Delta, China, are studied with a series of numerical experiments. The results show that anthropogenic heat release contributes nearly 75% of the urban heat island intensity in the studied period. The impact of anthropogenic heat release on near-surface specific humidity is very weak, but the impact on relative humidity is apparent owing to the change in near-surface air temperature. The near-surface wind speed decreases after the local land use is changed to urban type, owing to the increased land surface roughness, but anthropogenic heat release increases low-level wind speed and decreases it aloft in the urban boundary layer, because the release reduces boundary layer stability and enhances vertical mixing.

  18. Expansive Cements

    DTIC Science & Technology

    1970-10-01

    either burned simultaneously with a portland cement or interground with portland cement clinker; Type M - a mixture of portland cement, calcium-aluminate... clinker that is interground with portland clinker or blended with portland cement or, alternately, it may be formed simultaneously with the portland ... clinker compounds during the burning process. 3. Expansive cement, Type M is either a mixture of portland cement, calcium aluminate cement, and calcium

  19. Diffusion Map Kernel Analysis for Target Classification

    DTIC Science & Technology

    2010-06-01

    Gaussian and Polynomial kernels are most familiar from support vector machines. The Laplacian and Rayleigh were introduced previously in [7]. ...Cancer • Clev. Heart: Heart Disease Data Set, Cleveland • Wisc. BC: Wisconsin Breast Cancer Original • Sonar2: Shallow Water Acoustic Toolset [9...the Rayleigh kernel captures the embedding with an average PC of 77.3% and a slightly higher PFA than the Gaussian kernel. For the Wisc. BC

  20. Thermophysical properties of ilvaite CaFe²⁺₂Fe³⁺Si₂O₇O(OH); heat capacity from 7 to 920 K and thermal expansion between 298 and 856 K

    USGS Publications Warehouse

    Robie, R.A.; Evans, H.T.; Hemingway, B.S.

    1988-01-01

    The heat capacity of ilvaite from Seriphos, Greece was measured by adiabatic shield calorimetry (6.4 to 380.7 K) and by differential scanning calorimetry (340 to 950 K). The thermal expansion of ilvaite was also investigated, by X-ray methods, between 308 and 853 K. At 298.15 K the standard molar heat capacity and entropy for ilvaite are 298.9±0.6 and 292.3±0.6 J/(mol·K), respectively. Between 333 and 343 K ilvaite changes from monoclinic to orthorhombic. The antiferromagnetic transition is shown by a hump in C°p with a Néel temperature of 121.9±0.5 K. A rounded hump in C°p between 330 and 400 K may possibly arise from the thermally activated electron delocalization (hopping) known to take place in this temperature region. © 1988 Springer-Verlag.

  1. Molecular Hydrodynamics from Memory Kernels

    NASA Astrophysics Data System (ADS)

    Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin

    2016-04-01

    The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^(-3/2). We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius.

  2. Fission gas retention and axial expansion of irradiated metallic fuel

    SciTech Connect

    Fenske, G.R.; Emerson, J.E.; Savoie, F.E.; Johanson, E.W.

    1986-05-01

    Out-of-reactor experiments utilizing direct electrical heating and infrared heating techniques were performed on irradiated metallic fuel. The results indicate that accelerated expansion can occur during thermal transients and that the accelerated expansion is driven by retained fission gases. The results also demonstrate that gas retention and, hence, expansion behavior are a function of axial position within the pin.

  3. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  4. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...

  5. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...

  6. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  7. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  8. Kernel current source density method.

    PubMed

    Potworowski, Jan; Jakuczun, Wit; Lȩski, Szymon; Wójcik, Daniel

    2012-02-01

    Local field potentials (LFP), the low-frequency part of extracellular electrical recordings, are a measure of the neural activity reflecting dendritic processing of synaptic inputs to neuronal populations. To localize synaptic dynamics, it is convenient, whenever possible, to estimate the density of transmembrane current sources (CSD) generating the LFP. In this work, we propose a new framework, the kernel current source density method (kCSD), for nonparametric estimation of CSD from LFP recorded from arbitrarily distributed electrodes using kernel methods. We test specific implementations of this framework on model data measured with one-, two-, and three-dimensional multielectrode setups. We compare these methods with the traditional approach through numerical approximation of the Laplacian and with the recently developed inverse current source density methods (iCSD). We show that iCSD is a special case of kCSD. The proposed method opens up new experimental possibilities for CSD analysis from existing or new recordings on arbitrarily distributed electrodes (not necessarily on a grid), which can be obtained in extracellular recordings of single unit activity with multiple electrodes.

  9. KERNEL PHASE IN FIZEAU INTERFEROMETRY

    SciTech Connect

    Martinache, Frantz

    2010-11-20

    The detection of high-contrast companions at small angular separation appears feasible in conventional direct images using the self-calibration properties of interferometric observable quantities. The friendly notion of closure phase, which is key to the recent observational successes of non-redundant aperture masking interferometry used with adaptive optics, appears to be one example of a wide family of observable quantities that are not contaminated by phase noise. In the high-Strehl regime, soon to be available thanks to the coming generation of extreme adaptive optics systems on ground-based telescopes, and already available from space, closure-phase-like information can be extracted from any direct image, even one taken with a redundant aperture. These new phase-noise-immune observable quantities, called kernel phases, are determined a priori from knowledge of the geometry of the pupil only. Re-analysis of archive data acquired with the Hubble Space Telescope NICMOS instrument using this new kernel-phase algorithm demonstrates the power of the method, as it clearly detects and locates, with milliarcsecond precision, a known companion to a star at an angular separation less than the diffraction limit.
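    The algebra behind the method can be sketched with a toy model: for a hypothetical phase transfer matrix A mapping pupil-plane phase errors into Fourier-plane phases (the matrix and dimensions below are made up, not an actual pupil model), the rows of a left-nullspace matrix K give observables immune to those errors:

```python
import numpy as np

# Toy kernel-phase sketch: K spans the left null space of the phase transfer
# matrix A (so K @ A = 0), making K @ phi insensitive to pupil phase errors,
# in analogy with closure phase.
rng = np.random.default_rng(2)
A = rng.standard_normal((10, 4))     # 10 Fourier phases, 4 pupil error modes
U, s, Vt = np.linalg.svd(A)
K = U[:, 4:].T                       # left null space rows: K @ A == 0
phi_obj = rng.standard_normal(10)    # object's intrinsic Fourier phases
pupil_err = rng.standard_normal(4)   # instrumental phase noise
phi_obs = phi_obj + A @ pupil_err    # corrupted measurement
noise_leak = np.abs(K @ phi_obs - K @ phi_obj).max()  # should vanish
```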

  10. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, learning methods for nonlinear RankSVM remain time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that the proposed method trains much faster than kernel RankSVM and achieves comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256
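    One of the two approximations named above, random Fourier features, admits a compact sketch (the dimensions, bandwidth and sample points below are arbitrary assumptions): a random feature map z(·) whose dot products approximate a Gaussian kernel without ever forming the kernel matrix:

```python
import numpy as np

# Random Fourier features (Rahimi & Recht) sketch: z(x).z(y) approximates the
# Gaussian kernel exp(-||x - y||^2 / 2) when frequencies are drawn from N(0, I).
rng = np.random.default_rng(3)
d, D = 5, 20000                       # input dimension, number of features
W = rng.standard_normal((D, d))       # random frequencies ~ N(0, I)
b = rng.uniform(0.0, 2.0 * np.pi, D)  # random phases

def z(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x = rng.standard_normal(d)
y = rng.standard_normal(d)
k_exact = np.exp(-np.sum((x - y)**2) / 2.0)
k_approx = z(x) @ z(y)                # Monte Carlo estimate of the kernel
abs_err = abs(k_exact - k_approx)     # shrinks as O(1/sqrt(D))
```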

  11. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, learning methods for nonlinear RankSVM remain time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that the proposed method trains much faster than kernel RankSVM and achieves comparable or better performance than state-of-the-art ranking algorithms.

  12. Improving the Bandwidth Selection in Kernel Equating

    ERIC Educational Resources Information Center

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
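    Silverman's rule of thumb itself is a one-line computation, h = 0.9·min(s, IQR/1.34)·n^(-1/5); a minimal sketch on synthetic score data (the score distribution below is an illustrative assumption, not data from the study):

```python
import numpy as np

# Silverman's rule-of-thumb bandwidth for a Gaussian kernel density estimate:
# h = 0.9 * min(sample std, IQR / 1.34) * n**(-1/5).
rng = np.random.default_rng(4)
scores = rng.normal(loc=50.0, scale=10.0, size=1000)  # hypothetical test scores
n = scores.size
sd = scores.std(ddof=1)                               # sample standard deviation
iqr = np.subtract(*np.percentile(scores, [75, 25]))   # interquartile range
h = 0.9 * min(sd, iqr / 1.34) * n ** (-0.2)           # bandwidth estimate
```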

  13. Load regulating expansion fixture

    DOEpatents

    Wagner, Lawrence M.; Strum, Michael J.

    1998-01-01

    A free standing self contained device for bonding ultra thin metallic films, such as 0.001 inch beryllium foils. The device will regulate to a predetermined load for solid state bonding when heated to a bonding temperature. The device includes a load regulating feature, whereby the expansion stresses generated for bonding are regulated and self adjusting. The load regulator comprises a pair of friction isolators with a plurality of annealed copper members located therebetween. The device, with the load regulator, will adjust to and maintain a stress level needed to successfully and economically complete a leak tight bond without damaging thin foils or other delicate components.

  14. Load regulating expansion fixture

    DOEpatents

    Wagner, L.M.; Strum, M.J.

    1998-12-15

    A free standing self contained device for bonding ultra thin metallic films, such as 0.001 inch beryllium foils is disclosed. The device will regulate to a predetermined load for solid state bonding when heated to a bonding temperature. The device includes a load regulating feature, whereby the expansion stresses generated for bonding are regulated and self adjusting. The load regulator comprises a pair of friction isolators with a plurality of annealed copper members located therebetween. The device, with the load regulator, will adjust to and maintain a stress level needed to successfully and economically complete a leak tight bond without damaging thin foils or other delicate components. 1 fig.

  15. The context-tree kernel for strings.

    PubMed

    Cuturi, Marco; Vert, Jean-Philippe

    2005-10-01

    We propose a new kernel for strings which borrows ideas and techniques from information theory and data compression. This kernel can be used in combination with any kernel method, in particular Support Vector Machines for string classification, with notable applications in proteomics. By using a Bayesian averaging framework with conjugate priors on a class of Markovian models known as probabilistic suffix trees or context-trees, we compute the value of this kernel in linear time and space while only using the information contained in the spectrum of the considered strings. This is ensured through an adaptation of a compression method known as the context-tree weighting algorithm. Encouraging classification results are reported on a standard protein homology detection experiment, showing that the context-tree kernel performs well with respect to other state-of-the-art methods while using no biological prior knowledge.
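    The full context-tree weighting computation is beyond a short sketch, but a plain spectrum kernel, the dot product of k-mer count vectors computable in time linear in the string lengths, illustrates the kind of spectrum information the method works from (k and the example strings are arbitrary, and this is a simpler kernel than the paper's):

```python
from collections import Counter

# Spectrum kernel sketch: count all k-length substrings of each string and
# take the dot product of the two count vectors.
def spectrum_kernel(s, t, k=3):
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[w] * ct[w] for w in cs)

k_self = spectrum_kernel("MKVLAA", "MKVLAA")   # 4 trigrams, all matching
k_cross = spectrum_kernel("MKVLAA", "GGGGGG")  # no shared trigrams
```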

  16. Kernel method for corrections to scaling.

    PubMed

    Harada, Kenji

    2015-07-01

    Scaling analysis, in which one infers scaling exponents and a scaling function in a scaling law from given data, is a powerful tool for determining universal properties of critical phenomena in many fields of science. However, there are corrections to scaling in many cases, and then the inference problem becomes ill-posed by an uncontrollable irrelevant scaling variable. We propose a new kernel method based on Gaussian process regression to fix this problem generally. We test the performance of the new kernel method for some example cases. In all cases, when the precision of the example data increases, inference results of the new kernel method correctly converge. Because there is no limitation in the new kernel method for the scaling function even with corrections to scaling, unlike in the conventional method, the new kernel method can be widely applied to real data in critical phenomena.
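    The Gaussian-process machinery underlying the method can be sketched in a few lines of plain NumPy (the data, kernel and hyperparameters are illustrative assumptions; the scaling-analysis layer itself is omitted): the posterior mean of a noisy function under an RBF covariance.

```python
import numpy as np

# Minimal Gaussian process regression: posterior mean under an RBF covariance
# with observation noise, evaluated on a prediction grid.
rng = np.random.default_rng(5)
x = np.linspace(-1.0, 1.0, 25)
y = np.sin(3.0 * x) + 0.05 * rng.standard_normal(25)  # noisy observations

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

K = rbf(x, x) + 0.05**2 * np.eye(25)         # covariance + noise variance
xs = np.linspace(-1.0, 1.0, 101)             # prediction grid
mean = rbf(xs, x) @ np.linalg.solve(K, y)    # GP posterior mean
rmse = np.sqrt(np.mean((mean - np.sin(3.0 * xs))**2))
```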

  17. Heat capacity and entropy at the temperatures 5 K to 720 K and thermal expansion from the temperatures 298 K to 573 K of synthetic enargite (Cu3AsS4)

    USGS Publications Warehouse

    Seal, R.R.; Robie, R.A.; Hemingway, B.S.; Evans, H.T.

    1996-01-01

    The heat capacity of synthetic Cu3AsS4 (enargite) was measured by quasi-adiabatic calorimetry from the temperatures 5 K to 355 K and by differential scanning calorimetry from T = 339 K to T = 720 K. Heat-capacity anomalies were observed at T = (58.5 ± 0.5) K (ΔtrsH°m = 1.4·R·K; ΔtrsS°m = 0.02·R) and at T = (66.5 ± 0.5) K (ΔtrsH°m = 4.6·R·K; ΔtrsS°m = 0.08·R), where R = 8.31451 J·K⁻¹·mol⁻¹. The causes of the anomalies are unknown. At T = 298.15 K, C°p,m and S°m(T) are (190.4 ± 0.2) J·K⁻¹·mol⁻¹ and (257.6 ± 0.6) J·K⁻¹·mol⁻¹, respectively. The superambient heat capacities are described from T = 298.15 K to T = 944 K by the least-squares regression equation: C°p,m/(J·K⁻¹·mol⁻¹) = (196.7 ± 1.2) + (0.0499 ± 0.0016)·(T/K) - (1 918 000 ± 84 000)·(T/K)⁻². The thermal expansion of synthetic enargite was measured from T = 298.15 K to T = 573 K by powder X-ray diffraction. The thermal expansion of the unit-cell volume (Z = 2) is described from T = 298.15 K to T = 573 K by the least-squares equation: V/pm³ = 10⁶·(288.2 ± 0.2) + 10⁴·(1.49 ± 0.04)·(T/K). © 1996 Academic Press Limited.
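    As a consistency check, the reported regression equation can be evaluated at ambient temperature, where it should roughly reproduce the measured heat capacity:

```python
# Evaluate the reported superambient regression for enargite,
# Cp,m/(J K^-1 mol^-1) = 196.7 + 0.0499*(T/K) - 1918000*(T/K)^-2,
# at T = 298.15 K; the result should be near the measured 190.4 J/(K mol).
def cp_enargite(T):
    return 196.7 + 0.0499 * T - 1_918_000 / T**2

cp_298 = cp_enargite(298.15)
```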

  18. Expansion Microscopy

    PubMed Central

    Chen, Fei; Tillberg, Paul W.; Boyden, Edward S.

    2014-01-01

    In optical microscopy, fine structural details are resolved by using refraction to magnify images of a specimen. Here we report the discovery that, by synthesizing a swellable polymer network within a specimen, it can be physically expanded, resulting in physical magnification. By covalently anchoring specific labels located within the specimen directly to the polymer network, labels spaced closer than the optical diffraction limit can be isotropically separated and optically resolved, a process we call expansion microscopy (ExM). Thus, this process can be used to perform scalable super-resolution microscopy with diffraction-limited microscopes. We demonstrate ExM with effective ~70 nm lateral resolution in both cultured cells and brain tissue, performing three-color super-resolution imaging of ~10⁷ μm³ of the mouse hippocampus with a conventional confocal microscope. PMID:25592419

  19. Search for the enhancement of the thermal expansion coefficient of superfluid 4He near T_λ by a heat current

    NASA Technical Reports Server (NTRS)

    Liu, Y.; Israelsson, U.; Larson, M.

    2001-01-01

    The transition in 4He in the presence of a heat current (Q) provides an ideal system for the study of phase transitions under non-equilibrium, dynamical conditions. Many physical properties become nonlinear and Q-dependent near the transition temperature, T_λ.

  20. Bayesian Kernel Mixtures for Counts.

    PubMed

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
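    The rounding idea admits a tiny illustration (the mixture below is a made-up example, not the article's model): drawing from a continuous Gaussian mixture and rounding to nonnegative integers yields a count distribution whose variance falls below its mean, something a Poisson mixture cannot produce:

```python
import numpy as np

# Rounded-Gaussian counts: sample a two-component Gaussian mixture, round to
# nonnegative integers, and check that the counts are underdispersed
# (variance < mean), unlike any Poisson mixture.
rng = np.random.default_rng(6)
comp = rng.integers(0, 2, size=20000)                  # mixture component
mu = np.where(comp == 0, 2.0, 2.5)                     # component means
draws = rng.normal(mu, 0.3)                            # continuous kernel draws
counts = np.clip(np.rint(draws), 0, None).astype(int)  # round to counts
m, v = counts.mean(), counts.var()
```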

  1. MULTIVARIATE KERNEL PARTITION PROCESS MIXTURES

    PubMed Central

    Dunson, David B.

    2013-01-01

    Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate random effects distributions, this article proposes a kernel partition process (KPP) in which the cluster allocation varies for different parameters. The KPP is shown to be the driving measure for a multivariate ordered Chinese restaurant process that induces a highly-flexible dependence structure in local clustering. This structure allows the relative locations of the random effects to inform the clustering process, with spatially-proximal random effects likely to be assigned the same cluster index. An exact block Gibbs sampler is developed for posterior computation, avoiding truncation of the infinite measure. The methods are applied to hormone curve data, and a dependent KPP is proposed for classification from functional predictors. PMID:24478563

  2. Dielectric properties of almond kernels associated with radio frequency and microwave pasteurization.

    PubMed

    Li, Rui; Zhang, Shuang; Kou, Xiaoxi; Ling, Bo; Wang, Shaojin

    2017-02-10

    To develop advanced pasteurization treatments based on radio frequency (RF) or microwave (MW) energy, dielectric properties of almond kernels were measured using an open-ended coaxial-line probe and impedance analyzer at frequencies between 10 and 3000 MHz, moisture contents between 4.2% and 19.6% w.b., and temperatures between 20 and 90 °C. The results showed that both the dielectric constant and the loss factor of the almond kernels decreased sharply with increasing frequency over the RF range (10-300 MHz), but gradually over the measured MW range (300-3000 MHz). Both the dielectric constant and the loss factor increased with increasing temperature and moisture content, with the increase most pronounced at high temperature and moisture levels. Quadratic polynomial equations were developed to fit the relationship between the dielectric constant or loss factor at 27, 40, 915 or 2450 MHz and sample temperature/moisture content, with R(2) greater than 0.967. Penetration depth of the electromagnetic wave into the samples decreased with increasing frequency (27-2450 MHz), moisture content (4.2-19.6% w.b.), and temperature (20-90 °C). Temperature profiles of RF-heated almond kernels at three moisture levels were obtained both experimentally and by computer simulation based on the measured dielectric properties. Based on the results of this study, RF treatment has the potential to be used in practice for pasteurization of almond kernels with acceptable heating uniformity.
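
    The quadratic-surface fit described in the abstract can be sketched with ordinary least squares. The data below are synthetic placeholders spanning the stated temperature and moisture ranges, not the paper's probe measurements.

```python
import numpy as np

# Hypothetical (temperature, moisture, dielectric constant) data spanning
# the measured ranges; the real values come from the coaxial-probe readings.
rng = np.random.default_rng(0)
T = rng.uniform(20.0, 90.0, 40)          # deg C
M = rng.uniform(4.2, 19.6, 40)           # % w.b.
truth = 2.0 + 0.01 * T + 0.15 * M + 1e-4 * T**2 + 2e-3 * M**2 + 5e-4 * T * M
eps = truth + rng.normal(0.0, 0.01, T.size)

# Full quadratic surface in (T, M), fitted by ordinary least squares.
X = np.column_stack([np.ones_like(T), T, M, T**2, M**2, T * M])
coef, *_ = np.linalg.lstsq(X, eps, rcond=None)

pred = X @ coef
r2 = 1.0 - np.sum((eps - pred) ** 2) / np.sum((eps - eps.mean()) ** 2)
```

    With low measurement noise the coefficient of determination comes out close to 1, mirroring the R(2) > 0.967 reported for the real fits.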

  3. Induction of expression and co-localization of heat shock polypeptides with the polyalanine expansion mutant of poly(A)-binding protein N1 after chemical stress

    SciTech Connect

    Wang, Qishan; Bag, Jnanankur

    2008-05-23

    Formation of nuclear inclusions consisting of aggregates of a polyalanine expansion mutant of nuclear poly(A)-binding protein (PABPN1) is the hallmark of oculopharyngeal muscular dystrophy (OPMD). OPMD is a late onset autosomal dominant disease. Patients with this disorder exhibit progressive swallowing difficulty and drooping of their eyelids, which starts around the age of 50. Previously we have shown that treatment of cells expressing the mutant PABPN1 with a number of chemicals such as ibuprofen, indomethacin, ZnSO{sub 4}, and 8-hydroxy-quinoline induces HSP70 expression and reduces PABPN1 aggregation. In these studies we have shown that expression of additional HSPs including HSP27, HSP40, and HSP105 was induced in mutant PABPN1 expressing cells following exposure to the chemicals mentioned above. Furthermore, all three additional HSPs were translocated to the nucleus and probably helped to properly fold the mutant PABPN1 by co-localizing with this protein.

  4. Using TOPEX Satellite El Niño Altimetry Data to Introduce Thermal Expansion and Heat Capacity Concepts in Chemistry Courses

    NASA Astrophysics Data System (ADS)

    Blanck, Harvey F.

    1999-12-01

    draw and is a reasonable visual representation of the way in which the thermocline is depressed by warm water along a warm-water ridge. Discussion Various factors must be taken into account to modify the raw TOPEX radar altimeter data to obtain meaningful information. For example, as mentioned at JPL's TOPEX Web site, radar propagation speed is altered slightly by variations in water vapor in the atmosphere, and therefore atmospheric water vapor content must be determined by the satellite to correct the radar altimeter data. Studies of heat storage using direct temperature measurements have been conducted (5), and comparison of TOPEX altimetry data with actual temperature measurements shows them to be in reasonably good agreement (6). Low-profile hills and valleys on the ocean are generated or influenced by a variety of factors other than thermal energy. Ocean dynamics are complex indeed. Comparisons of thermal energy (steric effect) and wind-induced surface changes have been examined in relation to TOPEX data (7). The calculations of thermal energy excess in warm-water ocean bumps from radar altimetry data alone, while not unreasonable, must be understood to be a simplification for an extremely complex system. The Gaussian model proposed for the cross section of a warm-water ridge requires more study, but it is a useful visual model of the warm-water bump above the normal surface and its subsurface warm-water wedge. I believe students will enjoy these relevant calculations and learn a bit about density, thermal expansion, and heat capacity in the process. I have tried to present sufficient data and detail to allow teachers to pick and choose calculations appropriate to the level of their students. It is evident that dimensional analysis is a distinct advantage in using these equations. I have also tried to include enough descriptive detail of the TOPEX data and El Niño to answer many of the questions students may ask. 
The Web sites mentioned are very informative with

  5. Putting Priors in Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
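
    One common form of such a mixture-density kernel can be illustrated: if p(c|x) are the posterior component memberships under a fitted mixture, then K(x, y) = Σ_c p(c|x) p(c|y) is an inner product of posterior vectors and hence a valid Mercer kernel. The toy sketch below uses a hand-picked 1-D Gaussian mixture rather than one learned from data, and is only a simplified stand-in for the ensemble construction in the paper.

```python
import numpy as np

# Hand-picked 1-D Gaussian mixture standing in for a fitted density model;
# in the paper an ensemble of mixtures is learned from the data (e.g. by EM).
weights = np.array([0.5, 0.5])
means = np.array([-2.0, 2.0])
stds = np.array([1.0, 1.0])

def posteriors(x):
    """p(component | x) for each point in the 1-D array x."""
    dens = weights * np.exp(-0.5 * ((x[:, None] - means) / stds) ** 2) / stds
    return dens / dens.sum(axis=1, keepdims=True)

def mixture_density_kernel(x):
    """K[i, j] = sum_c p(c | x_i) p(c | x_j): an inner product of posterior
    membership vectors, hence a symmetric positive semi-definite kernel."""
    P = posteriors(x)
    return P @ P.T

x = np.random.default_rng(1).normal(0.0, 2.0, 30)
K = mixture_density_kernel(x)
eigvals = np.linalg.eigvalsh(K)   # all >= 0 up to round-off
```

    Two points that fall in the same mixture component get a kernel value near 1, so the learned density directly shapes the notion of similarity.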

  6. Perturbed kernel approximation on homogeneous manifolds

    NASA Astrophysics Data System (ADS)

    Levesley, J.; Sun, X.

    2007-02-01

    Current methods for interpolation and approximation within a native space rely heavily on the strict positive-definiteness of the underlying kernels. If the domains of approximation are the unit spheres in Euclidean spaces, then zonal kernels (kernels that are invariant under the orthogonal group action) are strongly favored. In the implementation of these methods to handle real-world problems, however, some or all of the symmetries and positive-definiteness may be lost in digitization due to small random errors that occur unpredictably during various stages of the execution. Perturbation analysis is therefore needed to address the stability problem encountered. In this paper we study two kinds of perturbations of positive-definite kernels: small random perturbations and perturbations by Dunkl's intertwining operators [C. Dunkl, Y. Xu, Orthogonal polynomials of several variables, Encyclopedia of Mathematics and Its Applications, vol. 81, Cambridge University Press, Cambridge, 2001]. We show that with some reasonable assumptions, a small random perturbation of a strictly positive-definite kernel can still provide vehicles for interpolation and enjoy the same error estimates. We examine the actions of the Dunkl intertwining operators on zonal (strictly) positive-definite kernels on spheres. We show that the resulting kernels are (strictly) positive-definite on spheres of lower dimensions.

  7. Thermal Expansion

    NASA Astrophysics Data System (ADS)

    Ventura, Guglielmo; Perfetti, Mauro

    All solid materials, when cooled to low temperatures, experience a change in physical dimensions called "thermal contraction," typically less than 1% in volume over the 4-300 K temperature range. Although the effect is small, it can have a heavy impact on the design of cryogenic devices. The thermal contraction of different materials may vary by as much as an order of magnitude: since cryogenic devices are constructed at room temperature from many different materials, a major concern is the effect of differential thermal contraction and the resulting thermal stress that may occur when two dissimilar materials are bonded together. In this chapter, the theory of thermal contraction is reported in Sect. 1.2. Section 1.3 is devoted to the phenomenon of negative thermal expansion and its applications.
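
    The size of the effect follows from integrating the expansion coefficient over the cooling range, dL/L = ∫ α(T) dT. A sketch with a hypothetical α(T), shaped like the typical low-temperature falloff, reproduces the sub-1%-in-volume scale quoted above; real coefficients come from material tables.

```python
import numpy as np

def alpha(T):
    """Hypothetical coefficient of linear thermal expansion (per K) with the
    typical falloff toward low temperature; real values come from tables."""
    x = (T / 300.0) ** 3
    return 30e-6 * x / (1.0 + x)

# Relative length change on cooling from 300 K to 4 K: dL/L = integral of
# alpha(T) dT, evaluated here with the trapezoidal rule.
T = np.linspace(4.0, 300.0, 2000)
a = alpha(T)
dL_over_L = np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(T))
```

    The linear contraction here is a few tenths of a percent, i.e. roughly three times that in volume, consistent with the "lower than 1% in volume" figure in the text.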

  8. A simple method for computing the relativistic Compton scattering kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Prasad, M. K.; Kershaw, D. S.; Beason, J. D.

    1986-01-01

    Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.

  9. Relationship between cyanogenic compounds in kernels, leaves, and roots of sweet and bitter kernelled almonds.

    PubMed

    Dicenta, F; Martínez-Gómez, P; Grané, N; Martín, M L; León, A; Cánovas, J A; Berenguer, V

    2002-03-27

    The relationship between the levels of cyanogenic compounds (amygdalin and prunasin) in kernels, leaves, and roots of 5 sweet-, 5 slightly bitter-, and 5 bitter-kernelled almond trees was determined. Variability was observed among the genotypes for these compounds. Prunasin was found only in the vegetative parts (roots and leaves) for all genotypes tested. Amygdalin was detected only in the kernels, mainly in bitter genotypes. In general, bitter-kernelled genotypes had higher levels of prunasin in their roots than nonbitter ones, but the correlation between cyanogenic compounds in the different parts of plants was not high. While prunasin seems to be present in most almond roots (with a variable concentration), only bitter-kernelled genotypes are able to transform it into amygdalin in the kernel. Breeding for prunasin-based resistance to the buprestid beetle Capnodis tenebrionis L. is discussed.

  10. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  11. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  12. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a...

  13. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a...

  14. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  15. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more...

  16. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...

  17. Transient laminar opposing mixed convection in a symmetrically heated duct with a plane symmetric sudden contraction-expansion: Buoyancy and inclination effects

    NASA Astrophysics Data System (ADS)

    Martínez-Suástegui, Lorenzo; Barreto, Enrique; Treviño, César

    2015-11-01

    Transient laminar opposing mixed convection is studied experimentally in an open vertical rectangular channel with two discrete protruded heat sources subjected to uniform heat flux simulating electronic components. Experiments are performed for a Reynolds number of Re = 700, Prandtl number of Pr = 7, inclination angles with respect to the horizontal of γ = 0°, 45° and 90°, and different values of buoyancy strength or modified Richardson number, Ri* = Gr*/Re². From the experimental measurements, the space averaged surface temperatures, overall Nusselt number of each simulated electronic chip, phase-space plots of the self-oscillatory system, characteristic times of temperature oscillations and spectral distribution of the fluctuating energy have been obtained. Results show that when a threshold in the buoyancy parameter is reached, strong three-dimensional secondary flow oscillations develop in the axial and spanwise directions. This research was supported by the Consejo Nacional de Ciencia y Tecnología (CONACYT), Grant number 167474 and by the Secretaría de Investigación y Posgrado del IPN, Grant number SIP 20141309.

  18. Kernel-Based Equiprobabilistic Topographic Map Formation.

    PubMed

    Van Hulle, M M

    1998-09-15

    We introduce a new unsupervised competitive learning rule, the kernel-based maximum entropy learning rule (kMER), which performs equiprobabilistic topographic map formation in regular, fixed-topology lattices, for use with nonparametric density estimation as well as nonparametric regression analysis. The receptive fields of the formal neurons are overlapping radially symmetric kernels, compatible with radial basis functions (RBFs); but unlike other learning schemes, the radii of these kernels do not have to be chosen in an ad hoc manner: the radii are adapted to the local input density, together with the weight vectors that define the kernel centers, so as to produce maps of which the neurons have an equal probability to be active (equiprobabilistic maps). Both an "online" and a "batch" version of the learning rule are introduced, which are applied to nonparametric density estimation and regression, respectively. The application envisaged is blind source separation (BSS) from nonlinear, noisy mixtures.

  19. Quantum kernel applications in medicinal chemistry.

    PubMed

    Huang, Lulu; Massa, Lou

    2012-07-01

    Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.
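
    The kernel summation idea can be illustrated numerically. One published form of the KEM double-kernel approximation is E ≈ Σ_{a<b} E_ab − (n − 2) Σ_a E_a, where E_a are single-kernel and E_ab double-kernel energies. The fragment energies below are hypothetical and strictly additive, in which case the reconstruction is exact.

```python
from itertools import combinations

def kem_energy(single, double):
    """Kernel energy method reconstruction: the sum of all double-kernel
    energies minus (n - 2) times the sum of single-kernel energies."""
    n = len(single)
    pair_sum = sum(double[(a, b)] for a, b in combinations(range(n), 2))
    return pair_sum - (n - 2) * sum(single)

# Hypothetical fragment energies (arbitrary units). With strictly additive,
# non-interacting kernels the reconstruction is exact, so the total must
# equal the plain sum of the fragments.
single = [-10.0, -12.0, -9.0, -11.0]
double = {(a, b): single[a] + single[b]
          for a, b in combinations(range(len(single)), 2)}
e_total = kem_energy(single, double)
```

    With real quantum-chemical energies the double-kernel terms additionally capture pairwise interactions, which is where the method gains its accuracy over a naive fragment sum.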

  20. KITTEN Lightweight Kernel 0.1 Beta

    SciTech Connect

    Pedretti, Kevin; Levenhagen, Michael; Kelly, Suzanne; VanDyke, John; Hudson, Trammell

    2007-12-12

    The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general purpose OS kernels.

  1. Negative thermal expansion materials: technological key for control of thermal expansion

    PubMed Central

    Takenaka, Koshi

    2012-01-01

    Most materials expand upon heating. However, although rare, some materials contract upon heating. Such negative thermal expansion (NTE) materials have enormous industrial merit because they can control the thermal expansion of materials. Recent progress in materials research enables us to obtain materials exhibiting negative coefficients of linear thermal expansion over −30 ppm K⁻¹. Such giant NTE is opening a new phase of control of thermal expansion in composites. Specifically examining practical aspects, this review briefly summarizes materials and mechanisms of NTE as well as composites containing NTE materials, based mainly on activities of the last decade. PMID:27877465
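
    How an NTE filler cancels a matrix's expansion can be quantified with a simple rule-of-mixtures estimate; the Turner model below weights each phase's coefficient by bulk modulus times volume fraction. The matrix values and moduli are illustrative assumptions, not figures from the review; only the −30 ppm/K filler coefficient echoes the text.

```python
def turner_cte(alphas, moduli, fractions):
    """Turner rule-of-mixtures estimate of a composite's linear CTE:
    each phase's alpha is weighted by (bulk modulus x volume fraction)."""
    num = sum(a * k * v for a, k, v in zip(alphas, moduli, fractions))
    den = sum(k * v for k, v in zip(moduli, fractions))
    return num / den

# Hypothetical matrix (+60 ppm/K) and a giant-NTE filler (-30 ppm/K, the
# figure quoted in the review); the moduli (GPa) are illustrative only.
a_m, k_m = 60e-6, 5.0
a_f, k_f = -30e-6, 100.0

# Filler volume fraction that zeroes the composite CTE (set numerator = 0).
v_f = a_m * k_m / (a_m * k_m - a_f * k_f)
cte = turner_cte([a_m, a_f], [k_m, k_f], [1.0 - v_f, v_f])
```

    Because the stiff NTE filler is weighted by its much larger modulus, a modest volume fraction suffices to null the composite's expansion in this estimate.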

  2. Negative thermal expansion materials: technological key for control of thermal expansion.

    PubMed

    Takenaka, Koshi

    2012-02-01

    Most materials expand upon heating. However, although rare, some materials contract upon heating. Such negative thermal expansion (NTE) materials have enormous industrial merit because they can control the thermal expansion of materials. Recent progress in materials research enables us to obtain materials exhibiting negative coefficients of linear thermal expansion over -30 ppm K⁻¹. Such giant NTE is opening a new phase of control of thermal expansion in composites. Specifically examining practical aspects, this review briefly summarizes materials and mechanisms of NTE as well as composites containing NTE materials, based mainly on activities of the last decade.

  3. TICK: Transparent Incremental Checkpointing at Kernel Level

    SciTech Connect

    Petrini, Fabrizio; Gioiosa, Roberto

    2004-10-25

    TICK is a software package implemented in Linux 2.6 that allows user processes to be saved and restored without any change to the user code or binary. With TICK, a process can be suspended by the Linux kernel upon receiving an interrupt and saved to a file. This file can later be restored on another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module for Linux version 2.6.5.

  4. RKF-PCA: robust kernel fuzzy PCA.

    PubMed

    Heo, Gyeongyong; Gader, Paul; Frigui, Hichem

    2009-01-01

    Principal component analysis (PCA) is a mathematical method that reduces the dimensionality of the data while retaining most of the variation in the data. Although PCA has been applied in many areas successfully, it suffers from sensitivity to noise and is limited to linear principal components. The noise sensitivity problem comes from the least-squares measure used in PCA, and the limitation to linear components originates from the fact that PCA uses an affine transform defined by eigenvectors of the covariance matrix and the mean of the data. In this paper, a robust kernel PCA method that extends kernel PCA and uses fuzzy memberships is introduced to tackle the two problems simultaneously. We first introduce an iterative method to find robust principal components, called Robust Fuzzy PCA (RF-PCA), which has a connection with robust statistics and entropy regularization. The RF-PCA method is then extended to a non-linear one, Robust Kernel Fuzzy PCA (RKF-PCA), using kernels. The modified kernel used in the RKF-PCA satisfies Mercer's condition, which means that the derivation of kernel PCA is also valid for the RKF-PCA. Formal analyses and experimental results suggest that the RKF-PCA is an efficient non-linear dimension reduction method and is more noise-robust than the original kernel PCA.
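
    For reference, the plain kernel PCA that RKF-PCA extends reduces to centering a Gram matrix in feature space and taking its top eigenvectors. A minimal sketch on toy data (the RBF kernel and bandwidth are arbitrary choices, and none of the robust or fuzzy machinery of the paper appears here):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))             # toy data

def rbf_gram(X, gamma=0.5):
    """Gaussian RBF Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

K = rbf_gram(X)
n = K.shape[0]
J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
Kc = J @ K @ J                           # center the Gram matrix in feature space

vals, vecs = np.linalg.eigh(Kc)
vals, vecs = vals[::-1], vecs[:, ::-1]   # sort eigenpairs in descending order
# Projections of the data onto the first two kernel principal components.
proj = vecs[:, :2] * np.sqrt(np.maximum(vals[:2], 0.0))
```

    The centering step plays the role of subtracting the feature-space mean; RKF-PCA replaces the implicit least-squares objective behind this eigendecomposition with a fuzzy-membership one to gain noise robustness.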

  5. Thermal expansion, thermal conductivity, and heat capacity measurements for boreholes UE25 NRG-4, UE25 NRG-5, USW NRG-6, and USW NRG-7/7A

    SciTech Connect

    Brodsky, N.S.; Riggins, M.; Connolly, J.; Ricci, P.

    1997-09-01

    Specimens were tested from four thermal-mechanical units, namely Tiva Canyon (TCw), Paintbrush Tuff (PTn), and two Topopah Spring units (TSw1 and TSw2), and from two lithologies, i.e., welded devitrified (TCw, TSw1, TSw2) and nonwelded vitric tuff (PTn). Thermal conductivities in W (m·K)⁻¹, averaged over all boreholes, ranged (depending upon temperature and saturation state) from 1.2 to 1.9 for TCw, from 0.4 to 0.9 for PTn, from 1.0 to 1.7 for TSw1, and from 1.5 to 2.3 for TSw2. Mean coefficients of thermal expansion were highly temperature dependent, and values averaged over all boreholes ranged (depending upon temperature and saturation state) from 6.6 × 10⁻⁶ to 49 × 10⁻⁶ °C⁻¹ for TCw, from negative values to 16 × 10⁻⁶ °C⁻¹ for PTn, from 6.3 × 10⁻⁶ to 44 × 10⁻⁶ °C⁻¹ for TSw1, and from 6.7 × 10⁻⁶ to 37 × 10⁻⁶ °C⁻¹ for TSw2. Mean values of thermal capacitance in J/(cm³·K), averaged over all specimens, ranged from 1.6 to 2.1 for TSw1 and from 1.8 to 2.5 for TSw2. In general, the lithostratigraphic classifications of rock assigned by the USGS are consistent with the mineralogical data presented in this report.

  6. On the formation of new ignition kernels in the chemically active dispersed mixtures

    NASA Astrophysics Data System (ADS)

    Ivanov, M. F.; Kiverin, A. D.

    2015-11-01

    The specific features of combustion waves propagating through channels filled with a chemically active gaseous mixture and non-uniformly suspended microparticles are studied numerically. It is shown that heat radiated by the hot products, absorbed by the microparticles and then transferred to the surrounding fresh mixture, can be the source of new ignition kernels in regions where the particles cluster. The spatial distribution of the particles determines the combustion regimes arising in these kernels. Notable cases include multi-kernel ignition in polydisperse mixtures and ignition with shock and detonation formation in mixtures with pronounced gradients of microparticle concentration.

  7. Moisture Sorption Isotherms and Properties of Sorbed Water of Neem ( Azadirichta indica A. Juss) Kernels

    NASA Astrophysics Data System (ADS)

    Ngono Mbarga, M. C.; Bup Nde, D.; Mohagir, A.; Kapseu, C.; Elambo Nkeng, G.

    2017-01-01

    A neem tree growing abundantly in India as well as in some regions of Asia and Africa gives fruits whose kernels contain about 40-50% oil. This oil has high therapeutic and cosmetic qualities and is recently projected to be an important raw material for the production of biodiesel. Its seed is harvested at high moisture contents, which leads to high post-harvest losses. In the paper, the sorption isotherms are determined by the static gravimetric method at 40, 50, and 60°C to establish a database useful in defining drying and storage conditions of neem kernels. Five different equations are validated for modeling the sorption isotherms of neem kernels. The properties of sorbed water, such as the monolayer moisture content, surface area of adsorbent, number of adsorbed monolayers, and the percent of bound water are also defined. The critical moisture content necessary for the safe storage of dried neem kernels is shown to range from 5 to 10% dry basis, which can be obtained at a relative humidity less than 65%. The isosteric heats of sorption at 5% moisture content are 7.40 and 22.5 kJ/kg for the adsorption and desorption processes, respectively. This work is the first, to the best of our knowledge, to give the important parameters necessary for drying and storage of neem kernels, a potential raw material for the production of oil to be used in pharmaceutics, cosmetics, and biodiesel manufacturing.
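
    Isotherm equations of the kind validated in such studies, for example the GAB model, are fitted to measured (water activity, moisture) pairs by least squares. The sketch below uses synthetic data from hypothetical GAB parameters and a coarse grid search in place of a proper nonlinear solver, purely to keep it self-contained.

```python
import numpy as np

def gab(aw, m0, c, k):
    """GAB sorption isotherm: equilibrium moisture content (dry basis) as a
    function of water activity aw; m0 is the monolayer moisture content."""
    return m0 * c * k * aw / ((1.0 - k * aw) * (1.0 - k * aw + c * k * aw))

# Synthetic "measured" isotherm generated from hypothetical parameters.
aw = np.linspace(0.1, 0.8, 15)
m_obs = gab(aw, m0=5.0, c=10.0, k=0.8)

# Coarse grid search for the least-squares parameters (a stand-in for a
# proper nonlinear fit).
best = None
for m0 in np.linspace(3.0, 7.0, 41):
    for c in np.linspace(5.0, 15.0, 21):
        for k in np.linspace(0.6, 0.95, 36):
            sse = float(np.sum((gab(aw, m0, c, k) - m_obs) ** 2))
            if best is None or sse < best[0]:
                best = (sse, m0, c, k)
```

    The recovered m0 is exactly the monolayer moisture content discussed in the abstract, which is what makes such fits useful for setting safe-storage limits.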

  8. Kernel-Based Reconstruction of Graph Signals

    NASA Astrophysics Data System (ADS)

    Romero, Daniel; Ma, Meng; Giannakis, Georgios B.

    2017-02-01

    A number of applications in engineering, social sciences, physics, and biology involve inference over networks. In this context, graph signals are widely encountered as descriptors of vertex attributes or features in graph-structured data. Estimating such signals in all vertices given noisy observations of their values on a subset of vertices has been extensively analyzed in the literature of signal processing on graphs (SPoG). This paper advocates kernel regression as a framework generalizing popular SPoG modeling and reconstruction and expanding their capabilities. Formulating signal reconstruction as a regression task on reproducing kernel Hilbert spaces of graph signals permeates benefits from statistical learning, offers fresh insights, and allows for estimators to leverage richer forms of prior information than existing alternatives. A number of SPoG notions such as bandlimitedness, graph filters, and the graph Fourier transform are naturally accommodated in the kernel framework. Additionally, this paper capitalizes on the so-called representer theorem to devise simpler versions of existing Tikhonov regularized estimators, and offers a novel probabilistic interpretation of kernel methods on graphs based on graphical models. Motivated by the challenges of selecting the bandwidth parameter in SPoG estimators or the kernel map in kernel-based methods, the present paper further proposes two multi-kernel approaches with complementary strengths. Whereas the first enables estimation of the unknown bandwidth of bandlimited signals, the second allows for efficient graph filter selection. Numerical tests with synthetic as well as real data demonstrate the merits of the proposed methods relative to state-of-the-art alternatives.
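
    A minimal instance of this setting: interpolate a graph signal from a few sampled vertices using kernel ridge regression with a Laplacian-based kernel. The ring graph, the regularized-Laplacian kernel, and the ridge parameter below are illustrative assumptions, not choices from the paper.

```python
import numpy as np

# Ring graph on 8 vertices: a small, concrete stand-in for a generic graph.
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A           # combinatorial graph Laplacian

# Regularized-Laplacian kernel K = (I + s * L)^(-1), one Laplacian-based
# choice; the scale s and the ridge mu below are illustrative.
s = 1.0
K = np.linalg.inv(np.eye(n) + s * L)

vals, vecs = np.linalg.eigh(L)
f_true = vecs[:, 1]                      # a smooth (low-frequency) graph signal

obs = [0, 2, 4, 6]                       # vertices where the signal is sampled
y = f_true[obs]
mu = 1e-6
alpha = np.linalg.solve(K[np.ix_(obs, obs)] + mu * np.eye(len(obs)), y)
f_hat = K[:, obs] @ alpha                # kernel ridge estimate on all vertices
```

    Swapping the kernel (e.g. for a diffusion or bandlimited kernel) changes the implicit prior on the signal, which is exactly the selection problem the paper's multi-kernel approaches address.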

  9. Oecophylla longinoda (Hymenoptera: Formicidae) Lead to Increased Cashew Kernel Size and Kernel Quality.

    PubMed

    Anato, F M; Sinzogan, A A C; Offenberg, J; Adandonon, A; Wargui, R B; Deguenon, J M; Ayelo, P M; Vayssières, J-F; Kossou, D K

    2017-03-03

    Weaver ants, Oecophylla spp., are known to positively affect cashew, Anacardium occidentale L., raw nut yield, but their effects on the kernels have not been reported. We compared nut size and the proportion of marketable kernels between raw nuts collected from trees with and without ants. Raw nuts collected from trees with weaver ants were 2.9% larger than nuts from control trees (i.e., without weaver ants), leading to 14% higher proportion of marketable kernels. On trees with ants, the kernel: raw nut ratio from nuts damaged by formic acid was 4.8% lower compared with nondamaged nuts from the same trees. Weaver ants provided three benefits to cashew production by increasing yields, yielding larger nuts, and by producing greater proportions of marketable kernel mass.

  10. A new Mercer sigmoid kernel for clinical data classification.

    PubMed

    Carrington, André M; Fieguth, Paul W; Chen, Helen H

    2014-01-01

    In classification with Support Vector Machines, only Mercer kernels, i.e. valid kernels, such as the Gaussian RBF kernel, are widely accepted and thus suitable for clinical data. Practitioners would also like to use the sigmoid kernel, a non-Mercer kernel, but its range of validity is difficult to determine, and even within range its validity is in dispute. Despite these shortcomings the sigmoid kernel is used by some, and two kernels in the literature attempt to emulate and improve upon it. We propose the first Mercer sigmoid kernel, which is therefore trustworthy for the classification of clinical data. We show the similarity between the Mercer sigmoid kernel and the sigmoid kernel and, in the process, identify a normalization technique that improves the classification accuracy of the latter. The Mercer sigmoid kernel achieves the best mean accuracy on three clinical data sets, detecting melanoma in skin lesions better than the most popular kernels; while with non-clinical data sets it has no significant difference in median accuracy as compared with the Gaussian RBF kernel. It consistently classifies some points correctly that the Gaussian RBF kernel does not and vice versa.
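
    The Mercer issue with the classical sigmoid kernel is easy to exhibit numerically: Mercer validity requires every Gram matrix to be positive semi-definite, yet tanh(⟨a, b⟩ + c) with c < 0 already fails on the diagonal at the origin. The sketch below uses illustrative parameter values and does not implement the paper's proposed Mercer sigmoid kernel.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 5))
X[0] = 0.0   # include the origin so the sigmoid Gram has tanh(c) on its diagonal

def gram(X, kernel):
    n = X.shape[0]
    return np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

def sigmoid_kernel(a, b, alpha=1.0, c=-1.0):
    """The classical sigmoid kernel tanh(alpha * <a, b> + c); not Mercer in general."""
    return np.tanh(alpha * np.dot(a, b) + c)

def rbf_kernel(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

# Mercer's condition requires every Gram matrix to be positive semi-definite.
# Here K_sigmoid[0, 0] = tanh(-1) < 0, so the sigmoid Gram cannot be PSD,
# while the Gaussian RBF Gram always is.
min_eig_sigmoid = np.linalg.eigvalsh(gram(X, sigmoid_kernel)).min()
min_eig_rbf = np.linalg.eigvalsh(gram(X, rbf_kernel)).min()
```

    An SVM trained on an indefinite Gram matrix loses its convexity guarantees, which is precisely why a provably Mercer replacement matters for clinical use.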

  11. Kernel bandwidth optimization in spike rate estimation.

    PubMed

    Shimazaki, Hideaki; Shinomoto, Shigeru

    2010-08-01

    The kernel smoother and the time-histogram are classical tools for estimating an instantaneous rate of spike occurrences. We recently established a method for selecting the bin width of the time-histogram, based on the principle of minimizing the mean integrated square error (MISE) between the estimated rate and the unknown underlying rate. Here we apply the same optimization principle to kernel density estimation in selecting the width or "bandwidth" of the kernel, and further extend the algorithm to allow a variable bandwidth, in conformity with the data. The variable kernel has the potential to accurately grasp non-stationary phenomena, such as abrupt changes in the firing rate, which we often encounter in neuroscience. In order to avoid possible overfitting due to excessive freedom, we introduce a stiffness constant for bandwidth variability. Our method automatically adjusts the stiffness constant, thereby adapting to the entire set of spike data. It is revealed that the classical kernel smoother may exhibit goodness-of-fit comparable to, or even better than, that of modern sophisticated rate estimation methods, provided that the bandwidth is selected properly for a given set of spike data, according to the optimization methods presented here.
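The MISE-minimization principle can be illustrated with plain least-squares cross-validation for a fixed Gaussian bandwidth; this is a simplified stand-in for the authors' method (their actual cost function and the variable-bandwidth extension differ), with synthetic "spike times" standing in for real data:

```python
import numpy as np

def lscv_score(x, h):
    """Least-squares cross-validation score for a Gaussian KDE with bandwidth h.

    Estimates the MISE up to a data-independent constant:
        integral(f_hat^2) - (2/n) * sum_i f_hat_{-i}(x_i),
    where both terms have closed forms for Gaussian kernels.
    """
    n = len(x)
    d = x[:, None] - x[None, :]
    # integral of f_hat^2: Gaussian of variance 2h^2 evaluated at pairwise diffs
    term1 = np.exp(-d**2 / (4 * h**2)).sum() / (n**2 * 2 * h * np.sqrt(np.pi))
    # leave-one-out density estimate at each sample point
    K = np.exp(-d**2 / (2 * h**2)) / (h * np.sqrt(2 * np.pi))
    loo = (K.sum(axis=1) - K.diagonal()) / (n - 1)
    return term1 - 2 * loo.mean()

rng = np.random.default_rng(1)
spikes = np.sort(rng.normal(0.0, 1.0, size=200))  # stand-in "spike times"
grid = np.linspace(0.05, 1.5, 30)
h_opt = grid[np.argmin([lscv_score(spikes, h) for h in grid])]
print("selected bandwidth:", h_opt)
```

The selected bandwidth minimizes an unbiased estimate of the MISE over the grid, mirroring the selection principle of the abstract without its adaptive machinery.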

  12. Analog forecasting with dynamics-adapted kernels

    NASA Astrophysics Data System (ADS)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
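The core idea, replacing the single best analog with a kernel-weighted ensemble of analogs in delay-coordinate space, can be sketched compactly. This is my own minimal illustration, not the authors' implementation: the embedding length q, kernel scale eps, and toy data are assumptions, and none of the paper's dynamics-adapted kernel features are included.

```python
import numpy as np

def kernel_analog_forecast(series, query, q=8, eps=0.5, lead=1):
    """Kernel-weighted analog forecast (minimal sketch).

    series : 1D historical record
    query  : the last q observed values (a Takens delay-coordinate vector)
    Each historical delay vector is weighted by a Gaussian similarity kernel;
    the forecast is the weighted average of the values `lead` steps after
    each analog.
    """
    n = len(series)
    starts = np.arange(n - q - lead + 1)
    embeds = np.stack([series[s:s + q] for s in starts])  # delay vectors
    d2 = ((embeds - query) ** 2).sum(axis=1)
    w = np.exp(-d2 / eps)
    w /= w.sum()                                          # normalized kernel weights
    successors = series[starts + q + lead - 1]            # value `lead` steps ahead
    return w @ successors

# toy historical record: a noisy sine wave
rng = np.random.default_rng(2)
t = np.arange(500)
series = np.sin(0.2 * t) + 0.05 * rng.normal(size=500)
query = series[-8:]
fc = kernel_analog_forecast(series[:-1], query)
print("forecast:", fc)
```

With a single nearest analog the forecast inherits that one trajectory's noise; the kernel-weighted ensemble averages it out, which is the skill improvement the abstract reports.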

  13. The connection between regularization operators and support vector kernels.

    PubMed

    Smola, Alex J.; Schölkopf, Bernhard; Müller, Klaus Robert

    1998-06-01

    In this paper a correspondence is derived between regularization operators used in regularization networks and support vector kernels. We prove that the Green's functions associated with regularization operators are suitable support vector kernels with equivalent regularization properties. Moreover, the paper provides an analysis of currently used support vector kernels from the viewpoint of regularization theory and of the corresponding operators associated with the classes of both polynomial kernels and translation-invariant kernels. The latter are also analyzed on periodic domains. As a by-product we show that a large number of radial basis functions, namely conditionally positive definite functions, may be used as support vector kernels.
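In symbols, the correspondence described above (my restatement, not the authors' exact notation) is that a kernel $k$ qualifies as a support vector kernel for a regularization operator $P$ when $k$ is the Green's function of $P^{\ast}P$:

```latex
\[
(P^{\ast}P\,k)(x,\cdot) = \delta_{x}
\quad\Longrightarrow\quad
\langle P k(x,\cdot),\, P k(y,\cdot)\rangle
 = \langle k(x,\cdot),\, (P^{\ast}P\,k)(y,\cdot)\rangle
 = k(x,y),
\]
```

so the kernel reproduces the regularization inner product of its own sections. For translation-invariant kernels $k(x,y) = k(x-y)$ this reduces in Fourier space to $|\widetilde{P}(\omega)|^{2} = 1/\widetilde{k}(\omega)$: kernels with rapidly decaying spectra correspond to strongly smoothing operators.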

  14. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
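For a Gaussian kernel density estimate, the quadratic (order-2) Renyi entropy discussed above has a closed form, since the integral of the squared density (the "information potential", closely related to the Friedman-Tukey index) reduces to a pairwise sum. The sketch below is an illustration on synthetic data, not the paper's estimator; the bandwidth value is an arbitrary assumption.

```python
import numpy as np

def quadratic_renyi_entropy(x, sigma=0.3):
    """Quadratic Renyi entropy H2 = -log( integral f_hat(t)^2 dt ).

    For a Gaussian KDE the integral has the closed form
    (1/n^2) * sum_ij N(x_i - x_j; 0, 2 sigma^2).
    """
    n = len(x)
    d = x[:, None] - x[None, :]
    ip = np.exp(-d**2 / (4 * sigma**2)).sum() / (n**2 * 2 * sigma * np.sqrt(np.pi))
    return -np.log(ip)  # lower H2 = more concentrated (less "Gaussian-spread") data

rng = np.random.default_rng(3)
gaussian = rng.normal(size=400)
uniform_narrow = rng.uniform(-0.5, 0.5, size=400)
print("H2 (std normal):    ", quadratic_renyi_entropy(gaussian))
print("H2 (narrow uniform):", quadratic_renyi_entropy(uniform_narrow))
```

The tightly concentrated sample scores a lower H2 than the standard normal sample, the kind of contrast exploited when flagging abnormal rhythms such as AF.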

  15. Fast generation of sparse random kernel graphs

    SciTech Connect

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
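The model itself is easy to state: vertices carry i.i.d. uniform types, and each edge appears independently with a kernel-determined probability scaled by 1/n, which keeps expected degrees bounded. The sketch below is a naive O(n²) reference sampler of the model (my own illustration with an assumed example kernel); the paper's contribution is precisely a subquadratic algorithm for the same task, which this sketch does not reproduce.

```python
import numpy as np

def sample_kernel_graph(n, kappa, rng):
    """Naive O(n^2) sampler for a sparse random kernel graph.

    Vertex types x_i ~ Uniform[0, 1]; edge {i, j} appears independently
    with probability min(kappa(x_i, x_j) / n, 1).
    """
    x = rng.uniform(size=n)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.uniform() < min(kappa(x[i], x[j]) / n, 1.0):
                edges.append((i, j))
    return x, edges

# example kernel (an assumption): product form with a heavy tail at small types,
# which produces a skewed, power-law-like degree distribution
kappa = lambda u, v: 6.0 / np.sqrt(u * v + 1e-9)
rng = np.random.default_rng(4)
x, edges = sample_kernel_graph(400, kappa, rng)
print("n = 400, edges =", len(edges), "mean degree =", 2 * len(edges) / 400)
```

The mean degree stays bounded as n grows (the graph is sparse), while the quadratic pair loop is exactly the cost the paper's skipping-based algorithm avoids.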

  16. Fast generation of sparse random kernel graphs

    DOE PAGES

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.

  17. Phenolic constituents of shea (Vitellaria paradoxa) kernels.

    PubMed

    Maranz, Steven; Wiesman, Zeev; Garti, Nissim

    2003-10-08

    Analysis of the phenolic constituents of shea (Vitellaria paradoxa) kernels by LC-MS revealed eight catechin compounds (gallic acid, catechin, epicatechin, epicatechin gallate, gallocatechin, epigallocatechin, gallocatechin gallate, and epigallocatechin gallate) as well as quercetin and trans-cinnamic acid. The mean kernel content of the eight catechin compounds was 4000 ppm (0.4% of kernel dry weight), with a range of 2100-9500 ppm. Comparison of the profiles of the six major catechins from 40 Vitellaria provenances from 10 African countries showed that the relative proportions of these compounds varied from region to region. Gallic acid was the major phenolic compound, comprising an average of 27% of the measured total phenols and exceeding 70% in some populations. Colorimetric analysis (101 samples) of total polyphenols extracted from shea butter into hexane gave an average of 97 ppm, with the values for different provenances varying between 62 and 135 ppm of total polyphenols.

  18. Tile-Compressed FITS Kernel for IRAF

    NASA Astrophysics Data System (ADS)

    Seaman, R.

    2011-07-01

    The Flexible Image Transport System (FITS) is a ubiquitously supported standard of the astronomical community. Similarly, the Image Reduction and Analysis Facility (IRAF), developed by the National Optical Astronomy Observatory, is a widely used astronomical data reduction package. IRAF supplies compatibility with FITS format data through numerous tools and interfaces. The most integrated of these is IRAF's FITS image kernel that provides access to FITS from any IRAF task that uses the basic IMIO interface. The original FITS kernel is a complex interface of purpose-built procedures that presents growing maintenance issues and lacks recent FITS innovations. A new FITS kernel is being developed at NOAO that is layered on the CFITSIO library from the NASA Goddard Space Flight Center. The simplified interface will minimize maintenance headaches as well as add important new features such as support for the FITS tile-compressed (fpack) format.

  19. Fractal Weyl law for Linux Kernel architecture

    NASA Astrophysics Data System (ADS)

    Ermann, L.; Chepelianskii, A. D.; Shepelyansky, D. L.

    2011-01-01

    We study the properties of spectrum and eigenstates of the Google matrix of a directed network formed by the procedure calls in the Linux Kernel. Our results obtained for various versions of the Linux Kernel show that the spectrum is characterized by the fractal Weyl law established recently for systems of quantum chaotic scattering and the Perron-Frobenius operators of dynamical maps. The fractal Weyl exponent is found to be ν ≈ 0.65 that corresponds to the fractal dimension of the network d ≈ 1.3. An independent computation of the fractal dimension by the cluster growing method, generalized for directed networks, gives a close value d ≈ 1.4. The eigenmodes of the Google matrix of Linux Kernel are localized on certain principal nodes. We argue that the fractal Weyl law should be generic for directed networks with the fractal dimension d < 2.
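The object of study, the Google matrix of a directed call graph, is straightforward to build: column-normalize the adjacency matrix (replacing dangling columns by uniform ones) and mix with a uniform matrix via a damping factor. The sketch below constructs it for an assumed toy call graph and inspects the eigenvalue moduli; the fractal Weyl scaling itself would only emerge for large graphs across many kernel versions.

```python
import numpy as np

def google_matrix(adj, alpha=0.85):
    """Google matrix G = alpha * S + (1 - alpha) / n for a directed graph.

    adj[i, j] = 1 encodes a link j -> i (column-stochastic convention);
    columns with no outgoing links are replaced by uniform columns.
    """
    n = adj.shape[0]
    cols = adj.sum(axis=0)
    S = np.where(cols > 0, adj / np.where(cols > 0, cols, 1), 1.0 / n)
    return alpha * S + (1 - alpha) / n

# assumed toy directed "call graph": a ring of calls plus two shortcuts
n = 12
adj = np.zeros((n, n))
for j in range(n):
    adj[(j + 1) % n, j] = 1       # procedure j calls procedure j+1
adj[0, 5] = adj[3, 9] = 1         # shortcut calls
G = google_matrix(adj)
eigs = np.sort(np.abs(np.linalg.eigvals(G)))[::-1]
print("leading |eigenvalues|:", np.round(eigs[:4], 3))
```

The leading eigenvalue is exactly 1 (the Perron-Frobenius mode); the distribution of the remaining moduli is what the fractal Weyl law characterizes for the real Linux Kernel network.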

  20. Rock expansion caused by ultrasound

    NASA Astrophysics Data System (ADS)

    Hedberg, C.; Gray, A.

    2013-12-01

    It has for many years been reported that materials' elastic moduli decrease when exposed to influences like mechanical impacts, ultrasound, magnetic fields, electricity and even humidity. Non-perfect atomic structures like rocks, concrete, or damaged metals exhibit a larger effect. This softening has most often been recorded by wave resonance measurements. The motion towards equilibrium is slow, often taking hours or days, which is why the effect is called Slow Dynamics [1]. The question had been raised whether a material expansion also occurs: 'The most fundamental parameter to consider is the volume expansion predicted to occur when positive hole charge carriers become activated, causing a decrease of the electron density in the O2- sublattice of the rock-forming minerals. This decrease of electron density should affect essentially all physical parameters, including the volume.' [2]. A new type of configuration has measured the expansion of a rock subjected to ultrasound. A PZT was used as a pressure sensor while the combined thickness of the rock sample and the PZT sensor was held fixed. The expansion increased the stress in both the rock and the PZT, which gave an output voltage from the PZT. Knowing its material properties then made it possible to calculate the rock expansion. The equivalent strain caused by the ultrasound was approximately 3 × 10⁻⁵. The temperature was monitored and accounted for during the tests; for the maximum expansion the increase was 0.7 °C, which means the expansion is at least to some degree caused by heating of the material by the ultrasound. The fraction of bonds activated by ultrasound was estimated to be around 10⁻⁵. References: [1] Guyer, R.A., Johnson, P.A.: Nonlinear Mesoscopic Elasticity: The Complex Behaviour of Rocks, Soils, Concrete. Wiley-VCH 2009 [2] M.M. Freund, F.F. Freund, Manipulating the Toughness of Rocks through Electric Potentials, Final Report CIF 2011 Award NNX11AJ84A, NAS Ames 2012.

  1. Heat pump system

    DOEpatents

    Swenson, Paul F.; Moore, Paul B.

    1982-01-01

    An air heating and cooling system for a building includes an expansion-type refrigeration circuit and a heat engine. The refrigeration circuit includes two heat exchangers, one of which is communicated with a source of indoor air from the building and the other of which is communicated with a source of air from outside the building. The heat engine includes a heat rejection circuit having a source of rejected heat and a primary heat exchanger connected to the source of rejected heat. The heat rejection circuit also includes an evaporator in heat exchange relation with the primary heat exchanger, a heat engine indoor heat exchanger, and a heat engine outdoor heat exchanger. The indoor heat exchangers are disposed in series air flow relationship, with the heat engine indoor heat exchanger being disposed downstream from the refrigeration circuit indoor heat exchanger. The outdoor heat exchangers are also disposed in series air flow relationship, with the heat engine outdoor heat exchanger disposed downstream from the refrigeration circuit outdoor heat exchanger. A common fluid is used in both of the indoor heat exchangers and in both of the outdoor heat exchangers. In a first embodiment, the heat engine is a Rankine cycle engine. In a second embodiment, the heat engine is a non-Rankine cycle engine.

  2. Heat pump system

    DOEpatents

    Swenson, Paul F.; Moore, Paul B.

    1979-01-01

    An air heating and cooling system for a building includes an expansion-type refrigeration circuit and a heat engine. The refrigeration circuit includes two heat exchangers, one of which is communicated with a source of indoor air from the building and the other of which is communicated with a source of air from outside the building. The heat engine includes a heat rejection circuit having a source of rejected heat and a primary heat exchanger connected to the source of rejected heat. The heat rejection circuit also includes an evaporator in heat exchange relation with the primary heat exchanger, a heat engine indoor heat exchanger, and a heat engine outdoor heat exchanger. The indoor heat exchangers are disposed in series air flow relationship, with the heat engine indoor heat exchanger being disposed downstream from the refrigeration circuit indoor heat exchanger. The outdoor heat exchangers are also disposed in series air flow relationship, with the heat engine outdoor heat exchanger disposed downstream from the refrigeration circuit outdoor heat exchanger. A common fluid is used in both of the indoor heat exchangers and in both of the outdoor heat exchangers. In a first embodiment, the heat engine is a Rankine cycle engine. In a second embodiment, the heat engine is a non-Rankine cycle engine.

  3. A kernel-based approach for biomedical named entity recognition.

    PubMed

    Patra, Rakesh; Saha, Sujan Kumar

    2013-01-01

    Support vector machine (SVM) is one of the popular machine learning techniques used in various text processing tasks including named entity recognition (NER). The performance of the SVM classifier largely depends on the appropriateness of the kernel function. In the last few years a number of task-specific kernel functions have been proposed and used in various text processing tasks, for example, string kernel, graph kernel, tree kernel and so on. So far very few efforts have been devoted to the development of an NER task-specific kernel. In the literature we found that the tree kernel has been used in the NER task only for entity boundary detection or reannotation. The conventional tree kernel is unable to execute the complete NER task on its own. In this paper we propose a kernel function, motivated by the tree kernel, which is able to perform the complete NER task. To examine the effectiveness of the proposed kernel, we have applied the kernel function to the openly available JNLPBA 2004 data. Our kernel executes the complete NER task and achieves reasonable accuracy.

  4. Experimental study of turbulent flame kernel propagation

    SciTech Connect

    Mansour, Mohy; Peters, Norbert; Schrader, Lars-Uve

    2008-07-15

    Flame kernels in spark-ignited combustion systems dominate the flame propagation and combustion stability and performance. They are likely controlled by the spark energy, flow field and mixing field. The aim of the present work is to experimentally investigate the structure and propagation of the flame kernel in turbulent premixed methane flow using advanced laser-based techniques. The spark is generated using a pulsed Nd:YAG laser with 20 mJ pulse energy in order to avoid the effect of the electrodes on the flame kernel structure and the shot-to-shot variation of spark energy. Four flames have been investigated at equivalence ratios, φj, of 0.8 and 1.0 and jet velocities, Uj, of 6 and 12 m/s. A combined two-dimensional Rayleigh and LIPF-OH technique has been applied. The flame kernel structure has been collected at several time intervals from the laser ignition between 10 µs and 2 ms. The data show that the flame kernel structure starts with a spherical shape, changes gradually to peanut-like, then to mushroom-like, and is finally disturbed by the turbulence. The mushroom-like structure lasts longer in the stoichiometric flame and at the slower jet velocity. The growth rate of the average flame kernel radius is divided into two linear relations; the first one, during the first 100 µs, is almost three times faster than that at the later stage between 100 and 2000 µs. The flame propagation is slightly faster in leaner flames. The trends of the flame propagation, flame radius, flame cross-sectional area and mean flame temperature are related to the jet velocity and equivalence ratio. The relations obtained in the present work allow the prediction of any of these parameters at different conditions. (author)

  5. A dynamic kernel modifier for linux

    SciTech Connect

    Minnich, R. G.

    2002-09-03

    Dynamic Kernel Modifier, or DKM, is a kernel module for Linux that allows user-mode programs to modify the execution of functions in the kernel without recompiling or modifying the kernel source in any way. Functions may be traced, either function entry only or function entry and exit; nullified; or replaced with some other function. For the tracing case, function execution results in the activation of a watchpoint. When the watchpoint is activated, the address of the function is logged in a FIFO buffer that is readable by external applications. The watchpoints are time-stamped with the resolution of the processor high-resolution timers, which on most modern processors are accurate to a single processor tick. DKM is very similar to earlier systems such as the SunOS trace device or Linux TT. Unlike these two systems, and other similar systems, DKM requires no kernel modifications. DKM allows users to do initial probing of the kernel to look for performance problems, or even to resolve potential problems by turning functions off or replacing them. DKM watchpoints are not without cost: it takes about 200 nanoseconds to make a log entry on an 800 MHz Pentium-III. The overhead numbers are actually competitive with other hardware-based trace systems, although DKM has less accuracy than an In-Circuit Emulator such as the American Arium. Once the user has zeroed in on a problem, other mechanisms with a higher degree of accuracy can be used.

  6. Kernel abortion in maize. II. Distribution of ¹⁴C among kernel carbohydrates

    SciTech Connect

    Hanft, J.M.; Jones, R.J.

    1986-06-01

    This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [¹⁴C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [¹⁴C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose.

  7. Reduced multiple empirical kernel learning machine.

    PubMed

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL has a high time and space complexity in contrast to single kernel learning, which is not expected in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from the existing MKL, the proposed RMEKLM adopts the Gauss Elimination technique to extract a set of feature vectors, and it is validated that doing so loses little information of the original feature space. RMEKLM then adopts the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings a simpler computation and meanwhile needs less storage space, especially in the processing of testing. Finally, the experimental results show that RMEKLM achieves an efficient and effective performance in terms of both complexity and classification.
The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3
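The isomorphism claim above, that dot products in the original feature space equal dot products of the reduced coordinates in the spanned orthonormal subspace, is easy to verify numerically. The sketch below is my own illustration on synthetic vectors; it uses an SVD as a numerically robust stand-in for the paper's Gauss Elimination step, which serves the same purpose of exposing a basis of the spanned subspace.

```python
import numpy as np

rng = np.random.default_rng(6)
# 50 empirical feature vectors that lie in a 5-dimensional subspace of R^20
basis = rng.normal(size=(5, 20))
X = rng.normal(size=(50, 5)) @ basis

# expose an orthonormal basis of the spanned subspace (SVD stand-in for
# the Gauss Elimination extraction step)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = int((s > 1e-10 * s[0]).sum())   # numerical rank of the sample (5 here)
Q = Vt[:r].T                        # 20 x r orthonormal basis of the span

C = X @ Q                           # reduced r-dimensional coordinates
# dot products are preserved: the reduced subspace is isomorphic to the span
print("rank:", r, "max dot-product error:", np.abs(C @ C.T - X @ X.T).max())
```

Because every sample lies in the spanned subspace, projecting onto the basis loses nothing, while all subsequent computations run on r-dimensional vectors instead of 20-dimensional ones, the source of the reduced time and space cost.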

  8. Full Waveform Inversion Using Waveform Sensitivity Kernels

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    2013-04-01

    We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory, which for unit material perturbations are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995) in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned), and some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be done first, without inverting any data. In the context of active seismic experiments, this property may be used to investigate optimal acquisition geometry and expectable resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties compared with strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver

  9. Regularization techniques for PSF-matching kernels - I. Choice of kernel basis

    NASA Astrophysics Data System (ADS)

    Becker, A. C.; Homrighausen, D.; Connolly, A. J.; Genovese, C. R.; Owen, R.; Bickerton, S. J.; Lupton, R. H.

    2012-09-01

    We review current methods for building point spread function (PSF)-matching kernels for the purposes of image subtraction or co-addition. Such methods use a linear decomposition of the kernel on a series of basis functions. The correct choice of these basis functions is fundamental to the efficiency and effectiveness of the matching - the chosen bases should represent the underlying signal using a reasonably small number of shapes, and/or have a minimum number of user-adjustable tuning parameters. We examine methods whose bases comprise multiple Gauss-Hermite polynomials, as well as a form-free basis composed of delta-functions. Kernels derived from delta-functions are unsurprisingly shown to be more expressive; they are able to take more general shapes and perform better in situations where sum-of-Gaussian methods are known to fail. However, due to its many degrees of freedom (the maximum number allowed by the kernel size) this basis tends to overfit the problem and yields noisy kernels having large variance. We introduce a new technique to regularize these delta-function kernel solutions, which bridges the gap between the generality of delta-function kernels and the compactness of sum-of-Gaussian kernels. Through this regularization we are able to create general kernel solutions that represent the intrinsic shape of the PSF-matching kernel with only one degree of freedom, the strength of the regularization λ. The role of λ is effectively to exchange variance in the resulting difference image with variance in the kernel itself. We examine considerations in choosing the value of λ, including statistical risk estimators and the ability of the solution to predict solutions for adjacent areas. Both of these suggest moderate strengths of λ between 0.1 and 1.0, although this optimization is likely data set dependent. This model allows for flexible representations of the convolution kernel that have significant predictive ability and will prove useful in implementing

  10. Accuracy of Reduced and Extended Thin-Wire Kernels

    SciTech Connect

    Burke, G J

    2008-11-24

    Some results are presented comparing the accuracy of the reduced thin-wire kernel with that of an extended kernel using exact integration of the 1/R term of the Green's function; results are shown for simple wire structures.

  11. Analysis of maize ( Zea mays ) kernel density and volume using microcomputed tomography and single-kernel near-infrared spectroscopy.

    PubMed

    Gustin, Jeffery L; Jackson, Sean; Williams, Chekeria; Patel, Anokhee; Armstrong, Paul; Peter, Gary F; Settles, A Mark

    2013-11-20

    Maize kernel density affects milling quality of the grain. Kernel density of bulk samples can be predicted by near-infrared reflectance (NIR) spectroscopy, but no accurate method to measure individual kernel density has been reported. This study demonstrates that individual kernel density and volume are accurately measured using X-ray microcomputed tomography (μCT). Kernel density was significantly correlated with kernel volume, air space within the kernel, and protein content. Embryo density and volume did not influence overall kernel density. Partial least-squares (PLS) regression of μCT traits with single-kernel NIR spectra gave stable predictive models for kernel density (R² = 0.78, SEP = 0.034 g/cm³) and volume (R² = 0.86, SEP = 2.88 cm³). Density and volume predictions were accurate for data collected over 10 months based on kernel weights calculated from predicted density and volume (R² = 0.83, SEP = 24.78 mg). Kernel density was significantly correlated with bulk test weight (r = 0.80), suggesting that selection of dense kernels can translate to improved agronomic performance.

  12. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    SciTech Connect

    Charles Barnes; Clay Richardson; Scott Nagley; John Hunn; Eric Shaber

    2010-10-01

    Babcock and Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated-particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) has increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.

  13. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.
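As an illustrative sketch (not the paper's implementation), kernel PLS can be demonstrated by running NIPALS-style score extraction on a centered Gram matrix; the RBF kernel, component count, and deflation scheme below are assumptions chosen for brevity:

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix of the RBF kernel over the training set."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_pls_scores(K, Y, n_components=2, n_iter=100, tol=1e-10):
    """Extract orthonormal kernel PLS score vectors (NIPALS in feature space)."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kc, Yc = J @ K @ J, Y - Y.mean(axis=0)
    scores = []
    for _ in range(n_components):
        u = Yc[:, :1].copy()
        for _ in range(n_iter):
            t = Kc @ u
            t /= np.linalg.norm(t)
            u_new = Yc @ (Yc.T @ t)
            u_new /= np.linalg.norm(u_new)
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        scores.append(t.ravel())
        P = np.eye(n) - np.outer(t, t)           # deflate by the extracted score
        Kc, Yc = P @ Kc @ P, P @ Yc
    return np.column_stack(scores)
```

A regression is then obtained by ordinary least squares of the centered responses on the extracted scores.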

  14. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...

  15. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...

  16. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  17. Multiple spectral kernel learning and a Gaussian complexity computation.

    PubMed

    Reyhani, Nima

    2013-07-01

Multiple kernel learning (MKL) partially solves the kernel selection problem in support vector machines and similar classifiers by minimizing the empirical risk over a subset of the linear combinations of given kernel matrices. For large sample sets, the size of the kernel matrices becomes a numerical issue. In many cases, the kernel matrix has low effective rank, but this low-rank property is not efficiently exploited in MKL algorithms. Here, we suggest multiple spectral kernel learning, which efficiently uses the low-rank property by finding a kernel matrix from a set of Gram matrices of a few eigenvectors from all given kernel matrices, called a spectral kernel set. We provide a new bound for the Gaussian complexity of the proposed kernel set, which depends on both the geometry of the kernel set and the number of Gram matrices. This characterization of the complexity implies that in an MKL setting, adding more kernels may not monotonically increase the complexity, while previous bounds suggest otherwise.
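As a hedged sketch of the construction described above (not the paper's algorithm), a spectral kernel set can be built from rank-one Gram matrices of the leading eigenvectors of each input kernel matrix; the eigenvector count and combination weights below are assumptions:

```python
import numpy as np

def spectral_kernel_set(gram_matrices, n_eig=2):
    """Rank-one Gram matrices built from the leading eigenvectors
    of each input kernel matrix (a 'spectral kernel set')."""
    basis = []
    for K in gram_matrices:
        K = 0.5 * (K + K.T)               # symmetrize for numerical safety
        w, V = np.linalg.eigh(K)
        for i in range(1, n_eig + 1):
            v = V[:, -i]                  # i-th largest eigenvector
            basis.append(np.outer(v, v))
    return basis

def combine(basis, weights):
    """MKL-style nonnegative combination of the spectral basis."""
    weights = np.maximum(np.asarray(weights, float), 0.0)
    return sum(w * B for w, B in zip(weights, basis))
```

Each basis element is symmetric positive semidefinite with unit trace, so any nonnegative combination is again a valid kernel matrix.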

  18. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...

  19. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of...

  20. Thermomechanical property of rice kernels studied by DMA

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The thermomechanical property of the rice kernels was investigated using a dynamic mechanical analyzer (DMA). The length change of rice kernel with a loaded constant force along the major axis direction was detected during temperature scanning. The thermomechanical transition occurred in rice kernel...

  1. NIRS method for precise identification of Fusarium damaged wheat kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Development of scab resistant wheat varieties may be enhanced by non-destructive evaluation of kernels for Fusarium damaged kernels (FDKs) and deoxynivalenol (DON) levels. Fusarium infection generally affects kernel appearance, but insect damage and other fungi can cause similar symptoms. Also, some...

  2. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... shall mean the actual gross weight of any lot of almonds: Less weight of containers; less moisture of... material, 350 grams, and moisture content of kernels, seven percent. Excess moisture is two percent. The...: Edible kernels, 840 grams; inedible kernels, 120 grams; foreign material, 40 grams; and moisture...

  3. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... shall mean the actual gross weight of any lot of almonds: Less weight of containers; less moisture of... material, 350 grams, and moisture content of kernels, seven percent. Excess moisture is two percent. The...: Edible kernels, 840 grams; inedible kernels, 120 grams; foreign material, 40 grams; and moisture...

  4. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... shall mean the actual gross weight of any lot of almonds: Less weight of containers; less moisture of... material, 350 grams, and moisture content of kernels, seven percent. Excess moisture is two percent. The...: Edible kernels, 840 grams; inedible kernels, 120 grams; foreign material, 40 grams; and moisture...

  5. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... shall mean the actual gross weight of any lot of almonds: Less weight of containers; less moisture of... material, 350 grams, and moisture content of kernels, seven percent. Excess moisture is two percent. The...: Edible kernels, 840 grams; inedible kernels, 120 grams; foreign material, 40 grams; and moisture...

  6. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... shall mean the actual gross weight of any lot of almonds: Less weight of containers; less moisture of... material, 350 grams, and moisture content of kernels, seven percent. Excess moisture is two percent. The...: Edible kernels, 840 grams; inedible kernels, 120 grams; foreign material, 40 grams; and moisture...

  7. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  8. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  9. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  10. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  11. Protein Structure Prediction Using String Kernels

    DTIC Science & Technology

    2006-03-03

The dataset consists of 4352 sequences from SCOP version 1.53, extracted from the Astral database and grouped into families and superfamilies.

  12. Kernel Temporal Differences for Neural Decoding

    PubMed Central

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm introduced to estimate value functions in reinforcement learning. The algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear function approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatio-temporal information at reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain-machine interfaces. PMID:25866504
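The core idea of combining a kernel expansion with temporal-difference updates can be sketched as follows; this is a minimal kernel TD(0) value estimator on a toy chain MDP (not the paper's full KTD(λ) for neural decoding), with the kernel width, step size, and environment all assumed for illustration:

```python
import numpy as np

def gauss_k(a, b, width=0.5):
    """Gaussian kernel between a state and an array of centers."""
    return np.exp(-(a - b) ** 2 / width)

def kernel_td_chain(n_states=5, gamma=0.9, eta=0.1, sweeps=300):
    """Kernel TD(0): V(s) = sum_i alpha_i * k(s, c_i), with centers fixed
    at the chain's states; the agent walks deterministically to the goal."""
    centers = np.arange(n_states, dtype=float)
    alpha = np.zeros(n_states)
    V = lambda s: float(alpha @ gauss_k(s, centers))
    for _ in range(sweeps):
        for s in range(n_states - 1):
            s_next = s + 1
            done = s_next == n_states - 1
            r = 1.0 if done else 0.0
            delta = r + (0.0 if done else gamma * V(s_next)) - V(s)
            alpha += eta * delta * gauss_k(float(s), centers)   # kernel TD update
    return np.array([V(s) for s in range(n_states - 1)])
```

With a strictly positive definite kernel and fixed centers this reduces to linear TD with kernel features, for which convergence under on-policy sampling is a classical result.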

  13. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or for comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as on the wavelength and the observing strategy. Given knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains, up to two orders of magnitude, are obtained with respect to kernels computed assuming Gaussian or circularised PSFs. Software to compute these kernels is available at https://github.com/aboucaud/pypher
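The Wiener-filter construction of a PSF-matching kernel can be sketched in the Fourier domain; this is a simplified scalar-regularised variant for illustration (the published method uses a tunable regularisation, and the Gaussian PSFs below are assumptions):

```python
import numpy as np

def gaussian_psf(n, sigma):
    """Normalized, centered 2-D Gaussian PSF on an n x n grid."""
    ax = np.arange(n) - n // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def psf_matching_kernel(psf_source, psf_target, reg=1e-6):
    """Kernel k with (source * k) ~ target, via a Wiener deconvolution:
    K = conj(F_s) * F_t / (|F_s|^2 + reg)."""
    Fs = np.fft.fft2(psf_source)
    Ft = np.fft.fft2(psf_target)
    W = np.conj(Fs) / (np.abs(Fs) ** 2 + reg)
    k = np.real(np.fft.ifft2(Ft * W))
    return np.fft.fftshift(k)          # center the kernel on the grid
```

The regularisation term damps frequencies where the source PSF carries no power, which is what prevents noise amplification in the matched images.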

  14. Dielectric properties of almond kernels associated with radio frequency and microwave pasteurization

    PubMed Central

    Li, Rui; Zhang, Shuang; Kou, Xiaoxi; Ling, Bo; Wang, Shaojin

    2017-01-01

To develop advanced pasteurization treatments based on radio frequency (RF) or microwave (MW) energy, dielectric properties of almond kernels were measured by using an open-ended coaxial-line probe and impedance analyzer at frequencies between 10 and 3000 MHz, moisture contents from 4.2% to 19.6% w.b. and temperatures between 20 and 90 °C. The results showed that both the dielectric constant and the loss factor of the almond kernels decreased sharply with increasing frequency over the RF range (10–300 MHz), but only gradually over the measured MW range (300–3000 MHz). Both the dielectric constant and the loss factor of almond kernels increased with increasing temperature and moisture content, with the increase most pronounced at higher temperature and moisture levels. Quadratic polynomial equations were developed to best fit the relationship between the dielectric constant or loss factor at 27, 40, 915 or 2450 MHz and sample temperature/moisture content, with R2 greater than 0.967. Penetration depth of the electromagnetic wave into samples decreased with increasing frequency (27–2450 MHz), moisture content (4.2–19.6% w.b.) and temperature (20–90 °C). Temperature profiles of RF-heated almond kernels at three moisture levels were obtained both by experiment and by computer simulation based on the measured dielectric properties. Based on the results of this study, RF treatment has the potential for practical use in pasteurization of almond kernels with acceptable heating uniformity. PMID:28186149

  15. Dielectric properties of almond kernels associated with radio frequency and microwave pasteurization

    NASA Astrophysics Data System (ADS)

    Li, Rui; Zhang, Shuang; Kou, Xiaoxi; Ling, Bo; Wang, Shaojin

    2017-02-01

To develop advanced pasteurization treatments based on radio frequency (RF) or microwave (MW) energy, dielectric properties of almond kernels were measured by using an open-ended coaxial-line probe and impedance analyzer at frequencies between 10 and 3000 MHz, moisture contents from 4.2% to 19.6% w.b. and temperatures between 20 and 90 °C. The results showed that both the dielectric constant and the loss factor of the almond kernels decreased sharply with increasing frequency over the RF range (10–300 MHz), but only gradually over the measured MW range (300–3000 MHz). Both the dielectric constant and the loss factor of almond kernels increased with increasing temperature and moisture content, with the increase most pronounced at higher temperature and moisture levels. Quadratic polynomial equations were developed to best fit the relationship between the dielectric constant or loss factor at 27, 40, 915 or 2450 MHz and sample temperature/moisture content, with R2 greater than 0.967. Penetration depth of the electromagnetic wave into samples decreased with increasing frequency (27–2450 MHz), moisture content (4.2–19.6% w.b.) and temperature (20–90 °C). Temperature profiles of RF-heated almond kernels at three moisture levels were obtained both by experiment and by computer simulation based on the measured dielectric properties. Based on the results of this study, RF treatment has the potential for practical use in pasteurization of almond kernels with acceptable heating uniformity.
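The penetration-depth trend reported in the abstract follows from the standard half-power formula relating depth to the measured dielectric constant ε′ and loss factor ε″; the permittivity values below are illustrative assumptions, not the paper's measurements:

```python
import math

C0 = 2.9979e8  # speed of light in vacuum, m/s

def penetration_depth(freq_hz, eps_prime, eps_double_prime):
    """Power penetration depth:
    d_p = c / (2*pi*f * sqrt(2*eps' * (sqrt(1 + (eps''/eps')^2) - 1)))."""
    loss_term = 2.0 * eps_prime * (
        math.sqrt(1.0 + (eps_double_prime / eps_prime) ** 2) - 1.0
    )
    return C0 / (2.0 * math.pi * freq_hz * math.sqrt(loss_term))
```

The formula reproduces the two trends the study reports: depth falls as frequency rises, and falls as the loss factor (driven by moisture and temperature) rises.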

  16. Kernel weights optimization for error diffusion halftoning method

    NASA Astrophysics Data System (ADS)

    Fedoseev, Victor

    2015-02-01

This paper describes a study to find the best error diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of values. As an objective measure of quality, WSNR was used. The problem of multidimensional optimization was solved numerically using several well-known algorithms: Nelder–Mead, BFGS, and others. The study found a kernel function that provides a quality gain of about 5% in comparison with the widely used kernel introduced by Floyd and Steinberg. Other kernels obtained make it possible to significantly reduce the computational complexity of the halftoning process without reducing its quality.
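The baseline the study compares against, the Floyd–Steinberg kernel (7/16, 3/16, 5/16, 1/16), can be sketched as follows; the 8-bit grayscale range and fixed threshold are assumptions:

```python
import numpy as np

def floyd_steinberg(img):
    """Binary halftone of a grayscale image via Floyd-Steinberg error diffusion."""
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128.0 else 0.0
            out[y, x] = new
            err = old - new
            # diffuse quantization error to unvisited neighbours
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

Because the error is fully redistributed (up to the image border), the halftone's local mean tracks the input gray level, which is exactly the property the optimized kernels in the study must preserve.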

  17. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
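A common concrete instance of a multiscale kernel is a weighted sum of Gaussians at several widths; this sketch is illustrative (the scales and weights are assumptions, not the paper's construction):

```python
import numpy as np

def multiscale_gaussian_kernel(X, Y, sigmas=(0.25, 1.0, 4.0), weights=None):
    """K(x, y) = sum_i w_i * exp(-||x - y||^2 / (2 * sigma_i^2))."""
    if weights is None:
        weights = np.ones(len(sigmas)) / len(sigmas)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return sum(w * np.exp(-d2 / (2.0 * s ** 2))
               for w, s in zip(weights, sigmas))
```

A sum of positive definite kernels with nonnegative weights is again positive definite, so the multiscale combination is a valid kernel while capturing structure at several resolutions.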

  18. Selection and properties of alternative forming fluids for TRISO fuel kernel production

    NASA Astrophysics Data System (ADS)

    Baker, M. P.; King, J. C.; Gorman, B. P.; Marshall, D. W.

    2013-01-01

Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.
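The settling-velocity step of the column-height estimate can be sketched with Stokes' law for a small droplet in a viscous fluid; the diameter, densities, viscosity, and residence time below are placeholder assumptions, not the measured fluid properties:

```python
G = 9.81  # gravitational acceleration, m/s^2

def stokes_settling_velocity(d, rho_drop, rho_fluid, mu):
    """Terminal velocity of a small sphere in a viscous fluid (Stokes regime):
    v = g * d^2 * (rho_drop - rho_fluid) / (18 * mu)."""
    return G * d ** 2 * (rho_drop - rho_fluid) / (18.0 * mu)

def column_height(d, rho_drop, rho_fluid, mu, residence_time):
    """Approximate forming-column height needed for a given residence time."""
    return abs(stokes_settling_velocity(d, rho_drop, rho_fluid, mu)) * residence_time
```

Because velocity scales with the square of droplet diameter and inversely with viscosity, the measured density and viscosity of each candidate fluid directly set the required column height.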

  19. Selection and properties of alternative forming fluids for TRISO fuel kernel production

    SciTech Connect

    Baker, M. P.; King, J. C.; Gorman, B. P.; Marshall, Doug W.

    2013-01-01

Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.

  20. Difference image analysis: automatic kernel design using information criteria

    NASA Astrophysics Data System (ADS)

    Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.

    2016-03-01

    We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.
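The core linear step, solving for a spatially invariant kernel of delta basis functions and scoring the model with an information criterion, can be sketched as follows; the periodic-shift design matrix, kernel size, and AIC form are simplifying assumptions made for illustration:

```python
import numpy as np

def solve_dia_kernel(ref, target, half=1):
    """Least-squares convolution kernel (delta basis) matching ref to target,
    plus the Akaike information criterion of the fit."""
    cols = []
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            # one delta-basis function = one shifted copy of the reference
            cols.append(np.roll(np.roll(ref, dy, axis=0), dx, axis=1).ravel())
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    model = A @ coef
    rss = ((target.ravel() - model) ** 2).sum()
    n, k = target.size, coef.size
    aic = n * np.log(rss / n + 1e-300) + 2 * k      # AIC for Gaussian errors
    return coef.reshape(2 * half + 1, 2 * half + 1), aic
```

Candidate kernels of different sizes can then be compared by their AIC, mirroring the model-selection step of the method.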

  1. Efficient χ² Kernel Linearization via Random Feature Maps.

    PubMed

    Yuan, Xiao-Tong; Wang, Zhenzhen; Deng, Jiankang; Liu, Qingshan

    2016-11-01

Explicit feature mapping is an appealing way to linearize additive kernels, such as the χ² kernel, for training large-scale support vector machines (SVMs). Although accurate in approximation, feature mapping can pose computational challenges in high-dimensional settings, as it expands the original features to a higher-dimensional space. To handle this issue in the context of χ² kernel SVM learning, we introduce a simple yet efficient method to approximately linearize the χ² kernel through random feature maps. The main idea is to use sparse random projection to reduce the dimensionality of the feature maps while preserving their approximation capability with respect to the original kernel. We provide an approximation error bound for the proposed method. Furthermore, we extend our method to χ² multiple kernel SVM learning. Extensive experiments on large-scale image classification tasks confirm that the proposed approach is able to significantly speed up the training process of χ² kernel SVMs at almost no cost in testing accuracy.
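The explicit-map idea can be illustrated with a closed-form sampled-spectrum map for the additive χ² kernel k(x, y) = Σᵢ 2xᵢyᵢ/(xᵢ + yᵢ), in the style of Vedaldi and Zisserman; note this is not the paper's sparse-random-projection method, just a sketch of what a finite linearizing feature map looks like (the sampling spacing and frequency count are assumptions):

```python
import numpy as np

def chi2_feature_map(X, n_freq=3, L=0.5):
    """Finite feature map whose inner products approximate the additive
    chi^2 kernel, via sampling its spectrum kappa(lam) = sech(pi*lam)."""
    X = np.maximum(X, 1e-8)                 # kernel defined for positive features
    feats = [np.sqrt(X * L)]                # lam = 0 term, sech(0) = 1
    for j in range(1, n_freq + 1):
        lam = j * L
        kappa = 1.0 / np.cosh(np.pi * lam)
        amp = np.sqrt(2.0 * X * L * kappa)
        feats.append(amp * np.cos(lam * np.log(X)))
        feats.append(amp * np.sin(lam * np.log(X)))
    return np.concatenate(feats, axis=1)
```

With the map in hand, a linear SVM on the mapped features approximates a χ² kernel SVM; the random-projection step of the paper would then be applied on top to shrink the mapped dimensionality.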

  2. A Novel Framework for Learning Geometry-Aware Kernels.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Xu, Chen; Chen, Bo

    2016-05-01

Data from the real world usually have nonlinear geometric structure and are often assumed to lie on or close to a low-dimensional manifold in a high-dimensional space. How to detect this nonlinear geometric structure of the data is important for learning algorithms. Recently, there has been a surge of interest in utilizing kernels to exploit the manifold structure of the data. Such kernels are called geometry-aware kernels and are widely used in machine learning algorithms. The performance of these algorithms critically relies on the choice of the geometry-aware kernels. Intuitively, a good geometry-aware kernel should utilize additional information beyond the geometric information. In many applications, it is required to compute out-of-sample data directly. However, most geometry-aware kernel methods are restricted to the available data given beforehand, with no straightforward extension for out-of-sample data. In this paper, we propose a framework for more general geometry-aware kernel learning. The proposed framework integrates multiple sources of information and enables us to develop flexible and effective kernel matrices. Then, we theoretically show how the learned kernel matrices are extended to the corresponding kernel functions, in which the out-of-sample data can be computed directly. Under our framework, a novel family of geometry-aware kernels is developed. In particular, some existing geometry-aware kernels can be viewed as instances of our framework. The performance of the kernels is evaluated on dimensionality reduction, classification, and clustering tasks. The empirical results show that our kernels significantly improve the performance.
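One classical geometry-aware kernel, the diffusion kernel K = exp(−βL) of a k-nearest-neighbour graph Laplacian, can serve as a concrete sketch; this is a standard instance such frameworks generalize, not the paper's proposed family (neighbour count and β are assumptions):

```python
import numpy as np

def diffusion_kernel(X, n_neighbors=5, beta=0.5):
    """Diffusion kernel exp(-beta * L) of a symmetrized kNN graph on X."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nb = np.argsort(d2[i])[1:n_neighbors + 1]   # skip self at distance 0
        W[i, nb] = 1.0
    W = np.maximum(W, W.T)                          # symmetric adjacency
    L = np.diag(W.sum(axis=1)) - W                  # combinatorial graph Laplacian
    w, V = np.linalg.eigh(L)
    return (V * np.exp(-beta * w)) @ V.T            # matrix exponential via eigh
```

The kernel encodes the manifold structure through the graph: similarity decays along graph distance rather than straight-line distance in the ambient space.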

  3. Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets.

    PubMed

    Wang, Shitong; Wang, Jun; Chung, Fu-lai

    2014-01-01

Kernel methods such as the standard support vector machine and support vector regression trainings take O(N³) time and O(N²) space in their naïve implementations, where N is the training set size. It is thus computationally infeasible to apply them to large data sets, and a replacement of the naïve method for finding the quadratic programming (QP) solutions is highly desirable. By observing that many kernel methods can be linked to the kernel density estimate (KDE), which can be efficiently implemented by some approximation techniques, a new learning method called fast KDE (FastKDE) is proposed to scale up kernel methods. It is based on establishing a connection between KDE and the QP problems formulated for kernel methods using an entropy-based integrated-squared-error criterion. As a result, FastKDE approximation methods can be applied to solve these QP problems. In this paper, the latest advance in fast data reduction via KDE is exploited. With just a simple sampling strategy, the resulting FastKDE method can be used to scale up various kernel methods with a theoretical guarantee that their performance does not degrade significantly. It has a time complexity of O(m³), where m is the number of data points sampled from the training set. Experiments on different benchmarking data sets demonstrate that the proposed method has comparable performance with the state-of-the-art method and is effective for a wide range of kernel methods to achieve fast learning in large data sets.
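The data-reduction intuition behind the approach, that a KDE built on a modest random sample closely tracks the full-data estimate, can be sketched as follows (a plain Gaussian KDE with assumed bandwidth and sample size, not the paper's FastKDE machinery):

```python
import numpy as np

def gaussian_kde(points, grid, h):
    """Gaussian kernel density estimate of 1-D data, evaluated on a grid."""
    z = (grid[:, None] - points[None, :]) / h
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))
```

Replacing the N training points with m sampled points drops the cost of kernel evaluations from O(N) to O(m) per query, which is the lever FastKDE uses to reach O(m³) training.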

  4. Reaction Kernel Structure of a Slot Jet Diffusion Flame in Microgravity

    NASA Technical Reports Server (NTRS)

    Takahashi, F.; Katta, V. R.

    2001-01-01

Diffusion flame stabilization in normal earth gravity (1 g) has long been a fundamental research subject in combustion. Local flame-flow phenomena, including heat and species transport and chemical reactions, around the flame base in the vicinity of condensed surfaces control flame stabilization and fire spreading processes. Therefore, gravity plays an important role in the subject topic because buoyancy induces flow in the flame zone, thus increasing the convective (and diffusive) oxygen transport into the flame zone and, in turn, reaction rates. Recent computations show that a peak reactivity (heat-release or oxygen-consumption rate) spot, or reaction kernel, is formed in the flame base by back-diffusion and reactions of radical species in the incoming oxygen-abundant flow at relatively low temperatures (about 1550 K). Quasi-linear correlations were found between the peak heat-release or oxygen-consumption rate and the velocity at the reaction kernel for cases including both jet and flat-plate diffusion flames in airflow. The reaction kernel provides a stationary ignition source to incoming reactants, sustains combustion, and thus stabilizes the trailing diffusion flame. In a quiescent microgravity environment, no buoyancy-induced flow exists, and thus purely diffusive transport controls the reaction rates. Flame stabilization mechanisms in such a purely diffusion-controlled regime remain largely unstudied. Therefore, it will be a rigorous test for the reaction kernel correlation if it can be extended toward zero-velocity conditions in the purely diffusion-controlled regime. The objectives of this study are to reveal the structure of the flame-stabilizing region of a two-dimensional (2D) laminar jet diffusion flame in microgravity and to develop a unified diffusion flame stabilization mechanism. This paper reports the recent progress in the computation and experiment performed in microgravity.

  5. Wilson Dslash Kernel From Lattice QCD Optimization

    SciTech Connect

    Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.; Vaidyanathan, Karthikeyan

    2015-07-01

Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we detail our work optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the technique gives excellent performance on regular Xeon architectures as well.

  6. Bergman kernel and complex singularity exponent

    NASA Astrophysics Data System (ADS)

    Chen, Boyong; Lee, Hanjin

    2009-12-01

We give a precise estimate of the Bergman kernel for the model domain defined by $\Omega_F=\{(z,w)\in\mathbb{C}^{n+1} : \operatorname{Im} w - |F(z)|^2 > 0\}$, where $F=(f_1,\dots,f_m)$ is a holomorphic map from $\mathbb{C}^n$ to $\mathbb{C}^m$, in terms of the complex singularity exponent of $F$.

  7. Advanced Development of Certified OS Kernels

    DTIC Science & Technology

    2015-06-01

Subject terms: Certified Software; Certified OS Kernels; Certified Compilers; Abstraction Layers; Modularity; Deep Specifications. Verifying each module should only need to be done once (to show that it implements its deep functional specification [14]). The work builds certified abstraction layers with deep specifications; a certified layer is a new language-based module construct.

  8. The Palomar kernel-phase experiment: testing kernel phase interferometry for ground-based astronomical observations

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin; Tuthill, Peter; Hinkley, Sasha; Ireland, Michael J.; Greenbaum, Alexandra; Latyshev, Alexey; Monnier, John D.; Martinache, Frantz

    2016-01-01

    At present, the principal limitation on the resolution and contrast of astronomical imaging instruments comes from aberrations in the optical path, which may be imposed by the Earth's turbulent atmosphere or by variations in the alignment and shape of the telescope optics. These errors can be corrected physically, with active and adaptive optics, and in post-processing of the resulting image. A recently developed adaptive optics post-processing technique, called kernel-phase interferometry, uses linear combinations of phases that are self-calibrating with respect to small errors, with the goal of constructing observables that are robust against the residual optical aberrations in otherwise well-corrected imaging systems. Here, we present a direct comparison between kernel phase and the more established competing techniques, aperture masking interferometry, point spread function (PSF) fitting and bispectral analysis. We resolve the α Ophiuchi binary system near periastron, using the Palomar 200-Inch Telescope. This is the first case in which kernel phase has been used with a full aperture to resolve a system close to the diffraction limit with ground-based extreme adaptive optics observations. Excellent agreement in astrometric quantities is found between kernel phase and masking, and kernel phase significantly outperforms PSF fitting and bispectral analysis, demonstrating its viability as an alternative to conventional non-redundant masking under appropriate conditions.
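The linear-algebra core of kernel phase, finding combinations K of measured phases with K·A = 0 for a phase-transfer matrix A, so the combinations are insensitive to small pupil-plane errors, can be sketched via the SVD; the small random A below is synthetic, not an instrument model:

```python
import numpy as np

def kernel_phase_operator(A, tol=1e-10):
    """Rows span the left null space of A (so K @ A = 0): these are the
    self-calibrating 'kernel phase' combinations of the measured phases."""
    U, s, Vt = np.linalg.svd(A)
    rank = int((s > tol * s.max()).sum())
    return U[:, rank:].T                 # orthonormal rows, K @ A = 0
```

Applied to measured phases φ = A·(pupil errors) + (intrinsic phases), the operator K cancels the first term to first order, leaving observables driven by the target alone.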

  9. A Fast Reduced Kernel Extreme Learning Machine.

    PubMed

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to work on the Support Vector Machine (SVM) or Least Square SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately given a sufficient number of support vectors. Experimental results on a wide variety of real-world applications of small and large instance size, covering binary classification, multi-class problems and regression, show that RKELM performs at a level of generalization competitive with SVM/LS-SVM at only a fraction of the computational effort.
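    The core recipe in the abstract — randomly choosing mapping samples and then solving a single regularized least-squares problem for the output weights — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the Gaussian kernel and all parameter values are assumptions.

```python
import numpy as np

def rbf(X, C, gamma):
    # Gaussian kernel between rows of X and rows of C
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rkelm_fit(X, y, n_support, gamma, reg=1e-3, seed=0):
    # Randomly pick a subset of samples as mapping samples (no iteration),
    # then solve a regularized least-squares problem for the output weights.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=n_support, replace=False)]
    Phi = rbf(X, centers, gamma)
    beta = np.linalg.solve(Phi.T @ Phi + reg * np.eye(n_support), Phi.T @ y)
    return centers, beta

def rkelm_predict(X, centers, beta, gamma):
    return rbf(X, centers, gamma) @ beta
```

    The absence of any iterative support-vector search is exactly where the claimed speedup over SVM training comes from.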

  10. Kernel Non-Rigid Structure from Motion

    PubMed Central

    Gotardo, Paulo F. U.; Martinez, Aleix M.

    2013-01-01

    Non-rigid structure from motion (NRSFM) is a difficult, underconstrained problem in computer vision. The standard approach in NRSFM constrains 3D shape deformation using a linear combination of K basis shapes; the solution is then obtained as the low-rank factorization of an input observation matrix. An important but overlooked problem with this approach is that non-linear deformations are often observed; these deformations lead to a weakened low-rank constraint due to the need to use additional basis shapes to linearly model points that move along curves. Here, we demonstrate how the kernel trick can be applied in standard NRSFM. As a result, we model complex, deformable 3D shapes as the outputs of a non-linear mapping whose inputs are points within a low-dimensional shape space. This approach is flexible and can use different kernels to build different non-linear models. Using the kernel trick, our model complements the low-rank constraint by capturing non-linear relationships in the shape coefficients of the linear model. The net effect can be seen as using non-linear dimensionality reduction to further compress the (shape) space of possible solutions. PMID:24002226

  11. Balancing continuous covariates based on Kernel densities.

    PubMed

    Ma, Zhenjun; Hu, Feifang

    2013-03-01

    The balance of important baseline covariates is essential for convincing treatment comparisons. Stratified permuted block design and minimization are the two most commonly used balancing strategies, both of which require the covariates to be discrete. Continuous covariates are typically discretized in order to be included in the randomization scheme, but breaking continuous covariates into subcategories often changes their nature and makes distributional balance unattainable. In this article, we propose to balance continuous covariates based on kernel density estimation, which preserves the continuity of the covariates. Simulation studies show that the proposed Kernel-Minimization can achieve distributional balance of both continuous and categorical covariates, while also keeping the group sizes well balanced. It is also shown that Kernel-Minimization is less predictable than stratified permuted block design and minimization. Finally, we apply the proposed method to redesign the NINDS trial, which has been a source of controversy due to imbalance of continuous baseline covariates. Simulation shows that imbalances such as those observed in the NINDS trial can generally be avoided through the implementation of the new method.
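    A rough sketch of the idea: allocate each incoming patient to whichever arm leaves the two arms' kernel density estimates closest. The unnormalized kernel mass, L2 imbalance measure, and bandwidth below are illustrative assumptions, not the authors' exact algorithm, and the deterministic rule omits the randomization component a real trial would retain.

```python
import numpy as np

def kernel_mass(points, grid, h=0.3):
    # Unnormalized Gaussian kernel sum: total mass equals the number of
    # patients, so the criterion balances group size and covariate shape.
    if not points:
        return np.zeros_like(grid)
    u = (grid[:, None] - np.asarray(points)[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (h * np.sqrt(2.0 * np.pi))

def allocate(x_new, arm_a, arm_b, grid):
    # Tentatively place the new patient in each arm and keep the choice
    # that minimizes the L2 distance between the two arms' kernel masses.
    dx = grid[1] - grid[0]
    d_a = ((kernel_mass(arm_a + [x_new], grid) - kernel_mass(arm_b, grid)) ** 2).sum() * dx
    d_b = ((kernel_mass(arm_a, grid) - kernel_mass(arm_b + [x_new], grid)) ** 2).sum() * dx
    return "A" if d_a <= d_b else "B"
```

    Sequential allocation under this rule keeps both the arm sizes and the covariate distributions close, in the spirit of the Kernel-Minimization described above.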

  12. Kernel methods for phenotyping complex plant architecture.

    PubMed

    Kawamura, Koji; Hibrand-Saint Oyant, Laurence; Foucher, Fabrice; Thouroude, Tatiana; Loustau, Sébastien

    2014-02-07

    Quantitative Trait Loci (QTL) mapping of plant architecture is a critical step for understanding the genetic determinism of plant architecture. Previous studies adopted simple measurements, such as plant height, stem diameter and branching intensity, for QTL mapping of plant architecture. Many of these quantitative traits are correlated with each other, which gives rise to statistical problems in the detection of QTL. We aim to test the applicability of kernel methods to phenotyping inflorescence architecture and its QTL mapping. We first test Kernel Principal Component Analysis (KPCA) and Support Vector Machines (SVM) on an artificial dataset of simulated inflorescences with different types of flower distribution, each coded as a sequence of flower number per node along a shoot. The ability of SVM and KPCA to discriminate the different inflorescence types is illustrated. We then apply the KPCA representation to a real dataset of rose inflorescence shoots (n=1460) obtained from a mapping population of 98 F1 hybrids. We find kernel principal components with high heritability (>0.7), and the QTL analysis identifies a new QTL that was not detected by a trait-by-trait analysis of simple architectural measurements. The main tools developed in this paper could be used to tackle the general problem of QTL mapping of complex (sequences, 3D structures, graphs) phenotypic traits.
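    Kernel PCA of coded shoots as described — a kernel matrix over the sequences, double centering in feature space, then eigendecomposition — can be sketched as below. The Gaussian kernel on fixed-length count vectors is an assumption; the authors may well use sequence-specific kernels.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.1):
    # Gaussian kernel matrix over the coded shoots (rows of X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                      # center the kernel matrix in feature space
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    # scores: projections of the samples onto the kernel principal components
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

    The resulting component scores are the continuous phenotypes one would then feed into heritability and QTL analyses.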

  13. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centering step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement its kernel version. The effectiveness of the INPT is shown by applying it to incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
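    A minimal sketch of the underlying construction: NPT recovers explicit sample coordinates from the eigendecomposition of the kernel matrix, and a new sample's coordinates follow from its kernel vector against the training set without touching the old coordinates. Details here (no centering, Gaussian kernel, tolerance) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def npt_coordinates(K, tol=1e-10):
    # Explicit coordinates of the training samples in the kernel-induced
    # space: K = U diag(L) U^T  =>  coords = U sqrt(L), so coords @ coords.T = K.
    vals, vecs = np.linalg.eigh(K)
    keep = vals > tol
    L, U = vals[keep], vecs[:, keep]
    coords = U * np.sqrt(L)
    proj = U / np.sqrt(L)      # maps a kernel vector k(x, X) to coordinates
    return coords, proj

def npt_embed_new(k_new, proj):
    # Coordinates of unseen samples; the old coordinates are left untouched,
    # which is what makes an incremental variant possible.
    return k_new @ proj
```

    With the coordinates in hand, any ordinary (linear) incremental algorithm can operate on them directly, which is the point of dispensing with the kernel trick.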

  14. Towards smart energy systems: application of kernel machine regression for medium term electricity load forecasting.

    PubMed

    Alamaniotis, Miltiadis; Bargiotas, Dimitrios; Tsoukalas, Lefteri H

    2016-01-01

    The integration of energy systems with information technologies has facilitated the realization of smart energy systems that utilize information to optimize system operation. Crucial to optimizing energy system operation is the accurate, ahead-of-time forecasting of load demand. In particular, load forecasting allows planning of system expansion and decision making for enhancing system safety and reliability. In this paper, the application of two types of kernel machines for medium term load forecasting (MTLF) is presented and their performance is recorded based on a set of historical electricity load demand data. The two kernel machine models, namely Gaussian process regression (GPR) and relevance vector regression (RVR), are utilized for making predictions of future load demand. Both models are equipped with a Gaussian kernel and are tested on daily predictions for a 30-day-ahead horizon using data from the New England area. Furthermore, their performance is compared to the ARMA(2,2) model with respect to mean absolute percentage error and squared correlation coefficient. Results demonstrate the superiority of RVR over the other forecasting models in performing MTLF.
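    The GPR half of the comparison can be sketched in a few lines: predictive mean and variance with a Gaussian (squared-exponential) kernel plus a noise term. The hyperparameter values here are placeholders, not those fitted to the New England data.

```python
import numpy as np

def sq_exp(a, b, ell, sf):
    # Gaussian (squared-exponential) covariance between 1-D inputs
    return sf ** 2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gpr_predict(x_tr, y_tr, x_te, ell=2.0, sf=1.0, noise=0.1):
    K = sq_exp(x_tr, x_tr, ell, sf) + noise ** 2 * np.eye(len(x_tr))
    Ks = sq_exp(x_te, x_tr, ell, sf)
    alpha = np.linalg.solve(K, y_tr)               # predictive mean weights
    mean = Ks @ alpha
    v = np.linalg.solve(K, Ks.T)
    var = sf ** 2 - np.einsum("ij,ji->i", Ks, v)   # predictive variance
    return mean, var
```

    RVR yields predictions of the same form but with a sparse set of relevance vectors; both produce the predictive uncertainties that make kernel machines attractive for load planning.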

  15. Abrasion resistant heat pipe

    DOEpatents

    Ernst, D.M.

    1984-10-23

    A specially constructed heat pipe is described for use in fluidized bed combustors. Two distinct coatings are spray coated onto a heat pipe casing constructed of low thermal expansion metal, each coating serving a different purpose. The first coating forms aluminum oxide to prevent hydrogen permeation into the heat pipe casing, and the second coating contains stabilized zirconium oxide to provide abrasion resistance while not substantially affecting the heat transfer characteristics of the system.


  17. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…

  18. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
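    In the spirit of the approach (though far simpler than the end-to-end system model described), a small MSE-optimal restoration kernel can be estimated by least squares from a degraded/original image pair: by construction it can do no worse than leaving the degraded image untouched. The patch extraction, kernel size, and box-blur degradation below are illustrative assumptions.

```python
import numpy as np

def patches(img, size=3):
    # all size-by-size neighborhoods of the interior pixels, one per row
    r = size // 2
    H, W = img.shape
    return np.array([img[i - r:i + r + 1, j - r:j + r + 1].ravel()
                     for i in range(r, H - r) for j in range(r, W - r)])

def optimal_small_kernel(degraded, original, size=3):
    # Least-squares estimate of the small kernel minimizing the
    # mean-square restoration error over this image pair.
    r = size // 2
    A = patches(degraded, size)
    b = original[r:-r, r:-r].ravel()
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k.reshape(size, size)
```

    Because the identity kernel is in the feasible set, the fitted kernel's restoration error is never worse than the degraded image's own error, while remaining cheap to apply by convolution.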

  19. Thermal Expansion of AuIn2

    SciTech Connect

    Saw, C K; Siekhaus, W J

    2004-07-12

    The thermal expansion of AuIn2 is of great interest in soldering technology. Indium-containing solders have been used to make gold wire interconnects at low soldering temperatures, and over time AuIn2 forms between the gold wire and the solder owing to the high heat of formation and the high inter-metallic diffusion of indium. Hence, the thermal expansion of the AuIn2 alloy, in comparison with that of the gold wire and the indium-containing solder, is critical in determining the integrity of the connection. We present the results of x-ray diffraction measurements of the coefficient of linear expansion of AuIn2, as well as the bulk expansion and density changes, over the temperature range of 30 to 500 °C.

  20. Femtosecond dynamics of cluster expansion

    NASA Astrophysics Data System (ADS)

    Gao, Xiaohui; Wang, Xiaoming; Shim, Bonggu; Arefiev, Alexey; Tushentsov, Mikhail; Breizman, Boris; Downer, Mike

    2010-03-01

    Noble gas clusters irradiated by an intense ultrafast laser expand quickly, becoming a typical plasma on a picosecond time scale. During the expansion, the clustered plasma exhibits unique optical properties such as strong absorption and a positive contribution to the refractive index. Here we study cluster expansion dynamics by femtosecond-time-resolved refractive index and absorption measurements in cluster gas jets after ionization and heating by an intense pump pulse. The refractive index measured by frequency domain interferometry (FDI) shows a transient positive peak due to the clustered plasma. By separating it from the negative contribution of the monomer plasma, we are able to determine the cluster fraction. The absorption measured by a delayed probe shows the contribution from clusters of various sizes. The plasma resonances in the clusters explain the enhancement of the absorption in our isothermal expanding-cluster model, from which the cluster size distribution can be determined. A complete understanding of the femtosecond dynamics of cluster expansion is essential for the accurate interpretation and control of laser-cluster experiments such as phase-matched harmonic generation in a cluster medium.

  1. Influence of wheat kernel physical properties on the pulverizing process.

    PubMed

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with similar protein content (11.2-12.8 % w.b.), obtained from an organic farming system, were used for analysis. The kernels (moisture content 10 % w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ kg(-1) to 159 kJ kg(-1). Many significant correlations (p < 0.05) were found between wheat kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by uniaxial compression testing, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.

  2. Geometric tree kernels: classification of COPD from airway tree geometry.

    PubMed

    Feragen, Aasa; Petersen, Jens; Grimm, Dominik; Dirksen, Asger; Pedersen, Jesper Holst; Borgwardt, Karsten; de Bruijne, Marleen

    2013-01-01

    Methodological contributions: This paper introduces a family of kernels for analyzing (anatomical) trees endowed with vector-valued measurements made along the tree. While state-of-the-art graph and tree kernels use combinatorial tree/graph structure with discrete node and edge labels, the kernels presented in this paper can include geometric information such as branch shape, branch radius or other vector-valued properties. In addition to being flexible in their ability to model different types of attributes, the presented kernels are computationally efficient and some of them can easily be computed for large datasets (N ≈ 10,000) of trees with 30-600 branches. Combining the kernels with standard machine learning tools enables us to analyze the relation between disease and anatomical tree structure and geometry. Experimental results: The kernels are used to compare airway trees segmented from low-dose CT, endowed with branch shape descriptors and airway wall area percentage measurements made along the tree. Using kernelized hypothesis testing we show that the geometric airway trees are distributed significantly differently in patients with Chronic Obstructive Pulmonary Disease (COPD) than in healthy individuals. The geometric tree kernels also give a significant increase in the accuracy of classifying COPD from geometric tree structure endowed with airway wall thickness measurements in comparison with state-of-the-art methods, giving further insight into the relationship between airway wall thickness and COPD. Software: Software for computing kernels and statistical tests is available at http://image.diku.dk/aasa/software.php.

  3. A Kernel-based Account of Bibliometric Measures

    NASA Astrophysics Data System (ADS)

    Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji

    The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on the three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of `relativity,' or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.
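    One concrete member of such a kernel family on graphs is the von Neumann-style kernel K = (I - αA)⁻¹ = Σₖ (αA)ᵏ, in which the parameter α plays exactly the role described: small α weights short paths (relatedness), while larger α accumulates long paths (global importance). Below is a minimal sketch on an invented citation graph; the specific kernel choice is ours for illustration, not necessarily the one used in the paper.

```python
import numpy as np

def neumann_kernel(A, alpha):
    # K = sum_{k>=0} (alpha * A)^k = (I - alpha * A)^(-1);
    # valid when alpha < 1 / spectral_radius(A).
    n = len(A)
    return np.linalg.inv(np.eye(n) - alpha * A)
```

    Entry K[i, j] then scores how strongly document j matters relative to document i, interpolating between direct co-citation relatedness and a PageRank-like global importance as α grows.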

  4. Microscale Regenerative Heat Exchanger

    NASA Technical Reports Server (NTRS)

    Moran, Matthew E.; Stelter, Stephan; Stelter, Manfred

    2006-01-01

    The device described herein is designed primarily for use as a regenerative heat exchanger in a miniature Stirling engine or Stirling-cycle heat pump. A regenerative heat exchanger (sometimes called, simply, a "regenerator" in the Stirling-engine art) is basically a thermal capacitor: Its role in the Stirling cycle is to alternately accept heat from, then deliver heat to, an oscillating flow of a working fluid between compression and expansion volumes, without introducing an excessive pressure drop. These volumes are at different temperatures, and conduction of heat between these volumes is undesirable because it reduces the energy-conversion efficiency of the Stirling cycle.

  5. Model-based online learning with kernels.

    PubMed

    Li, Guoqi; Wen, Changyun; Li, Zheng Guo; Zhang, Aimin; Yang, Feng; Mao, Kezhi

    2013-03-01

    New optimization models and algorithms for online learning with kernels (OLK) in classification, regression, and novelty detection are proposed in a reproducing kernel Hilbert space. Unlike the stochastic gradient descent algorithm, called the naive online regularized risk minimization algorithm (NORMA), OLK algorithms are obtained by solving a constrained optimization problem based on the proposed models. By exploiting the techniques of the Lagrange dual problem, as in Vapnik's support vector machine (SVM), the solution of the optimization problem can be obtained iteratively, and the iteration process is similar to that of NORMA. This further strengthens the foundation of OLK and enriches the research area of SVM. We also apply the obtained OLK algorithms to problems in classification, regression, and novelty detection, including real-time background subtraction, to show their effectiveness. The experimental results for both classification and regression illustrate that the accuracy of the OLK algorithms is comparable with traditional SVM-based algorithms, such as SVM and least squares SVM (LS-SVM), and with state-of-the-art algorithms such as the kernel recursive least squares (KRLS) method and the projectron method, while it is slightly higher than that of NORMA. On the other hand, the computational cost of the OLK algorithms is comparable with or slightly lower than that of existing online methods, such as the above-mentioned NORMA, KRLS, and projectron methods, but much lower than that of SVM-based algorithms. In addition, unlike SVM and LS-SVM, OLK algorithms can be applied to non-stationary problems. The applicability of OLK to novelty detection is also illustrated by simulation results.

  6. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient; that is, the training set usually does not include enough samples to capture the variety of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. The virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. To further improve robustness, we implement the corresponding representation-based face recognition in kernel space. Notably, any kind of virtual training sample can be used in our method. We use noised face images as virtual face samples: the noise can be viewed approximately as a reflection of the variety of illuminations, facial expressions, and postures, and imposing Gaussian noise (and other types of noise) on the original training samples is a simple and feasible way to obtain possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.

  7. Prediction of kernel density of corn using single-kernel near infrared spectroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Corn hardness is an important property for dry- and wet-millers, food processors and corn breeders developing hybrids for specific markets. Of the several methods used to measure hardness, kernel density measurement is one of the more repeatable ways to quantify it. Near infrared spec...

  8. Neutron scattering kernel for solid deuterium

    NASA Astrophysics Data System (ADS)

    Granada, J. R.

    2009-06-01

    A new scattering kernel to describe the interaction of slow neutrons with solid deuterium was developed. The main characteristics of that system are contained in the formalism, including the lattice's density of states, the Young-Koppel quantum treatment of the rotations, and the internal molecular vibrations. The elastic processes involving coherent and incoherent contributions are fully described, as well as the spin-correlation effects. The results from the new model are compared with the best available experimental data, showing very good agreement.

  9. Verification of Chare-kernel programs

    SciTech Connect

    Bhansali, S.; Kale, L.V. )

    1989-01-01

    Experience with concurrent programming has shown that concurrent programs can conceal bugs even after extensive testing. Thus, there is a need for practical techniques that can establish the correctness of parallel programs. This paper proposes a method for proving the partial correctness of programs written in the Chare-kernel language, a language designed to support the parallel execution of computations with irregular structures. The proof is based on the lattice proof technique and is divided into two parts. The first part is concerned with program behavior within a single chare instance, whereas the second part captures the inter-chare interaction.

  10. Kernel polynomial representation for imaginary-time Green’s functions in continuous-time quantum Monte Carlo impurity solver

    NASA Astrophysics Data System (ADS)

    Huang, Li

    2016-11-01

    Inspired by the recently proposed Legendre orthogonal polynomial representation for imaginary-time Green’s functions G(τ), we develop an alternate and superior representation for G(τ) and implement it in the hybridization expansion continuous-time quantum Monte Carlo impurity solver. This representation is based on the kernel polynomial method, which introduces some integral kernel functions to filter the numerical fluctuations caused by the explicit truncations of polynomial expansion series and can improve the computational precision significantly. As an illustration of the new representation, we re-examine the imaginary-time Green’s functions of the single-band Hubbard model in the framework of dynamical mean-field theory. The calculated results suggest that with carefully chosen integral kernel functions, whether the system is metallic or insulating, the Gibbs oscillations found in the previous Legendre orthogonal polynomial representation have been vastly suppressed and remarkable corrections to the measured Green’s functions have been obtained. Project supported by the National Natural Science Foundation of China (Grant No. 11504340).
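    The mechanism at work — damping Chebyshev moments with an integral kernel so that a truncated expansion stops ringing — can be illustrated on a step function rather than a Green's function. The sketch below uses the standard Jackson kernel; the moment count and quadrature size are arbitrary choices, and this is an illustration of the kernel polynomial method in general, not the paper's impurity-solver implementation.

```python
import numpy as np

def cheb_moments(f, N, M=2000):
    # Chebyshev moments of f on [-1, 1] via Chebyshev-Gauss quadrature
    # at the nodes x_k = cos(pi * (k + 1/2) / M).
    k = np.arange(M)
    x = np.cos(np.pi * (k + 0.5) / M)
    n = np.arange(N)[:, None]
    T = np.cos(n * np.pi * (k + 0.5) / M)   # T_n(x_k)
    mu = (T @ f(x)) * 2.0 / M
    mu[0] /= 2.0
    return mu

def jackson(N):
    # Jackson damping factors g_n: a strictly positive kernel that
    # suppresses the Gibbs oscillations of the truncated series.
    n = np.arange(N)
    return ((N - n + 1) * np.cos(np.pi * n / (N + 1))
            + np.sin(np.pi * n / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)

def cheb_eval(mu, x):
    # Evaluate sum_n mu_n T_n(x) using T_n(cos t) = cos(n t).
    theta = np.arccos(x)
    return sum(mu[n] * np.cos(n * theta) for n in range(len(mu)))
```

    Because the Jackson kernel is positive and normalized, the damped reconstruction stays within the range of the target function, which is the "vast suppression" of Gibbs oscillations referred to in the abstract.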

  11. Analysis of maize (Zea mays) kernel density and volume using micro-computed tomography and single-kernel near infrared spectroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Maize kernel density impacts milling quality of the grain due to kernel hardness. Harder kernels are correlated with higher test weight and are more resistant to breakage during harvest and transport. Softer kernels, in addition to being susceptible to mechanical damage, are also prone to pathogen ...

  12. Giant negative thermal expansion in magnetic nanocrystals.

    PubMed

    Zheng, X G; Kubozono, H; Yamada, H; Kato, K; Ishiwata, Y; Xu, C N

    2008-12-01

    Most solids expand when they are heated, but a property known as negative thermal expansion has been observed in a number of materials, including the oxide ZrW2O8 (ref. 1) and the framework material ZnxCd1-x(CN)2 (refs 2,3). This unusual behaviour can be understood in terms of low-energy phonons, while the colossal values of both positive and negative thermal expansion recently observed in another framework material, Ag3[Co(CN)6], have been explained in terms of the geometric flexibility of its metal-cyanide-metal linkages. Thermal expansion can also be stopped in some magnetic transition metal alloys below their magnetic ordering temperature, a phenomenon known as the Invar effect, and the possibility of exploiting materials with tuneable positive or negative thermal expansion in industrial applications has led to intense interest in both the Invar effect and negative thermal expansion. Here we report the results of thermal expansion experiments on three magnetic nanocrystals-CuO, MnF2 and NiO-and find evidence for negative thermal expansion in both CuO and MnF2 below their magnetic ordering temperatures, but not in NiO. Larger particles of CuO and MnF2 also show prominent magnetostriction (that is, they change shape in response to an applied magnetic field), which results in significantly reduced thermal expansion below their magnetic ordering temperatures; this behaviour is not observed in NiO. We propose that the negative thermal expansion effect in CuO (which is four times larger than that observed in ZrW2O8) and MnF2 is a general property of nanoparticles in which there is strong coupling between magnetism and the crystal lattice.

  13. Introductory heat-transfer

    NASA Technical Reports Server (NTRS)

    Widener, Edward L.

    1992-01-01

    The objective is to introduce some concepts of thermodynamics in existing heat-treating experiments using available items. The specific objectives are to define the thermal properties of materials and to visualize expansivity, conductivity, heat capacity, and the melting point of common metals. The experimental procedures are described.

  14. Preliminary thermal expansion screening data for tuffs

    SciTech Connect

    Lappin, A.R.

    1980-03-01

    A major variable in evaluating the potential of silicic tuffs for use in geologic disposal of heat-producing nuclear wastes is thermal expansion. Results of ambient-pressure linear expansion measurements on a group of tuffs that vary greatly in porosity and mineralogy are presented here. Thermal expansion of devitrified welded tuffs is generally linear with increasing temperature and independent of both porosity and heating rate. Mineralogic factors affecting the behavior of these tuffs are limited to the presence or absence of cristobalite and altered biotite. The presence of cristobalite results in markedly nonlinear expansion above 200 °C. If biotite in biotite-bearing rocks alters even slightly to expandable clays, the behavior of these tuffs near the boiling point of water can be dominated by contraction of the expandable phase. Expansion of both high- and low-porosity tuffs containing hydrated silicic glass and/or expandable clays is complex. The behavior of these rocks appears to be completely dominated by dehydration of hydrous phases and, hence, should be critically dependent on fluid pressure. Valid extrapolation of the ambient-pressure results presented here to depths of interest for construction of a nuclear-waste repository will depend on a good understanding of the interaction of dehydration rates and fluid pressures, and of the effects of both micro- and macrofractures on the response of tuff masses.

  15. Delimiting Areas of Endemism through Kernel Interpolation

    PubMed Central

    Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. It estimates the overlap between species distributions through a kernel interpolation of the centroids of the species distributions, with areas of influence defined from the distance between each centroid and the farthest point of occurrence of that species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes may be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
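    A toy sketch of the interpolation step as described: a Gaussian kernel is placed at each species' centroid, with a bandwidth set by the distance from the centroid to that species' farthest occurrence record, and the per-species surfaces are summed so that overlap (candidate areas of endemism) stands out. The grid, kernel shape, and example records are illustrative assumptions, not the GIE implementation.

```python
import numpy as np

def species_kernel_surface(occurrences, grid_x, grid_y):
    # occurrences: list of (n_i, 2) point-record arrays, one per species
    gx, gy = np.meshgrid(grid_x, grid_y)
    surface = np.zeros_like(gx)
    for pts in occurrences:
        pts = np.asarray(pts, dtype=float)
        c = pts.mean(axis=0)                               # centroid of the records
        radius = max(np.sqrt(((pts - c) ** 2).sum(1)).max(), 1e-6)  # area of influence
        d2 = (gx - c[0]) ** 2 + (gy - c[1]) ** 2
        surface += np.exp(-d2 / (2.0 * radius ** 2))       # Gaussian kernel per species
    return surface
```

    Peaks of the summed surface mark regions where several species' areas of influence coincide, without ever committing to a grid-cell discretization of the ranges themselves.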

  16. Scientific Computing Kernels on the Cell Processor

    SciTech Connect

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  17. Generalized Langevin equation with tempered memory kernel

    NASA Astrophysics Data System (ADS)

    Liemert, André; Sandev, Trifce; Kantz, Holger

    2017-01-01

We study a generalized Langevin equation for a free particle in the presence of a truncated power-law and Mittag-Leffler memory kernel. It is shown that, in the presence of truncation, the particle turns from subdiffusive behavior in the short-time limit to normal diffusion in the long-time limit. The case of a harmonic oscillator is considered as well, and the relaxation functions and the normalized displacement correlation function are given in exact form. By considering an external time-dependent periodic force, we obtain resonant behavior even in the case of a free particle, due to the influence of the environment on the particle's motion. Additionally, the double-peak phenomenon in the imaginary part of the complex susceptibility is observed. The truncation parameter is found to have a strong influence on the behavior of these quantities, and it is shown how it changes the critical frequencies. The normalized displacement correlation function for a fractional generalized Langevin equation is investigated as well. All the results are exact and given in terms of the three-parameter Mittag-Leffler function and the Prabhakar generalized integral operator, whose kernel contains a three-parameter Mittag-Leffler function. This kind of truncated Langevin dynamics can be highly relevant for describing the lateral diffusion of lipids and proteins in cell membranes.
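The three-parameter (Prabhakar) Mittag-Leffler function in which these results are expressed can be evaluated numerically from its defining series for moderate arguments. A minimal sketch follows; the function name and truncation length are illustrative, and the bare series is not suitable for the asymptotic long-time regime:

```python
from math import gamma

def mittag_leffler3(z, alpha, beta, gam, terms=50):
    """Truncated series for the three-parameter (Prabhakar)
    Mittag-Leffler function:

        E^gam_{alpha,beta}(z) = sum_k Gamma(gam + k) z^k
                                / (Gamma(gam) k! Gamma(alpha*k + beta))

    Converges quickly for modest |z|; for large |z| one would switch
    to asymptotic expansions instead.
    """
    total, fact = 0.0, 1.0  # fact tracks k!
    for k in range(terms):
        total += gamma(gam + k) / (gamma(gam) * fact) * z**k / gamma(alpha * k + beta)
        fact *= k + 1
    return total
```

A sanity check: for alpha = beta = gam = 1 the series reduces term by term to the ordinary exponential, so `mittag_leffler3(1.0, 1, 1, 1)` should recover e.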

  18. Transcriptome analysis of Ginkgo biloba kernels

    PubMed Central

    He, Bing; Gu, Yincong; Xu, Meng; Wang, Jianwen; Cao, Fuliang; Xu, Li-an

    2015-01-01

Ginkgo biloba is a dioecious species native to China with medicinally and phylogenetically important characteristics; however, genomic resources for this species are limited. In this study, we performed the first transcriptome sequencing of Ginkgo kernels at five time points using Illumina paired-end sequencing. Approximately 25.08 Gb of clean reads were obtained, and 68,547 unigenes with an average length of 870 bp were generated by de novo assembly. Of these unigenes, 29,987 (43.74%) were annotated in publicly available plant protein databases. A total of 3,869 genes were identified as significantly differentially expressed, and enrichment analysis was conducted at different time points. Furthermore, metabolic pathway analysis revealed that 66 unigenes were responsible for terpenoid backbone biosynthesis, with up to 12 up-regulated unigenes involved in the biosynthesis of ginkgolide and bilobalide. Differential gene expression analysis together with real-time PCR experiments indicated that the synthesis of bilobalide may have interfered with the ginkgolide synthesis process in the kernel. These data considerably expand the existing transcriptome resources of Ginkgo and provide a valuable platform for revealing more about the developmental and metabolic mechanisms of this species. PMID:26500663

  19. Aligning Biomolecular Networks Using Modular Graph Kernels

    NASA Astrophysics Data System (ADS)

    Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant

Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.
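As a toy illustration of using a graph kernel as a scoring function between subnetworks, the node-label histogram kernel below compares two labeled (sub)graphs via the inner product of their label-count vectors. It is a deliberately minimal stand-in, not one of the richer kernels (e.g. shortest-path or random-walk kernels) implemented in BiNA:

```python
from collections import Counter

def label_histogram_kernel(labels_g1, labels_g2):
    """Node-label histogram kernel: inner product of the label-count
    vectors of two (sub)graphs. Higher values mean the two subnetworks
    share more nodes with common labels; it ignores topology entirely,
    which is why practical tools use walk- or path-based kernels.
    """
    h1, h2 = Counter(labels_g1), Counter(labels_g2)
    return sum(h1[lab] * h2[lab] for lab in h1)
```

Because it is a valid positive semi-definite kernel, scores computed this way can feed directly into the kind of pairwise similarity matrix used to cluster species, as described above.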

  20. Sugar uptake into kernels of tunicate tassel-seed maize

    SciTech Connect

Thomas, P.A.; Felker, F.C.; Crawford, C.G.

    1990-05-01

A maize (Zea mays L.) strain expressing both the tassel-seed (Ts-5) and tunicate (Tu) characters was developed which produces glume-covered kernels on the tassel, often borne on 7-10 mm pedicels. Vigorous plants produce up to 100 such kernels interspersed with additional sessile kernels. This floral unit provides a potentially valuable experimental system for studying sugar uptake into developing maize seeds. When detached kernels (with glumes and pedicel intact) are placed in incubation solution, fluid flows up the pedicel and into the glumes, entering the pedicel apoplast near the kernel base. The unusual anatomical features of this maize strain permit experimental access to the pedicel apoplast with much less possibility of kernel base tissue damage than with kernels excised from the cob. [14C]Fructose incorporation into soluble and insoluble fractions of endosperm increased for 8 days. Endosperm uptake of sucrose, fructose, and D-glucose was significantly greater than that of L-glucose. Fructose uptake was significantly inhibited by CCCP, DNP, and PCMBS. These results suggest the presence of an active, non-diffusion component of sugar transport in maize kernels.

  1. Integral Transform Methods: A Critical Review of Various Kernels

    NASA Astrophysics Data System (ADS)

    Orlandini, Giuseppina; Turro, Francesco

    2017-03-01

    Some general remarks about integral transform approaches to response functions are made. Their advantage for calculating cross sections at energies in the continuum is stressed. In particular we discuss the class of kernels that allow calculations of the transform by matrix diagonalization. A particular set of such kernels, namely the wavelets, is tested in a model study.

  2. Evidence-Based Kernels: Fundamental Units of Behavioral Influence

    ERIC Educational Resources Information Center

    Embry, Dennis D.; Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of…

  3. Comparison of Kernel Equating and Item Response Theory Equating Methods

    ERIC Educational Resources Information Center

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…
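The core of kernel equating is the Gaussian-kernel continuization of a discrete score distribution, after which one score scale is mapped onto another through the continuized CDFs. The sketch below shows the continuization step in simplified form: it omits the linear rescaling that full kernel equating applies to preserve the mean and variance of the discrete distribution, so names and details here are illustrative only:

```python
from math import erf, sqrt

def continuized_cdf(scores, probs, x, h):
    """Gaussian-kernel continuization of a discrete score distribution,
    the first step of kernel equating: F_h(x) = sum_j p_j * Phi((x - x_j)/h).

    `h` is the bandwidth; small h keeps the staircase shape of the
    discrete CDF, large h oversmooths it, which is why bandwidth
    selection matters for the equated scores.
    """
    phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))  # standard normal CDF
    return sum(p * phi((x - s) / h) for s, p in zip(scores, probs))
```

Given continuized CDFs F and G for two test forms, the equating function is the composition G^{-1}(F(x)), typically evaluated by numerical inversion.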

  4. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. It should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel differ from those of the wire kernel itself, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the gradient of the wire kernel must be integrated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  5. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  6. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  7. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  8. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  9. High speed sorting of Fusarium-damaged wheat kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

Recent studies have found that resistance to Fusarium fungal infection can be inherited in wheat from one generation to another. However, no cost-effective method is yet available to separate Fusarium-damaged wheat kernels from undamaged kernels so that wheat breeders can take advantage of...

  10. End-use quality of soft kernel durum wheat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat is known for its very hard texture, which influences how it is milled and for what products it is well suited. We developed soft kernel durum wheat lines via Ph1b-mediated homoeologous recombination with Dr. Leonard Joppa...

  11. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  12. Parametric kernel-driven active contours for image segmentation

    NASA Astrophysics Data System (ADS)

    Wu, Qiongzhi; Fang, Jiangxiong

    2012-10-01

We investigated a parametric kernel-driven active contour (PKAC) model, which implicitly applies kernel mapping and a piecewise-constant model to the image data via a kernel function. The proposed model consists of a curve evolution functional with three terms: global and local kernel-driven terms, which evaluate the deviation of the mapped image data within each region from the piecewise-constant model, and a regularization term expressed as the length of the evolution curves. Through the local kernel-driven term, the proposed model can effectively segment images with intensity inhomogeneity by incorporating local image information. By balancing the weight between the global and local kernel-driven terms, the proposed model can segment images with either intensity homogeneity or intensity inhomogeneity. To ensure the smoothness of the level set function and reduce the computational cost, a distance regularizing term is applied to penalize the deviation of the level set function and eliminate the need for re-initialization. Compared with the local image fitting model and the local binary fitting model, experimental results show the advantages of the proposed method in terms of computational efficiency and accuracy.

  13. Evidence-based Kernels: Fundamental Units of Behavioral Influence

    PubMed Central

    Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600

  14. Computing the roots of complex orthogonal and kernel polynomials

    SciTech Connect

    Saylor, P.E.; Smolarski, D.C.

    1988-01-01

A method is presented to compute the roots of complex orthogonal and kernel polynomials. An important application of complex kernel polynomials is the acceleration of iterative methods for the solution of nonsymmetric linear equations. In the real case, the roots of orthogonal polynomials coincide with the eigenvalues of the Jacobi matrix, a symmetric tridiagonal matrix obtained from the defining three-term recurrence relationship for the orthogonal polynomials. In the real case kernel polynomials are orthogonal. The Stieltjes procedure is an algorithm to compute the roots of orthogonal and kernel polynomials based on these facts. In the complex case, the Jacobi matrix generalizes to a Hessenberg matrix, the eigenvalues of which are roots of either orthogonal or kernel polynomials. The resulting algorithm generalizes the Stieltjes procedure. It may not be defined in the case of kernel polynomials, a consequence of the fact that they are orthogonal with respect to a nonpositive bilinear form. (Another consequence is that kernel polynomials need not be of exact degree.) A second algorithm that is always defined is presented for kernel polynomials. Numerical examples are described.
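For the real orthogonal case described above, the roots can also be found without an eigen-solver by Newton iteration on the three-term recurrence itself. The sketch below does this for Legendre polynomials; it is an illustrative alternative to the Jacobi-matrix eigenvalue computation, not the paper's complex-case algorithm:

```python
from math import cos, pi

def legendre_roots(n, iters=50):
    """Roots of the degree-n Legendre polynomial via Newton iteration
    on the Bonnet three-term recurrence. In exact arithmetic these
    coincide with the eigenvalues of the n-by-n Jacobi matrix, the
    fact the Stieltjes procedure exploits.
    """
    def p_and_dp(x):
        p_prev, p = 1.0, x
        for k in range(2, n + 1):  # k*P_k = (2k-1)*x*P_{k-1} - (k-1)*P_{k-2}
            p_prev, p = p, ((2 * k - 1) * x * p - (k - 1) * p_prev) / k
        dp = n * (x * p - p_prev) / (x * x - 1.0)  # (x^2-1)P_n' = n(x*P_n - P_{n-1})
        return p, dp

    roots = []
    for i in range(1, n + 1):
        x = cos(pi * (i - 0.25) / (n + 0.5))  # Chebyshev-type initial guess
        for _ in range(iters):
            p, dp = p_and_dp(x)
            x -= p / dp
        roots.append(x)
    return sorted(roots)
```

The same roots double as Gauss-Legendre quadrature nodes, which is one reason fast, robust root-finding for orthogonal polynomials matters in practice.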

  15. PROPERTIES OF A SOLAR FLARE KERNEL OBSERVED BY HINODE AND SDO

    SciTech Connect

    Young, P. R.; Doschek, G. A.; Warren, H. P.; Hara, H.

    2013-04-01

Flare kernels are compact features located in the solar chromosphere that are the sites of rapid heating and plasma upflow during the rise phase of flares. An example is presented from an M1.1 class flare in active region AR 11158, observed on 2011 February 16 07:44 UT, for which the location of the upflow region seen by the EUV Imaging Spectrometer (EIS) can be precisely aligned to high spatial resolution images obtained by the Atmospheric Imaging Assembly (AIA) and Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO). A string of bright flare kernels is found to be aligned with a ridge of strong magnetic field, and one kernel site is highlighted for which an upflow speed of ≈400 km s^-1 is measured in lines formed at 10-30 MK. The line-of-sight magnetic field strength at this location is ≈1000 G. Emission over a continuous range of temperatures down to the chromosphere is found, and the kernels have a similar morphology at all temperatures and are spatially coincident, with sizes at the resolution limit of the AIA instrument (≲400 km). For temperatures of 0.3-3.0 MK the EIS emission lines show multiple velocity components, with the dominant component becoming more blueshifted with temperature, from a redshift of 35 km s^-1 at 0.3 MK to a blueshift of 60 km s^-1 at 3.0 MK. Emission lines from 1.5-3.0 MK show a weak redshifted component at around 60-70 km s^-1, implying multi-directional flows at the kernel site. Significant non-thermal broadening corresponding to velocities of ≈120 km s^-1 is found at 10-30 MK, and the electron density in the kernel, measured at 2 MK, is 3.4 × 10^10 cm^-3. Finally, the Fe XXIV λ192.03/λ255.11 ratio suggests that the EIS calibration has changed since launch, with the long wavelength channel less sensitive than the short wavelength channel by around a factor of two.

  16. Water-heating dehumidifier

    DOEpatents

    Tomlinson, John J.

    2006-04-18

A water-heating dehumidifier includes a refrigerant loop comprising a compressor, at least one condenser, an expansion device, and an evaporator with an evaporator fan. The condenser includes a water inlet and a water outlet for flowing water therethrough or proximate thereto, or is affixed to the tank or immersed in the tank to effect water heating without flowing water. The immersed condenser design includes a self-insulated capillary tube expansion device for simplicity and high efficiency. In water-heating mode, air is drawn by the evaporator fan across the evaporator to produce cooled and dehumidified air, and heat taken from the air is absorbed by the refrigerant at the evaporator and pumped to the condenser, where water is heated. When the water heater tank is full of hot water or a humidistat set point is reached, the water-heating dehumidifier can switch to run as a dehumidifier.

  17. OSKI: A Library of Automatically Tuned Sparse Matrix Kernels

    SciTech Connect

    Vuduc, R; Demmel, J W; Yelick, K A

    2005-07-19

    The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.
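A reference (untuned) version of OSKI's central kernel, sparse matrix-vector multiply in compressed sparse row (CSR) format, can be sketched as follows. OSKI's contribution is selecting tuned variants of this loop (register blocking, cache blocking, etc.) at run time for a particular matrix and machine, which the sketch makes no attempt at:

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x with A in compressed sparse row (CSR) format:
    values   - nonzero entries, row by row
    col_idx  - column index of each nonzero
    row_ptr  - row i's nonzeros occupy values[row_ptr[i]:row_ptr[i+1]]
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y
```

For the 2x3 matrix [[1, 0, 2], [0, 3, 0]], the CSR arrays are values = [1, 2, 3], col_idx = [0, 2, 1], row_ptr = [0, 2, 3].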

  18. Direct Measurement of Wave Kernels in Time-Distance Helioseismology

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.

    2006-01-01

    Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002,ApJ,571,966)h ave calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-sourc'e scatterers, such a kernel has been measured. The observed kernel contains similar features to a theoretical damping kernel but not for a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.

  19. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
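In the kernel method, the PET image is parameterized as x = Kα, where the kernel matrix K is built from per-voxel anatomical feature vectors so that anatomically similar voxels share intensity information. A minimal sketch of constructing such a Gaussian kernel matrix follows; the feature vectors, sigma, and the dense construction are illustrative (practical implementations use sparse k-nearest-neighbor kernels):

```python
from math import exp

def gaussian_kernel_matrix(features, sigma):
    """K[i][j] = exp(-||f_i - f_j||^2 / (2 sigma^2)) from per-voxel
    anatomical feature vectors. With the image modeled as x = K a,
    anatomical similarity regularizes the ML reconstruction without
    requiring segmentation of the anatomical image.
    """
    n = len(features)
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(features[i], features[j]))
            K[i][j] = exp(-d2 / (2.0 * sigma * sigma))
    return K
```

The reconstruction then estimates the coefficient vector α by ML expectation maximization with K folded into the forward model, rather than penalizing the likelihood directly.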

  20. A novel extended kernel recursive least squares algorithm.

    PubMed

    Zhu, Pingping; Chen, Badong; Príncipe, José C

    2012-08-01

In this paper, a novel extended kernel recursive least squares algorithm is proposed, combining the kernel recursive least squares algorithm and the Kalman filter or its extensions to estimate or predict signals. Unlike the extended kernel recursive least squares (Ex-KRLS) algorithm proposed by Liu, the state model of our algorithm is still constructed in the original state space, and the hidden state is estimated using the Kalman filter. The measurement model used in hidden state estimation is learned by the kernel recursive least squares (KRLS) algorithm in a reproducing kernel Hilbert space (RKHS). The novel algorithm has more flexible state and noise models. We apply this algorithm to vehicle tracking and nonlinear Rayleigh fading channel tracking, and compare the tracking performance with that of other existing algorithms.

  1. Optimal Electric Utility Expansion

    SciTech Connect

    1989-10-10

SAGE-WASP is designed to find the optimal generation expansion policy for an electrical utility system. New units can be automatically selected from a user-supplied list of expansion candidates, which can include hydroelectric and pumped storage projects. The existing system is modeled. The calculational procedure takes into account user restrictions to limit generation configurations to an area of economic interest. The optimization program reports whether the restrictions acted as a constraint on the solution. All expansion configurations considered are required to pass a user-supplied reliability criterion. The discount rate and escalation rate are treated separately for each expansion candidate and for each fuel type. All expenditures are separated into local and foreign accounts, and a weighting factor can be applied to foreign expenditures.

  2. Weakly relativistic plasma expansion

    SciTech Connect

Fermous, Rachid; Djebli, Mourad

    2015-04-15

    Plasma expansion is an important physical process that takes place in laser interactions with solid targets. Within a self-similar model for the hydrodynamical multi-fluid equations, we investigated the expansion of both dense and under-dense plasmas. The weakly relativistic electrons are produced by ultra-intense laser pulses, while ions are supposed to be in a non-relativistic regime. Numerical investigations have shown that relativistic effects are important for under-dense plasma and are characterized by a finite ion front velocity. Dense plasma expansion is found to be governed mainly by quantum contributions in the fluid equations that originate from the degenerate pressure in addition to the nonlinear contributions from exchange and correlation potentials. The quantum degeneracy parameter profile provides clues to set the limit between under-dense and dense relativistic plasma expansions at a given density and temperature.

  3. Pen Branch delta expansion

    SciTech Connect

    Nelson, E.A.; Christensen, E.J.; Mackey, H.E.; Sharitz, R.R.; Jensen, J.R.; Hodgson, M.E.

    1984-02-01

Since 1954, cooling water discharges from K Reactor (mean flow = 370 cfs at 59 °C) to Pen Branch have altered vegetation and deposited sediment in the Savannah River Swamp, forming the Pen Branch delta. Currently, the delta covers over 300 acres and continues to expand at a rate of about 16 acres/yr. Examination of delta expansion can provide important information on environmental impacts to wetlands exposed to elevated temperature and flow conditions. To assess the current status and predict future expansion of the Pen Branch delta, historic aerial photographs were analyzed using both basic photo interpretation and computer techniques to provide the following information: (1) past and current expansion rates; (2) location and changes of impacted areas; (3) total acreage presently affected. Delta acreage changes were then compared to historic reactor discharge temperature and flow data to see if expansion rate variations could be related to reactor operations.

  4. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

The mixed convolution kernel alters its properties spatially according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  5. Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.

    PubMed

    Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan

    2016-11-01

In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat and maize data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% superiority over the corresponding single-environment models for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, more complex marker main effects and marker-specific interaction effects.
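The kernel-averaging idea behind RKHS KA can be sketched as an average of Gaussian kernel matrices over a grid of bandwidths, built from squared Euclidean distances between marker profiles. The bandwidth grid and the scaling by the mean squared distance below are illustrative choices, not necessarily those of the study:

```python
from math import exp

def rkhs_ka_kernel(markers, bandwidths):
    """Gaussian kernel with kernel averaging (in the spirit of RKHS KA):
    the average over a bandwidth grid h of
        K_h[i][j] = exp(-h * d2_ij / mean_d2),
    where d2_ij is the squared Euclidean distance between marker
    profiles i and j. Averaging sidesteps picking a single bandwidth.
    """
    n = len(markers)
    d2 = [[sum((a - b) ** 2 for a, b in zip(markers[i], markers[j]))
           for j in range(n)] for i in range(n)]
    mean_d2 = sum(d2[i][j] for i in range(n) for j in range(n) if i != j) / (n * n - n)
    K = [[0.0] * n for _ in range(n)]
    for h in bandwidths:
        for i in range(n):
            for j in range(n):
                K[i][j] += exp(-h * d2[i][j] / mean_d2) / len(bandwidths)
    return K
```

The resulting K plugs into the usual mixed-model machinery in place of the linear GBLUP genomic relationship matrix.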

  6. A visualization tool for the kernel-driven model with improved ability in data analysis and kernel assessment

    NASA Astrophysics Data System (ADS)

    Dong, Yadong; Jiao, Ziti; Zhang, Hu; Bai, Dongni; Zhang, Xiaoning; Li, Yang; He, Dandan

    2016-10-01

    The semi-empirical, kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model has been widely used for many aspects of remote sensing. With the development of the kernel-driven model, there is a need to further assess the performance of newly developed kernels. The use of visualization tools can facilitate the analysis of model results and the assessment of newly developed kernels. However, the current version of the kernel-driven model does not contain a visualization function. In this study, a user-friendly visualization tool, named MaKeMAT, was developed specifically for the kernel-driven model. The POLDER-3 and CAR BRDF datasets were used to demonstrate the applicability of MaKeMAT. The visualization of inputted multi-angle measurements enhances understanding of multi-angle measurements and allows the choice of measurements with good representativeness. The visualization of modeling results facilitates the assessment of newly developed kernels. The study shows that the visualization tool MaKeMAT can promote the widespread application of the kernel-driven model.

  7. Oil extraction from sheanut (Vitellaria paradoxa Gaertn C.F.) kernels assisted by microwaves.

    PubMed

    Nde, Divine B; Boldor, Dorin; Astete, Carlos; Muley, Pranjali; Xu, Zhimin

    2016-03-01

Shea butter is in high demand for cosmetics, pharmaceuticals, chocolates, and biodiesel formulations. Microwave-assisted extraction (MAE) of butter from sheanut kernels was carried out using Doehlert's experimental design. The factors studied were microwave heating time, temperature, and solvent/solute ratio, while the responses were the quantity of oil extracted and the acid number. Second-order models were established to describe the influence of the experimental parameters on the responses studied. Under optimum MAE conditions of heating time 23 min, temperature 75 °C, and solvent/solute ratio 4:1, more than 88% of the oil, with a free fatty acid (FFA) value of less than 2, was extracted, compared to the 10 h and solvent/solute ratio of 10:1 required for Soxhlet extraction. Scanning electron microscopy was used to elucidate the effect of microwave heating on the kernels' microstructure. Substantial reductions in extraction time and solvent volume, together with oil of suitable quality, are the main benefits of the MAE process.

  8. High-Temperature Expansions for Frenkel-Kontorova Model

    NASA Astrophysics Data System (ADS)

    Takahashi, K.; Mannari, I.; Ishii, T.

    1995-02-01

Two high-temperature series expansions of the Frenkel-Kontorova (FK) model are investigated: the high-temperature approximation of Schneider-Stoll is extended to the FK model with density ρ ≠ 1, and an alternative series expansion in terms of modified Bessel functions is examined. The first six orders of both expansions of the free energy are obtained explicitly and compared with Ishii's approximation of the transfer-integral method. The specific heat based on the expansions is discussed by comparison with results from the transfer-integral method and Monte Carlo simulation.
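For reference, the FK model describes a harmonic chain in a periodic substrate potential; in its standard form (notation assumed here, not taken from the paper) the Hamiltonian is

```latex
H \;=\; \sum_n \left[ \frac{p_n^2}{2m}
\;+\; \frac{K}{2}\,(x_{n+1}-x_n-a)^2
\;+\; V_0\!\left(1-\cos\frac{2\pi x_n}{b}\right) \right]
```

where $a$ is the natural spring length and $b$ the substrate period; the density ρ (mean number of particles per substrate well) measures commensurability, so ρ ≠ 1 is the incommensurate-type regime addressed above.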

  9. On the Kernelization Complexity of Colorful Motifs

    NASA Astrophysics Data System (ADS)

    Ambalath, Abhimanyu M.; Balasundaram, Radheshyam; Rao H., Chintan; Koppula, Venkata; Misra, Neeldhara; Philip, Geevarghese; Ramanujan, M. S.

    The Colorful Motif problem asks if, given a vertex-colored graph G, there exists a subset S of vertices of G such that the graph induced by G on S is connected and contains every color in the graph exactly once. The problem is motivated by applications in computational biology and is also well-studied from the theoretical point of view. In particular, it is known to be NP-complete even on trees of maximum degree three [Fellows et al, ICALP 2007]. In their pioneering paper that introduced the color-coding technique, Alon et al. [STOC 1995] show, inter alia, that the problem is FPT on general graphs. More recently, Cygan et al. [WG 2010] showed that Colorful Motif is NP-complete on comb graphs, a special subclass of the set of trees of maximum degree three. They also showed that the problem is not likely to admit polynomial kernels on forests.
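The problem statement is simple enough to check by brute force on tiny instances; the sketch below (a hypothetical helper, not from the papers cited) makes the definition concrete. It is exponential in the number of vertices, which is consistent with the hardness results above.

```python
from itertools import combinations

def has_colorful_motif(n, edges, color):
    """Brute-force check of the Colorful Motif property: does some subset S
    of vertices induce a connected subgraph that contains every color in the
    graph exactly once?  Only viable for very small n."""
    palette = set(color)
    k = len(palette)
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for subset in combinations(range(n), k):
        # |subset| == |palette|, so distinct colors covering the palette
        # means each color appears exactly once.
        if set(color[v] for v in subset) != palette:
            continue
        # DFS restricted to the subset to test induced connectivity.
        S = set(subset)
        start = subset[0]
        seen = {start}
        stack = [start]
        while stack:
            u = stack.pop()
            for w in (adj[u] & S) - seen:
                seen.add(w)
                stack.append(w)
        if seen == S:
            return True
    return False
```

For example, a 3-vertex path colored with three distinct colors has a colorful motif, while two isolated vertices with distinct colors do not (the subset is colorful but not connected).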

  10. Kernel density estimation using graphical processing unit

    NASA Astrophysics Data System (ADS)

    Sunarko, Su'ud, Zaki

    2015-09-01

Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution, by assigning the calculation for each equally-spaced node point to a scalar processor in the GPU. The numbers of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors of a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
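The per-node computation being parallelized is easy to state; the NumPy sketch below evaluates a Gaussian KDE at grid nodes (a CPU stand-in for the per-node work the paper assigns to each GPU scalar processor; the bandwidth and grid are illustrative assumptions):

```python
import numpy as np

def kde_grid(particles, nodes, h):
    """Gaussian kernel density estimate evaluated at grid node points.
    particles: (n, 2) sample positions; nodes: (m, 2) node positions;
    h: kernel bandwidth.  Each node is independent, which is the work
    unit mapped onto one GPU scalar processor in the paper's setup."""
    # Pairwise squared distances between nodes and particles: (m, n)
    d2 = ((nodes[:, None, :] - particles[None, :, :]) ** 2).sum(axis=2)
    # 2-D Gaussian kernel, normalized so each kernel integrates to 1
    k = np.exp(-d2 / (2.0 * h * h)) / (2.0 * np.pi * h * h)
    return k.mean(axis=1)

# Particles with a bivariate normal distribution, as in the paper
rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 2))
xs = np.linspace(-3.0, 3.0, 31)
nodes = np.array([(x, y) for x in xs for y in xs])
dens = kde_grid(pts, nodes, h=0.3)
```

A CUDA port would give each thread one row of the distance computation, which is why block/thread configuration (128 threads per block above) matters for throughput.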

  11. Privacy preserving RBF kernel support vector machine.

    PubMed

    Li, Haoran; Xiong, Li; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2014-01-01

Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed, but they neither consider the characteristics of biomedical data nor make full use of the available information, which often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid, differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs built from private data alone. Our method achieved performance very close to that of nonprivate SVMs trained on the private data.

  12. Learning molecular energies using localized graph kernels

    NASA Astrophysics Data System (ADS)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

Recent machine learning methods make it possible to model the potential energy of atomic configurations with chemical accuracy (as benchmarked against ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
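A minimal sketch of the kernel family GRAPE builds on (the generic geometric random walk kernel, not the paper's exact weighting): walks on the Kronecker product graph correspond to simultaneous walks on both input graphs, and summing a geometric weight over walk lengths gives a closed form.

```python
import numpy as np

def random_walk_kernel(A, B, lam=0.1):
    """Geometric random walk kernel between two graphs given as adjacency
    matrices A and B.  k(A, B) = 1^T (I - lam * kron(A, B))^{-1} 1 sums
    lam**k over all pairs of simultaneous walks of length k; it converges
    when lam < 1 / spectral_radius(kron(A, B))."""
    Ax = np.kron(A, B)                       # product-graph adjacency
    n = Ax.shape[0]
    M = np.linalg.inv(np.eye(n) - lam * Ax)  # Neumann series sum
    one = np.ones(n)
    return float(one @ M @ one)

# Example: triangle vs. 3-vertex path
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
B = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
k_ab = random_walk_kernel(A, B)
```

In the GRAPE setting each adjacency matrix would encode one local atomic environment; permutation invariance comes for free because relabeling vertices permutes the product graph without changing the sum.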

  13. Labeled Graph Kernel for Behavior Analysis.

    PubMed

    Zhao, Ruiqi; Martinez, Aleix M

    2016-08-01

    Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data.

  14. Labeled Graph Kernel for Behavior Analysis

    PubMed Central

    Zhao, Ruiqi; Martinez, Aleix M.

    2016-01-01

    Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data. PMID:26415154

  15. Kolkhoung (Pistacia khinjuk) Hull Oil and Kernel Oil as Antioxidative Vegetable Oils with High Oxidative Stability 
and Nutritional Value.

    PubMed

    Asnaashari, Maryam; Hashemi, Seyed Mohammad Bagher; Mehr, Hamed Mahdavian; Yousefabad, Seyed Hossein Asadi

    2015-03-01

In this study, in order to introduce a natural antioxidative vegetable oil to the food industry, kolkhoung hull oil and kernel oil were extracted. To evaluate their antioxidant efficiency, gas chromatography analysis of the fatty acid composition of kolkhoung hull and kernel oil and high-performance liquid chromatography analysis of tocopherols were performed. The oxidative stability of the oils was also assessed, based on the peroxide value and anisidine value, during heating at 100, 110 and 120 °C. Gas chromatography showed that oleic acid was the major fatty acid of both oils (hull and kernel), and based on a low content of saturated fatty acids, a high content of monounsaturated fatty acids, and the ratio of ω-6 to ω-3 polyunsaturated fatty acids, they were nutritionally well balanced. Moreover, both hull and kernel oil showed high oxidative stability during heating, which can be attributed to a high content of tocotrienols. Based on the results, kolkhoung hull oil performed slightly better than its kernel oil; however, both can be added to oxidation-sensitive oils to improve their shelf life.

  16. Kolkhoung (Pistacia khinjuk) Hull Oil and Kernel Oil as Antioxidative Vegetable Oils with High Oxidative Stability 
and Nutritional Value

    PubMed Central

    Asnaashari, Maryam; Mehr, Hamed Mahdavian; Yousefabad, Seyed Hossein Asadi

    2015-01-01

In this study, in order to introduce a natural antioxidative vegetable oil to the food industry, kolkhoung hull oil and kernel oil were extracted. To evaluate their antioxidant efficiency, gas chromatography analysis of the fatty acid composition of kolkhoung hull and kernel oil and high-performance liquid chromatography analysis of tocopherols were performed. The oxidative stability of the oils was also assessed, based on the peroxide value and anisidine value, during heating at 100, 110 and 120 °C. Gas chromatography showed that oleic acid was the major fatty acid of both oils (hull and kernel), and based on a low content of saturated fatty acids, a high content of monounsaturated fatty acids, and the ratio of ω-6 to ω-3 polyunsaturated fatty acids, they were nutritionally well balanced. Moreover, both hull and kernel oil showed high oxidative stability during heating, which can be attributed to a high content of tocotrienols. Based on the results, kolkhoung hull oil performed slightly better than its kernel oil; however, both can be added to oxidation-sensitive oils to improve their shelf life. PMID:27904335

  17. Biologic fluorescence decay characteristics: determination by Laguerre expansion technique

    NASA Astrophysics Data System (ADS)

    Snyder, Wendy J.; Maarek, Jean-Michel I.; Papaioannou, Thanassis; Marmarelis, Vasilis Z.; Grundfest, Warren S.

    1996-04-01

Fluorescence decay characteristics are used to identify biologic fluorophores and to characterize interactions with the fluorophore environment. In many studies, fluorescence lifetimes are assessed by iterative reconvolution techniques. We investigated a new approach: the Laguerre expansion of kernels technique (Marmarelis, V.Z., Ann. Biomed. Eng. 1993; 21, 573-589), which yields the fluorescence impulse response function by least-squares fitting of a discrete-time Laguerre function expansion. Nitrogen (4 ns FWHM) and excimer (120 ns FWHM) laser pulses were used to excite the fluorescence of anthracene and of type II collagen powder. After filtering (monochromator) and detection (MCP-PMT), the fluorescence response was digitized (digital storage oscilloscope) and transferred to a personal computer. Input and output data were deconvolved by the Laguerre expansion technique to compute the impulse response function, which was then fitted to a multiexponential function to determine the decay constants. A single exponential (time constant: 4.24 ns) best approximated the fluorescence decay of anthracene, whereas the type II collagen response was best approximated by a double exponential (time constants: 2.24 and 9.92 ns), in agreement with previously reported data. The results of the Laguerre expansion technique were compared to the least-squares iterative reconvolution technique. The Laguerre expansion technique proved computationally efficient and robust to experimental noise in the data. Furthermore, the proposed method does not impose a preset multiexponential form on the decay.
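The core of the technique is a linear least-squares fit in a discrete-time Laguerre basis. The sketch below is a simplified, noiseless version under assumed parameters (decay parameter alpha, basis size), not the authors' instrument pipeline:

```python
import numpy as np

def laguerre_basis(alpha, n_funcs, length):
    """Discrete-time Laguerre functions (the basis of Marmarelis's kernel
    expansion), via the recursion
      l_j(n) = sqrt(a)*l_j(n-1) + l_{j-1}(n-1) - sqrt(a)*l_{j-1}(n),
    with l_0(n) = sqrt(1-a) * sqrt(a)**n.  alpha in (0, 1) sets the decay."""
    sa = np.sqrt(alpha)
    L = np.zeros((n_funcs, length))
    L[0] = np.sqrt(1.0 - alpha) * sa ** np.arange(length)
    for j in range(1, n_funcs):
        L[j, 0] = -sa * L[j - 1, 0]
        for n in range(1, length):
            L[j, n] = sa * L[j, n - 1] + L[j - 1, n - 1] - sa * L[j - 1, n]
    return L

def deconvolve(x, y, alpha=0.8, n_funcs=6):
    """Estimate the impulse response h from excitation x and measured
    response y: convolve x with each basis function, least-squares fit
    the expansion coefficients, then reconstruct h from the basis."""
    length = len(y)
    L = laguerre_basis(alpha, n_funcs, length)
    V = np.stack([np.convolve(x, L[j])[:length] for j in range(n_funcs)],
                 axis=1)
    c, *_ = np.linalg.lstsq(V, y, rcond=None)
    return c @ L

# Demo: recover a known impulse response from input/output data
t = np.arange(200)
x = np.exp(-t / 5.0)                    # excitation pulse
L6 = laguerre_basis(0.8, 6, 200)
h_true = 2.0 * L6[0] - 1.0 * L6[1]      # "true" impulse response
y = np.convolve(x, h_true)[:200]        # noiseless measured output
h_est = deconvolve(x, y, alpha=0.8, n_funcs=6)
```

The recovered impulse response can then be fitted to a multiexponential form, as in the abstract; note that the expansion itself imposes no preset number of exponentials.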

  18. Spectrum-based kernel length estimation for Gaussian process classification.

    PubMed

    Wang, Liang; Li, Chuan

    2014-06-01

Recent studies have shown that Gaussian process (GP) classification, a discriminative supervised learning approach, has achieved competitive performance in real applications compared with most state-of-the-art supervised learning methods. However, the problem of automatic model selection in GP classification, involving the kernel function form and the corresponding parameter values (which are unknown in advance), remains a challenge. To make GP classification a more practical tool, this paper presents a novel spectrum analysis-based approach to model selection that refines the GP kernel function to match the given input data. Specifically, we target the problem of GP kernel length scale estimation. Spectra are first calculated analytically from the kernel function itself using the autocorrelation theorem, and also estimated numerically from the training data. Then, the kernel length scale is automatically estimated by equating the two spectrum values, i.e., setting the kernel function spectrum equal to the estimated training data spectrum. Compared with the classical Bayesian method for kernel length scale estimation via maximizing the marginal likelihood (which is time consuming and can suffer from multiple local optima), extensive experimental results on various data sets show that our proposed method is both efficient and accurate.
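The autocorrelation-theorem step can be made concrete for the common RBF kernel (standard formulas, not taken verbatim from the paper): the kernel's spectrum is its Fourier transform,

```latex
S(\omega) \;=\; \int_{-\infty}^{\infty} k(\tau)\,e^{-i\omega\tau}\,d\tau,
\qquad
k(\tau) = \sigma^2 e^{-\tau^2/(2\ell^2)}
\;\Rightarrow\;
S(\omega) = \sigma^2 \ell \sqrt{2\pi}\; e^{-\ell^2\omega^2/2},
```

so equating $S(\omega)$ to a periodogram estimate of the training-data spectrum at a chosen frequency yields the length scale $\ell$ in closed form, avoiding iterative marginal-likelihood optimization.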

  19. Training Lp norm multiple kernel learning in the primal.

    PubMed

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

Some multiple kernel learning (MKL) models are usually solved by the alternating optimization method, in which one alternately solves SVMs in the dual and updates the kernel weights. Since dual and primal optimization achieve the same aim, it is worth exploring how to perform Lp-norm MKL in the primal. In this paper, we propose an Lp-norm multiple kernel learning algorithm in the primal, where we resort to the alternating optimization method: one cycle solves SVMs in the primal using the preconditioned conjugate gradient method, and the other cycle learns the kernel weights. Notably, the kernel weights in our method admit analytical solutions. Most importantly, the proposed method is well suited to the manifold regularization framework in the primal, since solving LapSVMs in the primal is much more effective than solving them in the dual. In addition, we carry out a theoretical analysis of multiple kernel learning in the primal in terms of the empirical Rademacher complexity, finding that optimizing the empirical Rademacher complexity yields a particular form of kernel weights. Experiments on several datasets demonstrate the feasibility and effectiveness of the proposed method.
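The analytical kernel-weight update alluded to takes, in the form familiar from the Lp-norm MKL literature (a sketch under that assumption, not necessarily the paper's exact derivation), the closed form

```latex
\eta_m \;=\; \frac{\|w_m\|_2^{\,2/(p+1)}}
{\left(\sum_{k} \|w_k\|_2^{\,2p/(p+1)}\right)^{1/p}},
```

where $w_m$ is the classifier component associated with the $m$-th kernel and the weights $\eta$ satisfy $\|\eta\|_p \le 1$. Each alternating cycle then solves the SVM for fixed $\eta$ and updates $\eta$ in closed form.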

  20. Gaussian kernel width optimization for sparse Bayesian learning.

    PubMed

    Mohsenzadeh, Yalda; Sheikhzadeh, Hamid

    2015-04-01

    Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on the commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters.

  1. Relaxation and diffusion models with non-singular kernels

    NASA Astrophysics Data System (ADS)

    Sun, HongGuang; Hao, Xiaoxiao; Zhang, Yong; Baleanu, Dumitru

    2017-02-01

Anomalous relaxation and diffusion processes have been widely quantified by fractional derivative models, where the definition of the fractional-order derivative remains a subject of debate owing to its limitations in describing different kinds of non-exponential decay (e.g. stretched exponential decay). Meanwhile, mathematicians and engineers have made many efforts to overcome the singularity of the power-function kernel in its definition. This study first explores the physical properties of relaxation and diffusion models in which the temporal derivative is defined using a recently proposed exponential kernel. Analysis shows that the Caputo-type derivative model with an exponential kernel cannot characterize the non-exponential dynamics well documented in anomalous relaxation and diffusion. A legitimate extension of this derivative is then proposed by replacing the exponential kernel with a stretched exponential kernel. Numerical tests show that the Caputo-type derivative model with the stretched exponential kernel can describe a much wider range of anomalous diffusion than the exponential kernel, implying the potential applicability of the new derivative in quantifying real-world anomalous relaxation and diffusion processes.
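The exponential-kernel derivative discussed here is, schematically, the Caputo-Fabrizio form (standard notation, with $M(\alpha)$ a normalization satisfying $M(0)=M(1)=1$):

```latex
{}^{CF}\!D_t^{\alpha} f(t) \;=\; \frac{M(\alpha)}{1-\alpha}
\int_0^t f'(s)\,\exp\!\left(-\frac{\alpha\,(t-s)}{1-\alpha}\right) ds,
```

and the extension proposed above amounts to replacing the exponential kernel by a stretched exponential, $\exp\!\big(-[\alpha(t-s)/(1-\alpha)]^{\beta}\big)$ with $0<\beta\le 1$, which remains non-singular at $s=t$ while accommodating stretched-exponential decay.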

  2. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    PubMed

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

Our objective is to train support vector machine (SVM)-based localized multiple kernel learning (LMKL), using alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization, inherited from state-of-the-art MKL, is the SVM-tied overall complexity and the simultaneous optimization of both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult, nonconvex quadratic problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be obtained independently by solving their exclusive sample-wise objectives, either by linear programming (for the l1-norm) or in closed form (for the lp-norm). At test time, the learned kernel weights for the training data are deployed based on the nearest-neighbor rule. To ensure that these weights generalize to the test data, we introduce neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  3. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
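The LSCV smoothing selection recommended above can be sketched with the standard closed-form score for Gaussian kernels (a generic illustration on simulated relocations, not the authors' simulation code):

```python
import numpy as np

def lscv_score(points, h):
    """Least-squares cross-validation score for a 2-D fixed Gaussian
    kernel density estimate; the LSCV bandwidth minimizes this score.
    Uses the closed form available for Gaussian kernels."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    off = ~np.eye(n, dtype=bool)
    # integral of fhat^2 (all pairs, including i == j)
    term1 = np.exp(-d2 / (4.0 * h * h)).sum() / (4.0 * np.pi * h * h * n * n)
    # leave-one-out cross term (pairs with i != j only)
    term2 = np.exp(-d2[off] / (2.0 * h * h)).sum() / (np.pi * h * h * n * (n - 1))
    return term1 - term2

rng = np.random.default_rng(1)
fixes = rng.normal(size=(50, 2))      # 50 simulated animal relocations
hs = np.linspace(0.1, 1.5, 30)        # candidate smoothing parameters
h_lscv = hs[int(np.argmin([lscv_score(fixes, h) for h in hs]))]
```

The 95% home range contour would then be taken from the kernel density surface built with the selected bandwidth.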

  4. Bounding the heat trace of a Calabi-Yau manifold

    NASA Astrophysics Data System (ADS)

    Fiset, Marc-Antoine; Walcher, Johannes

    2015-09-01

    The SCHOK bound states that the number of marginal deformations of certain two-dimensional conformal field theories is bounded linearly from above by the number of relevant operators. In conformal field theories defined via sigma models into Calabi-Yau manifolds, relevant operators can be estimated, in the point-particle approximation, by the low-lying spectrum of the scalar Laplacian on the manifold. In the strict large volume limit, the standard asymptotic expansion of Weyl and Minakshisundaram-Pleijel diverges with the higher-order curvature invariants. We propose that it would be sufficient to find an a priori uniform bound on the trace of the heat kernel for large but finite volume. As a first step in this direction, we then study the heat trace asymptotics, as well as the actual spectrum of the scalar Laplacian, in the vicinity of a conifold singularity. The eigenfunctions can be written in terms of confluent Heun functions, the analysis of which gives evidence that regions of large curvature will not prevent the existence of a bound of this type. This is also in line with general mathematical expectations about spectral continuity for manifolds with conical singularities. A sharper version of our results could, in combination with the SCHOK bound, provide a basis for a global restriction on the dimension of the moduli space of Calabi-Yau manifolds.
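The divergent asymptotic expansion in question is the standard Minakshisundaram-Pleijel small-$t$ expansion of the heat trace on a closed $n$-manifold:

```latex
\operatorname{Tr} e^{-t\Delta} \;\sim\; (4\pi t)^{-n/2} \sum_{k\ge 0} a_k\, t^{k},
\qquad t \to 0^{+},
```

with $a_0 = \operatorname{Vol}(M)$ and $a_1 = \tfrac{1}{6}\int_M R\, dV$. For a Ricci-flat Calabi-Yau the scalar-curvature coefficient $a_1$ vanishes, but the higher $a_k$ involve precisely the curvature invariants that grow near the conifold point, which is why a uniform bound at large but finite volume cannot be read off from the expansion itself.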

  5. Discriminant power analyses of non-linear dimension expansion methods

    NASA Astrophysics Data System (ADS)

    Woo, Seongyoun; Lee, Chulhee

    2016-05-01

Most non-linear classification methods can be viewed as non-linear dimension expansion methods followed by a linear classifier. For example, the support vector machine (SVM) expands the dimensions of the original data using various kernels and classifies the data in the expanded space using a linear SVM. In the case of extreme learning machines or neural networks, the dimensions are expanded by hidden neurons and the final layer performs the linear classification. In this paper, we analyze the discriminant powers of various non-linear classifiers. Analyses of the discriminating powers of non-linear dimension expansion methods are presented, along with a suggestion of how to improve separability in non-linear classifiers.

  6. The great human expansion.

    PubMed

    Henn, Brenna M; Cavalli-Sforza, L L; Feldman, Marcus W

    2012-10-30

Genetic and paleoanthropological evidence is in accord that today's human population is the result of a great demic (demographic and geographic) expansion that began approximately 45,000 to 60,000 y ago in Africa and rapidly resulted in human occupation of almost all of the Earth's habitable regions. Genomic data from contemporary humans suggest that this expansion was accompanied by a continuous loss of genetic diversity, a result of what is called the "serial founder effect." In addition to genomic data, the serial founder effect model is now supported by the genetics of human parasites, morphology, and linguistics. This particular population history gave rise to the two defining features of genetic variation in humans: genomes from the substructured populations of Africa retain an exceptional number of unique variants, and there is a dramatic reduction in genetic diversity within populations living outside of Africa. These two patterns are relevant for medical genetic studies mapping genotypes to phenotypes and for inferring the power of natural selection in human history. It should be appreciated that the initial expansion and subsequent serial founder effect were determined by demographic and sociocultural factors associated with hunter-gatherer populations. How do we reconcile this major demic expansion with the population stability that followed for thousands of years until the inventions of agriculture? We review advances in understanding the genetic diversity within Africa and the great human expansion out of Africa and offer hypotheses that can help to establish a more synthetic view of modern human evolution.

  7. Virial Expansion Bounds

    NASA Astrophysics Data System (ADS)

    Tate, Stephen James

    2013-10-01

In the 1960s, the technique of using cluster expansion bounds to obtain bounds on the virial expansion was developed by Lebowitz and Penrose (J. Math. Phys. 5:841, 1964) and Ruelle (Statistical Mechanics: Rigorous Results. Benjamin, Elmsford, 1969). This technique is generalised to more recent cluster expansion bounds by Poghosyan and Ueltschi (J. Math. Phys. 50:053509, 2009), which are related to the work of Procacci (J. Stat. Phys. 129:171, 2007) and the tree-graph identity, detailed by Brydges (Phénomènes Critiques, Systèmes Aléatoires, Théories de Jauge. Les Houches 1984, pp. 129-183, 1986). The bounds achieved by Lebowitz and Penrose can also be sharpened by carrying out the actual optimisation, yielding expressions in terms of the Lambert W-function. The bound obtained from the newer cluster expansion improves on the convergence bounds for the virial expansion in the case of positive potentials, which are allowed to have a hard core.
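For reference, the two series being related are the Mayer (cluster) expansion of the pressure in the activity $z$ and the virial expansion in the density $\rho$ (standard statistical-mechanics notation, not taken from the paper):

```latex
\frac{p}{k_B T} \;=\; \sum_{n\ge 1} b_n\, z^{n},
\qquad
\frac{p}{k_B T} \;=\; \rho \,+\, \sum_{n\ge 2} B_n(T)\, \rho^{n},
```

so bounds on the cluster coefficients $b_n$ translate, via the inversion between $z$ and $\rho$, into bounds on the virial coefficients $B_n$ and hence on the radius of convergence of the virial series.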

  8. Micro-injection by thermal expansion.

    PubMed

    Zalokar, M

    1981-05-01

    A micropipette (diameter 5 to 20 micron) sealed near the orifice to provide a small closed reservoir is described. The reservoir is filled with oil and can be heated with a tiny electric resistance wire loop. Thermal expansion and contraction of the oil in the reservoir allows liquid to be expelled or aspirated. The flow of the liquid can be controlled accurately by varying electric current. Detailed instructions are given for fabricating the micropipette and the heating assembly. A plan for a handy micropipette puller is given. The technique has proved to be valuable in nuclear transplantation and injection of fluid volumes between 1 and 100 picoliters into Drosophila eggs.
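The operating principle lends itself to a quick order-of-magnitude check: the expelled volume is ΔV = β·V·ΔT. The reservoir volume and oil expansivity below are illustrative assumptions, not values from the paper.

```python
def expelled_picoliters(v_reservoir_m3, beta_per_K, dT_K):
    """Liquid volume expelled by thermal expansion of the oil reservoir:
    dV = beta * V * dT, converted to picoliters (1 pL = 1e-15 m^3)."""
    dV_m3 = beta_per_K * v_reservoir_m3 * dT_K
    return dV_m3 / 1e-15

BETA_OIL = 7e-4        # typical volumetric expansivity of oil, 1/K (assumed)
V_RESERVOIR = 1e-10    # 0.1 microliter reservoir, in m^3 (assumed)
vol_pL = expelled_picoliters(V_RESERVOIR, BETA_OIL, 1.0)
```

On these assumed numbers a one-kelvin change moves roughly 70 pL, comfortably within the 1-100 pL injection range quoted above, which is why varying the heating current gives fine control of the flow.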

  9. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    PubMed

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

Analysis of rare genetic variants has focused on region-based analysis, wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both choices depend on the unknown true state of nature. Therefore, we develop the multi-kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel, and choosing which group of variants to test likewise reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls the type I error rate while maintaining high power across settings: MK-SKAT loses some power compared with the single best kernel for a particular scenario, but has much greater power than poor kernel choices.

  10. Probing the physical determinants of thermal expansion of folded proteins.

    PubMed

    Dellarole, Mariano; Kobayashi, Kei; Rouget, Jean-Baptiste; Caro, José Alfredo; Roche, Julien; Islam, Mohammad M; Garcia-Moreno E, Bertrand; Kuroda, Yutaka; Royer, Catherine A

    2013-10-24

    The magnitude and sign of the volume change upon protein unfolding are strongly dependent on temperature. This temperature dependence reflects differences in the thermal expansivity of the folded and unfolded states. The factors that determine protein molar expansivities and the large differences in thermal expansivity for proteins of similar molar volume are not well understood. Model compound studies have suggested that a major contribution is made by differences in the molar volume of water molecules as they transfer from the protein surface to the bulk upon heating. The expansion of internal solvent-excluded voids upon heating is another possible contributing factor. Here, the contribution from hydration density to the molar thermal expansivity of a protein was examined by comparing bovine pancreatic trypsin inhibitor and variants with alanine substitutions at or near the protein-water interface. Variants of two of these proteins with an additional mutation that unfolded them under native conditions were also examined. A modest decrease in thermal expansivity was observed in both the folded and unfolded states for the alanine variants compared with the parent protein, revealing that large changes can be made to the external polarity of a protein without causing large ensuing changes in thermal expansivity. This modest effect is not surprising, given the small molar volume of the alanine residue. Contributions of the expansion of the internal void volume were probed by measuring the thermal expansion for cavity-containing variants of a highly stable form of staphylococcal nuclease. Significantly larger (2-3-fold) molar expansivities were found for these cavity-containing proteins relative to the reference protein. Taken together, these results suggest that a key determinant of the thermal expansivities of folded proteins lies in the expansion of internal solvent-excluded voids.

  11. A Generalized Grid-Based Fast Multipole Method for Integrating Helmholtz Kernels.

    PubMed

    Parkkinen, Pauli; Losilla, Sergio A; Solala, Eelis; Toivanen, Elias A; Xu, Wen-Hua; Sundholm, Dage

    2017-02-14

A grid-based fast multipole method (GB-FMM) for optimizing three-dimensional (3D) numerical molecular orbitals in the bubbles and cube double basis has been developed and implemented. The present GB-FMM method is a generalization of our recently published GB-FMM approach for numerically calculating electrostatic potentials and two-electron interaction energies. The orbital optimization is performed by integrating the Helmholtz kernel in the double basis. The steep part of the functions in the vicinity of the nuclei is represented by one-center bubbles functions, whereas the remaining cube part is expanded on an equidistant 3D grid. The integration of the bubbles part is treated by using one-center expansions of the Helmholtz kernel in spherical harmonics multiplied with modified spherical Bessel functions of the first and second kind, analogously to the numerical inward and outward integration approach for calculating two-electron interaction potentials in atomic structure calculations. The expressions and algorithms for massively parallel calculations on general purpose graphics processing units (GPGPU) are described. The accuracy and the correctness of the implementation have been checked by performing Hartree-Fock self-consistent-field calculations (HF-SCF) on H2, H2O, and CO. Our calculations show that an accuracy of 10^-4 to 10^-7 E_h can be reached in HF-SCF calculations on general molecules.

  12. Bilinear analysis for kernel selection and nonlinear feature extraction.

    PubMed

    Yang, Shu; Yan, Shuicheng; Zhang, Chao; Tang, Xiaoou

    2007-09-01

This paper presents a unified criterion, Fisher + kernel criterion (FKC), for feature extraction and recognition. This new criterion is intended to extract the most discriminant features in different nonlinear spaces, and then fuse these features under a unified measurement. Thus, FKC can simultaneously achieve nonlinear discriminant analysis and kernel selection. In addition, we present an efficient algorithm, Fisher + kernel analysis (FKA), which utilizes bilinear analysis to optimize the new criterion. This FKA algorithm can alleviate the ill-posed problem that exists in traditional kernel discriminant analysis (KDA) and usually has no singularity problem. The effectiveness of our proposed algorithm is validated by a series of face-recognition experiments on several different databases.

  13. Inheritance of Kernel Color in Corn: Explanations and Investigations.

    ERIC Educational Resources Information Center

    Ford, Rosemary H.

    2000-01-01

    Offers a new perspective on traditional problems in genetics on kernel color in corn, including information about genetic regulation, metabolic pathways, and evolution of genes. (Contains 15 references.) (ASK)

  14. Intelligent classification methods of grain kernels using computer vision analysis

    NASA Astrophysics Data System (ADS)

    Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo

    2011-06-01

    In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.
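As a rough illustration of the pipeline above (discriminant features followed by a classifier, with a 70/20/10 split), here is a numpy sketch on synthetic two-class data. The Fisher projection and nearest-centroid rule stand in for the paper's 17 features and four-layer back-propagation network; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 17                             # 7 color + 10 morphological features
X0 = rng.standard_normal((n, d))           # synthetic "variety A"
X1 = rng.standard_normal((n, d)) + 1.0     # synthetic "variety B", mean-shifted
X = np.vstack([X0, X1])
y = np.repeat([0, 1], n)

# Shuffle, then split 70/20/10 into train/validation/test
# (the validation slice is reserved but unused in this sketch).
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
n_tr, n_va = int(0.7 * len(y)), int(0.2 * len(y))
Xtr, ytr = X[:n_tr], y[:n_tr]
Xte, yte = X[n_tr + n_va:], y[n_tr + n_va:]

# Two-class Fisher discriminant: w = Sw^-1 (mu1 - mu0).
mu0, mu1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
Sw = np.cov(Xtr[ytr == 0].T) + np.cov(Xtr[ytr == 1].T)
w = np.linalg.solve(Sw, mu1 - mu0)

# Nearest-centroid classification on the 1-D projection,
# standing in for the back-propagation network.
thr = 0.5 * ((Xtr[ytr == 0] @ w).mean() + (Xtr[ytr == 1] @ w).mean())
acc = np.mean(((Xte @ w) > thr).astype(int) == yte)
```

On this well-separated synthetic data the projection classifies nearly all test kernels correctly; real grain images would of course be harder.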

  15. Nonlinear hyperspectral unmixing based on constrained multiple kernel NMF

    NASA Astrophysics Data System (ADS)

    Cui, Jiantao; Li, Xiaorun; Zhao, Liaoying

    2014-05-01

Nonlinear spectral unmixing constitutes an important field of research for hyperspectral imagery. An unsupervised nonlinear spectral unmixing algorithm, namely, multiple kernel constrained nonnegative matrix factorization (MKCNMF), is proposed by coupling multiple-kernel selection with kernel NMF. Additionally, a minimum endmemberwise distance constraint and an abundance smoothness constraint are introduced to alleviate the uniqueness problem of NMF in the algorithm. In the MKCNMF, the two problems of optimizing the matrices and selecting the proper kernel are solved jointly. The performance of the proposed unmixing algorithm is evaluated via experiments based on synthetic and real hyperspectral data sets. The experimental results demonstrate that the proposed method outperforms some existing unmixing algorithms in terms of spectral angle distance (SAD) and abundance fractions.
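Since the evaluation relies on the spectral angle distance (SAD), a minimal numpy implementation of that metric may be useful; the example spectra below are illustrative, not from the paper's data sets:

```python
import numpy as np

def spectral_angle_distance(a, b):
    # Angle in radians between two spectra; 0 means identical shape.
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

endmember = np.array([0.2, 0.5, 0.9, 0.4])
estimate = np.array([0.4, 1.0, 1.8, 0.8])   # same shape, different scale
sad = spectral_angle_distance(endmember, estimate)
```

Because SAD depends only on the angle between spectra, it is invariant to illumination-like scaling, which is why it is a standard figure of merit for unmixing.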

  16. Hash subgraph pairwise kernel for protein-protein interaction extraction.

    PubMed

    Zhang, Yijia; Lin, Hongfei; Yang, Zhihao; Wang, Jian; Li, Yanpeng

    2012-01-01

Extracting protein-protein interaction (PPI) from biomedical literature is an important task in biomedical text mining (BioTM). In this paper, we propose a hash subgraph pairwise (HSP) kernel-based approach for this task. The key to the novel kernel is to use hierarchical hash labels to express the structural information of subgraphs in linear time. We apply the graph kernel to compute dependency graphs representing the sentence structure for the protein-protein interaction extraction task, which can efficiently make use of full graph structural information and, in particular, capture the contiguous topological and label information previously ignored. We evaluate the proposed approach on five publicly available PPI corpora. The experimental results show that our approach significantly outperforms the all-path kernel approach on all five corpora and achieves state-of-the-art performance.

  17. Kernel-based Linux emulation for Plan 9.

    SciTech Connect

    Minnich, Ronald G.

    2010-09-01

CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss CNKemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.

  18. AUTO-EXPANSIVE FLOW

    EPA Science Inventory

    Physics suggests that the interplay of momentum, continuity, and geometry in outward radial flow must produce density and concomitant pressure reductions. In other words, this flow is intrinsically auto-expansive. It has been proposed that this process is the key to understanding...

  19. Static gas expansion cooler

    DOEpatents

    Guzek, J.C.; Lujan, R.A.

    1984-01-01

Disclosed is a cooler for television cameras and other temperature sensitive equipment. The cooler uses compressed gas which is accelerated to a high velocity by passing it through flow passageways having nozzle portions which expand the gas. This acceleration and expansion cause the gas to undergo a decrease in temperature, thereby cooling the cooler body and adjacent temperature sensitive equipment.
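The cooling effect described above can be estimated with the textbook ideal-gas relation for an isentropic expansion through a nozzle; the numbers below are illustrative choices, not values from the patent:

```python
# Ideal-gas isentropic expansion: T2/T1 = (p2/p1)**((gamma - 1)/gamma).
gamma = 1.4          # ratio of specific heats for a diatomic gas, e.g. nitrogen
T1 = 293.0           # inlet temperature, K (assumed)
p_ratio = 0.5        # outlet/inlet pressure ratio (assumed)

T2 = T1 * p_ratio ** ((gamma - 1.0) / gamma)
delta_T = T1 - T2    # temperature drop available for cooling, K
```

Even a modest 2:1 pressure drop yields a temperature drop of roughly 50 K for an ideal diatomic gas, which is the effect the cooler exploits.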

  20. Expansion of Pannes

    EPA Science Inventory

    For the Long Island, New Jersey, and southern New England region, one facet of marsh drowning as a result of accelerated sea level rise is the expansion of salt marsh ponds and pannes. Over the past century, marsh ponds and pannes have formed and expanded in areas of poor drainag...

  1. A Special Trinomial Expansion

    ERIC Educational Resources Information Center

    Ayoub, Ayoub B.

    2006-01-01

    In this article, the author takes up the special trinomial (1 + x + x[squared])[superscript n] and shows that the coefficients of its expansion are entries of a Pascal-like triangle. He also shows how to calculate these entries recursively and explicitly. This article could be used in the classroom for enrichment. (Contains 1 table.)

  2. Urban Expansion Study

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Under an Egyptian government contract, PADCO studies urban growth in the Nile Area. They were assisted by LANDSAT survey maps and measurements provided by TAC. TAC had classified the raw LANDSAT data and processed it into various categories to detail urban expansion. PADCO crews spot checked the results, and correlations were established.

  3. Landslide: Systematic Dynamic Race Detection in Kernel Space

    DTIC Science & Technology

    2012-05-01

the general challenges of kernel-level concurrency, and we evaluate its effectiveness and usability as a debugging aid. We show that our techniques make systematic testing in kernel-space feasible and that Landslide is a useful...

  4. Nonlinear stochastic system identification of skin using volterra kernels.

    PubMed

    Chen, Yi; Hunter, Ian W

    2013-04-01

    Volterra kernel stochastic system identification is a technique that can be used to capture and model nonlinear dynamics in biological systems, including the nonlinear properties of skin during indentation. A high bandwidth and high stroke Lorentz force linear actuator system was developed and used to test the mechanical properties of bulk skin and underlying tissue in vivo using a non-white input force and measuring an output position. These short tests (5 s) were conducted in an indentation configuration normal to the skin surface and in an extension configuration tangent to the skin surface. Volterra kernel solution methods were used including a fast least squares procedure and an orthogonalization solution method. The practical modifications, such as frequency domain filtering, necessary for working with low-pass filtered inputs are also described. A simple linear stochastic system identification technique had a variance accounted for (VAF) of less than 75%. Representations using the first and second Volterra kernels had a much higher VAF (90-97%) as well as a lower Akaike information criteria (AICc) indicating that the Volterra kernel models were more efficient. The experimental second Volterra kernel matches well with results from a dynamic-parameter nonlinearity model with fixed mass as a function of depth as well as stiffness and damping that increase with depth into the skin. A study with 16 subjects showed that the kernel peak values have mean coefficients of variation (CV) that ranged from 3 to 8% and showed that the kernel principal components were correlated with location on the body, subject mass, body mass index (BMI), and gender. These fast and robust methods for Volterra kernel stochastic system identification can be applied to the characterization of biological tissues, diagnosis of skin diseases, and determination of consumer product efficacy.
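A minimal numpy sketch of Volterra kernel identification by least squares, in the spirit of the "fast least squares procedure" mentioned above: a hypothetical first- plus second-order system is simulated, both kernels are recovered from a single regression, and the fit is scored with the variance accounted for (VAF). All system parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 500, 3                      # record length, kernel memory length
u = rng.standard_normal(N)         # input (non-white filtering omitted here)

# Hypothetical "true" system: a linear filter plus a quadratic Volterra term.
h1 = np.array([1.0, 0.5, 0.25])
y = np.convolve(u, h1)[:N] + 0.3 * u**2

# Regressor matrix: lagged inputs (first kernel) and their pairwise
# products (second kernel), solved in one least-squares step.
lags = [np.concatenate([np.zeros(k), u[:N - k]]) for k in range(M)]
cols = list(lags)
for i in range(M):
    for j in range(i, M):
        cols.append(lags[i] * lags[j])
Phi = np.column_stack(cols)
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Variance accounted for (VAF), the fit measure used in the abstract.
y_hat = Phi @ theta
vaf = 100.0 * (1.0 - np.var(y - y_hat) / np.var(y))
```

Because the simulated system lies exactly in the model class, the VAF here approaches 100%; with real skin-indentation data the abstract reports 90-97%.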

  5. The Weighted Super Bergman Kernels Over the Supermatrix Spaces

    NASA Astrophysics Data System (ADS)

    Feng, Zhiming

    2015-12-01

    The purpose of this paper is threefold. Firstly, using Howe duality for , we obtain integral formulas of the super Schur functions with respect to the super standard Gaussian distributions. Secondly, we give explicit expressions of the super Szegö kernels and the weighted super Bergman kernels for the Cartan superdomains of type I. Thirdly, combining these results, we obtain duality relations of integrals over the unitary groups and the Cartan superdomains, and the marginal distributions of the weighted measure.

  6. Kernel generalized neighbor discriminant embedding for SAR automatic target recognition

    NASA Astrophysics Data System (ADS)

    Huang, Yulin; Pei, Jifang; Yang, Jianyu; Wang, Tao; Yang, Haiguang; Wang, Bing

    2014-12-01

In this paper, we propose a new supervised feature extraction algorithm for synthetic aperture radar automatic target recognition (SAR ATR), called generalized neighbor discriminant embedding (GNDE). Based on manifold learning, GNDE integrates class and neighborhood information to enhance the discriminative power of the extracted features. Besides, the kernelized counterpart of this algorithm is also proposed, called kernel-GNDE (KGNDE). The experiments in this paper show that the proposed algorithms have better recognition performance than PCA and KPCA.

  7. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    DTIC Science & Technology

    2016-01-05

events (and subsequently, their likelihood of occurrence) based on historical evidence of the counts of previous event occurrences. The research objective of this proposal was to develop a predictive Bayesian...

  8. The great human expansion

    PubMed Central

    Henn, Brenna M.; Cavalli-Sforza, L. L.; Feldman, Marcus W.

    2012-01-01

Genetic and paleoanthropological evidence is in accord that today’s human population is the result of a great demic (demographic and geographic) expansion that began approximately 45,000 to 60,000 y ago in Africa and rapidly resulted in human occupation of almost all of the Earth’s habitable regions. Genomic data from contemporary humans suggest that this expansion was accompanied by a continuous loss of genetic diversity, a result of what is called the “serial founder effect.” In addition to genomic data, the serial founder effect model is now supported by the genetics of human parasites, morphology, and linguistics. This particular population history gave rise to the two defining features of genetic variation in humans: genomes from the substructured populations of Africa retain an exceptional number of unique variants, and there is a dramatic reduction in genetic diversity within populations living outside of Africa. These two patterns are relevant for medical genetic studies mapping genotypes to phenotypes and for inferring the power of natural selection in human history. It should be appreciated that the initial expansion and subsequent serial founder effect were determined by demographic and sociocultural factors associated with hunter-gatherer populations. How do we reconcile this major demic expansion with the population stability that followed for thousands of years until the invention of agriculture? We review advances in understanding the genetic diversity within Africa and the great human expansion out of Africa and offer hypotheses that can help to establish a more synthetic view of modern human evolution. PMID:23077256

  9. Protoribosome by quantum kernel energy method.

    PubMed

    Huang, Lulu; Krupkin, Miri; Bashan, Anat; Yonath, Ada; Massa, Lou

    2013-09-10

Experimental evidence suggests the existence of an RNA molecular prebiotic entity, called by us the "protoribosome," which may have evolved in the RNA world before evolution of the genetic code and proteins. This vestige of the RNA world, which possesses all of the capabilities required for peptide bond formation, seems to be still functioning at the heart of all contemporary ribosomes. Within the modern ribosome this remnant includes the peptidyl transferase center. Its highly conserved nucleotide sequence is suggestive of its robustness under diverse environmental conditions, and hence of its prebiotic origin. Its twofold pseudosymmetry suggests that this entity could have been a dimer of self-folding RNA units that formed a pocket within which two activated amino acids might be accommodated, similar to the binding mode of modern tRNA molecules that carry amino acids or peptidyl moieties. Using quantum mechanics and crystal coordinates, this work studies the question of whether the putative protoribosome has properties necessary to function as an evolutionary precursor to the modern ribosome. The quantum model used in the calculations is density functional theory--B3LYP/3-21G*, implemented using the kernel energy method to make the computations practical and efficient. It turns out that the necessary conditions that would characterize a practicable protoribosome--namely (i) energetic structural stability and (ii) energetically stable attachment to substrates--are both well satisfied.

  10. Enhanced FMAM based on empirical kernel map.

    PubMed

    Wang, Min; Chen, Songcan

    2005-05-01

The existing morphological auto-associative memory models based on the morphological operations, typically including morphological auto-associative memories (auto-MAM) proposed by Ritter et al. and our fuzzy morphological auto-associative memories (auto-FMAM), have many attractive advantages such as unlimited storage capacity, one-shot recall speed and good noise-tolerance to single erosive or dilative noise. However, they suffer from extreme vulnerability to noise that mixes erosion and dilation, resulting in great degradation in recall performance. To overcome this shortcoming, we focus on FMAM and propose an enhanced FMAM (EFMAM) based on the empirical kernel map. Although it is simple, EFMAM significantly improves on auto-FMAM in terms of recognition accuracy under hybrid noise and in computational effort. Experiments conducted on the thumbnail-sized faces (28 x 23 and 14 x 11) scaled from the ORL database show average accuracies of 92%, 90%, and 88% with 40 classes under 10%, 20%, and 30% randomly generated hybrid noise, respectively, which are far higher than those of auto-FMAM (67%, 46%, 31%) under the same noise levels.
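The empirical kernel map underlying EFMAM simply represents each sample by its vector of kernel values against a reference set. A small numpy sketch; the RBF kernel and all parameters are our choices and need not match the paper's:

```python
import numpy as np

def empirical_kernel_map(X_ref, X, gamma=0.5):
    # Map each row x of X to (k(x, x_1), ..., k(x, x_n)) over the reference
    # set X_ref, using an RBF kernel (an assumption for illustration).
    sq = ((X[:, None, :] - X_ref[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
Xt = rng.standard_normal((6, 3))         # reference/training samples
Phi = empirical_kernel_map(Xt, Xt)       # (6, 6) feature matrix, unit diagonal
```

The mapped features Phi can then be fed to any memory or classifier that expects ordinary vectors, which is how a kernel map "kernelizes" a model without reworking its internals.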

  11. Generalized Bergman kernels and geometric quantization

    NASA Astrophysics Data System (ADS)

    Tuynman, G. M.

    1987-03-01

In geometric quantization it is well known that, if f is an observable and F a polarization on a symplectic manifold (M,ω), then the condition ``Xf leaves F invariant'' (where Xf denotes the Hamiltonian vector field associated to f ) is sufficient to guarantee that one does not have to compute the BKS kernel explicitly in order to know the corresponding quantum operator. It is shown in this paper that this condition on f can be weakened to ``Xf leaves F+F° invariant'' and the corresponding quantum operator is then given implicitly by formula (4.8); in particular when F is a (positive) Kähler polarization, all observables can be quantized ``directly'' and moreover, an ``explicit'' formula for the corresponding quantum operator is derived (Theorem 5.8). Applying this to the phase space R^2n one obtains a quantization prescription which resembles the normal ordering of operators in quantum field theory. When we translate this prescription to the usual position representation of quantum mechanics, the result is (among others) that the operator associated to a classical potential is multiplication by a function which is essentially the convolution of the potential function with a Gaussian function of width ℏ, instead of multiplication by the potential itself.

  12. Local Kernel for Brains Classification in Schizophrenia

    NASA Astrophysics Data System (ADS)

    Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.

In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning by example method is introduced by combining local measurements with a nonlinear Support Vector Machine. Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points which are characterized by local region descriptors, namely the Scale Invariant Feature Transform (SIFT). Then, matching is obtained by introducing the local kernel, for which the samples are represented by unordered sets of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed on a set of 54 patients with schizophrenia and 54 normal controls, on which regions of interest (ROIs) have been manually traced by experts. Preliminary results on the Dorso-lateral PreFrontal Cortex (DLPFC) region are promising, since a successful classification rate of up to 75% has been obtained with this technique, improving to 85% when the subjects are stratified by sex.

  13. The Dynamic Kernel Scheduler-Part 1

    NASA Astrophysics Data System (ADS)

    Adelmann, Andreas; Locans, Uldis; Suter, Andreas

    2016-10-01

Emerging processor architectures such as GPUs and Intel MICs provide a huge performance potential for high performance computing. However, developing software that uses these hardware accelerators introduces additional challenges for the developer. These challenges may include exposing increased parallelism, handling different hardware designs, and using multiple development frameworks in order to utilise devices from different vendors. The Dynamic Kernel Scheduler (DKS) is being developed in order to provide a software layer between the host application and different hardware accelerators. DKS handles the communication between the host and the device, schedules task execution, and provides a library of built-in algorithms. Algorithms available in the DKS library will be written in CUDA, OpenCL, and OpenMP. Depending on the available hardware, the DKS can select the appropriate implementation of the algorithm. The first DKS version was created using CUDA for the Nvidia GPUs and OpenMP for Intel MIC. DKS was further integrated into OPAL (Object-oriented Parallel Accelerator Library) in order to speed up a parallel FFT based Poisson solver and Monte Carlo simulations for particle-matter interaction used for proton therapy degrader modelling. DKS was also used together with Minuit2 for parameter fitting, where χ2 and max-log-likelihood functions were offloaded to the hardware accelerator. The concepts of the DKS, first results, and plans for the future will be shown in this paper.

  14. Kernel MAD Algorithm for Relative Radiometric Normalization

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Tang, Ping; Hu, Changmiao

    2016-06-01

The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments on both the linear CCA and KCCA versions of the MAD algorithm with the use of Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data derived from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization, as it can describe well the nonlinear relationship between multi-temporal images. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.

  15. Searching for efficient Markov chain Monte Carlo proposal kernels.

    PubMed

    Yang, Ziheng; Rodríguez, Carlos E

    2013-11-26

    Markov chain Monte Carlo (MCMC) or the Metropolis-Hastings algorithm is a simulation algorithm that has made modern Bayesian statistical inference possible. Nevertheless, the efficiency of different Metropolis-Hastings proposal kernels has rarely been studied except for the Gaussian proposal. Here we propose a unique class of Bactrian kernels, which avoid proposing values that are very close to the current value, and compare their efficiency with a number of proposals for simulating different target distributions, with efficiency measured by the asymptotic variance of a parameter estimate. The uniform kernel is found to be more efficient than the Gaussian kernel, whereas the Bactrian kernel is even better. When optimal scales are used for both, the Bactrian kernel is at least 50% more efficient than the Gaussian. Implementation in a Bayesian program for molecular clock dating confirms the general applicability of our results to generic MCMC algorithms. Our results refute a previous claim that all proposals had nearly identical performance and will prompt further research into efficient MCMC proposals.
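A minimal sketch of a Metropolis sampler with a Bactrian proposal, i.e. an equal mixture of two normals offset from the current value so that moves very close to the current point are rarely proposed. The target distribution, scales, and the choice m = 0.95 below are our illustrative assumptions:

```python
import numpy as np

def bactrian_step(x, rng, s=1.0, m=0.95):
    # Bactrian proposal: equal mixture of normals centred at x +/- m*s,
    # each with sd s*sqrt(1 - m^2), so the overall proposal variance is s^2
    # but near-zero moves are avoided.
    centre = x + s * m * (1.0 if rng.random() < 0.5 else -1.0)
    return centre + rng.standard_normal() * s * np.sqrt(1.0 - m * m)

def metropolis(logpi, x0, n, s=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, out = x0, np.empty(n)
    for i in range(n):
        y = bactrian_step(x, rng, s=s)
        # The mixture is symmetric in (x, y), so the plain Metropolis
        # acceptance ratio applies with no Hastings correction.
        if np.log(rng.random()) < logpi(y) - logpi(x):
            x = y
        out[i] = x
    return out

# Sample a standard normal target as a sanity check.
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
```

Because q(y|x) = q(x|y) for this two-humped proposal, it drops into any existing Metropolis code as a replacement for the Gaussian kernel.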

  16. Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.

    PubMed

    Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K

    2016-03-01

    Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically.

  17. An Ensemble Approach to Building Mercer Kernels with Prior Information

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite, dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.

  18. Multiple Kernel Learning for Visual Object Recognition: A Review.

    PubMed

Bucak, Serhat S; Jin, Rong; Jain, Anil K

    2014-07-01

    Multiple kernel learning (MKL) is a principled approach for selecting and combining kernels for a given recognition task. A number of studies have shown that MKL is a useful tool for object recognition, where each image is represented by multiple sets of features and MKL is applied to combine different feature sets. We review the state-of-the-art for MKL, including different formulations and algorithms for solving the related optimization problems, with the focus on their applications to object recognition. One dilemma faced by practitioners interested in using MKL for object recognition is that different studies often provide conflicting results about the effectiveness and efficiency of MKL. To resolve this, we conduct extensive experiments on standard datasets to evaluate various approaches to MKL for object recognition. We argue that the seemingly contradictory conclusions offered by studies are due to different experimental setups. The conclusions of our study are: (i) given a sufficient number of training examples and feature/kernel types, MKL is more effective for object recognition than simple kernel combination (e.g., choosing the best performing kernel or average of kernels); and (ii) among the various approaches proposed for MKL, the sequential minimal optimization, semi-infinite programming, and level method based ones are computationally most efficient.
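The "average of kernels" baseline that the review compares MKL against is just a uniform-weight combination of base Gram matrices. A small numpy sketch with illustrative RBF bandwidths (our choices):

```python
import numpy as np

def rbf_gram(X, gamma):
    # Gram matrix of an RBF kernel over the rows of X.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

X = np.random.default_rng(1).standard_normal((10, 4))
grams = [rbf_gram(X, g) for g in (0.1, 1.0, 10.0)]   # bandwidths assumed
K_avg = sum(grams) / len(grams)   # uniform-weight kernel combination
```

Any convex combination of valid (PSD) kernels is again a valid kernel, so K_avg can be handed directly to an SVM; MKL replaces the uniform weights with learned ones.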

  19. Quantum heat traces

    NASA Astrophysics Data System (ADS)

    Avramidi, Ivan G.

    2017-02-01

We study new invariants of elliptic partial differential operators acting on sections of a vector bundle over a closed Riemannian manifold that we call the relativistic heat trace and the quantum heat traces. We obtain some reduction formulas expressing these new invariants in terms of some integral transforms of the usual classical heat trace and compute the asymptotics of these invariants. The coefficients of these asymptotic expansions are determined by the usual heat trace coefficients (which are locally computable) as well as by some new global invariants.

  20. Simulating non-Newtonian flows with the moving particle semi-implicit method with an SPH kernel

    NASA Astrophysics Data System (ADS)

    Xiang, Hao; Chen, Bin

    2015-02-01

The moving particle semi-implicit (MPS) method and smoothed particle hydrodynamics (SPH) are commonly used mesh-free particle methods for free surface flows. The MPS method has superiority in incompressible flow simulation and simple programming. However, the crude kernel function is not accurate enough for the discretization of the divergence of the shear stress tensor, owing to particle inconsistency, when the MPS method is extended to non-Newtonian flows. This paper presents an improved MPS method with an SPH kernel to simulate non-Newtonian flows. To improve the consistency of the partial derivative, the SPH cubic spline kernel and the Taylor series expansion are combined with the MPS method. This approach is suitable for all non-Newtonian fluids that can be described with τ = μ(|γ|) Δ (where τ is the shear stress tensor, μ is the viscosity, |γ| is the shear rate, and Δ is the strain tensor), e.g., the Casson and Cross fluids. Two examples are simulated, including Newtonian Poiseuille flow and the container filling process of a Cross fluid. The results for Poiseuille flow are more accurate than those of the traditional MPS method, and different filling processes are obtained in good agreement with previous results, which verifies the validity of the new algorithm. For the Cross fluid, the jet fracture length can be correlated with We^0.28 Fr^0.78 (where We is the Weber number and Fr is the Froude number).
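For reference, the SPH cubic (B-spline) kernel mentioned above can be written down and checked numerically; this sketch uses the one-dimensional form, where the standard normalization constant is 2/(3h), and the smoothing length is our choice:

```python
import numpy as np

def cubic_spline_kernel(r, h):
    # Standard 1-D SPH cubic (B-spline) kernel with support |r| < 2h;
    # sigma = 2/(3h) makes the kernel integrate to one over the line.
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# Numerical check of the unit-integral (zeroth-moment) property.
h = 0.1
x = np.linspace(-2.0 * h, 2.0 * h, 8001)
dx = x[1] - x[0]
integral = float(np.sum(cubic_spline_kernel(x, h)) * dx)
```

In two or three dimensions only the normalization constant changes; the piecewise-cubic shape, and hence the improved consistency of kernel derivatives compared with the original MPS weight, stays the same.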

  1. Thermal Expansion of Vacuum Plasma Sprayed Coatings

    NASA Technical Reports Server (NTRS)

    Raj, S V.; Palczer, A. R.

    2010-01-01

    Metallic Cu-8%Cr, Cu-26%Cr, Cu-8%Cr-1%Al, NiAl and NiCrAlY monolithic coatings were fabricated by vacuum plasma spray deposition processes for thermal expansion property measurements between 293 and 1223 K. The corrected thermal expansion, ΔL/L0, varies with the absolute temperature, T, as ΔL/L0 = A(T − 293)^3 + B(T − 293)^2 + C(T − 293) + D, where A, B, C and D are regression constants. Excellent reproducibility was observed for all of the coatings except for data obtained on the Cu-8%Cr and Cu-26%Cr coatings in the first heat-up cycle, which deviated from those determined in the subsequent cycles. This deviation is attributed to the presence of residual stresses developed during the spraying of the coatings, which are relieved after the first heat-up cycle. In the cases of Cu-8%Cr and NiAl, the thermal expansion data were observed to be reproducible for three specimens. The linear expansion data for Cu-8%Cr and Cu-26%Cr agree extremely well with rule of mixtures (ROM) predictions. Comparison of the data for the Cu-8%Cr coating with literature data for Cr and Cu revealed that the thermal expansion behavior of this alloy is determined by the Cu-rich matrix. The data for NiAl and NiCrAlY are in excellent agreement with published results irrespective of composition and the methods used for processing the materials. The implications of these results for coating GRCop-84 copper alloy combustor liners for reusable launch vehicles are discussed.
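    The cubic fit above is straightforward to evaluate; the sketch below uses hypothetical regression constants A, B, C, D purely for illustration (the paper's fitted values for each coating are not reproduced here):

    ```python
    import numpy as np

    # Hypothetical regression constants, for illustration only
    A, B, C, D = 1.0e-12, 2.0e-9, 1.6e-5, 0.0

    def corrected_expansion(T):
        """dL/L0 = A(T - 293)^3 + B(T - 293)^2 + C(T - 293) + D."""
        dT = T - 293.0
        return A * dT**3 + B * dT**2 + C * dT + D

    def cte(T):
        """Instantaneous coefficient of thermal expansion, d(dL/L0)/dT."""
        dT = T - 293.0
        return 3.0 * A * dT**2 + 2.0 * B * dT + C
    ```

    Note that the constant C is the instantaneous expansion coefficient at the 293 K reference temperature, since the higher-order terms vanish there.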

  2. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.
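    The out-of-sample idea — extending a learned kernel matrix to a kernel function by regression — can be sketched with a Nyström-style ridge regression onto a base RBF kernel (an illustrative stand-in, not the paper's hyper reproducing kernel Hilbert space formulation):

    ```python
    import numpy as np

    def rbf(X, Y, gamma=1.0):
        """Base RBF kernel between row-wise point sets X and Y."""
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def extend_kernel(X_train, K_np, X_new, gamma=1.0, ridge=1e-6):
        """Extend a learned nonparametric kernel matrix K_np to new points
        by ridge-regressing each of its columns onto a base RBF kernel."""
        n = len(X_train)
        K_base = rbf(X_train, X_train, gamma)
        Alpha = np.linalg.solve(K_base + ridge * np.eye(n), K_np)  # (n, n)
        return rbf(X_new, X_train, gamma) @ Alpha                  # (m, n)
    ```

    Evaluated back at the training points, the extension should approximately reproduce the learned kernel matrix, which is the basic sanity check for any out-of-sample scheme.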

  3. A Gabor-block-based kernel discriminative common vector approach using cosine kernels for human face recognition.

    PubMed

    Kar, Arindam; Bhattacharjee, Debotosh; Basu, Dipak Kumar; Nasipuri, Mita; Kundu, Mahantapas

    2012-01-01

    In this paper a nonlinear Gabor Wavelet Transform (GWT) discriminant feature extraction approach for enhanced face recognition is proposed. Firstly, the low-energized blocks from Gabor wavelet transformed images are extracted. Secondly, the nonlinear discriminating features are analyzed and extracted from the selected low-energized blocks by the generalized Kernel Discriminative Common Vector (KDCV) method. The KDCV method is extended to include the cosine kernel function in the discriminating method. The KDCV with cosine kernels is then applied to the extracted low-energized discriminating feature vectors to obtain the real component of a complex quantity for face recognition. In order to derive positive kernel discriminative vectors, we use only those kernel discriminative eigenvectors that are associated with nonzero eigenvalues. The feasibility of the low-energized Gabor-block-based generalized KDCV method with cosine kernel function models has been successfully tested for classification using the L1 and L2 distance measures and the cosine similarity measure, on both frontal and pose-angled face recognition. Experimental results on the FRAV2D and FERET databases demonstrate the effectiveness of this new approach.

  4. Bigravity from gradient expansion

    SciTech Connect

    Yamashita, Yasuho; Tanaka, Takahiro

    2016-05-04

    We discuss how ghost-free bigravity coupled with a single scalar field can be derived from a braneworld setup. We consider the DGP two-brane model without radion stabilization. The bulk configuration is solved for given boundary metrics and substituted back into the action to obtain the effective four-dimensional action. In order to obtain the ghost-free bigravity, we consider the gradient expansion, in which the brane separation is supposed to be sufficiently small that the two boundary metrics are almost identical. The obtained effective theory is shown to be ghost-free as expected; however, the interaction between the two gravitons takes the Fierz-Pauli form at the leading order of the gradient expansion, even though we do not use the approximation of linear perturbation. We also find that the radion remains as a scalar field in the four-dimensional effective theory, but its coupling to the metrics is non-trivial.

  5. Range expansion of mutualists

    NASA Astrophysics Data System (ADS)

    Muller, Melanie J. I.; Korolev, Kirill S.; Murray, Andrew W.; Nelson, David R.

    2012-02-01

    The expansion of a species into new territory is often strongly influenced by the presence of other species. This effect is particularly striking for the case of mutualistic species that enhance each other's proliferation. Examples range from major events in evolutionary history, such as the spread and diversification of flowering plants due to their mutualism with pollen-dispersing insects, to modern examples like the surface colonisation of multi-species microbial biofilms. Here, we investigate the spread of cross-feeding strains of the budding yeast Saccharomyces cerevisiae on an agar surface as a model system for expanding mutualists. Depending on the degree of mutualism, the two strains form distinctive spatial patterns during their range expansion. This change in spatial patterns can be understood as a phase transition within a stepping stone model generalized to two mutualistic species.

  6. Kernel density estimator methods for Monte Carlo radiation transport

    NASA Astrophysics Data System (ADS)

    Banerjee, Kaushik

    In this dissertation, the Kernel Density Estimator (KDE), a nonparametric probability density estimator, is studied and used to represent global Monte Carlo (MC) tallies. KDE is also employed to remove the singularities from two important Monte Carlo tallies, namely point detector and surface crossing flux tallies. Finally, KDE is also applied to accelerate the Monte Carlo fission source iteration for criticality problems. In conventional MC calculations, histograms, which divide the phase space into multiple bins, are used to represent global tallies. Partitioning the phase space into bins can add significant overhead to the MC simulation, and the histogram provides only a first-order approximation to the underlying distribution. The KDE method is attractive because it can estimate MC tallies at any location within the required domain without any particular bin structure. Post-processing of the KDE tallies is sufficient to extract detailed, higher-order tally information for an arbitrary grid. The quantitative and numerical convergence properties of KDE tallies are also investigated, and they are shown to be superior to conventional histograms as well as to the functional expansion tally developed by Griesheimer. Monte Carlo point detector and surface crossing flux tallies are two widely used tallies, but they suffer from an unbounded variance. As a result, the central limit theorem cannot be used for these tallies to estimate confidence intervals. By construction, KDE tallies can be directly used to estimate flux at a point, but the variance of this point estimate does not converge as 1/N, which is not unexpected for a point quantity. However, an improved approach is to modify both point detector and surface crossing flux tallies directly by using KDE within a variance reduction approach, taking advantage of the fact that KDE estimates the underlying probability density function. This methodology is demonstrated by several numerical examples and demonstrates that
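    A minimal 1D sketch of a KDE tally, assuming Gaussian kernels and hypothetical collision sites and weights (not the dissertation's implementation):

    ```python
    import numpy as np

    def kde_tally(sites, weights, x_eval, h):
        """1D Gaussian KDE tally: each scored event spreads a smooth kernel
        of bandwidth h around its collision site instead of incrementing a
        histogram bin, so the estimate is defined at any point x_eval."""
        u = (x_eval[:, None] - sites[None, :]) / h
        K = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
        return (K * weights[None, :]).sum(axis=1) / (weights.sum() * h)
    ```

    Unlike a histogram, the result needs no bin structure fixed in advance; the same scored events can be re-evaluated on any grid after the simulation.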

  7. Cold-moderator scattering kernel methods

    SciTech Connect

    MacFarlane, R. E.

    1998-01-01

    An accurate representation of the scattering of neutrons by the materials used to build cold sources at neutron scattering facilities is important for the initial design and optimization of a cold source, and for the analysis of experimental results obtained using the cold source. In practice, this requires a good representation of the physics of scattering from the material, a method to convert this into observable quantities (such as scattering cross sections), and a method to use the results in a neutron transport code (such as the MCNP Monte Carlo code). At Los Alamos, the authors have been developing these capabilities over the last ten years. The final set of cold-moderator evaluations, together with evaluations for conventional moderator materials, was released in 1994. These materials have been processed into MCNP data files using the NJOY Nuclear Data Processing System. Over the course of this work, they were able to develop a new module for NJOY called LEAPR based on the LEAP + ADDELT code from the UK as modified by D.J. Picton for cold-moderator calculations. Much of the physics for methane came from Picton's work. The liquid hydrogen work was originally based on a code using the Young-Koppel approach that went through a number of hands in Europe (including Rolf Neef and Guy Robert). It was generalized and extended for LEAPR, and depends strongly on work by Keinert and Sax of the University of Stuttgart. Thus, their collection of cold-moderator scattering kernels is truly an international effort, and they are glad to be able to return the enhanced evaluations and processing techniques to the international community. In this paper, they give sections on the major cold moderator materials (namely, solid methane, liquid methane, and liquid hydrogen) using each section to introduce the relevant physics for that material and to show typical results.

  8. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) represent a group of secondary metabolic products in plants. AQS occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by the enzymatic browning reaction in Chinese chestnut kernels. To find out whether the non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the AQS contents were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained high amounts of AQS. Thus, we confirmed that AQS can be produced during both the enzymatic and the non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.

  9. Negative thermal expansion induced by intermetallic charge transfer.

    PubMed

    Azuma, Masaki; Oka, Kengo; Nabetani, Koichiro

    2015-06-01

    Suppression of thermal expansion is of great importance for industry. Negative thermal expansion (NTE) materials, which shrink on heating and expand on cooling, are therefore attracting keen attention. Here we provide a brief overview of NTE induced by intermetallic charge transfer in the A-site ordered double perovskites SrCu3Fe4O12 and LaCu3Fe4-xMnxO12, as well as in Bi- or Ni-substituted BiNiO3. The last compound shows a colossal dilatometric linear thermal expansion coefficient exceeding -70 × 10^-6 K^-1 near room temperature, in a temperature range which can be controlled by substitution.

  10. Thermal expansion of glassy polymers.

    PubMed

    Davy, K W; Braden, M

    1992-01-01

    The thermal expansion of a number of glassy polymers of interest in dentistry has been studied using a quartz dilatometer. In some cases, the expansion was linear and the coefficient of thermal expansion therefore readily determined. Other polymers exhibited non-linear behaviour, and values appropriate to different temperature ranges are quoted. The linear coefficient of thermal expansion was, to a first approximation, a function of both the molar volume and the van der Waals volume of the repeating unit.

  11. Digestibility of solvent-treated Jatropha curcas kernel by broiler chickens in Senegal.

    PubMed

    Nesseim, Thierry Daniel Tamsir; Dieng, Abdoulaye; Mergeai, Guy; Ndiaye, Saliou; Hornick, Jean-Luc

    2015-12-01

    Jatropha curcas is a drought-resistant shrub belonging to the Euphorbiaceae family. The kernel contains approximately 60 % lipid in dry matter, and the meal obtained after oil extraction could be an exceptional source of protein for family poultry farming, in the absence of curcin and, especially, of the partially lipophilic diterpene derivatives known as phorbol esters. The nutrient digestibility of J. curcas kernel meal (JKM), obtained after partial physicochemical deoiling, was thus evaluated in broiler chickens. Twenty broiler chickens, 6 weeks old, were maintained in individual metabolic cages and divided into four groups of five animals, according to a 4 × 4 Latin square design where deoiled JKM was incorporated into ground corn at 0, 4, 8, and 12 % levels (diets 0, 4, 8, and 12 J), allowing measurement of nutrient digestibility by the differential method. The dry matter (DM) and organic matter (OM) digestibility of the diets was affected to a low extent by JKM (85 and 86 % in 0 J and 81 % in 12 J, respectively), in such a way that the DM and OM digestibility of JKM was estimated to be close to 50 %. The ether extract (EE) digestibility of JKM remained high, at about 90 %, while crude protein (CP) and crude fiber (CF) digestibility were largely impacted by JKM, with values close to 40 % at the highest levels of incorporation. J. curcas kernel thus presents variable nutrient digestibilities and has adverse effects on the CP and CF digestibility of the diet. The effects of an additional heat or biological treatment on JKM remain to be assessed.

  12. Mean kernels to improve gravimetric geoid determination based on modified Stokes's integration

    NASA Astrophysics Data System (ADS)

    Hirt, C.

    2011-11-01

    Gravimetric geoid computation is often based on modified Stokes's integration, where Stokes's integral is evaluated with some stochastic or deterministic kernel modification. Accurate numerical evaluation of Stokes's integral requires the modified kernel to be integrated across the area of each discretised grid cell (mean kernel). Evaluating the modified kernel at the center of the cell (point kernel) is an approximation, which may result in larger numerical integration errors near the computation point, where the modified kernel exhibits a strongly nonlinear behavior. The present study deals with the computation of whole-of-the-cell mean values of modified kernels, exemplified here with the Featherstone-Evans-Olliver (1998) kernel modification [Featherstone, W.E., Evans, J.D., Olliver, J.G., 1998. A Meissl-modified Vaníček and Kleusberg kernel to reduce the truncation error in gravimetric geoid computations. Journal of Geodesy 72(3), 154-160]. We investigate two approaches (analytical and numerical integration), which are capable of providing accurate mean kernels. The analytical integration approach is based on kernel weighting factors which are used for the conversion of point to mean kernels. For the efficient numerical integration, Gauss-Legendre quadrature is applied. The comparison of mean kernels from both approaches shows a satisfactory mutual agreement at the level of 10^-4 and better, which is considered to be sufficient for practical geoid computation requirements. Closed-loop tests based on the EGM2008 geopotential model demonstrate that using mean instead of point kernels reduces numerical integration errors by ~65%. The use of mean kernels is recommended in remove-compute-restore geoid determination with the Featherstone-Evans-Olliver (1998) kernel or any other kernel modification under the condition that the kernel changes rapidly across the cells in the neighborhood of the computation point.
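    The mean-kernel computation via Gauss-Legendre quadrature can be sketched as below, using a toy 1/sin(ψ/2)-type kernel as an illustrative stand-in for a modified Stokes kernel (not the Featherstone-Evans-Olliver modification itself):

    ```python
    import numpy as np

    def mean_kernel_over_cell(kernel, a, b, order=8):
        """Whole-of-the-cell mean of a kernel over [a, b] via
        Gauss-Legendre quadrature."""
        nodes, weights = np.polynomial.legendre.leggauss(order)
        x = 0.5 * (b - a) * nodes + 0.5 * (a + b)  # map [-1, 1] -> [a, b]
        return 0.5 * np.sum(weights * kernel(x))   # = integral / (b - a)

    # Toy kernel with a Stokes-like singularity near the computation point
    S = lambda psi: 1.0 / np.sin(0.5 * psi)
    mean_val = mean_kernel_over_cell(S, 0.01, 0.02)  # cell in spherical distance
    point_val = S(0.015)                             # point-kernel approximation
    ```

    Because the kernel is strongly nonlinear close to the computation point, the cell mean exceeds the midpoint (point-kernel) value by a few percent even on this tiny interval, which is exactly the discrepancy the mean-kernel approach removes.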

  13. Expansion: A Plan for Success.

    ERIC Educational Resources Information Center

    Callahan, A.P.

    This report provides selling brokers' guidelines for the successful expansion of their operations, outlining a basic method of preparing an expansion plan. Topic headings are: The Pitfalls of Expansion (The Language of Business, Timely Financial Reporting, Regulatory Agencies of Government, Preoccupation with the Facade of Business, A Business Is a…

  14. A one-class kernel fisher criterion for outlier detection.

    PubMed

    Dufrenois, Franck

    2015-05-01

    Recently, Dufrenois and Noyer proposed a one-class Fisher linear discriminant to isolate normal data from outliers. In this paper, a kernelized version of their criterion is presented. Their criterion was originally optimized iteratively, alternating between subspace selection and clustering; here I show that it has an upper bound that makes these two problems independent. In particular, the estimation of the label vector is formulated as an unconstrained binary linear problem (UBLP) which can be solved using an iterative perturbation method. Once the label vector is estimated, an optimal projection subspace is obtained by solving a generalized eigenvalue problem. Like many other kernel methods, the performance of the proposed approach depends on the choice of the kernel. Constructed with a Gaussian kernel, the proposed contrast measure is shown to be an efficient indicator for selecting an optimal kernel width. This property simplifies the model selection problem, which is typically solved by costly (generalized) cross-validation procedures. Initialization, convergence analysis, and computational complexity are also discussed. Lastly, the proposed algorithm is compared with recent novelty detectors on synthetic and real data sets.

  15. On flame kernel formation and propagation in premixed gases

    SciTech Connect

    Eisazadeh-Far, Kian; Metghalchi, Hameed; Parsinejad, Farzan; Keck, James C.

    2010-12-15

    Flame kernel formation and propagation in premixed gases have been studied experimentally and theoretically. The experiments have been carried out at constant pressure and temperature in a constant volume vessel located in a high speed shadowgraph system. The formation and propagation of the hot plasma kernel has been simulated for inert gas mixtures using a thermodynamic model. The effects of various parameters including the discharge energy, radiation losses, initial temperature and initial volume of the plasma have been studied in detail. The experiments have been extended to flame kernel formation and propagation in methane/air mixtures. The effects of energy terms, including spark energy, chemical energy and energy losses, on flame kernel formation and propagation have been investigated. The inputs for this model are the initial conditions of the mixture and experimental data for flame radii. It is concluded that these are the most important parameters affecting plasma kernel growth. The laminar burning speed results have been compared with previously published results and are in good agreement. (author)

  16. CRKSPH - A Conservative Reproducing Kernel Smoothed Particle Hydrodynamics Scheme

    NASA Astrophysics Data System (ADS)

    Frontiere, Nicholas; Raskin, Cody D.; Owen, J. Michael

    2017-03-01

    We present a formulation of smoothed particle hydrodynamics (SPH) that utilizes a first-order consistent reproducing kernel, a smoothing function that exactly interpolates linear fields with particle tracers. Previous formulations using reproducing kernel (RK) interpolation have had difficulties maintaining conservation of momentum due to the fact that RK kernels are not, in general, spatially symmetric. Here, we utilize a reformulation of the fluid equations such that mass, linear momentum, and energy are all rigorously conserved without any assumption about kernel symmetries, while additionally maintaining approximate angular momentum conservation. Our approach starts from a rigorously consistent interpolation theory, where we derive the evolution equations to enforce the appropriate conservation properties, at the sacrifice of full consistency in the momentum equation. Additionally, by exploiting the increased accuracy of the RK method's gradient, we formulate a simple limiter for the artificial viscosity that reduces the excess diffusion normally incurred by the ordinary SPH artificial viscosity. Collectively, we call our suite of modifications to the traditional SPH scheme Conservative Reproducing Kernel SPH, or CRKSPH. CRKSPH retains many benefits of traditional SPH methods (such as preserving Galilean invariance and manifest conservation of mass, momentum, and energy) while improving on many of the shortcomings of SPH, particularly the overly aggressive artificial viscosity and zeroth-order inaccuracy. We compare CRKSPH to two different modern SPH formulations (pressure-based SPH and compatibly differenced SPH), demonstrating the advantages of our new formulation when modeling fluid mixing, strong shock, and adiabatic phenomena.
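    The first-order reproducing kernel correction underlying this family of methods can be illustrated in 1D: correction coefficients A and B are chosen so the corrected kernel exactly reproduces constant and linear fields on an irregular particle set (a sketch with a Gaussian base kernel, not the CRKSPH code):

    ```python
    import numpy as np

    def rk_weights(x_i, x_j, V_j, h):
        """First-order-consistent reproducing kernel correction in 1D:
        w_j = (A + B*(x_j - x_i)) * W(x_j - x_i), with A, B solving the
        moment equations sum(w*V) = 1 and sum(w*V*(x_j - x_i)) = 0."""
        dx = x_j - x_i
        W = np.exp(-(dx / h)**2) / (np.sqrt(np.pi) * h)  # Gaussian base kernel
        m0 = np.sum(V_j * W)
        m1 = np.sum(V_j * dx * W)
        m2 = np.sum(V_j * dx**2 * W)
        A, B = np.linalg.solve([[m0, m1], [m1, m2]], [1.0, 0.0])
        return (A + B * dx) * W
    ```

    Note the corrected weights are not symmetric between particle pairs in general, which is precisely why a conservative reformulation of the fluid equations is needed.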

  17. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    PubMed

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function.

  18. Coupled kernel embedding for low resolution face image recognition.

    PubMed

    Ren, Chuan-Xian; Dai, Dao-Qing; Yan, Hong

    2012-08-01

    Practical video scene and face recognition systems are sometimes confronted with low-resolution (LR) images. The faces may be very small even if the video is clear, so it is difficult to directly measure the similarity between the faces and the high-resolution (HR) training samples. Face recognition based on traditional super-resolution (SR) methods usually has limited performance because the target of SR may not be consistent with that of classification, and time-consuming SR algorithms are not suitable for real-time applications. In this paper, a new feature extraction method called Coupled Kernel Embedding (CKE) is proposed for LR face recognition without any SR preprocessing. In this method, the final kernel matrix is constructed by concatenating two individual kernel matrices in the diagonal direction, and (semi-)positive definiteness is preserved for optimization. CKE addresses the problem of comparing multi-modal data for which conventional methods lack an efficient similarity measure. In particular, different kernel types (e.g., linear, Gaussian, polynomial) can be integrated into a unified optimization objective, which cannot be achieved by simple linear methods. CKE solves this problem by minimizing the dissimilarities captured by the kernel Gram matrices in the low- and high-resolution spaces. In the implementation, the nonlinear objective function is minimized by a generalized eigenvalue decomposition. Experiments on benchmark and real databases show that our CKE method indeed improves recognition performance.
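    The diagonal concatenation of the two Gram matrices, which preserves (semi-)positive definiteness, can be sketched as follows (illustrative, with hypothetical RBF Grams standing in for the low- and high-resolution kernels):

    ```python
    import numpy as np

    def coupled_kernel(K_lr, K_hr):
        """Concatenate two Gram matrices along the diagonal; the result is
        PSD because its spectrum is the union of the blocks' spectra."""
        n, m = len(K_lr), len(K_hr)
        K = np.zeros((n + m, n + m))
        K[:n, :n] = K_lr
        K[n:, n:] = K_hr
        return K

    def rbf_gram(X, gamma=1.0):
        """RBF Gram matrix of a row-wise point set (always PSD)."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    ```

    Because the blocks need not come from the same kernel type, an LR Gaussian Gram and an HR polynomial Gram can be combined in the same optimization, which is the flexibility the abstract highlights.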

  19. Optimizing spatial filters with kernel methods for BCI applications

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacai; Tang, Jianjun; Yao, Li

    2007-11-01

    Brain Computer Interface (BCI) is a communication or control system in which the user's messages or commands do not depend on the brain's normal output channels. The key step in BCI technology is to find a reliable method to detect particular brain signals, such as the alpha, beta and mu components in EEG/ECoG trials, and then translate them into usable control signals. In this paper, our objective is to introduce a novel approach that is able to extract discriminative patterns from non-stationary EEG signals, based on common spatial patterns (CSP) analysis combined with kernel methods. The basic idea of our kernel CSP method is to perform a nonlinear form of CSP using kernel methods that efficiently compute the common and distinct components in high-dimensional feature spaces related to the input space by some nonlinear map. The algorithm described here is tested off-line with dataset I from the BCI Competition 2005. Our experiments show that spatial filters employed with kernel CSP can effectively extract discriminatory information from single-trial ECoG recorded during imagined movements. The high recognition rates and the computational simplicity of the "kernel trick" make it a promising method for BCI systems.
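    The CSP step can be sketched in its linear form as a generalized eigenvalue problem between the two class covariances (the kernel variant replaces covariances with feature-space Gram matrices; this illustrative numpy version solves it via a whitening transform):

    ```python
    import numpy as np

    def csp_filters(trials_a, trials_b):
        """Linear CSP via whitening: solve Ca w = lam (Ca + Cb) w. Columns
        of the returned matrix are spatial filters, ordered by ascending
        eigenvalue; the last maximizes class-A variance relative to class
        B, the first does the opposite."""
        def avg_cov(trials):
            return np.mean([X @ X.T / np.trace(X @ X.T) for X in trials],
                           axis=0)
        Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
        evals, U = np.linalg.eigh(Ca + Cb)
        P = U @ np.diag(evals ** -0.5) @ U.T  # whitening: P (Ca+Cb) P = I
        _, V = np.linalg.eigh(P @ Ca @ P)
        return P @ V
    ```

    In a BCI pipeline, the log-variances of a few extreme filters' outputs would then serve as features for a classifier.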

  20. Travel-time sensitivity kernels in long-range propagation.

    PubMed

    Skarsoulis, E K; Cornuelle, B D; Dzieciuch, M A

    2009-11-01

    Wave-theoretic travel-time sensitivity kernels (TSKs) are calculated in two-dimensional (2D) and three-dimensional (3D) environments and their behavior with increasing propagation range is studied and compared to that of ray-theoretic TSKs and corresponding Fresnel-volumes. The differences between the 2D and 3D TSKs average out when horizontal or cross-range marginals are considered, which indicates that they are not important in the case of range-independent sound-speed perturbations or perturbations of large scale compared to the lateral TSK extent. With increasing range, the wave-theoretic TSKs expand in the horizontal cross-range direction, their cross-range extent being comparable to that of the corresponding free-space Fresnel zone, whereas they remain bounded in the vertical. Vertical travel-time sensitivity kernels (VTSKs)-one-dimensional kernels describing the effect of horizontally uniform sound-speed changes on travel-times-are calculated analytically using a perturbation approach, and also numerically, as horizontal marginals of the corresponding TSKs. Good agreement between analytical and numerical VTSKs, as well as between 2D and 3D VTSKs, is found. As an alternative method to obtain wave-theoretic sensitivity kernels, the parabolic approximation is used; the resulting TSKs and VTSKs are in good agreement with normal-mode results. With increasing range, the wave-theoretic VTSKs approach the corresponding ray-theoretic sensitivity kernels.

  1. Face detection based on multiple kernel learning algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun

    2016-09-01

    Face detection is important for face localization in face or facial expression recognition, etc. The basic idea is to determine whether there is a face in an image and, if so, its location and size. This can be seen as a binary classification problem, which can be well solved by the support vector machine (SVM). Though SVM has strong model generalization ability, it has some limitations, which are analyzed in depth in the paper. To address them, we study the principle and characteristics of Multiple Kernel Learning (MKL) and propose an MKL-based face detection algorithm. In the paper, we describe the proposed algorithm from the interdisciplinary perspective of machine learning and image processing. After analyzing the limitation of describing a face with a single feature, we apply several of them. To fuse them well, we try different kernel functions on different features. By the MKL method, the weight of each single function is determined. Thus, we obtain the face detection model, which is the kernel of the proposed method. Experiments on a public data set and real-life face images are performed. We compare the performance of the proposed algorithm with single kernel-single feature and multiple kernels-single feature based algorithms. The effectiveness of the proposed algorithm is illustrated. Keywords: face detection, feature fusion, SVM, MKL

  2. Spine labeling in axial magnetic resonance imaging via integral kernels.

    PubMed

    Miles, Brandon; Ben Ayed, Ismail; Hojjat, Seyed-Parsa; Wang, Michael H; Li, Shuo; Fenster, Aaron; Garvin, Gregory J

    2016-12-01

    This study investigates a fast integral-kernel algorithm for classifying (labeling) the vertebra and disc structures in axial magnetic resonance images (MRI). The method is based on a hierarchy of feature levels, where pixel classifications via non-linear probability product kernels (PPKs) are followed by classifications of 2D slices, individual 3D structures and groups of 3D structures. The algorithm further embeds geometric priors based on anatomical measurements of the spine. Our classifier requires evaluations of computationally expensive integrals at each pixel, and direct evaluations of such integrals would be prohibitively time consuming. We propose an efficient computation of kernel density estimates and PPK evaluations for large images and arbitrary local window sizes via integral kernels. Our method requires a single user click for a whole 3D MRI volume, runs nearly in real time, and does not require intensive external training. Comprehensive evaluations over T1-weighted axial lumbar spine data sets from 32 patients demonstrate a competitive structure classification accuracy of 99%, along with a 2D slice classification accuracy of 88%. To the best of our knowledge, such a structure classification accuracy has not been reached by the existing spine labeling algorithms. Furthermore, we believe our work is the first to use integral kernels in the context of medical images.
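    Fast evaluation of local-window integrals at every pixel is the role of integral (summed-area) images, the basic machinery behind integral-kernel methods; a minimal sketch, not the paper's PPK implementation:

    ```python
    import numpy as np

    def integral_image(img):
        """Summed-area table S with S[i, j] = img[:i, :j].sum()."""
        return np.pad(img.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))

    def window_sum(S, i0, i1, j0, j1):
        """Sum of img[i0:i1, j0:j1] in O(1), independent of window size."""
        return S[i1, j1] - S[i0, j1] - S[i1, j0] + S[i0, j0]
    ```

    After one O(N) pass to build the table, any rectangular window sum costs four lookups, which is what makes arbitrary local window sizes affordable at every pixel.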

  3. Working fluids and expansion machines for ORC

    NASA Astrophysics Data System (ADS)

    Richter, Lukáš; Linhart, Jiří

    2016-06-01

    This paper discusses the key technical aspects of the Organic Rankine-Clausius cycle (ORC), an unconventional technology with great potential for using low-potential heat, geothermal and solar energy, and heat from the burning of biomass. The principle of the ORC has been known since the late 19th century. The development of new organic substances and improvements to expansion devices now allow full commercial exploitation of the ORC. The choice of organic working substance plays the most important role in the design of an ORC, depending on the specific application. The chosen working substance and the achieved operating parameters affect the selection and construction of the expansion device. For this purpose a screw engine, an inversion of the screw compressor, can be used.

  4. Kernel Manifold Alignment for Domain Adaptation.

    PubMed

    Tuia, Devis; Camps-Valls, Gustau

    2016-01-01

    The wealth of sensory data coming from different modalities has opened numerous opportunities for data analysis. The data are of increasing volume, complexity and dimensionality, thus calling for new methodological innovations towards multimodal data processing. However, multimodal architectures must rely on models able to adapt to changes in the data distribution. Differences in the density functions can be due to changes in acquisition conditions (pose, illumination), sensor characteristics (number of channels, resolution) or different views (e.g. street level vs. aerial views of the same building). We call these different acquisition modes domains, and refer to the adaptation problem as domain adaptation. In this paper, instead of adapting the trained models themselves, we alternatively focus on finding mappings of the data sources into a common, semantically meaningful, representation domain. This field of manifold alignment extends traditional techniques in statistics such as canonical correlation analysis (CCA) to deal with nonlinear adaptation and possibly non-corresponding data pairs between the domains. We introduce a kernel method for manifold alignment (KEMA) that can match an arbitrary number of data sources without needing corresponding pairs, just a few labeled examples in all domains. KEMA has interesting properties: 1) it generalizes other manifold alignment methods, 2) it can align manifolds of very different complexities, performing a discriminative alignment preserving each manifold's inner structure, 3) it can define a domain-specific metric to cope with multimodal specificities, 4) it can align data spaces of different dimensionality, 5) it is robust to strong nonlinear feature deformations, and 6) it is closed-form invertible, which allows transfer across domains and data synthesis. To the authors' knowledge this is the first method addressing all these important issues at once. We also present a reduced-rank version of KEMA for computational

  5. Kernel Manifold Alignment for Domain Adaptation

    PubMed Central

    Tuia, Devis; Camps-Valls, Gustau

    2016-01-01

    The wealth of sensory data coming from different modalities has opened numerous opportunities for data analysis. The data are of increasing volume, complexity and dimensionality, thus calling for new methodological innovations towards multimodal data processing. However, multimodal architectures must rely on models able to adapt to changes in the data distribution. Differences in the density functions can be due to changes in acquisition conditions (pose, illumination), sensor characteristics (number of channels, resolution) or different views (e.g. street level vs. aerial views of the same building). We call these different acquisition modes domains, and refer to the adaptation problem as domain adaptation. In this paper, instead of adapting the trained models themselves, we alternatively focus on finding mappings of the data sources into a common, semantically meaningful, representation domain. This field of manifold alignment extends traditional techniques in statistics such as canonical correlation analysis (CCA) to deal with nonlinear adaptation and possibly non-corresponding data pairs between the domains. We introduce a kernel method for manifold alignment (KEMA) that can match an arbitrary number of data sources without needing corresponding pairs, just a few labeled examples in all domains. KEMA has interesting properties: 1) it generalizes other manifold alignment methods, 2) it can align manifolds of very different complexities, performing a discriminative alignment preserving each manifold's inner structure, 3) it can define a domain-specific metric to cope with multimodal specificities, 4) it can align data spaces of different dimensionality, 5) it is robust to strong nonlinear feature deformations, and 6) it is closed-form invertible, which allows transfer across domains and data synthesis. To the authors' knowledge this is the first method addressing all these important issues at once. We also present a reduced-rank version of KEMA for computational
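To illustrate the classical starting point that KEMA generalizes, here is a minimal linear CCA written from scratch: two "domains" of different dimensionality generated from a shared latent signal are aligned by projecting onto their top canonical pair. All data and names are illustrative, not from the paper:

```python
import numpy as np

def cca_first_pair(X1, X2):
    # classical linear CCA via SVD of the whitened cross-covariance;
    # KEMA generalizes this kind of alignment to kernels and manifolds
    X1 = X1 - X1.mean(0)
    X2 = X2 - X2.mean(0)
    C11 = X1.T @ X1 / len(X1)
    C22 = X2.T @ X2 / len(X2)
    C12 = X1.T @ X2 / len(X1)
    W1 = np.linalg.inv(np.linalg.cholesky(C11))   # whitening transforms
    W2 = np.linalg.inv(np.linalg.cholesky(C22))
    U, s, Vt = np.linalg.svd(W1 @ C12 @ W2.T)
    a = W1.T @ U[:, 0]   # canonical direction in domain 1
    b = W2.T @ Vt[0]     # canonical direction in domain 2
    return a, b, s[0]    # s[0] is the top canonical correlation

rng = np.random.default_rng(1)
z = rng.normal(size=(500, 2))                          # shared latent signal
X1 = z @ rng.normal(size=(2, 5)) + 0.05 * rng.normal(size=(500, 5))
X2 = z @ rng.normal(size=(2, 8)) + 0.05 * rng.normal(size=(500, 8))
a, b, rho = cca_first_pair(X1, X2)
print(rho)  # close to 1: the two "domains" share an aligned direction
```

Projecting each domain onto its canonical direction maps both into a common one-dimensional representation, the linear analogue of the common latent space KEMA constructs.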

  6. Lifting kernel-based sprite codec

    NASA Astrophysics Data System (ADS)

    Dasu, Aravind R.; Panchanathan, Sethuraman

    2000-12-01

    The International Standards Organization (ISO) has proposed a family of standards for compression of image and video sequences, including JPEG, MPEG-1 and MPEG-2. The latest MPEG-4 standard adds many new dimensions to the coding and manipulation of visual content. A video sequence usually contains a background object and many foreground objects. Portions of this background may not be visible in certain frames due to occlusion by the foreground objects or camera motion. MPEG-4 introduces the novel concepts of Video Object Planes (VOPs) and sprites. A VOP is a visual representation of real-world objects with shapes that need not be rectangular. A sprite is a large image composed of pixels belonging to a video object visible throughout a video segment. Since a sprite contains all parts of the background that were visible at least once, it can be used for direct reconstruction of the background VOP. Sprite reconstruction depends on the mode in which the sprite is transmitted. In the Static sprite mode, the entire sprite is decoded as an Intra VOP before decoding the individual VOPs. Since sprites consist of the information needed to display multiple frames of a video sequence, they are typically much larger than a single frame of video, so a static sprite can be considered a large static image. In this paper, a novel solution to the problem of spatial scalability is proposed, where the sprite is encoded with the Discrete Wavelet Transform (DWT). A lifting kernel method of DWT implementation has been used for encoding and decoding sprites. Modifying the existing lifting scheme while keeping it shape-adaptive reduces complexity. The proposed scheme has the advantages of (1) avoiding the need for any extensions to image or tile border pixels, and hence being superior to the DCT-based low-latency scheme (used in the current MPEG-4 verification model), and (2) mapping the in-place computed wavelet coefficients into a zero

  7. Expansible quantum secret sharing network

    NASA Astrophysics Data System (ADS)

    Sun, Ying; Xu, Sheng-Wei; Chen, Xiu-Bo; Niu, Xin-Xin; Yang, Yi-Xian

    2013-08-01

    In practical applications, member expansion is a usual demand as a secret sharing network develops. However, there is little consideration or discussion of network expansibility in the existing quantum secret sharing schemes. We propose an expansible quantum secret sharing scheme with relatively simple and economical quantum resources and show how to split and reconstruct the quantum secret among an expansible user group. Because member expansion requires no agent's assistance, the scheme helps prevent potential menaces of insider cheating. We also discuss the security of this scheme from three aspects.

  8. Semisupervised kernel marginal Fisher analysis for face recognition.

    PubMed

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it successfully avoids the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.

  9. Semisupervised Kernel Marginal Fisher Analysis for Face Recognition

    PubMed Central

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it successfully avoids the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm. PMID:24163638

  10. A method of smoothed particle hydrodynamics using spheroidal kernels

    NASA Technical Reports Server (NTRS)

    Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.

    1995-01-01

    We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
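The spheroidal-kernel idea can be sketched with a Gaussian kernel whose smoothing length along the deformation (here z) axis evolves independently of the perpendicular one. This is only an illustration of the geometry, since production SPH codes typically use compactly supported spline kernels rather than Gaussians:

```python
import numpy as np

def spheroidal_kernel(dx, h_perp, h_axis):
    # normalized Gaussian smoothing kernel with an independent smoothing
    # length along the z (deformation) axis
    h = np.array([h_perp, h_perp, h_axis])
    q2 = ((dx / h) ** 2).sum()
    return np.exp(-q2) / (np.pi ** 1.5 * h.prod())

# with h_perp == h_axis the kernel reduces to the usual spherical one
iso_x = spheroidal_kernel(np.array([0.1, 0.0, 0.0]), 1.0, 1.0)
iso_z = spheroidal_kernel(np.array([0.0, 0.0, 0.1]), 1.0, 1.0)
# shrinking h_axis sharpens resolution along the collapse direction
aniso = spheroidal_kernel(np.array([0.0, 0.0, 0.1]), 1.0, 0.2)
print(iso_x, iso_z, aniso)
```

Letting h_axis shrink as the system collapses keeps neighbors within a few smoothing lengths along the flattened direction, which is exactly the resolution gain the abstract describes.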

  11. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such assumptions might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which negatively influences the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.

  12. Weighted Feature Gaussian Kernel SVM for Emotion Recognition

    PubMed Central

    Wei, Wei; Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method that utilizes the subregion recognition rate to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight. Then, we obtain a weighted feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted feature Gaussian kernel function achieves a good recognition rate. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods. PMID:27807443
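A hedged sketch of the central ingredient: a Gaussian kernel with per-feature weights, here standing in for the paper's subregion recognition rates (the actual weighting scheme and features are the paper's; the values below are made up):

```python
import numpy as np

def weighted_gaussian_kernel(x, y, w, gamma=1.0):
    # per-feature weights (e.g. subregion recognition rates) emphasize
    # discriminative dimensions inside the usual RBF evaluation
    return np.exp(-gamma * (w * (x - y) ** 2).sum())

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 0.0, 3.0])
print(weighted_gaussian_kernel(x, y, w=np.array([1.0, 0.5, 1.0])))
# exp(-0.5 * 2**2) = exp(-2), about 0.135
```

With all weights equal to 1 the function reduces to the standard RBF kernel, so the weighting is a strict generalization.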

  13. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing

    PubMed Central

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such assumptions might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which negatively influences the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562

  14. Compression loading behaviour of sunflower seeds and kernels

    NASA Astrophysics Data System (ADS)

    Selvam, Thasaiya A.; Manikantan, Musuvadi R.; Chand, Tarsem; Sharma, Rajiv; Seerangurayar, Thirupathi

    2014-10-01

    The present study was carried out to investigate the compression loading behaviour of five Indian sunflower varieties (NIRMAL-196, NIRMAL-303, CO-2, KBSH-41, and PSH-996) at four different moisture levels (6-18% d.b.). The initial cracking force, mean rupture force, and rupture energy were measured as a function of moisture content. The initial cracking force decreased linearly with an increase in moisture content for all varieties, as did the mean rupture force. However, the rupture energy was found to increase linearly with moisture content for both seed and kernel. NIRMAL-196 and PSH-996 had the maximum and minimum values, respectively, of all the attributes studied for both seed and kernel. The values of all the studied attributes were higher for seed than for kernel in all varieties at all moisture levels. Moisture and variety had a significant effect on compression loading behaviour.

  15. Weighted Feature Gaussian Kernel SVM for Emotion Recognition.

    PubMed

    Wei, Wei; Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method that utilizes the subregion recognition rate to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight. Then, we obtain a weighted feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted feature Gaussian kernel function achieves a good recognition rate. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods.

  16. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods seek the single kernel with the highest cross-validated classification accuracy; they are time consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  17. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods seek the single kernel with the highest cross-validated classification accuracy; they are time consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  18. Single aflatoxin contaminated corn kernel analysis with fluorescence hyperspectral image

    NASA Astrophysics Data System (ADS)

    Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Ononye, Ambrose; Brown, Robert L.; Cleveland, Thomas E.

    2010-04-01

    Aflatoxins are toxic secondary metabolites of the fungi Aspergillus flavus and Aspergillus parasiticus, among others. Aflatoxin contaminated corn is toxic to domestic animals when ingested in feed and is a known carcinogen associated with liver and lung cancer in humans. Consequently, aflatoxin levels in food and feed are regulated by the Food and Drug Administration (FDA) in the US, allowing 20 ppb (parts per billion) limits in food and 100 ppb in feed for interstate commerce. Currently, aflatoxin detection and quantification methods are based on analytical tests including thin-layer chromatography (TLC) and high performance liquid chromatography (HPLC). These analytical tests require the destruction of samples and are costly and time consuming. Thus, the ability to detect aflatoxin in a rapid, nondestructive way is crucial to the grain industry, particularly to the corn industry. Hyperspectral imaging technology offers a non-invasive approach toward screening for food safety inspection and quality control based on spectral signatures. The focus of this paper is to classify aflatoxin contaminated single corn kernels using fluorescence hyperspectral imagery. Field inoculated corn kernels were used in the study. Contaminated and control kernels under long wavelength ultraviolet excitation were imaged using a visible near-infrared (VNIR) hyperspectral camera. The imaged kernels were chemically analyzed to provide reference information for image analysis. This paper describes a procedure to process corn kernels located in different images for statistical training and classification. Two classification algorithms, Maximum Likelihood and Binary Encoding, were used to classify each corn kernel into "control" or "contaminated" through pixel classification. The Binary Encoding approach had a slightly better performance, with accuracies of 87% and 88% when 20 ppb or 100 ppb was used as the classification threshold, respectively.

  19. Aflatoxin detection in whole corn kernels using hyperspectral methods

    NASA Astrophysics Data System (ADS)

    Casasent, David; Chen, Xue-Wen

    2004-03-01

    Hyperspectral (HS) data for the inspection of whole corn kernels for aflatoxin is considered. The high dimensionality of HS data requires feature extraction or selection for good classifier generalization. For fast and inexpensive data collection, only a few features (λ responses) can be used. These are obtained by feature selection from the full HS response. A new high dimensionality branch and bound (HDBB) feature selection algorithm is used; it is found to be optimum, fast and very efficient. Initial results indicate that HS data is very promising for aflatoxin detection in whole kernel corn.

  20. Characterizations of linear Volterra integral equations with nonnegative kernels

    NASA Astrophysics Data System (ADS)

    Naito, Toshiki; Shin, Jong Son; Murakami, Satoru; Ngoc, Pham Huu Anh

    2007-11-01

    We first introduce the notion of positive linear Volterra integral equations. Then, we offer a criterion for positive equations in terms of the resolvent. In particular, equations with nonnegative kernels are positive. Next, we obtain a variant of the Paley-Wiener theorem for equations of this class and its extension to perturbed equations. Furthermore, we get a Perron-Frobenius type theorem for linear Volterra integral equations with nonnegative kernels. Finally, we give a criterion for positivity of the initial function semigroup of linear Volterra integral equations and provide a necessary and sufficient condition for exponential stability of the semigroups.
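The positivity property can be illustrated numerically: discretize a second-kind equation x(t) = f(t) + ∫₀ᵗ k(t,s) x(s) ds with the trapezoidal rule and step forward in t. This sketch is my illustration, not the paper's method; for the nonnegative kernel k(t,s) = exp(-(t-s)) and forcing f = 1 used below, the exact solution is x(t) = 1 + t, and the computed solution stays nonnegative:

```python
import numpy as np

def solve_volterra(f, k, T=1.0, n=200):
    # second-kind Volterra equation x(t) = f(t) + int_0^t k(t,s) x(s) ds,
    # discretized with the trapezoidal rule and solved by forward stepping
    t = np.linspace(0.0, T, n + 1)
    h = T / n
    x = np.empty(n + 1)
    x[0] = f(t[0])
    for i in range(1, n + 1):
        s = h * (0.5 * k(t[i], t[0]) * x[0]
                 + sum(k(t[i], t[j]) * x[j] for j in range(1, i)))
        # the implicit half-weight of the t_i node is moved to the left side
        x[i] = (f(t[i]) + s) / (1.0 - 0.5 * h * k(t[i], t[i]))
    return t, x

t, x = solve_volterra(lambda t: 1.0, lambda t, s: np.exp(-(t - s)))
print(x[0], x[-1])  # approximately 1.0 and 2.0, matching x(t) = 1 + t
```

Since every quadrature weight and kernel value is nonnegative, each step adds a nonnegative correction to f, mirroring the criterion that equations with nonnegative kernels are positive.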

  1. Heat pipe array heat exchanger

    DOEpatents

    Reimann, Robert C.

    1987-08-25

    A heat pipe arrangement for exchanging heat between two different temperature fluids. The heat pipe arrangement is in a counterflow relationship to increase the efficiency of the coupling of the heat from a heat source to a heat sink.

  2. Global Monte Carlo Simulation with High Order Polynomial Expansions

    SciTech Connect

    William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin

    2007-12-13

    The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as “local” piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi’s method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence.
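The coefficient estimation at the heart of the FET can be sketched in a few lines: for an expansion in Legendre polynomials on [-1, 1], the n-th coefficient of the sampled density is the normalized sample mean of P_n over the tally events. The data below are illustrative draws, not output from a transport code:

```python
import numpy as np

rng = np.random.default_rng(2)
# "collision sites" drawn from a symmetric, non-flat density on [-1, 1]
sites = 2.0 * rng.beta(2.0, 2.0, size=100_000) - 1.0

# FET tally: the coefficient of Legendre mode P_n is estimated as the
# sample mean of P_n(x) over the random walk, times the norm (2n+1)/2
def fet_coefficients(xs, nmax):
    return np.array([
        (2 * n + 1) / 2.0 * np.polynomial.legendre.Legendre.basis(n)(xs).mean()
        for n in range(nmax + 1)
    ])

c = fet_coefficients(sites, 4)
print(c)
# c[0] is exactly 1/2: the flat mode, i.e. the conventional histogram tally
```

Each coefficient is accumulated exactly like an ordinary tally, which is why the FET drops into an existing random walk with minimal changes.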

  3. Research on classifying performance of SVMs with basic kernel in HCCR

    NASA Astrophysics Data System (ADS)

    Sun, Limin; Gai, Zhaoxin

    2006-02-01

    Handwritten Chinese character recognition (HCCR) is still difficult to put into practical use. An efficient classifier is very important for increasing the offline HCCR rate. SVMs offer a theoretically well-founded approach to automated learning of pattern classifiers for mining labeled data sets. As we know, the performance of an SVM largely depends on the kernel function. In this paper, we investigated the classification performance of SVMs with various common kernels in HCCR. We found that, except for the sigmoid kernel, SVMs with polynomial, linear, RBF and multiquadric kernels are all efficient classifiers for HCCR; their behavior differs only slightly, and on the whole the SVM with the multiquadric kernel is the best.

  4. A multiple-kernel fuzzy C-means algorithm for image segmentation.

    PubMed

    Chen, Long; Chen, C L Philip; Lu, Mingzhu

    2011-10-01

    In this paper, a generalized multiple-kernel fuzzy C-means (MKFCM) methodology is introduced as a framework for image-segmentation problems. In this framework, in addition to using composite kernels in kernel FCM (KFCM), a linear combination of multiple kernels is proposed, and the updating rules for the linear coefficients of the composite kernel are derived as well. The proposed MKFCM algorithm provides a new, flexible vehicle to fuse different pixel information in image-segmentation problems. That is, different pixel information represented by different kernels is combined in the kernel space to produce a new kernel. It is shown that two successful enhanced KFCM-based image-segmentation algorithms are special cases of MKFCM. Several new segmentation algorithms are also derived from the proposed MKFCM framework. Simulations on the segmentation of synthetic and medical images demonstrate the flexibility and advantages of MKFCM-based approaches.

  5. Operation of Mammoth Pacific's MP1-100 turbine with metastable, supersaturated expansions

    SciTech Connect

    Mines, G.L.

    1996-01-01

    INEL's Heat Cycle Research project continues to develop a technology base for increasing the use of moderate-temperature hydrothermal resources to generate electrical power. One concept is the use of metastable, supersaturated turbine expansions. These expansions support a supersaturated working fluid vapor; at equilibrium conditions, liquid condensate would be present during the turbine expansion process. Studies suggest that if these expansions do not adversely affect the turbine performance, up to 8-10% more power could be produced from a given geothermal fluid. Determining the impact of these expansions on turbine performance is the focus of the project investigations being reported.

  6. Optical imaging. Expansion microscopy.

    PubMed

    Chen, Fei; Tillberg, Paul W; Boyden, Edward S

    2015-01-30

    In optical microscopy, fine structural details are resolved by using refraction to magnify images of a specimen. We discovered that by synthesizing a swellable polymer network within a specimen, it can be physically expanded, resulting in physical magnification. By covalently anchoring specific labels located within the specimen directly to the polymer network, labels spaced closer than the optical diffraction limit can be isotropically separated and optically resolved, a process we call expansion microscopy (ExM). Thus, this process can be used to perform scalable superresolution microscopy with diffraction-limited microscopes. We demonstrate ExM with apparent ~70-nanometer lateral resolution in both cultured cells and brain tissue, performing three-color superresolution imaging of ~10^7 cubic micrometers of the mouse hippocampus with a conventional confocal microscope.

  7. Cryogenic expansion machine

    DOEpatents

    Pallaver, Carl B.; Morgan, Michael W.

    1978-01-01

    A cryogenic expansion engine includes intake and exhaust poppet valves each controlled by a cam having adjustable dwell, the valve seats for the valves being threaded inserts in the valve block. Each cam includes a cam base and a ring-shaped cam insert disposed at an exterior corner of the cam base, the cam base and cam insert being generally circular but including an enlarged cam dwell, the circumferential configuration of the cam base and cam dwell being identical, the cam insert being rotatable with respect to the cam base. CONTRACTUAL ORIGIN OF THE INVENTION: The invention described herein was made in the course of, or under, a contract with the UNITED STATES ENERGY RESEARCH AND DEVELOPMENT ADMINISTRATION.

  8. Fast reactor power plant design having heat pipe heat exchanger

    DOEpatents

    Huebotter, P.R.; McLennan, G.A.

    1984-08-30

    The invention relates to a pool-type fission reactor power plant design having a reactor vessel containing a primary coolant (such as liquid sodium), and a steam expansion device powered by a pressurized water/steam coolant system. Heat pipe means are disposed between the primary and water coolants to complete the heat transfer therebetween. The heat pipes are vertically oriented, penetrating the reactor deck and being directly submerged in the primary coolant. A U-tube or line passes through each heat pipe, extended over most of the length of the heat pipe and having its walls spaced from but closely proximate to and generally facing the surrounding walls of the heat pipe. The water/steam coolant loop includes each U-tube and the steam expansion device. A heat transfer medium (such as mercury) fills each of the heat pipes. The thermal energy from the primary coolant is transferred to the water coolant by isothermal evaporation-condensation of the heat transfer medium between the heat pipe and U-tube walls, the heat transfer medium moving within the heat pipe primarily transversely between these walls.

  9. Fast reactor power plant design having heat pipe heat exchanger

    DOEpatents

    Huebotter, Paul R.; McLennan, George A.

    1985-01-01

    The invention relates to a pool-type fission reactor power plant design having a reactor vessel containing a primary coolant (such as liquid sodium), and a steam expansion device powered by a pressurized water/steam coolant system. Heat pipe means are disposed between the primary and water coolants to complete the heat transfer therebetween. The heat pipes are vertically oriented, penetrating the reactor deck and being directly submerged in the primary coolant. A U-tube or line passes through each heat pipe, extended over most of the length of the heat pipe and having its walls spaced from but closely proximate to and generally facing the surrounding walls of the heat pipe. The water/steam coolant loop includes each U-tube and the steam expansion device. A heat transfer medium (such as mercury) fills each of the heat pipes. The thermal energy from the primary coolant is transferred to the water coolant by isothermal evaporation-condensation of the heat transfer medium between the heat pipe and U-tube walls, the heat transfer medium moving within the heat pipe primarily transversely between these walls.

  10. High flux expansion divertor studies in NSTX

    SciTech Connect

    Soukhanovskii, V A; Maingi, R; Bell, R E; Gates, D A; Kaita, R; Kugel, H W; LeBlanc, B P; Maqueda, R; Menard, J E; Mueller, D; Paul, S F; Raman, R; Roquemore, A L

    2009-06-29

    Projections for high-performance H-mode scenarios in spherical torus (ST)-based devices assume low electron collisionality for increased efficiency of the neutral beam current drive. At lower collisionality (lower density), the mitigation techniques based on induced divertor volumetric power and momentum losses may not be capable of reducing heat and material erosion to acceptable levels in a compact ST divertor. Divertor geometry can also be used to reduce high peak heat and particle fluxes by flaring a scrape-off layer (SOL) flux tube at the divertor plate, and by optimizing the angle at which the flux tube intersects the divertor plate, or reduce heat flow to the divertor by increasing the length of the flux tube. The recently proposed advanced divertor concepts [1, 2] take advantage of these geometry effects. In a high triangularity ST plasma configuration, the magnetic flux expansion at the divertor strike point (SP) is inherently high, leading to a reduction of heat and particle fluxes and a facilitated access to the outer SP detachment, as has been demonstrated recently in NSTX [3]. The natural synergy of the highly-shaped high-performance ST plasmas with beneficial divertor properties motivated a further systematic study of the high flux expansion divertor. The National Spherical Torus Experiment (NSTX) is a mid-sized device with the aspect ratio A = 1.3-1.5 [4]. In NSTX, the graphite tile divertor has an open horizontal plate geometry. The divertor magnetic configuration geometry was systematically changed in an experiment by either (1) changing the distance between the lower divertor X-point and the divertor plate (X-point height h_X), or by (2) keeping the X-point height constant and increasing the outer SP radius. An initial analysis of the former experiment is presented below. Since in the divertor the poloidal field B_θ strength is proportional to h_X, the X-point height variation changed the divertor plasma wetted area due to

  11. Pyrolysis and combustion of oil palm stone and palm kernel cake in fixed-bed reactors.

    PubMed

    Razuan, R; Chen, Q; Zhang, X; Sharifi, V; Swithenbank, J

    2010-06-01

    The main objective of this research was to investigate the main characteristics of the thermo-chemical conversion of oil palm stone (OPS) and palm kernel cake (PKC). A series of combustion and pyrolysis tests were carried out in two fixed-bed reactors. The effects of heating rate at the temperature of 700 degrees C on the yields and properties of the pyrolysis products were investigated. The results from the combustion experiments showed that the burning rates increased with an increase in the air flow rate. In addition, the FLIC code was used to simulate the combustion of the oil palm stone to investigate the effect of primary air flow on the combustion process. The FLIC modelling results were in good agreement with the experimental data in terms of predicting the temperature profiles along the bed height and the composition of the flue gases.

  12. Chemical heat pump

    DOEpatents

    Greiner, Leonard

    1980-01-01

    A chemical heat pump system is disclosed for use in heating and cooling structures such as residences or commercial buildings. The system is particularly adapted to utilizing solar energy, but also increases the efficiency of other forms of thermal energy when solar energy is not available. When solar energy is not available for relatively short periods of time, the heat storage capacity of the chemical heat pump is utilized to heat the structure, as during nighttime hours. The design also permits home heating from solar energy when the sun is shining. The entire system may be conveniently located on the rooftop. In order to facilitate installation on existing structures, the absorber and vaporizer portions of the system may each be designed as flat, thin-walled, pan-like vessels which materially increase the surface area available for heat transfer. In addition, this thin, flat configuration of the absorber and its thin-walled (and therefore relatively flexible) construction permits substantial expansion and contraction of the absorber material during vaporization and absorption without generating voids which would interfere with heat transfer. The heat pump part of the system heats or cools a house or other structure through a combination of evaporation and absorption or, conversely, condensation and desorption, in a pair of containers. A set of automatic controls changes the system for operation during winter and summer months and for daytime and nighttime operation to satisfactorily heat and cool a house during an entire year. The absorber chamber is subjected to solar heating during regeneration cycles and is covered by one or more layers of glass or other transparent material. During the day, home air used for heating is passed at appropriate flow rates between the absorber container and the first transparent cover layer in heat transfer relationship, in a manner that greatly reduces eddies and resultant heat loss from the absorbent surface to the ambient atmosphere.

  13. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
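    The cell-center versus cell-integration distinction can be sketched in one dimension (a simplified illustration; the paper's kernels are two-dimensional and circular). For a kernel whose σ is small relative to a cell width, the two discretizations diverge sharply, mirroring the paper's finding that small kernels need correction:

```python
import math

def gaussian_cdf(x, sigma):
    """CDF of a zero-mean Gaussian, via the error function."""
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def cell_center_kernel(sigma, radius):
    """Sample the Gaussian density at each cell center (the naive method)."""
    pdf = lambda x: math.exp(-x**2 / (2*sigma**2)) / (sigma * math.sqrt(2*math.pi))
    w = [pdf(i) for i in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]

def cell_integrated_kernel(sigma, radius):
    """Integrate the Gaussian density over each unit cell [i-0.5, i+0.5]."""
    w = [gaussian_cdf(i + 0.5, sigma) - gaussian_cdf(i - 0.5, sigma)
         for i in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]

for sigma in (0.2, 2.0):
    cc = cell_center_kernel(sigma, 5)
    ci = cell_integrated_kernel(sigma, 5)
    err = max(abs(a - b) for a, b in zip(cc, ci))
    print(f"sigma={sigma}: max per-cell difference = {err:.4f}")
```

For σ well above a cell width the two methods nearly agree; below it, the cell-center sample badly misrepresents the mass in each cell.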

  14. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images.
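    The kind of unified kernel-based embedding framework described can be sketched with a minimal kernel PCA in which the kernel function is swappable (the kernels and data below are illustrative, not the paper's implementation):

```python
import numpy as np

def kernel_pca(X, kernel_fn, n_components=2):
    """Minimal kernel PCA: embed rows of X using an arbitrary kernel function."""
    n = X.shape[0]
    K = np.array([[kernel_fn(x, y) for y in X] for x in X])
    # Center the kernel matrix in feature space.
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)          # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Coordinates of the training points on the leading kernel components.
    return vecs * np.sqrt(np.maximum(vals, 0.0))

gauss = lambda x, y, s=1.0: np.exp(-np.sum((x - y)**2) / (2 * s**2))
poly  = lambda x, y, d=3: (1.0 + x @ y) ** d

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
for name, k in (("gaussian", gauss), ("polynomial", poly)):
    Z = kernel_pca(X, k)
    print(name, Z.shape)
```

Because the kernel enters only through the matrix K, any candidate kernel from a pool can be plugged in and its embedding scored, which is the selection problem the paper addresses.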

  15. Genome-wide Association Analysis of Kernel Weight in Hard Winter Wheat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Wheat kernel weight is an important and heritable component of wheat grain yield and a key predictor of flour extraction. Genome-wide association analysis was conducted to identify genomic regions associated with kernel weight and kernel weight environmental response in 8 trials of 299 hard winter ...

  16. Indicator Expansion with Analysis Pipeline

    DTIC Science & Technology

    2015-01-13

    2014 Carnegie Mellon University. Indicator Expansion with Analysis Pipeline, Dan Ruef, 1/13/15. Carnegie Mellon®, CERT® and FloCon® are registered marks of Carnegie Mellon University. DM-0002067. Definition: "Indicator expansion is a process of using one or

  17. An Adaptive Genetic Association Test Using Double Kernel Machines.

    PubMed

    Zhan, Xiang; Epstein, Michael P; Ghosh, Debashis

    2015-10-01

    Recently, gene set-based approaches have become very popular in gene expression profiling studies for assessing how genetic variants are related to disease outcomes. Since most genes are not differentially expressed, existing pathway tests considering all genes within a pathway suffer from considerable noise and power loss. Moreover, for a differentially expressed pathway, it is of interest to select important genes that drive the effect of the pathway. In this article, we propose an adaptive association test using double kernel machines (DKM), which can both select important genes within the pathway as well as test for the overall genetic pathway effect. This DKM procedure first uses the garrote kernel machines (GKM) test for the purposes of subset selection and then the least squares kernel machine (LSKM) test for testing the effect of the subset of genes. An appealing feature of the kernel machine framework is that it can provide a flexible and unified method for multi-dimensional modeling of the genetic pathway effect allowing for both parametric and nonparametric components. This DKM approach is illustrated with application to simulated data as well as to data from a neuroimaging genetics study.

  18. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    ERIC Educational Resources Information Center

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…
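    A minimal sketch of the PRE computation, assuming the standard kernel-equating definition in which PRE(p) compares the p-th moments of the equated score distribution with those of the reference form (the score data below are hypothetical):

```python
def pth_moment(scores, probs, p):
    """p-th raw moment of a discrete score distribution."""
    return sum(prob * s**p for s, prob in zip(scores, probs))

def percent_relative_error(eq_scores, x_probs, y_scores, y_probs, p):
    """PRE(p) = 100 * (mu_p(equated X) - mu_p(Y)) / mu_p(Y)."""
    mu_eq = pth_moment(eq_scores, x_probs, p)
    mu_y = pth_moment(y_scores, y_probs, p)
    return 100.0 * (mu_eq - mu_y) / mu_y

# Hypothetical example: equated scores from form X vs. reference form Y.
eq_x = [0.4, 1.3, 2.1, 3.0]     # equated score values
x_p  = [0.2, 0.3, 0.3, 0.2]     # score probabilities on form X
y    = [0, 1, 2, 3]             # reference form Y score values
y_p  = [0.2, 0.3, 0.3, 0.2]     # score probabilities on form Y
for p in (1, 2, 3):
    print(f"PRE({p}) = {percent_relative_error(eq_x, x_p, y, y_p, p):+.2f}%")
```

A PRE near zero for the first several moments indicates the equated distribution closely matches the reference distribution.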

  19. Predicting disease trait with genomic data: a composite kernel approach.

    PubMed

    Yang, Haitao; Li, Shaoyu; Cao, Hongyan; Zhang, Chichen; Cui, Yuehua

    2016-06-02

    With the advancement of biotechniques, an ever-growing amount of genomic data is being generated. Predicting a disease trait based on these data offers a cost-effective and time-efficient way for early disease screening. Here we proposed a composite kernel partial least squares (CKPLS) regression model for quantitative disease trait prediction focusing on genomic data. It can efficiently capture nonlinear relationships among features compared with linear learning algorithms such as the least absolute shrinkage and selection operator (LASSO) or ridge regression. We proposed to optimize the kernel parameters and kernel weights with the genetic algorithm (GA). In addition to improved performance for parameter optimization, the proposed GA-CKPLS approach also has better learning capacity and generalization ability compared with the single kernel-based KPLS method as well as other nonlinear prediction models such as support vector regression. Extensive simulation studies demonstrated that GA-CKPLS had better prediction performance than its counterparts under different scenarios. The utility of the method was further demonstrated through two case studies. Our method provides an efficient quantitative platform for disease trait prediction based on increasing volumes of omics data.
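    The core idea of a composite kernel, a weighted combination of base kernels whose weights a GA would tune, can be sketched as follows. The bandwidths and weights below are illustrative, not the paper's settings; the key property is that any convex combination of valid kernels is itself a valid (positive semidefinite) kernel:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def composite_kernel(X, Y, sigmas, weights):
    """Convex combination of Gaussian kernels with the given bandwidths."""
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()      # normalize to a convex combination
    return sum(w * gaussian_kernel(X, Y, s) for w, s in zip(weights, sigmas))

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 4))
K = composite_kernel(X, X, sigmas=[0.5, 1.0, 2.0], weights=[1, 2, 1])
print("min eigenvalue:", np.linalg.eigvalsh(K).min())  # PSD: no negative eigenvalues
```

In the GA-CKPLS setting, the genetic algorithm would search over the sigmas and weights, scoring each candidate K by the resulting KPLS prediction error.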

  20. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  1. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  2. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  3. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  4. Notes on a storage manager for the Clouds kernel

    NASA Technical Reports Server (NTRS)

    Pitts, David V.; Spafford, Eugene H.

    1986-01-01

    The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.

  5. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    DTIC Science & Technology

    2006-01-01

    successfully used by the machine learning community for pattern recognition and image denoising [14]. A Gaussian kernel was used by Cremers et al. [8] for ... matrix M, where φ_i ∈ R^{Nd}. Using Singular Value Decomposition (SVD), the covariance matrix (1/n)MM^T is decomposed as UΣU^T = (1/n)MM^T (1), where U is a
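    The covariance decomposition referenced in the excerpt can be checked numerically; the shapes below are chosen arbitrarily for illustration (columns of M standing in for shape vectors φ_i):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, n = 8, 3, 5                # grid size, dimension, number of shapes (illustrative)
M = rng.normal(size=(N * d, n))  # columns are (centered) shape vectors phi_i

C = (M @ M.T) / n                # covariance matrix (1/n) M M^T
U, S, _ = np.linalg.svd(C)       # C is symmetric PSD, so SVD yields C = U diag(S) U^T
print("reconstruction error:", np.abs(C - U @ np.diag(S) @ U.T).max())
```

Because C is symmetric positive semidefinite, its left and right singular vectors coincide for nonzero singular values, which is why the single orthogonal matrix U suffices in the decomposition.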

  6. Classification of Microarray Data Using Kernel Fuzzy Inference System.

    PubMed

    Kumar, Mukesh; Kumar Rath, Santanu

    2014-01-01

    The DNA microarray classification technique has gained popularity in both research and practice. In real data analysis, such as microarray data, the dataset contains a huge number of insignificant and irrelevant features that tend to obscure useful information. Ideally, the selected features should have high relevance and high significance for determining the classification of samples into their respective classes. In this paper, the kernel fuzzy inference system (K-FIS) algorithm is applied to classify microarray data (leukemia) using the t-test as a feature selection method. Kernel functions are used to map original data points into a higher-dimensional (possibly infinite-dimensional) feature space defined by a (usually nonlinear) function ϕ through a mathematical process called the kernel trick. This paper also presents a comparative study of classification using K-FIS along with a support vector machine (SVM) for different sets of features (genes). Performance parameters available in the literature such as precision, recall, specificity, F-measure, ROC curve, and accuracy are considered to analyze the efficiency of the classification model. From the proposed approach, it is apparent that the K-FIS model obtains results comparable to those of the SVM model. This is an indication that the proposed approach relies on the kernel function.
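    The kernel trick mentioned above can be made concrete with a degree-2 polynomial kernel, whose feature map ϕ is small enough to write out explicitly (a textbook illustration, not the paper's kernel):

```python
import math

def poly_kernel(x, y):
    """Degree-2 homogeneous polynomial kernel k(x, y) = (x . y)^2."""
    return (x[0]*y[0] + x[1]*y[1]) ** 2

def feature_map(x):
    """Explicit map phi: R^2 -> R^3 with k(x, y) = <phi(x), phi(y)>."""
    return (x[0]**2, math.sqrt(2)*x[0]*x[1], x[1]**2)

x, y = (1.0, 2.0), (3.0, -1.0)
lhs = poly_kernel(x, y)
rhs = sum(a*b for a, b in zip(feature_map(x), feature_map(y)))
print(lhs, rhs)  # equal up to floating-point rounding
```

The point of the trick is the left-hand side: the kernel evaluates the inner product in the higher-dimensional space without ever constructing ϕ, which is what makes infinite-dimensional feature spaces (e.g. the Gaussian kernel's) usable.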

  7. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 7 2013-01-01 2013-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND...

  8. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND...

  9. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND...

  10. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND...

  11. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND...

  12. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND...

  13. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND...

  14. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 7 2013-01-01 2013-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND...

  15. Online multiple kernel similarity learning for visual search.

    PubMed

    Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

    2014-03-01

    Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in content-based image retrieval (CBIR). Despite their successes, most existing methods on distance metric learning are limited in two aspects. First, they usually assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multimodal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel similarity learning framework for learning kernel-based proximity functions which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel online multiple kernel similarity (OMKS) learning method which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets in which encouraging results show that OMKS outperforms the state-of-the-art techniques significantly.

  16. Online Multiple Kernel Similarity Learning for Visual Search.

    PubMed

    Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

    2013-08-13

    Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in Content-Based Image Retrieval (CBIR). Despite their popularity and success, most existing methods on distance metric learning are limited in two aspects. First, they typically assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multi-modal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel ranking framework for learning kernel-based proximity functions, which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel Online Multiple Kernel Ranking (OMKR) method, which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets, in which encouraging results show that OMKR outperforms the state-of-the-art techniques significantly.

  17. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2009-02-20

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
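    For reference, the Peano Kernel Theorem in its standard form (this statement is added here for context and is not quoted from the report) reads: for a linear functional $L$ on $C^{n+1}[a,b]$ that annihilates every polynomial of degree at most $n$,

```latex
L(f) = \int_a^b K(t)\, f^{(n+1)}(t)\, dt,
\qquad
K(t) = \frac{1}{n!}\, L_x\!\left[(x - t)_+^{\,n}\right].
```

A bound on $|K|$ together with a bound on the relevant derivative of the signal then yields the kind of simple filtering-error estimates the abstract describes.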

  18. Microwave moisture meter for in-shell peanut kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A microwave moisture meter built with off-the-shelf components was developed, calibrated and tested in the laboratory and in the field for nondestructive and instantaneous in-shell peanut kernel moisture content determination from dielectric measurements on unshelled peanut pod samples. The meter ...

  19. Music emotion detection using hierarchical sparse kernel machines.

    PubMed

    Chin, Yu-Hao; Lin, Chang-Hong; Siahaan, Ernestasia; Wang, Jia-Ching

    2014-01-01

    For music emotion detection, this paper presents a music emotion verification system based on hierarchical sparse kernel machines. With the proposed system, we intend to verify whether a music clip possesses the happiness emotion. There are two levels in the hierarchical sparse kernel machines. In the first level, a set of acoustical features are extracted, and principal component analysis (PCA) is implemented to reduce the dimension. The acoustical features are utilized to generate the first-level decision vector, which is a vector with each element being a significant value of an emotion. The significant values of eight main emotional classes are utilized in this paper. To calculate the significant value of an emotion, we construct its 2-class SVM with the calm emotion as the global (non-target) side of the SVM. The probability distributions of the adopted acoustical features are calculated and the probability product kernel is applied in the first-level SVMs to obtain the first-level decision vector feature. In the second level of the hierarchical system, we construct a 2-class relevance vector machine (RVM) with happiness as the target side and other emotions as the background side of the RVM. The first-level decision vector is used as the feature with a conventional radial basis function kernel. The happiness verification threshold is built on the probability value. In the experimental results, the detection error tradeoff (DET) curve shows that the proposed system has a good performance in verifying whether a music clip reveals the happiness emotion.

  20. Matrix kernels for MEG and EEG source localization and imaging

    SciTech Connect

    Mosher, J.C.; Lewis, P.S.; Leahy, R.M.

    1994-12-31

    The most widely used model for electroencephalography (EEG) and magnetoencephalography (MEG) assumes a quasi-static approximation of Maxwell's equations and a piecewise homogeneous conductor model. Both models contain an incremental field element that linearly relates an incremental source element (current dipole) to the field or voltage at a distant point. The explicit form of the field element is dependent on the head modeling assumptions and sensor configuration. Proper characterization of this incremental element is crucial to the inverse problem. The field element can be partitioned into the product of a vector dependent on sensor characteristics and a matrix kernel dependent only on head modeling assumptions. We present here the matrix kernels for the general boundary element model (BEM) and for MEG spherical models. We show how these kernels are easily interchanged in a linear algebraic framework that includes sensor specifics such as orientation and gradiometer configuration. We then describe how this kernel is easily applied to "gain" or "transfer" matrices used in multiple dipole and source imaging models.
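    The partition described, a sensor-dependent vector times a head-model-dependent matrix kernel times a dipole moment, can be sketched with placeholder arrays (shapes only; these are not the paper's BEM or spherical-model formulas):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical quantities for one source location: a 3x3 matrix kernel K
# encoding only head-model assumptions, sensor characteristic vectors
# s1 and s2, and a current-dipole moment q.
K  = rng.normal(size=(3, 3))   # matrix kernel (head model only)
q  = rng.normal(size=3)        # current-dipole moment
s1 = rng.normal(size=3)        # one sensor's orientation/gradiometer weights
s2 = rng.normal(size=3)        # a different sensor's characteristics

# Each sensor's reading applies its own vector to the shared kernel-dipole
# product; swapping head models changes only K, never s1, s2, or q.
field = K @ q
b1, b2 = s1 @ field, s2 @ field
print("sensor readings:", b1, b2)
```

This separability is what lets the same kernel be reused across sensor configurations when assembling gain matrices.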

  1. Quality Characteristics of Soft Kernel Durum -- A New Cereal Crop

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Production of crops is in part limited by consumer demand and utilization. In this regard, world production of durum wheat (Triticum turgidum subsp. durum) is limited by its culinary uses. The leading constraint is its very hard kernels. Puroindolines, which act to soften the endosperm, are completel...

  2. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) FOOD FOR HUMAN CONSUMPTION (CONTINUED) INDIRECT FOOD ADDITIVES: PAPER AND PAPERBOARD COMPONENTS Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind..., manufacturing, packing, processing, preparing, treating, packaging, transporting, or holding food, subject...

  3. Heat pump system

    DOEpatents

    Swenson, Paul F.; Moore, Paul B.

    1977-01-01

    An air heating and cooling system for a building includes an expansion type refrigeration circuit and a vapor power circuit. The refrigeration circuit includes two heat exchangers, one of which is communicated with a source of indoor air from the building and the other of which is communicated with a source of air from outside the building. The vapor power circuit includes two heat exchangers, one of which is disposed in series air flow relationship with the indoor refrigeration circuit heat exchanger and the other of which is disposed in series air flow relationship with the outdoor refrigeration circuit heat exchanger. Fans powered by electricity generated by a vapor power circuit alternator circulate indoor air through the two indoor heat exchangers and circulate outside air through the two outdoor heat exchangers. The system is assembled as a single roof top unit, with a vapor power generator and turbine and compressor thermally insulated from the heat exchangers, and with the indoor heat exchangers thermally insulated from the outdoor heat exchangers.

  4. Heat pump system

    DOEpatents

    Swenson, Paul F.; Moore, Paul B.

    1983-01-01

    An air heating and cooling system for a building includes an expansion type refrigeration circuit and a vapor power circuit. The refrigeration circuit includes two heat exchangers, one of which is communicated with a source of indoor air from the building and the other of which is communicated with a source of air from outside the building. The vapor power circuit includes two heat exchangers, one of which is disposed in series air flow relationship with the indoor refrigeration circuit heat exchanger and the other of which is disposed in series air flow relationship with the outdoor refrigeration circuit heat exchanger. Fans powered by electricity generated by a vapor power circuit alternator circulate indoor air through the two indoor heat exchangers and circulate outside air through the two outdoor heat exchangers. The system is assembled as a single roof top unit, with a vapor power generator and turbine and compressor thermally insulated from the heat exchangers, and with the indoor heat exchangers thermally insulated from the outdoor heat exchangers.

  5. Heat pump system

    DOEpatents

    Swenson, Paul F.; Moore, Paul B.

    1983-06-21

    An air heating and cooling system for a building includes an expansion type refrigeration circuit and a vapor power circuit. The refrigeration circuit includes two heat exchangers, one of which is communicated with a source of indoor air from the building and the other of which is communicated with a source of air from outside the building. The vapor power circuit includes two heat exchangers, one of which is disposed in series air flow relationship with the indoor refrigeration circuit heat exchanger and the other of which is disposed in series air flow relationship with the outdoor refrigeration circuit heat exchanger. Fans powered by electricity generated by a vapor power circuit alternator circulate indoor air through the two indoor heat exchangers and circulate outside air through the two outdoor heat exchangers. The system is assembled as a single roof top unit, with a vapor power generator and turbine and compressor thermally insulated from the heat exchangers, and with the indoor heat exchangers thermally insulated from the outdoor heat exchangers.

  6. Concentric tubes cold-bonded by drawing and internal expansion

    NASA Technical Reports Server (NTRS)

    Hymes, L. C.; Stone, C. C.

    1971-01-01

    Metal tubes bonded together without heat application or brazing materials retain strength at elevated temperatures, and when subjected to constant or cyclic temperature gradients. Combination drawing and expansion process produces residual tangential tensile stress in the outer tube and tangential compressive stress in the inner tube.

  7. Burial Ground Expansion Hydrogeologic Characterization

    SciTech Connect

    Gaughan , T.F.

    1999-02-26

    Sirrine Environmental Consultants provided technical oversight of the installation of eighteen groundwater monitoring wells and six exploratory borings around the location of the Burial Ground Expansion.

  8. Relativistic Sommerfeld Low Temperature Expansion

    NASA Astrophysics Data System (ADS)

    Lourenço, O.; Dutra, M.; Delfino, A.; Sá Martins, J. S.

We derive a relativistic Sommerfeld expansion for thermodynamic quantities in many-body fermionic systems. The expansion is used to generate the equation of state of the Walecka model and its isotherms. We find that these results are in good agreement with numerical calculations in the low temperature regime, defined by T/xf ≪ 1, even when the expansion is truncated at its lowest order. Although the interesting region near the liquid-gas phase transition is excluded by this criterion, the expansion may still prove useful in the study of very cold nuclear matter systems, such as neutron stars.
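For orientation, the familiar nonrelativistic Sommerfeld expansion that this work generalizes has the standard leading-order form (written here in units with kB = 1; the relativistic version derived in the paper modifies the coefficients):

```latex
% Leading terms of the (nonrelativistic) Sommerfeld low-temperature expansion
\int_0^\infty \frac{H(\varepsilon)\,d\varepsilon}{e^{(\varepsilon-\mu)/T}+1}
  \approx \int_0^{\mu} H(\varepsilon)\,d\varepsilon
  + \frac{\pi^2}{6}\,T^2\,H'(\mu)
  + \frac{7\pi^4}{360}\,T^4\,H'''(\mu) + \cdots
```

Truncation at the first correction term is what "lowest order" refers to; the error criterion T/xf ≪ 1 controls the size of the successive terms.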

  9. Distortion-invariant kernel correlation filters for general object recognition

    NASA Astrophysics Data System (ADS)

    Patnaik, Rohit

General object recognition is a specific application of pattern recognition, in which an object in a background must be classified in the presence of several distortions such as aspect-view differences, scale differences, and depression-angle differences. Since the object can be present at different locations in the test input, a classification algorithm must be applied to all possible object locations in the test input. We emphasize one type of classifier, the distortion-invariant filter (DIF), for fast object recognition, since it can be applied to all possible object locations using a fast Fourier transform (FFT) correlation. We refer to distortion-invariant correlation filters simply as DIFs. DIFs all use a combination of training-set images that are representative of the expected distortions in the test set. In this dissertation, we consider a new approach that combines DIFs and the higher-order kernel technique; these form what we refer to as "kernel DIFs." Our objective is to develop higher-order classifiers that can be applied efficiently to all possible locations of the object in the test input. All prior kernel DIFs ignored the issue of efficient filter shifts. We detail which kernel DIF formulations are computationally realistic to use and why. We discuss the proper way to synthesize DIFs and kernel DIFs for the wide area search case (i.e., when a small filter must be applied to a much larger test input) and the preferable way to perform wide area search with these filters; this is new. We use computer-aided design (CAD) simulated infrared (IR) object imagery and real IR clutter imagery to obtain test results. Our test results on IR data show that a particular kernel DIF, the kernel SDF filter and its new "preprocessed" version, is promising, in terms of both test-set performance and on-line calculations, and is emphasized in this dissertation. We examine the recognition of object variants. We also quantify the effect of different constant
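The "apply at all shifts via FFT correlation" property that makes DIFs fast to evaluate can be sketched in one dimension; the signal, template, and sizes below are invented for illustration and are not from the dissertation:

```python
import numpy as np

# Evaluate a filter at every shift at once via FFT cross-correlation,
# the O(N log N) trick that lets a DIF scan all candidate object locations.
def fft_correlate(signal, template):
    n = len(signal)
    # Circular cross-correlation: ifft(F(signal) * conj(F(template))).
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(template, n))))

sig = np.zeros(128)
sig[40:44] = [1.0, 2.0, 2.0, 1.0]                 # "object" placed at offset 40
out = fft_correlate(sig, np.array([1.0, 2.0, 2.0, 1.0]))  # matched template
print(int(np.argmax(out)))                        # -> 40, the object location
```

A single FFT pair replaces an explicit loop over all 128 shifts; the same idea carries over to 2-D imagery with `np.fft.fft2`.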

  10. High frequency-heated air turbojet

    NASA Technical Reports Server (NTRS)

    Miron, J. H. D.

    1986-01-01

    A description is given of a method to heat air coming from a turbojet compressor to a temperature necessary to produce required expansion without requiring fuel. This is done by high frequency heating, which heats the walls corresponding to the combustion chamber in existing jets, by mounting high frequency coils in them. The current transformer and high frequency generator to be used are discussed.

  11. Colossal negative thermal expansion in reduced layered ruthenate.

    PubMed

    Takenaka, Koshi; Okamoto, Yoshihiko; Shinoda, Tsubasa; Katayama, Naoyuki; Sakai, Yuki

    2017-01-10

Large negative thermal expansion (NTE) has been discovered during the last decade in materials of various kinds, particularly materials associated with a magnetic, ferroelectric or charge-transfer phase transition. Such NTE materials have attracted considerable attention for use as thermal-expansion compensators. Here, we report the discovery of giant NTE for reduced layered ruthenate. The total volume change related to NTE reaches 6.7% in dilatometry, a value twice as large as the largest volume change reported to date. We observed a giant negative coefficient of linear thermal expansion α = −115 × 10⁻⁶ K⁻¹ over a 200 K interval below 345 K. This dilatometric NTE is too large to be attributable to the crystallographic unit-cell volume variation with temperature. The highly anisotropic thermal expansion of the crystal grains might underlie giant bulk NTE via microstructural effects consuming open spaces in the sintered body on heating.
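As a quick arithmetic cross-check of the two dilatometric figures quoted above (a sketch; the isotropic-expansion assumption is ours, not the paper's):

```python
# A linear-expansion coefficient alpha sustained over an interval dT gives a
# fractional volume change of roughly 3*alpha*dT, assuming isotropic expansion
# (an assumption of this sketch; the paper stresses the grains are anisotropic).
alpha = -115e-6                    # K^-1, coefficient of linear thermal expansion
dT = 200.0                        # K, the interval below 345 K
dV_over_V = 3 * alpha * dT
print(round(100 * dV_over_V, 1))  # -> -6.9 (%), close to the reported 6.7%
```

The near-agreement simply confirms that the quoted α and the quoted total volume change describe the same dilatometric effect.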

  12. Colossal negative thermal expansion in reduced layered ruthenate

    NASA Astrophysics Data System (ADS)

    Takenaka, Koshi; Okamoto, Yoshihiko; Shinoda, Tsubasa; Katayama, Naoyuki; Sakai, Yuki

    2017-01-01

Large negative thermal expansion (NTE) has been discovered during the last decade in materials of various kinds, particularly materials associated with a magnetic, ferroelectric or charge-transfer phase transition. Such NTE materials have attracted considerable attention for use as thermal-expansion compensators. Here, we report the discovery of giant NTE for reduced layered ruthenate. The total volume change related to NTE reaches 6.7% in dilatometry, a value twice as large as the largest volume change reported to date. We observed a giant negative coefficient of linear thermal expansion α = −115 × 10⁻⁶ K⁻¹ over a 200 K interval below 345 K. This dilatometric NTE is too large to be attributable to the crystallographic unit-cell volume variation with temperature. The highly anisotropic thermal expansion of the crystal grains might underlie giant bulk NTE via microstructural effects consuming open spaces in the sintered body on heating.

  13. Colossal negative thermal expansion in reduced layered ruthenate

    PubMed Central

    Takenaka, Koshi; Okamoto, Yoshihiko; Shinoda, Tsubasa; Katayama, Naoyuki; Sakai, Yuki

    2017-01-01

Large negative thermal expansion (NTE) has been discovered during the last decade in materials of various kinds, particularly materials associated with a magnetic, ferroelectric or charge-transfer phase transition. Such NTE materials have attracted considerable attention for use as thermal-expansion compensators. Here, we report the discovery of giant NTE for reduced layered ruthenate. The total volume change related to NTE reaches 6.7% in dilatometry, a value twice as large as the largest volume change reported to date. We observed a giant negative coefficient of linear thermal expansion α = −115 × 10⁻⁶ K⁻¹ over a 200 K interval below 345 K. This dilatometric NTE is too large to be attributable to the crystallographic unit-cell volume variation with temperature. The highly anisotropic thermal expansion of the crystal grains might underlie giant bulk NTE via microstructural effects consuming open spaces in the sintered body on heating. PMID:28071647

  14. Cryptococcal heat shock protein 70 homolog Ssa1 contributes to pulmonary expansion of Cryptococcus neoformans during the afferent phase of the immune response by promoting macrophage M2 polarization.

    PubMed

    Eastman, Alison J; He, Xiumiao; Qiu, Yafeng; Davis, Michael J; Vedula, Priya; Lyons, Daniel M; Park, Yoon-Dong; Hardison, Sarah E; Malachowski, Antoni N; Osterholzer, John J; Wormley, Floyd L; Williamson, Peter R; Olszewski, Michal A

    2015-06-15

    Numerous virulence factors expressed by Cryptococcus neoformans modulate host defenses by promoting nonprotective Th2-biased adaptive immune responses. Prior studies demonstrate that the heat shock protein 70 homolog, Ssa1, significantly contributes to serotype D C. neoformans virulence through the induction of laccase, a Th2-skewing and CNS tropic factor. In the present study, we sought to determine whether Ssa1 modulates host defenses in mice infected with a highly virulent serotype A strain of C. neoformans (H99). To investigate this, we assessed pulmonary fungal growth, CNS dissemination, and survival in mice infected with either H99, an SSA1-deleted H99 strain (Δssa1), and a complement strain with restored SSA1 expression (Δssa1::SSA1). Mice infected with the Δssa1 strain displayed substantial reductions in lung fungal burden during the innate phase (days 3 and 7) of the host response, whereas less pronounced reductions were observed during the adaptive phase (day 14) and mouse survival increased only by 5 d. Surprisingly, laccase activity assays revealed that Δssa1 was not laccase deficient, demonstrating that H99 does not require Ssa1 for laccase expression, which explains the CNS tropism we still observed in the Ssa1-deficient strain. Lastly, our immunophenotyping studies showed that Ssa1 directly promotes early M2 skewing of lung mononuclear phagocytes during the innate phase, but not the adaptive phase, of the immune response. We conclude that Ssa1's virulence mechanism in H99 is distinct and laccase-independent. Ssa1 directly interferes with early macrophage polarization, limiting innate control of C. neoformans, but ultimately has no effect on cryptococcal control by adaptive immunity.

  15. A thermodynamic analysis of the system LiAlSiO4-NaAlSiO4-Al2O3-SiO2-H2O based on new heat capacity, thermal expansion, and compressibility data for selected phases

    NASA Astrophysics Data System (ADS)

    Fasshauer, Detlef W.; Chatterjee, Niranjan D.; Cemic, Ladislav

Heat capacity, thermal expansion, and compressibility data have been obtained for a number of selected phases of the system NaAlSiO4-LiAlSiO4-Al2O3-SiO2-H2O. All Cp measurements have been executed by DSC in the temperature range 133-823 K. The data for T ≥ 223 K have been fitted to the function Cp(T) = a + cT⁻² + dT⁻⁰·⁵ + fT⁻³ (the fit parameters appear in tabular form in the original article). The thermal expansion data (up to 525 °C) have been fitted to the function V(T) = V₀[1 + v₁(T − T₀) + v₂(T − T₀)²], with T₀ = 298.15 K. The room-temperature compressibility data (up to 6 GPa) have been smoothed by the Murnaghan equation of state (the resulting parameters are likewise tabulated in the original article). These data, along with other phase property and reaction reversal data from the literature, have been simultaneously processed by the Bayes method to derive an internally consistent thermodynamic dataset (see Tables 6 and 7) for the NaAlSiO4-LiAlSiO4-Al2O3-SiO2-H2O quinary. Phase diagrams generated from this dataset are compatible with cookeite-, ephesite-, and paragonite-bearing assemblages observed in metabauxites and common metasediments. Phase diagrams obtained from the same database are also in agreement with the cookeite-free, petalite-, spodumene-, eucryptite-, and bikitaite-bearing assemblages known to develop in the subsolidus phase of recrystallization of lithium-bearing pegmatites. It is gratifying to note that the cookeite phase relations predicted earlier by Vidal and Goffé (1991) in the context of the system Li2O-Al2O3-SiO2-H2O agree with our results in a general way.
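Because the heat-capacity model is linear in its four coefficients, it can be fitted by ordinary least squares; a minimal sketch, with invented coefficient values (the paper's actual fitted parameters are given in tabular form in the original article):

```python
import numpy as np

# Least-squares fit of the model Cp(T) = a + c*T^-2 + d*T^-0.5 + f*T^-3.
def cp_design(T):
    """Design matrix of the four basis functions of the Cp model."""
    return np.column_stack([np.ones_like(T), T**-2.0, T**-0.5, T**-3.0])

def fit_cp(T, Cp):
    """Model is linear in (a, c, d, f); normalize columns for conditioning."""
    A = cp_design(T)
    scale = np.linalg.norm(A, axis=0)
    coeffs, *_ = np.linalg.lstsq(A / scale, Cp, rcond=None)
    return coeffs / scale                         # undo the column scaling

T = np.linspace(223.0, 823.0, 60)                 # K, the range of the DSC fit
true = np.array([300.0, -2.0e6, -1.5e3, 5.0e7])   # hypothetical a, c, d, f
Cp = cp_design(T) @ true                          # synthetic "measurements"
fitted = fit_cp(T, Cp)
print(np.allclose(cp_design(T) @ fitted, Cp))     # the fit reproduces the data
```

The column normalization matters in practice: the raw basis functions differ by many orders of magnitude over 223-823 K, which would otherwise make the normal equations badly conditioned.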

  16. A locally adaptive kernel regression method for facies delineation

    NASA Astrophysics Data System (ADS)

    Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.

    2015-12-01

Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data, to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run in a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method is demonstrated to significantly improve when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curves performance.
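A minimal sketch of plain Nadaraya-Watson kernel regression in 2-D, the non-adaptive baseline that the locally adaptive steering kernels refine; the synthetic data, bandwidth, and isotropic Gaussian kernel here are illustrative assumptions, not the paper's:

```python
import numpy as np

# Nadaraya-Watson kernel regression: a weighted average of sampled values,
# with weights from an isotropic Gaussian kernel on spatial distance.
# (The paper replaces this fixed kernel with locally adaptive steering kernels.)
def kernel_regress(x_train, y_train, x_query, bandwidth=1.0):
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * bandwidth**2))
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(50, 2))            # scattered "hard data" locations
y = (x[:, 0] > 5).astype(float)                 # two synthetic facies labels
est = kernel_regress(x, y, x, bandwidth=0.5)    # estimate at the data points
print(((est > 0.5) == (y > 0.5)).mean())        # fraction classified consistently
```

Thresholding the regressed value at 0.5 turns the estimator into a soft facies classifier; the steering-kernel extension additionally rotates and stretches each local kernel along the direction of highest spatial correlation.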

  17. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
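The fASKS linearity approximation amounts to replacing spatial-domain convolutions with products in Fourier space; a toy stationary-kernel version of that step (the impulse "projection," kernel, and sizes are invented for illustration, and real SKS kernels are thickness-adaptive and asymmetric):

```python
import numpy as np

# Stationary kernel superposition as a circular 2-D convolution via the FFT:
# O(N log N) instead of O(N^2) spatial-domain convolution.
def fft_convolve(projection, kernel):
    P = np.fft.fft2(projection)
    K = np.fft.fft2(kernel, s=projection.shape)   # zero-pad kernel to match
    return np.real(np.fft.ifft2(P * K))

proj = np.zeros((64, 64)); proj[32, 32] = 1.0     # impulse "primary" signal
k = np.zeros((64, 64)); k[:3, :3] = 1.0 / 9.0     # toy scatter point-spread kernel
scatter = fft_convolve(proj, k)
print(abs(scatter.sum() - 1.0) < 1e-9)            # convolution conserves total signal
```

The adaptive variants (ASKS) give up this purely multiplicative form because the kernel shape varies with local object thickness, which is why fASKS needs a linearity approximation to stay in Fourier space.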

  18. Choosing parameters of kernel subspace LDA for recognition of face images under pose and illumination variations.

    PubMed

    Huang, Jian; Yuen, Pong C; Chen, Wen-Sheng; Lai, Jian Huang

    2007-08-01

This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distribution by mapping the input space to a high-dimensional feature space. Some recognition algorithms such as the kernel principal components analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA have been developed in the last five years. The experimental results show that the kernel-based method is a good and feasible approach to tackle the pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which highly affects the generalization capability and stability of the kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which builds on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin maximization criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation on the generalization performance on pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with the existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.
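A sketch of a multi-parameter Gaussian RBF kernel of the kind ESBMM tunes, with one width per feature rather than a single global width; the function name, parameter names, and values are illustrative assumptions, not the paper's notation:

```python
import numpy as np

# Gaussian RBF kernel with per-feature widths: tuning the vector of widths
# (rather than one scalar) is what makes this a multiple-parameter problem.
def rbf_kernel(X, Y, gammas):
    """K[i, j] = exp(-sum_k gammas[k] * (X[i, k] - Y[j, k])**2)."""
    d2 = (gammas * (X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2)

X = np.random.default_rng(1).normal(size=(5, 3))       # 5 samples, 3 features
K = rbf_kernel(X, X, gammas=np.array([0.5, 1.0, 2.0])) # one width per feature
print(np.allclose(np.diag(K), 1.0))                    # unit self-similarity
```

Any kernel-LDA variant then works entirely through such a Gram matrix K, which is why the choice of the width parameters so strongly affects generalization.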

  19. Lattice harmonics expansion revisited

    NASA Astrophysics Data System (ADS)

    Kontrym-Sznajd, G.; Holas, A.

    2017-04-01

    The main subject of the work is to provide the most effective way of determining the expansion of some quantities into orthogonal polynomials, when these quantities are known only along some limited number of sampling directions. By comparing the commonly used Houston method with the method based on the orthogonality relation, some relationships, which define the applicability and correctness of these methods, are demonstrated. They are verified for various sets of sampling directions applicable for expanding quantities having the full symmetry of the Brillouin zone of cubic and non-cubic lattices. All results clearly show that the Houston method is always better than the orthogonality-relation one. For the cubic symmetry we present a few sets of special directions (SDs) showing how their construction and, next, a proper application depend on the choice of various sets of lattice harmonics. SDs are important mainly for experimentalists who want to reconstruct anisotropic quantities from their measurements, performed at a limited number of sampling directions.
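A toy one-dimensional analogue of the orthogonality-relation approach the paper compares against: recovering expansion coefficients of a sampled function in Legendre polynomials (the setup and coefficient values are ours for illustration; lattice harmonics generalize this to functions with full Brillouin-zone symmetry):

```python
import numpy as np

# Expand f(x) in Legendre polynomials using the orthogonality relation
#   c_l = (2l+1)/2 * integral_{-1}^{1} f(x) P_l(x) dx,
# with the integral done by Gauss-Legendre quadrature over sampled "directions".
x, w = np.polynomial.legendre.leggauss(8)        # sample points and weights
f = 1.0 + 2.0 * x + 0.5 * (3 * x**2 - 1) / 2     # f = P0 + 2*P1 + 0.5*P2

coeffs = []
for l in range(3):
    Pl = np.polynomial.legendre.Legendre.basis(l)(x)
    coeffs.append((2 * l + 1) / 2 * np.sum(w * f * Pl))
print(np.allclose(coeffs, [1.0, 2.0, 0.5]))      # known coefficients recovered
```

With exact quadrature the coefficients come out exactly; the paper's point is that for a *limited* set of sampling directions the quadrature is no longer exact, and the Houston method then behaves better than this orthogonality-relation recipe.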

  20. Singularity Expansion Method

    NASA Astrophysics Data System (ADS)

    Riggs, Lloyd Stephen

    In this work the transient currents induced on an arbitrary system of thin linear scatterers by an electromagnetic plane wave are solved by using an electric field integral equation (EFIE) formulation. The transient analysis is carried out using the singularity expansion method (SEM). The general analysis developed here is useful for assessing the vulnerability of military aircraft to a nuclear generated electromagnetic pulse (EMP). It is also useful as a modal synthesis tool in the analysis and design of frequency selective surfaces (FSS). SEM parameters for a variety of thin cylindrical geometries have been computed. Specifically, SEM poles, modes, coupling coefficients, and transient currents are given for the two and three element planar array. Poles and modes for planar arrays with a larger number (as many as eight) of identical equally spaced elements are also considered. SEM pole-mode results are given for identical parallel elements with ends located at the vertices of a regular N-agon. Pole-mode patterns are found for symmetric (and slightly perturbed) single junction N-arm elements and for the five junction Jerusalem cross. The Jerusalem cross element has been used extensively in FSS.