Heat kernel asymptotic expansions for the Heisenberg sub-Laplacian and the Grushin operator
Chang, Der-Chen; Li, Yutian
2015-01-01
The sub-Laplacian on the Heisenberg group and the Grushin operator are typical examples of sub-elliptic operators. Their heat kernels are both given in the form of Laplace-type integrals. By using Laplace's method, the method of stationary phase and the method of steepest descent, we derive the small-time asymptotic expansions for these heat kernels, which are related to the geodesic structure of the induced geometries. PMID:25792966
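The small-time expansions above rest on Laplace's method for Laplace-type integrals. As a minimal numerical illustration (a generic sketch, not the authors' computation), the leading-order Laplace approximation can be checked against quadrature for a toy phase with a single interior minimum; the phase `phi` and amplitude `g` below are arbitrary choices:

```python
import numpy as np
from scipy.integrate import quad

def laplace_approx(phi0, ddphi0, g0, t):
    # leading-order Laplace approximation at an interior minimum s0:
    #   ∫ g(s) exp(-phi(s)/t) ds ≈ g(s0) * sqrt(2*pi*t / phi''(s0)) * exp(-phi(s0)/t)
    return g0 * np.sqrt(2.0 * np.pi * t / ddphi0) * np.exp(-phi0 / t)

# toy Laplace-type integral: phase with a single minimum at s0 = 1
phi = lambda s: (s - 1.0) ** 2 + (s - 1.0) ** 4
g = lambda s: 1.0 / (1.0 + s ** 2)

t = 1e-3
exact, _ = quad(lambda s: g(s) * np.exp(-phi(s) / t), 0.0, 2.0,
                points=[1.0], limit=200)
approx = laplace_approx(phi0=0.0, ddphi0=2.0, g0=g(1.0), t=t)
rel_err = abs(exact - approx) / exact
```

For small t the relative error of the leading term is O(t), consistent with the next-order correction in the expansion.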
Asymptotic expansion of the trace of the heat kernel associated to the Dirichlet-to-Neumann operator
NASA Astrophysics Data System (ADS)
Liu, Genqian
2015-10-01
For a given bounded domain Ω with smooth boundary in a smooth Riemannian manifold (M, g), by decomposing the Dirichlet-to-Neumann operator into a sum of the square root of the Laplacian and a pseudodifferential operator, and by applying Grubb's method of symbolic calculus for the corresponding pseudodifferential heat kernel operators, we establish a procedure to calculate all the coefficients of the asymptotic expansion of the trace of the heat kernel associated to the Dirichlet-to-Neumann operator as t → 0+. In particular, we explicitly give the first four coefficients of this asymptotic expansion. These coefficients provide precise information regarding the area and curvatures of the boundary of the domain in terms of the spectrum of the Steklov problem.
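For intuition about the leading trace coefficient (a hedged illustration on a special case, not the paper's general calculation): on the unit disk the Steklov spectrum is known in closed form (0, then each positive integer with multiplicity 2), so the heat trace of the Dirichlet-to-Neumann operator can be compared with the expected leading term |∂Ω|/(π t):

```python
import numpy as np

def dtn_heat_trace_disk(t, nmax=100_000):
    # Steklov spectrum of the unit disk: 0, then n = 1, 2, ... each twice,
    # so the heat trace is 1 + 2 * sum_n exp(-t*n)
    n = np.arange(1, nmax + 1)
    return 1.0 + 2.0 * np.sum(np.exp(-t * n))

t = 1e-3
trace = dtn_heat_trace_disk(t)
leading = 2.0 * np.pi / (np.pi * t)   # |∂Ω| / (pi * t) with |∂Ω| = 2*pi
rel_err = abs(trace - leading) / leading
```

The exact trace here is coth(t/2) = 2/t + t/6 + ..., so the relative deviation from the leading term is O(t²).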
Heat kernel smoothing using Laplace-Beltrami eigenfunctions.
Seo, Seongho; Chung, Moo K; Vorperian, Houri K
2010-01-01
We present a novel surface smoothing framework using the Laplace-Beltrami eigenfunctions. The Green's function of an isotropic diffusion equation on a manifold is constructed as a linear combination of the eigenfunctions of the Laplace-Beltrami operator. The Green's function is then used in constructing heat kernel smoothing. Unlike many previous approaches, diffusion is represented analytically as a series expansion, avoiding issues of numerical instability and inaccuracy. The proposed framework is illustrated with mandible surfaces and compared to a widely used iterative kernel smoothing technique in computational anatomy. The MATLAB source code is freely available at http://brainimaging.waisman.wisc.edu/ chung/lb.
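The series form of heat kernel smoothing can be sketched on a discrete stand-in for a manifold; here the eigenvectors of a cycle-graph Laplacian play the role of the Laplace-Beltrami eigenfunctions (an illustrative toy, not the authors' mandible pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
# cycle-graph Laplacian as a discrete stand-in for the Laplace-Beltrami operator
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
L = 2.0 * np.eye(n) - A
lam, psi = np.linalg.eigh(L)          # eigenvalues and orthonormal eigenfunctions

signal = np.sin(2 * np.pi * np.arange(n) / n) + 0.3 * rng.standard_normal(n)

def heat_smooth(f, t):
    # series form of heat kernel smoothing: sum_j exp(-lam_j * t) <f, psi_j> psi_j
    coeffs = psi.T @ f
    return psi @ (np.exp(-lam * t) * coeffs)

smoothed = heat_smooth(signal, t=5.0)
```

The expansion damps each mode by exp(-λ_j t), so the Dirichlet energy f L f decreases while the mean (the λ = 0 mode) is preserved exactly.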
Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.
2014-01-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
Scalar heat kernel with boundary in the worldline formalism
NASA Astrophysics Data System (ADS)
Bastianelli, Fiorenzo; Corradini, Olindo; Pisani, Pablo A. G.; Schubert, Christian
2008-10-01
The worldline formalism has in recent years emerged as a powerful tool for the computation of effective actions and heat kernels. However, implementing nontrivial boundary conditions in this formalism has turned out to be a difficult problem. Recently, such a generalization was developed for the case of a scalar field on the half-space R+ × R^(D-1), based on an extension of the associated worldline path integral to the full R^D using image charges. We present here an improved version of this formalism which allows us to write down non-recursive master formulas for the n-point contribution to the heat kernel trace of a scalar field on the half-space with Dirichlet or Neumann boundary conditions. These master formulas are well suited to computerization. We demonstrate the efficiency of the formalism with a calculation of two new heat kernel coefficients for the half-space, a4 and a9/2.
The supertrace of the steady asymptotic of the spinorial heat kernel
NASA Astrophysics Data System (ADS)
Bleecker, David D.
1992-06-01
An essentially self-contained, rigorous proof of the Atiyah-Singer index theorem is given for the case of the twisted Dirac operator. Indeed, the stronger local index theorem, which implies the general case, is proven here by computing the supertrace of the steady (time-independent) asymptotic of the twisted spinorial heat kernel. The computation is carried out in the spirit of Patodi's proof of the Gauss-Bonnet-Chern theorem. Gilkey's approach involving invariant theory is not used, but rather a generalization of Mehler's formula is derived and utilized along with some elementary properties of Clifford algebras. The use of Mehler's formula was inspired by work of Getzler, but families of Clifford algebra-valued pseudodifferential operators, and limits and estimates of families of heat kernels, are avoided here. Asymptotic expansions of heat kernels for operators on Euclidean fields are of fundamental importance in the computation of quantum corrections to the effective action for classical fields. Thus, the general formula developed here for the asymptotics may also be helpful in this regard.
Frostless heat pump having thermal expansion valves
Chen, Fang C [Knoxville, TN]; Mei, Viung C [Oak Ridge, TN]
2002-10-22
A heat pump system having an operable relationship for transferring heat between an exterior atmosphere and an interior atmosphere via a fluid refrigerant and further having a compressor, an interior heat exchanger, an exterior heat exchanger, a heat pump reversing valve, an accumulator, a thermal expansion valve having a remote sensing bulb disposed in heat transferable contact with the refrigerant piping section between said accumulator and said reversing valve, an outdoor temperature sensor, and a first means for heating said remote sensing bulb in response to said outdoor temperature sensor thereby opening said thermal expansion valve to raise suction pressure in order to mitigate defrosting of said exterior heat exchanger wherein said heat pump continues to operate in a heating mode.
Laplace asymptotic expansions of conditional Wiener integrals and generalized Mehler kernel formulas
NASA Astrophysics Data System (ADS)
Davies, Ian; Truman, Aubrey
1982-11-01
Imitating Schilder's results for Wiener integrals, rigorous Laplace asymptotic expansions are proven for conditional Wiener integrals. Applications are given for deriving generalized Mehler kernel formulas, up to arbitrarily high orders in powers of ℏ, for exp{-TH(ℏ)/ℏ}(x, y), T > 0, where H(ℏ) = [(-ℏ²/2)Δ₁ + V], Δ₁ being the one-dimensional Laplacian and V a real-valued potential, V ∈ C∞(R), bounded below together with its second derivative.
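Mehler's classical formula, which the paper generalizes, gives the heat kernel of the harmonic oscillator in closed form; its semigroup (Chapman-Kolmogorov) property can be verified numerically. The normalization ℏ = ω = m = 1 below is an arbitrary choice for this sketch:

```python
import numpy as np
from scipy.integrate import quad

def mehler(x, y, t):
    # Mehler kernel: exp(-t*H)(x, y) for H = (1/2)(-d^2/dx^2 + x^2), hbar = omega = 1
    s, c = np.sinh(t), np.cosh(t)
    return np.exp(-((x**2 + y**2) * c - 2.0 * x * y) / (2.0 * s)) / np.sqrt(2.0 * np.pi * s)

# semigroup property: composing the kernel at times t1 and t2 gives time t1 + t2
x, y, t1, t2 = 0.4, -0.7, 0.3, 0.5
lhs = mehler(x, y, t1 + t2)
rhs, _ = quad(lambda z: mehler(x, z, t1) * mehler(z, y, t2), -np.inf, np.inf)
```

The identity holds exactly for the Mehler kernel, so the two sides should agree to quadrature accuracy.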
Analysis of heat kernel highlights the strongly modular and heat-preserving structure of proteins
NASA Astrophysics Data System (ADS)
Livi, Lorenzo; Maiorino, Enrico; Pinna, Andrea; Sadeghian, Alireza; Rizzi, Antonello; Giuliani, Alessandro
2016-01-01
In this paper, we study the structure and dynamical properties of protein contact networks with respect to other biological networks, together with simulated archetypal models acting as probes. We consider both classical topological descriptors, such as modularity and statistics of the shortest paths, and different interpretations in terms of diffusion provided by the discrete heat kernel, which is computed from the normalized graph Laplacian. A principal component analysis shows high discrimination among the network types when considering both the topological and the heat kernel based vector characterizations. Furthermore, a canonical correlation analysis demonstrates strong agreement between these two characterizations, thus providing an important justification for the interpretability of the heat kernel. Finally, and most importantly, the focused analysis of the heat kernel yields insight into the fact that proteins must satisfy specific structural design constraints that the other considered networks need not obey. Notably, the heat trace decay of an ensemble of varying-size proteins indicates subdiffusion, a peculiar property of proteins.
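The heat trace used above is Tr exp(-t L̂) for the normalized graph Laplacian L̂. A minimal sketch on a toy cycle graph (not a protein contact network) shows its basic behavior: it decays monotonically from the node count at t = 0 toward the number of connected components:

```python
import numpy as np

def normalized_laplacian(A):
    # L_hat = I - D^{-1/2} A D^{-1/2}
    dinv = 1.0 / np.sqrt(A.sum(axis=1))
    return np.eye(len(A)) - dinv[:, None] * A * dinv[None, :]

# toy graph: a cycle on n nodes
n = 20
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

lam = np.linalg.eigvalsh(normalized_laplacian(A))

def heat_trace(t):
    # Tr exp(-t * L_hat) = sum_j exp(-t * lambda_j)
    return np.sum(np.exp(-t * lam))

traces = [heat_trace(t) for t in (0.1, 1.0, 10.0, 100.0)]
```

On a log-log plot of heat_trace against t, the intermediate-time slope is the kind of diffusion exponent the authors use to detect subdiffusive behavior.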
Reduction of Salmonella Enteritidis Population Sizes on Almond Kernels with Infrared Heat
Technology Transfer Automated Retrieval System (TEKTRAN)
Catalytic infrared (IR) heating was investigated to determine its effect on Salmonella enterica serovar Enteritidis population sizes on raw almond kernels. Using a double-sided catalytic infrared heating system, a radiation intensity of 5458 W/m2 caused a fast temperature increase at the kernel surf...
Turbulent expansion during parametric plasma heating
NASA Astrophysics Data System (ADS)
Trakhtengerts, V. Iu.
1983-10-01
In recent experiments on the parametric heating of the ionosphere, the application of intense electromagnetic radiation in the shortwave range to the ionospheric F layer has been accompanied by comparatively broad-band stimulated radio emission with a central frequency near the frequency of the pump wave. This emission is thought to result from the conversion of plasma waves into electromagnetic radiation during the three-wave interaction with the ion probe, and is observed even after the pump is turned off. Suprathermal electrons accelerated to 25-30 eV have been observed simultaneously. The anomalously long lifetime of the stimulated emission is explained here in terms of the turbulent expansion of a cloud of suprathermal particles in a collisionless plasma.
Sharp Two-Sided Heat Kernel Estimates of Twisted Tubes and Applications
NASA Astrophysics Data System (ADS)
Grillo, Gabriele; Kovařík, Hynek; Pinchover, Yehuda
2014-07-01
We prove on-diagonal bounds for the heat kernel of the Dirichlet Laplacian in locally twisted three-dimensional tubes Ω. In particular, we show that for any fixed x the heat kernel decays for large times at a rate governed by E1, the fundamental eigenvalue of the Dirichlet Laplacian on the cross section of the tube. This shows that any suitably regular local twisting speeds up the decay of the heat kernel with respect to the case of straight (untwisted) tubes. Moreover, the above large-time decay is valid for a wide class of subcritical operators defined on a straight tube. We also discuss some applications of this result, such as Sobolev inequalities and spectral estimates for Schrödinger operators.
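The role of the fundamental cross-sectional eigenvalue E1 in the large-time decay can be illustrated in a simplified setting (a 1-D finite-difference stand-in for the cross section, not the paper's twisted 3-D tubes): the log-decrement of the heat semigroup norm between two large times recovers E1 ≈ π²:

```python
import numpy as np

# finite-difference Dirichlet Laplacian on (0, 1); its lowest eigenvalue
# E1 (≈ pi^2) sets the long-time decay rate of the heat semigroup
m = 200
h = 1.0 / (m + 1)
L = (np.diag(2.0 * np.ones(m)) + np.diag(-np.ones(m - 1), 1)
     + np.diag(-np.ones(m - 1), -1)) / h**2
lam, V = np.linalg.eigh(L)
E1 = lam[0]

f0 = np.ones(m)                       # initial heat distribution

def evolve(t):
    # heat semigroup exp(-t*L) applied to f0 via the eigendecomposition
    return V @ (np.exp(-t * lam) * (V.T @ f0))

# log-decrement of the solution norm between two large times recovers E1
rate = -np.log(np.linalg.norm(evolve(2.0)) / np.linalg.norm(evolve(1.0)))
```

By t = 1 the higher Dirichlet modes are exponentially suppressed, so the measured rate matches E1 to machine precision.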
Heat kernel estimates and spectral properties of a pseudorelativistic operator with magnetic field
NASA Astrophysics Data System (ADS)
Jakubassa-Amundsen, D. H.
2008-03-01
Based on the Mehler heat kernel of the Schrödinger operator for a free electron in a constant magnetic field, an estimate for the kernel of EA=∣α(p-eA)+βm∣ is derived, where EA represents the kinetic energy of a Dirac electron within the pseudorelativistic no-pair Brown-Ravenhall model. This estimate is used to provide the bottom of the essential spectrum for the two-particle Brown-Ravenhall operator, describing the motion of the electrons in a central Coulomb field and a constant magnetic field, if the central charge is restricted to Z ⩽86.
Characterising brain network topologies: A dynamic analysis approach using heat kernels.
Chung, A W; Schirmer, M D; Krishnan, M L; Ball, G; Aljabar, P; Edwards, A D; Montana, G
2016-11-01
Network theory provides a principled abstraction of the human brain: reducing a complex system into a simpler representation from which to investigate brain organisation. Recent advancement in the neuroimaging field is towards representing brain connectivity as a dynamic process in order to gain a deeper understanding of how the brain is organised for information transport. In this paper we propose a network modelling approach based on the heat kernel to capture the process of heat diffusion in complex networks. By applying the heat kernel to structural brain networks, we define new features which quantify change in heat propagation. Identifying suitable features which can classify networks between cohorts is useful towards understanding the effect of disease on brain architecture. We demonstrate the discriminative power of heat kernel features in both synthetic and clinical preterm data. By generating an extensive range of synthetic networks with varying density and randomisation, we investigate heat diffusion in relation to changes in network topology. We demonstrate that our proposed features provide a metric of network efficiency and may be indicative of organisational principles commonly associated with, for example, small-world architecture. In addition, we show the potential of these features to characterise and classify between network topologies. We further demonstrate our methodology in a clinical setting by applying it to a large cohort of preterm babies scanned at term equivalent age from which diffusion networks were computed. We show that our heat kernel features are able to successfully predict motor function measured at two years of age (sensitivity, specificity, F-score, accuracy = 75.0, 82.5, 78.6, and 82.3%, respectively).
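A minimal sketch of one heat-kernel-derived network feature (a hypothetical feature chosen for illustration, not necessarily one of the paper's): the average fraction of heat that has left the source node after time t, which is larger for topologies that transport heat more efficiently:

```python
import numpy as np
from scipy.linalg import expm

def heat_dispersal(A, t=1.0):
    # fraction of heat that has left the source node after time t, averaged
    # over sources: 1 - mean(diag(exp(-t*L))), L the combinatorial Laplacian
    L = np.diag(A.sum(axis=1)) - A
    H = expm(-t * L)
    return 1.0 - np.mean(np.diag(H))

n = 12
ring = np.zeros((n, n))
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1.0
complete = np.ones((n, n)) - np.eye(n)

d_ring = heat_dispersal(ring)
d_complete = heat_dispersal(complete)
```

The complete graph disperses heat faster than the ring at the same t, so such features can discriminate between topologies, in the spirit of the classification described above.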
Neto, V Q; Narain, N; Silva, J B; Bora, P S
2001-08-01
The functional properties, viz. solubility, water and oil absorption, and emulsifying and foaming capacities, of the protein isolates prepared from raw and heat-processed cashew nut kernels were evaluated. Protein solubility vs. pH profiles showed the isoelectric point at pH 5 for both isolates. The isolate prepared from raw cashew nuts showed superior solubility at and above the isoelectric pH. The water and oil absorption capacities of the proteins were slightly improved by heat treatment of the cashew nut kernels. The emulsifying capacity of the isolates showed solubility-dependent behavior and was better for the raw cashew nut protein isolate at pH 5 and above. However, the heat-treated cashew nut protein isolate presented better foaming capacity at pH 7 and 8, but both isolates showed extremely low foam stability compared to that of egg albumin.
Wang, Gang; Zhang, Xiaofeng; Su, Qingtang; Shi, Jie; Caselli, Richard J; Wang, Yalin
2015-05-01
Cortical thickness estimation in magnetic resonance imaging (MRI) is an important technique for research on brain development and neurodegenerative diseases. This paper presents a heat kernel based cortical thickness estimation algorithm, which is driven by the graph spectrum and the heat kernel theory, to capture the gray matter geometry information from the in vivo brain magnetic resonance (MR) images. First, we construct a tetrahedral mesh that matches the MR images and reflects the inherent geometric characteristics. Second, the harmonic field is computed by the volumetric Laplace-Beltrami operator and the direction of the streamline is obtained by tracing the maximum heat transfer probability based on the heat kernel diffusion. Thereby we can calculate the cortical thickness information between corresponding points on the pial and white matter surfaces. The new method relies on intrinsic brain geometry structure and the computation is robust and accurate. To validate our algorithm, we apply it to study the thickness differences associated with Alzheimer's disease (AD) and mild cognitive impairment (MCI) on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Our preliminary experimental results on 151 subjects (51 AD, 45 MCI, 55 controls) show that the new algorithm may successfully detect statistically significant difference among patients of AD, MCI and healthy control subjects. Our computational framework is efficient and very general. It has the potential to be used for thickness estimation on any biological structures with clearly defined inner and outer surfaces.
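The harmonic-field step can be sketched on a flat grid (a toy stand-in for the tetrahedral mesh between the two surfaces): solving Laplace's equation with the inner and outer boundaries held at 0 and 1 produces the field whose gradient streamlines the thickness is measured along:

```python
import numpy as np

# harmonic field between two boundary surfaces on a rectangular grid
# (Jacobi iteration); in a thickness pipeline, distance would then be
# measured along streamlines of this field's gradient
ny, nx = 20, 30
u = np.zeros((ny, nx))
u[:, -1] = 1.0                          # outer ("pial") boundary held at 1
for _ in range(20000):
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                            + u[1:-1, 2:] + u[1:-1, :-2])
    u[0, :] = u[1, :]                   # insulated side walls (Neumann)
    u[-1, :] = u[-2, :]
    u[:, 0] = 0.0                       # inner ("white matter") boundary at 0
    u[:, -1] = 1.0
```

On this flat slab the converged field is linear across the grid, so a streamline length equals the slab width; on a curved mesh the same construction gives a well-defined point-to-point thickness.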
NASA Astrophysics Data System (ADS)
Muñoz-Castañeda, Jose M.; Kirsten, Klaus; Bordag, Michael
2015-04-01
Following the seminal works of Asorey-Ibort-Marmo and Muñoz-Castañeda-Asorey about selfadjoint extensions and quantum fields in bounded domains, we compute all the heat kernel coefficients for any strongly consistent selfadjoint extension of the Laplace operator over the finite line [0, L]. The derivative of the corresponding spectral zeta function at s = 0 (partition function of the corresponding quantum field theory) is obtained. To compute the correct expression for the a1/2 heat kernel coefficient, it is necessary to know in detail which non-negative selfadjoint extensions have zero modes and how many of them they have. The answer to this question leads us to analyze zeta function properties for the Von Neumann-Krein extension, the only extension with two zero modes.
Selecting the kernel in a peridynamic formulation: A study for transient heat diffusion
NASA Astrophysics Data System (ADS)
Chen, Ziguang; Bobaru, Florin
2015-12-01
The kernel in a peridynamic diffusion model represents the detailed interaction between points inside the nonlocal region around each material point. Several versions of the kernel function have been proposed. Although solutions associated with different kernels may all converge, under the appropriate discretization scheme, to the classical model when the horizon goes to zero, their convergence behavior varies. In this paper, we focus on the particular one-point Gauss quadrature method of spatial discretization of the peridynamic diffusion model and study the convergence properties of different kernels with respect to convergence to the classical (local) model for the transient heat transfer equation in 1D, where an exact representation of the geometry is available. The one-point Gauss quadrature is the preferred method for discretizing peridynamic models because it leads to a meshfree model, well suited for problems with damage and fracture. We show the equivalency of two definitions for the peridynamic heat flux. We explain an apparent paradox and discuss a common pitfall in numerical approximations of nonlocal models and their convergence to local models. We also analyze the influence of two ways of imposing boundary conditions and that of the "skin effect" on the solution. We explain an interesting behavior of the peridynamic solutions for different horizon sizes, the crossing of m-convergence curves at the classical solution value, which happens for one of the ways of implementing the classical boundary conditions. The results presented here provide practical guidance in selecting the appropriate peridynamic kernel that makes the one-point Gauss quadrature an "asymptotically compatible" scheme. These results are directly applicable to any diffusion-type model, including mass diffusion problems.
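The kernel calibration and m-convergence behavior discussed above can be sketched in 1D with a constant kernel (one possible kernel choice, not the paper's full study): applying the one-point Gauss quadrature of the peridynamic operator to u = x² should approach the local value κ u'' = 2κ as the grid is refined under a fixed horizon δ:

```python
import numpy as np

def peridynamic_operator(u, dx, delta, kappa=1.0):
    # one-point Gauss (midpoint) quadrature of the peridynamic diffusion
    # operator with a constant kernel C = 3*kappa/delta**3 on |xi| <= delta,
    # calibrated so the nonlocal operator matches kappa * u'' in the local limit
    c = 3.0 * kappa / delta**3
    n = len(u)
    m = int(round(delta / dx))
    out = np.zeros(n)
    for i in range(m, n - m):           # interior nodes with a full horizon
        for k in range(-m, m + 1):
            if k:
                out[i] += c * (u[i + k] - u[i]) * dx
    return out

delta = 0.1
results = {}
for m in (4, 16, 64):                   # m = delta/dx, the m-convergence parameter
    dx = delta / m
    x = np.arange(-0.5, 0.5 + dx / 2, dx)
    u = x ** 2                          # u'' = 2, so the local target is 2*kappa
    results[m] = peridynamic_operator(u, dx, delta)[len(x) // 2]  # value at x = 0
```

For this kernel the discrete operator at x = 0 equals κ(m+1)(2m+1)/m², which approaches 2κ like O(1/m), illustrating why the kernel choice matters for asymptotic compatibility.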
Direct expansion solar collector and heat pump
NASA Astrophysics Data System (ADS)
1982-05-01
A hybrid heat pump/solar collector combination in which solar collectors replace the outside air heat exchanger found in conventional air-to-air heat pump systems is discussed. The solar panels ordinarily operate at or below ambient temperature, eliminating the need to install the collector panels in a glazed and insulated enclosure. The collectors simply consist of a flat plate with a centrally located tube running longitudinally. Solar energy absorbed by exposed panels directly vaporizes the refrigerant fluid. The resulting vapor is compressed to higher temperature and pressure; then, it is condensed to release the heat absorbed during the vaporization process. Control and monitoring of the demonstration system are addressed, and the tests conducted with the demonstration system are described. The entire heat pump system is modelled, including predicted performance and costs, and economic comparisons are made with conventional flat-plate collector systems.
Heat Pumps With Direct Expansion Solar Collectors
NASA Astrophysics Data System (ADS)
Ito, Sadasuke
In this paper, the studies of heat pump systems using solar collectors as evaporators, carried out so far by researchers, are reviewed. Usually, a solar collector without any cover is preferable to one with a cover because of the need to absorb heat from the ambient air when the intensity of the solar energy on the collector is insufficient. The performance of the collector depends on its area and on the intensity of the convective heat transfer at the surface. Fins are fixed on the back side of the collector surface or on the tube in which the refrigerant flows in order to increase the convective heat transfer. For the purpose of using a heat pump efficiently throughout the year, a compressor with variable capacity is applied. The solar-assisted heat pump can be used for air conditioning at night during the summer. Only a few groups have studied cooling using solar-assisted heat pump systems. In Japan, one type of system for hot water supply has been produced commercially by one company, and a type of system for air conditioning has been installed in buildings commercially by another company.
Heat damage and in vitro starch digestibility of puffed wheat kernels.
Cattaneo, Stefano; Hidalgo, Alyssa; Masotti, Fabio; Stuknytė, Milda; Brandolini, Andrea; De Noni, Ivano
2015-12-01
The effect of processing conditions on heat damage, starch digestibility, release of advanced glycation end products (AGEs) and antioxidant capacity of puffed cereals was studied. The determination of several markers arising from the Maillard reaction proved pyrraline (PYR) and hydroxymethylfurfural (HMF) to be the most reliable indices of the heat load applied during puffing. The considerable heat load was evidenced by the high levels of both PYR (57.6-153.4 mg kg(-1) dry matter) and HMF (13-51.2 mg kg(-1) dry matter). For cost and simplicity, HMF appeared to be the most appropriate index in puffed cereals. Puffing influenced starch in vitro digestibility, with most of the starch (81-93%) hydrolyzed to maltotriose, maltose and glucose, whereas only limited amounts of AGEs were released. The considerable antioxidant capacity revealed by digested puffed kernels can be ascribed to both the newly formed Maillard reaction products and the conditions adopted during in vitro digestion.
Density-Aware Clustering Based on Aggregated Heat Kernel and Its Transformation
Huang, Hao; Yoo, Shinjae; Yu, Dantong; Qin, Hong
2015-06-01
Current spectral clustering algorithms suffer from the sensitivity to existing noise, and parameter scaling, and may not be aware of different density distributions across clusters. If these problems are left untreated, the consequent clustering results cannot accurately represent true data patterns, in particular, for complex real world datasets with heterogeneous densities. This paper aims to solve these problems by proposing a diffusion-based Aggregated Heat Kernel (AHK) to improve the clustering stability, and a Local Density Affinity Transformation (LDAT) to correct the bias originating from different cluster densities. AHK statistically models the heat diffusion traces along the entire time scale, so it ensures robustness during the clustering process, while LDAT probabilistically reveals the local density of each instance and suppresses the local density bias in the affinity matrix. Our proposed framework integrates these two techniques systematically. As a result, not only does it provide an advanced noise-resisting and density-aware spectral mapping to the original dataset, but it also demonstrates stability while tuning the scaling parameter (which usually controls the range of neighborhood). Furthermore, our framework works well with the majority of similarity kernels, which ensures its applicability to many types of data and problem domains. The systematic experiments on different applications show that our proposed algorithms outperform state-of-the-art clustering algorithms for data with heterogeneous density distributions, and achieve robust clustering performance with respect to tuning the scaling parameter and handling various levels and types of noise.
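A simplified sketch of the aggregation idea (not the authors' exact AHK construction, and omitting the LDAT step): summing the heat kernel exp(-t L̂) over several diffusion times yields a multi-scale affinity whose leading eigenvectors can separate clusters of different densities:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
# two well-separated 2-D blobs with different densities
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)),
               rng.normal(4.0, 0.9, (30, 2))])
labels_true = np.array([0] * 30 + [1] * 30)

# Gaussian affinity and the normalized graph Laplacian
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / 2.0)
np.fill_diagonal(W, 0.0)
d = W.sum(1)
L = np.eye(len(X)) - W / np.sqrt(d[:, None] * d[None, :])

# aggregate the heat kernel over a range of diffusion times (multi-scale affinity)
AHK = sum(expm(-t * L) for t in (0.5, 1.0, 2.0, 4.0))

# spectral split: sign of the second-largest eigenvector of the aggregated kernel
w, V = np.linalg.eigh(AHK)
split = (V[:, -2] > 0).astype(int)
acc = max((split == labels_true).mean(), ((1 - split) == labels_true).mean())
```

Because the aggregation is a decreasing function of the Laplacian spectrum, its top eigenvectors coincide with the low-frequency Laplacian eigenvectors, but the multi-scale weighting reduces sensitivity to any single choice of diffusion time.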
Kornilov, Oleg; Toennies, J. Peter
2015-02-21
The size distribution of para-H2 (pH2) clusters produced in free jet expansions at a source temperature of T0 = 29.5 K and pressures of P0 = 0.9-1.96 bars is reported and analyzed according to a cluster growth model based on the Smoluchowski theory with kernel scaling. Good overall agreement is found between the measured and predicted, Nk = A k^a e^(-bk), shape of the distribution. The fit yields values for A and b for values of a derived from simple collision models. The small remaining deviations between measured abundances and theory imply a (pH2)k magic number cluster of k = 13, as has been observed previously by Raman spectroscopy. The predicted linear dependence of b^-(a+1) on source gas pressure was verified and used to determine the value of the basic effective agglomeration reaction rate constant. A comparison of the corresponding effective growth cross sections σ11 with results from a similar analysis of He cluster size distributions indicates that the latter are much larger, by a factor of 6-10. An analysis of the three-body recombination rates, the geometric sizes, and the fact that the He clusters are liquid independent of their size can explain the larger cross sections found for He.
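The fitted form Nk = A k^a e^(-bk) is log-linear in k once a is fixed by the collision model, so A and b follow from an ordinary least-squares line fit. The numbers below are arbitrary synthetic values chosen only to illustrate the fitting step:

```python
import numpy as np

# synthetic cluster-size distribution N_k = A * k**a * exp(-b*k) with known a
a_true, A_true, b_true = 2.0, 500.0, 0.35
k = np.arange(1, 41)
N = A_true * k**a_true * np.exp(-b_true * k)

# with a fixed by the collision model, log N - a*log k = log A - b*k is linear
# in k, so A and b follow from an ordinary least-squares line fit
y = np.log(N) - a_true * np.log(k)
slope, intercept = np.polyfit(k, y, 1)
b_fit, A_fit = -slope, np.exp(intercept)
```

Repeating such fits at several source pressures would then let one test the predicted linear dependence of b^-(a+1) on pressure mentioned in the abstract.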
Sustained and generalized extracellular fluid expansion following heat acclimation
Patterson, Mark J; Stocks, Jodie M; Taylor, Nigel A S
2004-01-01
We measured intra- and extravascular body-fluid compartments in 12 resting males before (day 1; control), during (day 8) and after (day 22) a 3-week, exercise–heat acclimation protocol to investigate plasma volume (PV) changes. Our specific focus was upon the selective nature of the acclimation-induced PV expansion, and the possibility that this expansion could be sustained during prolonged acclimation. Acclimation was induced by cycling in the heat, and involved 16 treatment days (controlled hyperthermia (90 min); core temperature = 38.5°C) and three experimental exposures (40 min rest, 96.9 min (s.d. 9.5 min) cycling), each preceded by a rest day. The environmental conditions were a temperature of 39.8°C (s.d. 0.5°C) and relative humidity of 59.2% (s.d. 0.8%). On days 8 and 22, PV was expanded and maintained relative to control values (day 1: 44.0 ± 1.8; day 8: 48.8 ± 1.7; day 22: 48.8 ± 2.0 ml kg−1; P < 0.05). The extracellular fluid compartment (ECF) was equivalently expanded from control values on days 8 (279.6 ± 14.2 versus 318.6 ± 14.3 ml kg−1; n = 8; P < 0.05) and 22 (287.5 ± 10.6 versus 308.4 ± 14.8 ml kg−1; n = 12; P < 0.05). Plasma electrolyte, total protein and albumin concentrations were unaltered following heat acclimation (P > 0.05), although the total plasma content of these constituents was elevated (P < 0.05). The PV and interstitial fluid (ISF) compartments exhibited similar relative expansions on days 8 (15.0 ± 2.2% versus 14.7 ± 4.1%; P > 0.05) and 22 (14.4 ± 3.6% versus 6.4 ± 2.2%; P = 0.10). It is concluded that the acclimation-induced PV expansion can be maintained following prolonged heat acclimation. In addition, this PV expansion was not selective, but represented a ubiquitous expansion of the extracellular compartment. PMID:15218070
Shape-Based Image Matching Using Heat Kernels and Diffusion Maps
NASA Astrophysics Data System (ADS)
Vizilter, Yu. V.; Gorbatsevich, V. S.; Rubis, A. Yu.; Zheltov, S. Yu.
2014-08-01
The 2D image matching problem is often stated as an image-to-shape or shape-to-shape matching problem. Such shape-based matching techniques should provide matching of scene image fragments registered under various lighting, weather and season conditions, or in different spectral bands. The most popular shape-to-shape matching technique is based on the mutual information approach. Another well-known approach is the morphological image-to-shape matching proposed by Pytiev. In this paper we propose a new image-to-shape matching technique based on heat kernels and diffusion maps. The corresponding diffusion morphology is proposed as a new generalization of the Pytiev morphological scheme. A fast implementation of morphological diffusion filtering is described. An experimental comparison of the new and aforementioned shape-based matching techniques, applied to the TV and IR image matching problem, is reported.
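Heat kernels and diffusion maps of the kind used here are typically built from a graph Laplacian: the heat kernel is exp(−tL), and the diffusion-map embedding scales Laplacian eigenvectors by their heat-kernel decay. A minimal sketch of this generic construction (not the authors' diffusion-morphology scheme; graph and t are illustrative):

```python
import numpy as np

def heat_kernel(W, t):
    """Heat kernel exp(-t*L) of a weighted graph with symmetric adjacency W,
    via eigendecomposition of the combinatorial Laplacian L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    evals, evecs = np.linalg.eigh(L)
    return evecs @ np.diag(np.exp(-t * evals)) @ evecs.T

def diffusion_coords(W, t, dim=2):
    """Diffusion-map embedding: Laplacian eigenvectors scaled by exp(-t*lambda),
    skipping the constant eigenvector (eigenvalue ~ 0)."""
    L = np.diag(W.sum(axis=1)) - W
    evals, evecs = np.linalg.eigh(L)
    return evecs[:, 1:dim + 1] * np.exp(-t * evals[1:dim + 1])

# toy 4-node path graph
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = W[i + 1, i] = 1.0
K = heat_kernel(W, t=0.5)
coords = diffusion_coords(W, t=0.5)
```

Because L annihilates the constant vector, each row of exp(−tL) sums to 1, so K acts as a smoothing (diffusion) operator on node signals.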
NASA Astrophysics Data System (ADS)
Juan-Mian, Lei; Xue-Ying, Peng
2016-02-01
Kernel gradient free smoothed particle hydrodynamics (KGF-SPH) is a modified smoothed particle hydrodynamics (SPH) method with higher precision than conventional SPH. However, the Laplacian in KGF-SPH is approximated by a two-pass model, which increases the computational cost. A new discretization scheme for the Laplacian is proposed in this paper, and a method with higher precision and better stability, called Improved KGF-SPH, is developed by modifying KGF-SPH with this new Laplacian model. One-dimensional (1D) and two-dimensional (2D) heat conduction problems are used to test the precision and stability of the Improved KGF-SPH. The numerical results demonstrate that the Improved KGF-SPH is more accurate than SPH and more stable than KGF-SPH. Natural convection in a closed square cavity at different Rayleigh numbers is modeled by the Improved KGF-SPH with shifting particle position, and the results are presented in comparison with those of SPH and the finite volume method (FVM). The numerical results demonstrate that the Improved KGF-SPH is a more accurate method for studying and modeling heat transfer problems.
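For context, the conventional SPH Laplacian that such schemes improve on can be sketched in 1D with the standard Brookshaw-type estimate (this is the baseline scheme, not the paper's Improved KGF-SPH; kernel and particle spacing are illustrative):

```python
import numpy as np

def cubic_spline_dW(r, h):
    """dW/dr of the 1D cubic-spline SPH kernel (normalization 2/(3h))."""
    q = r / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma / h * (-3.0 * q + 2.25 * q * q)
    if q < 2.0:
        return -0.75 * sigma / h * (2.0 - q) ** 2
    return 0.0

def sph_laplacian(x, T, vol, h):
    """Brookshaw-type SPH estimate of d2T/dx2 in 1D:
    lap_i = sum_j 2 * V_j * (T_i - T_j) * W'(r_ij) / r_ij."""
    n = len(x)
    lap = np.zeros(n)
    for i in range(n):
        for j in range(n):
            r = abs(x[i] - x[j])
            if i != j and r < 2.0 * h:
                lap[i] += 2.0 * vol * (T[i] - T[j]) * cubic_spline_dW(r, h) / r
    return lap

# uniform particles on [0, 1]; T = x^2 has exact Laplacian 2 everywhere
dx = 0.02
x = np.arange(0.0, 1.0 + dx / 2, dx)
T = x ** 2
lap = sph_laplacian(x, T, vol=dx, h=2.0 * dx)
```

On a uniform interior particle distribution this estimate reproduces the Laplacian of a quadratic field; its degradation near boundaries and on disordered particles is what motivates corrected schemes such as KGF-SPH.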
Energy recovery during expansion of compressed gas using power plant low-quality heat sources
Ochs, Thomas L.; O'Connor, William K.
2006-03-07
A method of recovering energy from a cool compressed gas, compressed liquid, vapor, or supercritical fluid is disclosed which includes incrementally expanding the compressed gas, compressed liquid, vapor, or supercritical fluid through a plurality of expansion engines and heating the gas, vapor, compressed liquid, or supercritical fluid entering at least one of the expansion engines with a low quality heat source. Expansion engines such as turbines and multiple expansions with heating are disclosed.
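The benefit of incremental expansion with interstage heating can be illustrated for an ideal gas expanded through isentropic stages, reheated back to the inlet temperature before each stage (a generic reheat sketch with illustrative air-like values, not the patented device):

```python
def expansion_work(T_in, p_ratio_total, n_stages, cp=1005.0, gamma=1.4):
    """Ideal-gas specific work (J/kg) from expanding through n_stages equal
    pressure-ratio isentropic stages, with the gas reheated to T_in before
    each stage. Per-stage work: w = cp*T_in*(1 - r**((1-gamma)/gamma))."""
    r = p_ratio_total ** (1.0 / n_stages)  # per-stage pressure ratio
    w_stage = cp * T_in * (1.0 - r ** ((1.0 - gamma) / gamma))
    return n_stages * w_stage

w1 = expansion_work(T_in=400.0, p_ratio_total=10.0, n_stages=1)
w4 = expansion_work(T_in=400.0, p_ratio_total=10.0, n_stages=4)
```

More stages with reheat extract more work from the same overall pressure ratio, which is the thermodynamic rationale for heating the fluid between multiple expansion engines with a low-quality heat source.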
Calculates Thermal Neutron Scattering Kernel.
1989-11-10
Version 00 THRUSH computes the thermal neutron scattering kernel by the phonon expansion method for both coherent and incoherent scattering processes. The calculation of the coherent part is suitable only for calculating the scattering kernel for heavy water.
Hypervelocity Heat-Transfer Measurements in an Expansion Tube
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Perkins, John N.
1996-01-01
A series of experiments has been conducted in the NASA HYPULSE Expansion Tube, in both CO2 and air test gases, in order to obtain data for comparison with computational results and to assess the capability for performing hypervelocity heat-transfer studies in this facility. Heat-transfer measurements were made in both test gases on 70 deg sphere-cone models and on hemisphere models of various radii. HYPULSE freestream flow conditions in these test gases were found to be repeatable to within 3-10%, and aerothermodynamic test times of 150 microsec in CO2 and 125 microsec in air were identified. Heat-transfer measurement uncertainty was estimated to be 10-15%. Comparisons were made with computational results from the non-equilibrium Navier-Stokes solver NEQ2D. Measured and computed heat-transfer rates agreed to within 10% on the hemispheres and on the sphere-cone forebodies, and to within 10% in CO2 and 25% in air on the afterbodies and stings of the sphere-cone models.
Optimal heat kernel estimates for schrödinger operators with magnetic fields in two dimensions
NASA Astrophysics Data System (ADS)
Loss, Michael; Thaller, Bernd
1997-06-01
Sharp smoothing estimates are proven for magnetic Schrödinger semigroups in two dimensions under the assumption that the magnetic field is bounded below by some positive constant B0. As a consequence, the L∞ norm of the associated integral kernel is bounded by the L∞ norm of the Mehler kernel of the Schrödinger semigroup with the constant magnetic field B0.
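For the constant-field comparison semigroup, the sup norm of the Mehler kernel can be computed directly from the Landau-level spectrum. As a sketch, under the common convention H_B = (−i∇ − A)^2 with curl A = B (so the eigenvalues are E_n = (2n+1)B with degeneracy B/2π per unit area; other conventions rescale these constants):

```latex
\bigl\| e^{-tH_{B_0}} \bigr\|_{L^\infty}
  = \frac{B_0}{2\pi} \sum_{n=0}^{\infty} e^{-(2n+1)B_0 t}
  = \frac{B_0}{2\pi}\,\frac{e^{-B_0 t}}{1 - e^{-2B_0 t}}
  = \frac{B_0}{4\pi \sinh(B_0 t)} .
```

The bound in the abstract then says that whenever B(x) ≥ B0, the kernel of e^{−tH} is pointwise dominated by this constant-field value.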
NASA Astrophysics Data System (ADS)
Rayborn, L.; Rose, B. E. J.
2015-12-01
Transient climate change depends on both radiative forcing and ocean heat uptake. A substantial fraction of the inter-model spread in transient warming under future emission scenarios can be attributed to differences in "efficacy" of ocean heat uptake (suppression of surface warming per unit energy flux into the deep oceans relative to CO2 forcing). Previous studies have suggested that this efficacy depends strongly on the spatial pattern of ocean heat uptake. Rose et al. (2014) studied this dependence in an ensemble of aquaplanet simulations with prescribed ocean heat uptake, and found large differences in model responses to high- versus low-latitude uptake. In this study we use radiative kernel analysis to accurately partition these responses into feedbacks associated with temperature, water vapor and clouds. We find large and robust differences in both clear-sky longwave feedbacks and shortwave cloud feedbacks, with high-latitude uptake exciting substantially more positive feedback (higher efficacy) than low-latitude uptake. These robust clear-sky longwave feedbacks are particularly associated with lapse rate feedbacks, implying differences in large-scale circulation patterns associated with ocean heat uptake. A particularly surprising result is the robustness across several independent GCMs of the differences in subtropical low cloud feedback (positive under high-latitude uptake, strongly negative under tropical uptake). We trace these robust differences to thermodynamic constraints associated with lower-tropospheric stability and boundary layer moisture. Our results imply that global cloud feedback under global warming may be partly modulated by the spatial pattern of ocean heat uptake.
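Radiative kernel analysis partitions a climate response into feedbacks by multiplying precomputed kernels (radiative sensitivity per unit change of a field) with the simulated response fields and normalizing by the global-mean surface warming. A minimal area-weighted sketch on a latitude grid (the kernel value, response, and warming fields are all illustrative placeholders):

```python
import numpy as np

def global_mean(field, lat):
    """Area-weighted global mean on a latitude grid (degrees)."""
    w = np.cos(np.deg2rad(lat))
    return np.sum(field * w) / np.sum(w)

def kernel_feedback(kernel, dX, dTs, lat):
    """Feedback in W m-2 K-1: global mean of kernel * response,
    normalized by global-mean surface warming."""
    return global_mean(kernel * dX, lat) / global_mean(dTs, lat)

lat = np.linspace(-89.0, 89.0, 90)
kernel = np.full_like(lat, 1.8)  # illustrative kernel, W m-2 per unit response
dX = np.full_like(lat, 1.0)      # illustrative response field
dTs = np.full_like(lat, 2.0)     # illustrative 2 K surface warming
lam = kernel_feedback(kernel, dX, dTs, lat)
```

In practice the kernels and responses are 3D (latitude, longitude, pressure) and one such product is computed per variable (temperature, water vapor, albedo), with the cloud feedback obtained as a residual or from cloud-radiative-effect adjustments.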
NASA Astrophysics Data System (ADS)
Chen, Li-Hao; Liu, Zong-Pei; Pan, Yung-Ning
2016-08-01
In this paper, the effect of homogenization heat treatment on the α value [coefficient of thermal expansion (10−6 K−1)] of low thermal expansion cast irons was studied. In addition, constrained thermal cyclic tests were conducted to evaluate the dimensional stability of the low thermal expansion cast irons under various heat treatment conditions. The results indicate that when the alloys were homogenized at a relatively low temperature, e.g., 1023 K (750 °C), the elimination of Ni segregation was not very effective, but the C concentration in the matrix was moderately reduced. On the other hand, if the alloys were homogenized at a relatively high temperature, e.g., 1473 K (1200 °C), the opposite results were obtained. Consequently, not much improvement (reduction) in the α value was achieved in either case. Therefore, a compound homogenization heat treatment procedure was designed, namely 1473 K (1200 °C)/4 hours/FC/1023 K (750 °C)/2 hours/WQ, in which the relatively high homogenization temperature of 1473 K (1200 °C) effectively eliminates the Ni segregation, and the subsequent holding stage at 1023 K (750 °C) reduces the C content in the matrix. As a result, very low α values of around (1 to 2) × 10−6 K−1 were obtained. Regarding the constrained thermal cyclic testing over 303 K to 473 K (30 °C to 200 °C), the results indicate that regardless of heat treatment condition, the low thermal expansion cast irons exhibit far higher dimensional stability than either the regular ductile cast iron or the 304 stainless steel. Furthermore, a positive correlation exists between the α value over 303 K to 473 K and the amount of shape change after the thermal cyclic testing. Among the alloys investigated, Heat I-T3B (1473 K (1200 °C)/4 hours/FC/1023 K (750 °C)/2 hours/WQ) exhibits the lowest α value over 303 K to 473 K (1.72 × 10−6 K−1), and hence the least shape change (7.41 μm), i.e., the best dimensional stability.
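The α values quoted here are mean linear coefficients of thermal expansion over a temperature interval, α = ΔL/(L0·ΔT). A minimal sketch (the 100 mm bar length and its length change are illustrative numbers chosen only to land near the quoted 1.72 × 10−6 K−1, not measured data):

```python
def mean_cte(L0, L1, T0, T1):
    """Mean linear coefficient of thermal expansion over [T0, T1]:
    alpha = (L1 - L0) / (L0 * (T1 - T0)), in K^-1."""
    return (L1 - L0) / (L0 * (T1 - T0))

# illustrative: a 100 mm bar growing 0.0292 mm between 303 K and 473 K
alpha = mean_cte(L0=100.0, L1=100.0292, T0=303.0, T1=473.0)
```

Dilatometers report exactly this quantity from the measured length-temperature record over the stated interval.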
Eigenvalue Expansion Approach to Study Bio-Heat Equation
NASA Astrophysics Data System (ADS)
Khanday, M. A.; Nazir, Khalid
2016-07-01
A mathematical model based on the Pennes bio-heat equation was formulated to estimate temperature profiles at peripheral regions of the human body. The heat processes due to diffusion, perfusion and metabolic pathways were considered to establish the second-order partial differential equation, together with initial and boundary conditions. The model was solved using the eigenvalue method, and numerical values of the physiological parameters were used to understand the thermal disturbance on the biological tissues. The results are illustrated at atmospheric temperatures TA = 10 °C and 20 °C.
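The eigenvalue (eigenfunction-expansion) method for equations of this type can be sketched on a simplified 1D Pennes-like problem u_t = α u_xx − β u with homogeneous Dirichlet boundaries, where the perfusion term appears as a linear sink: the eigenfunctions are sin(nπx/L) and each mode decays at rate α(nπ/L)^2 + β. All coefficient values below are illustrative, not the paper's physiological parameters:

```python
import math

def series_solution(u0_coeffs, alpha, beta, L, x, t):
    """Eigenfunction-expansion solution of u_t = alpha*u_xx - beta*u with
    u(0,t) = u(L,t) = 0:
    u(x,t) = sum_n c_n * exp(-(alpha*(n*pi/L)**2 + beta)*t) * sin(n*pi*x/L)."""
    u = 0.0
    for n, c in enumerate(u0_coeffs, start=1):
        lam = alpha * (n * math.pi / L) ** 2 + beta  # eigenvalue of mode n
        u += c * math.exp(-lam * t) * math.sin(n * math.pi * x / L)
    return u

# initial condition sin(pi*x/L) is a single eigenmode with c_1 = 1
alpha, beta, L = 1.4e-7, 1.0e-4, 0.01  # illustrative tissue-like values (SI)
u = series_solution([1.0], alpha, beta, L, x=L / 2, t=100.0)
exact = math.exp(-(alpha * (math.pi / L) ** 2 + beta) * 100.0)
```

A single-mode initial condition makes the series solution coincide with the closed-form decay, which is a convenient correctness check before adding more modes or inhomogeneous (metabolic/arterial) terms via a steady-state shift.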
The Statistical Interpretation of Classical Thermodynamic Heating and Expansion Processes
ERIC Educational Resources Information Center
Cartier, Stephen F.
2011-01-01
A statistical model has been developed and applied to interpret thermodynamic processes typically presented from the macroscopic, classical perspective. Through this model, students learn and apply the concepts of statistical mechanics, quantum mechanics, and classical thermodynamics in the analysis of the (i) constant volume heating, (ii)…
A rapid heating and cooling rate dilatometer for measuring thermal expansion in dental porcelain.
Twiggs, S W; Searle, J R; Ringle, R D; Fairhurst, C W
1989-09-01
Herein we describe a dilatometer that consists of a low-mass infrared furnace for rapid heating or cooling, an optical pyrometer, and a laser interferometer. The dilatometer facilitates observations of thermal expansion at rates comparable with those in dental laboratory practice over the temperature range necessary for comparison of thermal expansion of dental porcelain and alloy. Examples of thermal expansion data obtained at a 600 degrees C/min heating rate on NIST SRM 710 glass and dental porcelain are reported. To a limited extent, thermal expansion data above the glass-transition temperature range of dental porcelain were obtained. A shift of the glass-transition temperature range to higher temperatures was observed for both materials, compared with data obtained at 20 degrees C/min. PMID:2778175
Bergman kernel, balanced metrics and black holes
NASA Astrophysics Data System (ADS)
Klevtsov, Semyon
In this thesis we explore the connections between the Kähler geometry and Landau levels on compact manifolds. We rederive the expansion of the Bergman kernel on Kähler manifolds developed by Tian, Yau, Zelditch, Lu and Catlin, using path integral and perturbation theory. The physics interpretation of this result is as an expansion of the projector of wavefunctions on the lowest Landau level, in the special case that the magnetic field is proportional to the Kähler form. This is a geometric expansion, somewhat similar to the DeWitt-Seeley-Gilkey short time expansion for the heat kernel, but in this case describing the long time limit, without depending on supersymmetry. We also generalize this expansion to supersymmetric quantum mechanics and more general magnetic fields, and explore its applications. These include the quantum Hall effect in curved space, the balanced metrics and Kähler gravity. In particular, we conjecture that for a probe in a BPS black hole in type II strings compactified on Calabi-Yau manifolds, the moduli space metric is the balanced metric.
Pressurized heat treatment of glass-ceramic to control thermal expansion
Kramer, Daniel P.
1985-01-01
A method of producing a glass-ceramic having a specified thermal expansion value is disclosed. The method includes the step of pressurizing the parent glass material to a predetermined pressure during heat treatment so that the glass-ceramic produced has a specified thermal expansion value. Preferably, the glass-ceramic material is isostatically pressed. A method for forming a strong glass-ceramic to metal seal is also disclosed in which the glass-ceramic is fabricated to have a thermal expansion value equal to that of the metal. The determination of the thermal expansion value of a parent glass material placed in a high-temperature environment is also used to determine the pressure in the environment.
Debye temperature, thermal expansion, and heat capacity of TcC up to 100 GPa
Song, T.; Ma, Q.; Tian, J.H.; Liu, X.B.; Ouyang, Y.H.; Zhang, C.L.; Su, W.F.
2015-01-15
Highlights: • A number of thermodynamic properties of rocksalt TcC are investigated for the first time. • The quasi-harmonic Debye model is applied to take into account the thermal effects. • The pressure and temperature range up to about 100 GPa and 3000 K, respectively. - Abstract: The Debye temperature, thermal expansion coefficient, and heat capacity of ideal stoichiometric TcC in the rocksalt structure have been studied systematically using the ab initio plane-wave pseudopotential density functional theory method within the generalized gradient approximation. Through the quasi-harmonic Debye model, in which phononic effects are considered, the dependences of the Debye temperature, thermal expansion coefficient, constant-volume heat capacity, and constant-pressure heat capacity on pressure and temperature are successfully predicted. All the thermodynamic properties of rocksalt TcC have been predicted over the entire temperature range from 300 to 3000 K and pressures up to 100 GPa.
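The quasi-harmonic Debye model used here rests on the Debye heat-capacity integral. A sketch of the constant-volume part, assuming 2 atoms per formula unit for TcC; the θ_D = 700 K below is an illustrative value, not the paper's result:

```python
import math

def debye_cv(T, theta_D, n_atoms=2, R=8.314462618):
    """Constant-volume heat capacity (J mol^-1 K^-1) in the Debye model:
    C_V = 9 n R (T/theta_D)^3 * integral_0^{theta_D/T} x^4 e^x/(e^x-1)^2 dx,
    evaluated with the composite Simpson rule."""
    upper = theta_D / T
    m = 2000  # even number of Simpson intervals

    def f(x):
        if x == 0.0:
            return 0.0  # integrand ~ x^2 as x -> 0
        ex = math.exp(x)
        return x ** 4 * ex / (ex - 1.0) ** 2

    h = upper / m
    s = f(0.0) + f(upper)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(i * h)
    integral = s * h / 3.0
    return 9.0 * n_atoms * R * (T / theta_D) ** 3 * integral

cv_hot = debye_cv(3000.0, 700.0)  # should approach the Dulong-Petit limit 3nR
```

The two standard limits serve as checks: C_V → 3nR at high temperature and C_V ∝ T^3 at low temperature, exactly the behaviors the quasi-harmonic model interpolates between as pressure shifts θ_D.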
Negative thermal expansion and anomalies of heat capacity of LuB50 at low temperatures
Novikov, V. V.; Zhemoedov, N. A.; Matovnikov, A. V.; Mitroshenkov, N. V.; Kuznetsov, S. V.; Bud'ko, S. L.
2015-07-20
Heat capacity and thermal expansion of LuB50 boride were experimentally studied in the 2–300 K temperature range. The data reveal an anomalous contribution to the heat capacity at low temperatures, with a value proportional to the first power of temperature. This anomaly in the heat capacity is caused by disorder in the LuB50 crystalline structure and can be described by the soft atomic potential (SAP) model. The parameters of the approximation were determined. The temperature dependence of the LuB50 heat capacity over the whole temperature range was approximated by the sum of the SAP contribution, a Debye component, and two Einstein components. The SAP parameters for LuB50 were compared to the corresponding values for LuB66, which was studied earlier. Negative thermal expansion at low temperatures was experimentally observed for LuB50. Analysis of the experimental temperature dependence of the Grüneisen parameter of LuB50 suggests that the low-frequency oscillations described in the SAP model are responsible for the negative thermal expansion. As a result, the glasslike character of the low-temperature thermal behavior of LuB50 was confirmed.
Ha, Jae-Won; Kang, Dong-Hyun
2015-07-01
The aim of this study was to investigate the efficacy of near-infrared radiation (NIR) heating combined with lactic acid (LA) sprays for inactivating Salmonella enterica serovar Enteritidis on almond and pine nut kernels and to elucidate the mechanisms of the lethal effect of the NIR-LA combined treatment. Also, the effect of the combination treatment on product quality was determined. Separately prepared S. Enteritidis phage type (PT) 30 and non-PT 30 S. Enteritidis cocktails were inoculated onto almond and pine nut kernels, respectively, followed by treatments with NIR or 2% LA spray alone, NIR with distilled water spray (NIR-DW), and NIR with 2% LA spray (NIR-LA). Although surface temperatures of nuts treated with NIR were higher than those subjected to NIR-DW or NIR-LA treatment, more S. Enteritidis survived after NIR treatment alone. The effectiveness of NIR-DW and NIR-LA was similar, but significantly more sublethally injured cells were recovered from NIR-DW-treated samples. We confirmed that the enhanced bactericidal effect of the NIR-LA combination may not be attributable to cell membrane damage per se. NIR heat treatment might allow S. Enteritidis cells to become permeable to applied LA solution. The NIR-LA treatment (5 min) did not significantly (P > 0.05) cause changes in the lipid peroxidation parameters, total phenolic contents, color values, moisture contents, and sensory attributes of nut kernels. Given the results of the present study, NIR-LA treatment may be a potential intervention for controlling food-borne pathogens on nut kernel products.
Heat transfer in laminar wall boundary layer within noncentered unsteady expansion wave
NASA Technical Reports Server (NTRS)
Srinivasan, G.; Hall, J. G.
1975-01-01
The paper summarizes results of theoretical and experimental investigations of wall temperature change and heat transfer in the laminar boundary layer formed within noncentered unsteady expansion waves in shock tubes. The study is restricted to a class of noncentered plane waves which have one key feature in common with centered waves, i.e., the first derivatives of the inviscid flow quantities have values or are discontinuous at the wavehead. The wall temperature is calculated implicitly by matching the local heat transfer rates of gas and wall thermal boundary layers. Experimentally, the wall temperature was measured using a thin-film resistance thermometer and heat transfer was then deduced using a one-dimensional unsteady heat conduction equation.
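Deducing surface heat flux from a thin-film temperature trace via one-dimensional unsteady conduction is commonly done with the Cook-Felderman discretization for a semi-infinite solid (this is the standard scheme; the paper's implementation may differ):

```python
import math

def cook_felderman(t, T, rho_c_k):
    """Surface heat flux history q(t_n) on a semi-infinite solid from a
    surface-temperature history, Cook-Felderman form:
    q(t_n) = 2*sqrt(rho*c*k/pi)
             * sum_i (T_i - T_{i-1}) / (sqrt(t_n - t_i) + sqrt(t_n - t_{i-1})).
    rho_c_k is the product rho*c*k of the substrate (SI units)."""
    coef = 2.0 * math.sqrt(rho_c_k / math.pi)
    q = []
    for n in range(1, len(t)):
        s = sum((T[i] - T[i - 1])
                / (math.sqrt(t[n] - t[i]) + math.sqrt(t[n] - t[i - 1]))
                for i in range(1, n + 1))
        q.append(coef * s)
    return q

# check against the exact solution for constant flux q0 = 1 with rho*c*k = 1:
# surface temperature rises as T(t) = 2*q0*sqrt(t/(pi*rho*c*k))
t = [0.01 * i for i in range(101)]
T = [2.0 * math.sqrt(ti / math.pi) for ti in t]
q = cook_felderman(t, T, rho_c_k=1.0)
```

The constant-flux case is the usual validation: after the first few samples the recovered flux settles onto the imposed value.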
NASA Astrophysics Data System (ADS)
Bodryakov, V. Yu.; Bykov, A. A.
2016-05-01
The correlation between the volumetric thermal expansion coefficient β(T) and the heat capacity C(T) of aluminum is considered in detail. It is shown that a clear correlation is observed not only in the low-temperature range, where it is linear, but over a significantly wider temperature range, up to the melting temperature of the metal. A significant deviation of the dependence β(C) from the low-temperature linear behavior is observed beyond the point where the heat capacity reaches the classical Dulong-Petit limit of 3R (R is the universal gas constant).
Heat capacity and thermal expansion of icosahedral lutetium boride LuB66
Novikov, V V; Avdashchenko, D V; Matovnikov, A V; Mitroshenkov, N V; Bud’ko, S L
2014-01-07
The experimental values of heat capacity and thermal expansion for lutetium boride LuB66 in the temperature range of 2-300 K were analysed in the Debye-Einstein approximation. It was found that the vibration of the boron sub-lattice can be considered within the Debye model with high characteristic temperatures; low-frequency vibration of weakly connected metal atoms is described by the Einstein model.
Claudio Filippone, Ph.D.
1999-06-01
Thermal-hydraulic analysis of a specially designed steam expansion device (heat cavity) was performed to prove the feasibility of steam expansions at elevated rates for power generation with higher efficiency. The steam expansion process inside the heat cavity greatly depends on the gap within which the steam expands and accelerates. This system can be seen as a miniaturized boiler integrated inside the expander where steam (or the proper fluid) is generated almost instantaneously prior to its expansion in the work-producing unit. Relatively cold water is pulsed inside the heat cavity, where the heat transferred causes the water to flash to steam, thereby increasing its specific volume by a large factor. The gap inside the heat cavity forms a special nozzle-shaped system in which the fluid expands rapidly, accelerating toward the system outlet. The expansion phenomenon is the cause of ever-increasing fluid speed inside the cavity system, eliminating the need for moving parts (pumps, valves, etc.). In fact, the subsequent velocity induced by the sudden fluid expansion causes turbulent conditions, forcing accelerating Reynolds and Nusselt numbers which, in turn, increase the convective heat transfer coefficient. When the combustion of fossil fuels constitutes the heat source, the heat cavity concept can be applied directly inside the stator of conventional turbines, thereby greatly increasing the overall system efficiency.
Heat Transfer and Fluid Dynamics Measurements in the Expansion Space of a Stirling Cycle Engine
NASA Technical Reports Server (NTRS)
Jiang, Nan; Simon, Terrence W.
2006-01-01
The heater (or acceptor) of a Stirling engine, where most of the thermal energy is accepted into the engine by heat transfer, is the hottest part of the engine. Almost as hot is the adjacent expansion space of the engine. In the expansion space, the flow is oscillatory, impinging on a two-dimensional concavely-curved surface. Knowing the heat transfer on the inside surface of the engine head is critical to the engine design for efficiency and reliability. However, the flow in this region is not well understood and support is required to develop the CFD codes needed to design modern Stirling engines of high efficiency and power output. The present project is to experimentally investigate the flow and heat transfer in the heater head region. Flow fields and heat transfer coefficients are measured to characterize the oscillatory flow as well as to supply experimental validation for the CFD Stirling engine design codes. Presented also is a discussion of how these results might be used for heater head and acceptor region design calculations.
ERIC Educational Resources Information Center
Forsyth Technical Inst., Winston-Salem, NC.
This vocational physics individualized student instructional module on thermometers consists of three units: temperature and heat, expansion thermometers, and electrical thermometers. Designed with a laboratory orientation, experiments are included on linear expansion; making a bimetallic thermometer, a liquid-in-gas thermometer, and a gas…
NASA Astrophysics Data System (ADS)
Lortz, R.; Wang, Y.; Abe, S.; Meingast, C.; Paderno, Yu. B.; Filippov, V.; Junod, A.
2005-07-01
In an attempt to clarify conflicting published data, we report new measurements of specific heat, resistivity, magnetic susceptibility, and thermal expansivity up to room temperature for the 6 K superconductor ZrB12, using well-characterized single crystals with a residual resistivity ratio >9. The specific heat gives the bulk result 2Δ(0)/kBTc = 3.7 for the superconducting gap ratio, and excludes multiple gaps and d-wave symmetry for the Cooper pairs. The Sommerfeld constant γn = 0.34 mJ K−2 gat−1 and the magnetic susceptibility χ = −2.1 × 10−5 indicate a low density of states at the Fermi level. The Debye temperature θD is in the range 1000-1200 K near zero and room temperature, but decreases by a factor of ~2 at ~35 K. The specific heat and resistivity curves are inverted to yield approximations of the phonon density of states F(ω) and the spectral electron-phonon scattering function αtr2F(ω), respectively. Both unveil a 15 meV mode, attributed to Zr vibrations in oversized B cages, which gives rise to electron-phonon coupling. The thermal expansivity further shows that this mode is anharmonic, while the vanishingly small discontinuity at Tc establishes that the cell volume is nearly optimal with respect to Tc.
On the calculation of turbulent heat and mass transport downstream from an abrupt pipe expansion
NASA Technical Reports Server (NTRS)
Amano, R. S.
1982-01-01
A numerical study of heat/mass transfer in the separated flow region created by an abrupt pipe expansion is reported. The computations employed a hybrid method of central and upwind finite differencing to solve the full Navier-Stokes equations with a k-ε turbulence model. The study gives particular attention to the simulation of the region in the immediate vicinity of the wall, by formulating a near-wall model for the evaluation of the mean generation and destruction rates in the ε equation. The computed results were compared with experimental data and showed generally encouraging agreement with the measurements.
Hemingway, B.S.; Evans, H.T.; Nord, G.L.; Haselton, H.T.; Robie, R.A.; McGee, J.J.
1986-01-01
A small but sharp anomaly in the heat capacity of akermanite at 357.9 K, and a discontinuity in its thermal expansion at 693 K, as determined by XRD, have been found. The enthalpy and entropy assigned to the heat-capacity anomaly, for the purpose of tabulation, are 679 J/mol and 1.9 J/(mol·K), respectively. They were determined from the difference between the measured values of the heat capacity in the T interval 320-365 K and that obtained from an equation which fits the heat-capacity and heat-content data for akermanite from 290 to 1731 K. Heat-capacity measurements are reported for the T range from 9 to 995 K. The entropy and enthalpy of formation of akermanite at 298.15 K and 1 bar are 212.5 ± 0.4 J/(mol·K) and −3864.5 ± 4.0 kJ/mol, respectively. Weak satellite reflections have been observed in hk0 single-crystal X-ray precession photographs and electron-diffraction patterns of this material at room T. With in situ heating by TEM, the satellite reflections decreased significantly in intensity above 358 K and disappeared at about 580 K; on cooling, they reappeared. These observations suggest that the anomalies in the thermal behaviour of akermanite are associated with local displacements of Ca ions from the mirror plane (space group P421m) and accompanying distortion of the MgSi2O7 framework. -L.C.C.
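Anomaly enthalpies and entropies such as the 679 J/mol and 1.9 J/(mol·K) quoted here are obtained by integrating the excess heat capacity over the anomaly interval: ΔH = ∫ΔCp dT and ΔS = ∫(ΔCp/T) dT. A sketch using a synthetic triangular excess-Cp peak (illustrative shape and magnitude, not the measured akermanite data):

```python
def anomaly_H_S(T, dCp):
    """Enthalpy (J/mol) and entropy (J/(mol*K)) of a heat-capacity anomaly by
    trapezoidal integration of the excess heat capacity dCp over T:
    dH = integral dCp dT, dS = integral (dCp/T) dT."""
    dH = dS = 0.0
    for i in range(1, len(T)):
        h = T[i] - T[i - 1]
        dH += 0.5 * h * (dCp[i] + dCp[i - 1])
        dS += 0.5 * h * (dCp[i] / T[i] + dCp[i - 1] / T[i - 1])
    return dH, dS

# illustrative triangular excess-Cp peak centered near 357.9 K
T = [320.0 + i for i in range(46)]  # 320..365 K, 1 K steps
peak = 357.9
dCp = [max(0.0, 30.0 * (1.0 - abs(t - peak) / 10.0)) for t in T]
dH, dS = anomaly_H_S(T, dCp)
```

Because the peak is narrow, dS is close to dH divided by the peak temperature, which is a useful sanity check on any such integration.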
NASA Astrophysics Data System (ADS)
Rodriguez, G.; Clarke, S. A.; Taylor, A. J.; Forsman, A.
2004-07-01
We report on the development of a novel technique to measure the critical surface displacement in intense, ultrashort, laser-solid target experiments. Determination of the critical surface position is important for understanding near solid density plasma dynamics and transport from warm dense matter systems, and for diagnosing short scale length plasma expansion and hydrodynamic surface motion from short pulse, laser-heated, solid targets. Instead of inferring critical surface motion from spectral power shifts using a time-delayed probe pulse or from phase shifts using ultrafast pump-probe frequency domain interferometry (FDI), this technique directly measures surface displacement using a single ultrafast laser heating pulse. Our technique is based on an application of a Michelson Stellar interferometer to microscopic rather than stellar scales, and we report plasma scale length motion as small as 10 nm. We will present results for motion of plasmas generated from several target materials (Au, Al, Au on CH plastic) for a laser pulse intensity range from 1011 to 1016 W/cm2. Varying both the pulse duration and the pulse energy explores the dependence of the expansion mechanism on the energy deposited and on the peak intensity. Comparisons with hydrocodes reveal the applicability of hydrodynamic models.
High Enthalpy Studies of Capsule Heating in an Expansion Tunnel Facility
NASA Technical Reports Server (NTRS)
Dufrene, Aaron; MacLean, Matthew; Holden, Michael
2012-01-01
Measurements were made on an Orion heat shield model to demonstrate the capability of the new LENS-XX expansion tunnel facility to make high quality measurements of heat transfer distributions at flow velocities from 3 km/s (h0 = 5 MJ/kg) to 8.4 km/s (h0 = 36 MJ/kg). Thirty-nine heat transfer gauges, including both thin-film and thermocouple instruments, as well as four pressure gauges and high-speed Schlieren, were used to assess the aerothermal environment on the capsule heat shield. Only results from laminar boundary layer runs are reported. A major finding of this test series is that the high enthalpy, low-density flows displayed surface heating behavior consistent with some finite-rate recombination process occurring on the surface of the model. It is too early to speculate on the nature of the mechanism, but the response of the gauges on the surface seems generally repeatable and consistent for a range of conditions. This result is an important milestone in developing and proving a capability to make measurements in a ground test environment and extrapolate them to flight for conditions with extreme non-equilibrium effects. Additionally, no significant, isolated stagnation point augmentation ("bump") was observed in the tests in this facility. Cases at higher Reynolds number seemed to show the greatest overall increase in heating on the windward side of the model, which may in part be due to small-scale particulate.
Are heat waves susceptible to mitigate the expansion of a species progressing with global warming?
Robinet, Christelle; Rousselet, Jérôme; Pineau, Patrick; Miard, Florie; Roques, Alain
2013-09-01
A number of organisms, especially insects, are extending their range in response to the increasing trend of warmer temperatures. However, the effects of more frequent climatic anomalies on these species are not clearly known. The pine processionary moth, Thaumetopoea pityocampa, is a forest pest that is currently extending its geographical distribution in Europe in response to climate warming. However, its population density largely decreased in its northern expansion range (near Paris, France) the year following the 2003 heat wave. In this study, we tested whether the 2003 heat wave could have killed a large proportion of the egg masses. First, the local heat wave intensity was determined. Then, an outdoor experiment was conducted to measure the deviation between the temperatures recorded by weather stations and those observed within sun-exposed egg masses. A second experiment was conducted under laboratory conditions to simulate heat wave conditions (with night/day temperatures of 20/32°C and 20/40°C compared to the control treatment of 13/20°C) and measure the potential effects of this heat wave on egg masses. No effects on egg development were noticed. Then, larvae hatched from these egg masses were reared under mild conditions until the third instar, and no delayed effects on larval development were found. Rather than the eggs, the 2003 heat wave had probably affected, directly or indirectly, the young larvae that had already hatched when it occurred. Our results suggest that the effects of extreme climatic anomalies occurring over narrow time windows are difficult to determine because they strongly depend on the life stage of the species exposed to these anomalies. However, these effects could potentially reduce or enhance the average warming effects. As extreme weather conditions are predicted to become more frequent in the future, it is necessary to disentangle the effects of the warming trend from the effects of climatic anomalies when predicting the response of a
Heat kernels on cone of AdS2 and k-wound circular Wilson loop in AdS5 × S5 superstring
NASA Astrophysics Data System (ADS)
Bergamin, R.; Tseytlin, A. A.
2016-04-01
We compute the one-loop world-sheet correction to the partition function of the AdS5 × S5 superstring that should represent the k-fundamental circular Wilson loop in the planar limit. The 2d metric of the minimal surface ending on a k-wound circle at the boundary is that of a cone of AdS2 with deficit angle 2π(1-k). We compute the determinants of the 2d fluctuation operators by first constructing the heat kernels of the scalar and spinor Laplacians on the cone using the Sommerfeld formula. The final expression for the k-dependent part of the one-loop correction has a simple integral representation but differs from earlier results.
On the calculation of turbulent heat transport downstream from an abrupt pipe expansion
NASA Technical Reports Server (NTRS)
Chieng, C. C.; Launder, B. E.
1980-01-01
A numerical study is reported of flow and heat transfer in the separated flow region created by an abrupt pipe expansion. Computations employed an adaptation of the TEACH-2E computer program with the standard turbulence model. Emphasis is given to the simulation, from both a physical and a numerical viewpoint, of the region in the immediate vicinity of the wall, where turbulent transport gives way to molecular conduction and diffusion. Wall resistance laws, or wall functions, used to bridge this near-wall region are based on the idea that, beyond the viscous sublayer, the turbulent length scale is universal, increasing linearly with distance from the wall. Predictions for a diameter ratio of 0.54 show generally encouraging agreement with experimental data. At a diameter ratio of 0.43, different trends are discernible between measurement and calculation, though this appears to be due to effects unconnected with the wall region studied.
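The wall-function idea described above, a universal near-wall profile beyond the viscous sublayer, is usually expressed through the log-law. A minimal sketch follows; the constants are the conventional smooth-wall values (von Karman constant 0.41, E = 9.0), not necessarily those used in TEACH-2E.

```python
import math

# Conventional smooth-wall log-law constants (assumed, not from the paper)
KAPPA, E = 0.41, 9.0

def u_plus(y_plus: float) -> float:
    """Dimensionless near-wall velocity: linear viscous sublayer below the
    matching point y+ ~ 11.6, logarithmic layer above, reflecting the
    universal-length-scale assumption described in the abstract."""
    y_match = 11.6
    if y_plus < y_match:
        return y_plus
    return math.log(E * y_plus) / KAPPA

print(round(u_plus(5.0), 2))    # -> 5.0 (viscous sublayer: u+ = y+)
print(round(u_plus(100.0), 2))  # -> 16.59 (log layer)
```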
Boundary-layer computational model for predicting the flow and heat transfer in sudden expansions
NASA Technical Reports Server (NTRS)
Lewis, J. P.; Pletcher, R. H.
1986-01-01
Fully developed turbulent and laminar flows through symmetric planar and axisymmetric expansions with heat transfer were modeled using a finite-difference discretization of the boundary-layer equations. By using the boundary-layer equations to model separated flow in place of the Navier-Stokes equations, computational effort was reduced, permitting turbulence-modeling studies to be carried out economically. For laminar flow, the reattachment length was well predicted for Reynolds numbers as low as 20, and the details of the trapped eddy were well predicted for Reynolds numbers above 200. For turbulent flows, the Boussinesq assumption was used to express the Reynolds stresses in terms of a turbulent viscosity. Near-wall algebraic turbulence models based on Prandtl's mixing-length model and the maximum Reynolds shear stress were compared.
Surface urban heat island effect and its relationship with urban expansion in Nanjing, China
NASA Astrophysics Data System (ADS)
Tu, Lili; Qin, Zhihao; Li, Wenjuan; Geng, Jun; Yang, Lechan; Zhao, Shuhe; Zhan, Wenfeng; Wang, Fei
2016-04-01
Nanjing, a typical megacity in eastern China, has undergone dramatic expansion during the past decade. The surface urban heat island (SUHI) effect is an important indicator of the environmental consequences of urbanization and has rapidly changed the dynamics of Nanjing. Accurate measurements of the effects and changes resulting from the SUHI effect may provide useful information for urban planning. Index, centroid transfer, and correlation analyses were conducted to measure the dynamics of the SUHI and elucidate the relationship between the SUHI and urban expansion in Nanjing over the past decade. Overall, the results indicated that (1) the region affected by the SUHI effect gradually expanded southward and eastward from 2000 to 2012; (2) the centroid of the SUHI moved gradually southeastward and then southward and southwestward, which is consistent with the movement of the urban centroid; (3) the trajectory of the level-3 SUHI centroid did not correspond with the urban mass or SUHI centroids during the study period; and (4) the SUHI intensity and urban fractal characteristics were negatively correlated. In addition, we presented insights regarding the minimization of the SUHI effect in cities such as Nanjing, China.
NASA Astrophysics Data System (ADS)
Speranza, Giulio; Vona, Alessandro; Di Genova, Danilo; Romano, Claudia
2015-04-01
Rocksalt's overall characteristics and peculiarities are well known and have made rocksalt bodies one of the most favorable choices for nuclear waste storage. Low- to medium-temperature effects related to nuclear waste heat generation have been studied by several authors. However, high-temperature salt behavior has been poorly investigated, as has the effect of temperature increase on fluids contained in halite. Here we present the results of thermal expansion experiments in the range 50-700°C made on halite single crystals with different fluid contents. Our results show that thermally unaltered halite is subject, upon heating, to thermal instability around 300-450°C, with a sudden increase in expansivity, sample cracking and fluid emission. Moreover, thermal expansion is higher for fluid-rich salts. In contrast, thermally altered halite lacks this instability, showing a constant linear thermal expansion regardless of its fluid content. Rocksalt thermal instability, which is likely due to the development of fluid overpressure upon heating, also leads to a bulk density reduction. Thus, unaltered salt heated to temperatures around 300°C or more could cause damage, fluid emission and a density drop, increasing the salt mobility. For this reason, a detailed and quantitative study of fluid type, abundance and arrangement within crystals, as well as their response to stress and thermal changes, is fundamental for both scientific and applicative purposes regarding halite.
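The instability reported above shows up as a jump in the expansivity alpha(T) = (1/L) dL/dT extracted from a heating run. A finite-difference sketch with synthetic numbers (illustrative only, not measured halite data):

```python
def expansivity(temps, lengths):
    """Finite-difference estimate of the linear thermal expansion
    coefficient alpha(T) = (1/L) dL/dT from a heating run."""
    alphas = []
    for i in range(len(temps) - 1):
        dL = lengths[i + 1] - lengths[i]
        dT = temps[i + 1] - temps[i]
        alphas.append(dL / (lengths[i] * dT))
    return alphas

# Synthetic heating run: expansivity roughly triples past ~300 deg C
temps = [50, 150, 250, 350, 450]                     # deg C
lengths = [10.000, 10.004, 10.008, 10.020, 10.032]   # mm
print([round(a * 1e6, 1) for a in expansivity(temps, lengths)])  # -> [4.0, 4.0, 12.0, 12.0]
```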
Heats of solution and lattice-expansion and trapping energies of hydrogen in transition metals
NASA Astrophysics Data System (ADS)
Griessen, R.
1988-08-01
The heat of hydrogen solution in a metal at infinite dilution ΔH̄∞ is shown to depend on (1) the distance R between a hydrogen atom and its metallic nearest neighbors, (2) the characteristic band-structure energy ΔE = E_F - E_s, where E_F is the Fermi energy and E_s is essentially the center of the lowest conduction band of the host metal, and (3) the width W_d of the d band of the host metal. The semiempirical relation ΔH̄∞ = αΔE W_d^{1/2}/R + β [with α = 18.6 (kJ/mol H)(Å eV^{-3/2}) and β = -90 kJ/mol H if ΔE, W_d, and R are given in units of eV and Å, respectively] reproduces the experimental values of ΔH̄∞ remarkably well. It also reproduces the volume expansion accompanying hydrogen absorption and predicts the correct interstitial site occupancy of hydrogen in a transition metal. Furthermore, it makes it possible to estimate the binding energy of hydrogen to a vacancy.
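Taken at face value, the semiempirical relation is a one-line function. Note the caveat: the placement of R in the denominator is reconstructed here from the stated units of α and should be checked against the original paper, and the input values below are illustrative, not parameters for any particular metal.

```python
import math

ALPHA = 18.6   # (kJ/mol H) * Angstrom * eV^(-3/2), from the abstract
BETA = -90.0   # kJ/mol H, from the abstract

def heat_of_solution(delta_E_eV: float, W_d_eV: float, R_angstrom: float) -> float:
    """Semiempirical heat of hydrogen solution at infinite dilution,
    Delta-H = alpha * Delta-E * sqrt(W_d) / R + beta (kJ/mol H).
    The 1/R placement is inferred from the units of alpha, not verified."""
    return ALPHA * delta_E_eV * math.sqrt(W_d_eV) / R_angstrom + BETA

# Illustrative numbers only (not fitted to any specific host metal):
print(round(heat_of_solution(2.0, 6.0, 2.0), 1))  # -> -44.4
```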
A STUDY ON DEF-RELATED EXPANSION IN HEAT-CURED CONCRETE
NASA Astrophysics Data System (ADS)
Kawabata, Yuichiro; Matsushita, Hiromichi
This paper reports the requirements for deleterious expansion due to delayed ettringite formation (DEF) based on field experience. In recent years, deleterious expansion of concrete has been reported. The affected concrete is characterized by expansion and cracking after several years of service in wet environments. In many cases, the concrete consists of white cement, limestone and copper slag, and it has been manufactured at elevated temperatures for early shipment. Detailed analysis made clear that the cause of the deleterious expansion was DEF. The gaps that are characteristic of DEF-damaged concrete were observed around the limestone aggregate. The use of limestone aggregate possibly affects DEF-related expansion, although the steam-curing conditions were the most influential factor in DEF-related expansion. Based on the experimental data, the mechanism of DEF-related expansion and a methodology for diagnosing DEF-deteriorated concrete structures are discussed in this paper.
Negative thermal expansion and anomalies of heat capacity of LuB_{50} at low temperatures
Novikov, V. V.; Zhemoedov, N. A.; Matovnikov, A. V.; Mitroshenkov, N. V.; Kuznetsov, S. V.; Bud'ko, S. L.
2015-07-20
Heat capacity and thermal expansion of the LuB_{50} boride were experimentally studied in the 2-300 K temperature range. The data reveal an anomalous contribution to the heat capacity at low temperatures. The value of this contribution is proportional to the first power of temperature. It was identified that this anomaly in the heat capacity is caused by the effect of disorder in the LuB_{50} crystalline structure, and it can be described in the soft atomic potential (SAP) model. The parameters of the approximation were determined. The temperature dependence of the LuB_{50} heat capacity over the whole temperature range was approximated by the sum of the SAP contribution, a Debye term and two Einstein components. The parameters of the SAP contribution for LuB_{50} were compared to the corresponding values for LuB_{66}, which was studied earlier. Negative thermal expansion at low temperatures was experimentally observed for LuB_{50}. The analysis of the experimental temperature dependence of the Gruneisen parameter of LuB_{50} suggested that the low-frequency oscillations described in the SAP model are responsible for the negative thermal expansion. As a result, the glasslike character of the behavior of the LuB_{50} thermal characteristics at low temperatures was confirmed.
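The fit described above, a disorder term linear in T plus Debye and Einstein contributions, can be sketched as follows. The Debye integral is evaluated by simple midpoint quadrature, and all parameter values are illustrative placeholders, not the published fit for LuB_{50}.

```python
import math

R_GAS = 8.314  # J/(mol K)

def debye_cv(T, theta_D, n=200):
    """Debye heat capacity per mole via numerical quadrature of the
    Debye integral; tends to 3R at high temperature."""
    x_max = theta_D / T
    dx = x_max / n
    s = 0.0
    for i in range(n):  # midpoint rule
        x = (i + 0.5) * dx
        s += x**4 * math.exp(x) / (math.exp(x) - 1.0) ** 2 * dx
    return 9 * R_GAS * (T / theta_D) ** 3 * s

def einstein_cv(T, theta_E):
    """Einstein heat capacity per mole for one characteristic temperature."""
    x = theta_E / T
    return 3 * R_GAS * x**2 * math.exp(x) / (math.exp(x) - 1.0) ** 2

def total_cv(T, a_sap, theta_D, einsteins):
    """Model form from the abstract: linear (disorder/SAP) term plus a
    Debye term and two Einstein terms; parameters here are placeholders."""
    return a_sap * T + debye_cv(T, theta_D) + sum(einstein_cv(T, th) for th in einsteins)

print(round(total_cv(10.0, 0.005, 1000.0, [200.0, 500.0]), 4))
```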
NASA Astrophysics Data System (ADS)
Terekhov, V. I.; Bogatko, T. V.
2016-06-01
The results of a numerical study of the influence of the thicknesses of the dynamic and thermal boundary layers on turbulent separation and heat transfer in a tube with sudden expansion are presented. The first part of this work studies the influence of the thickness of the dynamic boundary layer, which was varied by changing the length of the stabilization section from zero to half of the tube diameter. In the second part of the study, the flow before separation was hydrodynamically stabilized and the thermal layer before the expansion could change its thickness from 0 to D1/2. The Reynolds number was varied in the range Re_{D1} = 6.7·10^3 to 1.33·10^5, and the degree of tube expansion remained constant at ER = (D2/D1)^2 = 1.78. A significant effect of the thickness of the separated boundary layer on both the dynamic and thermal characteristics of the flow is shown. In particular, it was found that with an increase in the thickness of the boundary layer the recirculation zone grows and the maximal Nusselt number decreases. The growth of the thermal layer thickness does not affect the hydrodynamic characteristics of the flow after separation, but it does reduce the heat transfer intensity in the separation region and shifts the location of maximal heat transfer away from the point of tube expansion. A generalizing dependence for the maximal Nusselt number at various thermal layer thicknesses is given. Comparison with experimental data confirmed the main trends in the behavior of heat and mass transfer processes in separated flows behind a step with different thermal prehistories.
NASA Astrophysics Data System (ADS)
Oon, Cheen Sean; Nee Yew, Sin; Chew, Bee Teng; Salim Newaz, Kazi Md; Al-Shamma'a, Ahmed; Shaw, Andy; Amiri, Ahmad
2015-05-01
Flow separation and reattachment of a 0.2% TiO2 nanofluid in an asymmetric abrupt expansion are studied in this paper. Such flows occur in various engineering and heat transfer applications. The computational fluid dynamics package FLUENT is used to investigate turbulent nanofluid flow in a horizontal double-tube heat exchanger. The mesh of this model consists of 43383 nodes and 74891 elements. Only a quarter of the annular pipe is developed and simulated, as the geometry is symmetric. The standard k-epsilon model with a second-order implicit, pressure-based solver is applied. Reynolds numbers between 17050 and 44545, step height ratios of 1 and 1.82 and a constant heat flux of 49050 W/m2 were used in the simulation. Water was used as the working fluid to benchmark the heat transfer enhancement. Numerical simulation results show that an increase in the Reynolds number increases the heat transfer coefficient and Nusselt number of the flowing fluid. Moreover, the surface temperature drops to its lowest value just after the expansion and then gradually increases along the pipe. Finally, the chaotic movement and higher thermal conductivity of the TiO2 nanoparticles contribute to the overall heat transfer enhancement of the nanofluid compared to water.
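As a rough sanity check of the reported trend (heat transfer coefficient rising with Reynolds number), the classical Dittus-Boelter correlation for a plain pipe can serve as a baseline. It does not capture the expansion geometry or the nanoparticle effects; it only illustrates the Re -> Nu scaling.

```python
def dittus_boelter(Re: float, Pr: float, heating: bool = True) -> float:
    """Classical Dittus-Boelter correlation for fully developed turbulent
    pipe flow: Nu = 0.023 * Re^0.8 * Pr^n, with n = 0.4 for heating and
    n = 0.3 for cooling. Baseline only; not the paper's CFD model."""
    n = 0.4 if heating else 0.3
    return 0.023 * Re**0.8 * Pr**n

# Pr ~ 7 is a typical value for water near 20 C (assumed here)
for Re in (17050, 30000, 44545):
    print(Re, round(dittus_boelter(Re, Pr=7.0), 1))
```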
NASA Astrophysics Data System (ADS)
Capra, B. R.; Morgan, R. G.; Leyland, P.
2005-02-01
The present study focused on simulating a trajectory point towards the end of the first experimental heatshield of the FIRE II vehicle, at a total flight time of 1639.53 s. Scale replicas were sized according to binary scaling and instrumented with thermocouples for testing in the X1 expansion tube, located at The University of Queensland. Correlation of flight to experimental data was achieved through the separation and independent treatment of the heat modes. Preliminary investigation indicates that the absolute value of radiant surface flux is conserved between two binary-scaled models, whereas convective heat transfer increases with the length scale. This difference in the scaling techniques results in the overall contribution of radiative heat transfer diminishing to less than 1% in expansion tubes, from a flight value of approximately 9-17%. From empirical correlations it has been shown that the St√Re number decreases, under special circumstances, in expansion tubes by the percentage radiation present on the flight vehicle. Results obtained in this study give a strong indication that the relative radiative heat transfer contribution in the expansion tube tests is less than that in flight, supporting the analysis that the absolute value remains constant with binary scaling.
Key words: Heat Transfer, FIRE II Flight Vehicle, Expansion Tubes, Binary Scaling.
NOMENCLATURE
dA: elemental surface area, m2
H0: stagnation enthalpy, MJ/kg
L: arbitrary length, m
ls: scale factor, equal to Lf/Le
M: Mach number
ṁ: mass flow rate, kg/s
p: pressure, kPa
q̇: heat transfer rate, W/m2
q̄: averaged heat transfer rate, W/m2
RN: nose radius, m
Re: Reynolds number, equal to ρU·RN/µ
s/RD: radial distance from symmetry axis
St: Stanton number, equal to q̇/(ρU·H0)
St√Re: q̇·RN^{1/2}/((ρU)^{1/2}·µ^{1/2}·H0), averaged over the forebody radius (D/2)
T: temperature, K
U: velocity, m/s
Ue: equivalent velocity, m/s, equal to √(2·H0)
U1: primary shock speed, m/s
U2: secondary shock speed, m/s
ρ: density, kg/m3
ρL: binary scaling parameter
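The nomenclature defines both St and the combined parameter St√Re. The sketch below verifies that the combined definition equals St·√Re when Re = ρU·RN/µ; the numerical inputs are illustrative expansion-tube-like values, not data from the paper.

```python
import math

def stanton(q_dot, rho, U, H0):
    """Stanton number St = q_dot / (rho * U * H0), per the nomenclature."""
    return q_dot / (rho * U * H0)

def st_sqrt_re(q_dot, rho, U, H0, R_N, mu):
    """Combined parameter St*sqrt(Re) = q_dot * R_N^(1/2) /
    ((rho*U)^(1/2) * mu^(1/2) * H0), as defined in the nomenclature."""
    return q_dot * math.sqrt(R_N) / (math.sqrt(rho * U) * math.sqrt(mu) * H0)

# Illustrative values only: heat flux, density, velocity, stagnation enthalpy
q_dot, rho, U, H0 = 5e6, 0.01, 8000.0, 36e6   # W/m2, kg/m3, m/s, J/kg
R_N, mu = 0.05, 2e-5                          # m, Pa*s

Re = rho * U * R_N / mu
direct = st_sqrt_re(q_dot, rho, U, H0, R_N, mu)
# The two routes agree up to floating-point rounding:
print(abs(direct - stanton(q_dot, rho, U, H0) * math.sqrt(Re)) < 1e-9 * direct)  # -> True
```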
ERIC Educational Resources Information Center
Moore, William M.
1984-01-01
Describes the procedures and equipment for an experiment on the adiabatic expansion of gases suitable for demonstration and discussion in the physical chemistry laboratory. The expansion produced shows how the process can change the temperature and move the system to a different location on an isotherm. (JN)
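The temperature change in such a demonstration follows from the reversible adiabatic relation for an ideal gas. A minimal sketch, using gamma = 7/5 for a diatomic gas:

```python
def adiabatic_final_T(T1_K: float, p1: float, p2: float, gamma: float) -> float:
    """Reversible adiabatic expansion of an ideal gas:
    T2 = T1 * (p2/p1)^((gamma-1)/gamma)."""
    return T1_K * (p2 / p1) ** ((gamma - 1.0) / gamma)

# Diatomic gas (gamma = 7/5) expanding from 2 atm to 1 atm, starting at 300 K:
T2 = adiabatic_final_T(300.0, 2.0, 1.0, 1.4)
print(round(T2, 1))  # -> 246.1 (the gas cools on expansion)
```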
Flaishman, Moshe A; Peles, Yuval; Dahan, Yardena; Milo-Cochavi, Shira; Frieman, Aviad; Naor, Amos
2015-04-01
Temperature is one of the most significant factors affecting physiological and biochemical aspects of fruit development. Current and progressing global warming is expected to change climate in the traditional deciduous fruit tree cultivation regions. In this study, 'Golden Delicious' trees, grown in a controlled environment or commercial orchard, were exposed to different periods of heat treatment. Early fruitlet development was documented by evaluating cell number, cell size and fruit diameter for 5-70 days after full bloom. Normal activities of molecular developmental and growth processes in apple fruitlets were disrupted under daytime air temperatures of 29°C and higher as a result of significant temporary declines in cell-production and cell-expansion rates, respectively. Expression screening of selected cell cycle and cell expansion genes revealed the influence of high temperature on genetic regulation of apple fruitlet development. Several core cell-cycle and cell-expansion genes were differentially expressed under high temperatures. While expression levels of B-type cyclin-dependent kinases and A- and B-type cyclins declined moderately in response to elevated temperatures, expression of several cell-cycle inhibitors, such as Mdwee1, Mdrbr and Mdkrps was sharply enhanced as the temperature rose, blocking the cell-cycle cascade at the G1/S and G2/M transition points. Moreover, expression of several expansin genes was associated with high temperatures, making them potentially useful as molecular platforms to enhance cell-expansion processes under high-temperature regimes. Understanding the molecular mechanisms of heat tolerance associated with genes controlling cell cycle and cell expansion may lead to the development of novel strategies for improving apple fruit productivity under global warming.
Chemical path of ettringite formation in heat-cured mortar and its relationship to expansion
NASA Astrophysics Data System (ADS)
Shimada, Yukie
Delayed ettringite formation (DEF) refers to a deterioration process of cementitious materials that have been exposed to high temperatures and subsequent moist conditions, often resulting in damaging expansion. The occurrence of DEF-related damage may lead to severe economic consequences. While concerns of related industries continue to raise the need for reliable and practical test methods for DEF assessment, the mechanism(s) involved in DEF remains controversial. In order to provide a better understanding of the DEF phenomenon, the present study investigated mortar systems made with various mixing and curing parameters for detailed changes in pore solution chemistry and solid phase development, while corresponding changes in physical properties were also closely monitored. This approach enabled the development of a correlation between the chemical and physical changes and provided the opportunity for a holistic analysis. The present study revealed that there exist relationships between the physical properties and expansive behavior. The normal aging process of the cementitious systems involves dissolution of ettringite crystals finely distributed within the hardened cement paste and subsequent recrystallization as innocuous crystals in the largest accessible spaces. This process, known as Ostwald ripening, facilitates relaxation of any expansive pressure developed within the paste. The rate of Ostwald ripening is rather slow in a well-compacted, dense microstructure containing few flaws. Thus, an increase in mechanical strength accompanied by a reduction in diffusion rate by altering the mortar parameters increases the risk of DEF-related expansion and vice versa. Introduction of the Ostwald ripening process as a stress relief mechanism to the previously proposed paste expansion hypothesis provides a comprehensive description of the observed expansive behavior. Chemical analyses provided semi-quantitative information on the stability of ettringite during high
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
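The core idea, avoiding the full n x n kernel matrix by working against a sampled subset, can be illustrated with a toy winner-take-all clusterer in an approximate kernel feature space. This is only a sketch of the sampling idea; the paper's AKCL constructs its subspace and learning schedule far more carefully.

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two points."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def approx_kcl(data, k, landmarks, iters=20):
    """Toy sketch of sampling-based kernel competitive learning: represent
    each point by its kernel similarities to a small landmark set (never
    forming the full n x n kernel matrix), then run winner-take-all
    competitive updates in that reduced space."""
    feats = [[rbf(x, l) for l in landmarks] for x in data]
    # Deterministic initialisation for the sketch: centers spread over the data
    step = max((len(feats) - 1) // max(k - 1, 1), 1)
    centers = [list(feats[min(i * step, len(feats) - 1)]) for i in range(k)]
    dist2 = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    for t in range(iters):
        lr = 0.5 / (t + 1)  # decaying learning rate
        for f in feats:
            w = min(range(k), key=lambda j: dist2(centers[j], f))  # winner
            centers[w] = [c + lr * (a - c) for c, a in zip(centers[w], f)]
    return [min(range(k), key=lambda j: dist2(centers[j], f)) for f in feats]

# Two well-separated blobs; one landmark per blob suffices here.
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
labels = approx_kcl(data, k=2, landmarks=[data[0], data[3]])
print(labels)  # -> [0, 0, 0, 1, 1, 1]
```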
Mihaila, Bogden; Zubelewicz, Aleksander; Stan, Marius; Ramirez, Juan
2008-01-01
We study the thermal expansion of a UO_{2+x} nuclear fuel rod in the context of a model coupling heat transfer and oxygen diffusion discussed previously by J.C. Ramirez, M. Stan and P. Cristea [J. Nucl. Mater. 359 (2006) 174]. We report results of simulations performed for steady-state and time-dependent regimes in one-dimensional configurations. A variety of initial- and boundary-value scenarios are considered. We use material properties obtained from previously published correlations or from analysis of previously published data. All simulations were performed using the commercial code COMSOL Multiphysics(TM) and are readily extendable to include multidimensional effects.
NASA Astrophysics Data System (ADS)
Tomarov, G. V.; Nikolaev, A. E.; Semenov, V. N.; Shipkov, A. A.; Shepelev, S. V.
2015-06-01
The results of laboratory studies of material properties, together with numerical and analytical investigations assessing the stress-strain state of the metal of bellows expansion joints subjected to corrosion failure in the district heating pipelines operated by MOEK, are presented. The main causes and the dominant mechanisms of failure of the expansion joints have been identified. The influence of the initial crevice defects and the operating conditions on the features and intensity of the degradation processes in expansion joints used in the MOEK district heating pipelines has been established.
Nakanishi, Koichi; Kogure, Akinori; Deuchi, Keiji; Kuwana, Ritsuko; Takamatsu, Hiromu; Ito, Kiyoshi
2015-01-01
We previously developed a method for evaluating the heat resistance of microorganisms by measuring the transition temperature at which the coefficient of linear expansion of a cell changes. Here, we performed heat resistance measurements using a scanning probe microscope with a nano thermal analysis system. The microorganisms studied included six strains of the genus Bacillus or related genera, one strain each of the thermophilic obligate anaerobic bacterial genera Thermoanaerobacter and Moorella, two strains of heat-resistant mold, two strains of non-sporulating bacteria, and one strain of yeast. Both vegetative cells and spores were evaluated. The transition temperature at which the coefficient of linear expansion due to heating changed from a positive value to a negative value correlated strongly with the heat resistance of the microorganism as estimated from the D value. The microorganisms with greater heat resistance exhibited higher transition temperatures. There was also a strong negative correlation between the coefficient of linear expansion and heat resistance in bacteria and yeast, such that microorganisms with greater heat resistance showed lower coefficients of linear expansion. These findings suggest that our method could be useful for evaluating the heat resistance of microorganisms.
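The measurement principle described above, locating the temperature where the coefficient of linear expansion flips from positive to negative, reduces to a simple scan over the expansivity trace. A sketch with synthetic (not measured) data:

```python
def transition_temperature(temps, alphas):
    """Return the temperature at which the coefficient of linear expansion
    changes sign from positive to negative (the transition temperature used
    as a heat-resistance proxy in the abstract), or None if no sign change."""
    for t, a_prev, a_next in zip(temps[1:], alphas, alphas[1:]):
        if a_prev > 0 and a_next <= 0:
            return t
    return None

# Synthetic expansion data (illustrative values only):
temps = [40, 60, 80, 100, 120]                 # deg C
alphas = [3e-5, 2e-5, 1e-5, -0.5e-5, -1e-5]    # 1/K
print(transition_temperature(temps, alphas))   # -> 100
```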
Dalir, Nemat
2014-01-01
An exact analytical solution is obtained for the problem of three-dimensional transient heat conduction in a multilayered sphere. The sphere has multiple layers in the radial direction and, in each layer, time-dependent and spatially nonuniform volumetric internal heat sources are considered. To obtain the temperature distribution, the eigenfunction expansion method is used. An arbitrary combination of homogeneous boundary conditions of the first or second kind can be applied in the angular and azimuthal directions. Nevertheless, the solution is valid for nonhomogeneous boundary conditions of the third kind (convection) in the radial direction. A case-study problem for a three-layer quarter-spherical region is solved and the results are discussed. PMID:27433511
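A one-layer, one-dimensional special case shows the eigenfunction-expansion machinery: the classical series for a homogeneous solid sphere with uniform initial temperature and a fixed surface temperature. The paper's multilayer, three-dimensional solution generalises this with layer-wise eigenfunctions and interface conditions; this sketch is only the textbook base case.

```python
import math

def sphere_temperature(r, t, R, alpha, T0, Ts, n_terms=50):
    """Eigenfunction-expansion (series) solution for transient conduction
    in a homogeneous solid sphere of radius R, thermal diffusivity alpha,
    uniform initial temperature T0 and surface held at Ts."""
    if r == 0.0:
        r = 1e-9 * R  # avoid 0/0 at the centre; sin(x)/x -> 1
    s = 0.0
    for n in range(1, n_terms + 1):
        lam = n * math.pi / R  # radial eigenvalue of mode n
        s += ((-1) ** (n + 1)) * (2.0 / (lam * r)) * math.sin(lam * r) \
             * math.exp(-alpha * lam**2 * t)
    return Ts + (T0 - Ts) * s

# Centre temperature shortly after the surface is quenched from 100 to 0:
print(round(sphere_temperature(0.0, 50.0, 1.0, 1e-3, 100.0, 0.0), 2))
```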
Ning, F L; Glavatskiy, K; Ji, Z; Kjelstrup, S; H Vlugt, T J
2015-01-28
Understanding the thermal and mechanical properties of CH4 and CO2 hydrates is essential for the replacement of CH4 with CO2 in natural hydrate deposits as well as for CO2 sequestration and storage. In this work, we present the isothermal compressibility, isobaric thermal expansion coefficient and specific heat capacity of fully occupied single-crystal sI CH4 hydrates, CO2 hydrates and hydrates of their mixtures, computed using molecular dynamics simulations. Eight rigid/nonpolarisable water interaction models and three CH4 and CO2 interaction potentials were selected to examine the atomic interactions in the sI hydrate structure. The TIP4P/2005 water model combined with the DACNIS united-atom CH4 potential and the TraPPE rigid CO2 potential were found to be suitable molecular interaction models. Using these models, both the lattice parameters and the compressibility of the sI hydrates agree with experimental measurements. The calculated bulk modulus for any mixture ratio of CH4 and CO2 hydrates varies between 8.5 and 10.4 GPa at 271.15 K over the pressure range 10-100 MPa. The calculated thermal expansion coefficients and specific heat capacities of CH4 hydrates are also comparable with experimental values above approximately 260 K. The compressibility and expansion coefficient of mixed guest-gas hydrates increase with an increasing CO2-to-CH4 ratio, while the bulk modulus and specific heat capacity exhibit the opposite trend. The specific heat capacities of 2220-2699 J kg⁻¹ K⁻¹ reported here for any mixture ratio of CH4 and CO2 hydrates are the first such values reported. These computational results provide a useful database for practical natural gas recovery from CH4 hydrates in deep oceans where CO2 is considered to replace CH4, as well as for the phase equilibrium and mechanical stability of gas hydrate-bearing sediments. The computational schemes also provide an appropriate balance between computational accuracy and cost for predicting
Ritchie, R.H.; Sakakura, A.Y.
1956-01-01
The formal solutions of problems involving transient heat conduction in infinite, internally bounded cylindrical solids may be obtained by the Laplace transform method. Asymptotic series representing the solutions for large values of time are given in terms of functions related to the derivatives of the reciprocal gamma function. The results are applied to the internally bounded infinite cylindrical medium with (a) the boundary held at constant temperature; (b) constant heat flow over the boundary; and (c) the "radiation" boundary condition. A problem in the flow of gas through a porous medium is considered in detail.
Hedstrom, C S; Shearer, P W; Miller, J C; Walton, V M
2014-10-01
Halyomorpha halys Stål, the brown marmorated stink bug (Hemiptera: Pentatomidae), is an invasive pest with established populations in Oregon. The generalist feeding habits of H. halys suggest it has the potential to be a pest of many specialty crops grown in Oregon, including hazelnuts, Corylus avellana L. The objectives of this study were to: 1) characterize the damage to developing hazelnut kernels resulting from feeding by H. halys adults, 2) determine how the timing of feeding during kernel development influences damage to kernels, and 3) determine if hazelnut shell thickness has an effect on feeding frequency on kernels. Adult brown marmorated stink bugs were allowed to feed on developing nuts for 1-wk periods from initial kernel development (spring) until harvest (fall). Developing nuts not exposed to feeding by H. halys served as a control treatment. The degree of damage and diagnostic symptoms corresponded with the hazelnut kernels' physiological development. Our results demonstrated that when H. halys fed on hazelnuts before kernel expansion, development of the kernels could cease, resulting in empty shells. When stink bugs fed during kernel expansion, kernels appeared malformed. When stink bugs fed on mature nuts the kernels exhibited corky, necrotic areas. Although significant differences in shell thickness were observed among the cultivars, no significant differences occurred in the proportions of damaged kernels based on field tests and laboratory choice tests. The results of these studies demonstrated that commercial hazelnuts are susceptible to damage caused by the feeding of H. halys throughout the entire period of kernel development.
The role of turbulence in coronal heating and solar wind expansion.
Cranmer, Steven R; Asgari-Targhi, Mahboubeh; Miralles, Mari Paz; Raymond, John C; Strachan, Leonard; Tian, Hui; Woolsey, Lauren N
2015-05-13
Plasma in the Sun's hot corona expands into the heliosphere as a supersonic and highly magnetized solar wind. This paper provides an overview of our current understanding of how the corona is heated and how the solar wind is accelerated. Recent models of magnetohydrodynamic turbulence have progressed to the point of successfully predicting many observed properties of this complex, multi-scale system. However, it is not clear whether the heating in open-field regions comes mainly from the dissipation of turbulent fluctuations that are launched from the solar surface, or whether the chaotic 'magnetic carpet' in the low corona energizes the system via magnetic reconnection. To help pin down the physics, we also review some key observational results from ultraviolet spectroscopy of the collisionless outer corona. PMID:25848083
Duff, I.
1994-12-31
This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: "Current status of user-level sparse BLAS"; "Current status of the sparse BLAS toolkit"; and "Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit".
NASA Astrophysics Data System (ADS)
Allen, Philip B.
2015-08-01
The quasiharmonic (QH) approximation uses harmonic vibrational frequencies ω_{Q,H}(V) computed at volumes V near V₀, where the Born-Oppenheimer (BO) energy E_el(V) is minimum. When used in the harmonic free energy, the QH approximation gives a good zeroth-order theory of thermal expansion and a first-order theory of the bulk modulus, where nth-order means smaller than the leading term by ε^n, with ε = ħω_vib/E_el or k_B T/E_el, and E_el an electronic energy scale, typically 2 to 10 eV. Experiment often shows evidence for next-order corrections. When such corrections are needed, anharmonic interactions must be included. The most accessible measure of anharmonicity is the quasiparticle (QP) energy ω_Q(V,T) seen experimentally by vibrational spectroscopy. However, this cannot simply be inserted into the harmonic free energy F_H. In this paper, a free energy is found that corrects the double counting of anharmonic interactions made when F is approximated by F_H(ω_Q(V,T)). The term "QP thermodynamics" is used for this way of treating anharmonicity. It enables (n+1)-order corrections if QH theory is accurate to order n. This procedure is used to give corrections to the specific heat and volume thermal expansion. The QH formulas for the isothermal (B_T) and adiabatic (B_S) bulk moduli are clarified, and the route to higher-order corrections is indicated.
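The quasiharmonic free energy underlying this discussion has a standard textbook form; as a reference sketch (the standard expression with mode index Q, not a formula quoted from the paper):

```latex
% Quasiharmonic free energy: harmonic phonon free energy evaluated
% with volume-dependent frequencies \omega_{Q,H}(V)
F_{\mathrm{QH}}(V,T) = E_{\mathrm{el}}(V)
  + \sum_{Q}\left[\frac{\hbar\omega_{Q,H}(V)}{2}
  + k_B T \,\ln\!\left(1 - e^{-\hbar\omega_{Q,H}(V)/k_B T}\right)\right]
```

Minimizing F_QH over V at each T gives the equilibrium volume V(T), and hence the thermal expansion; B_T = V ∂²F/∂V² evaluated at that minimum gives the isothermal bulk modulus.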
Heating of the Upper Atmosphere and the Expansion of the Corona of Titan
NASA Astrophysics Data System (ADS)
Michael, M.; Johnson, R. E.; Shematovich, V. I.; La Haye, V. D.; Waite, H.; Wong, M. C.; Sittler, E. C.; Ledvina, S.; Luhmann, J. G.; Leblanc, F.
2005-12-01
The atmosphere of Titan and its plasma environment are of much interest due to the recent observations by the Cassini spacecraft. It is well established that the upper atmosphere of Titan is continuously bombarded by pick-up ions and deflected ambient magnetospheric ions (Shematovich et al. 2003). The deposition of energy, escape of atoms and molecules, and heating of the upper atmosphere of Titan are studied using a direct simulation Monte Carlo method (Michael et al. 2005). It is found that the globally averaged flux of deflected magnetospheric ions and pick-up ions deposits more energy in the exobase region of Titan than solar radiation does. The energy deposition in this region determines the non-thermal corona, the atmospheric loss, and the production of a neutral torus. The inclusion of molecular pickup ions is found to be critical to accurately determining the amount of energy deposited close to the exobase (Michael and Johnson 2005). Depending on the nature of the local interaction with the magnetosphere, the plasma flow through the exobase region and heating of the exobase region can increase the content of the corona (Michael and Johnson 2005). We compare the model results with the observational data of a number of instruments onboard the Cassini spacecraft. References: Michael, M., Johnson, R.E., Leblanc, F., Liu, M., Luhmann, J.G., Shematovich, V.I., Ejection of nitrogen from Titan's atmosphere by magnetospheric ions and pickup ions. Icarus, 175, 263-267, 2005. Michael, M., Johnson, R.E., Energy deposition of pickup ions and heating of Titan's atmosphere, Planet. Space Sci., in press, 2005. Shematovich, V.I., Johnson, R.E., Michael, M., Luhmann, J.G., Nitrogen loss from Titan. J. Geophys. Res., 108, 5086, 10.1029/2003JE002096, 2003.
NASA Astrophysics Data System (ADS)
Artemov, V. I.; Minko, K. B.; Yan'kov, G. G.
2015-12-01
Homogeneous equilibrium and nonequilibrium (relaxation) models are used to simulate flash-boiling flows in nozzles. The simulations were performed using the authors' CFD code ANES. Existing experimental data are used to test the implemented mathematical model and the modified algorithms of the ANES code. The results of test calculations are presented, together with data obtained for the nozzle and expansion unit of the steam generator and separator in the waste-heat system at ZAO NPVP Turbokon. The SIMPLE algorithm may be used for transonic and supersonic flashing liquid flows. The relaxation model yields better agreement with experimental data regarding the distribution of void fraction along the nozzle axis. For this class of flow, the difference between one- and two-dimensional models is slight.
NASA Astrophysics Data System (ADS)
Lee, Daehee
An experimental investigation was made of turbulent heat transfer and fluid flow in the separated, recirculating and reattached regions created by axisymmetric and asymmetric abrupt expansions, and by an abrupt expansion followed by an abrupt contraction, in a circular tube at uniform wall temperature. The flow just upstream of the expansion was unheated and proved to be fully developed hydrodynamically at the entrance to the heated abrupt-expansion region. Measurements were made at small-to-large diameter ratios of 0.4 and 0.533 and over a Reynolds number range of 4100 to 21900. The mean velocity and temperature profiles were measured downstream of an axisymmetric abrupt expansion. Heat transfer coefficients were determined both around the circumference of the tube and along its length. The general results indicate a substantial augmentation of the heat transfer coefficients downstream of the flow separation, caused by the high turbulence and mixing action, even though the mean velocity in the recirculating region is only a few percent of the downstream core flow velocity in the large tube.
On the calculation of turbulent heat transport downstream from an abrupt pipe expansion
NASA Technical Reports Server (NTRS)
Chieng, C. C.; Launder, B. E.
1980-01-01
A numerical study of flow and heat transfer in the separated flow region produced by an abrupt pipe expansion is reported, with emphasis on the region in the immediate vicinity of the wall where turbulent transport gives way to molecular conduction and diffusion. The analysis is based on a modified TEACH-2E program with the standard k-epsilon model of turbulence. Predictions of the experimental data of Zemanick and Dougall (1970) for a diameter ratio of 0.54 show generally encouraging agreement with experiment. At a diameter ratio of 0.43, different trends are discernible between measurement and calculation, though this appears to be due to effects unconnected with the wall region studied here.
Internal Thermal Control System Hose Heat Transfer Fluid Thermal Expansion Evaluation Test Report
NASA Technical Reports Server (NTRS)
Wieland, P. O.; Hawk, H. D.
2001-01-01
During assembly of the International Space Station, the Internal Thermal Control Systems in adjacent modules are connected by jumper hoses referred to as integrated hose assemblies (IHAs). A test of an IHA has been performed at the Marshall Space Flight Center to determine whether the pressure in an IHA filled with heat transfer fluid would exceed the maximum design pressure when subjected to elevated temperatures (up to 60 C (140 F)) that may be experienced during storage or transportation. The results of the test show that the pressure in the IHA remains below 227 kPa (33 psia) (well below the 689 kPa (100 psia) maximum design pressure) even at a temperature of 71 C (160 F), with no indication of leakage or damage to the hose. Therefore, based on the results of this test, the IHA can safely be filled with coolant prior to launch. The test and results are documented in this Technical Memorandum.
Technology Transfer Automated Retrieval System (TEKTRAN)
The current US corn grading system accounts for the portion of damaged kernels, which is measured by time-consuming and inaccurate visual inspection. Near infrared spectroscopy (NIRS), a non-destructive and fast analytical method, was tested as a tool for discriminating corn kernels with heat and f...
Varga, Tamas
2011-09-01
Although all chemical bonds expand on heating, a small class of materials shrinks when heated. These so-called negative thermal expansion (NTE) materials are a unique class of materials with some exotic properties. The present chapter offers insight into the structural aspects of pressure- (or temperature-) induced phase transformations, and the energetics of those changes, in these fascinating materials, in particular the NTE compound cubic ZrW2O8, orthorhombic Sc2W3O12 and Sc2Mo3O12, and other members of the 'scandium tungstate family'. In subsequent sections, (i) combined in situ high-pressure synchrotron XRD and XAS studies of the NTE material ZrW2O8; (ii) an in situ high-pressure synchrotron XRD study of Sc2W3O12, Sc2Mo3O12, and Al2W3O12; and (iii) thermochemical studies of the above materials are presented and discussed. In all of these studies, chemical bonds change, sometimes break, and new ones form. Correlations between structure, chemistry, and energetics are revealed. It is also shown that (iv) NTE materials are good candidates as precursors for novel solid-state materials, such as the conducting Sc0.67WO4, using high-pressure, high-temperature synthesis, through modification of bonding and electronic structure, and thus provide vast opportunities for scientific exploration.
NASA Astrophysics Data System (ADS)
Kuboyama, Tetsuji; Hirata, Kouichi; Kashima, Hisashi; Aoki-Kinoshita, Kiyoko F.; Yasuda, Hiroshi
Learning from tree-structured data has received increasing interest with the rapid growth of tree-encodable data in the World Wide Web, in biology, and in other areas. Our kernel function measures the similarity between two trees by counting the number of shared sub-patterns called tree q-grams, and runs, in effect, in linear time with respect to the number of tree nodes. We apply our kernel function with a support vector machine (SVM) to classify biological data, the glycans of several blood components. The experimental results show that our kernel function performs as well as one exclusively tailored to glycan properties.
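As a rough illustration of the counting idea (not the authors' algorithm: this sketch takes q-grams over a depth-first label serialization rather than true tree q-grams, and the monosaccharide labels are invented for the example):

```python
from collections import Counter

def label_qgrams(tree, q):
    """q-grams over a depth-first label serialization of a tree given as
    (label, [children]) tuples -- a simplified stand-in for tree q-grams."""
    seq = []
    def walk(node):
        label, children = node
        seq.append(label)
        for child in children:
            walk(child)
    walk(tree)
    return Counter(tuple(seq[i:i + q]) for i in range(len(seq) - q + 1))

def qgram_kernel(t1, t2, q=2):
    """Similarity = number of q-grams shared by the two trees,
    counted with multiplicity (min of the two counts)."""
    g1, g2 = label_qgrams(t1, q), label_qgrams(t2, q)
    return sum(min(g1[g], g2[g]) for g in g1.keys() & g2.keys())

# hypothetical monosaccharide labels, not real glycan data
t1 = ("Glc", [("Gal", [("Fuc", [])]), ("Man", [])])
t2 = ("Glc", [("Gal", [("Fuc", [])])])
k = qgram_kernel(t1, t2, q=2)  # shared 2-grams: (Glc,Gal) and (Gal,Fuc)
```

A kernel of this shared-pattern-count form is a valid similarity for an SVM, which is what lets the classifier operate on trees without an explicit feature vector.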
NASA Astrophysics Data System (ADS)
Ma, Hongyun; Shao, Haiyan; Song, Jie
2014-02-01
Rapid urbanization has intensified summer heat waves in recent decades in Beijing, China. In this study, the effectiveness of applying high-reflectance roofs to mitigate the warming effects caused by urban expansion and foehn winds was simulated for a record-breaking heat wave that occurred in Beijing during July 13-15, 2002. Simulation experiments were performed using the Weather Research and Forecasting (WRF version 3.0) model coupled with an urban canopy model. The modeled diurnal air temperatures compared well with station observations in the city, and the wind convergence caused by the urban heat island (UHI) effect could be simulated clearly. By increasing urban roof albedo, the simulated UHI effect was reduced due to decreased net radiation, and the simulated wind convergence in the urban area was weakened. Using the WRF3.0 model, the warming effects caused by urban expansion and foehn wind were quantified separately and compared with the cooling effect of increased roof albedo. Results illustrated that the foehn warming effect under the northwesterly wind contributed greatly to this heat wave event in Beijing, while the contribution from urban expansion accompanied by anthropogenic heating was secondary and mostly evident at night. Increasing roof albedo could reduce air temperature both in the day and at night, and could more than offset the urban expansion effect. The combined warming caused by the urban expansion and the foehn wind could be offset by high-reflectance roofs by 58.8 %, or cooled by 1.4 °C, in the early afternoon on July 14, 2002, the hottest day during the heat wave.
Robotic Intelligence Kernel: Communications
Walton, Mike C.
2009-09-16
The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.
Robotic Intelligence Kernel: Driver
2009-09-16
The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.
Linearized Kernel Dictionary Learning
NASA Astrophysics Data System (ADS)
Golts, Alona; Elad, Michael
2016-06-01
In this paper we present a new approach to incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), introduced recently, improves classification performance relative to its linear counterpart, K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first, we approximate the kernel matrix using a cleverly sampled subset of its columns via the Nyström method; second, as we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning algorithm can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several supervised and unsupervised classification tasks and show the efficiency of the proposed scheme, its easy integration, and its performance-boosting properties.
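A minimal sketch of the two ideas, Nyström column sampling followed by a factorization that yields "virtual samples", assuming an RBF kernel and synthetic data (the exact sampling scheme and dictionary learning stage of LKDL are not reproduced here):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.05):
    # Gaussian RBF kernel matrix between row-sample sets X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def nystrom_virtual_samples(X, m, gamma=0.05, seed=0):
    """Nystrom: approximate K ~ C W^+ C^T from m sampled columns, then
    factor the approximation as F F^T so that any *linear* method run on
    the rows of F behaves like its kernelized counterpart."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)        # n x m sampled columns of K
    W = C[idx, :]                           # m x m intersection block
    U, s, _ = np.linalg.svd(W)
    keep = s > 1e-8 * s[0]                  # truncate for stability
    F = C @ U[:, keep] / np.sqrt(s[keep])   # n x r "virtual samples"
    return F

X = np.random.default_rng(1).normal(size=(100, 5))
F = nystrom_virtual_samples(X, m=40)
K_true = rbf_kernel(X, X)
rel_err = np.linalg.norm(K_true - F @ F.T) / np.linalg.norm(K_true)
```

Feeding the rows of F to a linear dictionary learner is what "kernelizes" it at a fraction of the cost of handling the full n x n kernel matrix.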
Krsticevic, Flavia J.; Arce, Débora P.; Ezpeleta, Joaquín; Tapia, Elizabeth
2016-01-01
In plants, fruit maturation and oxidative stress can induce small heat shock protein (sHSP) synthesis to maintain cellular homeostasis. Although the tomato reference genome was published in 2012, the actual number and functionality of sHSP genes remain unknown. Using a transcriptomic (RNA-seq) and evolutionary genomic approach, putative sHSP genes in the Solanum lycopersicum (cv. Heinz 1706) genome were investigated. A sHSP gene family of 33 members was established. Remarkably, roughly half of the members of this family can be explained by nine independent tandem duplication events that determined, evolutionarily, their functional fates. Within a mitochondrial class subfamily, only one duplicated member, Solyc08g078700, retained its ancestral chaperone function, while the others, Solyc08g078710 and Solyc08g078720, likely degenerated under neutrality and lack ancestral chaperone function. Functional conservation occurred within a cytosolic class I subfamily, whose four members, Solyc06g076570, Solyc06g076560, Solyc06g076540, and Solyc06g076520, support ∼57% of the total sHSP mRNA in the red ripe fruit. Subfunctionalization occurred within a new subfamily, whose two members, Solyc04g082720 and Solyc04g082740, show heterogeneous differential expression profiles during fruit ripening. These findings, involving the birth/death of some genes or the preferential/plastic expression of others during fruit ripening, highlight the importance of tandem duplication events in the expansion of the sHSP gene family in the tomato genome. Despite its evolutionary diversity, the sHSP gene family in the tomato genome seems to be endowed with a core set of four homeostasis genes: Solyc05g014280, Solyc03g082420, Solyc11g020330, and Solyc06g076560, which appear to provide baseline protection during both fruit ripening and heat shock stress in different tomato tissues. PMID:27565886
Bergfeld, D.; Vaughan, R. Greg; Evans, William C.; Olsen, Eric
2015-01-01
The Long Valley hydrothermal system supports geothermal power production from three binary plants (Casa Diablo) near the town of Mammoth Lakes, California. Development and growth of thermal ground at sites west of Casa Diablo have created concerns over planned expansion of a new well field and the associated increases in geothermal fluid production. To ensure that all areas of ground heating are identified prior to new geothermal development, we obtained high-resolution aerial thermal infrared imagery across the region. The imagery covers the existing and proposed well fields and part of the town of Mammoth Lakes. Imagery from a predawn flight on Oct. 9, 2014 readily identified the Shady Rest thermal area (SRST), one of two large areas of ground heating west of Casa Diablo, as well as other, smaller known thermal areas. Maximum surface temperatures at three thermal areas were 26-28 °C. Numerous small areas with ground temperatures >16 °C were also identified and slated for field investigations in summer 2015. Some thermal anomalies in the town of Mammoth Lakes clearly reflect human activity. Previously established projects to monitor impacts from geothermal power production include yearly surveys of soil temperatures and diffuse CO2 emissions at SRST, and less regular surveys to collect samples from fumaroles and gas vents across the region. Soil temperatures at 20 cm depth at SRST are well correlated with diffuse CO2 flux, and both parameters showed little variation during the 2011-14 field surveys. Maximum temperatures were between 55-67 °C and the associated CO2 discharge was around 12-18 tonnes per day. The carbon isotope composition of CO2 is fairly uniform across the area, ranging between -3.7 and -4.4 ‰. The gas composition of the Shady Rest fumarole, however, has varied with time, and H2S concentrations in the gas have been increasing since 2009.
NASA Astrophysics Data System (ADS)
Beets, Nathan; Wake Forest Center for Nanotechnology and Molecular Materials Team; Fraunhofer Institute Collaboration
2015-11-01
Two major problems with many third-generation photovoltaics are their complex structure and the greater expense required for increased efficiency. Spectral-splitting devices have been used by many groups with varying degrees of success to collect more of the spectrum, but simple, efficient, and cost-effective setups that employ spectral splitting remain elusive. This study explores the problem, presenting a solar engine that employs Stokes shifting via laser dyes to convert incident light to the bandgap wavelength of the solar cell, and collects the resultant infrared radiation unused by the photovoltaic cell as heat in ethylene glycol or glycerin. When used in conjunction with micro turbines, fluid expansion creates mechanical work, and the temperature difference between the cell and the environment is made available for use. The effect of focusing is also observed as a means to boost efficiency via concentration. Experimental results from spectral scans, vibrational voltage analysis of the PV cell itself, and temperature measurements from a thermocouple are compared to theoretical results obtained using a Mathematica program written to model refraction and lensing in the devices used, a quantum-efficiency test of the cells, the absorption and emission curves of the dyes used to determine the spectrum shift, and the various equations for fill factor, efficiency, and current in different setups. An efficiency increase of well over 50% relative to the control devices is observed, and a new solar engine is proposed.
Expansion of a radial jet from a guillotine tube breach in a shell-and-tube heat exchanger
Velasco, F.J.S.; del Pra, C. Lopez; Herranz, Luis E.
2008-02-15
The aerodynamics of a particle-laden gas jet entering the secondary side of a shell-and-tube heat exchanger from a tube guillotine breach determines, to a large extent, radioactive retention in the break stage of the steam generator (SG) during hypothetical SGTR accident sequences in pressurized water reactors (PWRs). These scenarios have been shown to be risk-dominant in PWRs. The major insights gained from a set of experiments on this aerodynamics are summarized in this paper. A scaled-down mock-up with representative dimensions of a real SG was built. A two-dimensional (2D) PIV technique was used to characterize the flow field in the space between the breach and the neighboring tubes over the gas flow range investigated (Re_D = 0.8-2.7 × 10⁵). Pitot tube measurements and CFD simulations were used to discuss and complement the PIV data. The results, reported mainly in terms of velocity and turbulence intensity profiles, show that jet penetration and gas entrainment are considerably enhanced with increasing Re_D. The presence of tubes was observed to distort the jet shape and to foster gas entrainment with respect to a jet expansion free of tubes. The turbulence intensity level close to the breach increases linearly with Re_D. Incorporating this information into aerosol modeling will enhance the predictive capability of inertial-impaction and turbulent-deposition equations. (author)
Oil point pressure of Indian almond kernels
NASA Astrophysics Data System (ADS)
Aregbesola, O.; Olatunde, G.; Esuola, S.; Owolarafe, O.
2012-07-01
The effect of preprocessing conditions such as moisture content, heating temperature, heating time and particle size on oil point pressure of Indian almond kernel was investigated. Results showed that oil point pressure was significantly (P < 0.05) affected by above mentioned parameters. It was also observed that oil point pressure reduced with increase in heating temperature and heating time for both coarse and fine particles. Furthermore, an increase in moisture content resulted in increased oil point pressure for coarse particles while there was a reduction in oil point pressure with increase in moisture content for fine particles.
Borner, Arnaud; Li, Zheng; Levin, Deborah A
2013-02-14
Supersonic expansions to vacuum produce clusters of sufficiently small size that properties such as heat capacities and the latent heat of evaporation cannot be described by bulk vapor thermodynamic values. In this work the Monte-Carlo Canonical-Ensemble (MCCE) method was used to provide potential energies and constant-volume heat capacities for small water clusters. The cluster structures obtained using the well-known simple point charge model were found to agree well with earlier simulations using more rigorous potentials. The MCCE results were used as the starting point for molecular dynamics simulations of the evaporation rate as a function of cluster temperature and size, which were found to agree with unimolecular dissociation theory and classical nucleation theory. The heat capacities and latent heat obtained from the MCCE simulations were used in direct simulation Monte Carlo modeling of two experiments that measured Rayleigh scattering and the terminal dimer mole fraction of supersonic water-jet expansions. Water-cluster temperature and size were found to be influenced by the use of kinetic rather than thermodynamic heat-capacity and latent-heat values, as well as by the nucleation model.
LeFebvre, W.
1994-08-01
For many years, the popular program top has aided system administrators in examining process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also with what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.
Weighted Bergman Kernels and Quantization
NASA Astrophysics Data System (ADS)
Engliš, Miroslav
Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that −log ψ, −log φ are plurisubharmonic, and z ∈ Ω a point at which −log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion
NASA Astrophysics Data System (ADS)
Terekhov, V. I.; Bogatko, T. V.
2008-03-01
Results of a numerical investigation of the effect of boundary layer thickness on turbulent separation and heat transfer in a tube with an abrupt expansion are presented. The Menter shear-stress-transport turbulence model implemented in the Fluent package was used for the calculations. The range of Reynolds numbers was from 5·10^3 to 10^5. Air was used as the working fluid. The tube expansion ratio was (D_2/D_1)^2 = 1.78. A significant effect of the thickness of the separated boundary layer on both the dynamic and thermal characteristics of the flow is shown. In particular, it was found that as the boundary layer thickness increases, the recirculation zone grows and the maximum heat transfer coefficient decreases.
Robotic Intelligence Kernel: Visualization
2009-09-16
The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.
Robotic Intelligence Kernel: Architecture
2009-09-16
The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.
NASA Astrophysics Data System (ADS)
Hummel, Tobias; Pacheco-Vega, Arturo
2012-11-01
In the present study we use Karhunen-Loève (KL) expansions to model the dynamic behavior of a single-phase natural convection loop. The loop is filled with an incompressible fluid that exchanges heat through the walls of its toroidal shape. Influx and efflux of energy take place at different parts of the loop. The focus here is on a sinusoidal variation of the heat flux exchanged with the environment for three different scenarios: stable operation, limit cycles, and chaos. For the analysis, one-dimensional models, in which the tilt angle and the amplitude of the heat flux are used as parameters, were first developed under suitable assumptions and then solved numerically to generate the data from which the KL-based models could be constructed. The method of snapshots, along with a Galerkin projection, was then used to find the basis functions and corresponding constants of each expansion, thus producing the optimal representation of the system. Results from this study indicate that the dimension of the KL-based dynamical system depends on the linear stability of the steady states; the number of basis functions necessary to describe the system increases with the complexity of the system operation. When compared to typical dynamical systems based on Fourier expansions, the KL-based models are, in general, more compact and equally accurate in the dynamic description of the natural convection loop.
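The method of snapshots mentioned above can be sketched generically: collect solution snapshots, eigen-decompose their (small) temporal correlation matrix, and keep as many KL modes as the desired energy fraction requires. This is a minimal illustration with synthetic two-mode data, not the authors' loop model:

```python
import numpy as np

# Synthetic snapshot matrix standing in for simulated loop states:
# each column is the spatial field at one time instant.
t = np.linspace(0.0, 2.0 * np.pi, 200)
x = np.linspace(0.0, 1.0, 64)
snapshots = (np.outer(np.sin(np.pi * x), np.cos(t))
             + 0.1 * np.outer(np.sin(2.0 * np.pi * x), np.sin(3.0 * t)))

# Method of snapshots: eigen-decompose the temporal correlation matrix,
# then lift its eigenvectors back to spatial KL basis functions.
C = snapshots.T @ snapshots / snapshots.shape[1]
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]                  # sort by decreasing energy
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
modes = snapshots @ eigvecs                        # spatial KL modes
norms = np.linalg.norm(modes, axis=0)
modes = modes / np.where(norms > 0, norms, 1.0)    # orthonormalise columns

# The captured-energy fraction fixes the dimension of the reduced model.
energy = np.cumsum(eigvals) / np.sum(eigvals)
n_modes = int(np.searchsorted(energy, 0.999) + 1)
```

With the two-mode data above, two KL modes capture essentially all the energy, which mirrors the abstract's point that the required number of basis functions tracks the complexity of the system's operation.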
Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.
Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I
2016-03-01
The effects of amygdaline from apricot kernel added to fodder on the growth of transplanted LYO-1 and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation produced a pronounced antitumor effect. Heat-processed apricot kernels given 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of bacterial genome in the tumor.
Resummed memory kernels in generalized system-bath master equations
Mavros, Michael G.; Van Voorhis, Troy
2014-08-07
Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques for perturbation series are ubiquitous in physics, but they have not been systematically studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
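The qualitative difference between the two resummations can be illustrated on a toy kernel known through fourth order. The coefficients k2 and k4 below are hypothetical; the point is only the behavior: a Padé-style form k2 λ² / (1 − (k4/k2) λ²) develops a pole as (k4/k2) λ² → 1, while the exponential form k2 λ² exp((k4/k2) λ²) stays finite for all couplings:

```python
import math

def pade_resum(k2, k4, lam):
    """[1/1]-style Pade resummation of the series k2*lam**2 + k4*lam**4."""
    r = (k4 / k2) * lam ** 2
    return k2 * lam ** 2 / (1.0 - r)        # singular as r -> 1

def exp_resum(k2, k4, lam):
    """Exponential (Landau-Zener-like) resummation: finite for all couplings."""
    r = (k4 / k2) * lam ** 2
    return k2 * lam ** 2 * math.exp(r)

# Hypothetical second- and fourth-order kernel coefficients.
k2, k4 = 1.0, 0.5

weak, strong = 0.1, 1.4                      # strong coupling puts r = 0.98
series_weak = k2 * weak ** 2 + k4 * weak ** 4
```

At weak coupling both resummations reduce to the bare fourth-order series; near the pole the Padé value grows without bound while the exponential remains well behaved, which is the behavior motivating the paper's recommendation.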
Hedstrom, C S; Shearer, P W; Miller, J C; Walton, V M
2014-10-01
Halyomorpha halys Stål, the brown marmorated stink bug (Hemiptera: Pentatomidae), is an invasive pest with established populations in Oregon. The generalist feeding habits of H. halys suggest it has the potential to be a pest of many specialty crops grown in Oregon, including hazelnuts, Corylus avellana L. The objectives of this study were to: 1) characterize the damage to developing hazelnut kernels resulting from feeding by H. halys adults, 2) determine how the timing of feeding during kernel development influences damage to kernels, and 3) determine if hazelnut shell thickness has an effect on feeding frequency on kernels. Adult brown marmorated stink bugs were allowed to feed on developing nuts for 1-wk periods from initial kernel development (spring) until harvest (fall). Developing nuts not exposed to feeding by H. halys served as a control treatment. The degree of damage and diagnostic symptoms corresponded with the hazelnut kernels' physiological development. Our results demonstrated that when H. halys fed on hazelnuts before kernel expansion, development of the kernels could cease, resulting in empty shells. When stink bugs fed during kernel expansion, kernels appeared malformed. When stink bugs fed on mature nuts the kernels exhibited corky, necrotic areas. Although significant differences in shell thickness were observed among the cultivars, no significant differences occurred in the proportions of damaged kernels based on field tests and laboratory choice tests. The results of these studies demonstrated that commercial hazelnuts are susceptible to damage caused by the feeding of H. halys throughout the entire period of kernel development. PMID:26309276
MC Kernel: Broadband Waveform Sensitivity Kernels for Seismic Tomography
NASA Astrophysics Data System (ADS)
Stähler, Simon C.; van Driel, Martin; Auer, Ludwig; Hosseini, Kasra; Sigloch, Karin; Nissen-Meyer, Tarje
2016-04-01
We present MC Kernel, a software implementation to calculate seismic sensitivity kernels on arbitrary tetrahedral or hexahedral grids across the whole observable seismic frequency band. Seismic sensitivity kernels are the basis for seismic tomography, since they map measurements to model perturbations. Their calculation over the whole frequency range was so far only possible with approximative methods (Dahlen et al. 2000); fully numerical methods were restricted to the lower frequency range (usually below 0.05 Hz, Tromp et al. 2005). With our implementation, it is possible to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (www.axisem.info), and thus on spherically symmetric models. The advantage is that frequencies up to 0.2 Hz and higher can be accessed. Since the usage of irregular, adapted grids is an integral part of regularisation in seismic tomography, MC Kernel works in an inversion-grid-centred fashion: a Monte Carlo integration method is used to project the kernel onto each basis function, which allows the desired precision of the kernel estimation to be controlled. It also means that the code concentrates calculation effort on regions of interest without prior assumptions about the kernel shape. The code makes extensive use of redundancies in calculating kernels for different receivers or frequency-pass-bands for one earthquake, to facilitate its usage in large-scale global seismic tomography.
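The core numerical idea, projecting a kernel onto a basis function by Monte Carlo integration refined until a precision target is met, can be sketched generically. This is not MC Kernel's code; the integrand, box domain, and tolerance below are hypothetical stand-ins:

```python
import numpy as np

def mc_project(f, lo, hi, rel_tol=0.01, batch=1000, max_samples=200_000, seed=0):
    """Monte Carlo projection of a field f onto one (here: constant) basis
    function over the box [lo, hi]^d, adding sample batches until the
    standard error falls below rel_tol times the current estimate."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    volume = np.prod(hi - lo)
    values = np.empty(0)
    while values.size < max_samples:
        pts = rng.uniform(lo, hi, size=(batch, lo.size))
        values = np.concatenate([values, f(pts)])
        est = volume * values.mean()
        err = volume * values.std(ddof=1) / np.sqrt(values.size)
        if err < rel_tol * abs(est):
            break
    return est, err

# Hypothetical smooth "sensitivity kernel" over a 3-D cell.
f = lambda p: 1.0 + 0.1 * np.sin(p).sum(axis=1)
estimate, stderr = mc_project(f, lo=[0, 0, 0], hi=[1, 1, 1])
```

Because the error estimate is tracked per cell, sampling effort automatically concentrates where the integrand is hard, matching the abstract's point about controlling kernel precision without prior assumptions about kernel shape.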
Guo, Yi; Gao, Junbin; Kwan, Paul W
2008-08-01
In most existing dimensionality reduction algorithms, the main objective is to preserve relational structure among objects of the input space in a low dimensional embedding space. This is achieved by minimizing the inconsistency between two similarity/dissimilarity measures, one for the input data and the other for the embedded data, via a separate matching objective function. Based on this idea, a new dimensionality reduction method called Twin Kernel Embedding (TKE) is proposed. TKE addresses the problem of visualizing non-vectorial data that is difficult for conventional methods in practice due to the lack of efficient vectorial representation. TKE solves this problem by minimizing the inconsistency between the similarity measures captured respectively by their kernel Gram matrices in the two spaces. In the implementation, by optimizing a nonlinear objective function using the gradient descent algorithm, a local minimum can be reached. The results obtained include both the optimal similarity preserving embedding and the appropriate values for the hyperparameters of the kernel. Experimental evaluation on real non-vectorial datasets confirmed the effectiveness of TKE. TKE can be applied to other types of data beyond those mentioned in this paper whenever suitable measures of similarity/dissimilarity can be defined on the input data. PMID:18566501
ERIC Educational Resources Information Center
Fakhruddin, Hasan
1993-01-01
Describes a paradox in the equation for thermal expansion: if the length change from heating a rod and then cooling it back is calculated naively, the final length of the cooled rod comes out shorter than the original. (PR)
Kernel Phase and Kernel Amplitude in Fizeau Imaging
NASA Astrophysics Data System (ADS)
Pope, Benjamin J. S.
2016-09-01
Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude, and not phase, calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.
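The self-calibrating construction can be sketched as follows: kernel observables live in the left null space of the phase-transfer matrix, so instrumental phase errors that enter through that matrix cancel exactly. The matrix A below is a random stand-in for a real pupil model, not an actual aperture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical phase-transfer matrix A mapping n_pupil pupil-plane phase
# errors to n_baseline measured Fourier phases.
n_baseline, n_pupil = 12, 5
A = rng.standard_normal((n_baseline, n_pupil))

# Kernel operator K spans the left null space of A, found via SVD.
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
K = U[:, rank:].T                 # rows satisfy K @ A = 0

# Measured phases = target phases + instrumental (pupil) errors through A.
target_phase = 0.01 * rng.standard_normal(n_baseline)
pupil_error = rng.standard_normal(n_pupil)
measured = target_phase + A @ pupil_error

# Kernel phases are immune to the pupil errors: K @ measured == K @ target.
kernel_phases = K @ measured
```

The same null-space idea, applied to log-amplitudes instead of phases, is what the proposed kernel amplitudes generalize from closure amplitudes.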
Bruemmer, David J.
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors, that incorporate robot attributes and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.
Nowicki, Dimitri; Siegelmann, Hava
2010-06-11
This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, is in feature space, analogous to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on series of morphed faces.
Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings
NASA Astrophysics Data System (ADS)
Slavakis, Konstantinos; Theodoridis, Sergios
2008-12-01
Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.
Enzymatic treatment of peanut kernels to reduce allergen levels
Technology Transfer Automated Retrieval System (TEKTRAN)
This study investigated the use of enzymatic treatment to reduce peanut allergens in peanut kernel by processing conditions, such as, pretreatment with heat and proteolysis at different enzyme concentrations and treatment times. Two major peanut allergens, Ara h 1 and Ara h 2, were used as indicator...
Popping the Kernel Modeling the States of Matter
ERIC Educational Resources Information Center
Hitt, Austin; White, Orvil; Hanson, Debbie
2005-01-01
This article discusses how to use popcorn to engage students in model building and to teach them about the nature of matter. Popping kernels is a simple and effective method to connect the concepts of heat, motion, and volume with the different phases of matter. Before proceeding with the activity the class should discuss the nature of scientific…
Uniqueness Result in the Cauchy Dirichlet Problem via Mehler Kernel
NASA Astrophysics Data System (ADS)
Dhungana, Bishnu P.
2014-09-01
Using the Mehler kernel, a uniqueness theorem in the Cauchy-Dirichlet problem for the Hermite heat equation with homogeneous Dirichlet boundary conditions on a class P of bounded functions U(x, t) with a certain growth condition on U_x(x, t) is established.
Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels.
Jayasumana, Sadeep; Hartley, Richard; Salzmann, Mathieu; Li, Hongdong; Harandi, Mehrtash
2015-12-01
In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels. PMID:26539851
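A minimal sketch of such a manifold kernel, assuming the log-Euclidean metric on SPD matrices (one of the metrics for which the Gaussian RBF is known to be positive definite); the toy matrices below stand in for, e.g., covariance descriptors:

```python
import numpy as np

def spd_log(X):
    """Matrix logarithm of a symmetric positive definite matrix via eigh."""
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(X, Y):
    """Log-Euclidean distance on the SPD manifold."""
    return np.linalg.norm(spd_log(X) - spd_log(Y), ord="fro")

def gaussian_rbf_gram(mats, gamma=0.5):
    """Gram matrix of the manifold Gaussian RBF k(X, Y) = exp(-gamma d(X, Y)^2)."""
    n = len(mats)
    G = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            G[i, j] = np.exp(-gamma * log_euclidean_dist(mats[i], mats[j]) ** 2)
    return G

# Toy SPD matrices, built as A A^T + I so they are safely positive definite.
rng = np.random.default_rng(2)
spd = []
for _ in range(5):
    A = rng.standard_normal((3, 3))
    spd.append(A @ A.T + np.eye(3))

G = gaussian_rbf_gram(spd)
```

Because the log map embeds the SPD manifold into a Euclidean space of symmetric matrices, the resulting Gram matrix is positive semi-definite and can be fed directly to kernel algorithms such as SVMs or kernel PCA.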
Shaikhislamov, I. F.; Khodachenko, M. L.; Sasunov, Yu. L.; Lammer, H.; Kislyakova, K. G.; Erkaev, N. V.
2014-11-10
In the present series of papers we propose a consistent description of the mass loss process. To study in a comprehensive way the effects of the intrinsic magnetic field of a close-orbit giant exoplanet (a so-called hot Jupiter) on atmospheric material escape and the formation of a planetary inner magnetosphere, we start with a hydrodynamic model of an upper atmosphere expansion in this paper. While considering a simple hydrogen atmosphere model, we focus on the self-consistent inclusion of the effects of radiative heating and ionization of the atmospheric gas with its consequent expansion in the outer space. Primary attention is paid to an investigation of the role of the specific conditions at the inner and outer boundaries of the simulation domain, under which different regimes of material escape (free and restricted flow) are formed. A comparative study is performed of different processes, such as X-ray and ultraviolet (XUV) heating, material ionization and recombination, H_3^+ cooling, adiabatic and Lyα cooling, and Lyα reabsorption. We confirm the basic consistency of the outcomes of our modeling with the results of other hydrodynamic models of expanding planetary atmospheres. In particular, we determine that, under the typical conditions of an orbital distance of 0.05 AU around a Sun-type star, a hot Jupiter plasma envelope may reach maximum temperatures up to ~9000 K with a hydrodynamic escape speed of ~9 km s^-1, resulting in mass loss rates of ~(4-7)·10^10 g s^-1. In the range of the considered stellar-planetary parameters and XUV fluxes, this is close to the mass loss in the energy-limited case. The inclusion of planetary intrinsic magnetic fields in the model is the subject of the follow-up paper (Paper II).
Source identity and kernel functions for Inozemtsev-type systems
NASA Astrophysics Data System (ADS)
Langmann, Edwin; Takemura, Kouichi
2012-08-01
The Inozemtsev Hamiltonian is an elliptic generalization of the differential operator defining the BCN trigonometric quantum Calogero-Sutherland model, and its eigenvalue equation is a natural many-variable generalization of the Heun differential equation. We present kernel functions for Inozemtsev Hamiltonians and Chalykh-Feigin-Veselov-Sergeev-type deformations thereof. Our main result is a solution of a heat-type equation for a generalized Inozemtsev Hamiltonian which is the source of all these kernel functions. Applications are given, including a derivation of simple exact eigenfunctions and eigenvalues of the Inozemtsev Hamiltonian.
FUV Continuum in Flare Kernels Observed by IRIS
NASA Astrophysics Data System (ADS)
Daw, Adrian N.; Kowalski, Adam; Allred, Joel C.; Cauzzi, Gianna
2016-05-01
Fits to Interface Region Imaging Spectrograph (IRIS) spectra observed from bright kernels during the impulsive phase of solar flares are providing long-sought constraints on the UV/white-light continuum emission. Results of fits of continua plus numerous atomic and molecular emission lines to IRIS far-ultraviolet (FUV) spectra of bright kernels are presented. Constraints on beam energy and cross-sectional area are provided by contemporaneous RHESSI, FERMI, ROSA/DST, IRIS slit-jaw, and SDO/AIA observations, allowing for comparison of the observed IRIS continuum to calculations of non-thermal electron beam heating using the RADYN radiative-hydrodynamic loop model.
NASA Astrophysics Data System (ADS)
Cerdeiriña, C. A.; Troncoso, J.; Carballo, E.; Romaní, L.
2002-09-01
The heat capacity per unit volume C_p and density ρ of the nitromethane-1-butanol critical mixture near its upper consolute point are determined in this work. C_p data are obtained at atmospheric pressure as a function of temperature in the one-phase and two-phase regions, using a differential scanning calorimeter. The suitability of DSC for recording C_p as a function of T in the critical region is confirmed by measurements of the nitromethane-cyclohexane mixture, the results being quite consistent with reported data. By fitting the C_p data in the one-phase region, the critical exponent α is found to be 0.110 ± 0.014, consistent with the accepted universal value, with critical amplitude A^+ = 0.0606 ± 0.0006 J K^-1 cm^-3. ρ data were obtained only in the one-phase region, using a vibrating tube densimeter. The amplitude of the density anomaly was found to be C_1^+ = -0.017 ± 0.003 g cm^-3, which is moderately small despite the large difference between the densities of the pure liquids. The thermodynamic consistency of the A^+ and C_1^+ values was examined in relation to the previously reported value for the slope of the critical line dT_c/dp. The results of this analysis were consistent with previous work on this matter.
Boundary conditions for gas flow problems from anisotropic scattering kernels
NASA Astrophysics Data System (ADS)
To, Quy-Dong; Vu, Van-Huyen; Lauriat, Guy; Léonard, Céline
2015-10-01
The paper presents an interface model for gas flowing through a channel constituted of anisotropic wall surfaces. Using anisotropic scattering kernels and the Chapman-Enskog phase density, the boundary conditions (BCs) for velocity, temperature, and discontinuities, including velocity slip and temperature jump at the wall, are obtained. Two scattering kernels, the Dadzie-Méolans (DM) kernel and the generalized anisotropic Cercignani-Lampis (ACL) kernel, are examined in the present paper, yielding simple BCs at the wall-fluid interface. With these two kernels, we rigorously recover the analytical expression for orientation-dependent slip shown in our previous works [Pham et al., Phys. Rev. E 86, 051201 (2012) and To et al., J. Heat Transfer 137, 091002 (2015)], which is in good agreement with molecular dynamics simulation results. More importantly, our models include both the thermal transpiration effect and new equations for the temperature jump. While the same expression depending on the two tangential accommodation coefficients is obtained for the slip velocity, the DM and ACL temperature equations are significantly different. The derived BC equations associated with these two kernels are of interest for gas simulations since they are able to capture the direction-dependent slip behavior of anisotropic interfaces.
Learning With Jensen-Tsallis Kernels.
Ghoshdastidar, Debarghya; Adsul, Ajay P; Dukkipati, Ambedkar
2016-10-01
Jensen-type [Jensen-Shannon (JS) and Jensen-Tsallis] kernels were first proposed by Martins et al. (2009). These kernels are based on JS divergences that originated in the information theory. In this paper, we extend the Jensen-type kernels on probability measures to define positive-definite kernels on Euclidean space. We show that the special cases of these kernels include dot-product kernels. Since Jensen-type divergences are multidistribution divergences, we propose their multipoint variants, and study spectral clustering and kernel methods based on these. We also provide experimental studies on benchmark image database and gene expression database that show the benefits of the proposed kernels compared with the existing kernels. The experiments on clustering also demonstrate the use of constructing multipoint similarities.
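The basic JS kernel underlying this family can be sketched for discrete distributions: k(p, q) = ln 2 − JS(p, q), which attains its maximum ln 2 exactly when p = q. A minimal illustration (the Tsallis generalization replaces the logarithm by a q-deformed one and is omitted here):

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (natural log) between discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0            # 0 * log 0 is taken as 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def js_kernel(p, q):
    """Jensen-Shannon kernel: k(p, q) = ln 2 - JS(p, q)."""
    return np.log(2.0) - js_divergence(p, q)

# Toy discrete distributions (e.g. normalised histograms).
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.1, 0.4, 0.5])
```

Since JS(p, q) is bounded by ln 2, the kernel value always stays positive, and the Gram matrix it induces on probability vectors can be used directly in the clustering and kernel methods the abstract describes.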
RTOS kernel in portable electrocardiograph
NASA Astrophysics Data System (ADS)
Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.
2011-12-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All digital functions of the medical device are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which the uCOS-II RTOS can be embedded. The decision to use the kernel is based on its benefits, its license for educational use, and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and for evaluating the resources used by each process. After this feasibility analysis, the cyclic code was migrated to a structure based on separate processes, or tasks, able to synchronize events, resulting in an electrocardiograph running on a single Central Processing Unit (CPU) under an RTOS.
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
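The standard EM iteration for a Gaussian mixture, which the paper modifies to operate on feature-space quantities through the kernel, can be sketched in one dimension. This shows only the classical base case, on synthetic data:

```python
import numpy as np

def gmm_em(x, n_comp=2, n_iter=200):
    """Standard EM for a 1-D mixture of Gaussians (the classical base case;
    the kernelized variant replaces these moments with feature-space ones)."""
    mu = np.quantile(x, np.linspace(0.25, 0.75, n_comp))   # deterministic init
    var = np.full(n_comp, x.var())
    w = np.full(n_comp, 1.0 / n_comp)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2.0 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances.
        nk = resp.sum(axis=0)
        w = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Synthetic bimodal data standing in for a density-estimation task.
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-3.0, 0.5, 400), rng.normal(3.0, 0.5, 600)])
w, mu, var = gmm_em(x)
```

On this bimodal sample the iteration recovers the two component means, weights, and variances, which is the machinery the kernelized method carries over to feature space.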
2011-01-01
The chemical composition of small organic molecules is often very similar to amino acid side chains or the bases in nucleic acids, and hence there is no a priori reason why a molecular mechanics force field could not describe both organic liquids and biomolecules with a single parameter set. Here, we devise a benchmark for force fields in order to test the ability of existing force fields to reproduce some key properties of organic liquids, namely, the density, enthalpy of vaporization, the surface tension, the heat capacity at constant volume and pressure, the isothermal compressibility, the volumetric expansion coefficient, and the static dielectric constant. Well over 1200 experimental measurements were used for comparison to the simulations of 146 organic liquids. Novel polynomial interpolations of the dielectric constant (32 molecules), heat capacity at constant pressure (three molecules), and the isothermal compressibility (53 molecules) as a function of the temperature have been made, based on experimental data, in order to be able to compare simulation results to them. To compute the heat capacities, we applied the two phase thermodynamics method (Lin et al. J. Chem. Phys.2003, 119, 11792), which allows one to compute thermodynamic properties on the basis of the density of states as derived from the velocity autocorrelation function. The method is implemented in a new utility within the GROMACS molecular simulation package, named g_dos, and a detailed exposé of the underlying equations is presented. The purpose of this work is to establish the state of the art of two popular force fields, OPLS/AA (all-atom optimized potential for liquid simulation) and GAFF (generalized Amber force field), to find common bottlenecks, i.e., particularly difficult molecules, and to serve as a reference point for future force field development. To make for a fair playing field, all molecules were evaluated with the same parameter settings, such as thermostats and barostats
The NAS kernel benchmark program
NASA Technical Reports Server (NTRS)
Bailey, D. H.; Barton, J. T.
1985-01-01
A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.
Adaptive wiener image restoration kernel
Yuan, Ding
2007-06-05
A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system's Optical Transfer Function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image using a Wiener restoration kernel.
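A non-adaptive, frequency-domain sketch of Wiener restoration for comparison (the patented filter estimates the noise-to-signal term adaptively from the data; all names below are illustrative):

```python
import numpy as np

def wiener_restore(image, otf, noise_to_signal):
    """Frequency-domain Wiener restoration:
    G = conj(H) / (|H|^2 + Pn/Ps), F_hat = G * F_observed."""
    G = np.conj(otf) / (np.abs(otf) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))

# Toy example: blur a point source with a Gaussian OTF, then restore.
n = 32
d = np.minimum(np.arange(n), n - np.arange(n))  # wrap-around distances
psf = np.exp(-0.5 * (d[:, None] ** 2 + d[None, :] ** 2) / 2.0)
psf /= psf.sum()
otf = np.fft.fft2(psf)
scene = np.zeros((n, n))
scene[16, 16] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * otf))
restored = wiener_restore(blurred, otf, noise_to_signal=1e-6)
```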
Local Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
2014-01-01
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
Point-Kernel Shielding Code System.
1982-02-17
Version 00 QAD-BSA is a three-dimensional, point-kernel shielding code system based upon the CCC-48/QAD series. It is designed to calculate photon dose rates and heating rates using exponential attenuation and infinite medium buildup factors. Calculational provisions include estimates of fast neutron penetration using data computed by the moments method. Included geometry routines can describe complicated source and shield geometries. An internal library contains data for many frequently used structural and shielding materials, enabling the code to solve most problems with only source strengths and problem geometry required as input. This code system adapts especially well to problems requiring multiple sources and sources with asymmetrical geometry. In addition to being edited separately, the total interaction rates from many sources may be edited at each detector point. Calculated photon interaction rates agree closely with those obtained using QAD-P5A.
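The attenuation-plus-buildup calculation at the heart of point-kernel codes can be sketched as below. A constant buildup factor is assumed for illustration; QAD-BSA interpolates tabulated infinite-medium factors, and all names here are hypothetical.

```python
import math

def point_kernel_flux(S, mu, r, buildup=1.0):
    """Photon flux at distance r from an isotropic point source of
    strength S in a medium with attenuation coefficient mu:
    phi = B * S * exp(-mu * r) / (4 * pi * r**2),
    where B is an infinite-medium buildup factor."""
    return buildup * S * math.exp(-mu * r) / (4.0 * math.pi * r ** 2)

# Multiple sources are handled by summing the kernel over source points.
total = sum(point_kernel_flux(1.0, 0.1, r) for r in (10.0, 20.0))
```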
ERIC Educational Resources Information Center
McArdle, Heather K.
1997-01-01
Describes a week-long activity for general to honors-level students that addresses Hubble's law and the universal expansion theory. Uses a discrepant event-type activity to lead up to the abstract principles of the universal expansion theory. (JRH)
Wigner functions defined with Laplace transform kernels.
Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George
2011-10-24
We propose a new Wigner-type phase-space function using Laplace transform kernels--Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the property of the Laplace transform, a broader range of signals can be represented in complex phase-space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals as the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polariton.
PERI - Auto-tuning Memory Intensive Kernels for Multicore
Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H
2008-06-24
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
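The search-based tuning idea, reduced to a single cache-blocking parameter of a 1-D heat stencil, looks like this (a toy sketch, not the paper's code generators, which search far larger parameter spaces):

```python
import time
import numpy as np

def stencil_step(u, block):
    """One explicit heat-equation step (1-D 3-point stencil),
    processed in blocked chunks of the given size."""
    out = u.copy()
    n = u.size
    for start in range(1, n - 1, block):
        stop = min(start + block, n - 1)
        out[start:stop] = (0.25 * u[start - 1:stop - 1]
                           + 0.5 * u[start:stop]
                           + 0.25 * u[start + 1:stop + 1])
    return out

def autotune_block(u, candidates):
    """Time each candidate block size and keep the fastest."""
    best, best_time = candidates[0], float("inf")
    for b in candidates:
        t0 = time.perf_counter()
        stencil_step(u, b)
        elapsed = time.perf_counter() - t0
        if elapsed < best_time:
            best, best_time = b, elapsed
    return best

u = np.random.default_rng(0).random(1 << 16)
best = autotune_block(u, [64, 512, 4096])
```

All candidates produce identical numerical output, so the search only trades execution time, which is what makes automated tuning safe.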
PERI - auto-tuning memory-intensive kernels for multicore
NASA Astrophysics Data System (ADS)
Williams, S.; Datta, K.; Carter, J.; Oliker, L.; Shalf, J.; Yelick, K.; Bailey, D.
2008-07-01
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to sparse matrix vector multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the high-performance computing literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4× improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
Kernel Near Principal Component Analysis
MARTIN, SHAWN B.
2002-07-01
We propose a novel algorithm based on Principal Component Analysis (PCA). First, we present an interesting approximation of PCA using Gram-Schmidt orthonormalization. Next, we combine our approximation with the kernel functions from Support Vector Machines (SVMs) to provide a nonlinear generalization of PCA. After benchmarking our algorithm in the linear case, we explore its use in both the linear and nonlinear cases. We include applications to face data analysis, handwritten digit recognition, and fluid flow.
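A textbook kernel-PCA sketch showing how SVM kernel functions nonlinearly generalize PCA. This uses the standard centered-Gram eigendecomposition, not the paper's Gram-Schmidt approximation; all names are illustrative.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gram matrix of the Gaussian (RBF) kernel used in SVMs."""
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def kernel_pca(X, gamma, k):
    """Project data onto the top-k principal components in the RBF
    feature space: double-center the Gram matrix, eigendecompose,
    and scale eigenvectors by sqrt(eigenvalue)."""
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    J = np.eye(n) - np.full((n, n), 1.0 / n)
    Kc = J @ K @ J                      # double-centering
    w, V = np.linalg.eigh(Kc)
    order = np.argsort(w)[::-1][:k]
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))

X = np.random.default_rng(1).normal(size=(40, 3))
scores = kernel_pca(X, gamma=0.5, k=2)
```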
Haselton, H.T.; Hemingway, B.S.; Robie, R.A.
1984-01-01
Low-T heat capacities (5-380 K) have been measured by adiabatic calorimetry for synthetic CaAl2SiO6 glass and pyroxene. High-T unit cell parameters were measured for CaAl2SiO6 pyroxene by means of a Nonius Guinier-Lenne powder camera in order to determine the mean coefficient of thermal expansion in the T range 25-1200oC. -J.A.Z.
The flare kernel in the impulsive phase
NASA Technical Reports Server (NTRS)
Dejager, C.
1986-01-01
The impulsive phase of a flare is characterized by impulsive bursts of X-ray and microwave radiation, related to impulsive footpoint heating up to 50 or 60 MK, by upward gas velocities (150 to 400 km/sec) and by a gradual increase of the flare's thermal energy content. These phenomena, as well as non-thermal effects, are all related to the impulsive energy injection into the flare. The available observations are also quantitatively consistent with a model in which energy is injected into the flare by beams of energetic electrons, causing ablation of chromospheric gas, followed by convective rise of gas. Thus, a hole is burned into the chromosphere; at the end of impulsive phase of an average flare the lower part of that hole is situated about 1800 km above the photosphere. H alpha and other optical and UV line emission is radiated by a thin layer (approx. 20 km) at the bottom of the flare kernel. The upward rising and outward streaming gas cools down by conduction in about 45 s. The non-thermal effects in the initial phase are due to curtailing of the energy distribution function by escape of energetic electrons. The single flux tube model of a flare does not fit with these observations; instead we propose the spaghetti-bundle model. Microwave and gamma-ray observations suggest the occurrence of dense flare knots of approx. 800 km diameter, and of high temperature. Future observations should concentrate on locating the microwave/gamma-ray sources, and on determining the kernel's fine structure and the related multi-loop structure of the flaring area.
Nonlinear projection trick in kernel methods: an alternative to the kernel trick.
Kwak, Nojun
2013-12-01
In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so-called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named the nonlinear projection trick in contrast to the kernel trick. With this technique, the applicability of the kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses the L1-norm instead of the L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach.
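The construction described in the abstract, an explicit map into a reduced kernel space obtained from the eigenvalue decomposition of the kernel matrix, fits in a few lines:

```python
import numpy as np

def nonlinear_projection(K):
    """Explicitly map training points into a reduced kernel space by
    eigendecomposing the kernel matrix: with K = U diag(w) U^T, the
    coordinates Y = diag(sqrt(w)) U^T satisfy Y^T Y = K, so inner
    products of the explicit coordinates reproduce the kernel."""
    w, U = np.linalg.eigh(K)
    keep = w > 1e-10   # effective dimensionality of the kernel space
    return np.sqrt(w[keep])[:, None] * U[:, keep].T

rng = np.random.default_rng(2)
X = rng.normal(size=(15, 4))
K = (X @ X.T + 1.0) ** 2   # degree-2 polynomial kernel matrix
Y = nonlinear_projection(K)
```

Any dot-product-free algorithm can now be run directly on the columns of `Y`, which is the point of the trick.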
Nonlinear projection trick in kernel methods: an alternative to the kernel trick.
Kwak, Nojun
2013-12-01
In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named as the nonlinear projection trick in contrast to the kernel trick. With this technique, the applicability of the kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses L1-norm instead of L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach. PMID:24805227
Stem kernels for RNA sequence analyses.
Sakakibara, Yasubumi; Popendorf, Kris; Ogawa, Nana; Asai, Kiyoshi; Sato, Kengo
2007-10-01
Several computational methods based on stochastic context-free grammars have been developed for modeling and analyzing functional RNA sequences. These grammatical methods have succeeded in modeling typical secondary structures of RNA, and are used for structural alignment of RNA sequences. However, such stochastic models cannot sufficiently discriminate member sequences of an RNA family from nonmembers and hence detect noncoding RNA regions from genome sequences. A novel kernel function, stem kernel, for the discrimination and detection of functional RNA sequences using support vector machines (SVMs) is proposed. The stem kernel is a natural extension of the string kernel, specifically the all-subsequences kernel, and is tailored to measure the similarity of two RNA sequences from the viewpoint of secondary structures. The stem kernel examines all possible common base pairs and stem structures of arbitrary lengths, including pseudoknots between two RNA sequences, and calculates the inner product of common stem structure counts. An efficient algorithm is developed to calculate the stem kernels based on dynamic programming. The stem kernels are then applied to discriminate members of an RNA family from nonmembers using SVMs. The study indicates that the discrimination ability of the stem kernel is strong compared with conventional methods. Furthermore, the potential application of the stem kernel is demonstrated by the detection of remotely homologous RNA families in terms of secondary structures. This is because the string kernel is proven to work for the remote homology detection of protein sequences. These experimental results have convinced us to apply the stem kernel in order to find novel RNA families from genome sequences. PMID:17933013
Predicting Protein Function Using Multiple Kernels.
Yu, Guoxian; Rangwala, Huzefa; Domeniconi, Carlotta; Zhang, Guoji; Zhang, Zili
2015-01-01
High-throughput experimental techniques provide a wide variety of heterogeneous proteomic data sources. To exploit the information spread across multiple sources for protein function prediction, these data sources are transformed into kernels and then integrated into a composite kernel. Several methods first optimize the weights on these kernels to produce a composite kernel, and then train a classifier on the composite kernel. As such, these approaches result in an optimal composite kernel, but not necessarily in an optimal classifier. On the other hand, some approaches optimize the loss of binary classifiers and learn weights for the different kernels iteratively. For multi-class or multi-label data, these methods have to solve the problem of optimizing weights on these kernels for each of the labels, which is computationally expensive and ignores the correlation among labels. In this paper, we propose a method called Predicting Protein Function using Multiple Kernels (ProMK). ProMK iteratively alternates between learning optimal weights and reducing the empirical loss of the multi-label classifier for each of the labels simultaneously. ProMK can integrate kernels selectively and downgrade the weights on noisy kernels. We investigate the performance of ProMK on several publicly available protein function prediction benchmarks and synthetic datasets. We show that the proposed approach performs better than previously proposed protein function prediction approaches that integrate multiple data sources and multi-label multiple kernel learning methods. The code for our proposed method is available at https://sites.google.com/site/guoxian85/promk.
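The composite-kernel integration step can be sketched as a weighted sum of normalized base kernels. The weights are fixed here for illustration; ProMK itself learns them jointly with the classifier and can downweight noisy kernels.

```python
import numpy as np

def composite_kernel(kernels, weights):
    """Integrate heterogeneous data sources by a weighted sum of
    their trace-normalized kernel matrices."""
    K = np.zeros_like(kernels[0], dtype=float)
    for w, Km in zip(weights, kernels):
        K += w * Km / np.trace(Km)
    return K

rng = np.random.default_rng(3)
A = rng.normal(size=(10, 5))   # e.g. expression-profile features
B = rng.normal(size=(10, 8))   # e.g. interaction-network features
K = composite_kernel([A @ A.T, B @ B.T], [0.7, 0.3])
```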
Kernel earth mover's distance for EEG classification.
Daliri, Mohammad Reza
2013-07-01
Here, we propose a new kernel approach based on the earth mover's distance (EMD) for electroencephalography (EEG) signal classification. The EEG time series are first transformed into histograms in this approach. The distance between these histograms is then computed using the EMD in a pair-wise manner. We bring the distances into a kernel form called kernel EMD. The support vector classifier can then be used for the classification of EEG signals. The experimental results on the real EEG data show that the new kernel method is very effective, and can classify the data with higher accuracy than traditional methods.
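For histograms on a shared 1-D grid the EMD has a closed form (the L1 distance between cumulative distributions), and an exponential embedding is one common way to bring such a distance into kernel form. A sketch under those assumptions; the paper's exact kernelization may differ.

```python
import numpy as np

def emd_1d(h1, h2):
    """Earth mover's distance between two normalized 1-D histograms,
    computed as the L1 distance between their CDFs."""
    return float(np.abs(np.cumsum(h1 - h2)).sum())

def kernel_emd(hists, gamma=1.0):
    """Pairwise EMDs turned into a similarity matrix via
    exp(-gamma * EMD)."""
    n = len(hists)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = np.exp(-gamma * emd_1d(hists[i], hists[j]))
    return K

h = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 0.0])]
K = kernel_emd(h)
```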
Molecular Hydrodynamics from Memory Kernels.
Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin
2016-04-01
The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^{-3/2}. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius. PMID:27104730
A reduced volumetric expansion factor plot
NASA Technical Reports Server (NTRS)
Hendricks, R. C.
1979-01-01
A reduced volumetric expansion factor plot was constructed for simple fluids which is suitable for engineering computations in heat transfer. Volumetric expansion factors were found useful in correlating heat transfer data over a wide range of operating conditions including liquids, gases and the near critical region.
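The underlying quantity is the volumetric expansion coefficient; a minimal finite-difference sketch follows. The reduced factors in the report additionally normalize by critical-point properties, which is not shown here.

```python
def volumetric_expansion(rho, T, dT=1e-3):
    """Volumetric (thermal) expansion coefficient
    beta = -(1/rho) * d(rho)/dT, by central finite difference.
    `rho` is any callable returning density at temperature T."""
    drho_dT = (rho(T + dT) - rho(T - dT)) / (2.0 * dT)
    return -drho_dT / rho(T)

# For an ideal gas rho = P / (R T), so beta = 1/T.
beta = volumetric_expansion(lambda T: 101325.0 / (287.0 * T), 300.0)
```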
Improving the Bandwidth Selection in Kernel Equating
ERIC Educational Resources Information Center
Andersson, Björn; von Davier, Alina A.
2014-01-01
We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
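Silverman's rule of thumb itself is a one-liner; how it maps onto the equating bandwidth parameters is the paper's contribution. The helper below is only the classical density-estimation rule, shown for reference.

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb for a Gaussian kernel:
    h = 0.9 * min(sd, IQR / 1.34) * n**(-1/5)."""
    x = np.asarray(x, dtype=float)
    sd = x.std(ddof=1)
    q75, q25 = np.percentile(x, [75, 25])
    return 0.9 * min(sd, (q75 - q25) / 1.34) * x.size ** (-0.2)

x = np.random.default_rng(4).normal(size=500)
h = silverman_bandwidth(x)
```

Unlike penalty-function minimization, the rule is closed-form, which is precisely what makes it attractive as a replacement.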
The context-tree kernel for strings.
Cuturi, Marco; Vert, Jean-Philippe
2005-10-01
We propose a new kernel for strings which borrows ideas and techniques from information theory and data compression. This kernel can be used in combination with any kernel method, in particular Support Vector Machines for string classification, with notable applications in proteomics. By using a Bayesian averaging framework with conjugate priors on a class of Markovian models known as probabilistic suffix trees or context-trees, we compute the value of this kernel in linear time and space while only using the information contained in the spectrum of the considered strings. This is ensured through an adaptation of a compression method known as the context-tree weighting algorithm. Encouraging classification results are reported on a standard protein homology detection experiment, showing that the context-tree kernel performs well with respect to other state-of-the-art methods while using no biological prior knowledge.
Online kernel slow feature analysis for temporal video segmentation and tracking.
Liwicki, Stephan; Zafeiriou, Stefanos P; Pantic, Maja
2015-10-01
Slow feature analysis (SFA) is a dimensionality reduction technique which has been linked to how visual brain cells work. In recent years, the SFA was adopted for computer vision tasks. In this paper, we propose an exact kernel SFA (KSFA) framework for positive definite and indefinite kernels in Krein space. We then formulate an online KSFA which employs a reduced set expansion. Finally, by utilizing a special kind of kernel family, we formulate exact online KSFA for which no reduced set is required. We apply the proposed system to develop a SFA-based change detection algorithm for stream data. This framework is employed for temporal video segmentation and tracking. We test our setup on synthetic and real data streams. When combined with an online learning tracking system, the proposed change detection approach improves upon tracking setups that do not utilize change detection.
Nonlocal energy-optimized kernel: Recovering second-order exchange in the homogeneous electron gas
NASA Astrophysics Data System (ADS)
Bates, Jefferson E.; Laricchia, Savio; Ruzsinszky, Adrienn
2016-01-01
In order to remedy some of the shortcomings of the random phase approximation (RPA) within adiabatic connection fluctuation-dissipation (ACFD) density functional theory, we introduce a short-ranged, exchange-like kernel that is one-electron self-correlation free and exact for two-electron systems in the high-density limit. By tuning a free parameter in our model to recover an exact limit of the homogeneous electron gas correlation energy, we obtain a nonlocal, energy-optimized kernel that reduces the errors of RPA for both homogeneous and inhomogeneous solids. Using wave-vector symmetrization for the kernel, we also implement RPA renormalized perturbation theory for extended systems, and demonstrate its capability to describe the dominant correlation effects with a low-order expansion in both metallic and nonmetallic systems. The comparison of ACFD structural properties with experiment is also shown to be limited by the choice of norm-conserving pseudopotential.
NASA Astrophysics Data System (ADS)
Zhang, Ning; Wang, Xuemei; Chen, Yan; Dai, Wei; Wang, Xueyuan
2015-08-01
Urbanization is an extreme way in which human beings change the land use/land cover of the earth surface, and anthropogenic heat release occurs at the same time. In this paper, the anthropogenic heat release parameterization scheme in the Weather Research and Forecasting model is modified to consider the spatial heterogeneity of the release; and the impacts of land use change and anthropogenic heat release on urban boundary layer structure in the Pearl River Delta, China, are studied with a series of numerical experiments. The results show that the anthropogenic heat release contributes nearly 75 % to the urban heat island intensity in our studied period. The impact of anthropogenic heat release on near-surface specific humidity is very weak, but that on relative humidity is apparent due to the near-surface air temperature change. The near-surface wind speed decreases after the local land use is changed to urban type due to the increased land surface roughness, but the anthropogenic heat release leads to increased wind speed in the lower urban boundary layer and decreased wind speed above, because the anthropogenic heat release reduces the boundary layer stability and enhances the vertical mixing.
Bayesian Kernel Mixtures for Counts
Canale, Antonio; Dunson, David B.
2011-01-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online. PMID:22523437
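A univariate sketch of the rounded-kernel idea: a latent Gaussian rounded down to an integer yields a count distribution that, unlike the Poisson, can have variance below the mean. Function names are illustrative.

```python
from math import erf, sqrt

def rounded_gaussian_pmf(j, mu, sigma):
    """Probability that a latent N(mu, sigma^2) draw rounds down to
    the integer j, i.e. the mass of the kernel on [j, j+1)."""
    Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return Phi((j + 1 - mu) / sigma) - Phi((j - mu) / sigma)

# With sigma well below 1 the counts are underdispersed.
pmf = [rounded_gaussian_pmf(j, mu=2.3, sigma=0.4) for j in range(-10, 20)]
```

The full model mixes such kernels nonparametrically, with the Gibbs sampler of the paper handling the latent continuous variables.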
MULTIVARIATE KERNEL PARTITION PROCESS MIXTURES
Dunson, David B.
2013-01-01
Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate random effects distributions, this article proposes a kernel partition process (KPP) in which the cluster allocation varies for different parameters. The KPP is shown to be the driving measure for a multivariate ordered Chinese restaurant process that induces a highly-flexible dependence structure in local clustering. This structure allows the relative locations of the random effects to inform the clustering process, with spatially-proximal random effects likely to be assigned the same cluster index. An exact block Gibbs sampler is developed for posterior computation, avoiding truncation of the infinite measure. The methods are applied to hormone curve data, and a dependent KPP is proposed for classification from functional predictors. PMID:24478563
Functional diversity among seed dispersal kernels generated by carnivorous mammals.
González-Varo, Juan P; López-Bao, José V; Guitián, José
2013-05-01
1. Knowledge of the spatial scale of the dispersal service provided by important seed dispersers (i.e. common and/or keystone species) is essential to our understanding of their role on plant ecology, ecosystem functioning and, ultimately, biodiversity conservation. 2. Carnivores are the main mammalian frugivores and seed dispersers in temperate climate regions. However, information on the seed dispersal distances they generate is still very limited. We focused on two common temperate carnivores differing in body size and spatial ecology - red fox (Vulpes vulpes) and European pine marten (Martes martes) - for evaluating possible functional diversity in their seed dispersal kernels. 3. We measured dispersal distances using colour-coded seed mimics embedded in experimental fruits that were offered to the carnivores in feeding stations (simulating source trees). The exclusive colour code of each simulated tree allowed us to assign the exact origin of seed mimics found later in carnivore faeces. We further designed an explicit sampling strategy aiming to detect the longest dispersal events; as far as we know, the most robust sampling scheme followed for tracking carnivore-dispersed seeds. 4. We found a marked functional heterogeneity between the two species in their seed dispersal kernels according to their home range size: multimodality and long-distance dispersal in the case of the fox and unimodality and short-distance dispersal in the case of the marten (maximum distances = 2846 and 1233 m, respectively). As a consequence, emergent kernels at the guild level (overall and in two different years) were highly dependent on the relative contribution of each carnivore species. 5. Our results provide the first empirical evidence of functional diversity among seed dispersal kernels generated by carnivorous mammals. Moreover, they illustrate for the first time how seed dispersal kernels strongly depend on the relative contribution of different disperser species, thus on the
Robie, R.A.; Evans, H.T.; Hemingway, B.S.
1988-01-01
The heat capacity of ilvaite from Seriphos, Greece was measured by adiabatic shield calorimetry (6.4 to 380.7 K) and by differential scanning calorimetry (340 to 950 K). The thermal expansion of ilvaite was also investigated, by X-ray methods, between 308 and 853 K. At 298.15 K the standard molar heat capacity and entropy for ilvaite are 298.9±0.6 and 292.3±0.6 J/(mol·K), respectively. Between 333 and 343 K ilvaite changes from monoclinic to orthorhombic. The antiferromagnetic transition is shown by a hump in Cp° with a Néel temperature of 121.9±0.5 K. A rounded hump in Cp° between 330 and 400 K may possibly arise from the thermally activated electron delocalization (hopping) known to take place in this temperature region. © 1988 Springer-Verlag.
Putting Priors in Mixture Density Mercer Kernels
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
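One common way to realize a data-adaptive Mercer kernel from a mixture model is through component responsibilities, K(x_i, x_j) = sum_c P(c|x_i) P(c|x_j). The sketch below uses a single fixed 1-D Gaussian mixture, whereas the paper averages over a Bayesian ensemble of mixture models; all names are illustrative.

```python
import numpy as np

def mixture_responsibilities(x, means, sigmas, weights):
    """Posterior P(component | x) under a 1-D Gaussian mixture."""
    x = np.asarray(x, dtype=float)[:, None]
    dens = weights * np.exp(-0.5 * ((x - means) / sigmas) ** 2) / sigmas
    return dens / dens.sum(axis=1, keepdims=True)

def mixture_density_kernel(x, means, sigmas, weights):
    """K = R R^T with R the responsibility matrix: points the mixture
    assigns to the same component are similar.  K is symmetric
    positive semidefinite by construction, hence a valid Mercer kernel."""
    R = mixture_responsibilities(x, means, sigmas, weights)
    return R @ R.T

x = [-2.1, -1.9, 2.0, 2.2]
K = mixture_density_kernel(x, means=np.array([-2.0, 2.0]),
                           sigmas=np.array([0.5, 0.5]),
                           weights=np.array([0.5, 0.5]))
```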
Ideal regularization for learning kernels from labels.
Pan, Binbin; Lai, Jianhuang; Shen, Lixin
2014-08-01
In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently.
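The "ideal kernel" built from labels, and a regularization that is linear in the kernel matrix, can be sketched as follows. The additive blend K + λT shown here is a hypothetical simple instance of label-driven kernel modification, not the paper's exact formulation, and all names are invented.

```python
import numpy as np

def ideal_kernel(y):
    """T[i, j] = 1 when labels match, else 0: the 'ideal' target kernel."""
    y = np.asarray(y)
    return (y[:, None] == y[None, :]).astype(float)

def ideally_regularized_kernel(K, y, lam=0.5):
    # Shift the Gram matrix toward the label-derived ideal kernel.
    # Linear in K, so it composes easily with convex kernel-learning
    # objectives (illustrative blend, not the paper's exact objective).
    return K + lam * ideal_kernel(y)

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 3))
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))   # RBF Gram matrix
y = np.array([0, 0, 0, 1, 1, 1])
K_reg = ideally_regularized_kernel(K, y)
```

Same-class entries are uniformly raised, which is the intuition behind making the standard kernel "more appropriate for learning tasks".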
Kernel score statistic for dependent data.
Malzahn, Dörthe; Friedrichs, Stefanie; Rosenberger, Albert; Bickeböller, Heike
2014-01-01
The kernel score statistic is a global covariance component test over a set of genetic markers. It provides a flexible modeling framework and does not collapse marker information. We generalize the kernel score statistic to allow for familial dependencies and to adjust for random confounder effects. With this extension, we adjust our analysis of real and simulated baseline systolic blood pressure for polygenic familial background. We find that the kernel score test gains appreciably in power through the use of sequencing, compared to tag single-nucleotide polymorphisms, for very rare single-nucleotide polymorphisms with <1% minor allele frequency.
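A minimal version of a variance-component kernel score statistic — independent data, no familial adjustment (the paper's extension), and no null-distribution or p-value computation — might look like the sketch below; the names are invented.

```python
import numpy as np

def kernel_score_statistic(y, G, covariates=None):
    """Score statistic Q = r' K r with a linear genetic kernel K = G G'
    and residuals r from the null (covariates-only) model.  Sketch only:
    the published test also requires its null distribution, omitted here."""
    n = len(y)
    X = np.ones((n, 1)) if covariates is None else np.column_stack([np.ones(n), covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta                   # residuals under the null model
    K = G @ G.T                        # linear kernel over marker genotypes
    return float(r @ K @ r)

rng = np.random.default_rng(2)
G = rng.integers(0, 3, size=(40, 10)).astype(float)   # genotypes coded 0/1/2
y = 0.5 * G[:, 0] + rng.normal(size=40)               # trait with one causal marker
Q = kernel_score_statistic(y, G)
```

Because K = G Gᵀ is PSD, Q = ‖Gᵀr‖² is always nonnegative; large values indicate marker-set association beyond the null model.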
Point Kernel Gamma-Ray Shielding Code With Geometric Progression Buildup Factors.
1990-11-30
Version 00 QADMOD-GP is a PC version of the mainframe code CCC-396/QADMOD-G, a point-kernel integration code for calculating gamma ray fluxes and dose rates or heating rates at specific detector locations within a three-dimensional shielding geometry configuration due to radiation from a volume-distributed source.
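The point-kernel idea itself fits in one formula: flux = S·B(μr)·e^(−μr)/(4πr²), integrated over the source volume in the full code. A single-source-point sketch with a placeholder linear buildup factor follows; QADMOD-GP instead uses fitted geometric-progression (GP) buildup coefficients, and the function name and default are assumptions.

```python
import numpy as np

def point_kernel_flux(S, r, mu, buildup=lambda mfp: 1.0 + mfp):
    """Point-kernel flux at distance r from an isotropic point source:

        phi = S * B(mu*r) * exp(-mu*r) / (4*pi*r**2)

    S:  source strength (photons/s), r: distance (cm), mu: attenuation
    coefficient (1/cm).  The default B = 1 + mu*r is only a placeholder
    for a proper GP buildup-factor fit."""
    mfp = mu * r                        # shield thickness in mean free paths
    return S * buildup(mfp) * np.exp(-mfp) / (4.0 * np.pi * r**2)

# 1e9 photons/s point source behind 10 cm of shield with mu = 0.06 /cm
phi = point_kernel_flux(S=1e9, r=10.0, mu=0.06)
```

A volume source is handled by summing this kernel over source mesh points, which is exactly the "point-kernel integration" the record describes.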
Load regulating expansion fixture
Wagner, L.M.; Strum, M.J.
1998-12-15
A free standing self contained device for bonding ultra thin metallic films, such as 0.001 inch beryllium foils is disclosed. The device will regulate to a predetermined load for solid state bonding when heated to a bonding temperature. The device includes a load regulating feature, whereby the expansion stresses generated for bonding are regulated and self adjusting. The load regulator comprises a pair of friction isolators with a plurality of annealed copper members located therebetween. The device, with the load regulator, will adjust to and maintain a stress level needed to successfully and economically complete a leak tight bond without damaging thin foils or other delicate components. 1 fig.
Load regulating expansion fixture
Wagner, Lawrence M.; Strum, Michael J.
1998-01-01
A free standing self contained device for bonding ultra thin metallic films, such as 0.001 inch beryllium foils. The device will regulate to a predetermined load for solid state bonding when heated to a bonding temperature. The device includes a load regulating feature, whereby the expansion stresses generated for bonding are regulated and self adjusting. The load regulator comprises a pair of friction isolators with a plurality of annealed copper members located therebetween. The device, with the load regulator, will adjust to and maintain a stress level needed to successfully and economically complete a leak tight bond without damaging thin foils or other delicate components.
A simple method for computing the relativistic Compton scattering kernel for radiative transfer
NASA Technical Reports Server (NTRS)
Prasad, M. K.; Kershaw, D. S.; Beason, J. D.
1986-01-01
Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2014-01-01 2014-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2012-01-01 2012-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2013-01-01 2013-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2011-01-01 2011-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2010-01-01 2010-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...
KITTEN Lightweight Kernel 0.1 Beta
2007-12-12
The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general-purpose OS kernels.
Quantum kernel applications in medicinal chemistry.
Huang, Lulu; Massa, Lou
2012-07-01
Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design. PMID:22857535
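The summation rule behind the kernel energy method — pairwise kernel energies minus an overcounting correction, E ≈ Σ_{i<j} E_ij − (n−2) Σ_i E_i — can be illustrated with a toy additive "energy"; the real method evaluates each term with a quantum-mechanical calculation, and the function names here are invented.

```python
import itertools

def kem_energy(kernels, pair_energy, single_energy):
    """Kernel energy method double sum:
        E  ≈  sum over kernel pairs E_ij  -  (n - 2) * sum over kernels E_i
    pair_energy / single_energy stand in for quantum calculations on
    kernel pairs and single kernels."""
    n = len(kernels)
    e_pairs = sum(pair_energy(a, b) for a, b in itertools.combinations(kernels, 2))
    e_singles = sum(single_energy(k) for k in kernels)
    return e_pairs - (n - 2) * e_singles

# Sanity check: with strictly additive energies the KEM sum is exact.
frags = [2.0, 3.0, 5.0]                       # "energies" of three kernels
E = kem_energy(frags, pair_energy=lambda a, b: a + b, single_energy=lambda k: k)
```

For these additive fragments the pair sum is 20, the correction is (3−2)·10, and E recovers the exact total of 10 — interaction terms between kernel pairs are what the method actually captures beyond additivity.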
Variational Dirichlet Blur Kernel Estimation.
Zhou, Xu; Mateos, Javier; Zhou, Fugen; Molina, Rafael; Katsaggelos, Aggelos K
2015-12-01
Blind image deconvolution involves two key objectives: (1) latent image estimation and (2) blur estimation. For latent image estimation, we propose a fast deconvolution algorithm, which uses an image prior of nondimensional Gaussianity measure to enforce sparsity and an undetermined boundary condition methodology to reduce boundary artifacts. For blur estimation, a linear inverse problem with normalization and nonnegative constraints must be solved. However, the normalization constraint is ignored in many blind image deblurring methods, mainly because it makes the problem less tractable. In this paper, we show that the normalization constraint can be very naturally incorporated into the estimation process by using a Dirichlet distribution to approximate the posterior distribution of the blur. Making use of variational Dirichlet approximation, we provide a blur posterior approximation that considers the uncertainty of the estimate and removes noise in the estimated kernel. Experiments with synthetic and real data demonstrate that the proposed method is very competitive with the state-of-the-art blind image restoration methods. PMID:26390458
TICK: Transparent Incremental Checkpointing at Kernel Level
Petrini, Fabrizio; Gioiosa, Roberto
2004-10-25
TICK is a software package implemented in Linux 2.6 that allows the save and restore of user processes, without any change to the user code or binary. With TICK a process can be suspended by the Linux kernel upon receiving an interrupt and saved in a file. This file can be later thawed in another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module, in the Linux version 2.6.5
Seal, R.R.; Robie, R.A.; Hemingway, B.S.; Evans, H.T.
1996-01-01
The heat capacity of synthetic Cu3AsS4 (enargite) was measured by quasi-adiabatic calorimetry from the temperatures 5 K to 355 K and by differential scanning calorimetry from T = 339 K to T = 720 K. Heat-capacity anomalies were observed at T = (58.5 ± 0.5) K (ΔtrsH°m = 1.4·R·K; ΔtrsS°m = 0.02·R) and at T = (66.5 ± 0.5) K (ΔtrsH°m = 4.6·R·K; ΔtrsS°m = 0.08·R), where R = 8.31451 J·K⁻¹·mol⁻¹. The causes of the anomalies are unknown. At T = 298.15 K, C°p,m and S°m(T) are (190.4 ± 0.2) J·K⁻¹·mol⁻¹ and (257.6 ± 0.6) J·K⁻¹·mol⁻¹, respectively. The superambient heat capacities are described from T = 298.15 K to T = 944 K by the least-squares regression equation: C°p,m/(J·K⁻¹·mol⁻¹) = (196.7 ± 1.2) + (0.0499 ± 0.0016)·(T/K) − (1918000 ± 84000)·(T/K)⁻². The thermal expansion of synthetic enargite was measured from T = 298.15 K to T = 573 K by powder X-ray diffraction. The thermal expansion of the unit-cell volume (Z = 2) is described from T = 298.15 K to T = 573 K by the least-squares equation: V/pm³ = 10⁶·(288.2 ± 0.2) + 10⁴·(1.49 ± 0.04)·(T/K). © 1996 Academic Press Limited.
Hypothesis of a daemon kernel of the Earth
NASA Astrophysics Data System (ADS)
Drobyshevski, E. M.
2004-01-01
The paper considers the fate of the electrically charged (Ze 10e) Planckian elementary black holes, namely, daemons, making up the dark matter of the Galactic disc, which, as follows from our measurements, were trapped by the Earth during 4.5 Gyears in an amount equal to approximately 1024. Owing to their huge mass (about 2 x 10 kg), these particles settle down to the Earth's centre to form a kernel. Assuming that the excess flux of 10-20 TW over the heat flux level produced by known sources, which is quoted by many researchers, is due to the energy liberated in the outer kernel layers in daemon-stimulated proton decay of Fe nuclei, we have come to the conclusion that the Earth's kernel is at present a fraction of a metre in size. The observed mantle flux of 3He (and the limiting 3He to 4He ratio of about 10 4 itself) can be provided if at least one 3He (or 3T) nucleus is emitted in a daemon-stimulated decay of 102-103 Fe nuclei. This could actually remove the only objection to the hot origin of the Earth and to its original melting. The high energy liberation at the centre of the Earth drives two-phase two-dimensional convection in its inner core (IC), with rolls oriented along the rotation axis. This provides an explanation for the numerous features in the IC structure revealed in recent years (anisotropy in the seismic wave propagation, the existence of small irregularities, the strong damping of the P and S waves, ambiguities in the measurements of the IC rotation rate, etc.). The energy release in the kernel grows continuously as the number of daemons in it increases. Therefore the global tectonic activity, which had died out after the initial differentiation and cooling off of the Earth was reanimated 2 Gyears ago by the rearrangement and enhancement of convection in the mantle as a result of the increasing outward energy flow. It is pointed out that, as the kernel continues to grow, the tectonic activity will become intensified rather than die out, as was
A kernel autoassociator approach to pattern classification.
Zhang, Haihong; Huang, Weimin; Huang, Zhiyong; Zhang, Bailing
2005-06-01
Autoassociators are a special type of neural networks which, by learning to reproduce a given set of patterns, grasp the underlying concept that is useful for pattern classification. In this paper, we present a novel nonlinear model referred to as kernel autoassociators based on kernel methods. While conventional nonlinear autoassociation models emphasize searching for the nonlinear representations of input patterns, a kernel autoassociator takes a kernel feature space as the nonlinear manifold, and places emphasis on the reconstruction of input patterns from the kernel feature space. Two methods are proposed to address the reconstruction problem, using linear and multivariate polynomial functions, respectively. We apply the proposed model to novelty detection with or without novelty examples and study it on the promoter detection and sonar target recognition problems. We also apply the model to multiclass classification problems including wine recognition, glass recognition, handwritten digit recognition, and face recognition. The experimental results show that, compared with conventional autoassociators and other recognition systems, kernel autoassociators can provide better or comparable performance for concept learning and recognition in various domains. PMID:15971928
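The reconstruction-from-feature-space idea can be sketched with a ridge-regularized linear map from kernel space back to input space and a reconstruction-error novelty score. This is an illustrative stand-in with invented class and parameter names, assuming an RBF kernel, not the paper's exact linear or polynomial reconstruction scheme.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelAutoassociator:
    """Learn a linear map from the kernel feature space back to the
    input space; patterns far from the learned concept reconstruct
    poorly, giving a novelty score."""
    def __init__(self, gamma=0.5, ridge=1e-3):
        self.gamma, self.ridge = gamma, ridge
    def fit(self, X):
        self.X = X
        K = rbf(X, X, self.gamma)
        # Ridge-regularized solve: reconstruct X from its kernel features.
        self.W = np.linalg.solve(K + self.ridge * np.eye(len(X)), X)
        return self
    def reconstruction_error(self, Xnew):
        Xhat = rbf(Xnew, self.X, self.gamma) @ self.W
        return np.linalg.norm(Xnew - Xhat, axis=1)

rng = np.random.default_rng(3)
normal = rng.normal(0, 0.3, (60, 2))                 # training concept
model = KernelAutoassociator().fit(normal)
err_in = model.reconstruction_error(rng.normal(0, 0.3, (20, 2)))
err_out = model.reconstruction_error(rng.normal(4, 0.3, (5, 2)))  # novelties
```

In-concept points reconstruct almost exactly, while novel points map to near zero and incur large error — the one-class novelty-detection use case from the abstract.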
PET Image Reconstruction Using Kernel Method
Wang, Guobao; Qi, Jinyi
2014-01-01
Image reconstruction from low-count PET projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization (EM) algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4D dynamic PET patient dataset showed promising results. PMID:25095249
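The kernelized EM update replaces the system matrix P by PK and iterates the usual ML-EM multiplicative update on the coefficients α, with the image recovered as x = Kα. A small-scale sketch with a toy system matrix, an identity kernel (reducing to plain ML-EM), and invented names:

```python
import numpy as np

def kernel_em(y, P, K, n_iter=50):
    """Kernelized ML-EM for Poisson data y ~ Poisson(P K alpha).
    The coefficient image alpha gets the standard EM multiplicative
    update with the system matrix P replaced by P @ K; the
    reconstructed image is x = K @ alpha."""
    PK = P @ K
    alpha = np.ones(K.shape[1])
    sens = PK.T @ np.ones(len(y)) + 1e-12        # sensitivity normalization
    for _ in range(n_iter):
        ratio = y / (PK @ alpha + 1e-12)         # measured / predicted counts
        alpha *= (PK.T @ ratio) / sens           # multiplicative EM update
    return K @ alpha

rng = np.random.default_rng(4)
x_true = np.array([0.0, 0.0, 8.0, 8.0, 0.0, 0.0])
P = rng.uniform(0, 1, (30, 6))                   # toy system matrix
K = np.eye(6)                                    # identity kernel = plain ML-EM
y = rng.poisson(P @ x_true).astype(float)
x_hat = kernel_em(y, P, K)
```

In the actual method, K would be built from prior-information features (e.g., a composite frame of a dynamic scan), so that the smoothness of x = Kα regularizes the low-count reconstruction.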
NASA Technical Reports Server (NTRS)
Liu, Y.; Israelsson, U.; Larson, M.
2001-01-01
Presentation on the transition in 4He in the presence of a heat current (Q), which provides an ideal system for the study of phase transitions under non-equilibrium, dynamical conditions. Many physical properties become nonlinear and Q-dependent near the transition temperature, T_lambda.
Wang, Qishan; Bag, Jnanankur
2008-05-23
Formation of nuclear inclusions consisting of aggregates of a polyalanine expansion mutant of nuclear poly(A)-binding protein (PABPN1) is the hallmark of oculopharyngeal muscular dystrophy (OPMD). OPMD is a late onset autosomal dominant disease. Patients with this disorder exhibit progressive swallowing difficulty and drooping of their eye lids, which starts around the age of 50. Previously we have shown that treatment of cells expressing the mutant PABPN1 with a number of chemicals such as ibuprofen, indomethacin, ZnSO₄, and 8-hydroxy-quinoline induces HSP70 expression and reduces PABPN1 aggregation. In these studies we have shown that expression of additional HSPs including HSP27, HSP40, and HSP105 were induced in mutant PABPN1 expressing cells following exposure to the chemicals mentioned above. Furthermore, all three additional HSPs were translocated to the nucleus and probably helped to properly fold the mutant PABPN1 by co-localizing with this protein.
NASA Astrophysics Data System (ADS)
Cohl, Howard S.
2013-06-01
We develop complex Jacobi, Gegenbauer and Chebyshev polynomial expansions for the kernels associated with power-law fundamental solutions of the polyharmonic equation on d-dimensional Euclidean space. From these series representations we derive Fourier expansions in certain rotationally-invariant coordinate systems and Gegenbauer polynomial expansions in Vilenkin's polyspherical coordinates. We compare both of these expansions to generate addition theorems for the azimuthal Fourier coefficients.
Adaptive kernels for multi-fiber reconstruction.
Barmpoutis, Angelos; Jian, Bing; Vemuri, Baba C
2009-01-01
In this paper we present a novel method for multi-fiber reconstruction given a diffusion-weighted MRI dataset. There are several existing methods that employ various spherical deconvolution kernels for achieving this task. However the kernels in all of the existing methods rely on certain assumptions regarding the properties of the underlying fibers, which introduce inaccuracies and unnatural limitations in them. Our model is a nontrivial generalization of the spherical deconvolution model, which unlike the existing methods does not make use of a fixed-shape kernel. Instead, the shape of the kernel is estimated simultaneously with the rest of the unknown parameters by employing a general adaptive model that can theoretically approximate any spherical deconvolution kernel. The performance of our model is demonstrated using simulated and real diffusion-weighted MR datasets and compared quantitatively with several existing techniques in the literature. The results obtained indicate that our model has superior performance that is close to the theoretic limit of the best possible achievable result.
Analog forecasting with dynamics-adapted kernels
NASA Astrophysics Data System (ADS)
Zhao, Zhizhen; Giannakis, Dimitrios
2016-09-01
Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
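The kernel-weighted ensemble at the heart of the method — weight historical states by similarity to the initial data, then average their successors — can be sketched without the delay-coordinate embedding or the directional-dependence terms; the function and parameter names are invented.

```python
import numpy as np

def kernel_analog_forecast(history, x0, lead=1, epsilon=0.1):
    """Weighted-ensemble analog forecast: weight each historical state by
    a Gaussian similarity kernel to the initial data x0, then average the
    states `lead` steps later (simplified: no Takens embedding)."""
    past = history[:-lead]
    future = history[lead:]
    d2 = ((past - x0) ** 2).sum(axis=1)
    w = np.exp(-d2 / epsilon)         # local similarity kernel
    w /= w.sum()                      # normalized analog weights
    return (w[:, None] * future).sum(axis=0)

# Deterministic circular trajectory: a one-step forecast from (1, 0)
# should land close to the slightly rotated state.
t = np.linspace(0, 20 * np.pi, 2000)
history = np.column_stack([np.cos(t), np.sin(t)])
x0 = np.array([1.0, 0.0])
pred = kernel_analog_forecast(history, x0, lead=1)
```

Lorenz's original scheme follows the single best analog; the weighted ensemble above is the kernel-smoothed variant that the paper shows to have higher forecast skill.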
NASA Astrophysics Data System (ADS)
Martínez-Suástegui, Lorenzo; Barreto, Enrique; Treviño, César
2015-11-01
Transient laminar opposing mixed convection is studied experimentally in an open vertical rectangular channel with two discrete protruded heat sources subjected to uniform heat flux, simulating electronic components. Experiments are performed for a Reynolds number of Re = 700, a Prandtl number of Pr = 7, inclination angles with respect to the horizontal of γ = 0°, 45° and 90°, and different values of buoyancy strength or modified Richardson number, Ri* = Gr*/Re². From the experimental measurements, the space-averaged surface temperatures, overall Nusselt number of each simulated electronic chip, phase-space plots of the self-oscillatory system, characteristic times of temperature oscillations and spectral distribution of the fluctuating energy have been obtained. Results show that when a threshold in the buoyancy parameter is reached, strong three-dimensional secondary flow oscillations develop in the axial and spanwise directions. This research was supported by the Consejo Nacional de Ciencia y Tecnología (CONACYT), Grant number 167474, and by the Secretaría de Investigación y Posgrado del IPN, Grant number SIP 20141309.
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
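The model itself (not the fast algorithm) is easy to state: each edge {i, j} is present independently with probability min(κ(x_i, x_j)/n, 1), where the x_i are vertex weights. A naive quadratic sampler — exactly the baseline the paper's subquadratic algorithm improves on — can be sketched as follows; the names are invented.

```python
import numpy as np

def sample_kernel_graph(x, kappa, rng):
    """Sample a random kernel graph: vertices carry weights x_i and each
    edge {i, j} appears independently with probability
    min(kappa(x_i, x_j) / n, 1).  Naive O(n^2) loop over vertex pairs."""
    n = len(x)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = min(kappa(x[i], x[j]) / n, 1.0)
            if rng.random() < p:
                edges.append((i, j))
    return edges

rng = np.random.default_rng(5)
n = 300
x = rng.uniform(0.5, 1.5, n)                     # vertex weights
edges = sample_kernel_graph(x, lambda a, b: 5.0 * a * b, rng)
```

With the multiplicative kernel κ(a, b) = 5ab and E[x] = 1, the expected edge count is about 5·n(n−1)/(2n) ≈ 750, i.e., the graph is sparse: average degree stays O(1) as n grows.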
Kernel bandwidth estimation for nonparametric modeling.
Bors, Adrian G; Nasios, Nikolaos
2009-12-01
Kernel density estimation is a nonparametric procedure for probability density modeling, which has found several applications in various fields. The smoothness and modeling ability of the functional approximation are controlled by the kernel bandwidth. In this paper, we describe a Bayesian estimation method for finding the bandwidth from a given data set. The proposed bandwidth estimation method is applied in three different computational-intelligence methods that rely on kernel density estimation: 1) scale space; 2) mean shift; and 3) quantum clustering. The third method is a novel approach that relies on the principles of quantum mechanics. This method is based on the analogy between data samples and quantum particles and uses the Schrödinger potential as a cost function. The proposed methodology is used for blind-source separation of modulated signals and for terrain segmentation based on topography information.
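For context, the classical non-Bayesian baseline pairs Gaussian KDE with Silverman's rule-of-thumb bandwidth; the paper replaces this plug-in choice with Bayesian inference over the bandwidth. A 1-D sketch of the baseline (invented function names):

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for 1-D Gaussian KDE:
    h = 1.06 * sigma * n^(-1/5) (classical plug-in baseline)."""
    n = len(x)
    return 1.06 * np.std(x) * n ** (-1 / 5)

def kde(x_eval, data, h):
    """Gaussian kernel density estimate evaluated at the points x_eval."""
    u = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(6)
data = rng.normal(0.0, 1.0, 500)
h = silverman_bandwidth(data)
grid = np.linspace(-4, 4, 81)
density = kde(grid, data, h)
```

The rule-of-thumb is optimal only for near-Gaussian data, which is precisely the weakness that motivates estimating the bandwidth from the data itself, as the paper does.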
Brodsky, N.S.; Riggins, M.; Connolly, J.; Ricci, P.
1997-09-01
Specimens were tested from four thermal-mechanical units, namely Tiva Canyon (TCw), Paintbrush Tuff (PTn), and two Topopah Spring units (TSw1 and TSw2), and from two lithologies, i.e., welded devitrified (TCw, TSw1, TSw2) and nonwelded vitric tuff (PTn). Thermal conductivities in W·(m·K)⁻¹, averaged over all boreholes, ranged (depending upon temperature and saturation state) from 1.2 to 1.9 for TCw, from 0.4 to 0.9 for PTn, from 1.0 to 1.7 for TSw1, and from 1.5 to 2.3 for TSw2. Mean coefficients of thermal expansion were highly temperature dependent, and values averaged over all boreholes ranged (depending upon temperature and saturation state) from 6.6×10⁻⁶ to 49×10⁻⁶ °C⁻¹ for TCw, from the negative range to 16×10⁻⁶ °C⁻¹ for PTn, from 6.3×10⁻⁶ to 44×10⁻⁶ °C⁻¹ for TSw1, and from 6.7×10⁻⁶ to 37×10⁻⁶ °C⁻¹ for TSw2. Mean values of thermal capacitance in J/(cm³·K) (averaged over all specimens) ranged from 1.6 to 2.1 for TSw1 and from 1.8 to 2.5 for TSw2. In general, the lithostratigraphic classifications of rock assigned by the USGS are consistent with the mineralogical data presented in this report.
Experimental study of turbulent flame kernel propagation
Mansour, Mohy; Peters, Norbert; Schrader, Lars-Uve
2008-07-15
Flame kernels in spark-ignited combustion systems dominate flame propagation, combustion stability and performance. They are likely controlled by the spark energy, flow field and mixing field. The aim of the present work is to experimentally investigate the structure and propagation of the flame kernel in turbulent premixed methane flow using advanced laser-based techniques. The spark is generated using a pulsed Nd:YAG laser with 20 mJ pulse energy in order to avoid the effect of electrodes on the flame kernel structure and the shot-to-shot variation of spark energy. Four flames have been investigated at equivalence ratios, φ_j, of 0.8 and 1.0 and jet velocities, U_j, of 6 and 12 m/s. A combined two-dimensional Rayleigh and LIPF-OH technique has been applied. The flame kernel structure has been collected at several time intervals from the laser ignition between 10 μs and 2 ms. The data show that the flame kernel structure starts with a spherical shape and changes gradually to peanut-like, then to mushroom-like, and is finally disturbed by the turbulence. The mushroom-like structure lasts longer in the stoichiometric and slower-jet-velocity cases. The growth rate of the average flame kernel radius divides into two linear regimes; the first, during the first 100 μs, is almost three times faster than the later stage between 100 and 2000 μs. The flame propagation is slightly faster in leaner flames. The trends of the flame propagation, flame radius, flame cross-sectional area and mean flame temperature are related to the jet velocity and equivalence ratio. The relations obtained in the present work allow the prediction of any of these parameters at different conditions. (author)
Volatile compound formation during argan kernel roasting.
El Monfalouti, Hanae; Charrouf, Zoubida; Giordano, Manuela; Guillaume, Dominique; Kartah, Badreddine; Harhar, Hicham; Gharby, Saïd; Denhez, Clément; Zeppa, Giuseppe
2013-01-01
Virgin edible argan oil is prepared by cold-pressing argan kernels previously roasted at 110 °C for up to 25 minutes. The concentration of 40 volatile compounds in virgin edible argan oil was determined as a function of argan kernel roasting time. Most of the volatile compounds begin to form after 15 to 25 minutes of roasting. This suggests that a strictly controlled roasting time should allow the modulation of argan oil taste and thus satisfy different types of consumers. This could be of major importance considering the present booming use of edible argan oil.
Reduced multiple empirical kernel learning machine.
Wang, Zhe; Lu, MingZhe; Gao, Daqi
2015-02-01
Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is undesirable in real-world applications. Meanwhile, the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, meaning that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and requires less storage space, especially during testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3
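The empirical kernel mapping at the heart of this approach can be sketched in a few lines. The snippet below is an illustrative EKM built from the Gram eigensystem, with an assumed RBF kernel and random toy data; it is not the authors' RMEKLM implementation, but it checks the isomorphism property the abstract relies on: dot products in the empirical feature space reproduce the kernel matrix.

```python
import numpy as np

# Illustrative empirical kernel mapping (EKM). Kernel choice, width, and
# data are assumptions for the sketch, not the paper's setup.
def rbf_gram(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
K = rbf_gram(X, X)

# EKM: map x -> Lambda^{-1/2} P^T k(x), built from the Gram eigensystem.
w, P = np.linalg.eigh(K)
keep = w > 1e-10                        # drop numerically null directions
M = P[:, keep] / np.sqrt(w[keep])       # n x r mapping matrix

def ekm(Y):
    return rbf_gram(Y, X) @ M

Phi = ekm(X)
err = np.abs(Phi @ Phi.T - K).max()     # dot products should reproduce K
```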
Utilizing Kernelized Advection Schemes in Ocean Models
NASA Astrophysics Data System (ADS)
Zadeh, N.; Balaji, V.
2008-12-01
There has been a recent effort in the ocean model community to use a set of generic FORTRAN library routines for the advection of scalar tracers in the ocean. In a collaborative project called the Hybrid Ocean Model Environment (HOME), vastly different advection schemes (space-differencing schemes for the advection equation) become available to modelers in the form of subroutine calls (kernels). In this talk we explore the possibility of utilizing ESMF data structures in wrapping these kernels so that they can be readily used in ESMF gridded components.
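A toy illustration of an advection "kernel" in the sense used above: a small, self-contained routine implementing one space-differencing scheme that a model driver could call interchangeably with others. A first-order upwind step for 1-D periodic tracer advection is sketched here; the actual HOME/ESMF wrapping is Fortran-level plumbing not reproduced in this note.

```python
import numpy as np

def upwind_step(q, u, dx, dt):
    """One explicit upwind step for dq/dt + u*dq/dx = 0 (u > 0, periodic)."""
    c = u * dt / dx                       # Courant number; stability needs c <= 1
    return q - c * (q - np.roll(q, 1))

n, u, dx, dt = 100, 1.0, 1.0, 0.5         # c = 0.5, stable
q = np.zeros(n)
q[10:20] = 1.0                            # square tracer pulse
total0 = q.sum()
for _ in range(40):                       # advect the pulse 20 grid cells
    q = upwind_step(q, u, dx, dt)
```

The periodic upwind scheme conserves the tracer total exactly and is monotone, which is why such kernels can be swapped without changing a model's conservation properties.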
Kernel abortion in maize. II. Distribution of ¹⁴C among kernel carbohydrates
Hanft, J.M.; Jones, R.J.
1986-06-01
This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to (¹⁴C)sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on (¹⁴C)sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose.
Kernel principal component analysis for stochastic input model generation
Ma Xiang; Zabaras, Nicholas
2011-08-10
Highlights: • KPCA is used to construct a reduced-order stochastic model of permeability. • A new approach is proposed to solve the pre-image problem in KPCA. • Polynomial chaos is used to provide a parametric stochastic input model. • Flow in porous media with channelized permeability is considered. - Abstract: Stochastic analysis of random heterogeneous media provides useful information only if realistic input models of the material property variations are used. These input models are often constructed from a set of experimental samples of the underlying random field. To this end, the Karhunen-Loève (K-L) expansion, also known as principal component analysis (PCA), is the most popular model reduction method due to its uniform mean-square convergence. However, it only projects the samples onto an optimal linear subspace, which results in an unreasonable representation of the original data if they are non-linearly related to each other. In other words, it only preserves the first-order (mean) and second-order (covariance) statistics of a random field, which is insufficient for reproducing complex structures. This paper applies kernel principal component analysis (KPCA) to construct a reduced-order stochastic input model for the material property variation in heterogeneous media. KPCA can be considered a nonlinear version of PCA. Through the use of kernel functions, KPCA further enables the preservation of higher-order statistics of the random field, instead of just the two-point statistics of the standard K-L expansion. Thus, this method can model non-Gaussian, non-stationary random fields. In this work, we also propose a new approach to solve the pre-image problem involved in KPCA. In addition, a polynomial chaos (PC) expansion is used to represent the random coefficients in KPCA, which provides a parametric stochastic input model. Thus, realizations which are statistically consistent with the experimental data can be generated.
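The KPCA step itself is compact: center the Gram matrix in feature space, take its leading eigenvectors, and project. The sketch below uses an assumed RBF kernel and random stand-in "samples" purely for illustration; the paper's pre-image solver and polynomial chaos parametrization are omitted.

```python
import numpy as np

# Minimal KPCA sketch. Kernel, width, and toy data are assumptions.
def kpca(K, n_components):
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one    # center the Gram matrix
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]      # leading eigenpairs
    w, V = w[idx], V[:, idx]
    alphas = V / np.sqrt(np.maximum(w, 1e-12))    # normalize in feature space
    return Kc @ alphas                            # projected samples

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))                      # stand-in experimental samples
K = np.exp(-0.1 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
Z = kpca(K, 3)
```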
NASA Astrophysics Data System (ADS)
Lee, Keonhee; Oh, Jumi
2016-01-01
A notion of measure expansivity for flows was introduced by Carrasco-Olivera and Morales in [3] as a generalization of expansivity, and they proved that there are no measure expansive flows on closed surfaces. In this paper we introduce a concept of weak measure expansivity for flows which is strictly weaker than measure expansivity, and show that there is a weak measure expansive flow on a closed surface. Moreover, we show that any C¹ stably weak measure expansive flow on a C∞ closed manifold M is Ω-stable, and any C¹ stably measure expansive flow on M satisfies both Axiom A and the quasi-transversality condition.
Accuracy of Reduced and Extended Thin-Wire Kernels
Burke, G J
2008-11-24
Some results are presented comparing the accuracy of the reduced thin-wire kernel and an extended kernel with exact integration of the 1/R term of the Green's function and results are shown for simple wire structures.
Kernel Partial Least Squares for Nonlinear Regression and Discrimination
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.
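The flavor of kernel PLS can be conveyed with a short sketch: score vectors are extracted from the centered Gram matrix and the response, and both are deflated after each component. The snippet below is a simplified single-response version for illustration, with an assumed RBF kernel and a toy nonlinear target, not the exact published algorithm.

```python
import numpy as np

# Simplified kernel-PLS-style fit (illustrative, not Rosipal's exact method).
def kernel_pls_fit(K, y, n_components):
    n = K.shape[0]
    J = np.eye(n) - np.full((n, n), 1.0 / n)
    Kc = J @ K @ J                      # center the Gram matrix
    r = y - y.mean()                    # centered response (running residual)
    T = []
    for _ in range(n_components):
        t = Kc @ r                      # score direction driven by the residual
        t = t / np.linalg.norm(t)
        T.append(t)
        P = np.eye(n) - np.outer(t, t)  # deflate the extracted score
        Kc = P @ Kc @ P
        r = P @ r
    T = np.array(T).T                   # orthonormal scores, shape (n, c)
    return T @ (T.T @ (y - y.mean())) + y.mean()

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(40, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2  # nonlinear toy target
K = np.exp(-2.0 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
fitted = kernel_pls_fit(K, y, n_components=8)
rel_err = np.linalg.norm(fitted - y) / np.linalg.norm(y - y.mean())
```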
Fabrication of Uranium Oxycarbide Kernels for HTR Fuel
Charles Barnes; Clay Richardson; Scott Nagley; John Hunn; Eric Shaber
2010-10-01
Babcock and Wilcox (B&W) has been producing high-quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% ²³⁵U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and have also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of the AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small-scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing the capacity of the current fabrication line for production of first-core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full-scale fuel fabrication facility.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... the separated half of a kernel with not more than one-eighth broken off....
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which was introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatio-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain-machine interfaces. PMID:25866504
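The core idea, a kernel expansion for the value function updated by TD errors, can be sketched briefly. This is a hedged toy in the spirit of KTD with λ = 0: parameter names, the growing (unsparsified) dictionary, and the two-state chain are simplifications for illustration, not the published algorithm.

```python
import numpy as np

def rbf(x, c, width=1.0):
    return np.exp(-np.sum((x - c) ** 2) / (2.0 * width ** 2))

class KernelTD:
    """Value function as a kernel expansion; each TD error adds a center."""
    def __init__(self, eta=0.3, gamma=0.9):
        self.eta, self.gamma = eta, gamma
        self.centers, self.weights = [], []

    def value(self, x):
        return sum(w * rbf(x, c) for w, c in zip(self.weights, self.centers))

    def update(self, x, reward, x_next, terminal=False):
        target = reward + (0.0 if terminal else self.gamma * self.value(x_next))
        td_error = target - self.value(x)
        self.centers.append(np.asarray(x, dtype=float))
        self.weights.append(self.eta * td_error)
        return td_error

# Toy two-state chain: state 0 -> state 1 -> terminal, reward 1 at the end.
agent = KernelTD()
for _ in range(200):
    agent.update(np.array([0.0]), 0.0, np.array([1.0]))
    agent.update(np.array([1.0]), 1.0, np.array([2.0]), terminal=True)

v0 = agent.value(np.array([0.0]))   # should approach gamma * v1 = 0.9
v1 = agent.value(np.array([1.0]))   # should approach 1.0
```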
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
NASA Astrophysics Data System (ADS)
Khodachenko, M. L.; Shaikhislamov, I. F.; Lammer, H.; Prokopov, P. A.
2015-11-01
This is the second paper in a series where we build a self-consistent model to simulate the mass-loss process of a close-orbit magnetized giant exoplanet, so-called hot Jupiter (HJ). In this paper we generalize the hydrodynamic (HD) model of an HJ's expanding hydrogen atmosphere, proposed in the first paper, to include the effects of intrinsic planetary magnetic field. The proposed self-consistent axisymmetric 2D magnetohydrodynamics model incorporates radiative heating and ionization of the atmospheric gas, basic hydrogen chemistry for the appropriate account of major species composing HJ's upper atmosphere and related radiative energy deposition, and H₃⁺ and Lyα cooling processes. The model also takes into account a realistic solar-type X-ray/EUV spectrum for calculation of intensity and column density distribution of the radiative energy input, as well as gravitational and rotational forces acting in a tidally locked planet-star system. An interaction between the expanding atmospheric plasma and an intrinsic planetary magnetic dipole field leads to the formation of a current-carrying magnetodisk that plays an important role for topology and scaling of the planetary magnetosphere. A cyclic character of the magnetodisk behavior, composed of consequent phases of the disk formation followed by the magnetic reconnection with the ejection of a ring-type plasmoid, has been discovered and investigated. We found that the mass-loss rate of an HD 209458b analog planet is weakly affected by the equatorial surface field <0.3 G, but is suppressed by an order of magnitude at the field of 1 G.
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 3 2011-04-01 2011-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 3 2012-04-01 2012-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 3 2013-04-01 2013-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a..., packaging, transporting, or holding food, subject to the provisions of this section. (a) Tamarind...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 7 2013-01-01 2013-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 7 2013-01-01 2013-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
Heat kernel for Newton-Cartan trace anomalies
NASA Astrophysics Data System (ADS)
Auzzi, Roberto; Nardelli, Giuseppe
2016-07-01
We compute the leading part of the trace anomaly for a free non-relativistic scalar in 2 + 1 dimensions coupled to a background Newton-Cartan metric. The anomaly is proportional to 1/m, where m is the mass of the scalar. We comment on the implications of a conjectured a-theorem for non-relativistic theories with boost invariance.
Chare kernel; A runtime support system for parallel computations
Shu, W.; Kale, L.V.
1991-03-01
This paper presents the chare kernel system, which supports parallel computations with irregular structure. The chare kernel is a collection of primitive functions that manage chares, manipulate messages, invoke atomic computations, and coordinate concurrent activities. Programs written in the chare kernel language can be executed on different parallel machines without change. Users writing such programs concern themselves with the creation of parallel actions but not with assigning them to specific processors. The authors describe the design and implementation of the chare kernel. Performance of chare kernel programs on two hypercube machines, the Intel iPSC/2 and the NCUBE, is also given.
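The programming model can be illustrated with a toy message-driven scheduler: small objects ("chares") whose entry methods run when a message for them is dequeued, with no explicit processor assignment by the programmer. This is purely illustrative of the model, not the actual Chare Kernel runtime.

```python
from collections import deque

class Scheduler:
    """Central message queue; delivering a message invokes an entry method."""
    def __init__(self):
        self.queue = deque()

    def run(self):
        while self.queue:
            target, method, args = self.queue.popleft()
            getattr(target, method)(*args)      # invoke the entry method

class Chare:
    def __init__(self, scheduler):
        self.scheduler = scheduler

    def send(self, target, method, *args):
        # Asynchronous send: just enqueue; the scheduler decides when it runs.
        self.scheduler.queue.append((target, method, args))

class Summer(Chare):
    """A chare with one entry method that accumulates values."""
    def __init__(self, scheduler):
        super().__init__(scheduler)
        self.total = 0

    def add(self, x):
        self.total += x

sched = Scheduler()
s = Summer(sched)
for i in range(1, 5):
    s.send(s, "add", i)     # messages, not direct calls
sched.run()
```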
Kernel weights optimization for error diffusion halftoning method
NASA Astrophysics Data System (ADS)
Fedoseev, Victor
2015-02-01
This paper describes a study to find the best error diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of values. As an objective measure of quality, WSNR was used. The problem of multidimensional optimization was solved numerically using several well-known algorithms: Nelder-Mead, BFGS, and others. The study found a kernel that provides a quality gain of about 5% in comparison with the best of the commonly used kernels, the one introduced by Floyd and Steinberg. The other kernels obtained allow a significant reduction in the computational complexity of the halftoning process without reducing its quality.
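For reference, the Floyd-Steinberg baseline against which such kernels are optimized distributes each pixel's quantization error to four neighbors with weights (7, 3, 5, 1)/16. A minimal sketch (raster scan only; serpentine scanning and WSNR evaluation omitted):

```python
import numpy as np

def error_diffuse(img):
    """Binarize a [0,1] grayscale image with Floyd-Steinberg error diffusion."""
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            old = f[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Diffuse the error with the (7, 3, 5, 1)/16 kernel weights.
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.full((32, 32), 0.25)          # constant 25% gray patch
halftone = error_diffuse(gray)
density = halftone.mean()               # should approximate the input level
```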
Online kernel principal component analysis: a reduced-order model.
Honeine, Paul
2012-09-01
Kernel principal component analysis (kernel-PCA) is an elegant nonlinear extension of one of the most used data analysis and dimensionality reduction techniques, principal component analysis. In this paper, we propose an online algorithm for kernel-PCA. To this end, we examine a kernel-based version of Oja's rule, initially put forward to extract a linear principal axis. As with most kernel-based machines, the model order equals the number of available observations. To provide an online scheme, we propose to control the model order. We discuss theoretical results, such as an upper bound on the error of approximating the principal functions with the reduced-order model. We derive a recursive algorithm to discover the first principal axis, and extend it to multiple axes. Experimental results demonstrate the effectiveness of the proposed approach, both on a synthetic data set and on images of handwritten digits, with comparison to classical kernel-PCA and iterative kernel-PCA.
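A hedged sketch of a kernelized Oja rule for the first principal function: the axis is stored as a kernel expansion over past samples, and each new sample updates the expansion coefficients. The model-order control that is the paper's main point is omitted, so the expansion grows with the data; a linear kernel and deterministic toy data are used so the result can be checked against ordinary PCA.

```python
import numpy as np

def kern(a, b):
    return float(np.dot(a, b))          # linear kernel, for the sanity check

u = np.array([1.0, 1.0]) / np.sqrt(2)   # dominant direction of the toy data
v = np.array([1.0, -1.0]) / np.sqrt(2)  # minor direction
stream = [2 * s * u + 0.2 * t * v for s in (1, -1) for t in (1, -1)] * 50

eta = 0.05
centers, alphas = [stream[0]], [1.0]
for x in stream[1:]:
    y = sum(a * kern(c, x) for a, c in zip(alphas, centers))
    # Oja's rule w <- w + eta*y*(phi(x) - y*w), written on the coefficients:
    alphas = [a * (1 - eta * y * y) for a in alphas]
    centers.append(np.asarray(x))
    alphas.append(eta * y)

w = sum(a * c for a, c in zip(alphas, centers))   # explicit axis (linear kernel)
w = w / np.linalg.norm(w)
alignment = abs(float(w @ u))           # should align with the top direction
```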
Selection and properties of alternative forming fluids for TRISO fuel kernel production
Baker, M. P.; King, J. C.; Gorman, B. P.; Marshall, Doug W.
2013-01-01
Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1- bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.
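The settling-velocity calculation behind the column-height approximation can be sketched with Stokes' law for a small sphere in a viscous fluid. All property values below are round illustrative numbers, not the paper's measurements, and the heat-transfer side of the estimate is omitted.

```python
# Stokes settling estimate (illustrative values only).
g = 9.81                  # gravitational acceleration, m/s^2
d = 350e-6                # droplet diameter, m (AGR-1 kernel scale)
rho_p = 2000.0            # assumed gel-droplet density, kg/m^3
rho_f = 1100.0            # assumed forming-fluid density, kg/m^3
mu = 2.0e-3               # assumed dynamic viscosity, Pa*s

# Stokes terminal velocity: v = g * d^2 * (rho_p - rho_f) / (18 * mu)
v = g * d ** 2 * (rho_p - rho_f) / (18.0 * mu)
reynolds = rho_f * v * d / mu   # Re > 1 here, so Stokes gives only a first estimate
```

Dividing the droplet's required gelation time by such a velocity gives a rough minimum column height, which is how fluid density and viscosity feed into the selection criteria.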
A Novel Framework for Learning Geometry-Aware Kernels.
Pan, Binbin; Chen, Wen-Sheng; Xu, Chen; Chen, Bo
2016-05-01
Data from the real world usually have nonlinear geometric structure, and are often assumed to lie on or close to a low-dimensional manifold in a high-dimensional space. Detecting this nonlinear geometric structure of the data is important for learning algorithms. Recently, there has been a surge of interest in utilizing kernels to exploit the manifold structure of the data. Such kernels are called geometry-aware kernels and are widely used in machine learning algorithms. The performance of these algorithms critically relies on the choice of geometry-aware kernel. Intuitively, a good geometry-aware kernel should utilize additional information beyond the geometric information. In many applications, it is required to compute out-of-sample data directly. However, most geometry-aware kernel methods are restricted to the available data given beforehand, with no straightforward extension to out-of-sample data. In this paper, we propose a framework for more general geometry-aware kernel learning. The proposed framework integrates multiple sources of information and enables us to develop flexible and effective kernel matrices. Then, we theoretically show how the learned kernel matrices are extended to the corresponding kernel functions, with which out-of-sample data can be computed directly. Under our framework, a novel family of geometry-aware kernels is developed. In particular, some existing geometry-aware kernels can be viewed as instances of our framework. The performance of the kernels is evaluated on dimensionality reduction, classification, and clustering tasks. The empirical results show that our kernels significantly improve the performance.
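One well-known member of the geometry-aware family can serve as a concrete illustration: a diffusion kernel exp(-βL) built from the graph Laplacian of a k-nearest-neighbor graph, so that similarity follows the manifold rather than straight-line distance. This is a generic instance of the family discussed above, not the authors' learned kernel; k and β are assumed values.

```python
import numpy as np

def knn_graph_laplacian(X, k=3):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]   # k nearest, skipping the point itself
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                  # symmetrize the adjacency
    return np.diag(W.sum(axis=1)) - W

def diffusion_kernel(L, beta=0.5):
    w, V = np.linalg.eigh(L)                # L is symmetric PSD
    return (V * np.exp(-beta * w)) @ V.T    # matrix exponential exp(-beta * L)

# Points on a circle: a 1-D manifold embedded in the plane.
theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)
X = np.c_[np.cos(theta), np.sin(theta)]
K = diffusion_kernel(knn_graph_laplacian(X))
```

Because similarity must diffuse along the graph, adjacent points on the circle get a larger kernel value than diametrically opposite ones.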
Wolowodiuk, Walter
1976-01-06
A heat exchanger of the straight tube type in which different rates of thermal expansion between the straight tubes and the supply pipes furnishing fluid to those tubes do not result in tube failures. The supply pipes each contain a section which is of helical configuration.
Quark-hadron duality: Pinched kernel approach
NASA Astrophysics Data System (ADS)
Dominguez, C. A.; Hernandez, L. A.; Schilcher, K.; Spiesberger, H.
2016-08-01
Hadronic spectral functions measured by the ALEPH collaboration in the vector and axial-vector channels are used to study potential quark-hadron duality violations (DV). This is done entirely in the framework of pinched kernel finite energy sum rules (FESR), i.e. in a model-independent fashion. The kinematical range of the ALEPH data is effectively extended up to s = 10 GeV² by using an appropriate kernel, and assuming that in this region the spectral functions are given by perturbative QCD. Support for this assumption is obtained by using e+e− annihilation data in the vector channel. Results in both channels show a good saturation of the pinched FESR, without further need of explicit models of DV.
Wilson Dslash Kernel From Lattice QCD Optimization
Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.; Vaidyanathan, Karthikeyan
2015-07-01
Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal for illustrating several optimization techniques. In this chapter we detail our work in optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the techniques give excellent performance on the regular Xeon architecture as well.
Searching and Indexing Genomic Databases via Kernelization
Gagie, Travis; Puglisi, Simon J.
2015-01-01
The rapid advance of DNA sequencing technologies has yielded databases of thousands of genomes. To search and index these databases effectively, it is important that we take advantage of the similarity between those genomes. Several authors have recently suggested searching or indexing only one reference genome and the parts of the other genomes where they differ. In this paper, we survey the 20-year history of this idea and discuss its relation to kernelization in parameterized complexity. PMID:25710001
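The reference-plus-diffs idea can be sketched at toy scale (real indexes use compressed data structures; the sequences and diff sets below are invented for illustration). Matches falling entirely in unchanged regions are resolved once on the shared reference; per genome, only windows around its diffs are re-examined.

```python
# One shared reference plus per-genome substitutions ("the parts where
# the other genomes differ"). All sequences here are illustrative.
reference = "ACGTACGTACGTACGT"
diffs = {                      # genome id -> {position: substituted base}
    "g1": {3: "G"},
    "g2": {7: "C", 12: "T"},
}

def genome(gid):
    s = list(reference)
    for pos, base in diffs[gid].items():
        s[pos] = base
    return "".join(s)

def contains(gid, pattern):
    m = len(pattern)
    g_diffs = diffs[gid]
    # Occurrences in the reference that touch no diff position carry over.
    for i in range(len(reference) - m + 1):
        if reference[i:i + m] == pattern and all(
                not (i <= p < i + m) for p in g_diffs):
            return True
    # Only windows overlapping a diff need genome-specific inspection.
    g = genome(gid)
    for p in g_diffs:
        lo, hi = max(0, p - m + 1), min(len(g), p + m)
        if pattern in g[lo:hi]:
            return True
    return False
```

Every length-m occurrence either avoids all diffs (handled on the reference) or overlaps one (handled in a window of width at most 2m-1), so the query never materializes whole genomes beyond those windows.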
Semi-Supervised Kernel Mean Shift Clustering.
Anand, Saket; Mittal, Sushil; Tuzel, Oncel; Meer, Peter
2014-06-01
Mean shift clustering is a powerful nonparametric technique that does not require prior knowledge of the number of clusters and does not constrain the shape of the clusters. However, being completely unsupervised, its performance suffers when the original distance metric fails to capture the underlying cluster structure. Despite recent advances in semi-supervised clustering methods, there has been little effort towards incorporating supervision into mean shift. We propose a semi-supervised framework for kernel mean shift clustering (SKMS) that uses only pairwise constraints to guide the clustering procedure. The points are first mapped to a high-dimensional kernel space where the constraints are imposed by a linear transformation of the mapped points. This is achieved by modifying the initial kernel matrix by minimizing a log det divergence-based objective function. We show the advantages of SKMS by evaluating its performance on various synthetic and real datasets while comparing with state-of-the-art semi-supervised clustering algorithms. PMID:26353281
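The kernel mean-shift update at the core of this method can be sketched in plain numpy. This is the unsupervised toy version with a Gaussian kernel; the paper's pairwise-constraint kernel learning (the log det-divergence step) is omitted, and the data and bandwidth are assumptions.

```python
import numpy as np

def mean_shift(X, bandwidth=0.5, iters=50):
    # Iteratively move each point to the kernel-weighted mean of the data;
    # points in the same basin of attraction collapse onto a common mode.
    Y = X.copy()
    for _ in range(iters):
        d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * bandwidth ** 2))
        Y = (W @ X) / W.sum(1, keepdims=True)
    return Y

rng = np.random.default_rng(1)
X = np.r_[rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))]
modes = mean_shift(X)   # two well-separated blobs -> two modes
```

Note that neither the number of clusters nor their shape is specified in advance; the bandwidth alone controls the granularity.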
Kernel methods for phenotyping complex plant architecture.
Kawamura, Koji; Hibrand-Saint Oyant, Laurence; Foucher, Fabrice; Thouroude, Tatiana; Loustau, Sébastien
2014-02-01
The Quantitative Trait Loci (QTL) mapping of plant architecture is a critical step for understanding the genetic determinism of plant architecture. Previous studies adopted simple measurements, such as plant height, stem diameter and branching intensity, for QTL mapping of plant architecture. Many of these quantitative traits were generally correlated with each other, which gives rise to statistical problems in the detection of QTL. We aim to test the applicability of kernel methods to phenotyping inflorescence architecture and its QTL mapping. We first test Kernel Principal Component Analysis (KPCA) and Support Vector Machines (SVM) over an artificial dataset of simulated inflorescences with different types of flower distribution, each coded as a sequence of flower number per node along a shoot. The ability of SVM and KPCA to discriminate the different inflorescence types is illustrated. We then apply the KPCA representation to the real dataset of rose inflorescence shoots (n=1460) obtained from a 98 F1 hybrid mapping population. We find kernel principal components with high heritability (>0.7), and the QTL analysis identifies a new QTL, which was not detected by a trait-by-trait analysis of simple architectural measurements. The main tools developed in this paper could be used to tackle the general problem of QTL mapping of complex (sequences, 3D structure, graphs) phenotypic traits.
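The KPCA step can be sketched in numpy; generic synthetic vectors stand in for the coded inflorescence sequences, and the RBF kernel and its bandwidth are assumptions.

```python
import numpy as np

def kpca(K, n_components=2):
    # Kernel PCA: double-center the kernel matrix (centering in feature
    # space), then project onto the leading eigenvectors.
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    return vecs * np.sqrt(np.maximum(vals, 0))   # component scores

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 4))                 # stand-in phenotype vectors
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
scores = kpca(np.exp(-d2 / 4))                   # RBF kernel, then KPCA
```

The resulting component scores are the quantitative traits on which heritability and QTL analysis would then be run.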
A Fast Reduced Kernel Extreme Learning Machine.
Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua
2016-04-01
In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to work on the Support Vector Machine (SVM) or Least Square SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning for reduced kernel-based single-hidden-layer feedforward networks (SLFNs). In particular, we prove that RKELM can approximate any nonlinear function accurately provided sufficiently many support vectors are used. Experimental results on a wide variety of real-world small and large instance size applications in the context of binary classification, multi-class problems and regression are then reported to show that RKELM achieves generalization performance competitive with SVM/LS-SVM at only a fraction of the computational effort. PMID:26829605
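A minimal numpy sketch of the reduced-kernel recipe: pick a random support subset, form the n-by-m kernel map, and solve one ridge-regularized least-squares problem for the output weights, with no iterative SVM-style training. The RBF kernel, subset size, and regularization are illustrative assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    return np.exp(-gamma * ((X[:, None] - Y[None]) ** 2).sum(-1))

def rkelm_fit(X, T, m=20, lam=1e-3, seed=0):
    idx = np.random.default_rng(seed).choice(len(X), m, replace=False)
    S = X[idx]                                    # random support subset
    Phi = rbf(X, S)                               # n x m kernel mapping
    beta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ T)
    return S, beta

def rkelm_predict(S, beta, X):
    return rbf(X, S) @ beta

# Toy binary problem: two Gaussian blobs with labels -1 / +1.
rng = np.random.default_rng(3)
X = np.r_[rng.normal(-2, 0.5, (40, 2)), rng.normal(2, 0.5, (40, 2))]
T = np.r_[-np.ones(40), np.ones(40)]
S, beta = rkelm_fit(X, T)
acc = np.mean(np.sign(rkelm_predict(S, beta, X)) == T)
```

Training cost is a single m-by-m solve, which is the source of the claimed speedup over iterative SVM training.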
NASA Astrophysics Data System (ADS)
Pope, Benjamin; Tuthill, Peter; Hinkley, Sasha; Ireland, Michael J.; Greenbaum, Alexandra; Latyshev, Alexey; Monnier, John D.; Martinache, Frantz
2016-01-01
At present, the principal limitation on the resolution and contrast of astronomical imaging instruments comes from aberrations in the optical path, which may be imposed by the Earth's turbulent atmosphere or by variations in the alignment and shape of the telescope optics. These errors can be corrected physically, with active and adaptive optics, and in post-processing of the resulting image. A recently developed adaptive optics post-processing technique, called kernel-phase interferometry, uses linear combinations of phases that are self-calibrating with respect to small errors, with the goal of constructing observables that are robust against the residual optical aberrations in otherwise well-corrected imaging systems. Here, we present a direct comparison between kernel phase and the more established competing techniques, aperture masking interferometry, point spread function (PSF) fitting and bispectral analysis. We resolve the α Ophiuchi binary system near periastron, using the Palomar 200-Inch Telescope. This is the first case in which kernel phase has been used with a full aperture to resolve a system close to the diffraction limit with ground-based extreme adaptive optics observations. Excellent agreement in astrometric quantities is found between kernel phase and masking, and kernel phase significantly outperforms PSF fitting and bispectral analysis, demonstrating its viability as an alternative to conventional non-redundant masking under appropriate conditions.
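The linear-algebra core of kernel phase can be sketched directly: if measured Fourier phases respond to pupil-plane phase errors through a linear model phi ≈ A p, then rows K of the left null space of A give observables K·phi that cancel those errors to first order. Here a random matrix stands in for the real baseline-mapping matrix, which comes from the pupil model.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((12, 5))        # 12 measured phases, 5 pupil modes
U, s, Vt = np.linalg.svd(A)
K = U[:, 5:].T                          # left-null-space rows: K @ A = 0

pupil_errors = rng.standard_normal(5)   # residual AO aberrations
phi_true = rng.standard_normal(12)      # intrinsic (target) phase signal
phi_obs = phi_true + A @ pupil_errors   # what the instrument records

# Kernel phases of the aberrated data equal those of the clean data,
# because the aberration term lives entirely in the range of A.
kp_obs, kp_true = K @ phi_obs, K @ phi_true
```

This is the sense in which kernel-phase observables are "self-calibrating with respect to small errors".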
Swenson, Paul F.; Moore, Paul B.
1979-01-01
An air heating and cooling system for a building includes an expansion-type refrigeration circuit and a heat engine. The refrigeration circuit includes two heat exchangers, one of which is communicated with a source of indoor air from the building and the other of which is communicated with a source of air from outside the building. The heat engine includes a heat rejection circuit having a source of rejected heat and a primary heat exchanger connected to the source of rejected heat. The heat rejection circuit also includes an evaporator in heat exchange relation with the primary heat exchanger, a heat engine indoor heat exchanger, and a heat engine outdoor heat exchanger. The indoor heat exchangers are disposed in series air flow relationship, with the heat engine indoor heat exchanger being disposed downstream from the refrigeration circuit indoor heat exchanger. The outdoor heat exchangers are also disposed in series air flow relationship, with the heat engine outdoor heat exchanger disposed downstream from the refrigeration circuit outdoor heat exchanger. A common fluid is used in both of the indoor heat exchangers and in both of the outdoor heat exchangers. In a first embodiment, the heat engine is a Rankine cycle engine. In a second embodiment, the heat engine is a non-Rankine cycle engine.
Swenson, Paul F.; Moore, Paul B.
1982-01-01
An air heating and cooling system for a building includes an expansion-type refrigeration circuit and a heat engine. The refrigeration circuit includes two heat exchangers, one of which is communicated with a source of indoor air from the building and the other of which is communicated with a source of air from outside the building. The heat engine includes a heat rejection circuit having a source of rejected heat and a primary heat exchanger connected to the source of rejected heat. The heat rejection circuit also includes an evaporator in heat exchange relation with the primary heat exchanger, a heat engine indoor heat exchanger, and a heat engine outdoor heat exchanger. The indoor heat exchangers are disposed in series air flow relationship, with the heat engine indoor heat exchanger being disposed downstream from the refrigeration circuit indoor heat exchanger. The outdoor heat exchangers are also disposed in series air flow relationship, with the heat engine outdoor heat exchanger disposed downstream from the refrigeration circuit outdoor heat exchanger. A common fluid is used in both of the indoor heat exchangers and in both of the outdoor heat exchangers. In a first embodiment, the heat engine is a Rankine cycle engine. In a second embodiment, the heat engine is a non-Rankine cycle engine.
Reaction Kernel Structure of a Slot Jet Diffusion Flame in Microgravity
NASA Technical Reports Server (NTRS)
Takahashi, F.; Katta, V. R.
2001-01-01
Diffusion flame stabilization in normal earth gravity (1 g) has long been a fundamental research subject in combustion. Local flame-flow phenomena, including heat and species transport and chemical reactions, around the flame base in the vicinity of condensed surfaces control flame stabilization and fire spreading processes. Therefore, gravity plays an important role in the subject topic because buoyancy induces flow in the flame zone, thus increasing the convective (and diffusive) oxygen transport into the flame zone and, in turn, reaction rates. Recent computations show that a peak reactivity (heat-release or oxygen-consumption rate) spot, or reaction kernel, is formed in the flame base by back-diffusion and reactions of radical species in the incoming oxygen-abundant flow at relatively low temperatures (about 1550 K). Quasi-linear correlations were found between the peak heat-release or oxygen-consumption rate and the velocity at the reaction kernel for cases including both jet and flat-plate diffusion flames in airflow. The reaction kernel provides a stationary ignition source to incoming reactants, sustains combustion, and thus stabilizes the trailing diffusion flame. In a quiescent microgravity environment, no buoyancy-induced flow exists and thus purely diffusive transport controls the reaction rates. Flame stabilization mechanisms in such a purely diffusion-controlled regime remain largely unstudied. Therefore, it will be a rigorous test for the reaction kernel correlation if it can be extended toward zero velocity conditions in the purely diffusion-controlled regime. The objectives of this study are to reveal the structure of the flame-stabilizing region of a two-dimensional (2D) laminar jet diffusion flame in microgravity and develop a unified diffusion flame stabilization mechanism. This paper reports the recent progress in the computation and experiment performed in microgravity.
FABRICATION OF URANIUM OXYCARBIDE KERNELS AND COMPACTS FOR HTR FUEL
Dr. Jeffrey A. Phillips; Eric L. Shaber; Scott G. Nagley
2012-10-01
As part of the program to demonstrate tristructural isotropic (TRISO)-coated fuel for the Next Generation Nuclear Plant (NGNP), Advanced Gas Reactor (AGR) fuel is being irradiation tested in the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL). This testing has led to improved kernel fabrication techniques, the formation of TRISO fuel particles, and upgrades to the overcoating, compaction, and heat treatment processes. Combined, these improvements provide a fuel manufacturing process that meets the stringent requirements associated with testing in the AGR experimentation program. Researchers at Idaho National Laboratory (INL) are working in conjunction with a team from Babcock and Wilcox (B&W) and Oak Ridge National Laboratory (ORNL) to (a) improve the quality of uranium oxycarbide (UCO) fuel kernels, (b) deposit TRISO layers to produce a fuel that meets or exceeds the standard developed by German researchers in the 1980s, and (c) develop a process to overcoat TRISO particles with the same matrix material, but applied with water using equipment previously and successfully employed in the pharmaceutical industry. A primary goal of this work is to simplify the process, making it more robust and repeatable while relying less on operator technique than prior overcoating efforts. A secondary goal is to improve first-pass yields to greater than 95% through the use of established technology and equipment. In the first test, called “AGR-1,” graphite compacts containing approximately 300,000 coated particles were irradiated from December 2006 to November 2009. The AGR-1 fuel was designed to closely replicate many of the properties of German TRISO-coated particles, thought to be important for good fuel performance. No release of gaseous fission product, indicative of particle coating failure, was detected in the nearly 3-year irradiation to a peak burn up of 19.6% at a time-average temperature of 1038–1121°C. Before fabricating AGR-2 fuel, each
Alamaniotis, Miltiadis; Bargiotas, Dimitrios; Tsoukalas, Lefteri H
2016-01-01
Integration of energy systems with information technologies has facilitated the realization of smart energy systems that utilize information to optimize system operation. To that end, accurate, ahead-of-time forecasting of load demand is crucial for optimizing energy system operation. In particular, load forecasting allows planning of system expansion and decision making for enhancing system safety and reliability. In this paper, the application of two types of kernel machines for medium term load forecasting (MTLF) is presented and their performance is recorded based on a set of historical electricity load demand data. The two kernel machine models, namely Gaussian process regression (GPR) and relevance vector regression (RVR), are utilized for making predictions over future load demand. Both models are equipped with a Gaussian kernel and are tested on daily predictions for a 30-day-ahead horizon taken from the New England Area. Furthermore, their performance is compared to the ARMA(2,2) model with respect to mean average percentage error and squared correlation coefficient. Results demonstrate the superiority of RVR over the other forecasting models in performing MTLF. PMID:26835237
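The GPR half of this comparison can be sketched in numpy on a synthetic weekly load pattern. All values and hyperparameters below are illustrative, not the paper's data, and the RVR model is omitted.

```python
import numpy as np

def k(a, b, ell=2.0, var=25.0):
    # Gaussian (RBF) covariance between two sets of time points.
    return var * np.exp(-0.5 * (a[:, None] - b[None]) ** 2 / ell ** 2)

days = np.arange(30.0)
load = 100.0 + 10.0 * np.sin(2 * np.pi * days / 7)    # weekly demand cycle
noise = 1e-2

# GP regression: posterior mean is k(*, X) (K + sigma^2 I)^{-1} (y - mean).
Ktr = k(days, days) + noise * np.eye(30)
alpha = np.linalg.solve(Ktr, load - load.mean())

fit = load.mean() + k(days, days) @ alpha             # in-sample fit
future = np.arange(30.0, 37.0)                        # 7 days ahead
pred = load.mean() + k(future, days) @ alpha          # forecast
```

With an RBF kernel the forecast relaxes toward the prior mean as the horizon outruns the length scale, one reason kernel choice matters for MTLF.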
Small convolution kernels for high-fidelity image restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1991-01-01
An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
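The idea can be sketched empirically in 1-D: instead of the paper's model-based normal equations, fit the five kernel taps that minimize mean-squared restoration error over simulated scene/observation pairs. The blur, noise level, and scene statistics below are assumptions chosen only to make the demonstration self-contained.

```python
import numpy as np

rng = np.random.default_rng(5)
n, half = 256, 2                          # 5-tap kernel, offsets -2..2

def scene():
    s = np.cumsum(rng.standard_normal(n))  # correlated 1-D "scene"
    return s - s.mean()

def degrade(x):
    # Assumed acquisition model: [0.25, 0.5, 0.25] blur plus white noise.
    blurred = 0.5 * x + 0.25 * np.roll(x, 1) + 0.25 * np.roll(x, -1)
    return blurred + 0.05 * rng.standard_normal(n)

def shifts(y):
    # Columns are shifted copies of y; restoration is shifts(y) @ kernel.
    return np.stack([np.roll(y, j) for j in range(-half, half + 1)], axis=1)

# Least-squares fit of the MSE-optimal small kernel over an ensemble.
A_rows, b_rows = [], []
for _ in range(50):
    x = scene()
    A_rows.append(shifts(degrade(x)))
    b_rows.append(x)
kernel5, *_ = np.linalg.lstsq(np.concatenate(A_rows),
                              np.concatenate(b_rows), rcond=None)

# Held-out comparison: restored vs. raw observation error.
mse_obs = mse_rest = 0.0
for _ in range(20):
    x = scene()
    y = degrade(x)
    mse_obs += np.mean((y - x) ** 2)
    mse_rest += np.mean((shifts(y) @ kernel5 - x) ** 2)
```

Because the identity kernel is in the hypothesis class, the fitted 5-tap kernel can only improve on the unrestored observation in expectation, mirroring the paper's point that a small spatially constrained kernel captures most of the Wiener filter's benefit.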
Multiple kernel learning for sparse representation-based classification.
Shrivastava, Ashish; Patel, Vishal M; Chellappa, Rama
2014-07-01
In this paper, we propose a multiple kernel learning (MKL) algorithm that is based on the sparse representation-based classification (SRC) method. Taking advantage of the nonlinear kernel SRC in efficiently representing the nonlinearities in the high-dimensional feature space, we propose an MKL method based on the kernel alignment criteria. Our method uses a two-step training procedure to learn the kernel weights and sparse codes. At each iteration, the sparse codes are updated first while fixing the kernel mixing coefficients, and then the kernel mixing coefficients are updated while fixing the sparse codes. These two steps are repeated until a stopping criterion is met. The effectiveness of the proposed method is demonstrated using several publicly available image classification databases, and it is shown that this method can perform significantly better than many competitive image classification algorithms. PMID:24835226
Visualization of nonlinear kernel models in neuroimaging by sensitivity maps.
Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Lund, Torben Ellegaard; Hansen, Lars Kai
2011-04-01
There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification models. We illustrate the performance of the sensitivity map on functional magnetic resonance imaging (fMRI) data based on visual stimuli. We show that the performance of linear models is reduced for certain scan labelings/categorizations in this data set, while the nonlinear models provide more flexibility. We show that the sensitivity map can be used to visualize nonlinear versions of kernel logistic regression, the kernel Fisher discriminant, and the SVM, and conclude that the sensitivity map is a versatile and computationally efficient tool for visualization of nonlinear kernel models in neuroimaging.
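A sensitivity map can be sketched for any kernel expansion f(x) = Σ_i a_i k(x_i, x) as the squared gradient of f averaged over the inputs. Here a kernel ridge fit stands in for the SVM, the data are synthetic, and only feature 0 actually drives the labels.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    return np.exp(-gamma * ((X[:, None] - Y[None]) ** 2).sum(-1))

rng = np.random.default_rng(6)
X = rng.standard_normal((60, 3))            # 3 "voxels"
y = np.sign(np.sin(2 * X[:, 0]))            # labels depend on voxel 0 only

# Kernel ridge stand-in for a kernel classifier: f(x) = sum_i a_i k(x_i, x).
a = np.linalg.solve(rbf(X, X) + 0.1 * np.eye(60), y)

def grad_f(x, gamma=0.5):
    # Analytic gradient of the RBF expansion at point x.
    d = x - X
    kx = np.exp(-gamma * (d ** 2).sum(1))
    return (a * kx) @ (-2 * gamma * d)

# Sensitivity map: mean squared gradient over the sample.
smap = np.mean([grad_f(x) ** 2 for x in X], axis=0)
```

The map assigns the discriminative feature a much larger sensitivity than the nuisance features, which is exactly the summary one wants from a nonlinear model.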
Reif, Maria M; Hünenberger, Philippe H
2011-04-14
The raw single-ion solvation free energies computed from atomistic (explicit-solvent) simulations are extremely sensitive to the boundary conditions (finite or periodic system, system or box size) and treatment of electrostatic interactions (Coulombic, lattice-sum, or cutoff-based) used during these simulations. However, as shown by Kastenholz and Hünenberger [J. Chem. Phys. 124, 224501 (2006)], correction terms can be derived for the effects of: (A) an incorrect solvent polarization around the ion and an incomplete or/and inexact interaction of the ion with the polarized solvent due to the use of an approximate (not strictly Coulombic) electrostatic scheme; (B) the finite-size or artificial periodicity of the simulated system; (C) an improper summation scheme to evaluate the potential at the ion site, and the possible presence of a polarized air-liquid interface or of a constraint of vanishing average electrostatic potential in the simulated system; and (D) an inaccurate dielectric permittivity of the employed solvent model. Comparison with standard experimental data also requires the inclusion of appropriate cavity-formation and standard-state correction terms. In the present study, this correction scheme is extended by: (i) providing simple approximate analytical expressions (empirically-fitted) for the correction terms that were evaluated numerically in the above scheme (continuum-electrostatics calculations); (ii) providing correction terms for derivative thermodynamic single-ion solvation properties (and corresponding partial molar variables in solution), namely, the enthalpy, entropy, isobaric heat capacity, volume, isothermal compressibility, and isobaric expansivity (including appropriate standard-state correction terms). The ability of the correction scheme to produce methodology-independent single-ion solvation free energies based on atomistic simulations is tested in the case of Na(+) hydration, and the nature and magnitude of the correction terms for
Monte Carlo Code System for Electron (Positron) Dose Kernel Calculations.
CHIBANI, OMAR
1999-05-12
Version 00 KERNEL performs dose kernel calculations for an electron (positron) isotropic point source in an infinite homogeneous medium. First, the auxiliary code PRELIM is used to prepare cross section data for the considered medium. Then the KERNEL code simulates the transport of electrons and bremsstrahlung photons through the medium until all particles reach their cutoff energies. The deposited energy is scored in concentric spherical shells at a radial distance ranging from zero to twice the source particle range.
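The scoring geometry can be sketched with a toy transport model: emit particles from a point source, let each deposit its energy at a sampled radial depth, and tally deposits in concentric spherical shells. An exponential deposition radius stands in here for the actual electron/bremsstrahlung physics that KERNEL simulates.

```python
import numpy as np

rng = np.random.default_rng(7)
n, mfp, shells, r_max = 100_000, 1.0, 20, 5.0

# One unit of energy per particle, deposited at an exponential radius
# (a stand-in transport model, not the real electron physics).
r = rng.exponential(mfp, n)
edges = np.linspace(0.0, r_max, shells + 1)
dose, _ = np.histogram(r, bins=edges)          # energy scored per shell

# Normalize per unit volume so inner shells are not favored geometrically.
vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
dose_density = dose / vol
```

Dividing by shell volume is the step that turns raw tallies into a radial dose kernel.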
NASA Astrophysics Data System (ADS)
Lindemer, T. B.; Voit, S. L.; Silva, C. M.; Besmann, T. M.; Hunt, R. D.
2014-05-01
The US Department of Energy is developing a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with uranium nitride (UN) kernels with diameters near 825 μm. This effort explores factors involved in the conversion of uranium oxide-carbon microspheres into UN kernels. An analysis of previous studies with sufficient experimental details is provided. Thermodynamic calculations were made to predict pressures of carbon monoxide and other relevant gases for several reactions that can be involved in the conversion of uranium oxides and carbides into UN. Uranium oxide-carbon microspheres were heated in a microbalance with an attached mass spectrometer to determine details of calcining and carbothermic conversion in argon, nitrogen, and vacuum. A model was derived from experiments on the vacuum conversion to uranium oxide-carbide kernels. UN-containing kernels were fabricated using this vacuum conversion as part of the overall process. Carbonitride kernels of ∼89% of theoretical density were produced along with several observations concerning the different stages of the process.
A Kernel-based Account of Bibliometric Measures
NASA Astrophysics Data System (ADS)
Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji
The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on the three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of 'relativity,' or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.
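One member of such a kernel family is the von Neumann kernel on a citation graph, K_α = Σ_{k≥1} α^{k-1} (AᵀA)^k = M (I − αM)^{-1} with M = AᵀA: tiny α recovers plain co-citation relatedness, while α approaching 1/ρ(M) biases the kernel toward the dominant (importance-like) eigenvector. The graph below is invented for illustration.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],     # row i cites column j
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
M = A.T @ A                     # co-citation matrix

def von_neumann(alpha):
    # K_alpha = M (I - alpha M)^{-1}; requires alpha < 1 / rho(M).
    n = len(M)
    return M @ np.linalg.inv(np.eye(n) - alpha * M)

K_small = von_neumann(1e-4)     # ~ plain co-citation relatedness
K_large = von_neumann(0.35)     # mixes in global importance
```

The single parameter α is exactly the "degree of relativity" knob the abstract describes.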
Embedded real-time operating system micro kernel design
NASA Astrophysics Data System (ADS)
Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng
2005-12-01
Embedded systems usually require a real-time character. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical-section handling, task scheduling, interrupt handling, semaphore and message-mailbox communication, clock management, and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and responds quickly while operating in an application system.
Robust visual tracking via speedup multiple kernel ridge regression
NASA Astrophysics Data System (ADS)
Qian, Cheng; Breckon, Toby P.; Li, Hui
2015-09-01
Most tracking methods attempt to build feature spaces to represent the appearance of a target. However, limited by the complex structure of the distribution of features, feature spaces constructed in a linear manner cannot characterize the nonlinear structure well. We propose an appearance model based on kernel ridge regression for visual tracking. Dense sampling is performed around the target image patch to collect the training samples. In order to obtain a kernel space well suited to describing the target appearance, multiple kernel learning is introduced into the selection of kernels. Under this framework, instead of a single kernel, a linear combination of kernels is learned from the training samples to create a kernel space. Exploiting the circulant property of the kernel matrix, a fast iterative interpolation algorithm is developed to seek the coefficients assigned to these kernels so as to give an optimal combination. After the regression function is learned, all gathered candidate image patches are taken as input to the function, and the candidate with the maximal response is regarded as the object image patch. Extensive experimental results demonstrate that the proposed method outperforms other state-of-the-art tracking methods.
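The circulant speed-up invoked here can be sketched for a single linear kernel in 1-D (a hypothetical minimal example, not the authors' multi-kernel tracker): when the training samples are all cyclic shifts of one patch, the kernel matrix is circulant, so the ridge solution diagonalizes under the FFT.

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.standard_normal(64)                      # base "target patch" (1-D)
d = np.minimum(np.arange(64), 64 - np.arange(64))
y = np.exp(-0.5 * d ** 2 / 4.0)                  # desired response: peak at 0

# Linear-kernel values between x and all of its cyclic shifts form the
# circular autocorrelation, i.e. the first row of the circulant kernel
# matrix, so the ridge solve becomes element-wise in the Fourier domain.
kxx = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(x))).real
lam = 1e-2
alpha_f = np.fft.fft(y) / (np.fft.fft(kxx) + lam)

def respond(z):
    kxz = np.fft.ifft(np.fft.fft(z) * np.conj(np.fft.fft(x))).real
    return np.fft.ifft(np.fft.fft(kxz) * alpha_f).real

peak = int(np.argmax(respond(np.roll(x, 11))))   # localizes the shift
```

Training and evaluation cost O(n log n) instead of the O(n^3) dense kernel ridge solve, which is the point of exploiting circulant structure.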
Robust kernel collaborative representation for face recognition
NASA Astrophysics Data System (ADS)
Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong
2015-05-01
One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance will be sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We believe that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noisy face images as virtual face samples; the noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work offers a simple and feasible way to obtain virtual face samples: imposing Gaussian noise (or other types of noise) on the original training samples to generate possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
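The virtual-sample idea can be sketched with collaborative representation on toy feature vectors (the linear, non-kernel variant; all data, class means, and noise levels are synthetic): augment each training sample with a noised copy, ridge-solve the test sample over all training samples, and pick the class with the smallest reconstruction residual.

```python
import numpy as np

rng = np.random.default_rng(9)
means = np.array([[0., 0, 0, 5], [5, 0, 0, 0], [0, 5, 0, 0]])  # 3 classes
X_cols, labels = [], []
for c, mu in enumerate(means):
    orig = mu + 0.2 * rng.standard_normal((3, 4))        # 3 samples/class
    virt = orig + 0.2 * rng.standard_normal(orig.shape)  # noised (virtual)
    X_cols += [orig, virt]
    labels += [c] * 6
X = np.vstack(X_cols).T            # columns are training samples
labels = np.array(labels)

def crc_classify(y, lam=0.01):
    # Collaborative representation: one ridge solve over ALL samples,
    # then class-wise reconstruction residuals decide the label.
    a = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    residuals = [np.linalg.norm(y - X[:, labels == c] @ a[labels == c])
                 for c in range(3)]
    return int(np.argmin(residuals))

pred = crc_classify(means[1] + 0.3 * rng.standard_normal(4))
```

The kernelized version in the paper replaces the inner products X.T @ X and X.T @ y with kernel evaluations.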
LFK. Livermore FORTRAN Kernel Computer Test
McMahon, F.H.
1990-05-01
LFK, the Livermore FORTRAN Kernels, is a computer performance test that measures a realistic floating-point performance range for FORTRAN applications. Informally known as the Livermore Loops test, the LFK test may be used as a computer performance test, as a test of compiler accuracy (via checksums) and efficiency, or as a hardware endurance test. The LFK test, which focuses on FORTRAN as used in computational physics, measures the joint performance of the computer CPU, the compiler, and the computational structures in units of Megaflops/sec or Mflops. A C language version of subroutine KERNEL is also included which executes 24 samples of C numerical computation. The 24 kernels are a hydrodynamics code fragment, a fragment from an incomplete Cholesky conjugate gradient code, the standard inner product function of linear algebra, a fragment from a banded linear equations routine, a segment of a tridiagonal elimination routine, an example of a general linear recurrence equation, an equation of state fragment, part of an alternating direction implicit integration code, an integrate predictor code, a difference predictor code, a first sum, a first difference, a fragment from a two-dimensional particle-in-cell code, a part of a one-dimensional particle-in-cell code, an example of how casually FORTRAN can be written, a Monte Carlo search loop, an example of an implicit conditional computation, a fragment of a two-dimensional explicit hydrodynamics code, a general linear recurrence equation, part of a discrete ordinates transport program, a simple matrix calculation, a segment of a Planckian distribution procedure, a two-dimensional implicit hydrodynamics fragment, and determination of the location of the first minimum in an array.
Anisotropic matching principle for the hydrodynamic expansion
NASA Astrophysics Data System (ADS)
Tinti, Leonardo
2016-10-01
Following the recent success of anisotropic hydrodynamics, I propose here a new, general prescription for the hydrodynamic expansion around an anisotropic background. The anisotropic distribution fixes exactly the complete energy-momentum tensor, just as the effective temperature fixes the proper energy density in the ordinary expansion around local equilibrium. This means that momentum anisotropies are already included at the leading order, allowing for large pressure anisotropies without the need for a next-to-leading-order treatment. The first moment of the Boltzmann equation (local four-momentum conservation) provides the time evolution of the proper energy density and the four-velocity. Unlike previous prescriptions, the dynamic equations for the pressure corrections are not derived from the zeroth or second moment of the Boltzmann equation, but are taken directly from the exact evolution given by the Boltzmann equation. As known in the literature, the exact evolution of the pressure corrections involves higher moments of the Boltzmann distribution, which cannot be fixed by the anisotropic distribution alone. Neglecting the next-to-leading-order contributions corresponds to an approximation, which depends on the chosen form of the anisotropic distribution. I check the effectiveness of the leading-order expansion around the generalized Romatschke-Strickland distribution, comparing with the exact solution of the Boltzmann equation in the Bjorken limit with the collisional kernel treated in the relaxation-time approximation, finding an unprecedented agreement.
Verification of Chare-kernel programs
Bhansali, S.; Kale, L.V.
1989-01-01
Experience with concurrent programming has shown that concurrent programs can conceal bugs even after extensive testing. Thus, there is a need for practical techniques which can establish the correctness of parallel programs. This paper shows how to prove the partial correctness of programs written in the Chare-kernel language, a language designed to support the parallel execution of computations with irregular structures. The proof is based on the lattice proof technique and is divided into two parts. The first part is concerned with the program behavior within a single chare instance, whereas the second part captures the inter-chare interaction.
Prediction of kernel density of corn using single-kernel near infrared spectroscopy
Technology Transfer Automated Retrieval System (TEKTRAN)
Corn hardness is an important property for dry- and wet-millers, food processors, and corn breeders developing hybrids for specific markets. Of the several methods used to measure hardness, kernel density measurement is one of the more repeatable ways to quantify hardness. Near infrared spec...
Edgeworth expansions of stochastic trading time
NASA Astrophysics Data System (ADS)
Decamps, Marc; De Schepper, Ann
2010-08-01
Under most local and stochastic volatility models the underlying forward is assumed to be a positive function of a time-changed Brownian motion. It nicely relates the implied volatility smile to the so-called activity rate in the market. Following Young and DeWitt-Morette (1986) [8], we propose to apply the Duru-Kleinert process-cum-time transformation in path integral to formulate the transition density of the forward. The method leads to asymptotic expansions of the transition density around a Gaussian kernel corresponding to the average activity in the market conditional on the forward value. The approximation is numerically illustrated for pricing vanilla options under the CEV model and the popular normal SABR model. The asymptotics can also be used for Monte Carlo simulations or backward integration schemes.
Linear and kernel methods for multi- and hypervariate change detection
NASA Astrophysics Data System (ADS)
Nielsen, Allan A.; Canty, Morton J.
2010-10-01
The iteratively re-weighted multivariate alteration detection (IR-MAD) algorithm may be used both for unsupervised change detection in multi- and hyperspectral remote sensing imagery as well as for automatic radiometric normalization of multi- or hypervariate multitemporal image sequences. Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA, kernel MAF and kernel MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In image analysis the Gram matrix is often prohibitively large (its size is the number of pixels in the image squared). In this case we may sub-sample the image and carry out the kernel eigenvalue analysis on a set of training data samples only. To obtain a transformed version of the entire image we then project all pixels, which we call the test data, mapped nonlinearly onto the primal eigenvectors. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization and kernel PCA/MAF/MNF transformations have been written.
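The dual (Q-mode) kernel PCA described above can be sketched as follows. This is a minimal NumPy version assuming an RBF kernel and in-sample projection only, not the IDL implementation the abstract refers to; the centering matrix trick realizes feature-space centering using the Gram matrix alone.

```python
import numpy as np

def rbf_gram(X, gamma=0.5):
    """Gram matrix of an RBF kernel: data enter via inner products only."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=0.5):
    """Kernel PCA scores of the training samples (dual / Q-mode analysis)."""
    n = len(X)
    K = rbf_gram(X, gamma)
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kc = J @ K @ J                        # center the data in feature space
    w, V = np.linalg.eigh(Kc)             # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_components]
    w, V = w[idx], V[:, idx]
    # scores = projections onto the kernel principal axes
    return V * np.sqrt(np.maximum(w, 0.0))
```

To transform a full image after training on a sub-sample, as the abstract describes, one would evaluate the kernel between every pixel and the training samples and multiply by the (scaled) eigenvectors.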
Fructan metabolism in developing wheat (Triticum aestivum L.) kernels.
Verspreet, Joran; Cimini, Sara; Vergauwen, Rudy; Dornez, Emmie; Locato, Vittoria; Le Roy, Katrien; De Gara, Laura; Van den Ende, Wim; Delcour, Jan A; Courtin, Christophe M
2013-12-01
Although fructans play a crucial role in wheat kernel development, their metabolism during kernel maturation is far from being understood. In this study, all major fructan-metabolizing enzymes together with fructan content, fructan degree of polymerization and the presence of fructan oligosaccharides were examined in developing wheat kernels (Triticum aestivum L. var. Homeros) from anthesis until maturity. Fructan accumulation occurred mainly in the first 2 weeks after anthesis, and a maximal fructan concentration of 2.5 ± 0.3 mg fructan per kernel was reached at 16 days after anthesis (DAA). Fructan synthesis was catalyzed by 1-SST (sucrose:sucrose 1-fructosyltransferase) and 6-SFT (sucrose:fructan 6-fructosyltransferase), and to a lesser extent by 1-FFT (fructan:fructan 1-fructosyltransferase). Despite the presence of 6G-kestotriose in wheat kernel extracts, the measured 6G-FFT (fructan:fructan 6G-fructosyltransferase) activity levels were low. During kernel filling, which lasted from 2 to 6 weeks after anthesis, kernel fructan content decreased from 2.5 ± 0.3 to 1.31 ± 0.12 mg fructan per kernel (42 DAA) and the average fructan degree of polymerization decreased from 7.3 ± 0.4 (14 DAA) to 4.4 ± 0.1 (42 DAA). FEH (fructan exohydrolase) reached maximal activity between 20 and 28 DAA. No fructan-metabolizing enzyme activities were registered during the final phase of kernel maturation, and fructan content and structure remained unchanged. This study provides insight into the complex metabolism of fructans during wheat kernel development and relates fructan turnover to the general phases of kernel development.
Aligning Biomolecular Networks Using Modular Graph Kernels
NASA Astrophysics Data System (ADS)
Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant
Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.
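Graph kernels of the kind used as scoring functions above compare two graphs through walks on their direct product. The abstract does not say which kernels BiNA uses, so the sketch below shows one classical choice, the geometric random-walk kernel, on small dense adjacency matrices; it converges only when lam is below the reciprocal of the product graph's spectral radius.

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.1):
    """Geometric random-walk kernel between graphs with adjacency A1, A2:
    counts common walks of all lengths, weighted by lam**length, via
    k = 1^T (I - lam * (A1 (x) A2))^{-1} 1 on the direct-product graph."""
    Ax = np.kron(A1, A2)                  # adjacency of the product graph
    n = Ax.shape[0]
    ones = np.ones(n)
    return ones @ np.linalg.solve(np.eye(n) - lam * Ax, ones)
```

For the large protein-protein interaction networks in the study, dense Kronecker products are infeasible; practical implementations use sparsity, low-rank structure, or the subgraph decomposition the abstract describes.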
Delimiting Areas of Endemism through Kernel Interpolation
Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.
2015-01-01
We propose a new approach for identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distribution of species through a kernel interpolation of centroids of species distribution and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
Pareto-path multitask multiple kernel learning.
Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C
2015-01-01
A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches. PMID:25532155
Scientific Computing Kernels on the Cell Processor
Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine
2007-04-04
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
Stable Local Volatility Calibration Using Kernel Splines
NASA Astrophysics Data System (ADS)
Coleman, Thomas F.; Li, Yuying; Wang, Cheng
2010-09-01
We propose an optimization formulation using the L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines, and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
Transcriptome analysis of Ginkgo biloba kernels
He, Bing; Gu, Yincong; Xu, Meng; Wang, Jianwen; Cao, Fuliang; Xu, Li-an
2015-01-01
Ginkgo biloba is a dioecious species native to China with medicinally and phylogenetically important characteristics; however, genomic resources for this species are limited. In this study, we performed the first transcriptome sequencing for Ginkgo kernels at five time points using Illumina paired-end sequencing. Approximately 25.08-Gb clean reads were obtained, and 68,547 unigenes with an average length of 870 bp were generated by de novo assembly. Of these unigenes, 29,987 (43.74%) were annotated in publicly available plant protein database. A total of 3,869 genes were identified as significantly differentially expressed, and enrichment analysis was conducted at different time points. Furthermore, metabolic pathway analysis revealed that 66 unigenes were responsible for terpenoid backbone biosynthesis, with up to 12 up-regulated unigenes involved in the biosynthesis of ginkgolide and bilobalide. Differential gene expression analysis together with real-time PCR experiments indicated that the synthesis of bilobalide may have interfered with the ginkgolide synthesis process in the kernel. These data can remarkably expand the existing transcriptome resources of Ginkgo, and provide a valuable platform to reveal more on developmental and metabolic mechanisms of this species. PMID:26500663
Technology Transfer Automated Retrieval System (TEKTRAN)
Maize kernel density impacts milling quality of the grain due to kernel hardness. Harder kernels are correlated with higher test weight and are more resistant to breakage during harvest and transport. Softer kernels, in addition to being susceptible to mechanical damage, are also prone to pathogen ...
Comparison of Kernel Equating and Item Response Theory Equating Methods
ERIC Educational Resources Information Center
Meng, Yu
2012-01-01
The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…
Evidence-based kernels: fundamental units of behavioral influence.
Embry, Dennis D; Biglan, Anthony
2008-09-01
This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior.
Evidence-based Kernels: Fundamental Units of Behavioral Influence
Biglan, Anthony
2008-01-01
This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of...
Evidence-Based Kernels: Fundamental Units of Behavioral Influence
ERIC Educational Resources Information Center
Embry, Dennis D.; Biglan, Anthony
2008-01-01
This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of…
Optimal Bandwidth Selection in Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Häggström, Jenny; Wiberg, Marie
2014-01-01
The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…
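The role of the bandwidth in kernel equating can be illustrated by its continuization step: the discrete score distribution is smoothed into a density by placing a Gaussian kernel of bandwidth h at each score point. The sketch below is a simplified version of that step (full kernel equating also rescales to preserve the discrete mean and variance, which is omitted here), not the penalty or double-smoothing selectors themselves.

```python
import numpy as np

def continuize(scores, probs, h, x):
    """Gaussian-kernel continuization of a discrete score distribution:
    f(x) = sum_j p_j * Normal(x; score_j, h^2). Larger h gives a smoother,
    flatter density; smaller h keeps the discrete spikes."""
    x = np.asarray(x, dtype=float)[:, None]
    comps = (np.exp(-0.5 * ((x - scores[None, :]) / h) ** 2)
             / (h * np.sqrt(2.0 * np.pi)))
    return (comps * probs[None, :]).sum(axis=1)
```

Bandwidth selectors such as the penalty method compare this density (and its derivative) against the observed score probabilities and pick the h that best trades off fidelity and smoothness.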
Sugar uptake into kernels of tunicate tassel-seed maize
Thomas, P.A.; Felker, F.C.; Crawford, C.G.
1990-05-01
A maize (Zea mays L.) strain expressing both the tassel-seed (Ts-5) and tunicate (Tu) characters was developed which produces glume-covered kernels on the tassel, often borne on 7-10 mm pedicels. Vigorous plants produce up to 100 such kernels interspersed with additional sessile kernels. This floral unit provides a potentially valuable experimental system for studying sugar uptake into developing maize seeds. When detached kernels (with glumes and pedicel intact) are placed in incubation solution, fluid flows up the pedicel and into the glumes, entering the pedicel apoplast near the kernel base. The unusual anatomical features of this maize strain permit experimental access to the pedicel apoplast with much less possibility of kernel base tissue damage than with kernels excised from the cob. [¹⁴C]Fructose incorporation into soluble and insoluble fractions of endosperm increased for 8 days. Endosperm uptake of sucrose, fructose, and D-glucose was significantly greater than that of L-glucose. Fructose uptake was significantly inhibited by CCCP, DNP, and PCMBS. These results suggest the presence of an active, non-diffusion component of sugar transport in maize kernels.
Introduction to Kernel Methods: Classification of Multivariate Data
NASA Astrophysics Data System (ADS)
Fauvel, M.
2016-05-01
In this chapter, kernel methods are presented for the classification of multivariate data. An introductory example is given to illustrate the main idea of kernel methods. Emphasis is then placed on the support vector machine. Structural risk minimization is presented, and linear and non-linear SVMs are described. Finally, a full example of SVM classification is given on simulated hyperspectral data.
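The "main idea" the chapter introduces, that a kernel lets a linear algorithm draw a nonlinear boundary, can be shown with something even simpler than an SVM. The sketch below is a kernel perceptron (chosen because it fits in a few lines while exercising the same kernel trick), not the SVM covered in the chapter; it separates the XOR pattern, which no linear classifier can.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two sample vectors."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def train_kernel_perceptron(X, y, gamma=1.0, epochs=20):
    """Kernel perceptron: samples enter only through kernel evaluations,
    so the learned boundary is linear in feature space but nonlinear in
    input space. Returns the per-sample mistake counts alpha; the decision
    function is sign(sum_j alpha_j * y_j * k(x_j, x))."""
    n = len(X)
    K = np.array([[rbf(X[i], X[j], gamma) for j in range(n)] for i in range(n)])
    alpha = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            if y[i] * ((alpha * y) @ K[:, i]) <= 0:   # misclassified
                alpha[i] += 1
    return alpha
```

An SVM replaces the mistake-driven updates with the structural-risk (maximum-margin) optimization the chapter describes, but the role of the kernel is identical.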
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
Femtosecond dynamics of cluster expansion
NASA Astrophysics Data System (ADS)
Gao, Xiaohui; Wang, Xiaoming; Shim, Bonggu; Arefiev, Alexey; Tushentsov, Mikhail; Breizman, Boris; Downer, Mike
2010-03-01
Noble gas clusters irradiated by an intense ultrafast laser expand quickly and become a typical plasma on a picosecond time scale. During the expansion, the clustered plasma demonstrates unique optical properties such as strong absorption and a positive contribution to the refractive index. Here we studied cluster expansion dynamics by fs-time-resolved refractive index and absorption measurements in cluster gas jets after ionization and heating by an intense pump pulse. The refractive index measured by frequency domain interferometry (FDI) shows the transient positive peak of refractive index due to the clustered plasma. By separating it from the negative contribution of the monomer plasma, we are able to determine the cluster fraction. The absorption measured by a delayed probe shows the contribution from clusters of various sizes. The plasma resonances in the clusters explain the enhancement of the absorption in our isothermal expanding cluster model. The cluster size distribution can be determined. A complete understanding of the femtosecond dynamics of cluster expansion is essential for the accurate interpretation and control of laser-cluster experiments such as phase-matched harmonic generation in cluster media.
Saw, C K; Siekhaus, W J
2004-07-12
The thermal expansion of AuIn₂ is of great interest in soldering technology. Indium-containing solders have been used to make gold wire interconnects at low soldering temperature, and over time AuIn₂ forms between the gold wire and the solder due to the high heat of formation and the high inter-metallic diffusion of indium. Hence, the thermal expansion of the AuIn₂ alloy in comparison with that of the gold wire and the indium-containing solder is critical in determining the integrity of the connection. We present the results of x-ray diffraction measurement of the coefficient of linear expansion of AuIn₂ as well as the bulk expansion and density changes over the temperature range of 30 to 500 C.
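The quantity reported above, a coefficient of linear expansion from diffraction data, is obtained by fitting the lattice parameter against temperature. The sketch below shows that fit on synthetic numbers (the lattice parameter and coefficient used here are illustrative placeholders, not the measured AuIn₂ values).

```python
import numpy as np

def linear_expansion_coefficient(T, a):
    """Least-squares fit of a(T) = a0 * (1 + alpha * (T - T0)) to lattice
    parameters a measured at temperatures T; returns alpha in 1/degC."""
    slope, intercept = np.polyfit(T, a, 1)     # a = slope*T + intercept
    a0 = intercept + slope * T[0]              # lattice parameter at T0 = T[0]
    return slope / a0

# Synthetic data over the 30-500 C range of the study; alpha_true is made up.
T = np.linspace(30.0, 500.0, 10)
alpha_true = 2.0e-5                            # 1/degC (illustrative)
a = 6.50 * (1.0 + alpha_true * (T - T[0]))     # lattice parameter, angstrom
alpha_fit = linear_expansion_coefficient(T, a)
```

With real diffraction data the lattice parameters come from refined peak positions at each temperature, and the fit residuals indicate whether a single linear coefficient is adequate over the range.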
Accumulation of storage products in oat during kernel development.
Banaś, A; Dahlqvist, A; Debski, H; Gummeson, P O; Stymne, S
2000-12-01
Lipids, proteins and starch are the main storage products in oat seeds. As a first step in elucidating the regulatory mechanisms behind the deposition of these compounds, two different oat varieties, 'Freja' and 'Matilda', were analysed during kernel development. In both cultivars, the majority of the lipids accumulated at a very early stage of development, but Matilda accumulated about twice the amount of lipids compared to Freja. Accumulation of proteins and starch also started in the early stage of kernel development but, in contrast to lipids, continued over a considerably longer period. The high-oil variety Matilda also accumulated higher amounts of proteins than Freja. The starch content in Freja kernels was higher than in Matilda kernels, and the difference was most pronounced during the early stage of development when oil synthesis was most active. Oleosin accumulation continued during the whole period of kernel development.
Anatomically-aided PET reconstruction using the kernel method
NASA Astrophysics Data System (ADS)
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-09-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally, the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
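The kernel method represents the image as x = Kα and runs the standard ML-EM update on the coefficients α. Below is a minimal NumPy sketch of this idea with toy system and kernel matrices; the actual anatomical kernel construction and PET system model from the paper are not shown, so treat every matrix here as a placeholder.

```python
import numpy as np

def kernel_mlem(y, P, K, n_iter=50, eps=1e-12):
    """Kernelized ML-EM: image x = K @ alpha, EM update applied to alpha.

    y : measured sinogram counts, P : system matrix, K : kernel matrix
    built from anatomical features (all toy placeholders here).
    """
    PK = P @ K
    sens = PK.sum(axis=0) + eps          # sensitivity term (PK)^T 1
    alpha = np.ones(K.shape[1])
    for _ in range(n_iter):
        ybar = PK @ alpha + eps          # expected counts
        alpha *= (PK.T @ (y / ybar)) / sens
    return K @ alpha                     # reconstructed image
```

With K set to the identity this reduces to conventional ML-EM, which is a quick way to sanity-check the update.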
Direct Measurement of Wave Kernels in Time-Distance Helioseismology
NASA Technical Reports Server (NTRS)
Duvall, T. L., Jr.
2006-01-01
Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains features similar to those of a theoretical damping kernel but not to those of a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.
OSKI: A Library of Automatically Tuned Sparse Matrix Kernels
Vuduc, R; Demmel, J W; Yelick, K A
2005-07-19
The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.
PROPERTIES OF A SOLAR FLARE KERNEL OBSERVED BY HINODE AND SDO
Young, P. R.; Doschek, G. A.; Warren, H. P.; Hara, H.
2013-04-01
Flare kernels are compact features located in the solar chromosphere that are the sites of rapid heating and plasma upflow during the rise phase of flares. An example is presented from an M1.1 class flare in active region AR 11158 observed on 2011 February 16 07:44 UT, for which the location of the upflow region seen by the EUV Imaging Spectrometer (EIS) can be precisely aligned to high spatial resolution images obtained by the Atmospheric Imaging Assembly (AIA) and Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO). A string of bright flare kernels is found to be aligned with a ridge of strong magnetic field, and one kernel site is highlighted for which an upflow speed of ≈400 km s{sup -1} is measured in lines formed at 10-30 MK. The line-of-sight magnetic field strength at this location is ≈1000 G. Emission over a continuous range of temperatures down to the chromosphere is found, and the kernels have a similar morphology at all temperatures and are spatially coincident, with sizes at the resolution limit of the AIA instrument (≲400 km). For temperatures of 0.3-3.0 MK the EIS emission lines show multiple velocity components, with the dominant component becoming more blueshifted with temperature, from a redshift of 35 km s{sup -1} at 0.3 MK to a blueshift of 60 km s{sup -1} at 3.0 MK. Emission lines from 1.5-3.0 MK show a weak redshifted component at around 60-70 km s{sup -1}, implying multi-directional flows at the kernel site. Significant non-thermal broadening corresponding to velocities of ≈120 km s{sup -1} is found at 10-30 MK, and the electron density in the kernel, measured at 2 MK, is 3.4 × 10{sup 10} cm{sup -3}. Finally, the Fe XXIV {lambda}192.03/{lambda}255.11 ratio suggests that the EIS calibration has changed since launch, with the long wavelength channel less sensitive than the short wavelength channel by around a factor of two.
Bullough, B
1976-09-01
Several factors are influencing role expansion for registered nurses, among them the shortage of primary care physicians, the federal government, the physician's assistant movement, the growing complexity of acute hospital care, educational reform, and the women's liberation movement. As state licensure statutes are revised to allow for role expansion, the changing laws themselves become a factor supporting the movement.
NASA Astrophysics Data System (ADS)
Dong, Yadong; Jiao, Ziti; Zhang, Hu; Bai, Dongni; Zhang, Xiaoning; Li, Yang; He, Dandan
2016-10-01
The semi-empirical, kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model has been widely used for many aspects of remote sensing. With the development of the kernel-driven model, there is a need to further assess the performance of newly developed kernels. The use of visualization tools can facilitate the analysis of model results and the assessment of newly developed kernels. However, the current version of the kernel-driven model does not include a visualization function. In this study, a user-friendly visualization tool, named MaKeMAT, was developed specifically for the kernel-driven model. The POLDER-3 and CAR BRDF datasets were used to demonstrate the applicability of MaKeMAT. The visualization of inputted multi-angle measurements enhances understanding of multi-angle measurements and allows the choice of measurements with good representativeness. The visualization of modeling results facilitates the assessment of newly developed kernels. The study shows that the visualization tool MaKeMAT can promote the widespread application of the kernel-driven model.
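The kernel-driven model expresses reflectance as a weighted sum of an isotropic term and volumetric and geometric-optical kernels, R = f_iso + f_vol·K_vol + f_geo·K_geo, with the weights obtained by linear least squares from multi-angle measurements. A hedged NumPy sketch, assuming the kernel values (e.g. RossThick/LiSparse) have already been computed for each observation geometry:

```python
import numpy as np

def fit_kernel_driven(refl, k_vol, k_geo):
    """Fit f_iso, f_vol, f_geo in R = f_iso + f_vol*K_vol + f_geo*K_geo
    by ordinary least squares over a set of multi-angle observations.
    k_vol, k_geo hold precomputed kernel values per geometry."""
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    coef, *_ = np.linalg.lstsq(A, refl, rcond=None)
    return coef  # (f_iso, f_vol, f_geo)
```

The fitted weights are what a tool like MaKeMAT would visualize against the measurements for different candidate kernels.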
Ernst, Donald M.
1984-10-23
A specially constructed heat pipe is described for use in fluidized bed combustors. Two distinct coatings are spray coated onto a heat pipe casing constructed of low thermal expansion metal, each coating serving a different purpose. The first coating forms aluminum oxide to prevent hydrogen permeation into the heat pipe casing, and the second coating contains stabilized zirconium oxide to provide abrasion resistance while not substantially affecting the heat transfer characteristics of the system.
Privacy preserving RBF kernel support vector machine.
Li, Haoran; Xiong, Li; Ohno-Machado, Lucila; Jiang, Xiaoqian
2014-01-01
Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed, but they did not consider the characteristics of biomedical data or make full use of the available information. This often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid privacy-preserving differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs that are built from private data. Our method demonstrated performance very close to that of nonprivate SVMs trained on the private data. PMID:25013805
Kernel density estimation using graphical processing unit
NASA Astrophysics Data System (ADS)
Sunarko; Su'ud, Zaki
2015-09-01
Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution by assigning the calculations for equally-spaced node points to each scalar processor in the GPU. The numbers of particles, blocks and threads are varied to identify the most favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
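The per-node work is independent, which is why it maps naturally onto GPU threads. A NumPy sketch of the same bivariate Gaussian KDE on a grid follows; it is a small illustrative stand-in for the CUDA-C implementation, not the authors' code.

```python
import numpy as np

def kde_grid(points, xs, ys, h):
    """Bivariate Gaussian KDE evaluated on a regular grid.
    Each grid node sums kernel contributions from all particles --
    the same independent per-node work that is assigned to GPU threads."""
    gx, gy = np.meshgrid(xs, ys)              # node coordinates
    dx = gx[..., None] - points[:, 0]         # node-to-particle offsets
    dy = gy[..., None] - points[:, 1]
    w = np.exp(-(dx**2 + dy**2) / (2 * h**2))
    return w.sum(axis=-1) / (len(points) * 2 * np.pi * h**2)
```

Because every node's sum is independent, a GPU kernel simply assigns one (or a few) nodes per thread and performs the same reduction.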
Labeled Graph Kernel for Behavior Analysis.
Zhao, Ruiqi; Martinez, Aleix M
2016-08-01
Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data.
Hua, Wen-Yu; Ghosh, Debashis
2015-09-01
Associating genetic markers with a multidimensional phenotype is an important yet challenging problem. In this work, we establish the equivalence between two popular methods: kernel-machine regression (KMR), and kernel distance covariance (KDC). KMR is a semiparametric regression framework that models covariate effects parametrically and genetic markers non-parametrically, while KDC represents a class of methods that include distance covariance (DC) and the Hilbert-Schmidt independence criterion (HSIC), which are nonparametric tests of independence. We show that the equivalence between the score test of KMR and the KDC statistic under certain conditions can lead to a novel generalization of the KDC test that incorporates covariates. Our contributions are 3-fold: (1) establishing the equivalence between KMR and KDC; (2) showing that the principles of KMR can be applied to the interpretation of KDC; (3) the development of a broader class of KDC statistics, where the class members are statistics corresponding to different kernel combinations. Finally, we perform simulation studies and an analysis of real data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. The ADNI analysis suggests that SNPs of FLJ16124 exhibit pairwise interaction effects that are strongly correlated with the changes of brain region volumes. PMID:25939365
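The HSIC statistic mentioned above is computed from centered kernel Gram matrices. A minimal sketch using the biased estimator trace(KHLH)/(n-1)² with Gaussian kernels; the kernel choice and bandwidth are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def hsic(X, Y, sigma=1.0):
    """Biased HSIC estimate with Gaussian kernels:
    HSIC = trace(K H L H) / (n-1)^2, where H = I - 11^T / n centers
    the Gram matrices K (on X) and L (on Y)."""
    def gram(Z):
        d2 = ((Z[:, None, :] - Z[None, :, :])**2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = gram(X), gram(Y)
    return np.trace(K @ H @ L @ H) / (n - 1)**2
```

A larger value indicates stronger dependence between the two samples; under independence the statistic concentrates near zero.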
Probability-confidence-kernel-based localized multiple kernel learning with lp norm.
Han, Yina; Liu, Guizhong
2012-06-01
Localized multiple kernel learning (LMKL) is an attractive strategy for combining multiple heterogeneous features in terms of their discriminative power for each individual sample. However, models that excessively fit a specific sample hinder extension to unseen data, while a more general form is often insufficient for diverse locality characterization. Hence, both learning sample-specific local models for each training datum and extending the learned models to unseen test data should be equally addressed in designing an LMKL algorithm. In this paper, for an integrative solution, we propose a probability confidence kernel (PCK), which measures per-sample similarity with respect to a probabilistic-prediction-based class attribute: the class attribute similarity complements the spatial-similarity-based base kernels for more reasonable locality characterization, and the predefined form of the involved class probability density function facilitates the extension to the whole input space and ensures its statistical meaning. Incorporating PCK into a support-vector-machine-based LMKL framework, we propose a new PCK-LMKL with an arbitrary l(p)-norm constraint implied in the definition of PCKs, where both the parameters in PCK and the final classifier can be efficiently optimized in a joint manner. Evaluations of PCK-LMKL on both benchmark machine learning data sets (ten University of California Irvine (UCI) data sets) and challenging computer vision data sets (15-scene data set and Caltech-101 data set) have shown it to achieve state-of-the-art performance.
Oil extraction from sheanut (Vitellaria paradoxa Gaertn C.F.) kernels assisted by microwaves.
Nde, Divine B; Boldor, Dorin; Astete, Carlos; Muley, Pranjali; Xu, Zhimin
2016-03-01
Shea butter is in high demand in cosmetics, pharmaceuticals, chocolates and biodiesel formulations. Microwave assisted extraction (MAE) of butter from sheanut kernels was carried out using Doehlert's experimental design. The factors studied were microwave heating time, temperature and solvent/solute ratio, while the responses were the quantity of oil extracted and the acid number. Second order models were established to describe the influence of the experimental parameters on the responses studied. Under optimum MAE conditions of heating time 23 min, temperature 75 °C and solvent/solute ratio 4:1, more than 88 % of the oil, with a free fatty acid (FFA) value less than 2, was extracted, compared to the 10 h and solvent/solute ratio of 10:1 required for Soxhlet extraction. Scanning electron microscopy was used to elucidate the effect of microwave heating on the kernels' microstructure. Substantial reduction in extraction time and volumes of solvent used, and oil of suitable quality, are the main benefits derived from the MAE process. PMID:27570267
Effects of sample size on KERNEL home range estimates
Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.
1999-01-01
Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
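LSCV selects the smoothing parameter h by minimizing an estimate of the integrated squared error of the kernel density estimate. For Gaussian kernels the score can be written in closed form from pairwise distances. The following is a simplified bivariate sketch of that score, not the estimator code used in the study:

```python
import numpy as np

def lscv_score(pts, h):
    """LSCV score for a fixed d-dim Gaussian kernel density estimate:
    integral of fhat^2 minus twice the leave-one-out term."""
    n, d = pts.shape
    d2 = ((pts[:, None, :] - pts[None, :, :])**2).sum(-1)
    def gauss_sum(s):   # sum over all pairs of d-dim Gaussians, std s
        return np.exp(-d2 / (2 * s**2)).sum() / (2 * np.pi * s**2)**(d / 2)
    int_f2 = gauss_sum(np.sqrt(2) * h) / n**2          # integral of fhat^2
    # leave-one-out sum: drop the n diagonal (i == j) kernel terms
    loo = (gauss_sum(h) - n / (2 * np.pi * h**2)**(d / 2)) / (n * (n - 1))
    return int_f2 - 2 * loo

def lscv_bandwidth(pts, hs):
    """Pick the candidate bandwidth minimizing the LSCV score."""
    return hs[np.argmin([lscv_score(pts, h) for h in hs])]
```

Very small h is heavily penalized (the density collapses onto the points), which is how LSCV avoids the worst undersmoothing in practice.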
Gaussian kernel width optimization for sparse Bayesian learning.
Mohsenzadeh, Yalda; Sheikhzadeh, Hamid
2015-04-01
Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on the commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters. PMID:25794377
Preliminary thermal expansion screening data for tuffs
Lappin, A.R.
1980-03-01
A major variable in evaluating the potential of silicic tuffs for use in geologic disposal of heat-producing nuclear wastes is thermal expansion. Results of ambient-pressure linear expansion measurements on a group of tuffs that vary greatly in porosity and mineralogy are presented here. Thermal expansion of devitrified welded tuffs is generally linear with increasing temperature and independent of both porosity and heating rate. Mineralogic factors affecting the behavior of these tuffs are limited to the presence or absence of cristobalite and altered biotite. The presence of cristobalite results in markedly nonlinear expansion above 200°C. If biotite in biotite-bearing rocks alters even slightly to expandable clays, the behavior of these tuffs near the boiling point of water can be dominated by contraction of the expandable phase. Expansion of both high- and low-porosity tuffs containing hydrated silicic glass and/or expandable clays is complex. The behavior of these rocks appears to be completely dominated by dehydration of hydrous phases and, hence, should be critically dependent on fluid pressure. Valid extrapolation of the ambient-pressure results presented here to depths of interest for construction of a nuclear-waste repository will depend on a good understanding of the interaction of dehydration rates and fluid pressures, and of the effects of both micro- and macrofractures on the response of tuff masses.
Yao, H; Hruska, Z; Kincaid, R; Brown, R; Cleveland, T; Bhatnagar, D
2010-05-01
The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. Aflatoxin contamination in corn has been a long-standing problem plaguing the grain industry with potentially devastating consequences to corn growers. In this study, aflatoxin-contaminated corn kernels were produced through artificial inoculation of corn ears in the field with toxigenic A. flavus spores. The kernel fluorescence emission data were taken with a fluorescence hyperspectral imaging system when corn kernels were excited with ultraviolet light. Raw fluorescence image data were preprocessed and regions of interest in each image were created for all kernels. The regions of interest were used to extract spectral signatures and statistical information. The aflatoxin contamination level of single corn kernels was then chemically measured using affinity column chromatography. A fluorescence peak shift phenomenon was noted among different groups of kernels with different aflatoxin contamination levels. The fluorescence peak shift was found to move more toward the longer wavelengths in the blue region for the highly contaminated kernels and toward the shorter wavelengths for the clean kernels. Highly contaminated kernels were also found to have a lower fluorescence peak magnitude compared with the less contaminated kernels. It was also noted that a general negative correlation exists between measured aflatoxin and the fluorescence image bands in the blue and green regions. The coefficient of determination, r(2), was 0.72 for the multiple linear regression model. The multivariate analysis of variance found that the fluorescence means of four aflatoxin groups, <1, 1-20, 20-100, and ≥100 ng g(-1) (parts per billion), were significantly different from each other at the 0.01 level of alpha. Classification accuracy under a two-class schema ranged from 0.84 to
Microscale Regenerative Heat Exchanger
NASA Technical Reports Server (NTRS)
Moran, Matthew E.; Stelter, Stephan; Stelter, Manfred
2006-01-01
The device described herein is designed primarily for use as a regenerative heat exchanger in a miniature Stirling engine or Stirling-cycle heat pump. A regenerative heat exchanger (sometimes called, simply, a "regenerator" in the Stirling-engine art) is basically a thermal capacitor: Its role in the Stirling cycle is to alternately accept heat from, then deliver heat to, an oscillating flow of a working fluid between compression and expansion volumes, without introducing an excessive pressure drop. These volumes are at different temperatures, and conduction of heat between these volumes is undesirable because it reduces the energy-conversion efficiency of the Stirling cycle.
Bridging the gap between the KERNEL and RT-11
Hendra, R.G.
1981-06-01
A software package is proposed to allow users of the PL-11 language, and the LSI-11 KERNEL in general, to use their PL-11 programs under RT-11. Further, some general purpose extensions to the KERNEL are proposed that facilitate number conversions and string manipulations. A Floating Point Package of procedures to allow full use of the hardware floating point capability of the LSI-11 computers is proposed. Extensions to the KERNEL that allow a user to read, write and delete disc files in the manner of RT-11 are also proposed. A device directory listing routine is also included.
Spectrophotometric method for determination of phosphine residues in cashew kernels.
Rangaswamy, J R
1988-01-01
A spectrophotometric method reported for determination of phosphine (PH3) residues in wheat has been extended for determination of these residues in cashew kernels. Unlike the spectrum for wheat, the spectrum of PH3 residue-AgNO3 chromophore from cashew kernels does not show an absorption maximum at 400 nm; nevertheless, reading the absorbance at 400 nm afforded good recoveries of 90-98%. No interference occurred from crop materials, and crop controls showed low absorbance; the method can be applied for determinations as low as 0.01 ppm PH3 residue in cashew kernels.
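Reading a residue level off a standard curve in the linear (Beer-Lambert) regime is a simple calibration step. The sketch below is hypothetical: the standards, the linear-calibration assumption, and the function name are illustrative, not taken from the paper.

```python
import numpy as np

def ph3_from_absorbance(a400, std_abs, std_conc):
    """Hypothetical linear calibration: estimate PH3 residue (ppm) from
    absorbance at 400 nm against a set of standards, assuming the
    chromophore response stays in the Beer-Lambert (linear) regime."""
    slope, intercept = np.polyfit(std_abs, std_conc, 1)
    return slope * a400 + intercept
```

In practice a matrix blank (crop control) would be subtracted first, since the paper notes the crop controls show low but nonzero absorbance.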
Initial-state splitting kernels in cold nuclear matter
NASA Astrophysics Data System (ADS)
Ovanesyan, Grigory; Ringer, Felix; Vitev, Ivan
2016-09-01
We derive medium-induced splitting kernels for energetic partons that undergo interactions in dense QCD matter before a hard-scattering event at large momentum transfer Q2. Working in the framework of the effective theory SCETG, we compute the splitting kernels beyond the soft gluon approximation. We present numerical studies that compare our new results with previous findings. We expect the full medium-induced splitting kernels to be most relevant for the extension of initial-state cold nuclear matter energy loss phenomenology in both p+A and A+A collisions.
Kernel simplex growing algorithm for hyperspectral endmember extraction
NASA Astrophysics Data System (ADS)
Zhao, Liaoying; Zheng, Junpeng; Li, Xiaorun; Wang, Lijiao
2014-01-01
In order to effectively extract endmembers for hyperspectral imagery where linear mixing model may not be appropriate due to multiple scattering effects, this paper extends the simplex growing algorithm (SGA) to its kernel version. A new simplex volume formula without dimension reduction is used in SGA to form a new simplex growing algorithm (NSGA). The original data are nonlinearly mapped into a high-dimensional space where the scatters can be ignored. To avoid determining complex nonlinear mapping, a kernel function is used to extend the NSGA to kernel NSGA (KNSGA). Experimental results of simulated and real data prove that the proposed KNSGA approach outperforms SGA and NSGA.
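The kernel trick underlying KNSGA replaces the explicit nonlinear mapping φ with kernel evaluations; for instance, squared distances in the implicit feature space follow directly from the kernel via ||φ(x) − φ(y)||² = k(x,x) − 2k(x,y) + k(y,y). A minimal sketch with an RBF kernel, illustrating the trick itself rather than the KNSGA simplex volume formula:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two vectors."""
    return np.exp(-gamma * np.sum((x - y)**2))

def feature_space_dist2(x, y, k=rbf):
    """Squared distance in the implicit feature space, computed without
    ever forming phi: ||phi(x)-phi(y)||^2 = k(x,x) - 2k(x,y) + k(y,y)."""
    return k(x, x) - 2 * k(x, y) + k(y, y)
```

Any quantity expressible through inner products, such as the simplex volumes grown by the algorithm, can be kernelized the same way.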
Multitasking kernel for the C and Fortran programming languages
Brooks, E.D. III
1984-09-01
A multitasking kernel for the C and Fortran programming languages which runs on the Unix operating system is presented. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the coding, debugging and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessors. The performance evaluation features require no changes in the source code of the application and are implemented as a set of compile and run time options in the kernel.
Monte Carlo Code System for Electron (Positron) Dose Kernel Calculations.
1999-05-12
Version 00 KERNEL performs dose kernel calculations for an electron (positron) isotropic point source in an infinite homogeneous medium. First, the auxiliary code PRELIM is used to prepare cross section data for the considered medium. Then the KERNEL code simulates the transport of electrons and bremsstrahlung photons through the medium until all particles reach their cutoff energies. The deposited energy is scored in concentric spherical shells at a radial distance ranging from zero to twice the source particle range.
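The scoring geometry can be illustrated with a toy tally: deposit each history's energy at some radius from the point source and histogram it into concentric spherical shells. This sketch replaces the actual condensed-history electron/photon transport with a made-up radial distribution, so only the shell bookkeeping is meaningful:

```python
import numpy as np

def shell_tally(n_particles, r_max, n_shells, rng):
    """Toy scoring: tally deposited energy into concentric spherical
    shells of equal thickness out to r_max. The radius and energy of
    each deposition are stand-ins for real transport results."""
    edges = np.linspace(0.0, r_max, n_shells + 1)
    r = r_max * rng.random(n_particles)**(1 / 3)   # uniform in volume (toy)
    e_dep = rng.exponential(1.0, n_particles)      # toy deposited energies
    tally, _ = np.histogram(r, bins=edges, weights=e_dep)
    return edges, tally
```

Dividing each shell's tally by its volume (and the number of histories) would give the dose kernel as a function of radius.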
Nelson, E.A.; Christensen, E.J.; Mackey, H.E.; Sharitz, R.R.; Jensen, J.R.; Hodgson, M.E.
1984-02-01
Since 1954, cooling water discharges from K Reactor (mean flow 370 cfs at 59 C) to Pen Branch have altered vegetation and deposited sediment in the Savannah River Swamp, forming the Pen Branch delta. Currently, the delta covers over 300 acres and continues to expand at a rate of about 16 acres/yr. Examination of delta expansion can provide important information on environmental impacts to wetlands exposed to elevated temperature and flow conditions. To assess the current status and predict future expansion of the Pen Branch delta, historic aerial photographs were analyzed using both basic photo interpretation and computer techniques to provide the following information: (1) past and current expansion rates; (2) location and changes of impacted areas; (3) total acreage presently affected. Delta acreage changes were then compared to historic reactor discharge temperature and flow data to see if expansion rate variations could be related to reactor operations.
Weakly relativistic plasma expansion
Fermous, Rachid; Djebli, Mourad
2015-04-15
Plasma expansion is an important physical process that takes place in laser interactions with solid targets. Within a self-similar model for the hydrodynamical multi-fluid equations, we investigated the expansion of both dense and under-dense plasmas. The weakly relativistic electrons are produced by ultra-intense laser pulses, while ions are supposed to be in a non-relativistic regime. Numerical investigations have shown that relativistic effects are important for under-dense plasma and are characterized by a finite ion front velocity. Dense plasma expansion is found to be governed mainly by quantum contributions in the fluid equations that originate from the degenerate pressure in addition to the nonlinear contributions from exchange and correlation potentials. The quantum degeneracy parameter profile provides clues to set the limit between under-dense and dense relativistic plasma expansions at a given density and temperature.
Air expansion in a water rocket
NASA Astrophysics Data System (ADS)
Romanelli, Alejandro; Bove, Italo; González Madina, Federico
2013-10-01
We study the thermodynamics of a water rocket in the thrust phase, taking into account the expansion of the air with water vapor, vapor condensation, and the corresponding latent heat. We set up a simple experimental device with a stationary bottle and verify that the gas expansion in the bottle is well approximated by a polytropic process PV^β = constant, where the parameter β depends on the initial conditions. We find an analytical expression for β that depends only on the thermodynamic initial conditions and is in good agreement with the experimental results.
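The polytropic fit described above is easy to reproduce numerically: taking logarithms turns PV^β = constant into a straight line of slope -β, so β can be recovered by linear least squares. A minimal sketch with synthetic pressure-volume data standing in for the bottle measurements (the adiabatic value β = 1.4 below is just the test case, not the paper's result):

```python
import numpy as np

def fit_polytropic_exponent(P, V):
    """Estimate beta in P * V**beta = const by least squares on
    log P = log C - beta * log V."""
    logP, logV = np.log(P), np.log(V)
    # slope of log P vs log V is -beta
    return -np.polyfit(logV, logP, 1)[0]

# synthetic data for an ideal adiabatic expansion of air (beta = 1.4)
V = np.linspace(1.0, 2.0, 50)
P = 101325.0 * V ** -1.4
print(round(fit_polytropic_exponent(P, V), 3))  # 1.4
```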
NASA Technical Reports Server (NTRS)
Widener, Edward L.
1992-01-01
The objective is to introduce some concepts of thermodynamics in existing heat-treating experiments using available items. The specific objectives are to define the thermal properties of materials and to visualize expansivity, conductivity, heat capacity, and the melting point of common metals. The experimental procedures are described.
Kernel-based Linux emulation for Plan 9.
Minnich, Ronald G.
2010-09-01
CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss CNKemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.
Inheritance of Kernel Color in Corn: Explanations and Investigations.
ERIC Educational Resources Information Center
Ford, Rosemary H.
2000-01-01
Offers a new perspective on traditional problems in genetics on kernel color in corn, including information about genetic regulation, metabolic pathways, and evolution of genes. (Contains 15 references.) (ASK)
Intelligent classification methods of grain kernels using computer vision analysis
NASA Astrophysics Data System (ADS)
Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo
2011-06-01
In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.
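The pipeline above (color/morphological features → linear discriminant analysis → back-propagation network, with a 70/20/10 split) can be sketched with scikit-learn. The data below is random stand-in features, not the paper's grain images, and the layer sizes are illustrative:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2800, 17))        # 2800 kernels, 7 color + 10 morphological features
y = rng.integers(0, 7, size=2800)      # 7 grain classes
X += y[:, None] * 0.5                  # class-dependent shift so classes are separable

# LDA reduces to at most n_classes - 1 = 6 discriminant components
X_lda = LinearDiscriminantAnalysis(n_components=6).fit_transform(X, y)

# 70% train, 20% validation, 10% test
X_tr, X_rest, y_tr, y_rest = train_test_split(X_lda, y, train_size=0.7, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=1/3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```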
Isolation and purification of D-mannose from palm kernel.
Zhang, Tao; Pan, Ziguo; Qian, Chao; Chen, Xinzhi
2009-09-01
An economically viable procedure for the isolation and purification of D-mannose from palm kernel was developed in this research. The palm kernel was catalytically hydrolyzed with sulfuric acid at 100 °C and then fermented by mannan-degrading enzymes. The solution after fermentation underwent filtration in a silica gel column, desalination by ion-exchange resin, and crystallization in ethanol to produce pure D-mannose in a total yield of 48.4% (based on the weight of the palm kernel). Different enzymes were investigated, and the results indicated that endo-β-mannanase was the best enzyme to promote the hydrolysis of the oligosaccharides isolated from the palm kernel. The pure D-mannose sample was characterized by FTIR, ¹H NMR, and ¹³C NMR spectra.
A kernel adaptive algorithm for quaternion-valued inputs.
Paul, Thomas K; Ogunfunmi, Tokunbo
2015-10-01
The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefits of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data are illustrated with simulations. PMID:25594982
The Dynamic Kernel Scheduler-Part 1
NASA Astrophysics Data System (ADS)
Adelmann, Andreas; Locans, Uldis; Suter, Andreas
2016-10-01
Emerging processor architectures such as GPUs and Intel MICs provide a huge performance potential for high performance computing. However, developing software that uses these hardware accelerators introduces additional challenges for the developer. These challenges may include exposing increased parallelism, handling different hardware designs, and using multiple development frameworks in order to utilise devices from different vendors. The Dynamic Kernel Scheduler (DKS) is being developed in order to provide a software layer between the host application and different hardware accelerators. DKS handles the communication between the host and the device, schedules task execution, and provides a library of built-in algorithms. Algorithms available in the DKS library will be written in CUDA, OpenCL, and OpenMP. Depending on the available hardware, the DKS can select the appropriate implementation of the algorithm. The first DKS version was created using CUDA for the Nvidia GPUs and OpenMP for Intel MIC. DKS was further integrated into OPAL (Object-oriented Parallel Accelerator Library) in order to speed up a parallel FFT-based Poisson solver and Monte Carlo simulations for particle-matter interaction used for proton therapy degrader modelling. DKS was also used together with Minuit2 for parameter fitting, where χ² and max-log-likelihood functions were offloaded to the hardware accelerator. The concepts of the DKS, first results, and plans for the future will be shown in this paper.
Protoribosome by quantum kernel energy method.
Huang, Lulu; Krupkin, Miri; Bashan, Anat; Yonath, Ada; Massa, Lou
2013-09-10
Experimental evidence suggests the existence of an RNA molecular prebiotic entity, called by us the "protoribosome," which may have evolved in the RNA world before the evolution of the genetic code and proteins. This vestige of the RNA world, which possesses all of the capabilities required for peptide bond formation, seems to be still functioning in the heart of all contemporary ribosomes. Within the modern ribosome this remnant includes the peptidyl transferase center. Its highly conserved nucleotide sequence is suggestive of its robustness under diverse environmental conditions, and hence of its prebiotic origin. Its twofold pseudosymmetry suggests that this entity could have been a dimer of self-folding RNA units that formed a pocket within which two activated amino acids might be accommodated, similar to the binding mode of modern tRNA molecules that carry amino acids or peptidyl moieties. Using quantum mechanics and crystal coordinates, this work studies the question of whether the putative protoribosome has properties necessary to function as an evolutionary precursor to the modern ribosome. The quantum model used in the calculations is density functional theory (B3LYP/3-21G*), implemented using the kernel energy method to make the computations practical and efficient. It turns out that the necessary conditions that would characterize a practicable protoribosome--namely (i) energetic structural stability and (ii) energetically stable attachment to substrates--are both well satisfied.
Local Kernel for Brains Classification in Schizophrenia
NASA Astrophysics Data System (ADS)
Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.
In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning-by-example method is introduced by combining local measurements with a nonlinear Support Vector Machine. Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points which are characterized by local region descriptors, namely the Scale Invariant Feature Transform (SIFT). Then, matching is obtained by introducing the local kernel, for which the samples are represented by unordered sets of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed on a set of 54 patients with schizophrenia and 54 normal controls, on which regions of interest (ROIs) have been manually traced by experts. Preliminary results on the Dorso-lateral PreFrontal Cortex (DLPFC) region are promising, since a successful classification rate of up to 75% has been obtained with this technique, and the performance has improved up to 85% when the subjects have been stratified by sex.
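A kernel over unordered sets of local descriptors can be built, for instance, as the mean of all pairwise Gaussian similarities (a "sum-match" kernel), which is positive semidefinite and can be fed to an SVM as a precomputed kernel. The sketch below uses random SIFT-sized vectors and omits the paper's discriminative weighting scheme:

```python
import numpy as np

def local_match_kernel(A, B, gamma=0.5):
    """Sum-match kernel between two unordered sets of local descriptors
    (rows of A and B): mean of pairwise Gaussian similarities. A stand-in
    for matching SIFT descriptors at landmark points."""
    d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
    return float(np.mean(np.exp(-gamma * d2)))

rng = np.random.default_rng(1)
brain_a = rng.normal(size=(12, 128))   # 12 descriptors, 128-dim (SIFT-like)
brain_b = rng.normal(size=(15, 128))
# a set is always at least as similar to itself as to another set
print(local_match_kernel(brain_a, brain_a) > local_match_kernel(brain_a, brain_b))
```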
Kernel MAD Algorithm for Relative Radiometric Normalization
NASA Astrophysics Data System (ADS)
Bai, Yang; Tang, Ping; Hu, Changmiao
2016-06-01
The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments on both the linear CCA and KCCA versions of the MAD algorithm with the use of Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization; the algorithm describes the nonlinear relationship between multi-temporal images well. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.
Kernel spectral clustering with memory effect
NASA Astrophysics Data System (ADS)
Langone, Rocco; Alzate, Carlos; Suykens, Johan A. K.
2013-05-01
Evolving graphs describe many natural phenomena changing over time, such as social relationships, trade markets, metabolic networks etc. In this framework, performing community detection and analyzing the cluster evolution represents a critical task. Here we propose a new model for this purpose, where the smoothness of the clustering results over time can be considered as a valid prior knowledge. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness. The latter allows the model to cluster the current data well and to be consistent with the recent history. We also propose new model selection criteria in order to carefully choose the hyper-parameters of our model, which is a crucial issue to achieve good performance. We successfully test the model on four toy problems and on a real world network. We also compare our model with Evolutionary Spectral Clustering, which is a state-of-the-art algorithm for community detection of evolving networks, illustrating that the kernel spectral clustering with memory effect can achieve better or equal performance.
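One generic way to encode a memory effect, shown here as a sketch rather than the paper's LS-SVM formulation, is to blend the current snapshot's affinity matrix with the previous one before spectral partitioning, so that the partition cannot drift too far from the recent history:

```python
import numpy as np

def two_way_spectral(W):
    """Split a graph in two using the sign pattern of the second
    eigenvector of the symmetrically normalized affinity matrix."""
    d = W.sum(axis=1)
    Dm = np.diag(1.0 / np.sqrt(d))
    _, vecs = np.linalg.eigh(Dm @ W @ Dm)   # eigenvalues in ascending order
    return (vecs[:, -2] > 0).astype(int)

def smoothed_affinity(W_now, W_prev, nu=0.3):
    """Memory effect: blend the current affinities with the previous
    snapshot so cluster assignments evolve smoothly over time."""
    return (1 - nu) * W_now + nu * W_prev

# two communities of 5 nodes each, with a noisy current snapshot
W_prev = np.full((10, 10), 0.1)
W_prev[:5, :5] = W_prev[5:, 5:] = 0.9
rng = np.random.default_rng(0)
noise = rng.uniform(0, 0.2, (10, 10))
W_now = W_prev + (noise + noise.T) / 2
labels = two_way_spectral(smoothed_affinity(W_now, W_prev))
```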
The Weighted Super Bergman Kernels Over the Supermatrix Spaces
NASA Astrophysics Data System (ADS)
Feng, Zhiming
2015-12-01
The purpose of this paper is threefold. Firstly, using Howe duality, we obtain integral formulas of the super Schur functions with respect to the super standard Gaussian distributions. Secondly, we give explicit expressions of the super Szegö kernels and the weighted super Bergman kernels for the Cartan superdomains of type I. Thirdly, combining these results, we obtain duality relations of integrals over the unitary groups and the Cartan superdomains, and the marginal distributions of the weighted measure.
Kernel approximation for solving few-body integral equations
NASA Astrophysics Data System (ADS)
Christie, I.; Eyre, D.
1986-06-01
This paper investigates an approximate method for solving integral equations that arise in few-body problems. The method is to replace the kernel by a degenerate kernel defined on a finite dimensional subspace of piecewise Lagrange polynomials. Numerical accuracy of the method is tested by solving the two-body Lippmann-Schwinger equation with non-separable potentials, and the three-body Amado-Lovelace equation with separable two-body potentials.
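The degenerate kernel idea can be illustrated on a Fredholm equation of the second kind: interpolating the kernel in one variable on a piecewise Lagrange basis makes it separable, which turns the integral equation into a small linear system. A toy version with piecewise-linear (first-order Lagrange) interpolation, tested on a separable kernel whose exact solution is u(x) = 1.5x:

```python
import numpy as np

def solve_fredholm_degenerate(K, f, a=0.0, b=1.0, n_nodes=8, n_quad=400):
    """Solve u(x) = f(x) + int_a^b K(x,y) u(y) dy by replacing K with a
    degenerate kernel: piecewise-linear Lagrange interpolation in y.
    A toy version; the paper uses higher-order piecewise polynomials."""
    nodes = np.linspace(a, b, n_nodes)
    yq = np.linspace(a, b, n_quad)              # quadrature grid
    w = np.full(n_quad, (b - a) / (n_quad - 1))  # trapezoid weights
    w[0] *= 0.5; w[-1] *= 0.5
    # hat (piecewise-linear Lagrange) basis functions sampled on the grid
    L = np.array([np.interp(yq, nodes, np.eye(n_nodes)[j]) for j in range(n_nodes)])
    A = np.array([[np.sum(w * L[i] * K(yq, nodes[j])) for j in range(n_nodes)]
                  for i in range(n_nodes)])
    bvec = np.array([np.sum(w * L[i] * f(yq)) for i in range(n_nodes)])
    c = np.linalg.solve(np.eye(n_nodes) - A, bvec)
    return lambda x: f(x) + sum(K(x, nodes[j]) * c[j] for j in range(n_nodes))

# test problem: K(x,y) = x*y, f(x) = x, whose exact solution is u(x) = 1.5 x
u = solve_fredholm_degenerate(lambda x, y: x * y, lambda x: x)
print(u(0.5))  # ≈ 0.75
```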
Enzymatic treatment of peanut kernels to reduce allergen levels.
Yu, Jianmei; Ahmedna, Mohamed; Goktepe, Ipek; Cheng, Hsiaopo; Maleki, Soheila
2011-08-01
This study investigated the use of enzymatic treatment to reduce peanut allergens in peanut kernels as affected by processing conditions. Two major peanut allergens, Ara h 1 and Ara h 2, were used as indicators of process effectiveness. Enzymatic treatment effectively reduced Ara h 1 and Ara h 2 in roasted peanut kernels by up to 100% under optimal conditions. For instance, treatment of roasted peanut kernels with α-chymotrypsin and trypsin for 1-3 h significantly increased the solubility of peanut protein while reducing Ara h 1 and Ara h 2 in peanut kernel extracts by 100% and 98%, respectively, based on ELISA readings. Ara h 1 and Ara h 2 levels in peanut protein extracts were inversely correlated with protein solubility in roasted peanut. Blanching of kernels enhanced the effectiveness of enzyme treatment in roasted peanuts but not in raw peanuts. The optimal concentration of enzyme was determined by response surface methodology to be in the range of 0.1-0.2%. No consistent results were obtained for raw peanut kernels since Ara h 1 and Ara h 2 increased in peanut protein extracts under some treatment conditions and decreased in others. PMID:25214091
An Ensemble Approach to Building Mercer Kernels with Prior Information
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2005-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear, symmetric, positive definite mappings from the original image space to a very high, possibly infinite, dimensional feature space. We describe a new method, called Mixture Density Mercer Kernels, to learn the kernel function directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
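The mixture-density construction can be sketched as follows: fit an ensemble of Gaussian mixture models to bootstrap resamples, then define K(x, y) as the averaged inner product of component-posterior vectors. Each term is a Gram matrix of posterior vectors, so K is symmetric positive semidefinite and hence a valid Mercer kernel. This is an illustration of the idea, not the paper's AUTOBAYES-generated code:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_density_kernel(models, X, Y):
    """K(x, y) = mean over models of <p_m(.|x), p_m(.|y)>: points are
    similar when the ensemble assigns them to the same components."""
    K = np.zeros((len(X), len(Y)))
    for gmm in models:
        PX, PY = gmm.predict_proba(X), gmm.predict_proba(Y)
        K += PX @ PY.T
    return K / len(models)

rng = np.random.default_rng(0)
# two well-separated clusters of 100 points each
data = np.vstack([rng.normal(-2, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
# ensemble of 5 GMMs, each fit to a bootstrap resample
models = [GaussianMixture(2, random_state=s).fit(data[rng.integers(0, 200, 200)])
          for s in range(5)]
K = mixture_density_kernel(models, data, data)
```

Points from the same cluster (e.g. rows 0 and 1) get a much higher kernel value than points from different clusters (e.g. rows 0 and 150).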
Tomlinson, John J.
2006-04-18
A water-heating dehumidifier includes a refrigerant loop including a compressor, at least one condenser, an expansion device and an evaporator including an evaporator fan. The condenser includes a water inlet and a water outlet for flowing water therethrough or proximate thereto, or is affixed to the tank or immersed into the tank to effect water heating without flowing water. The immersed condenser design includes a self-insulated capillary tube expansion device for simplicity and high efficiency. In a water heating mode air is drawn by the evaporator fan across the evaporator to produce cooled and dehumidified air and heat taken from the air is absorbed by the refrigerant at the evaporator and is pumped to the condenser, where water is heated. When the tank of water heater is full of hot water or a humidistat set point is reached, the water-heating dehumidifier can switch to run as a dehumidifier.
Accelerating the loop expansion
Ingermanson, R.
1986-07-29
This thesis introduces a new non-perturbative technique into quantum field theory. To illustrate the method, I analyze the much-studied phi^4 theory in two dimensions. As a prelude, I first show that the Hartree approximation is easy to obtain from the calculation of the one-loop effective potential by a simple modification of the propagator that does not affect the perturbative renormalization procedure. A further modification then suggests itself, which has the same nice property, and which automatically yields a convex effective potential. I then show that both of these modifications extend naturally to higher orders in the derivative expansion of the effective action and to higher orders in the loop expansion. The net effect is to re-sum the perturbation series for the effective action as a systematic "accelerated" non-perturbative expansion. Each term in the accelerated expansion corresponds to an infinite number of terms in the original series. Each term can be computed explicitly, albeit numerically. Many numerical graphs of the various approximations to the first two terms in the derivative expansion are given. I discuss the reliability of the results and the problem of spontaneous symmetry-breaking, as well as some potential applications to more interesting field theories. 40 refs.
Optimal Electric Utility Expansion
1989-10-10
SAGE-WASP is designed to find the optimal generation expansion policy for an electrical utility system. New units can be automatically selected from a user-supplied list of expansion candidates which can include hydroelectric and pumped storage projects. The existing system is modeled. The calculational procedure takes into account user restrictions to limit generation configurations to an area of economic interest. The optimization program reports whether the restrictions acted as a constraint on the solution. All expansion configurations considered are required to pass a user-supplied reliability criterion. The discount rate and escalation rate are treated separately for each expansion candidate and for each fuel type. All expenditures are separated into local and foreign accounts, and a weighting factor can be applied to foreign expenditures.
Novel Foraminal Expansion Technique
Senturk, Salim; Ciplak, Mert; Oktenoglu, Tunc; Sasani, Mehdi; Egemen, Emrah; Yaman, Onur; Suzer, Tuncer
2016-01-01
The technique we describe was developed for cervical foraminal stenosis for cases in which a keyhole foraminotomy would not be effective. Many cervical stenosis cases are so severe that keyhole foraminotomy is not successful. However, the technique outlined in this study provides adequate enlargement of an entire cervical foraminal diameter. This study reports on a novel foraminal expansion technique. Linear drilling was performed in the middle of the facet joint. A small bone graft was placed between the divided lateral masses after distraction. A lateral mass stabilization was performed with screws and rods following the expansion procedure. A cervical foramen was linearly drilled medially to laterally, then expanded with small bone grafts, and a lateral mass instrumentation was added with surgery. The patient was well after the surgery. The novel foraminal expansion is an effective surgical method for severe foraminal stenosis. PMID:27559460
Thermal expansion in nanoresonators
NASA Astrophysics Data System (ADS)
Mancardo Viotti, Agustín; Monastra, Alejandro G.; Moreno, Mariano F.; Florencia Carusela, M.
2016-08-01
Inspired by some recent experiments and numerical works related to nanoresonators, we perform classical molecular dynamics simulations to investigate the thermal expansion and the ability of the device to act as a strain sensor assisted by thermally induced vibrations. The proposed model consists of a chain of atoms interacting anharmonically, with both ends clamped to thermal reservoirs. We analyze the thermal expansion and resonant frequency shifts as a function of temperature and the applied strain. For the transverse modes, the shift is approximately linear with strain. We also present analytical results from canonical calculations in the harmonic approximation showing that thermal expansion is uniform along the device. This prediction also holds when the system operates in a nonlinear oscillation regime at moderate and high temperatures.
Simulating non-Newtonian flows with the moving particle semi-implicit method with an SPH kernel
NASA Astrophysics Data System (ADS)
Xiang, Hao; Chen, Bin
2015-02-01
The moving particle semi-implicit (MPS) method and smoothed particle hydrodynamics (SPH) are commonly used mesh-free particle methods for free surface flows. The MPS method has superiority in incompressible flow simulation and simple programming. However, because of particle inconsistency, the crude kernel function is not accurate enough for the discretization of the divergence of the shear stress tensor when the MPS method is extended to non-Newtonian flows. This paper presents an improved MPS method with an SPH kernel to simulate non-Newtonian flows. To improve the consistency of the partial derivative, the SPH cubic spline kernel and the Taylor series expansion are combined with the MPS method. This approach is suitable for all non-Newtonian fluids that can be described with τ = μ(|γ̇|) Δ (where τ is the shear stress tensor, μ is the viscosity, |γ̇| is the shear rate, and Δ is the strain tensor), e.g., the Casson and Cross fluids. Two examples are simulated, including Newtonian Poiseuille flow and the container filling process of the Cross fluid. The results for Poiseuille flow are more accurate than those of the traditional MPS method, and different filling processes are obtained in good agreement with previous results, verifying the validity of the new algorithm. For the Cross fluid, the jet fracture length can be correlated with We^0.28 Fr^0.78 (where We is the Weber number and Fr is the Froude number).
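The SPH cubic spline kernel used above is the standard Monaghan form; in 2-D it has support radius 2h and normalization 10/(7πh²), which the sketch below checks by quadrature:

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Monaghan's SPH cubic spline kernel in 2-D: support radius 2h,
    normalization 10 / (7 * pi * h**2)."""
    sigma = 10.0 / (7.0 * np.pi * h ** 2)
    q = np.asarray(r, dtype=float) / h
    W = np.where(q <= 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
        np.where(q <= 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * W

# normalization check: the kernel should integrate to ~1 over the plane
h = 0.1
r = np.linspace(0.0, 2.0 * h, 20001)
f = cubic_spline_kernel(r, h) * 2.0 * np.pi * r   # radial area element
integral = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))  # trapezoid rule
print(round(integral, 4))  # ≈ 1.0
```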
NASA Astrophysics Data System (ADS)
Bates, Jefferson; Laricchia, Savio; Ruzsinszky, Adrienn
The Random Phase Approximation (RPA) is quickly becoming a standard method beyond semi-local Density Functional Theory that naturally incorporates weak interactions and eliminates self-interaction error. RPA is not perfect, however, and suffers from self-correlation error as well as an incorrect description of short-ranged correlation typically leading to underbinding. To improve upon RPA we introduce a short-ranged, exchange-like kernel that is one-electron self-correlation free for one and two electron systems in the high-density limit. By tuning the one free parameter in our model to recover an exact limit of the homogeneous electron gas correlation energy we obtain a non-local, energy-optimized kernel that reduces the errors of RPA for both homogeneous and inhomogeneous solids. To reduce the computational cost of the standard kernel-corrected RPA, we also implement RPA renormalized perturbation theory for extended systems, and demonstrate its capability to describe the dominant correlation effects with a low-order expansion in both metallic and non-metallic systems. Furthermore we stress that for norm-conserving implementations the accuracy of RPA and beyond RPA structural properties compared to experiment is inherently limited by the choice of pseudopotential. Current affiliation: King's College London.
Kar, Arindam; Bhattacharjee, Debotosh; Basu, Dipak Kumar; Nasipuri, Mita; Kundu, Mahantapas
2012-01-01
In this paper a nonlinear Gabor Wavelet Transform (GWT) discriminant feature extraction approach for enhanced face recognition is proposed. Firstly, the low-energized blocks from Gabor wavelet transformed images are extracted. Secondly, the nonlinear discriminating features are analyzed and extracted from the selected low-energized blocks by the generalized Kernel Discriminative Common Vector (KDCV) method. The KDCV method is extended to include the cosine kernel function in the discriminating method. The KDCV with the cosine kernel is then applied on the extracted low-energized discriminating feature vectors to obtain the real component of a complex quantity for face recognition. In order to derive positive kernel discriminative vectors, we apply only those kernel discriminative eigenvectors that are associated with nonzero eigenvalues. The feasibility of the low-energized Gabor-block-based generalized KDCV method with the cosine kernel function has been successfully tested for classification using the L1 and L2 distance measures and the cosine similarity measure on both frontal and pose-angled face recognition. Experimental results on the FRAV2D and FERET databases demonstrate the effectiveness of this new approach. PMID:23365559
Volcano clustering determination: Bivariate Gauss vs. Fisher kernels
NASA Astrophysics Data System (ADS)
Cañón-Tapia, Edgardo
2013-05-01
Underlying many studies of volcano clustering is the implicit assumption that vent distribution can be studied by using kernels originally devised for distribution in plane surfaces. Nevertheless, an important change in topology in the volcanic context is related to the distortion that is introduced when attempting to represent features found on the surface of a sphere that are being projected into a plane. This work explores the extent to which different topologies of the kernel used to study the spatial distribution of vents can introduce significant changes in the obtained density functions. To this end, a planar (Gauss) and a spherical (Fisher) kernel are mutually compared. The role of the smoothing factor in these two kernels is also explored in some detail. The results indicate that the topology of the kernel is not extremely influential, and that either type of kernel can be used to characterize a plane or a spherical distribution with exactly the same detail (provided that a suitable smoothing factor is selected in each case). It is also shown that there is a limitation on the resolution of the Fisher kernel relative to the typical separation between data that can be accurately described, because data sets with separations lower than 500 km are considered as a single cluster using this method. In contrast, the Gauss kernel can provide adequate resolutions for vent distributions at a wider range of separations. In addition, this study also shows that the numerical value of the smoothing factor (or bandwidth) of both the Gauss and Fisher kernels has neither a unique nor a direct relationship with the relevant separation among data. In order to establish the relevant distance, it is necessary to take into consideration the value of the respective smoothing factor together with a level of statistical significance at which the contributions to the probability density function will be analyzed. Based on such reference level, it is possible to create a hierarchy of
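The two kernels being compared can be sketched directly: the Fisher (von Mises-Fisher) kernel evaluates density on the unit sphere, while the Gauss kernel works on map-projected planar coordinates; in each case one parameter plays the role of the smoothing factor. The vent locations below are synthetic:

```python
import numpy as np

def fisher_kde(x, vents, kappa):
    """Kernel density at unit vector x from vent locations (rows of
    `vents`, unit vectors) using the Fisher (von Mises-Fisher) kernel;
    the concentration kappa plays the role of the smoothing factor."""
    c = kappa / (4.0 * np.pi * np.sinh(kappa))   # vMF normalizing constant
    return float(np.mean(c * np.exp(kappa * vents @ x)))

def gauss_kde(x, vents, bw):
    """Planar Gaussian kernel density, for vents already projected onto
    a map plane (rows of `vents` are 2-D coordinates); bw is the bandwidth."""
    d2 = np.sum((vents - x) ** 2, axis=1)
    return float(np.mean(np.exp(-d2 / (2 * bw ** 2)) / (2 * np.pi * bw ** 2)))

# toy field: 50 vents clustered near the north pole of the unit sphere
rng = np.random.default_rng(0)
v = rng.normal([0.0, 0.0, 5.0], 1.0, (50, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
pole, equator = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])
print(fisher_kde(pole, v, 20.0) > fisher_kde(equator, v, 20.0))  # True
```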
Bounding the heat trace of a Calabi-Yau manifold
NASA Astrophysics Data System (ADS)
Fiset, Marc-Antoine; Walcher, Johannes
2015-09-01
The SCHOK bound states that the number of marginal deformations of certain two-dimensional conformal field theories is bounded linearly from above by the number of relevant operators. In conformal field theories defined via sigma models into Calabi-Yau manifolds, relevant operators can be estimated, in the point-particle approximation, by the low-lying spectrum of the scalar Laplacian on the manifold. In the strict large volume limit, the standard asymptotic expansion of Weyl and Minakshisundaram-Pleijel diverges with the higher-order curvature invariants. We propose that it would be sufficient to find an a priori uniform bound on the trace of the heat kernel for large but finite volume. As a first step in this direction, we then study the heat trace asymptotics, as well as the actual spectrum of the scalar Laplacian, in the vicinity of a conifold singularity. The eigenfunctions can be written in terms of confluent Heun functions, the analysis of which gives evidence that regions of large curvature will not prevent the existence of a bound of this type. This is also in line with general mathematical expectations about spectral continuity for manifolds with conical singularities. A sharper version of our results could, in combination with the SCHOK bound, provide a basis for a global restriction on the dimension of the moduli space of Calabi-Yau manifolds.
Gabor-based kernel PCA with fractional power polynomial models for face recognition.
Liu, Chengjun
2004-05-01
This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power
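The paper's key device, a fractional power polynomial Gram matrix that may be indefinite, handled by keeping only the eigenvectors associated with positive eigenvalues, can be sketched as follows. The sign-preserving fractional power and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def frac_poly_kernel(X, Y, d=0.8):
    # (x . y)^d with a sign-preserving power so a fractional d is defined
    # for negative inner products (an illustrative choice)
    G = X @ Y.T
    return np.sign(G) * np.abs(G) ** d

def kernel_pca(X, d=0.8, n_components=2):
    n = X.shape[0]
    K = frac_poly_kernel(X, X, d)
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    Kc = J @ K @ J
    w, V = np.linalg.eigh(Kc)                  # eigenvalues in ascending order
    keep = w > 1e-10                           # discard non-positive eigenvalues,
    w, V = w[keep][::-1], V[:, keep][:, ::-1]  # as the abstract prescribes
    w, V = w[:n_components], V[:, :n_components]
    return Kc @ V / np.sqrt(w)                 # projected training features

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))                   # 20 synthetic "face" feature vectors
Z = kernel_pca(X, d=0.8, n_components=2)
```

Because the Gram matrix is not guaranteed positive semidefinite, the eigenvalue filter is what keeps the extracted features real-valued.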
Thermal-to-visible face recognition using multiple kernel learning
NASA Astrophysics Data System (ADS)
Hu, Shuowen; Gurram, Prudhvi; Kwon, Heesung; Chan, Alex L.
2014-06-01
Recognizing faces acquired in the thermal spectrum from a gallery of visible face images is a desired capability for the military and homeland security, especially for nighttime surveillance and intelligence gathering. However, thermal-to-visible face recognition is a highly challenging problem, due to the large modality gap between thermal and visible imaging. In this paper, we propose a thermal-to-visible face recognition approach based on multiple kernel learning (MKL) with support vector machines (SVMs). We first subdivide the face into non-overlapping spatial regions or blocks using a method based on coalitional game theory. For comparison purposes, we also investigate uniform spatial subdivisions. Following this subdivision, histogram of oriented gradients (HOG) features are extracted from each block and used to compute a kernel for each region. We apply sparse multiple kernel learning (SMKL), an MKL-based approach that learns a set of sparse kernel weights, as well as the decision function of a one-vs-all SVM classifier for each of the subjects in the gallery. We also apply equal (non-sparse) kernel weights and obtain one-vs-all SVM models for the same subjects in the gallery. Only visible images of each subject are used for MKL training, while thermal images are used as probe images during testing. With the subdivision generated by game theory, we achieved a Rank-1 identification rate of 50.7% for SMKL and 93.6% for equal kernel weighting using a multimodal dataset of 65 subjects. With uniform subdivisions, we achieved a Rank-1 identification rate of 88.3% for SMKL, but 92.7% for equal kernel weighting.
Protein fold recognition using geometric kernel data fusion
Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves
2014-01-01
Motivation: Various approaches based on features extracted from protein sequences, often combined with machine learning methods, have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. Results: We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. Availability and implementation: The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/ Contact: pooyapaydar@gmail.com or yves
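One geometry-inspired matrix mean of the kind the abstract contrasts with convex linear combinations is the log-Euclidean mean of positive semidefinite kernel matrices. The sketch below is illustrative and is not claimed to be the paper's exact formulation; the small ridge term is an assumption added for numerical safety.

```python
import numpy as np

def log_euclidean_mean(kernels, eps=1e-10):
    """Geometric (log-Euclidean) mean of symmetric PSD kernel matrices:
    expm(mean(logm(K_i))), computed via eigendecompositions."""
    logs = []
    for K in kernels:
        Ks = (K + K.T) / 2 + eps * np.eye(K.shape[0])  # symmetrize + ridge
        w, V = np.linalg.eigh(Ks)
        logs.append(V @ np.diag(np.log(w)) @ V.T)      # matrix logarithm
    L = sum(logs) / len(logs)                          # mean in log-space
    w, V = np.linalg.eigh(L)
    return V @ np.diag(np.exp(w)) @ V.T                # map back with expm

# Two synthetic PSD "feature kernels" standing in for sequence-derived kernels
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)); K1 = A @ A.T
B = rng.normal(size=(6, 6)); K2 = B @ B.T
Kfused = log_euclidean_mean([K1, K2])
```

Unlike a weighted sum, this mean multiplicatively balances the spectra of the base kernels, which is the intuition behind "geometric" fusion.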
ERIC Educational Resources Information Center
Ayoub, Ayoub B.
2006-01-01
In this article, the author takes up the special trinomial (1 + x + x^2)^n and shows that the coefficients of its expansion are entries of a Pascal-like triangle. He also shows how to calculate these entries recursively and explicitly. This article could be used in the classroom for enrichment. (Contains 1 table.)
NASA Technical Reports Server (NTRS)
1985-01-01
Under an Egyptian government contract, PADCO studied urban growth in the Nile Area, assisted by LANDSAT survey maps and measurements provided by TAC. TAC classified the raw LANDSAT data and processed it into various categories to detail urban expansion. PADCO crews spot-checked the results, and correlations were established.
For the Long Island, New Jersey, and southern New England region, one facet of marsh drowning as a result of accelerated sea level rise is the expansion of salt marsh ponds and pannes. Over the past century, marsh ponds and pannes have formed and expanded in areas of poor drainag...
Physics suggests that the interplay of momentum, continuity, and geometry in outward radial flow must produce density and concomitant pressure reductions. In other words, this flow is intrinsically auto-expansive. It has been proposed that this process is the key to understanding...
Guzek, J.C.; Lujan, R.A.
1984-01-01
Disclosed is a cooler for television cameras and other temperature-sensitive equipment. The cooler uses compressed gas which is accelerated to a high velocity by passing it through flow passageways having nozzle portions that expand the gas. This acceleration and expansion cause the gas to undergo a decrease in temperature, thereby cooling the cooler body and adjacent temperature-sensitive equipment.
Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)
NASA Astrophysics Data System (ADS)
Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.
2016-08-01
Anthraquinones (AQS) represent a group of secondary metabolic products in plants, occurring naturally in plants and microorganisms. In a previous study, we found that AQS were produced by the enzymatic browning reaction in Chinese chestnut kernels. To find out whether the non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained high amounts of AQS. Thus, we confirmed that AQS could be produced during both enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.
Kavallieratos, Nickolas G; Athanassiou, Christos G; Arthur, Frank H; Throne, James E
2012-01-01
Tests were conducted to determine whether the lesser grain borer, Rhyzopertha dominica (F.) (Coleoptera: Bostrychidae), selects rough rice (Oryza sativa L. (Poales: Poaceae)) kernels with cracked hulls for reproduction when these kernels are mixed with intact kernels. Differing amounts of kernels with cracked hulls (0, 5, 10, and 20%) of the varieties Francis and Wells were mixed with intact kernels, and the number of adult progeny emerging from intact kernels and from kernels with cracked hulls was determined. The Wells variety had previously been classified as tolerant to R. dominica, while the Francis variety was classified as moderately susceptible. Few F1 progeny were produced in Wells regardless of the percentage of kernels with cracked hulls, few of the kernels with cracked hulls had emergence holes, and little frass was produced from feeding damage. At 10 and 20% kernels with cracked hulls, progeny production, the number of emergence holes in kernels with cracked hulls, and the amount of frass were greater in Francis than in Wells. The proportion of progeny emerging from kernels with cracked hulls increased as the proportion of kernels with cracked hulls increased. The results indicate that R. dominica selects kernels with cracked hulls for reproduction.
Travel-time sensitivity kernels in long-range propagation.
Skarsoulis, E K; Cornuelle, B D; Dzieciuch, M A
2009-11-01
Wave-theoretic travel-time sensitivity kernels (TSKs) are calculated in two-dimensional (2D) and three-dimensional (3D) environments and their behavior with increasing propagation range is studied and compared to that of ray-theoretic TSKs and corresponding Fresnel-volumes. The differences between the 2D and 3D TSKs average out when horizontal or cross-range marginals are considered, which indicates that they are not important in the case of range-independent sound-speed perturbations or perturbations of large scale compared to the lateral TSK extent. With increasing range, the wave-theoretic TSKs expand in the horizontal cross-range direction, their cross-range extent being comparable to that of the corresponding free-space Fresnel zone, whereas they remain bounded in the vertical. Vertical travel-time sensitivity kernels (VTSKs)-one-dimensional kernels describing the effect of horizontally uniform sound-speed changes on travel-times-are calculated analytically using a perturbation approach, and also numerically, as horizontal marginals of the corresponding TSKs. Good agreement between analytical and numerical VTSKs, as well as between 2D and 3D VTSKs, is found. As an alternative method to obtain wave-theoretic sensitivity kernels, the parabolic approximation is used; the resulting TSKs and VTSKs are in good agreement with normal-mode results. With increasing range, the wave-theoretic VTSKs approach the corresponding ray-theoretic sensitivity kernels.
Characterization of the desiccation of wheat kernels by multivariate imaging.
Jaillais, B; Perrin, E; Mangavel, C; Bertrand, D
2011-06-01
Variations in the quality of wheat kernels can be an important problem in the cereal industry. In particular, desiccation conditions play an essential role in both the technological characteristics of the kernel and its ability to sprout. In planta desiccation constitutes a key stage in determining the functional properties of seeds. The impact of desiccation on the endosperm texture of seed is presented in this work. A simple imaging system had previously been developed to acquire multivariate images to characterize the heterogeneity of food materials. A special algorithm based on principal component analysis (PCA) was developed to process the acquired multivariate images. Wheat grains were collected at physiological maturity and subjected to two types of drying conditions that induced different kinetics of water loss. A data set containing 24 images (702 × 524 pixels) corresponding to the different desiccation stages of wheat kernels was acquired at different wavelengths and then analyzed. A comparison of the images of kernel sections highlighted changes in kernel texture as a function of the drying conditions. Slow drying led to a floury texture, whereas fast drying caused a glassy texture. The automated imaging system thus developed is sufficiently rapid and economical to enable the characterization of grain texture in large collections as a function of time and water content.
[Utilizable value of wild economic plant resource--acorn kernel].
He, R; Wang, K; Wang, Y; Xiong, T
2000-04-01
Peking White breeding hens were selected, and the true metabolizable energy (TME) method was used to evaluate the available nutritive value of acorn kernel, with maize and rice as controls. The results showed that the contents of gross energy (GE), apparent metabolizable energy (AME), true metabolizable energy (TME) and crude protein (CP) in the acorn kernel were 16.53 MJ/kg, 11.13 MJ/kg, 11.66 MJ/kg and 10.63%, respectively. The apparent and true availability of crude protein were 45.55% and 49.83%. The gross content of the 17 amino acids was 9.23%, and that of the essential and semiessential amino acids was 4.84%. The true availability of amino acids and the content of true available amino acids were 60.85% and 6.09%. The contents of tannin and hydrocyanic acid in acorn kernel were 4.55% and 0.98%. The available nutritive value of acorn kernel is similar to or slightly lower than that of maize, but slightly higher than that of rice. Acorn kernel is a wild economic plant resource worth exploiting and utilizing, but it contains relatively high levels of tannin and hydrocyanic acid. PMID:11767593
Aleurone cell identity is suppressed following connation in maize kernels.
Geisler-Lee, Jane; Gallie, Daniel R
2005-09-01
Expression of the cytokinin-synthesizing isopentenyl transferase enzyme under the control of the Arabidopsis (Arabidopsis thaliana) SAG12 senescence-inducible promoter reverses the normal abortion of the lower floret from a maize (Zea mays) spikelet. Following pollination, the upper and lower floret pistils fuse, producing a connated kernel with two genetically distinct embryos and the endosperms fused along their abgerminal face. Therefore, ectopic synthesis of cytokinin was used to position two independent endosperms within a connated kernel to determine how the fused endosperm would affect the development of the two aleurone layers along the fusion plane. Examination of the connated kernel revealed that aleurone cells were present for only a short distance along the fusion plane whereas starchy endosperm cells were present along most of the remainder of the fusion plane, suggesting that aleurone development is suppressed when positioned between independent starchy endosperms. Sporadic aleurone cells along the fusion plane were observed and may have arisen from late or imperfect fusion of the endosperms of the connated kernel, supporting the observation that a peripheral position at the surface of the endosperm and not proximity to maternal tissues such as the testa and pericarp are important for aleurone development. Aleurone mosaicism was observed in the crown region of nonconnated SAG12-isopentenyl transferase kernels, suggesting that cytokinin can also affect aleurone development.
Kernel Methods for Mining Instance Data in Ontologies
NASA Astrophysics Data System (ADS)
Bloehdorn, Stephan; Sure, York
The amount of ontologies and metadata available on the Web is constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable to directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by decomposing the kernel computation into specialized kernels for selected characteristics of an ontology, which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data show promising results and the usefulness of our approach.
Insights from Classifying Visual Concepts with Multiple Kernel Learning
Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki
2012-01-01
Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques make it possible to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted-sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) and further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (accessed 2012 Jun 25). PMID:22936970
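The baseline the abstract discusses, a weighted linear combination of base kernel matrices where uniform weights recover the unweighted sum kernel and 1-norm regularization drives weights to zero, can be sketched minimally. The base kernels and weight vectors below are synthetic placeholders.

```python
import numpy as np

def combine_kernels(kernels, weights):
    """Convex combination of base kernel matrices (the MKL combination).
    Uniform weights recover the unweighted sum-kernel baseline."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalize to a convex combination
    return sum(w * K for w, K in zip(weights, kernels))

# Three synthetic PSD base kernels standing in for image-feature kernels
rng = np.random.default_rng(2)
Ks = []
for _ in range(3):
    A = rng.normal(size=(8, 8))
    Ks.append(A @ A.T)

K_uniform = combine_kernels(Ks, [1, 1, 1])     # sum-kernel baseline
K_sparse = combine_kernels(Ks, [1, 0, 0])      # an l1-style sparse selection
```

A convex combination of PSD matrices is again PSD, so the combined matrix remains a valid kernel for the downstream SVM.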
Kernel Manifold Alignment for Domain Adaptation.
Tuia, Devis; Camps-Valls, Gustau
2016-01-01
The wealth of sensory data coming from different modalities has opened numerous opportunities for data analysis. The data are of increasing volume, complexity and dimensionality, thus calling for new methodological innovations towards multimodal data processing. However, multimodal architectures must rely on models able to adapt to changes in the data distribution. Differences in the density functions can be due to changes in acquisition conditions (pose, illumination), sensor characteristics (number of channels, resolution) or different views (e.g. street-level vs. aerial views of the same building). We call these different acquisition modes domains, and refer to the adaptation problem as domain adaptation. In this paper, instead of adapting the trained models themselves, we focus on finding mappings of the data sources into a common, semantically meaningful representation domain. This field of manifold alignment extends traditional techniques in statistics such as canonical correlation analysis (CCA) to deal with nonlinear adaptation and possibly non-corresponding data pairs between the domains. We introduce a kernel method for manifold alignment (KEMA) that can match an arbitrary number of data sources without needing corresponding pairs, just a few labeled examples in all domains. KEMA has interesting properties: 1) it generalizes other manifold alignment methods; 2) it can align manifolds of very different complexities, performing a discriminative alignment that preserves each manifold's inner structure; 3) it can define a domain-specific metric to cope with multimodal specificities; 4) it can align data spaces of different dimensionality; 5) it is robust to strong nonlinear feature deformations; and 6) it is closed-form invertible, which allows transfer across domains and data synthesis. To the authors' knowledge this is the first method addressing all these important issues at once. We also present a reduced-rank version of KEMA for computational
Digestibility of solvent-treated Jatropha curcas kernel by broiler chickens in Senegal.
Nesseim, Thierry Daniel Tamsir; Dieng, Abdoulaye; Mergeai, Guy; Ndiaye, Saliou; Hornick, Jean-Luc
2015-12-01
Jatropha curcas is a drought-resistant shrub belonging to the Euphorbiaceae family. The kernel contains approximately 60 % lipid in dry matter, and the meal obtained after oil extraction could be an exceptional source of protein for family poultry farming, in the absence of curcin and, especially, of some partially lipophilic diterpene derivatives, the phorbol esters. The nutrient digestibility of J. curcas kernel meal (JKM), obtained after partial physicochemical deoiling, was thus evaluated in broiler chickens. Twenty broiler chickens, 6 weeks old, were maintained in individual metabolic cages and divided into four groups of five animals, according to a 4 × 4 Latin square design where deoiled JKM was incorporated into ground corn at 0, 4, 8, and 12 % levels (diets 0, 4, 8, and 12 J), allowing measurement of nutrient digestibility by the differential method. The dry matter (DM) and organic matter (OM) digestibility of the diets was affected to a low extent by JKM (85 and 86 % in 0 J and 81 % in 12 J, respectively), such that the DM and OM digestibility of JKM was estimated to be close to 50 %. The ether extract (EE) digestibility of JKM remained high, at about 90 %, while crude protein (CP) and crude fiber (CF) digestibility were largely impacted by JKM, with values close to 40 % at the highest levels of incorporation. J. curcas kernel presents various nutrient digestibilities but has adverse effects on the CP and CF digestibility of the diet. The effects of an additional heat or biological treatment on JKM remain to be assessed. PMID:26255184
Thermal Expansion of Vacuum Plasma Sprayed Coatings
NASA Technical Reports Server (NTRS)
Raj, S V.; Palczer, A. R.
2010-01-01
Metallic Cu-8%Cr, Cu-26%Cr, Cu-8%Cr-1%Al, NiAl and NiCrAlY monolithic coatings were fabricated by vacuum plasma spray deposition processes for thermal expansion property measurements between 293 and 1223 K. The corrected thermal expansion, ΔL/L0, varies with the absolute temperature, T, as ΔL/L0 = A(T - 293)^3 + B(T - 293)^2 + C(T - 293) + D, where A, B, C and D are thermal regression constants. Excellent reproducibility was observed for all of the coatings except for data obtained on the Cu-8%Cr and Cu-26%Cr coatings in the first heat-up cycle, which deviated from those determined in the subsequent cycles. This deviation is attributed to residual stresses developed during the spraying of the coatings, which are relieved after the first heat-up cycle. In the cases of Cu-8%Cr and NiAl, the thermal expansion data were reproducible for three specimens. The linear expansion data for Cu-8%Cr and Cu-26%Cr agree extremely well with rule-of-mixtures (ROM) predictions. Comparison of the data for the Cu-8%Cr coating with literature data for Cr and Cu revealed that the thermal expansion behavior of this alloy is determined by the Cu-rich matrix. The data for NiAl and NiCrAlY are in excellent agreement with published results irrespective of composition and the methods used for processing the materials. The implications of these results for coating GRCop-84 copper alloy combustor liners for reusable launch vehicles are discussed.
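Fitting the cubic regression form ΔL/L0 = A(T - 293)^3 + B(T - 293)^2 + C(T - 293) + D to dilatometry data is ordinary least squares in the shifted temperature t = T - 293. The coefficient values below are synthetic, chosen only to exercise the fit, and are not the paper's measured constants.

```python
import numpy as np

# Synthetic example: recover the regression constants A, B, C, D of
# dL/L0 = A*(T-293)^3 + B*(T-293)^2 + C*(T-293) + D from sampled data.
# These coefficient values are made up for illustration.
A, B, C, D = 1e-12, 2e-9, 1.6e-5, 0.0

T = np.linspace(293.0, 1223.0, 40)   # the measurement range from the abstract
t = T - 293.0                        # shifted temperature
strain = A * t**3 + B * t**2 + C * t + D

coeffs = np.polyfit(t, strain, 3)    # returns [A, B, C, D], highest degree first
```

Working in t rather than T keeps the intercept D interpretable as the (zero) expansion at 293 K.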
Expansion tube test time predictions
NASA Technical Reports Server (NTRS)
Gourlay, Christopher M.
1988-01-01
The interaction of an interface between two gases and strong expansion is investigated and the effect on flow in an expansion tube is examined. Two mechanisms for the unsteady Pitot-pressure fluctuations found in the test section of an expansion tube are proposed. The first mechanism depends on the Rayleigh-Taylor instability of the driver-test gas interface in the presence of a strong expansion. The second mechanism depends on the reflection of the strong expansion from the interface. Predictions compare favorably with experimental results. The theory is expected to be independent of the absolute values of the initial expansion tube filling pressures.
Accelerated expansion through interaction
Zimdahl, Winfried
2009-05-01
Interactions between dark matter and dark energy with a given equation of state are known to modify the cosmic dynamics. On the other hand, the strength of these interactions is subject to strong observational constraints. Here we discuss a model in which the transition from decelerated to accelerated expansion of the Universe arises as a pure interaction phenomenon. Various cosmological scenarios that describe a present stage of accelerated expansion, like the ΛCDM model or a (generalized) Chaplygin gas, follow as special cases for different interaction rates. This unifying view on the homogeneous and isotropic background level is accompanied by a non-adiabatic perturbation dynamics which can be seen as a consequence of a fluctuating interaction rate.
Weighted Feature Gaussian Kernel SVM for Emotion Recognition
Jia, Qingxuan
2016-01-01
Emotion recognition with weighted features based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method that utilizes the subregion recognition rate to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight. Then, we obtain a weighted feature Gaussian kernel function and construct a classifier based on a Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted feature Gaussian kernel function achieves a good recognition rate. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods. PMID:27807443
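The core construction, a Gaussian kernel whose per-feature (subregion) weights come from subregion recognition rates, can be sketched as follows. The feature vectors and weights are illustrative placeholders; the exact weighting scheme of the paper may differ.

```python
import math

def weighted_gaussian_kernel(x, y, weights, sigma=1.0):
    """Gaussian kernel with per-feature (subregion) weights, so features
    from better-recognized subregions contribute more to the distance."""
    d2 = sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y))
    return math.exp(-d2 / (2.0 * sigma ** 2))

# Two illustrative subregion feature vectors
x, y = [0.2, 0.8, 0.1], [0.1, 0.5, 0.9]
k_uniform = weighted_gaussian_kernel(x, y, [1.0, 1.0, 1.0])
k_weighted = weighted_gaussian_kernel(x, y, [0.9, 0.8, 0.1])  # downweight a noisy region
```

This kernel stays positive definite as long as the weights are nonnegative, so it can be plugged directly into an SVM.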
Improved Online Support Vector Machines Spam Filtering Using String Kernels
NASA Astrophysics Data System (ADS)
Amayri, Ola; Bouguila, Nizar
A major bottleneck in electronic communications is the enormous dissemination of spam emails. Developing suitable filters that can adequately capture those emails and achieve a high performance rate has become a main concern. Support vector machines (SVMs) have made a large contribution to the development of spam email filtering. Based on SVMs, the crucial problems in email classification are the feature mapping of input emails and the choice of kernels. In this paper, we present a thorough investigation of several distance-based kernels, propose the use of string kernels, and demonstrate their efficiency in blocking spam emails. We detail feature mapping variants in text classification (TC) that yield improved performance for standard SVMs in the filtering task. Furthermore, to cope with real-time scenarios, we propose an online active framework for spam filtering.
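A common string kernel for text of the kind the abstract advocates is the p-spectrum kernel, the inner product of n-gram count vectors. The sketch below uses that variant for illustration; it is not necessarily the exact string kernel used in the paper, and the sample strings are made up.

```python
from collections import Counter

def spectrum_kernel(s, t, n=3):
    """p-spectrum string kernel: inner product of n-gram count vectors.
    Strings sharing many character n-grams get a large kernel value."""
    def ngrams(x):
        return Counter(x[i:i + n] for i in range(len(x) - n + 1))
    cs, ct = ngrams(s), ngrams(t)
    return sum(c * ct[g] for g, c in cs.items())

# Toy illustration: a spam-like pair shares far more 3-grams than a
# spam/ham pair, which is what makes the kernel useful for filtering
k_spam = spectrum_kernel("cheap viagra now", "buy cheap viagra", n=3)
k_ham = spectrum_kernel("cheap viagra now", "meeting at noon", n=3)
```

Because the kernel is an explicit inner product of count vectors, it is positive semidefinite by construction and can be fed to a standard SVM.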
Identification of nonlinear optical systems using adaptive kernel methods
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Zhang, Changjiang; Zhang, Haoran; Feng, Genliang; Xu, Xiuling
2005-12-01
An identification approach for nonlinear optical dynamic systems, based on adaptive kernel methods that are a modified version of the least squares support vector machine (LS-SVM), is presented in order to obtain a reference dynamic model for real-time applications such as adaptive signal processing of optical systems. The feasibility of this approach is demonstrated by computer simulation, through identification of a Bragg acousto-optical bistable system. Unlike artificial neural networks, adaptive kernel methods possess prominent advantages: overfitting is unlikely to occur because a structural risk minimization criterion is employed, and the globally optimal solution is obtained uniquely because training reduces to solving a set of linear equations. The adaptive kernel methods also remain effective for nonlinear optical systems when a system parameter varies. The method is robust with respect to noise and constitutes another powerful tool for the identification of nonlinear optical systems.
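The "training via a set of linear equations" property is the defining trait of LS-SVM: the dual problem collapses to one linear system. A minimal generic sketch (RBF kernel; the gamma and sigma values and the toy sine target are illustrative, not the authors' modified variant):

```python
import numpy as np

def lssvm_fit(X, y, gamma=100.0, sigma=0.5):
    # LS-SVM in dual form: equality constraints turn training into one
    # linear system  [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))       # RBF Gram matrix
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0], X, sigma           # alpha, b, training data

def lssvm_predict(model, Xnew):
    alpha, b, X, sigma = model
    d2 = ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ alpha + b

# Identify a simple 1-D nonlinear map from samples.
X = np.linspace(0.0, 3.0, 30).reshape(-1, 1)
y = np.sin(X).ravel()
pred = lssvm_predict(lssvm_fit(X, y), X)
```

Because the solution is a single `np.linalg.solve`, it is unique whenever the system matrix is nonsingular, which is the "global optimum" advantage the abstract cites.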
A method of smoothed particle hydrodynamics using spheroidal kernels
NASA Technical Reports Server (NTRS)
Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.
1995-01-01
We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
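The spheroidal-kernel idea can be illustrated by rescaling the particle separation anisotropically before evaluating a standard 1-D spline profile. The cubic spline is the common SPH choice; the normalization constant here is the naive volume-scaling guess, not necessarily the paper's:

```python
import numpy as np

def spheroidal_w(dx, h_perp, h_axis):
    # Rescale the separation: h_axis along the deformation (z) axis,
    # h_perp in the perpendicular plane, then evaluate the usual
    # cubic-spline profile in the scaled radius q.
    q = np.sqrt((dx[0] / h_perp) ** 2 + (dx[1] / h_perp) ** 2
                + (dx[2] / h_axis) ** 2)
    if q < 1.0:
        w = 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3
    elif q < 2.0:
        w = 0.25 * (2.0 - q) ** 3
    else:
        w = 0.0
    # Volume scaling: an isotropic 1/(pi h^3) becomes 1/(pi h_perp^2 h_axis).
    return w / (np.pi * h_perp ** 2 * h_axis)

w_center = spheroidal_w(np.zeros(3), 1.0, 0.5)
w_axis = spheroidal_w(np.array([0.0, 0.0, 0.4]), 1.0, 0.5)
```

Letting `h_axis` evolve independently of `h_perp` is what sharpens resolution along the collapse direction while keeping compact support in all directions.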
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data because of the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562
Recurrent kernel machines: computing with infinite echo state networks.
Hermans, Michiel; Schrauwen, Benjamin
2012-01-01
Echo state networks (ESNs) are large, random recurrent neural networks with a single trained linear readout layer. Despite the untrained nature of the recurrent weights, they are capable of performing universal computations on temporal input data, which makes them interesting for both theoretical research and practical applications. The key to their success lies in the fact that the network computes a broad set of nonlinear, spatiotemporal mappings of the input data, on which linear regression or classification can easily be performed. One could consider the reservoir as a spatiotemporal kernel, in which the mapping to a high-dimensional space is computed explicitly. In this letter, we build on this idea and extend the concept of ESNs to infinite-sized recurrent neural networks, which can be considered recursive kernels that subsequently can be used to create recursive support vector machines. We present the theoretical framework, provide several practical examples of recursive kernels, and apply them to typical temporal tasks.
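The letter's exact recursions are not reproduced here, but the flavor of a recursive kernel on sequences can be sketched: each time step mixes current-input similarity with the accumulated kernel value, much as a reservoir mixes fresh input with its fading state. The functional form and the memory parameter `rho` below are illustrative assumptions, not the paper's derived kernels:

```python
import math

def recursive_kernel(u_seq, v_seq, rho=0.5, sigma=1.0):
    # k_t = (similarity of current inputs) * ((1 - rho) + rho * k_{t-1}):
    # rho plays a spectral-radius-like memory role for the infinite network.
    k = 0.0
    for u, v in zip(u_seq, v_seq):
        k = math.exp(-((u - v) ** 2) / (2.0 * sigma ** 2)) * (1.0 - rho + rho * k)
    return k

s = [0.1, 0.5, -0.3, 0.8]
t = [2.0, -1.0, 1.5, -0.7]
```

A kernel of this shape, evaluated on sequence pairs, can be handed to a standard SVM solver, which is the sense in which one obtains "recursive support vector machines."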
Compression loading behaviour of sunflower seeds and kernels
NASA Astrophysics Data System (ADS)
Selvam, Thasaiya A.; Manikantan, Musuvadi R.; Chand, Tarsem; Sharma, Rajiv; Seerangurayar, Thirupathi
2014-10-01
The present study investigated the compression loading behaviour of five Indian sunflower varieties (NIRMAL-196, NIRMAL-303, CO-2, KBSH-41, and PSH-996) at four moisture levels (6-18% d.b.). The initial cracking force, mean rupture force, and rupture energy were measured as functions of moisture content. The initial cracking force decreased linearly with increasing moisture content for all varieties, as did the mean rupture force. The rupture energy, however, increased linearly with moisture content for both seed and kernel. NIRMAL-196 and PSH-996 had the maximum and minimum values, respectively, of all the attributes studied for both seed and kernel. The values of all the studied attributes were higher for seed than for kernel across all varieties and moisture levels. Moisture and variety both had a significant effect on compression loading behaviour.
China petrochemical expansion progressing
Not Available
1991-08-05
This paper reports that China's petrochemical expansion surge is picking up speed. A world-scale petrochemical complex is emerging at Shanghai with an eye to expanding China's petrochemical exports, possibly through joint ventures with foreign companies, China Features reported. In other action, Beijing and Henan province have approved plans for a $1.2 billion chemical fibers complex at the proposed Luoyang refinery, China Daily reported.
Tissue expansion in perspective.
Sharpe, D. T.; Burd, R. M.
1989-01-01
Tissue expansion is a recent advance in skin cover technique. Its empirical use has enabled many previously difficult reconstructions to be completed without recourse to distant flaps. Its high complication rate and the present lack of basic scientific understanding restrict its use to selected cases, but the quality of repairs possible by this method encourages further serious scientific study. PMID:2589784
Broadband Waveform Sensitivity Kernels for Large-Scale Seismic Tomography
NASA Astrophysics Data System (ADS)
Nissen-Meyer, T.; Stähler, S. C.; van Driel, M.; Hosseini, K.; Auer, L.; Sigloch, K.
2015-12-01
Seismic sensitivity kernels, i.e. the basis for mapping misfit functionals to structural parameters in seismic inversions, have received much attention in recent years. Their computation has been conducted via ray-theory based approaches (Dahlen et al., 2000) or fully numerical solutions based on the adjoint-state formulation (e.g. Tromp et al., 2005). The core problem is the exorbitant computational cost due to the large number of source-receiver pairs, each of which requires a solution to the forward problem. This is exacerbated in the high-frequency regime, where numerical solutions become prohibitively expensive. We present a methodology to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (abstract ID# 77891, www.axisem.info), and thus on spherically symmetric models. As a consequence of this method's numerical efficiency even in high-frequency regimes, kernels can be computed in a time- and frequency-dependent manner, thus providing the full generic mapping from perturbed waveform to perturbed structure. Such waveform kernels can then be used for a variety of misfit functions and structural parameters, and refiltered into bandpasses without recomputing any wavefields. A core component of the kernel method presented here is the mapping from numerical wavefields to inversion meshes. This is achieved by a Monte Carlo approach, allowing for convergent and controllable accuracy on arbitrarily shaped tetrahedral and hexahedral meshes. We test and validate this accuracy by comparing to reference traveltimes, show the projection onto various locally adaptive inversion meshes, and discuss computational efficiency for ongoing tomographic applications involving millions of observed body-wave data at periods of 2-30 s.
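The Monte Carlo mapping from a continuous kernel to inversion cells can be sketched generically: each cell's value is the mean of the kernel at random points drawn inside it, with accuracy controlled by the sample count. Axis-aligned boxes stand in for the tetrahedral/hexahedral cells, and the kernel function is an arbitrary placeholder:

```python
import numpy as np

def project_to_cells(kernel_fn, cells, n_samples=4000, rng=None):
    # Monte Carlo projection: average the continuous kernel over uniform
    # samples inside each cell; the error shrinks like 1/sqrt(n_samples),
    # which is the "convergent and controllable accuracy" property.
    if rng is None:
        rng = np.random.default_rng(0)
    out = []
    for lo, hi in cells:  # each cell given by (lower corner, upper corner)
        pts = rng.uniform(lo, hi, size=(n_samples, len(lo)))
        out.append(kernel_fn(pts).mean())
    return np.array(out)

# Placeholder kernel: sum of coordinates, averaged over the unit square
# (true mean value is 1.0).
cells = [(np.zeros(2), np.ones(2))]
vals = project_to_cells(lambda p: p.sum(axis=1), cells)
```

The same sampling loop works for any cell shape for which points can be drawn uniformly, which is why arbitrarily shaped meshes pose no special difficulty.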
Single aflatoxin contaminated corn kernel analysis with fluorescence hyperspectral image
NASA Astrophysics Data System (ADS)
Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Ononye, Ambrose; Brown, Robert L.; Cleveland, Thomas E.
2010-04-01
Aflatoxins are toxic secondary metabolites of the fungi Aspergillus flavus and Aspergillus parasiticus, among others. Aflatoxin contaminated corn is toxic to domestic animals when ingested in feed and is a known carcinogen associated with liver and lung cancer in humans. Consequently, aflatoxin levels in food and feed are regulated by the Food and Drug Administration (FDA) in the US, allowing 20 ppb (parts per billion) limits in food and 100 ppb in feed for interstate commerce. Currently, aflatoxin detection and quantification methods are based on analytical tests including thin-layer chromatography (TLC) and high performance liquid chromatography (HPLC). These analytical tests require the destruction of samples, and are costly and time consuming. Thus, the ability to detect aflatoxin in a rapid, nondestructive way is crucial to the grain industry, particularly to the corn industry. Hyperspectral imaging technology offers a non-invasive approach toward screening for food safety inspection and quality control based on spectral signatures. The focus of this paper is to classify aflatoxin contaminated single corn kernels using fluorescence hyperspectral imagery. Field inoculated corn kernels were used in the study. Contaminated and control kernels under long wavelength ultraviolet excitation were imaged using a visible near-infrared (VNIR) hyperspectral camera. The imaged kernels were chemically analyzed to provide reference information for image analysis. This paper describes a procedure to process corn kernels located in different images for statistical training and classification. Two classification algorithms, Maximum Likelihood and Binary Encoding, were used to classify each corn kernel into "control" or "contaminated" through pixel classification. The Binary Encoding approach had slightly better performance, with accuracies of 87% and 88% when 20 ppb and 100 ppb, respectively, were used as the classification threshold.
A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods seek the single best kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, multiple kernel functions may perform equally well on the same classification problem. Aiming to automatically select the appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896
Optical remote sensor for peanut kernel abortion classification.
Ozana, Nisan; Buchsbaum, Stav; Bishitz, Yael; Beiderman, Yevgeny; Schmilovitch, Zeev; Schwarz, Ariel; Shemer, Amir; Keshet, Joseph; Zalevsky, Zeev
2016-05-20
In this paper, we propose a simple, inexpensive optical device for remote measurement of various agricultural parameters. The sensor is based on temporal tracking of backreflected secondary speckle patterns generated when illuminating a plant with a laser and while applying periodic acoustic-based pressure stimulation. By analyzing different parameters using a support-vector-machine-based algorithm, peanut kernel abortion can be detected remotely. This paper presents experimental tests which are the first step toward an implementation of a noncontact device for the detection of agricultural parameters such as kernel abortion. PMID:27411126
An information theoretic approach of designing sparse kernel adaptive filters.
Liu, Weifeng; Park, Il; Principe, José C
2009-12-01
This paper discusses an information theoretic approach to designing sparse kernel adaptive filters. To determine which data are useful to learn and which are redundant, a subjective information measure called surprise is introduced. Surprise captures the amount of information a datum contains that is transferable to a learning system. Based on this concept, we propose a systematic sparsification scheme that can drastically reduce time and space complexity without harming the performance of kernel adaptive filters. Nonlinear regression, short-term chaotic time-series prediction, and long-term time-series forecasting examples are presented. PMID:19923047
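The surprise measure can be read as the negative log predictive likelihood of a new datum under the current kernel model. The Gaussian-process form below is one standard way to write it; the RBF kernel choice, width, and noise level are illustrative assumptions:

```python
import numpy as np

def surprise(x_new, y_new, X, y, sigma=1.0, noise=1e-2):
    # Negative log predictive likelihood of (x_new, y_new) given the current
    # dictionary (X, y): large values flag informative data worth learning,
    # low values flag redundant data that can be discarded.
    def k(a, b):
        return np.exp(-np.sum((a - b) ** 2, axis=-1) / (2.0 * sigma ** 2))
    K = k(X[:, None], X[None, :]) + noise * np.eye(len(X))
    kv = k(X, x_new)
    mean = kv @ np.linalg.solve(K, y)                 # predictive mean
    var = 1.0 + noise - kv @ np.linalg.solve(K, kv)   # predictive variance
    return 0.5 * (np.log(2.0 * np.pi * var) + (y_new - mean) ** 2 / var)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.sin(X[:, 0])
s_redundant = surprise(X[0], y[0], X, y)              # already in the dictionary
s_novel = surprise(np.array([5.0, 5.0]), 1.0, X, y)   # far from all data
```

A sparsification rule then keeps a datum only when its surprise exceeds a threshold, bounding the dictionary size.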
Iris Image Blur Detection with Multiple Kernel Learning
NASA Astrophysics Data System (ADS)
Pan, Lili; Xie, Mei; Mao, Ling
In this letter, we analyze the influence of motion and out-of-focus blur on both frequency spectrum and cepstrum of an iris image. Based on their characteristics, we define two new discriminative blur features represented by Energy Spectral Density Distribution (ESDD) and Singular Cepstrum Histogram (SCH). To merge the two features for blur detection, a merging kernel which is a linear combination of two kernels is proposed when employing Support Vector Machine. Extensive experiments demonstrate the validity of our method by showing the improved blur detection performance on both synthetic and real datasets.
CORONAL LOOP EXPANSION PROPERTIES EXPLAINED USING SEPARATORS
Plowman, Joseph E.; Kankelborg, Charles C.; Longcope, Dana W.
2009-11-20
One puzzling observed property of coronal loops is that they are of roughly constant thickness along their length. Various studies have found no consistent pattern of width variation along the length of loops observed by TRACE and SOHO. This is at odds with expectations from magnetic flux tube expansion properties, which suggest that loops are widest at their tops and significantly narrower at their footpoints. Coronal loops correspond to areas of the solar corona that have been preferentially heated by some process, so this observed property might be connected to the mechanisms that heat the corona. One means of energy deposition is magnetic reconnection, which occurs along field lines called separators. These field lines begin and end on magnetic null points, and loops forming near them can therefore be relatively wide at their bases. Thus, coronal energization by magnetic reconnection may replicate the puzzling expansion properties observed in coronal loops. We present results of a Monte Carlo survey of separator field line expansion properties, comparing them to the observed properties of coronal loops.
Expansion: A Plan for Success.
ERIC Educational Resources Information Center
Callahan, A.P.
This report provides selling brokers with guidelines for the successful expansion of their operations, outlining a basic method of preparing an expansion plan. Topic headings are: The Pitfalls of Expansion (The Language of Business, Timely Financial Reporting, Regulatory Agencies of Government, Preoccupation with the Facade of Business, A Business Is a…
Operator product expansion algebra
Holland, Jan; Hollands, Stefan
2013-07-15
We establish conceptually important properties of the operator product expansion (OPE) in the context of perturbative, Euclidean φ^4 quantum field theory. First, we demonstrate, generalizing earlier results and techniques of hep-th/1105.3375, that the 3-point OPE,
Chakrabarti, J.; Sajjad Zahir, M.
1985-03-01
We show that the product of local current operators in quantum chromodynamics (QCD), when expanded in terms of condensates such as ψ̄ψ, G^a_{μν}G^a_{μν}, ψ̄Γψ ψ̄Γψ, f_{abc}G^a_{μν}G^b_{να}G^c_{αμ}, etc., yields a series in Planck's constant. This, however, provides no hint that the higher terms in such an expansion may be less significant.
Higher-order Lipatov kernels and the QCD Pomeron
White, A.R.
1994-08-12
Three closely related topics are covered: the derivation of the O(g^4) Lipatov kernels in pure glue QCD; the significance of quarks for the physical Pomeron in QCD; and the possible inter-relation of Pomeron dynamics with Electroweak symmetry breaking.
Metabolite identification through multiple kernel learning on fragmentation trees
Shen, Huibin; Dührkop, Kai; Böcker, Sebastian; Rousu, Juho
2014-01-01
Motivation: Metabolite identification from tandem mass spectrometric data is a key task in metabolomics. Various computational methods have been proposed for the identification of metabolites from tandem mass spectra. Fragmentation tree methods explore the space of possible ways in which the metabolite can fragment, and base the metabolite identification on scoring of these fragmentation trees. Machine learning methods have been used to map mass spectra to molecular fingerprints; predicted fingerprints, in turn, can be used to score candidate molecular structures. Results: Here, we combine fragmentation tree computations with kernel-based machine learning to predict molecular fingerprints and identify molecular structures. We introduce a family of kernels capturing the similarity of fragmentation trees, and combine these kernels using recently proposed multiple kernel learning approaches. Experiments on two large reference datasets show that the new methods significantly improve molecular fingerprint prediction accuracy. These improvements result in better metabolite identification, doubling the number of metabolites ranked at the top position of the candidates list. Contact: huibin.shen@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931979
Music emotion detection using hierarchical sparse kernel machines.
Chin, Yu-Hao; Lin, Chang-Hong; Siahaan, Ernestasia; Wang, Jia-Ching
2014-01-01
For music emotion detection, this paper presents a music emotion verification system based on hierarchical sparse kernel machines, intended to verify whether a music clip possesses the happiness emotion. There are two levels in the hierarchical sparse kernel machines. In the first level, a set of acoustical features is extracted, and principal component analysis (PCA) is applied to reduce the dimension. The acoustical features are used to generate the first-level decision vector, a vector whose elements are the significance values of emotions; the significance values of eight main emotional classes are used in this paper. To calculate the significance value of an emotion, we construct its 2-class SVM with the calm emotion as the global (non-target) side. The probability distributions of the adopted acoustical features are calculated, and the probability product kernel is applied in the first-level SVMs to obtain the first-level decision vector feature. In the second level of the hierarchical system, we construct a single 2-class relevance vector machine (RVM) with happiness as the target side and the other emotions as the background side. The first-level decision vector is used as the feature with a conventional radial basis function kernel, and the happiness verification threshold is built on the probability value. In the experimental results, the detection error tradeoff (DET) curve shows that the proposed system performs well in verifying whether a music clip reveals the happiness emotion.
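The probability product kernel used in the first-level SVMs has a closed form when the feature distributions are modeled as Gaussians. A 1-D sketch with exponent ρ = 1 (the expected-likelihood case; the paper's actual feature distributions are multivariate):

```python
import numpy as np

def ppk_gaussian(m1, s1, m2, s2):
    # Probability product kernel with rho = 1: the integral of the product
    # of two Gaussian densities N(m1, s1^2) and N(m2, s2^2), which equals
    # the value of N(0, s1^2 + s2^2) at m1 - m2.
    var = s1 ** 2 + s2 ** 2
    return np.exp(-(m1 - m2) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
```

Because the kernel compares whole distributions rather than point features, clips with similar feature statistics score high even when their raw frames differ.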
High-Speed Tracking with Kernelized Correlation Filters.
Henriques, João F; Caseiro, Rui; Martins, Pedro; Batista, Jorge
2015-03-01
The core component of most modern trackers is a discriminative classifier, tasked with distinguishing between the target and the surrounding environment. To cope with natural image changes, this classifier is typically trained with translated and scaled sample patches. Such sets of samples are riddled with redundancies: any overlapping pixels are constrained to be the same. Based on this simple observation, we propose an analytic model for datasets of thousands of translated patches. By showing that the resulting data matrix is circulant, we can diagonalize it with the discrete Fourier transform, reducing both storage and computation by several orders of magnitude. Interestingly, for linear regression our formulation is equivalent to a correlation filter, used by some of the fastest competitive trackers. For kernel regression, however, we derive a new kernelized correlation filter (KCF) that, unlike other kernel algorithms, has the exact same complexity as its linear counterpart. Building on it, we also propose a fast multi-channel extension of linear correlation filters, via a linear kernel, which we call the dual correlation filter (DCF). Both KCF and DCF outperform top-ranking trackers such as Struck or TLD on a 50-video benchmark, despite running at hundreds of frames per second and being implemented in a few lines of code (Algorithm 1). To encourage further developments, our tracking framework was made open source. PMID:26353263
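The circulant/Fourier argument is concrete in 1-D for the linear-kernel (DCF) case: ridge regression over all cyclic shifts of a signal diagonalizes under the DFT, so training and detection reduce to element-wise products. Signal length, target shape, and the regularizer λ below are illustrative:

```python
import numpy as np

def train_dcf(x, y, lam=1e-3):
    # All cyclic shifts of x form a circulant data matrix, so ridge
    # regression solves element-wise in the Fourier domain.
    xf, yf = np.fft.fft(x), np.fft.fft(y)
    kf = xf * np.conj(xf) / len(x)   # linear kernel of x with its own shifts
    return yf / (kf + lam)           # dual coefficients, Fourier domain

def detect(alphaf, x, z):
    # Filter response at every cyclic shift of the new signal z.
    xf, zf = np.fft.fft(x), np.fft.fft(z)
    kzf = zf * np.conj(xf) / len(x)
    return np.real(np.fft.ifft(alphaf * kzf))

rng = np.random.default_rng(2)
x = rng.normal(size=64)                               # "training patch"
y = np.exp(-0.5 * ((np.arange(64) - 10) / 2.0) ** 2)  # desired response, peak at 10
resp = detect(train_dcf(x, y), x, x)
```

The kernelized (KCF) version replaces the linear kernel terms with a kernel correlation computed in the same element-wise fashion, which is how it keeps the linear-case complexity.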
Notes on a storage manager for the Clouds kernel
NASA Technical Reports Server (NTRS)
Pitts, David V.; Spafford, Eugene H.
1986-01-01
The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.
Microwave moisture meter for in-shell peanut kernels
Technology Transfer Automated Retrieval System (TEKTRAN)
A microwave moisture meter built with off-the-shelf components was developed, calibrated and tested in the laboratory and in the field for nondestructive and instantaneous in-shell peanut kernel moisture content determination from dielectric measurements on unshelled peanut pod samples. The meter ...
Estimating Filtering Errors Using the Peano Kernel Theorem
Jerome Blair
2008-03-01
The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
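For reference, the theorem behind such error formulas states (in its standard numerical-analysis form, not quoted from this report): if a linear functional $L$ annihilates all polynomials of degree at most $n$, then for $f \in C^{n+1}[a,b]$

```latex
L[f] = \int_a^b K_n(t)\, f^{(n+1)}(t)\, dt,
\qquad
K_n(t) = \frac{1}{n!}\, L_x\!\left[ (x - t)_+^{\,n} \right],
```

so the error a filter introduces is bounded by $\lvert L[f]\rvert \le \lVert f^{(n+1)}\rVert_\infty \int_a^b \lvert K_n(t)\rvert\, dt$, which is the kind of simple estimate the abstract refers to.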
Stereotype Measurement and the "Kernel of Truth" Hypothesis.
ERIC Educational Resources Information Center
Gordon, Randall A.
1989-01-01
Describes a stereotype measurement suitable for classroom demonstration. Illustrates C. McCauley and C. L. Stitt's diagnostic ratio measure and examines the validity of the "kernel of truth" hypothesis. Uses this as a starting point for class discussion. Reports results and gives suggestions for discussion of related concepts. (Author/NL)
Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating
ERIC Educational Resources Information Center
Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen
2012-01-01
This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…
7 CFR 981.61 - Redetermination of kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...
7 CFR 981.61 - Redetermination of kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...
7 CFR 981.61 - Redetermination of kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...
7 CFR 981.61 - Redetermination of kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...
7 CFR 981.61 - Redetermination of kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...
Estimating Filtering Errors Using the Peano Kernel Theorem
Jerome Blair
2009-02-20
The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
Matrix kernels for MEG and EEG source localization and imaging
Mosher, J.C.; Lewis, P.S.; Leahy, R.M.
1994-12-31
The most widely used model for electroencephalography (EEG) and magnetoencephalography (MEG) assumes a quasi-static approximation of Maxwell's equations and a piecewise homogeneous conductor model. Both models contain an incremental field element that linearly relates an incremental source element (current dipole) to the field or voltage at a distant point. The explicit form of the field element is dependent on the head modeling assumptions and sensor configuration. Proper characterization of this incremental element is crucial to the inverse problem. The field element can be partitioned into the product of a vector dependent on sensor characteristics and a matrix kernel dependent only on head modeling assumptions. We present here the matrix kernels for the general boundary element model (BEM) and for MEG spherical models. We show how these kernels are easily interchanged in a linear algebraic framework that includes sensor specifics such as orientation and gradiometer configuration. We then describe how this kernel is easily applied to "gain" or "transfer" matrices used in multiple dipole and source imaging models.
The Stokes problem for the ellipsoid using ellipsoidal kernels
NASA Technical Reports Server (NTRS)
Zhu, Z.
1981-01-01
A brief review of Stokes' problem for the ellipsoid as a reference surface is given. Another solution of the problem using an ellipsoidal kernel, which represents an iterative form of Stokes' integral, is suggested with a relative error of the order of the flattening. Rapp's method is studied in detail, and procedures for improving its convergence are discussed.
Expansible quantum secret sharing network
NASA Astrophysics Data System (ADS)
Sun, Ying; Xu, Sheng-Wei; Chen, Xiu-Bo; Niu, Xin-Xin; Yang, Yi-Xian
2013-08-01
In practical applications, member expansion is a usual demand during the development of a secret sharing network. However, there has been little consideration or discussion of network expansibility in the existing quantum secret sharing schemes. We propose an expansible quantum secret sharing scheme with relatively simple and economical quantum resources and show how to split and reconstruct the quantum secret among an expansible user group in our scheme. Its key trait, requiring no agent's assistance during the process of member expansion, can help to prevent potential menaces of insider cheating. We also discuss the security of this scheme from three aspects.
Working fluids and expansion machines for ORC
NASA Astrophysics Data System (ADS)
Richter, Lukáš; Linhart, Jiří
2016-06-01
This paper discusses the key technical aspects of the organic Rankine-Clausius cycle (ORC), an unconventional technology with great potential for the use of low-potential heat, geothermal and solar energy, and heat from the burning of biomass. The principle of ORC has been known since the late 19th century. The development of new organic substances and improvements to the expansion device now allow full commercial exploitation of ORC. The right choice of organic working substance plays the most important role in the design of an ORC, depending on the specific application. The chosen working substance and achieved operating parameters will affect the selection and construction of the expansion device. For this purpose the screw engine, an inversion of the screw compressor, can be used.
Increasing accuracy of dispersal kernels in grid-based population models
Slone, D.H.
2011-01-01
Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
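The difference between the cell-center and cell-integration discretizations described above can be sketched in one dimension (a simplification; the study uses two-dimensional circular kernels):

```python
import math

def cell_center_kernel(sigma, half_width):
    # weight of each cell = Gaussian PDF evaluated at the cell center
    w = [math.exp(-0.5 * (i / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
         for i in range(-half_width, half_width + 1)]
    s = sum(w)
    return [x / s for x in w]

def cell_integrated_kernel(sigma, half_width):
    # weight of each cell = integral of the Gaussian PDF over [i-0.5, i+0.5]
    def cdf(x):
        return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))
    w = [cdf(i + 0.5) - cdf(i - 0.5) for i in range(-half_width, half_width + 1)]
    s = sum(w)
    return [x / s for x in w]
```

For σ well above one cell the two methods nearly agree; for σ much below one cell the cell-center weights concentrate almost all mass in the central cell, which is the error regime the paper corrects for.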
Genome Mapping of Kernel Characteristics in Hard Red Spring Wheat Breeding Lines
Technology Transfer Automated Retrieval System (TEKTRAN)
Kernel characteristics, particularly kernel weight, kernel size, and grain protein content, are important components of grain yield and quality in wheat. Development of high performing wheat cultivars, with high grain yield and quality, is a major focus in wheat breeding programs worldwide. Here, we...
Low Cost Real-Time Sorting of in Shell Pistachio Nuts from Kernels
Technology Transfer Automated Retrieval System (TEKTRAN)
A high speed sorter for separating pistachio nuts with (in shell) and without (kernels) shells is reported. Testing indicates 95% accuracy in removing kernels from the in shell stream with no false positive results out of 1000 kernels tested. Testing with 1000 each of in shell, shell halves, and ker...
High-Throughput Sequencing Reveals Single Nucleotide Variants in Longer-Kernel Bread Wheat
Chen, Feng; Zhu, Zibo; Zhou, Xiaobian; Yan, Yan; Dong, Zhongdong; Cui, Dangqun
2016-01-01
The transcriptomes of bread wheat Yunong 201 and its ethyl methanesulfonate derivative Yunong 3114 were obtained by next-generation sequencing technology. Single nucleotide variants (SNVs) in the wheat strains were explored and compared. A total of 5907 and 6287 non-synonymous SNVs were acquired for Yunong 201 and 3114, respectively. A total of 4021 genes with SNVs were obtained. The genes carrying non-synonymous SNVs were significantly involved in ATP binding, protein phosphorylation, and cellular protein metabolic process. The heat map analysis also indicated that most of these mutant genes were significantly differentially expressed at different developmental stages. The SNVs in these genes possibly contribute to the longer kernel length of Yunong 3114. Our data provide useful information on the wheat transcriptome for future studies on wheat functional genomics. This study could also help in illustrating the gene functions of the non-synonymous SNVs of Yunong 201 and 3114. PMID:27551288
Improved scatter correction using adaptive scatter kernel superposition
NASA Astrophysics Data System (ADS)
Sun, M.; Star-Lack, J. M.
2010-11-01
Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
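A minimal one-dimensional sketch of the stationary (non-adaptive) kernel-superposition idea underlying SKS: the scatter signal is estimated as an amplitude-scaled convolution of the projection with a symmetric kernel, then subtracted. The kernel shape and amplitude here are illustrative placeholders, not the paper's calibrated slab-derived kernels:

```python
def scatter_estimate(projection, kernel, amplitude):
    # stationary SKS sketch: scatter ~ amplitude * (projection convolved
    # with a symmetric, normalized scatter kernel), zero-padded at edges
    n, half = len(projection), len(kernel) // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j, kw in enumerate(kernel):
            src = i + j - half
            if 0 <= src < n:
                acc += projection[src] * kw
        out.append(amplitude * acc)
    return out

def correct(projection, kernel, amplitude):
    # deconvolution-style correction: subtract the scatter estimate,
    # clamping at zero to keep the corrected projection physical
    s = scatter_estimate(projection, kernel, amplitude)
    return [max(p - sc, 0.0) for p, sc in zip(projection, s)]
```

The adaptive variants (ASKS/fASKS) replace the fixed kernel with one whose shape depends on local thickness in the projection data.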
Kernel-based least squares policy iteration for reinforcement learning.
Xu, Xin; Hu, Dewen; Lu, Xicheng
2007-07-01
In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge on dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to the previous works on approximate RL methods, KLSPI makes two advances that eliminate the main difficulties of existing results. One is the better convergence and (near) optimality guarantee by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information of uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating
Direct discriminant locality preserving projection with Hammerstein polynomial expansion.
Chen, Xi; Zhang, Jiashu; Li, Defang
2012-12-01
Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, we can optimize the objective function of DLPP in reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) larger computational burden; 2) no explicit mapping functions in KDLPP, which results in more computational burden when projecting a new sample into the low-dimensional subspace; and 3) KDLPP cannot obtain optimal discriminant vectors, which would maximally optimize the objective of DLPP. To overcome the weaknesses of KDLPP, in this paper, a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP) is proposed. The proposed HPDDLPP directly implements the objective of DLPP in high-dimensional second-order Hammerstein polynomial space without matrix inverse, which extracts the optimal discriminant vectors for DLPP without a large computational burden. Compared with some other related classical methods, experimental results for face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.
Huang, Jian; Yuen, Pong C; Chen, Wen-Sheng; Lai, Jian Huang
2007-08-01
This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distribution by mapping the input space to a high-dimensional feature space. Some recognition algorithms such as the kernel principal components analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA have been developed in the last five years. The experimental results show that the kernel-based method is a good and feasible approach to tackle the pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which highly affects the generalization capability and stability of the kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which is developed based on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin maximization criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation on the generalization performance on pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with the existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.
Pallaver, Carl B.; Morgan, Michael W.
1978-01-01
A cryogenic expansion engine includes intake and exhaust poppet valves each controlled by a cam having adjustable dwell, the valve seats for the valves being threaded inserts in the valve block. Each cam includes a cam base and a ring-shaped cam insert disposed at an exterior corner of the cam base, the cam base and cam insert being generally circular but including an enlarged cam dwell, the circumferential configuration of the cam base and cam dwell being identical, the cam insert being rotatable with respect to the cam base. CONTRACTUAL ORIGIN OF THE INVENTION: The invention described herein was made in the course of, or under, a contract with the UNITED STATES ENERGY RESEARCH AND DEVELOPMENT ADMINISTRATION.
Optical imaging. Expansion microscopy.
Chen, Fei; Tillberg, Paul W; Boyden, Edward S
2015-01-30
In optical microscopy, fine structural details are resolved by using refraction to magnify images of a specimen. We discovered that by synthesizing a swellable polymer network within a specimen, it can be physically expanded, resulting in physical magnification. By covalently anchoring specific labels located within the specimen directly to the polymer network, labels spaced closer than the optical diffraction limit can be isotropically separated and optically resolved, a process we call expansion microscopy (ExM). Thus, this process can be used to perform scalable superresolution microscopy with diffraction-limited microscopes. We demonstrate ExM with apparent ~70-nanometer lateral resolution in both cultured cells and brain tissue, performing three-color superresolution imaging of ~10(7) cubic micrometers of the mouse hippocampus with a conventional confocal microscope.
Operation of Mammoth Pacific's MP1-100 turbine with metastable, supersaturated expansions
Mines, G.L.
1996-01-01
INEL's Heat Cycle Research project continues to develop a technology base for increasing use of moderate-temperature hydrothermal resources to generate electrical power. One concept is the use of metastable, supersaturated turbine expansions. These expansions support a supersaturated working fluid vapor; at equilibrium conditions, liquid condensate would be present during the turbine expansion process. Studies suggest that if these expansions do not adversely affect the turbine performance, up to 8-10% more power could be produced from a given geothermal fluid. Determining the impact of these expansions on turbine performance is the focus of the project investigations being reported.
Burial Ground Expansion Hydrogeologic Characterization
Gaughan , T.F.
1999-02-26
Sirrine Environmental Consultants provided technical oversight of the installation of eighteen groundwater monitoring wells and six exploratory borings around the location of the Burial Ground Expansion.
Effective face recognition using bag of features with additive kernels
NASA Astrophysics Data System (ADS)
Yang, Shicai; Bebis, George; Chu, Yongjie; Zhao, Lindu
2016-01-01
In past decades, many techniques have been used to improve face recognition performance. The most common and well-studied approach is to use the whole face image to build a subspace through dimensionality reduction. Differing from the methods above, we consider face recognition as an image classification problem. The face images of the same person are considered to fall into the same category. Each category and each face image can both be represented by a simple pyramid histogram. Spatial dense scale-invariant feature transform features and the bag-of-features method are used to build categories and face representations. To make the method more efficient, a linear support vector machine solver, Pegasos, is used for classification in the kernel space with additive kernels instead of nonlinear SVMs. Our experimental results demonstrate that the proposed method can achieve very high recognition accuracy on the ORL, YALE, and FERET databases.
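The abstract does not name the specific additive kernels used; two common choices in this setting, compatible with linear-solver approximations such as Pegasos via explicit feature maps, are the histogram intersection and χ² kernels:

```python
def intersection_kernel(h1, h2):
    # histogram intersection, a standard additive kernel:
    # K(x, y) = sum_i min(x_i, y_i)
    return sum(min(a, b) for a, b in zip(h1, h2))

def chi2_kernel(h1, h2):
    # additive chi-square kernel: K(x, y) = sum_i 2*x_i*y_i / (x_i + y_i),
    # skipping bins where both histograms are empty
    return sum(2.0 * a * b / (a + b) for a, b in zip(h1, h2) if a + b > 0)
```

Both kernels decompose as a sum over histogram bins, which is what makes efficient linear approximations possible.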
Some physical properties of ginkgo nuts and kernels
NASA Astrophysics Data System (ADS)
Ch'ng, P. E.; Abdullah, M. H. R. O.; Mathai, E. J.; Yunus, N. A.
2013-12-01
Some data on the physical properties of ginkgo nuts at a moisture content of 45.53% (±2.07) (wet basis) and of their kernels at 60.13% (±2.00) (wet basis) are presented in this paper. These consist of estimates of the mean length, width, thickness, geometric mean diameter, sphericity, aspect ratio, unit mass, surface area, volume, true density, bulk density, and porosity. The coefficient of static friction for nuts and kernels was determined using plywood, glass, rubber, and galvanized steel sheet. The data are essential in the field of food engineering, especially for the design and development of machines and equipment for processing and handling agricultural products.
Reproducing kernel particle method for free and forced vibration analysis
NASA Astrophysics Data System (ADS)
Zhou, J. X.; Zhang, H. Y.; Zhang, L.
2005-01-01
A reproducing kernel particle method (RKPM) is presented to analyze the natural frequencies of Euler-Bernoulli beams as well as Kirchhoff plates. In addition, RKPM is also used to predict the forced vibration responses of buried pipelines due to longitudinal travelling waves. Two different approaches, Lagrange multipliers and the transformation method, are employed to enforce essential boundary conditions. Based on the reproducing kernel approximation, the domain of interest is discretized by a set of particles without the employment of a structured mesh, which constitutes an advantage over the finite element method. Meanwhile, RKPM also exhibits advantages over the classical Rayleigh-Ritz method and its counterparts. Numerical results presented here demonstrate the effectiveness of this novel approach for both free and forced vibration analysis.
Undersampled dynamic magnetic resonance imaging using kernel principal component analysis.
Wang, Yanhua; Ying, Leslie
2014-01-01
Compressed sensing (CS) is a promising approach to accelerate dynamic magnetic resonance imaging (MRI). Most existing CS methods employ linear sparsifying transforms. Recent developments have shown that non-linear or kernel-based sparse representations outperform the linear transforms. In this paper, we present an iterative non-linear CS dynamic MRI reconstruction framework that uses the kernel principal component analysis (KPCA) to exploit the sparseness of the dynamic image sequence in the feature space. Specifically, we apply KPCA to represent the temporal profiles of each spatial location and reconstruct the images through a modified pre-image problem. The underlying optimization algorithm is based on variable splitting and the fixed-point iteration method. Simulation results show that the proposed method outperforms the conventional CS method in terms of aliasing artifact reduction and kinetic information preservation. PMID:25570262
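A minimal sketch of Gaussian-kernel KPCA as it might be applied to a set of temporal profiles (one row per spatial location). This is a generic KPCA implementation, not the paper's full reconstruction framework with pre-image solving and variable splitting:

```python
import numpy as np

def kpca(X, n_components, gamma=1.0):
    # X: (n_samples, n_features) data matrix; Gaussian (RBF) kernel KPCA.
    # Build the kernel matrix from pairwise squared distances.
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # Double-center the kernel matrix (centering in feature space).
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Leading eigenvectors give the principal components in feature space.
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]
    w, V = w[idx], V[:, idx]
    alphas = V / np.sqrt(np.maximum(w, 1e-12))  # normalize coefficients
    return Kc @ alphas  # projections of the training samples
```

In the paper's setting, sparsity of these feature-space projections is what the reconstruction exploits.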
Hydroxocobalamin treatment of acute cyanide poisoning from apricot kernels.
Cigolini, Davide; Ricci, Giogio; Zannoni, Massimo; Codogni, Rosalia; De Luca, Manuela; Perfetti, Paola; Rocca, Giampaolo
2011-05-24
Clinical experience with hydroxocobalamin in acute cyanide poisoning via ingestion remains limited. This case concerns a 35-year-old mentally ill woman who consumed more than 20 apricot kernels. Published literature suggests each kernel would have contained cyanide concentrations ranging from 0.122 to 4.09 mg/g (average 2.92 mg/g). On arrival, the woman appeared asymptomatic with a raised pulse rate and slight metabolic acidosis. Forty minutes after admission (approximately 70 min postingestion), the patient experienced headache, nausea and dyspnoea, and was hypotensive, hypoxic and tachypnoeic. Following treatment with amyl nitrite and sodium thiosulphate, her methaemoglobin level was 10%. This prompted the administration of oxygen, which evoked a slight improvement in her vital signs. Hydroxocobalamin was then administered. After 24 h, she was completely asymptomatic with normalised blood pressure and other haemodynamic parameters. This case reinforces the safety and effectiveness of hydroxocobalamin in acute cyanide poisoning by ingestion.
Cumulant expansions for atmospheric flows
NASA Astrophysics Data System (ADS)
Ait-Chaalal, Farid; Schneider, Tapio; Meyer, Bettina; Marston, J. B.
2016-02-01
Atmospheric flows are governed by the equations of fluid dynamics. These equations are nonlinear, and consequently the hierarchy of cumulant equations is not closed. But because atmospheric flows are inhomogeneous and anisotropic, the nonlinearity may manifest itself only weakly through interactions of nontrivial mean fields with disturbances such as thermals or eddies. In such situations, truncations of the hierarchy of cumulant equations hold promise as a closure strategy. Here we show how truncations at second order can be used to model and elucidate the dynamics of turbulent atmospheric flows. Two examples are considered. First, we study the growth of a dry convective boundary layer, which is heated from below, leading to turbulent upward energy transport and growth of the boundary layer. We demonstrate that a quasilinear truncation of the equations of motion, in which interactions of disturbances among each other are neglected but interactions with mean fields are taken into account, can capture the growth of the convective boundary layer. However, it does not capture important turbulent transport terms in the turbulence kinetic energy budget. Second, we study the evolution of two-dimensional large-scale waves, which are representative of waves seen in Earth's upper atmosphere. We demonstrate that a cumulant expansion truncated at second order (CE2) can capture the evolution of such waves and their nonlinear interaction with the mean flow in some circumstances, for example, when the wave amplitude is small enough or the planetary rotation rate is large enough. However, CE2 fails to capture the flow evolution when strongly nonlinear eddy-eddy interactions that generate small-scale filaments in surf zones around critical layers become important. Higher-order closures can capture these missing interactions. The results point to new ways in which the dynamics of turbulent boundary layers may be represented in climate models, and they illustrate different classes
Realistic dispersion kernels applied to cohabitation reaction dispersion equations
NASA Astrophysics Data System (ADS)
Isern, Neus; Fort, Joaquim; Pérez-Losada, Joaquim
2008-10-01
We develop front spreading models for several jump distance probability distributions (dispersion kernels). We derive expressions for a cohabitation model (cohabitation of parents and children) and a non-cohabitation model, and apply them to the Neolithic using data from real human populations. The speeds that we obtain are consistent with observations of the Neolithic transition. The correction due to the cohabitation effect is up to 38%.
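As an illustration of how a discrete dispersion kernel yields a front speed, here is a sketch under the assumption of the classical non-cohabitation formula s = min over λ > 0 of ln(R0 Σ_j p_j cosh(λ r_j)) / (λ T); the kernel values, net reproduction R0, and generation time T below are placeholders, not the paper's Neolithic data:

```python
import math

def front_speed(p, r, R0, T, lam_max=5.0, steps=2000):
    # p: jump probabilities, r: jump distances (the dispersal kernel);
    # minimize ln(R0 * sum_j p_j cosh(lam * r_j)) / (lam * T) over a grid of lam
    best = float("inf")
    for k in range(1, steps + 1):
        lam = lam_max * k / steps
        m = sum(pj * math.cosh(lam * rj) for pj, rj in zip(p, r))
        best = min(best, math.log(R0 * m) / (lam * T))
    return best
```

The cohabitation correction discussed in the abstract modifies this expression and raises the predicted speed by up to 38%.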
Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications
Jones, Terry R
2011-01-01
This paper describes a kernel scheduling algorithm that is based on co-scheduling principles and that is intended for parallel applications running on 1000 cores or more where inter-node scalability is key. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.
Multilevel image recognition using discriminative patches and kernel covariance descriptor
NASA Astrophysics Data System (ADS)
Lu, Le; Yao, Jianhua; Turkbey, Evrim; Summers, Ronald M.
2014-03-01
Computer-aided diagnosis of medical images has emerged as an important tool to objectively improve the performance, accuracy and consistency for clinical workflow. To computerize the medical image diagnostic recognition problem, there are three fundamental problems: where to look (i.e., where is the region of interest from the whole image/volume), image feature description/encoding, and similarity metrics for classification or matching. In this paper, we exploit the motivation, implementation and performance evaluation of task-driven iterative, discriminative image patch mining; covariance matrix based descriptor via intensity, gradient and spatial layout; and log-Euclidean distance kernel for support vector machine, to address these three aspects respectively. To cope with often visually ambiguous image patterns for the region of interest in medical diagnosis, discovery of multilabel selective discriminative patches is desired. Covariance of several image statistics summarizes their second order interactions within an image patch and is proved as an effective image descriptor, with low dimensionality compared with joint statistics and fast computation regardless of the patch size. We extensively evaluate two extended Gaussian kernels using affine-invariant Riemannian metric or log-Euclidean metric with support vector machines (SVM), on two medical image classification problems of degenerative disc disease (DDD) detection on cortical shell unwrapped CT maps and colitis detection on CT key images. The proposed approach is validated with promising quantitative results on these challenging tasks. Our experimental findings and discussion also unveil some interesting insights on the covariance feature composition with or without spatial layout for classification and retrieval, and different kernel constructions for SVM. This will also shed some light on future work using covariance feature and kernel classification for medical image analysis.
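A sketch of a covariance descriptor and the log-Euclidean distance between two descriptors. The five-feature set below (intensity, gradient magnitudes, spatial coordinates) is an illustrative choice; the paper's exact feature composition may differ:

```python
import numpy as np

def covariance_descriptor(patch):
    # per-pixel features: [intensity, |dI/dx|, |dI/dy|, x, y];
    # the descriptor is the 5x5 covariance of these channels over the patch
    patch = np.asarray(patch, dtype=float)
    H, W = patch.shape
    gy, gx = np.gradient(patch)  # d/dy along axis 0, d/dx along axis 1
    ys, xs = np.mgrid[0:H, 0:W]
    F = np.stack([patch.ravel(), np.abs(gx).ravel(), np.abs(gy).ravel(),
                  xs.ravel().astype(float), ys.ravel().astype(float)])
    return np.cov(F)

def log_euclidean_distance(C1, C2, eps=1e-6):
    # d(C1, C2) = ||logm(C1) - logm(C2)||_F; the matrix log of a symmetric
    # positive-definite matrix is taken via eigendecomposition, with a small
    # ridge eps to guard against rank deficiency
    def logm(C):
        w, V = np.linalg.eigh(C + eps * np.eye(C.shape[0]))
        return (V * np.log(w)) @ V.T
    return np.linalg.norm(logm(C1) - logm(C2), "fro")
```

Note the descriptor dimension depends only on the number of feature channels, not the patch size, which is the efficiency property the abstract highlights.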
Cassane diterpenes from the seed kernels of Caesalpinia sappan.
Nguyen, Hai Xuan; Nguyen, Nhan Trung; Dang, Phu Hoang; Thi Ho, Phuoc; Nguyen, Mai Thanh Thi; Van Can, Mao; Dibwe, Dya Fita; Ueda, Jun-Ya; Awale, Suresh
2016-02-01
Eight structurally diverse cassane diterpenes named tomocins A-H were isolated from the seed kernels of Vietnamese Caesalpinia sappan Linn. Their structures were determined by extensive NMR and CD spectroscopic analysis. Among the isolated compounds, tomocin A and phanginins A, F, and H exhibited mild preferential cytotoxicity against PANC-1 human pancreatic cancer cells under nutrient-deprived conditions without causing toxicity under normal nutrient-rich conditions.
Instantaneous Bethe-Salpeter kernel for the lightest pseudoscalar mesons
NASA Astrophysics Data System (ADS)
Lucha, Wolfgang; Schöberl, Franz F.
2016-05-01
Starting from a phenomenologically successful, numerical solution of the Dyson-Schwinger equation that governs the quark propagator, we reconstruct in detail the interaction kernel that has to enter the instantaneous approximation to the Bethe-Salpeter equation to allow us to describe the lightest pseudoscalar mesons as quark-antiquark bound states exhibiting the (almost) masslessness necessary for them to be interpretable as the (pseudo) Goldstone bosons related to the spontaneous chiral symmetry breaking of quantum chromodynamics.
Benchmarking NWP Kernels on Multi- and Many-core Processors
NASA Astrophysics Data System (ADS)
Michalakes, J.; Vachharajani, M.
2008-12-01
Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
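A minimal, roofline-style sketch of the kind of performance characterization described above; the peak compute and bandwidth figures are invented for illustration and do not describe any of the processors benchmarked in the work:

```python
# A minimal roofline-style model for classifying a kernel as compute-bound or
# memory-bound; the peak numbers below are illustrative, not for any real chip.
PEAK_GFLOPS = 100.0       # peak floating-point rate (GFLOP/s)
PEAK_BW     = 25.0        # peak memory bandwidth (GB/s)

def attainable_gflops(flops, bytes_moved):
    """Attainable rate = min(peak compute, arithmetic intensity * peak BW)."""
    intensity = flops / bytes_moved            # FLOP per byte
    return min(PEAK_GFLOPS, intensity * PEAK_BW), intensity

# Example: a stencil-like kernel doing 8 FLOPs per 3 x 8-byte words moved.
rate, ai = attainable_gflops(8, 24)            # AI = 1/3 -> memory-bound here
```

Kernels with low arithmetic intensity land on the bandwidth-limited slope of the roofline, which is why memory bandwidth pressure is one of the characterization axes listed in the abstract.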
Mapping quantitative trait loci for kernel composition in almond
2012-01-01
Background Almond breeding is increasingly taking kernel quality into account as a breeding objective. Information on the parameters to be considered in evaluating almond quality, such as protein and oil content, as well as oleic acid and tocopherol concentration, has been recently compiled. The genetic control of these traits has not yet been studied in almond, although this information would improve the efficiency of almond breeding programs. Results A map with 56 simple sequence repeat or microsatellite (SSR) markers was constructed for an almond population showing a wide range of variability for the chemical components of the almond kernel. A total of 12 putative quantitative trait loci (QTL) controlling these chemical traits have been detected in this analysis, corresponding to seven genomic regions of the eight almond linkage groups (LG). Some QTL were clustered in the same region or shared the same molecular markers, in line with the correlations already found between the chemical traits. The logarithm of the odds (LOD) values for any given trait ranged from 2.12 to 4.87, explaining from 11.0 to 33.1% of the phenotypic variance of the trait. Conclusions The results produced in the study offer the opportunity to include the new genetic information in almond breeding programs. Increases in the positive traits of kernel quality may be sought simultaneously whenever they are genetically independent, even if they are negatively correlated. We have provided the first genetic framework for the chemical components of the almond kernel, with twelve QTL in agreement with the large number of genes controlling their metabolism. PMID:22720975
Equilibrium studies of copper ion adsorption onto palm kernel fibre.
Ofomaja, Augustine E
2010-07-01
The equilibrium sorption of copper ions from aqueous solution using a new adsorbent, palm kernel fibre, has been studied. Palm kernel fibre is obtained in large amounts as a waste product of palm oil production. Batch equilibrium studies were carried out and system variables such as solution pH, sorbent dose, and sorption temperature were varied. The equilibrium sorption data were then analyzed using the Langmuir, Freundlich, Dubinin-Radushkevich (D-R) and Temkin isotherms. The fit of these isotherm models to the equilibrium sorption data was determined using the linear coefficient of determination, r², and the non-linear chi-square (χ²) error analysis. The results revealed that sorption was pH dependent and increased with increasing solution pH above the pH(PZC) of the palm kernel fibre, with an optimum dose of 10 g/dm³. The equilibrium data were found to fit the Langmuir isotherm model best, with a monolayer capacity of 3.17 × 10⁻⁴ mol/g at 339 K. The sorption equilibrium constant, Ka, increased with increasing temperature, indicating that the bond strength between sorbate and sorbent increased with temperature and that sorption was endothermic. This was confirmed by the increase in the values of the Temkin isotherm constant, B1, with increasing temperature. The Dubinin-Radushkevich (D-R) isotherm parameter, the free energy E, was in the range of 15.7-16.7 kJ/mol, suggesting that the sorption mechanism was ion exchange. Desorption studies showed that a high percentage of the copper was desorbed from the adsorbent using acid solutions (HCl, HNO₃ and CH₃COOH) and that the desorption percentage increased with acid concentration. The thermodynamics of the copper ion/palm kernel fibre system indicate that the process is spontaneous and endothermic. PMID:20346574
Deproteinated palm kernel cake-derived oligosaccharides: A preliminary study
NASA Astrophysics Data System (ADS)
Fan, Suet Pin; Chia, Chin Hua; Fang, Zhen; Zakaria, Sarani; Chee, Kah Leong
2014-09-01
A preliminary study on the microwave-assisted hydrolysis of deproteinated palm kernel cake (DPKC) with succinic acid to produce oligosaccharides was performed. Three important factors, i.e., temperature, acid concentration and reaction time, were selected for the hydrolysis processes. Results showed that the highest yield of DPKC-derived oligosaccharides was obtained at 170 °C, 0.2 N succinic acid and 20 min of reaction time.
Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism
Jones, Terry R
2012-01-01
This paper describes a kernel scheduling algorithm that is based on coscheduling principles and that is intended for parallel applications running on 1000 cores or more. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.
Heat pipe array heat exchanger
Reimann, Robert C.
1987-08-25
A heat pipe arrangement for exchanging heat between two fluids at different temperatures. The heat pipe arrangement is in a counterflow relationship to increase the efficiency of the coupling of the heat from a heat source to a heat sink.
Rowold, Daine J; Perez-Benedico, David; Stojkovic, Oliver; Garcia-Bertrand, Ralph; Herrera, Rene J
2016-11-15
Here we report the results of fine resolution Y chromosomal analyses (Y-SNP and Y-STR) of 267 Bantu-speaking males from three populations located in the southeast region of Africa. In an effort to determine the relative Y chromosomal affinities of these three genotyped populations, the findings are interpreted in the context of 74 geographically and ethnically targeted African reference populations representing four major ethno-linguistic groups (Afro-Asiatic, Niger Kordofanin, Khoisan and Pygmoid). In this investigation, we detected a general similarity in the Y chromosome lineages among the geographically dispersed Bantu-speaking populations suggesting a shared heritage and the shallow time depth of the Bantu Expansion. Also, micro-variations in the Bantu Y chromosomal composition across the continent highlight location-specific gene flow patterns with non-Bantu-speaking populations (Khoisan, Pygmy, Afro-Asiatic). Our Y chromosomal results also indicate that the three Bantu-speaking Southeast populations genotyped exhibit unique gene flow patterns involving Eurasian populations but fail to reveal a prevailing genetic affinity to East or Central African Bantu-speaking groups. In addition, the Y-SNP data underscores a longitudinal partitioning in sub-Sahara Africa of two R1b1 subgroups, R1b1-P25* (west) and R1b1a2-M269 (east). No evidence was observed linking the B2a haplogroup detected in the genotyped Southeast African Bantu-speaking populations to gene flow from contemporary Khoisan groups. PMID:27451076
Knowledge Driven Image Mining with Mixture Density Mercer Kernels
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Oza, Nikunj
2004-01-01
This paper presents a new methodology for automatic knowledge-driven image mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. In that high-dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational costs. In this paper, we present the theory of Mercer Kernels, describe its use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Imaging Spectroradiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods on the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.
KNBD: A Remote Kernel Block Server for Linux
NASA Technical Reports Server (NTRS)
Becker, Jeff
1999-01-01
I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower-level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void and hence a demand for such a system on Beowulf-type PC clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system is all in user-space. Migrating their I/O services to the kernel could provide a performance boost by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.
Biodiesel from Siberian apricot (Prunus sibirica L.) seed kernel oil.
Wang, Libing; Yu, Haiyan
2012-05-01
In this paper, Siberian apricot (Prunus sibirica L.) seed kernel oil was investigated for the first time as a promising non-conventional feedstock for the preparation of biodiesel. Siberian apricot seed kernel has a high oil content (50.18 ± 3.92%), and the oil has a low acid value (0.46 mg g⁻¹) and low water content (0.17%). The fatty acid composition of the Siberian apricot seed kernel oil includes a high percentage of oleic acid (65.23 ± 4.97%) and linoleic acid (28.92 ± 4.62%). The measured fuel properties of the Siberian apricot biodiesel, except cetane number and oxidative stability, conformed to the EN 14214-08, ASTM D6751-10 and GB/T 20828-07 standards; the cold flow properties in particular were excellent (cold filter plugging point -14 °C). The addition of 500 ppm tert-butylhydroquinone (TBHQ) resulted in a higher induction period (7.7 h) compliant with all three biodiesel standards. PMID:22440572
Hyperspectral-imaging-based techniques applied to wheat kernels characterization
NASA Astrophysics Data System (ADS)
Serranti, Silvia; Cesare, Daniela; Bonifazi, Giuseppe
2012-05-01
Single kernels of durum wheat have been analyzed by hyperspectral imaging (HSI). This approach is based on an integrated hardware and software architecture able to digitally capture and handle spectra as an image sequence, as they result along a pre-defined alignment on a properly energized sample surface. The study investigated the possibility of applying HSI techniques for the classification of different types of wheat kernels: vitreous, yellow berry and fusarium-damaged. Reflectance spectra of selected wheat kernels of the three typologies were acquired by a laboratory device equipped with an HSI system working in the near-infrared field (1000-1700 nm). The hypercubes were analyzed applying principal component analysis (PCA) to reduce the high dimensionality of the data and to select some effective wavelengths. Partial least squares discriminant analysis (PLS-DA) was applied for classification of the three wheat typologies. The study demonstrated that good classification results were obtained not only when considering the entire investigated wavelength range, but also when selecting only four optimal wavelengths (1104, 1384, 1454 and 1650 nm) out of 121. The developed procedures based on HSI can be utilized for quality control purposes or for the definition of innovative sorting logics for wheat.
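The PCA-based wavelength-selection step can be illustrated on synthetic spectra. Only the band count (121) and wavelength range (1000-1700 nm) follow the study; the three-class reflectance model below is invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n_kernels, n_bands = 90, 121            # 121 bands over 1000-1700 nm, as in the study
wavelengths = np.linspace(1000, 1700, n_bands)

# Toy reflectance spectra for three hypothetical kernel classes (not real data):
# each class is a flat baseline plus a class-specific spectral bump and noise.
labels = np.repeat([0, 1, 2], n_kernels // 3)
centers = np.array([1100.0, 1400.0, 1600.0])
X = (0.4 + 0.2 * np.exp(-((wavelengths - centers[labels, None]) / 60.0) ** 2)
     + 0.01 * rng.standard_normal((n_kernels, n_bands)))

# PCA by SVD of the mean-centred spectra.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)         # fraction of variance per component

# Pick "effective wavelengths" as the bands with the largest absolute loadings
# on the first two components -- a common HSI band-selection heuristic.
scores = np.abs(Vt[0]) + np.abs(Vt[1])
effective = wavelengths[np.argsort(scores)[-4:]]
```

The classification stage (PLS-DA in the paper) would then be trained either on the full spectra or only on the selected bands.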
Reduced-size kernel models for nonlinear hybrid system identification.
Le, Van Luong; Bloch, Gérard; Lauer, Fabien
2011-12-01
This brief paper focuses on the identification of nonlinear hybrid dynamical systems, i.e., systems switching between multiple nonlinear dynamical behaviors. Thus the aim is to learn an ensemble of submodels from a single set of input-output data in a regression setting with no prior knowledge on the grouping of the data points into similar behaviors. To be able to approximate arbitrary nonlinearities, kernel submodels are considered. However, in order to maintain efficiency when applying the method to large data sets, a preprocessing step is required in order to fix the submodel sizes and limit the number of optimization variables. This brief paper proposes four approaches, respectively inspired by the fixed-size least-squares support vector machines, the feature vector selection method, the kernel principal component regression and a modification of the latter, in order to deal with this issue and build sparse kernel submodels. These are compared in numerical experiments, which show that the proposed approach achieves the simultaneous classification of data points and approximation of the nonlinear behaviors in an efficient and accurate manner.
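A minimal sketch of a reduced-size kernel model in the fixed-size spirit described above: a small set of support points is chosen up front, and a ridge problem is then solved over their kernel features. The data, kernel width, and regularization are illustrative choices, and the switching/classification step of the hybrid-system method is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

def rbf(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Toy single-mode regression data: y = sin(x) + noise.
X = rng.uniform(-3, 3, (300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(300)

# Fix the submodel size up front, as in fixed-size LS-SVM / feature vector
# selection, so the optimization has only m variables instead of n.
m = 20
support = X[rng.choice(len(X), m, replace=False)]
Phi = rbf(X, support)                           # n x m approximate feature map
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(m), Phi.T @ y)

y_hat = rbf(X, support) @ w
rmse = np.sqrt(np.mean((y_hat - y) ** 2))       # ideally near the 0.1 noise level
```

The point of the preprocessing step is visible in the shapes: the solve is m x m (20 x 20) rather than n x n (300 x 300).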
Fast metabolite identification with Input Output Kernel Regression
Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho
2016-01-01
Motivation: An important problem in metabolomics is to identify metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the molecule space. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a pre-image problem, consisting of mapping the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307628
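The two-phase structure described above can be sketched with a toy input kernel and binary fingerprint outputs; the data, kernels, and candidate scoring below are simplified stand-ins for the paper's spectrum and molecule kernels:

```python
import numpy as np

rng = np.random.default_rng(2)
n_train, d_in, d_out = 40, 10, 8

# Toy "spectra" x and "fingerprints" y linked by a hidden map (stand-ins for
# tandem mass spectra and molecular structures).
W = rng.standard_normal((d_in, d_out))
X = rng.standard_normal((n_train, d_in))
Y = (X @ W > 0).astype(float)                    # binary output feature vectors

def rbf(A, B, sigma=3.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Phase 1: kernel ridge regression from the input kernel to output features.
lam = 1e-2
Kx = rbf(X, X)
Alpha = np.linalg.solve(Kx + lam * np.eye(n_train), Y)   # n_train x d_out

def predict_fingerprint(x_new):
    return rbf(x_new[None, :], X) @ Alpha                 # 1 x d_out

# Phase 2 (pre-image): score candidate structures by their inner product with
# the predicted feature vector and return the best-matching candidate.
def identify(x_new, candidates):
    f = predict_fingerprint(x_new).ravel()
    return int(np.argmax(candidates @ f))

x_test = X[0] + 0.01 * rng.standard_normal(d_in)
candidates = np.vstack([Y[0], 1 - Y[0], rng.integers(0, 2, d_out)]).astype(float)
best = identify(x_test, candidates)              # index of highest-scoring candidate
```

In the real method the candidate set comes from a molecular structure database, and the output kernel encodes structural similarity between molecules rather than raw fingerprint dot products.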
Dimensionality reduction of hyperspectral images using kernel ICA
NASA Astrophysics Data System (ADS)
Khan, Asif; Kim, Intaek; Kong, Seong G.
2009-05-01
The computational burden due to the high dimensionality of hyperspectral images is an obstacle to their efficient analysis and processing. In this paper, we use Kernel Independent Component Analysis (KICA) for dimensionality reduction of hyperspectral images based on band selection. Commonly used ICA- and PCA-based dimensionality reduction methods do not consider nonlinear transformations and assume that the data have a non-Gaussian distribution. When the relation between the source signals (pure materials) and the observed hyperspectral images is nonlinear, these methods drop a lot of information during the dimensionality reduction process. Recent research shows that kernel-based methods are effective for nonlinear transformations. KICA is a robust technique for blind source separation and can even work on near-Gaussian data. We use KICA to select the minimum number of bands that contain the maximum information for detection in hyperspectral images. The reduction of bands is based on the evaluation of the weight matrix generated by KICA. From the selected smaller number of bands, we generate a new spectral image with reduced dimension and use it for hyperspectral image analysis. We use this technique as a preprocessing step in the detection and classification of poultry skin tumors. The hyperspectral image samples of chicken tumors used contain 65 spectral bands of fluorescence in the visible region of the spectrum. Experimental results show that KICA-based band selection achieves higher accuracy than FastICA-based band selection for dimensionality reduction and analysis of hyperspectral images.
Noise Level Estimation for Model Selection in Kernel PCA Denoising.
Varon, Carolina; Alzate, Carlos; Suykens, Johan A K
2015-11-01
One of the main challenges in unsupervised learning is to find suitable values for the model parameters. In kernel principal component analysis (kPCA), for example, these are the number of components, the kernel, and its parameters. This paper presents a model selection criterion based on distance distributions (MDD). This criterion can be used to find the number of components and the σ² parameter of radial basis function kernels by means of spectral comparison between information and noise. The noise content is estimated from the statistical moments of the distribution of distances in the original dataset. This allows for a type of randomization of the dataset, without actually having to permute the data points or generate artificial datasets. After comparing the eigenvalues computed from the estimated noise with the ones from the input dataset, information is retained and maximized by a set of model parameters. In addition to the model selection criterion, this paper proposes a modification to the fixed-size method and uses the incomplete Cholesky factorization, both of which are used to solve kPCA in large-scale applications. These two approaches, together with the MDD model selection criterion, were tested on toy examples and real-life applications, and it is shown that they outperform other known algorithms. PMID:25608316
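A simplified illustration of the underlying idea of comparing information against noise in the kernel spectrum: here an explicit structure-destroying surrogate (independent column permutations) stands in for the paper's analytic, distance-moment-based noise estimate, which avoids such permutations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: a 1-D latent signal embedded in 5-D plus isotropic noise.
n, d = 120, 5
t = rng.uniform(-1, 1, n)
X = np.outer(t, rng.standard_normal(d)) + 0.1 * rng.standard_normal((n, d))

def centered_rbf_eigvals(X, sigma2):
    """Eigenvalues of the feature-space-centred RBF kernel matrix, descending."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma2))
    m = len(X)
    H = np.eye(m) - np.ones((m, m)) / m          # centering matrix
    return np.sort(np.linalg.eigvalsh(H @ K @ H))[::-1]

# Median heuristic for the RBF bandwidth.
sigma2 = np.median(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
ev_data = centered_rbf_eigvals(X, sigma2)

# Noise reference: permute each coordinate independently to destroy structure.
Xn = np.column_stack([rng.permutation(X[:, j]) for j in range(d)])
ev_noise = centered_rbf_eigvals(Xn, sigma2)

# Retain the components whose eigenvalues rise above the noise spectrum.
n_components = int(np.sum(ev_data[:10] > ev_noise[:10]))
```

The MDD criterion reaches a comparable comparison directly from the moments of the distance distribution, so no surrogate dataset has to be built.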
Predicting activity approach based on new atoms similarity kernel function.
Abu El-Atta, Ahmed H; Moussa, M I; Hassanien, Aboul Ella
2015-07-01
Drug design is a high-cost, long-term process. To reduce the time and cost of drug discovery, new techniques are needed. The field of chemoinformatics applies informational techniques and computer science methods, such as machine learning and graph theory, to discover properties of chemical compounds, such as toxicity or biological activity, by analyzing their molecular structure (molecular graph). There is therefore an increasing need for algorithms to analyze and classify graph data in order to predict the activity of molecules. Kernel methods provide a powerful framework that combines machine learning with graph theory techniques, and they have led to impressive performance in several chemoinformatics problems such as biological activity prediction. This paper presents a new approach based on kernel functions to solve the activity prediction problem for chemical compounds. First, we encode each atom based on its neighbors; we then use these codes to relate the atoms to one another. Finally, we use the relations between atoms to measure the similarity between chemical compounds. The proposed approach was compared with many other classification methods, and the results show competitive accuracy with these methods.
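A minimal, hypothetical variant of the neighbor-encoding idea: each atom is coded by its own symbol plus the sorted symbols of its bonded neighbors, and compound similarity is the normalized overlap of these code multisets. This is a sketch of the general approach, not the paper's exact kernel:

```python
from collections import Counter

def atom_codes(atoms, bonds):
    """Encode each atom by its symbol plus the sorted symbols of its
    neighbours (a one-step, Morgan-style neighbourhood code)."""
    neigh = {i: [] for i in range(len(atoms))}
    for i, j in bonds:
        neigh[i].append(atoms[j])
        neigh[j].append(atoms[i])
    return Counter(atoms[i] + '(' + ''.join(sorted(neigh[i])) + ')'
                   for i in range(len(atoms)))

def compound_similarity(m1, m2):
    """Normalised multiset overlap of atom codes -- a simple set kernel."""
    c1, c2 = atom_codes(*m1), atom_codes(*m2)
    inter = sum((c1 & c2).values())
    return inter / max(1, (sum(c1.values()) * sum(c2.values())) ** 0.5)

# Ethanol vs dimethyl ether: same atoms, different connectivity (H omitted).
ethanol = (['C', 'C', 'O'], [(0, 1), (1, 2)])
dme     = (['C', 'O', 'C'], [(0, 1), (1, 2)])
s_self  = compound_similarity(ethanol, ethanol)   # 1.0
s_cross = compound_similarity(ethanol, dme)       # 0.0: no shared codes
```

Even this one-step code distinguishes isomers with identical atom counts, which a bag-of-atoms comparison cannot.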
Initial Kernel Timing Using a Simple PIM Performance Model
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Block, Gary L.; Springer, Paul L.; Sterling, Thomas; Brockman, Jay B.; Callahan, David
2005-01-01
This presentation will describe some initial results of paper-and-pencil studies of 4 or 5 application kernels applied to a processor-in-memory (PIM) system roughly similar to the Cascade Lightweight Processor (LWP). The application kernels are: * Linked list traversal * Sum of leaf nodes on a tree * Bitonic sort * Vector sum * Gaussian elimination The intent of this work is to guide and validate work on the Cascade project in the areas of compilers, simulators, and languages. We will first discuss the generic PIM structure. Then, we will explain the concepts needed to program a parallel PIM system (locality, threads, parcels). Next, we will present a simple PIM performance model that will be used in the remainder of the presentation. For each kernel, we will then present a set of codes, including codes for a single PIM node, and codes for multiple PIM nodes that move data to threads and move threads to data. These codes are written at a fairly low level, between assembly and C, but much closer to C than to assembly. For each code, we will present some hand-drafted timing forecasts, based on the simple PIM performance model. Finally, we will conclude by discussing what we have learned from this work, including what programming styles seem to work best, from the point-of-view of both expressiveness and performance.
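A toy version of the kind of simple PIM performance model described above, applied to the vector-sum kernel; all cycle counts are invented for illustration and are not Cascade LWP figures:

```python
# Illustrative cost parameters (invented for the sketch, not Cascade LWP data):
CYCLES_PER_OP = 1        # cost of one ALU operation
LOCAL_LOAD    = 2        # cycles to load a word from the node's local memory
PARCEL_COST   = 100      # cycles to move a thread/parcel to a remote node
THREAD_SPAWN  = 20       # cycles to start a thread on a node

def vector_sum_cycles(n, nodes):
    """Model: spawn one thread per node, each sums its n/nodes local elements,
    then partial sums travel back as parcels and are combined at the root."""
    per_node = n // nodes
    compute = per_node * (LOCAL_LOAD + CYCLES_PER_OP)    # stream + add
    startup = THREAD_SPAWN + (PARCEL_COST if nodes > 1 else 0)
    combine = (nodes - 1) * (PARCEL_COST + CYCLES_PER_OP)
    return startup + compute + combine

single = vector_sum_cycles(1_000_000, 1)    # all data on one node
multi  = vector_sum_cycles(1_000_000, 64)   # threads moved to the data
```

Models of this flavor let the parcel cost be traded off against local compute, which is exactly the move-data-to-threads versus move-threads-to-data question the presentation raises.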
Hyperspectral anomaly detection using sparse kernel-based ensemble learning
NASA Astrophysics Data System (ADS)
Gurram, Prudhvi; Han, Timothy; Kwon, Heesung
2011-06-01
In this paper, sparse kernel-based ensemble learning for hyperspectral anomaly detection is proposed. The proposed technique aims to optimize an ensemble of kernel-based one-class classifiers, such as Support Vector Data Description (SVDD) classifiers, by estimating optimal sparse weights. In this method, hyperspectral signatures are first randomly sub-sampled into a large number of spectral feature subspaces. An enclosing hypersphere that defines the support of the spectral data, corresponding to the normalcy/background data, in the Reproducing Kernel Hilbert Space (RKHS) of each respective feature subspace is then estimated using regular SVDD. The enclosing hypersphere basically represents the spectral characteristics of the background data in the respective feature subspace. The joint hypersphere is learned by optimally combining the hyperspheres from the individual RKHSs, while imposing the l1 constraint on the combining weights. The joint hypersphere, representing the most optimal compact support of the local hyperspectral data in the joint feature subspaces, is then used to test each pixel in the hyperspectral image data to determine whether it belongs to the local background data or not. The outliers are considered to be targets. A performance comparison between the proposed technique and regular SVDD is provided using the HYDICE hyperspectral images.
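The ensemble idea can be sketched with a deliberately simplified one-class score: the RKHS distance to the kernel mean of the background data stands in for a full SVDD hypersphere, and uniform ensemble weights stand in for the paper's l1-optimized sparse combination:

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def centroid_score(train, test, sigma=1.0):
    """Squared RKHS distance from each test point to the mean of the training
    (background) data -- a simplified stand-in for an SVDD hypersphere."""
    n = len(train)
    Ktt = rbf(test, train, sigma)
    mean_k = rbf(train, train, sigma).sum() / n**2
    return 1.0 - 2.0 * Ktt.mean(axis=1) + mean_k

# Toy background spectra (40 bands) and two shifted anomalous pixels.
background = rng.standard_normal((100, 40)) * 0.1
anomalies  = background[:2] + 1.5
test_px = np.vstack([background[:2], anomalies])   # 2 inliers, 2 anomalies

# Ensemble over random band subsets, combined with uniform weights.
n_members, scores = 8, 0.0
for _ in range(n_members):
    bands = rng.choice(40, size=10, replace=False)
    scores = scores + centroid_score(background[:, bands], test_px[:, bands])
scores /= n_members
# Anomalies should score farther from the background support than inliers.
```

Each random band subset plays the role of one spectral feature subspace in the abstract; the real method replaces the uniform average with weights learned under the l1 constraint.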
Kernel Averaged Predictors for Spatio-Temporal Regression Models.
Heaton, Matthew J; Gelfand, Alan E
2012-12-01
In applications where covariates and responses are observed across space and time, a common goal is to quantify the effect of a change in the covariates on the response while adequately accounting for the spatio-temporal structure of the observations. The most common approach for building such a model is to confine the relationship between a covariate and response variable to a single spatio-temporal location. However, oftentimes the relationship between the response and predictors may extend across space and time. In other words, the response may be affected by levels of predictors in spatio-temporal proximity to the response location. Here, a flexible modeling framework is proposed to capture such spatial and temporal lagged effects between a predictor and a response. Specifically, kernel functions are used to weight a spatio-temporal covariate surface in a regression model for the response. The kernels are assumed to be parametric and non-stationary with the data informing the parameter values of the kernel. The methodology is illustrated on simulated data as well as a physical data set of ozone concentrations to be explained by temperature. PMID:24010051
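A small numpy sketch of a kernel-averaged predictor: a separable Gaussian (space) by exponential (time-lag) kernel weights the covariate surface before it enters the regression. The grid, kernel forms, and bandwidths are illustrative; in the paper the kernel parameters are informed by the data rather than fixed:

```python
import numpy as np

rng = np.random.default_rng(5)

# Covariate (e.g. temperature) on a space-time grid; the response (e.g. ozone)
# at each site depends on nearby and recent covariate values.
n_sites, n_times = 30, 25
sites = rng.uniform(0, 10, (n_sites, 2))
temp = rng.standard_normal((n_sites, n_times)).cumsum(axis=1) * 0.3

def kernel_average(covariate, sites, times_back, rho_s=2.0, rho_t=3.0):
    """Kernel-weighted average of the covariate surface over space and lag."""
    d = ((sites[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
    Ws = np.exp(-d / (2 * rho_s ** 2))
    Ws /= Ws.sum(axis=1, keepdims=True)          # spatial weights, rows sum to 1
    Wt = np.exp(-np.arange(times_back) / rho_t)
    Wt /= Wt.sum()                               # temporal-lag weights
    out = np.zeros((len(sites), covariate.shape[1] - times_back))
    for k, t in enumerate(range(times_back, covariate.shape[1])):
        lagged = covariate[:, t - times_back:t][:, ::-1]   # most recent lag first
        out[:, k] = Ws @ (lagged @ Wt)
    return out

Xbar = kernel_average(temp, sites, times_back=5)
y = 2.0 * Xbar + 0.05 * rng.standard_normal(Xbar.shape)    # synthetic response
beta = np.sum(Xbar * y) / np.sum(Xbar * Xbar)              # least-squares slope
```

Because the response is generated from the kernel-averaged covariate, the recovered slope should sit near the true value of 2; with field data the kernel bandwidths themselves become parameters to estimate.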
Open-cluster density profiles derived using a kernel estimator
NASA Astrophysics Data System (ADS)
Seleznev, Anton F.
2016-03-01
Surface and spatial radial density profiles in open clusters are derived using a kernel estimator method. Formulae are obtained for the contribution of every star into the spatial density profile. The evaluation of spatial density profiles is tested against open-cluster models from N-body experiments with N = 500. Surface density profiles are derived for seven open clusters (NGC 1502, 1960, 2287, 2516, 2682, 6819 and 6939) using Two-Micron All-Sky Survey data and for different limiting magnitudes. The selection of an optimal kernel half-width is discussed. It is shown that open-cluster radius estimates hardly depend on the kernel half-width. Hints of stellar mass segregation and structural features indicating cluster non-stationarity in the regular force field are found. A comparison with other investigations shows that the data on open-cluster sizes are often underestimated. The existence of an extended corona around the open cluster NGC 6939 was confirmed. A combined function composed of the King density profile for the cluster core and the uniform sphere for the cluster corona is shown to be a better approximation of the surface radial density profile. The King function alone does not reproduce surface density profiles of sample clusters properly. The number of stars, the cluster masses and the tidal radii in the Galactic gravitational field for the sample clusters are estimated. It is shown that NGC 6819 and 6939 are extended beyond their tidal surfaces.
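The kernel-estimator idea for surface density profiles can be sketched as follows; the biweight kernel, the fixed half-width, and the toy core-plus-corona cluster are illustrative choices, not the paper's exact per-star formulae:

```python
import numpy as np

def surface_density_profile(xy, r_grid, h=0.5):
    """Kernel estimate of the surface (2-D) radial density profile: each star
    contributes a quartic (biweight) kernel of half-width h in radius, and the
    profile is evaluated at radii r_grid from the cluster centre."""
    centre = xy.mean(axis=0)
    r_star = np.linalg.norm(xy - centre, axis=1)
    dens = np.zeros_like(r_grid)
    for k, r in enumerate(r_grid):
        u = (r - r_star) / h
        w = np.where(np.abs(u) < 1, (15 / 16) * (1 - u**2) ** 2 / h, 0.0)
        dens[k] = w.sum() / (2 * np.pi * max(r, h / 2))   # stars per unit area
    return dens

rng = np.random.default_rng(6)
# Toy cluster: 500 stars, a Gaussian core plus a sparse extended corona.
core = 0.8 * rng.standard_normal((400, 2))
corona = 3.0 * rng.standard_normal((100, 2))
xy = np.vstack([core, corona])
r_grid = np.linspace(0.2, 6.0, 30)
profile = surface_density_profile(xy, r_grid)
# Density falls off with radius for a centrally concentrated cluster, with the
# corona flattening the outer profile -- the behaviour a King-plus-uniform-
# sphere fit is meant to capture.
```

The choice of half-width h is the smoothing parameter whose optimal selection the paper discusses; the radius estimate itself is reported there to be insensitive to it.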
Fast reactor power plant design having heat pipe heat exchanger
Huebotter, P.R.; McLennan, G.A.
1984-08-30
The invention relates to a pool-type fission reactor power plant design having a reactor vessel containing a primary coolant (such as liquid sodium), and a steam expansion device powered by a pressurized water/steam coolant system. Heat pipe means are disposed between the primary and water coolants to complete the heat transfer therebetween. The heat pipes are vertically oriented, penetrating the reactor deck and being directly submerged in the primary coolant. A U-tube or line passes through each heat pipe, extended over most of the length of the heat pipe and having its walls spaced from but closely proximate to and generally facing the surrounding walls of the heat pipe. The water/steam coolant loop includes each U-tube and the steam expansion device. A heat transfer medium (such as mercury) fills each of the heat pipes. The thermal energy from the primary coolant is transferred to the water coolant by isothermal evaporation-condensation of the heat transfer medium between the heat pipe and U-tube walls, the heat transfer medium moving within the heat pipe primarily transversely between these walls.
Fast reactor power plant design having heat pipe heat exchanger
Huebotter, Paul R.; McLennan, George A.
1985-01-01
The invention relates to a pool-type fission reactor power plant design having a reactor vessel containing a primary coolant (such as liquid sodium), and a steam expansion device powered by a pressurized water/steam coolant system. Heat pipe means are disposed between the primary and water coolants to complete the heat transfer therebetween. The heat pipes are vertically oriented, penetrating the reactor deck and being directly submerged in the primary coolant. A U-tube or line passes through each heat pipe, extended over most of the length of the heat pipe and having its walls spaced from but closely proximate to and generally facing the surrounding walls of the heat pipe. The water/steam coolant loop includes each U-tube and the steam expansion device. A heat transfer medium (such as mercury) fills each of the heat pipes. The thermal energy from the primary coolant is transferred to the water coolant by isothermal evaporation-condensation of the heat transfer medium between the heat pipe and U-tube walls, the heat transfer medium moving within the heat pipe primarily transversely between these walls.
NASA Astrophysics Data System (ADS)
Fasshauer, Detlef W.; Chatterjee, Niranjan D.; Cemic, Ladislav
Heat capacity, thermal expansion, and compressibility data have been obtained for a number of selected phases of the system NaAlSiO4-LiAlSiO4-Al2O3-SiO2-H2O. All Cp measurements have been executed by DSC in the temperature range 133-823 K. The data for T >= 223 K have been fitted to the function Cp(T) = a + cT^-2 + dT^-0.5 + fT^-3. The thermal expansion data (up to 525 °C) have been fitted to the function V(T) = V0(T0)[1 + v1(T - T0) + v2(T - T0)^2], with T0 = 298.15 K. The room-temperature compressibility data (up to 6 GPa) have been smoothed by the Murnaghan equation of state. These data, along with other phase property and reaction reversal data from the literature, have been simultaneously processed by the Bayes method to derive an internally consistent thermodynamic dataset (see Tables 6 and 7) for the NaAlSiO4-LiAlSiO4-Al2O3-SiO2-H2O quinary. Phase diagrams generated from this dataset are compatible with cookeite-, ephesite-, and paragonite-bearing assemblages observed in metabauxites and common metasediments. Phase diagrams obtained from the same database are also in agreement with the cookeite-free, petalite-, spodumene-, eucryptite-, and bikitaite-bearing assemblages known to develop in the subsolidus phase of recrystallization of lithium-bearing pegmatites. It is gratifying to note that the cookeite phase relations predicted earlier by Vidal and Goffé (1991) in the context of the system Li2O-Al2O3-SiO2-H2O agree with our results in a general way.
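Since the Cp(T) model above is linear in its fit parameters (a, c, d, f), the DSC data can be fitted with an ordinary linear least-squares solve. A minimal sketch (synthetic data in the test, not the authors' fitting code):

```python
import numpy as np

def fit_cp(T, Cp):
    """Least-squares fit of Cp(T) = a + c*T**-2 + d*T**-0.5 + f*T**-3.

    The model is linear in (a, c, d, f), so a single linear
    least-squares solve recovers all four fit parameters.
    """
    X = np.column_stack([np.ones_like(T), T**-2.0, T**-0.5, T**-3.0])
    coeffs, *_ = np.linalg.lstsq(X, Cp, rcond=None)
    return coeffs  # (a, c, d, f)
```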
Isotropic Negative Thermal Expansion Metamaterials.
Wu, Lingling; Li, Bo; Zhou, Ji
2016-07-13
Negative thermal expansion materials are important and desirable in science and engineering applications. However, natural materials with isotropic negative thermal expansion are rare and usually unsatisfied in performance. Here, we propose a novel method to achieve two- and three-dimensional negative thermal expansion metamaterials via antichiral structures. The two-dimensional metamaterial is constructed with unit cells that combine bimaterial strips and antichiral structures, while the three-dimensional metamaterial is fabricated by a multimaterial 3D printing process. Both experimental and simulation results display isotropic negative thermal expansion property of the samples. The effective coefficient of negative thermal expansion of the proposed models is demonstrated to be dependent on the difference between the thermal expansion coefficient of the component materials, as well as on the circular node radius and the ligament length in the antichiral structures. The measured value of the linear negative thermal expansion coefficient of the three-dimensional sample is among the largest achieved in experiments to date. Our findings provide an easy and practical approach to obtaining materials with tunable negative thermal expansion on any scale.
NASA Astrophysics Data System (ADS)
Zhu, Fengle; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert; Bhatnagar, Deepak; Cleveland, Thomas
2015-05-01
Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive. This study employed fluorescence and reflectance visible near-infrared (VNIR) hyperspectral images to classify aflatoxin contaminated corn kernels rapidly and non-destructively. Corn ears were artificially inoculated in the field with toxigenic A. flavus spores at the early dough stage of kernel development. After harvest, a total of 300 kernels were collected from the inoculated ears. Fluorescence hyperspectral imagery with UV excitation and reflectance hyperspectral imagery with halogen illumination were acquired on both endosperm and germ sides of kernels. All kernels were then subjected to chemical analysis individually to determine aflatoxin concentrations. A region of interest (ROI) was created for each kernel to extract averaged spectra. Compared with healthy kernels, fluorescence spectral peaks for contaminated kernels shifted to longer wavelengths with lower intensity, and reflectance values for contaminated kernels were lower with a different spectral shape in the 700-800 nm region. Principal component analysis was applied for data compression before classifying kernels as contaminated or healthy, based on a 20 ppb threshold, using the K-nearest neighbors algorithm. The best overall accuracy achieved was 92.67% for the germ side in the fluorescence data analysis. The germ side generally performed better than the endosperm side. Fluorescence and reflectance image data achieved similar accuracy.
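The classification pipeline described — PCA for compression followed by K-nearest neighbors against a 20 ppb threshold — can be sketched with scikit-learn. The spectra and concentrations below are synthetic stand-ins for the hyperspectral measurements, not the study's data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in data: rows are per-kernel averaged spectra.
rng = np.random.default_rng(0)
healthy = rng.normal(1.0, 0.05, size=(60, 100))        # higher reflectance
contaminated = rng.normal(0.8, 0.05, size=(60, 100))   # lower reflectance
spectra = np.vstack([healthy, contaminated])

# Label each kernel by the 20 ppb aflatoxin decision threshold.
aflatoxin_ppb = np.r_[rng.uniform(0, 5, 60), rng.uniform(50, 300, 60)]
labels = (aflatoxin_ppb >= 20).astype(int)

# PCA compresses the spectra before K-nearest-neighbors classification.
clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
clf.fit(spectra, labels)
accuracy = clf.score(spectra, labels)
```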
General-form 3-3-3 interpolation kernel and its simplified frequency-response derivation
NASA Astrophysics Data System (ADS)
Deng, Tian-Bo
2016-11-01
An interpolation kernel is required in a wide variety of signal processing applications such as image interpolation and timing adjustment in digital communications. This article presents a general-form interpolation kernel, called the 3-3-3 interpolation kernel, and derives its frequency response in closed form using a simple derivation method. The kernel is formed from third-degree piecewise polynomials and is an even-symmetric function, so it suffices to consider only its right-hand side when deriving the frequency response. Since the right-hand side contains three piecewise polynomials of the third degree, i.e. the degrees of the three piecewise polynomials are (3,3,3), we call it the 3-3-3 interpolation kernel. Once the general-form frequency-response formula is derived, the design of various 3-3-3 interpolation kernels subject to a set of design constraints, targeted at different interpolation applications, can be formulated systematically; the closed-form frequency-response expression is thus a prerequisite for such optimal designs. We use an example to show the optimal design of a 3-3-3 interpolation kernel based on the closed-form frequency-response expression.
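The 3-3-3 kernel itself is not reproduced here, but the same even-symmetric piecewise-cubic construction is familiar from Keys' cubic convolution kernel, which has two cubic pieces per side instead of three. A compact illustration (a = -0.5 is the common parameter choice, an assumption here):

```python
import numpy as np

def keys_cubic(x, a=-0.5):
    """Keys' cubic convolution kernel: an even-symmetric, piecewise-cubic
    interpolation kernel with two pieces per side (the 3-3-3 kernel of the
    article uses three)."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    m1 = x < 1
    m2 = (x >= 1) & (x < 2)
    out[m1] = (a + 2) * x[m1]**3 - (a + 3) * x[m1]**2 + 1
    out[m2] = a * x[m2]**3 - 5*a * x[m2]**2 + 8*a * x[m2] - 4*a
    return out

def interpolate(samples, t, a=-0.5):
    """Resample uniformly spaced samples at fractional position t
    (t must be far enough from the ends for full kernel support)."""
    n = np.arange(len(samples))
    return float(np.sum(samples * keys_cubic(t - n, a)))
```

The interpolation property (value 1 at the origin, 0 at the other integers) and the even symmetry are exactly the constraints that the article's closed-form frequency response lets one impose on the more general 3-3-3 family.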
Greiner, Leonard
1980-01-01
A chemical heat pump system is disclosed for use in heating and cooling structures such as residences or commercial buildings. The system is particularly adapted to utilizing solar energy, but also increases the efficiency of other forms of thermal energy when solar energy is not available. When solar energy is not available for relatively short periods of time, the heat storage capacity of the chemical heat pump is utilized to heat the structure as during nighttime hours. The design also permits home heating from solar energy when the sun is shining. The entire system may be conveniently rooftop located. In order to facilitate installation on existing structures, the absorber and vaporizer portions of the system may each be designed as flat, thin wall, thin pan vessels which materially increase the surface area available for heat transfer. In addition, this thin, flat configuration of the absorber and its thin walled (and therefore relatively flexible) construction permits substantial expansion and contraction of the absorber material during vaporization and absorption without generating voids which would interfere with heat transfer. The heat pump part of the system heats or cools a house or other structure through a combination of evaporation and absorption or, conversely, condensation and desorption, in a pair of containers. A set of automatic controls switches the system between winter and summer operation and between daytime and nighttime operation to heat and cool a house satisfactorily during the entire year. The absorber chamber is subjected to solar heating during regeneration cycles and is covered by one or more layers of glass or other transparent material. Daytime home air used for heating the home is passed at appropriate flow rates between the absorber container and the first transparent cover layer in heat transfer relationship in a manner that greatly reduces eddies and the resultant heat loss from the absorbent surface to the ambient atmosphere.
Wardlaw, Ian F
2002-10-01
Wheat plants (Triticum aestivum L. 'Lyallpur'), limited to a single culm, were grown at day/night temperatures of either 18/13 degrees C (moderate temperature), or 27/22 degrees C (chronic high temperature) from the time of anthesis. Plants were either non-droughted or subjected to two post-anthesis water stresses by withholding water from plants grown in different volumes of potting mix. In selected plants the demand for assimilates by the ear was reduced by removal of all but the five central spikelets. In non-droughted plants, it was confirmed that shading following anthesis (source limitation) reduced kernel dry weight at maturity, with a compensating increase in the dry weight of the remaining kernels when the total number of kernels was reduced (small sink). Reducing kernel number did not alter the effect of high temperature following anthesis on the dry weight of the remaining kernels at maturity, but reducing the number of kernels did result in a greater dry weight of the remaining kernels of droughted plants. However, the relationship between the response to drought and kernel number was confounded by a reduction in the extent of water stress associated with kernel removal. Data on the effect of water stress on kernel dry weight at maturity of plants with either the full complement or reduced numbers of kernels, and subjected to low and high temperatures following anthesis, indicate that the effect of drought on kernel dry weight may be reduced, in both absolute and relative terms, rather than enhanced, at high temperature. It is suggested that where high temperature and drought occur concurrently after anthesis there may be a degree of drought escape associated with chronic high temperature due to the reduction in the duration of kernel filling, even though the rate of water use may be enhanced by high temperature. PMID:12324270
Kinetic models in n-dimensional Euclidean spaces: From the Maxwellian to the Poisson kernel.
Zadehgol, Abed
2015-06-01
In this work, minimal kinetic theories based on unconventional entropy functions, H ∼ ln f (Burg entropy) for 2D and H ∼ f^(1-2/n) (Tsallis entropy) for nD with n ≥ 3, are studied. These entropy functions were originally derived by Boghosian et al. [Phys. Rev. E 68, 025103 (2003)] as a basis for discrete-velocity and lattice Boltzmann models for incompressible fluid dynamics. The present paper extends the entropic models of Boghosian et al. and shows that the explicit form of the equilibrium distribution function (EDF) of their models, in the continuous-velocity limit, can be identified with the Poisson kernel of the Poisson integral formula. The conservation and Navier-Stokes equations are recovered at low Mach numbers, and it is shown that rest particles can be used to rectify the speed of sound of the extended models. Fourier series expansion of the EDF is used to evaluate the discretization errors of the model. It is shown that the expansion coefficients of the Fourier series coincide with the velocity moments of the model. Employing two-, three-, and four-dimensional (2D, 3D, and 4D) complex systems, the real velocity space is mapped into the hypercomplex spaces and it is shown that the velocity moments can be evaluated, using the Poisson integral formula, in the hypercomplex space. For the practical applications, a 3D projection of the 4D model is presented, and the existence of an H theorem for the discrete model is investigated. The theoretical results have been verified by simulating the following benchmark problems: (1) the Kelvin-Helmholtz instability of thin shear layers in a doubly periodic domain and (2) the 3D flow of incompressible fluid in a lid-driven cubic cavity. The present results are in agreement with the previous works, while they show better stability of the proposed kinetic model, as compared with the BGK type (with single relaxation time) lattice Boltzmann models. PMID:26172826
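The stated coincidence of Fourier expansion coefficients and velocity moments can be checked numerically for the 2D Poisson kernel, whose n-th cosine coefficient is analytically r^n. A small sketch, not tied to the paper's lattice implementation:

```python
import numpy as np

def poisson_fourier_coeff(r, n, num=4096):
    """n-th Fourier (cosine) coefficient of the 2D Poisson kernel
    P_r(theta) = (1 - r^2) / (1 - 2 r cos(theta) + r^2),
    computed as (1/2pi) * integral of P * cos(n*theta);
    analytically this equals r**n for 0 <= r < 1."""
    theta = np.linspace(0.0, 2.0 * np.pi, num, endpoint=False)
    P = (1 - r**2) / (1 - 2 * r * np.cos(theta) + r**2)
    # uniform mean over one period = (1/2pi) * integral (periodic trapezoid)
    return np.mean(P * np.cos(n * theta))
```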
Robinson, Brian S; Song, Dong; Berger, Theodore W
2014-01-01
This paper presents a methodology to estimate a learning rule that governs activity-dependent plasticity from behaviorally recorded spiking events. To demonstrate this framework, we simulate a probabilistic spiking neuron with spike-timing-dependent plasticity (STDP) and estimate all model parameters from the simulated spiking data. In the neuron model, output spiking activity is generated by the combination of noise, feedback from the output, and an input-feedforward component whose magnitude is modulated by synaptic weight. The synaptic weight is calculated with STDP with the following features: (1) weight change based on the relative timing of input-output spike pairs, (2) prolonged plasticity induction, and (3) considerations for system stability. Estimation of all model parameters is achieved iteratively by formulating the model as a generalized linear model with Volterra kernels and basis function expansion. Successful estimation of all model parameters in this study demonstrates the feasibility of this approach for in-vivo experimental studies. Furthermore, the consideration of system stability and prolonged plasticity induction enhances the ability to capture how STDP affects a neural population's signal transformation properties over a realistic time course. Plasticity characterization with this estimation method could yield insights into functional implications of STDP and be incorporated into a cortical prosthesis.
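The paper estimates the plasticity rule from spiking data rather than assuming its form; as a point of reference, the canonical exponential STDP window that such estimates are often compared against can be written down directly (the parameter values below are illustrative, not from the paper):

```python
import numpy as np

def stdp_weight_change(dt, A_plus=0.1, A_minus=0.12, tau=20.0):
    """Canonical exponential STDP window over the input-output spike-time
    difference dt (ms): potentiation when the input spike precedes the
    output spike (dt > 0), depression otherwise."""
    return np.where(dt > 0,
                    A_plus * np.exp(-dt / tau),
                    -A_minus * np.exp(dt / tau))
```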
Beabout, R.W.
1986-09-02
Most of the power consumed in the gaseous diffusion process is converted into heat of compression, which is removed from the process gas and rejected into the atmosphere by recirculating cooling water over cooling towers. The water being handled through the X-333 and X-330 Process Buildings can be heated to 140 to 150 °F for heating use. The Gas Centrifuge Enrichment Plant is provided with a recirculating heating water (RHW) system which uses X-330 water and wasted heat. The RHW flow is diagrammed. (DLC)
Swenson, Paul F.; Moore, Paul B.
1983-01-01
An air heating and cooling system for a building includes an expansion type refrigeration circuit and a vapor power circuit. The refrigeration circuit includes two heat exchangers, one of which is communicated with a source of indoor air from the building and the other of which is communicated with a source of air from outside the building. The vapor power circuit includes two heat exchangers, one of which is disposed in series air flow relationship with the indoor refrigeration circuit heat exchanger and the other of which is disposed in series air flow relationship with the outdoor refrigeration circuit heat exchanger. Fans powered by electricity generated by a vapor power circuit alternator circulate indoor air through the two indoor heat exchangers and circulate outside air through the two outdoor heat exchangers. The system is assembled as a single roof top unit, with a vapor power generator and turbine and compressor thermally insulated from the heat exchangers, and with the indoor heat exchangers thermally insulated from the outdoor heat exchangers.
Swenson, Paul F.; Moore, Paul B.
1983-06-21
An air heating and cooling system for a building includes an expansion type refrigeration circuit and a vapor power circuit. The refrigeration circuit includes two heat exchangers, one of which is communicated with a source of indoor air from the building and the other of which is communicated with a source of air from outside the building. The vapor power circuit includes two heat exchangers, one of which is disposed in series air flow relationship with the indoor refrigeration circuit heat exchanger and the other of which is disposed in series air flow relationship with the outdoor refrigeration circuit heat exchanger. Fans powered by electricity generated by a vapor power circuit alternator circulate indoor air through the two indoor heat exchangers and circulate outside air through the two outdoor heat exchangers. The system is assembled as a single roof top unit, with a vapor power generator and turbine and compressor thermally insulated from the heat exchangers, and with the indoor heat exchangers thermally insulated from the outdoor heat exchangers.
Swenson, Paul F.; Moore, Paul B.
1977-01-01
An air heating and cooling system for a building includes an expansion type refrigeration circuit and a vapor power circuit. The refrigeration circuit includes two heat exchangers, one of which is communicated with a source of indoor air from the building and the other of which is communicated with a source of air from outside the building. The vapor power circuit includes two heat exchangers, one of which is disposed in series air flow relationship with the indoor refrigeration circuit heat exchanger and the other of which is disposed in series air flow relationship with the outdoor refrigeration circuit heat exchanger. Fans powered by electricity generated by a vapor power circuit alternator circulate indoor air through the two indoor heat exchangers and circulate outside air through the two outdoor heat exchangers. The system is assembled as a single roof top unit, with a vapor power generator and turbine and compressor thermally insulated from the heat exchangers, and with the indoor heat exchangers thermally insulated from the outdoor heat exchangers.
High frequency-heated air turbojet
NASA Technical Reports Server (NTRS)
Miron, J. H. D.
1986-01-01
A description is given of a method for heating the air from a turbojet compressor to the temperature necessary to produce the required expansion, without burning fuel. This is done by high-frequency heating: high-frequency coils are mounted in the walls corresponding to the combustion chamber of existing jets. The current transformer and high-frequency generator to be used are discussed.
Isothermal expansion of a spherical layer with a given areal density into vacuum
NASA Astrophysics Data System (ADS)
Gus'kov, S. Yu.
2016-04-01
An analytical solution has been obtained for the spherical isothermal expansion into vacuum of the outer layer of a ball whose mass increases, at a constant areal density of the heated layer equal to the product of the initial depth of heating and the initial density of the layer, for the entire expansion time. This solution differs from the known solution for the isothermal spherical expansion of a given mass of material in a slower decrease in the density and, as a result, in the pressure of the expanding material with time. In particular, it describes the expansion of the boundary layer of a ball heated by a flow of fast electrons, in application to the problem of igniting an inertial confinement fusion target by a shock wave induced by the heating of the target by laser-accelerated fast electrons (shock ignition).
Using a Michelson Interferometer to Measure Coefficient of Thermal Expansion of Copper
ERIC Educational Resources Information Center
Scholl, Ryan; Liby, Bruce W.
2009-01-01
When most materials are heated they expand. This concept is usually demonstrated using some type of mechanical measurement of the linear expansion of a metal rod. We have developed an alternative laboratory method for measuring thermal expansion by using a Michelson interferometer. Using the method presented, interference, interferometry, and the…
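The measurement reduces to fringe counting: each fringe corresponds to a path-length change of half a wavelength, so the linear expansion coefficient follows from the fringe count, the sample length, and the temperature rise. A sketch with illustrative numbers (not the article's data):

```python
def expansion_coefficient(fringes, wavelength_m, length_m, delta_T):
    """Linear thermal expansion coefficient from a Michelson fringe count.

    Each fringe corresponds to a path-length change of lambda/2, so
    delta_L = N * lambda / 2 and alpha = delta_L / (L0 * delta_T).
    """
    delta_L = fringes * wavelength_m / 2.0
    return delta_L / (length_m * delta_T)
```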
Genus expansion of HOMFLY polynomials
NASA Astrophysics Data System (ADS)
Mironov, A. D.; Morozov, A. Yu.; Sleptsov, A. V.
2013-11-01
In the planar limit of the 't Hooft expansion, the Wilson-loop vacuum average in the three-dimensional Chern-Simons theory (in other words, the HOMFLY polynomial) depends very simply on the representation (Young diagram): H_R(A|q)|_{q=1} = σ_1(A)^{|R|}. As a result, the (knot-dependent) Ooguri-Vafa partition function becomes a trivial τ-function of the Kadomtsev-Petviashvili hierarchy. We study higher-genus corrections to this formula for H_R in the form of an expansion in powers of z = q - q^{-1}. The expansion coefficients are expressed in terms of the eigenvalues of cut-and-join operators, i.e., symmetric group characters. Moreover, the z-expansion is naturally written in a product form. The representation in terms of cut-and-join operators relates to the Hurwitz theory and its sophisticated integrability. The obtained relations describe the form of the genus expansion for the HOMFLY polynomials, which for the corresponding matrix model is usually given using Virasoro-like constraints and the topological recursion. The genus expansion differs from the better-studied weak-coupling expansion at a finite number N of colors, which is described in terms of Vassiliev invariants and the Kontsevich integral.
Atom cooling by nonadiabatic expansion
Chen Xi; Muga, J. G.; Campo, A. del; Ruschhaupt, A.
2009-12-15
Motivated by the recent discovery that a reflecting wall moving with a square-root-in-time trajectory behaves as a universal stopper of classical particles regardless of their initial velocities, we compare linear-in-time and square-root-in-time expansions of a box to achieve efficient atom cooling. For the quantum single-atom wave functions studied the square-root-in-time expansion presents important advantages: asymptotically it leads to zero average energy whereas any linear-in-time (constant box-wall velocity) expansion leaves a nonzero residual energy, except in the limit of an infinitely slow expansion. For finite final times and box lengths we set a number of bounds and cooling principles which again confirm the superior performance of the square-root-in-time expansion, even more clearly for increasing excitation of the initial state. Breakdown of adiabaticity is generally fatal for cooling with the linear expansion but not so with the square-root-in-time expansion.
Design of a multiple kernel learning algorithm for LS-SVM by convex programming.
Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou
2011-06-01
As a kernel based method, the performance of least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed.
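For a given kernel matrix, the LS-SVM dual problem is a single linear system; the paper's contribution is learning the kernel-combination weights and the regularization parameter jointly via SDP, which is not reproduced here. A sketch with a hand-fixed convex combination of two RBF kernels on toy data:

```python
import numpy as np

def lssvm_fit(K, y, gamma=1.0):
    """Solve the LS-SVM dual system
        [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    for a given kernel matrix K (here the multiple-kernel weights are
    fixed by hand; in the paper they are learned jointly via SDP)."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, y])
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def rbf(X, Z, s):
    d2 = ((X[:, None, :] - Z[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

# fixed convex combination of two RBF kernels (weights chosen by hand)
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
K = 0.5 * rbf(X, X, 0.5) + 0.5 * rbf(X, X, 2.0)
b, alpha = lssvm_fit(K, y, gamma=10.0)
pred = np.sign(K @ alpha + b)
```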
Kernel-aligned multi-view canonical correlation analysis for image recognition
NASA Astrophysics Data System (ADS)
Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao
2016-09-01
Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve the problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have good discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on the datasets have manifested the effectiveness of our proposed method.
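A common way to compare kernels — related in spirit to the kernel alignment used here, though the paper's feature-matrix construction is not reproduced — is the normalized Frobenius inner product of two kernel matrices:

```python
import numpy as np

def kernel_alignment(K1, K2):
    """Kernel alignment: the normalized Frobenius inner product
    <K1, K2>_F / (||K1||_F * ||K2||_F), ranging up to 1 when the two
    kernel matrices are proportional."""
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))
```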
NASA Astrophysics Data System (ADS)
Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin
2015-10-01
The performance of kernel-based techniques depends on the selection of kernel parameters. Therefore, suitable parameter selection is an important problem for many kernel-based techniques. This article presents a novel technique for learning the kernel parameters in a kernel Fukunaga-Koontz Transform based (KFKT) classifier. The proposed approach determines the appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we have utilized the differential evolution algorithm (DEA). The new technique overcomes disadvantages of the traditional cross-validation method, such as its high computational cost, and can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
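The optimization loop can be sketched with SciPy's differential evolution driving a kernel parameter. The objective below is a simple class-separation surrogate, an assumption standing in for the paper's KFKT-based discrimination criterion, and the two-class data are synthetic:

```python
import numpy as np
from scipy.optimize import differential_evolution

# toy two-class data (hypothetical stand-in for the training samples)
rng = np.random.default_rng(1)
A = rng.normal(0.0, 1.0, size=(30, 2))
B = rng.normal(3.0, 1.0, size=(30, 2))

def rbf(X, Z, sigma):
    d2 = ((X[:, None, :] - Z[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def objective(params):
    """Negative discrimination score: mean within-class kernel similarity
    minus mean between-class similarity (a surrogate for the KFKT
    criterion, which is not reproduced here)."""
    sigma = params[0]
    within = 0.5 * (rbf(A, A, sigma).mean() + rbf(B, B, sigma).mean())
    between = rbf(A, B, sigma).mean()
    return -(within - between)

# differential evolution searches the kernel width over the given bounds
result = differential_evolution(objective, bounds=[(0.05, 10.0)], seed=0)
best_sigma = result.x[0]
```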