Covariant Perturbation Expansion of Off-Diagonal Heat Kernel
NASA Astrophysics Data System (ADS)
Gou, Yu-Zi; Li, Wen-Du; Zhang, Ping; Dai, Wu-Sheng
2016-07-01
Covariant perturbation expansion is an important method in quantum field theory. In this paper, an expansion up to arbitrary order for off-diagonal heat kernels in flat space, based on the covariant perturbation expansion, is given. In the literature, only diagonal heat kernels have so far been calculated with this method.
Heat kernel expansion in the background field formalism
NASA Astrophysics Data System (ADS)
Barvinsky, Andrei O.
2015-06-01
Heat kernel expansion and background field formalism represent the combination of two calculational methods within the functional approach to quantum field theory. This approach implies the construction of generating functionals for matrix elements and expectation values of physical observables. These are functionals of arbitrary external sources or the mean field of a generic configuration -- the background field. Exact calculation of quantum effects on a generic background is impossible. However, a special integral (proper time) representation for the Green's function of the wave operator -- the propagator of the theory -- and its expansion in the ultraviolet and infrared limits of short and late proper time, respectively, allow one to construct approximations which are valid on generic background fields. Current progress in quantum field theory, its renormalization properties, model building in the unification of fundamental physical interactions, and QFT applications in high energy physics, gravitation, and cosmology critically rely on the efficiency of the heat kernel expansion and the background field formalism.
Heat kernel asymptotic expansions for the Heisenberg sub-Laplacian and the Grushin operator
Chang, Der-Chen; Li, Yutian
2015-01-01
The sub-Laplacian on the Heisenberg group and the Grushin operator are typical examples of sub-elliptic operators. Their heat kernels are both given in the form of Laplace-type integrals. By using Laplace's method, the method of stationary phase and the method of steepest descent, we derive the small-time asymptotic expansions for these heat kernels, which are related to the geodesic structure of the induced geometries. PMID:25792966
Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.
2014-01-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
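The weighted eigenfunction expansion described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: a path-graph Laplacian stands in for the Laplace-Beltrami operator, and a noisy signal is smoothed with heat kernel weights e^{-t lambda_i}:

```python
import numpy as np

def heat_kernel_smooth(L, f, t, k=None):
    """Smooth signal f with the heat kernel exp(-t*L), written as a
    weighted eigenfunction expansion with weights exp(-t*lam_i)."""
    lam, psi = np.linalg.eigh(L)          # eigenpairs of the Laplacian
    if k is not None:                     # optional truncation of the expansion
        lam, psi = lam[:k], psi[:, :k]
    coeff = psi.T @ f                     # project f onto the eigenfunctions
    return psi @ (np.exp(-t * lam) * coeff)

# Path-graph Laplacian as a toy stand-in for the Laplace-Beltrami operator.
n = 64
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0                 # Neumann-like end conditions
rng = np.random.default_rng(0)
f = np.sin(np.linspace(0, 3 * np.pi, n)) + 0.3 * rng.standard_normal(n)
g = heat_kernel_smooth(L, f, t=2.0)
print(float(f.var()), float(g.var()))     # variance drops, mean is preserved
```

Because the weights decay exponentially in lambda_i, truncating the expansion (the `k` argument) barely changes the result for moderate t, which is the practical appeal of the eigenfunction form.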
Heat kernel for flat generalized Laplacians with anisotropic scaling
NASA Astrophysics Data System (ADS)
Mamiya, A.; Pinzul, A.
2014-06-01
We calculate the closed analytic form of the solution of the heat kernel equation for anisotropic generalizations of the flat Laplacian. We consider UV as well as UV/IR-interpolating generalizations. In all cases, the result can be expressed in terms of Fox-Wright psi functions. We perform different consistency checks, analytically reproducing some previous numerical or qualitative results, such as the spectral dimension flow. Our study should be considered a first step towards the construction of a heat kernel for curved Hořava-Lifshitz geometries, which is an essential ingredient in the spectral action approach to the construction of Hořava-Lifshitz gravity.
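The spectral dimension flow mentioned as a consistency check can be probed directly from a heat trace. The sketch below is an illustration under our own conventions, not the Fox-Wright result of the paper: it recovers d_s = -2 d ln P(t)/d ln t for the ordinary flat Laplacian, whose return probability is P(t) = (4 pi t)^(-d/2):

```python
import numpy as np

def spectral_dimension(heat_trace, t, dt=1e-4):
    """d_s(t) = -2 d ln P(t) / d ln t, estimated by a central difference."""
    lp = np.log(heat_trace(t * (1 + dt))) - np.log(heat_trace(t * (1 - dt)))
    lt = np.log(1 + dt) - np.log(1 - dt)
    return -2 * lp / lt

# Return probability of the ordinary flat Laplacian in d dimensions:
# P(t) = (4*pi*t)**(-d/2), so d_s should come out equal to d at every scale.
for d in (1, 2, 3):
    P = lambda t, d=d: (4 * np.pi * t) ** (-d / 2)
    print(d, spectral_dimension(P, t=1.0))
```

For an anisotropic (Lifshitz-type) kernel, the same estimator applied at short and long times would exhibit the flow between UV and IR values of d_s.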
Frostless heat pump having thermal expansion valves
Chen, Fang C [Knoxville, TN]; Mei, Viung C [Oak Ridge, TN]
2002-10-22
A heat pump system having an operable relationship for transferring heat between an exterior atmosphere and an interior atmosphere via a fluid refrigerant and further having a compressor, an interior heat exchanger, an exterior heat exchanger, a heat pump reversing valve, an accumulator, a thermal expansion valve having a remote sensing bulb disposed in heat transferable contact with the refrigerant piping section between said accumulator and said reversing valve, an outdoor temperature sensor, and a first means for heating said remote sensing bulb in response to said outdoor temperature sensor thereby opening said thermal expansion valve to raise suction pressure in order to mitigate defrosting of said exterior heat exchanger wherein said heat pump continues to operate in a heating mode.
Multi-scale Heat Kernel based Volumetric Morphology Signature
Wang, Gang; Wang, Yalin
2015-01-01
Here we introduce a novel multi-scale heat kernel based regional shape statistical approach that may improve statistical power in structural analysis. The mechanism of this analysis is driven by the graph spectrum and heat kernel theory, to capture the volumetric geometry information in the constructed tetrahedral mesh. In order to capture profound volumetric changes, we first use the volumetric Laplace-Beltrami operator to determine the point-pair correspondence between two boundary surfaces by computing streamlines in the tetrahedral mesh. Secondly, we propose a multi-scale volumetric morphology signature to describe the transition probability by random walk between the point pairs, which reflects the inherent geometric characteristics. Thirdly, a point distribution model is applied to reduce the dimensionality of the volumetric morphology signatures and generate the internal structure features. The multi-scale and physics-based internal structure features may bring stronger statistical power than traditional methods for volumetric morphology analysis. To validate our method, we apply a support vector machine to classify synthetic data and brain MR images. In our experiments, the proposed work outperformed FreeSurfer thickness features in classifying Alzheimer's disease patients and normal control subjects. PMID:26550613
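The multi-scale transition-probability signature can be illustrated with a toy graph heat kernel. This is a hypothetical sketch using a small cycle graph instead of the paper's tetrahedral mesh: the signature of a point pair (i, j) is the heat kernel entry H_t(i, j) collected over several diffusion scales t:

```python
import numpy as np

def heat_kernel(L, ts):
    """Heat kernels H_t = Phi exp(-t*Lambda) Phi^T of a graph Laplacian,
    evaluated at several diffusion scales ts."""
    lam, phi = np.linalg.eigh(L)
    return [phi @ np.diag(np.exp(-t * lam)) @ phi.T for t in ts]

# Tiny cycle graph as a stand-in for the mesh point pairs in the abstract.
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = np.diag(A.sum(1)) - A
scales = [0.5, 1.0, 2.0, 4.0]
Hs = heat_kernel(L, scales)
# Multi-scale signature of the pair (0, 3): one transition weight per scale.
sig = np.array([H[0, 3] for H in Hs])
print(sig)
```

Each H_t is a doubly stochastic diffusion operator here (rows sum to one), so the signature entries can be read as random-walk transition weights between the pair at increasing scales.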
Analysis of heat kernel highlights the strongly modular and heat-preserving structure of proteins
NASA Astrophysics Data System (ADS)
Livi, Lorenzo; Maiorino, Enrico; Pinna, Andrea; Sadeghian, Alireza; Rizzi, Antonello; Giuliani, Alessandro
2016-01-01
In this paper, we study the structure and dynamical properties of protein contact networks with respect to other biological networks, together with simulated archetypal models acting as probes. We consider both classical topological descriptors, such as modularity and statistics of the shortest paths, and different interpretations in terms of diffusion provided by the discrete heat kernel, which is elaborated from the normalized graph Laplacians. A principal component analysis shows high discrimination among the network types, considering both the topological and heat kernel based vector characterizations. Furthermore, a canonical correlation analysis demonstrates the strong agreement between those two characterizations, thus providing an important justification in terms of interpretability for the heat kernel. Finally, and most importantly, the focused analysis of the heat kernel yields insight into the fact that proteins have to satisfy specific structural design constraints that the other considered networks need not obey. Notably, the heat trace decay of an ensemble of varying-size proteins indicates subdiffusion, a peculiar property of proteins.
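The heat trace used in the analysis is cheap to compute from the normalized Laplacian spectrum. The sketch below uses a toy ring network rather than a protein contact network; the decay of Tr exp(-tL) from n toward 1 is the quantity whose anomalous (subdiffusive) scaling the paper examines:

```python
import numpy as np

def heat_trace(A, ts):
    """Heat trace Tr exp(-t*L) of the normalized graph Laplacian of A."""
    d = A.sum(1)
    Dinv = np.diag(1 / np.sqrt(d))
    L = np.eye(len(A)) - Dinv @ A @ Dinv
    lam = np.linalg.eigvalsh(L)
    return np.array([np.exp(-t * lam).sum() for t in ts])

# Small ring network as a toy probe graph.
n = 20
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
ts = np.array([0.1, 1.0, 10.0, 100.0])
Z = heat_trace(A, ts)
print(Z)   # decays from n toward 1 (only the zero eigenvalue survives)
```

Plotting log Z against log t and reading off the local slope is one standard way to detect the subdiffusive regime mentioned in the abstract.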
Reduction of Salmonella Enteritidis Population Sizes on Almond Kernels with Infrared Heat
Technology Transfer Automated Retrieval System (TEKTRAN)
Catalytic infrared (IR) heating was investigated to determine its effect on Salmonella enterica serovar Enteritidis population sizes on raw almond kernels. Using a double-sided catalytic infrared heating system, a radiation intensity of 5458 W/m2 caused a fast temperature increase at the kernel surf...
NASA Astrophysics Data System (ADS)
Fomin, Fedor V.
Preprocessing (data reduction or kernelization) as a strategy for coping with hard problems is universally used in almost every implementation. The history of preprocessing, such as applying reduction rules to simplify truth functions, can be traced back to the 1950's [6]. A natural question in this regard is how to measure the quality of preprocessing rules proposed for a specific problem. For a long time the mathematical analysis of polynomial-time preprocessing algorithms was neglected. The basic reason for this anomaly was that if we start with an instance I of an NP-hard problem and can show that in polynomial time we can replace it with an equivalent instance I' with |I'| < |I|, then that would imply P = NP in classical complexity.
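The standard way out of this anomaly is to measure preprocessing with respect to a parameter k rather than the raw instance size. As a concrete sketch (a hypothetical helper, not code from the text), the classic Buss kernelization for Vertex Cover forces every vertex of degree larger than the remaining budget into the cover and rejects when more than k'^2 edges survive:

```python
def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover parameterized by k.
    Returns (reduced_edges, k', forced_vertices), or None when the
    instance is provably a no-instance."""
    edges = {frozenset(e) for e in edges}
    forced = set()
    while True:
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        high = [v for v, d in deg.items() if d > k - len(forced)]
        if not high:
            break
        v = high[0]
        forced.add(v)                    # v must be in every cover of size <= k
        edges = {e for e in edges if v not in e}
        if len(forced) > k:
            return None
    kk = k - len(forced)
    if len(edges) > kk * kk:             # a yes-instance keeps at most k'^2 edges
        return None
    return edges, kk, forced

# Star K_{1,5} with k = 1: the center has degree 5 > 1, so it is forced.
print(buss_kernel([(0, i) for i in range(1, 6)], k=1))
```

The kernel size is bounded in k alone, so shrinking the instance carries no P = NP implication; this is exactly the parameterized notion of preprocessing quality the text alludes to.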
Sharp Two-Sided Heat Kernel Estimates of Twisted Tubes and Applications
NASA Astrophysics Data System (ADS)
Grillo, Gabriele; Kovařík, Hynek; Pinchover, Yehuda
2014-07-01
We prove on-diagonal bounds for the heat kernel of the Dirichlet Laplacian in locally twisted three-dimensional tubes Ω. In particular, we show that for any fixed x the heat kernel decays for large times faster than e^{-E_1 t}, where E_1 is the fundamental eigenvalue of the Dirichlet Laplacian on the cross section of the tube. This shows that any, suitably regular, local twisting speeds up the decay of the heat kernel with respect to the case of straight (untwisted) tubes. Moreover, the above large-time decay is valid for a wide class of subcritical operators defined on a straight tube. We also discuss some applications of this result, such as Sobolev inequalities and spectral estimates for Schrödinger operators.
Wang, Gang; Zhang, Xiaofeng; Su, Qingtang; Shi, Jie; Caselli, Richard J; Wang, Yalin
2015-05-01
Cortical thickness estimation in magnetic resonance imaging (MRI) is an important technique for research on brain development and neurodegenerative diseases. This paper presents a heat kernel based cortical thickness estimation algorithm, which is driven by the graph spectrum and heat kernel theory, to capture the gray matter geometry information from in vivo brain magnetic resonance (MR) images. First, we construct a tetrahedral mesh that matches the MR images and reflects the inherent geometric characteristics. Second, the harmonic field is computed by the volumetric Laplace-Beltrami operator, and the direction of the streamline is obtained by tracing the maximum heat transfer probability based on heat kernel diffusion. Thereby we can calculate the cortical thickness between corresponding points on the pial and white matter surfaces. The new method relies on intrinsic brain geometry structure, and the computation is robust and accurate. To validate our algorithm, we apply it to study the thickness differences associated with Alzheimer's disease (AD) and mild cognitive impairment (MCI) on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Our preliminary experimental results on 151 subjects (51 AD, 45 MCI, 55 controls) show that the new algorithm may successfully detect statistically significant differences among AD, MCI, and healthy control subjects. Our computational framework is efficient and very general. It has the potential to be used for thickness estimation on any biological structure with clearly defined inner and outer surfaces. PMID:25700360
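The harmonic-field-plus-streamline idea can be sketched with 2D finite differences. This is a simplified stand-in (a flat strip instead of a tetrahedralized cortex, with our own hypothetical helper names): solve Laplace's equation with u = 0 on one surface and u = 1 on the other, then follow the gradient and accumulate arc length as the thickness estimate:

```python
import numpy as np

def harmonic_field(ny, nx, iters=3000):
    """Jacobi solution of Laplace's equation on a strip: u=0 on the inner
    (bottom) surface, u=1 on the outer (top) surface, Neumann side walls."""
    u = np.zeros((ny, nx))
    u[-1, :] = 1.0
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
        u[1:-1, 0] = u[1:-1, 1]
        u[1:-1, -1] = u[1:-1, -2]
    return u

def streamline_length(u, start, step=0.1):
    """Follow the gradient of u from the inner surface toward the outer
    one; the accumulated arc length is the thickness estimate."""
    gy, gx = np.gradient(u)
    y, x = float(start[0]), float(start[1])
    length = 0.0
    while y < u.shape[0] - 1.5:
        iy, ix = int(round(y)), int(round(x))
        g = np.array([gy[iy, ix], gx[iy, ix]])
        g /= np.linalg.norm(g) + 1e-12
        y, x = y + step * g[0], x + step * g[1]
        length += step
    return length

u = harmonic_field(21, 11)
thickness = streamline_length(u, start=(1, 5))
print(thickness)   # on the order of the strip height (20 grid units)
```

On a flat strip the streamlines are vertical, so the estimate reduces to the strip height; on curved surfaces the same tracing follows the bent gradient field.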
Heat Pumps With Direct Expansion Solar Collectors
NASA Astrophysics Data System (ADS)
Ito, Sadasuke
In this paper, studies of heat pump systems using solar collectors as evaporators, carried out by researchers so far, are reviewed. Usually, a solar collector without any cover is preferable to one with a cover because of the need to absorb heat from the ambient air when the intensity of solar energy on the collector is insufficient. The performance of the collector depends on its area and on the intensity of convective heat transfer at the surface. Fins are fixed on the back side of the collector surface or on the tube in which the refrigerant flows in order to increase the convective heat transfer. For the purpose of using a heat pump efficiently throughout the year, a compressor with variable capacity is applied. The solar-assisted heat pump can be used for air conditioning at night during the summer, although only a few groups have studied cooling with solar-assisted heat pump systems. In Japan, one company has commercially produced a system for hot water supply, and another has commercially installed systems for air conditioning in buildings.
Plasma heating via adiabatic magnetic compression-expansion cycle
NASA Astrophysics Data System (ADS)
Avinash, K.; Sengupta, M.; Ganesh, R.
2016-06-01
Heating of collisionless plasmas in a closed adiabatic magnetic cycle, comprising a quasi-static compression followed by a non-quasi-static constrained expansion against a constant external pressure, is proposed. Thermodynamic constraints are derived to show that the plasma always gains heat in cycles having at least one non-quasi-static process. The turbulent relaxation of the plasma to the equilibrium state at the end of the non-quasi-static expansion is discussed and verified via 1D particle-in-cell (PIC) simulations. Applications of this scheme to heating plasmas in open configurations (mirror machines) and closed configurations (tokamaks, reversed field pinches) are discussed.
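The thermodynamic essence of such a cycle can be checked with a textbook ideal-gas analogue (our simplification; the paper treats collisionless plasmas kinetically): a reversible adiabatic compression followed by an irreversible expansion back to the initial volume against a constant external pressure leaves the gas hotter than it started, with no heat exchanged during either stroke:

```python
R = 8.314          # J/(mol K)
gamma = 5.0 / 3.0  # monatomic ideal gas
Cv = R / (gamma - 1)

def cycle_heating(T1, V1, V2):
    """One cycle for 1 mol: quasi-static adiabatic compression V1 -> V2,
    then a non-quasi-static expansion back to V1 against the constant
    initial pressure p_ext = R*T1/V1.  Returns (T2, T3)."""
    T2 = T1 * (V1 / V2) ** (gamma - 1)        # reversible adiabat
    p_ext = R * T1 / V1
    T3 = T2 - p_ext * (V1 - V2) / Cv          # dU = -p_ext dV, no heat flow
    return T2, T3

T2, T3 = cycle_heating(T1=300.0, V1=1.0, V2=0.5)
print(T2, T3)   # T3 > 300 K: net energy is pumped into the gas per cycle
```

The irreversible stroke does less work on the surroundings than a quasi-static one would, and the difference stays in the gas as internal energy, mirroring the paper's claim that at least one non-quasi-static process guarantees net heating.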
Three-dimensional photodissociation in strong laser fields: Memory-kernel effective-mode expansion
Li Xuan; Thanopulos, Ioannis; Shapiro, Moshe
2011-03-15
We introduce a method for the efficient computation of non-Markovian quantum dynamics for strong (and time-dependent) system-bath interactions. The past history of the system dynamics is incorporated by expanding the memory kernel in exponential functions, thereby transforming, in an exact fashion, the non-Markovian integrodifferential equations into a (larger) set of "effective modes" differential equations (EMDE). We have devised a method which easily diagonalizes the EMDE, thereby allowing for the efficient construction of an adiabatic basis and the fast propagation of the EMDE in time. We have applied this method to three-dimensional photodissociation of the H2+ molecule by strong laser fields. Our calculations properly include resonance-Raman scattering via the continuum, resulting in extensive rotational and vibrational excitations. The calculated final kinetic and angular distributions of the photofragments are in overall excellent agreement with experiments, both when transform-limited pulses and when chirped pulses are used.
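The exponential-expansion trick can be demonstrated on a scalar toy model (our own example, not the 3D photodissociation calculation): with a memory kernel K(tau) = sum_i c_i exp(-gamma_i tau), each auxiliary mode obeys a local ODE, turning the integrodifferential equation into a closed "effective modes" system that an ordinary ODE integrator can propagate:

```python
import numpy as np

# Kernel K(tau) = sum_i c_i exp(-gamma_i tau).  The effective modes
# z_i(t) = int_0^t exp(-gamma_i (t-s)) x(s) ds satisfy z_i' = x - gamma_i z_i,
# so  x'(t) = -int_0^t K(t-s) x(s) ds  becomes  x' = -sum_i c_i z_i.
c = np.array([0.7, 0.3])
gam = np.array([0.5, 2.0])

def rhs(y):
    x, z = y[0], y[1:]
    return np.concatenate([[-(c * z).sum()], x - gam * z])

def solve_emde(T, n):
    """RK4 on the effective-modes system, x(0) = 1, z(0) = 0."""
    y = np.zeros(1 + len(c)); y[0] = 1.0
    h = T / n
    for _ in range(n):
        k1 = rhs(y); k2 = rhs(y + h / 2 * k1)
        k3 = rhs(y + h / 2 * k2); k4 = rhs(y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[0]

def solve_direct(T, n):
    """Reference: step the memory integral explicitly (Euler + trapezoid)."""
    h = T / n
    t = np.arange(n + 1) * h
    x = np.zeros(n + 1); x[0] = 1.0
    for i in range(n):
        if i == 0:
            mem = 0.0
        else:
            K = (c[:, None] * np.exp(-gam[:, None] * (t[i] - t[:i + 1]))).sum(0)
            w = np.full(i + 1, 1.0); w[0] = w[-1] = 0.5
            mem = h * (w * K * x[:i + 1]).sum()
        x[i + 1] = x[i] - h * mem
    return x[-1]

print(solve_emde(5.0, 2000), solve_direct(5.0, 2000))
```

The direct scheme costs O(n^2) because the whole history is re-summed at every step, while the effective-modes system is local in time, which is the efficiency gain the abstract describes.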
Investigation of direct expansion in ground source heat pumps
NASA Astrophysics Data System (ADS)
Kalman, M. D.
A fully instrumented subscale ground-coupled heat pump system was developed, built, and used to test and obtain data on three different earth heat exchanger configurations under heating conditions (ground cooling). Various refrigerant flow control and compressor protection devices were tested for their applicability to the direct expansion system. Undisturbed earth temperature data were acquired at various depths. The problem of oil return at low evaporator temperatures and low refrigerant velocities was addressed. An analysis was performed to theoretically determine what evaporator temperature can be expected with an isolated ground pipe configuration of given length, pipe size, soil conditions, and constant heat load. Technical accomplishments to date are summarized.
NASA Astrophysics Data System (ADS)
Altaç, Zekeriya; Tekkalmaz, Mesut
2013-11-01
In this study, a nodal method based on the synthetic kernel (SKN) approximation is developed for solving the radiative transfer equation (RTE) in one- and two-dimensional Cartesian geometries. The RTE for a two-dimensional node is transformed to a one-dimensional RTE based on face-averaged radiation intensity. At the node interfaces, a double-P1 expansion is applied to the surface angular intensities with the isotropic transverse leakage assumption. The one-dimensional radiative integral transfer equation (RITE) is obtained in terms of the node-face-averaged incoming/outgoing incident energy and partial heat fluxes. The synthetic kernel approximation is applied to the transfer kernels and nodal-face contributions. The resulting SKN equations are solved analytically. One-dimensional interface-coupling nodal SK1 and SK2 equations (incoming/outgoing incident energy and net partial heat flux) are derived for the small nodal-mesh limit. These equations have simple algebraic and recursive forms which burden neither memory nor computational time. The method was applied to one- and two-dimensional benchmark problems including hot/cold media with transparent/emitting walls. The 2D results are free of ray effects and, for geometries of a few mean free paths or more, are in excellent agreement with the exact solutions.
Heat damage and in vitro starch digestibility of puffed wheat kernels.
Cattaneo, Stefano; Hidalgo, Alyssa; Masotti, Fabio; Stuknytė, Milda; Brandolini, Andrea; De Noni, Ivano
2015-12-01
The effect of processing conditions on heat damage, starch digestibility, release of advanced glycation end products (AGEs), and antioxidant capacity of puffed cereals was studied. The determination of several markers arising from the Maillard reaction proved pyrraline (PYR) and hydroxymethylfurfural (HMF) to be the most reliable indices of the heat load applied during puffing. The considerable heat load was evidenced by the high levels of both PYR (57.6-153.4 mg kg(-1) dry matter) and HMF (13-51.2 mg kg(-1) dry matter). For cost and simplicity, HMF appeared to be the most appropriate index in puffed cereals. Puffing influenced starch in vitro digestibility, with most of the starch (81-93%) hydrolyzed to maltotriose, maltose, and glucose, whereas only limited amounts of AGEs were released. The relevant antioxidant capacity revealed by digested puffed kernels can be ascribed to both the newly formed Maillard reaction products and the conditions adopted during in vitro digestion. PMID:26041194
An Irreducible Form of Gamma Matrices for HMDS Coefficients of the Heat Kernel in Higher Dimensions
NASA Astrophysics Data System (ADS)
Fukuda, M.; Yajima, S.; Higashida, Y.; Kubota, S.; Tokuo, S.; Kamo, Y.
2009-05-01
The heat kernel method is used to calculate 1-loop corrections of a fermion interacting with general background fields. To apply the Hadamard-Minakshisundaram-DeWitt-Seeley (HMDS) coefficients a_q(x,x') of the heat kernel to calculate the corrections, it is meaningful to decompose the coefficients into tensorial components with irreducible matrices, which are the totally antisymmetric products of γ matrices. We present formulae for the tensorial forms of the γ-matrix-valued quantities X, tilde{Λ}_{μν} and their product and covariant derivative in terms of the irreducible matrices in higher dimensions. The concrete forms of the HMDS coefficients obtained by repeated application of the formulae simplify the derivation of the loop corrections after the trace calculations, because each term in the coefficients contains one of the irreducible matrices and some of the terms are expressed by commutators and anticommutators with respect to the generator of non-abelian gauge groups. The form of the third HMDS coefficient is useful for evaluating some of the fermionic anomalies in 6-dimensional curved space. We show that the new formulae appear in the chiral U(1) anomaly when the vector and the third-order tensor gauge fields do not commute.
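As a small concrete check of why the antisymmetrized products are convenient, the snippet below builds the 4D Dirac matrices (a standard representation chosen here; the paper works in general higher dimensions) and verifies that γ^{[μ}γ^{ν]} is traceless and trace-orthogonal to the γ's themselves, which is what makes the tensorial decomposition of γ-matrix-valued coefficients unambiguous under trace calculations:

```python
import numpy as np

# 4D Dirac matrices in the Dirac representation.
I2 = np.eye(2); Z = np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)
g0 = np.block([[I2, Z], [Z, -I2]])
gs = [g0] + [np.block([[Z, s], [-s, Z]]) for s in (sx, sy, sz)]

def antisym2(mu, nu):
    """gamma^{[mu} gamma^{nu]} = (g^mu g^nu - g^nu g^mu) / 2."""
    return 0.5 * (gs[mu] @ gs[nu] - gs[nu] @ gs[mu])

# The rank-2 irreducible matrix is traceless and trace-orthogonal to the
# gamma matrices, so each tensorial component can be projected out by traces.
S01 = antisym2(0, 1)
print(np.trace(S01), np.trace(g0 @ S01))
```

The same trace-orthogonality extends to the higher totally antisymmetric products, which is why each term of a decomposed HMDS coefficient can be isolated independently.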
Density-Aware Clustering Based on Aggregated Heat Kernel and Its Transformation
Huang, Hao; Yoo, Shinjae; Yu, Dantong; Qin, Hong
2015-06-01
Current spectral clustering algorithms suffer from sensitivity to noise and to parameter scaling, and may not be aware of different density distributions across clusters. If these problems are left untreated, the resulting clusters cannot accurately represent true data patterns, in particular for complex real-world datasets with heterogeneous densities. This paper aims to solve these problems by proposing a diffusion-based Aggregated Heat Kernel (AHK) to improve clustering stability, and a Local Density Affinity Transformation (LDAT) to correct the bias originating from different cluster densities. AHK statistically models the heat diffusion traces along the entire time scale, so it ensures robustness during the clustering process, while LDAT probabilistically reveals the local density of each instance and suppresses the local density bias in the affinity matrix. Our proposed framework integrates these two techniques systematically. As a result, not only does it provide an advanced noise-resisting and density-aware spectral mapping of the original dataset, but it also demonstrates stability while tuning the scaling parameter (which usually controls the range of the neighborhood). Furthermore, our framework works well with the majority of similarity kernels, which ensures its applicability to many types of data and problem domains. Systematic experiments on different applications show that our proposed algorithms outperform state-of-the-art clustering algorithms on data with heterogeneous density distributions, and achieve robust clustering performance with respect to tuning the scaling parameter and handling various levels and types of noise.
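A minimal sketch of the aggregation idea (our toy version, with a discrete sum over scales standing in for the paper's full time-axis modeling, and without the LDAT step): summing heat kernels exp(-tL) of the normalized Laplacian over several t yields an affinity in which within-cluster weights dominate between-cluster ones, without committing to a single scale parameter:

```python
import numpy as np

def aggregated_heat_kernel(A, ts):
    """Sum of heat kernels exp(-t*L) of the normalized Laplacian over a
    set of diffusion scales ts (a discrete aggregation over time)."""
    d = A.sum(1)
    Dinv = np.diag(1 / np.sqrt(d))
    L = np.eye(len(A)) - Dinv @ A @ Dinv
    lam, phi = np.linalg.eigh(L)
    return sum(phi @ np.diag(np.exp(-t * lam)) @ phi.T for t in ts)

# Two dense blocks joined by one weak edge.
n = 6
A = np.zeros((2 * n, 2 * n))
A[:n, :n] = 1; A[n:, n:] = 1
np.fill_diagonal(A, 0)
A[0, n] = A[n, 0] = 0.1
S = aggregated_heat_kernel(A, ts=[1, 2, 4, 8])
within = S[:n, :n].mean()
between = S[:n, n:].mean()
print(within, between)   # within-cluster affinity dominates
```

Feeding S (in place of a single-scale Gaussian affinity) into any standard spectral clustering pipeline is the intended use; the aggregation is what dampens the sensitivity to the scaling parameter.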
Kornilov, Oleg; Toennies, J. Peter
2015-02-21
The size distribution of para-H2 (pH2) clusters produced in free jet expansions at a source temperature of T0 = 29.5 K and pressures of P0 = 0.9-1.96 bar is reported and analyzed according to a cluster growth model based on the Smoluchowski theory with kernel scaling. Good overall agreement is found between the measured shape of the distribution and the predicted one, N_k = A k^a exp(-b k). The fit yields values for A and b for values of a derived from simple collision models. The small remaining deviations between measured abundances and theory imply a (pH2)_k magic-number cluster at k = 13, as observed previously by Raman spectroscopy. The predicted linear dependence of b^-(a+1) on source gas pressure was verified and used to determine the value of the basic effective agglomeration reaction rate constant. A comparison of the corresponding effective growth cross sections sigma_11 with results from a similar analysis of He cluster size distributions indicates that the latter are larger by a factor of 6-10. An analysis of the three-body recombination rates, the geometric sizes, and the fact that the He clusters are liquid independent of their size can explain the larger cross sections found for He.
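The quoted fit can be reproduced on synthetic data. The sketch below (hypothetical numbers, not the measured abundances) recovers A and b from N_k = A k^a exp(-b k), with a fixed in advance as in the abstract, by linear least squares on log N_k - a log k:

```python
import numpy as np

def fit_decay(N, a):
    """Given abundances N_k = A * k**a * exp(-b*k) with a known exponent a,
    recover (A, b) by linear least squares on log N_k - a*log k."""
    k = np.arange(1, len(N) + 1)
    y = np.log(N) - a * np.log(k)               # = log A - b*k
    M = np.column_stack([np.ones_like(k, dtype=float), -k.astype(float)])
    (logA, b), *_ = np.linalg.lstsq(M, y, rcond=None)
    return np.exp(logA), b

# Synthetic distribution with known parameters.
k = np.arange(1, 31)
A_true, a_true, b_true = 5.0, 1.5, 0.2
N = A_true * k ** a_true * np.exp(-b_true * k)
A_fit, b_fit = fit_decay(N, a_true)
print(A_fit, b_fit)
```

With real abundances, the residuals of this fit are exactly where a magic-number excess such as k = 13 would show up.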
Rotational Relaxation in Nonequilibrium Freejet Expansions of Heated Nitrogen
NASA Technical Reports Server (NTRS)
Gochberg, Lawrence A.; Hurlbut, Franklin C.; Arnold, James O. (Technical Monitor)
1994-01-01
Rotational temperatures have been measured in rarefied, nonequilibrium, heated freejet expansions of nitrogen using the electron beam fluorescence technique at the University of California at Berkeley Low Density Wind Tunnel facility. Spectroscopic measurements of the (0,0) band of the first negative system of nitrogen reveal the nonequilibrium behavior in the flowfield upstream of, and through, the Mach disk, which forms as the freejet expands into a region of finite back pressure. Results compare well with previous freejet expansion data and computations regarding the location of the Mach disk and the terminal rotational temperature in the expansion. Measurements are also presented for shock thickness based on the rotational temperature changes in the flow. Thickening shock layers, departures of rotational temperature from equilibrium in the expansion region, and downstream rotational temperature recovery much below that of an isentropic normal shock provide indications of the rarefied, nonequilibrium flow behavior. The data are analyzed to infer constant values of the rotational-relaxation collision number from 2.2 to 6.5 for the various flow conditions. Collision numbers are also calculated in a consistent manner for data from other investigations, which show a qualitative increase with increasing temperature. Rotational-relaxation collision numbers are seen as not fully descriptive of the rarefied freejet flows. This may be due to the high degree of nonequilibrium in the flowfields, and/or to the use of a temperature-insensitive rotational-relaxation collision number model in the data analyses.
NASA Astrophysics Data System (ADS)
Juan-Mian, Lei; Xue-Ying, Peng
2016-02-01
Kernel gradient free-smoothed particle hydrodynamics (KGF-SPH) is a modified smoothed particle hydrodynamics (SPH) method which has higher precision than conventional SPH. However, the Laplacian in KGF-SPH is approximated by a two-pass model, which increases the computational cost. A new discretization scheme for the Laplacian is proposed in this paper, and a method with higher precision and better stability, called Improved KGF-SPH, is developed by modifying KGF-SPH with this new Laplacian model. One-dimensional (1D) and two-dimensional (2D) heat conduction problems are used to test the precision and stability of the Improved KGF-SPH. The numerical results demonstrate that the Improved KGF-SPH is more accurate than SPH, and more stable than KGF-SPH. Natural convection in a closed square cavity at different Rayleigh numbers is modeled by the Improved KGF-SPH with shifting particle position, and the Improved KGF-SPH results are presented in comparison with those of SPH and the finite volume method (FVM). The numerical results demonstrate that the Improved KGF-SPH is a more accurate method for studying and modeling heat transfer problems.
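For orientation, a minimal 1D SPH Laplacian for heat conduction is sketched below. This is the conventional Brookshaw-style one-pass estimate with a Gaussian kernel, our stand-in for illustration, not the paper's improved KGF-SPH scheme:

```python
import numpy as np

def sph_laplacian(x, T, h, V):
    """Brookshaw-style SPH estimate of d2T/dx2 on 1D particles with a
    Gaussian kernel of smoothing length h and particle volume V."""
    lap = np.zeros_like(T)
    for i in range(len(x)):
        r = x - x[i]
        mask = r != 0
        rr = np.abs(r[mask])
        W = np.exp(-(rr / h) ** 2) / (h * np.sqrt(np.pi))
        dWdr = (2 * rr / h ** 2) * W          # |dW/dr| for the Gaussian
        lap[i] = np.sum(2 * V * (T[mask] - T[i]) * dWdr / rr)
    return lap

# One explicit heat-conduction step: T <- T + dt * lap(T).
n, hfac = 201, 2.0
x = np.linspace(-1, 1, n)
dx = x[1] - x[0]
T = np.exp(-x ** 2 / 0.02)                    # hot spot in the middle
lap = sph_laplacian(x, T, h=hfac * dx, V=dx)
T1 = T + 1e-4 * lap
print(T1.max(), T1.sum() * dx)                # peak drops, total heat conserved
```

The pairwise antisymmetry of the contributions makes the scheme exactly conservative, and on a quadratic field it reproduces the exact second derivative in the interior; the paper's KGF-type corrections target the accuracy loss near boundaries and on disordered particles.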
Energy recovery during expansion of compressed gas using power plant low-quality heat sources
Ochs, Thomas L.; O'Connor, William K.
2006-03-07
A method of recovering energy from a cool compressed gas, compressed liquid, vapor, or supercritical fluid is disclosed which includes incrementally expanding the compressed gas, compressed liquid, vapor, or supercritical fluid through a plurality of expansion engines and heating the gas, vapor, compressed liquid, or supercritical fluid entering at least one of the expansion engines with a low quality heat source. Expansion engines such as turbines and multiple expansions with heating are disclosed.
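The energy benefit of incremental expansion with interstage heating can be sized with an ideal-gas sketch (our illustration with assumed numbers, not the patent's apparatus): reheating the gas to the inlet temperature before each of several equal-pressure-ratio adiabatic expansion stages raises the total specific work toward the isothermal limit R*T*ln(p1/p2):

```python
import numpy as np

R, gamma = 8.314, 1.4                 # J/(mol K); diatomic ideal gas assumed
Cp = gamma * R / (gamma - 1)

def staged_work(T_in, p_ratio, stages):
    """Specific work (J/mol) for expanding an ideal gas through `stages`
    adiabatic expanders with equal pressure ratios, reheating the gas
    back to T_in before each stage (e.g. with low-quality plant heat)."""
    r = p_ratio ** (1.0 / stages)
    w_stage = Cp * T_in * (1 - r ** (-(gamma - 1) / gamma))
    return stages * w_stage

T, P = 350.0, 10.0
for s in (1, 2, 4, 8):
    print(s, staged_work(T, P, s))
print("isothermal limit", R * T * np.log(P))
```

Each added stage captures a little more of the low-quality heat as shaft work, which is the thermodynamic rationale for the multiple-expansion-with-heating arrangement the patent describes.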
Calculates Thermal Neutron Scattering Kernel.
Energy Science and Technology Software Center (ESTSC)
1989-11-10
Version 00 THRUSH computes the thermal neutron scattering kernel by the phonon expansion method for both coherent and incoherent scattering processes. The calculation of the coherent part is suitable only for calculating the scattering kernel for heavy water.
Hypervelocity Heat-Transfer Measurements in an Expansion Tube
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Perkins, John N.
1996-01-01
A series of experiments has been conducted in the NASA HYPULSE Expansion Tube, in both CO2 and air test gases, in order to obtain data for comparison with computational results and to assess the capability for performing hypervelocity heat-transfer studies in this facility. Heat-transfer measurements were made in both test gases on 70 deg sphere-cone models and on hemisphere models of various radii. HYPULSE freestream flow conditions in these test gases were found to be repeatable to within 3-10%, and aerothermodynamic test times of 150 microsec in CO2 and 125 microsec in air were identified. Heat-transfer measurement uncertainty was estimated to be 10-15%. Comparisons were made with computational results from the non-equilibrium Navier-Stokes solver NEQ2D. Measured and computed heat-transfer rates agreed to within 10% on the hemispheres and on the sphere-cone forebodies, and to within 10% in CO2 and 25% in air on the afterbodies and stings of the sphere-cone models.
Optimization of Heat-Sink Cooling Structure in EAST with Hydraulic Expansion Technique
NASA Astrophysics Data System (ADS)
Xu, Tiejun; Huang, Shenghong; Xie, Han; Song, Yuntao; Zhan, Ping; Ji, Xiang; Gao, Daming
2011-12-01
Considering utilization of the original chromium-bronze material, two processing techniques, hydraulic expansion and high-temperature vacuum welding, were proposed for the optimization of the heat-sink structure in EAST. The heat transfer performance of the heat sink with and without a cooling tube was calculated, and different types of connection between tube and heat sink were compared in a dedicated test. Numerical analysis shows that the diameter of the heat-sink channel can be reduced from 12 mm to 10 mm. Compared with the original sample, the thermal contact resistance between tube and heat sink reduces the heat transfer performance by 10% for the welded sample and by 20% for the hydraulically expanded sample. However, the welding technique is more complicated and expensive than the hydraulic expansion technique. Both the processing technique and the heat transfer performance of the heat-sink prototype should be considered further in optimizing the heat-sink structure in EAST.
Investigation of contact resistance for fin-tube heat exchanger by means of tube expansion
NASA Astrophysics Data System (ADS)
Hing, Yau Kar; Raghavan, Vijay R.; Meng, Chin Wai
2012-06-01
An experimental study of the heat transfer performance of a fin-tube heat exchanger subjected to mechanical expansion of the tubes by bullets is reported in this paper. The manufacture of a fin-tube heat exchanger commonly involves inserting copper tubes into a stack of aluminium fins and expanding the tubes mechanically. The mechanical expansion is achieved by forcing a steel bullet through each tube; the bullet has a larger diameter than the tube, and the expansion provides firm surface contact between fins and tubes. Five bullet expansion ratios (1.045 to 1.059) were used to expand 9.52 mm diameter tubes in a fin-tube heat exchanger. The study was conducted on a water-to-water loop experimental rig under steady-state conditions. In addition, the effects of fin hardness and fin pitch were investigated. The results indicate that the optimum heat transfer occurred at a bullet expansion ratio between 1.049 and 1.052. It is also observed that larger fin pitches require larger bullet expansion ratios, especially with lower fin hardness. As the fin pitch increases, both fin hardness tempers (H22 and H24) exhibit an increasing heat transfer rate per fin (W/fin). With the H22 temper the increase is as much as 11%, while with H24 it is 1.2%.
Melacci, Stefano; Gori, Marco
2013-11-01
Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, because the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given that dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization. PMID:24051728
NASA Astrophysics Data System (ADS)
Chen, Li-Hao; Liu, Zong-Pei; Pan, Yung-Ning
2016-08-01
In this paper, the effect of homogenization heat treatment on the α value [coefficient of thermal expansion (10⁻⁶ K⁻¹)] of low thermal expansion cast irons was studied. In addition, constrained thermal cyclic tests were conducted to evaluate the dimensional stability of the low thermal expansion cast irons under various heat treatment conditions. The results indicate that when the alloys were homogenized at a relatively low temperature, e.g., 1023 K (750 °C), the elimination of Ni segregation was not very effective, but the C concentration in the matrix was moderately reduced. On the other hand, if the alloys were homogenized at a relatively high temperature, e.g., 1473 K (1200 °C), the opposite results were obtained. Consequently, not much improvement (reduction) in α value was achieved in either case. Therefore, a compound homogenization heat treatment procedure was designed, namely 1473 K (1200 °C)/4 hours/FC/1023 K (750 °C)/2 hours/WQ, in which the relatively high homogenization temperature of 1473 K (1200 °C) effectively eliminates the Ni segregation, and the subsequent holding stage at 1023 K (750 °C) reduces the C content in the matrix. As a result, very low α values of around (1 to 2) × 10⁻⁶ K⁻¹ were obtained. Regarding the constrained thermal cyclic testing over 303 K to 473 K (30 °C to 200 °C), the results indicate that, regardless of heat treatment condition, low thermal expansion cast irons exhibit considerably higher dimensional stability than either regular ductile cast iron or 304 stainless steel. Furthermore, a positive correlation exists between the α(303 K to 473 K) value and the amount of shape change after the thermal cyclic testing. Among the alloys investigated, Heat I-T3B [1473 K (1200 °C)/4 hours/FC/1023 K (750 °C)/2 hours/WQ] exhibits the lowest α(303 K to 473 K) value (1.72 × 10⁻⁶ K⁻¹), and hence the least shape change (7.41 μm), i.e., the best dimensional stability.
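The mean α value over an interval is simply the fractional length change per kelvin; a minimal sketch, where the specimen length and elongation are hypothetical numbers chosen only to land near the reported 1.72 × 10⁻⁶ K⁻¹, not data from the paper:

```python
def mean_cte(L0, dL, T1, T2):
    """Mean coefficient of thermal expansion over [T1, T2], in K^-1."""
    return dL / (L0 * (T2 - T1))

# hypothetical dilatometer reading: a 25 mm specimen lengthening
# by 7.3 um between 303 K and 473 K
alpha = mean_cte(25e-3, 7.3e-6, 303.0, 473.0)  # on the order of 1.7e-6 K^-1
```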
Eigenvalue Expansion Approach to Study Bio-Heat Equation
NASA Astrophysics Data System (ADS)
Khanday, M. A.; Nazir, Khalid
2016-07-01
A mathematical model based on the Pennes bio-heat equation was formulated to estimate temperature profiles in peripheral regions of the human body. The heat processes due to diffusion, perfusion and metabolic pathways were considered to establish a second-order partial differential equation together with initial and boundary conditions. The model was solved using the eigenvalue method, and numerical values of the physiological parameters were used to understand the thermal disturbance in biological tissues. The results were illustrated at atmospheric temperatures T_A = 10 °C and 20 °C.
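The eigenvalue (eigenfunction-expansion) approach can be illustrated on a linearized 1D Pennes-type equation u_t = a u_xx − b u with zero boundary values, where u is the temperature deviation, a stands in for the thermal diffusivity k/(ρc) and b for the perfusion term ω_b ρ_b c_b/(ρc); the Dirichlet setup and parameter names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pennes_series(x, t, u0_coeffs, L, a, b):
    """Eigenfunction-expansion solution of u_t = a u_xx - b u on (0, L)
    with u(0) = u(L) = 0.  u0_coeffs[n-1] is the coefficient of
    sin(n*pi*x/L) in the initial condition."""
    u = np.zeros_like(x, dtype=float)
    for n, c in enumerate(u0_coeffs, start=1):
        lam = a * (n * np.pi / L) ** 2 + b  # eigenvalue of mode n
        u += c * np.exp(-lam * t) * np.sin(n * np.pi * x / L)
    return u
```

Each sine mode decays at rate a(nπ/L)² + b, so the perfusion term uniformly accelerates the relaxation of every mode toward the steady state.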
The Statistical Interpretation of Classical Thermodynamic Heating and Expansion Processes
ERIC Educational Resources Information Center
Cartier, Stephen F.
2011-01-01
A statistical model has been developed and applied to interpret thermodynamic processes typically presented from the macroscopic, classical perspective. Through this model, students learn and apply the concepts of statistical mechanics, quantum mechanics, and classical thermodynamics in the analysis of the (i) constant volume heating, (ii)…
Effects of city expansion on heat stress under climate change conditions.
Argüeso, Daniel; Evans, Jason P; Pitman, Andrew J; Di Luca, Alejandro
2015-01-01
We examine the joint contribution of urban expansion and climate change to heat stress over the Sydney region. A Regional Climate Model was used to downscale present (1990-2009) and future (2040-2059) simulations from a Global Climate Model. The effects of urban surfaces on local temperature and vapor pressure were included. The role of urban expansion in modulating the climate change signal at local scales was investigated using a human heat-stress index combining temperature and vapor pressure. Urban expansion and climate change lead to increased risk of heat-stress conditions in the Sydney region, with substantially more frequent adverse conditions in urban areas. Impacts are particularly obvious in extreme values; daytime heat-stress impacts are more noticeable in the higher percentiles than in the mean values, and the impact at night is more obvious in the lower percentiles than in the mean. Urban expansion enhances heat-stress increases due to climate change at night, but partly compensates for its effects during the day. These differences are due to a stronger contribution from vapor pressure deficit during the day and from temperature increases during the night induced by urban surfaces. Our results highlight the inappropriateness of assessing human comfort using temperature changes alone and suggest that impacts of climate change assessed using models that lack urban surfaces probably underestimate future changes in terms of human comfort. PMID:25668390
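One widely used index of the kind described, combining temperature and vapor pressure, is Steadman's apparent temperature in the non-radiative form used by the Australian Bureau of Meteorology; the abstract does not name its exact index, so treat this formula as an illustrative assumption rather than the study's method.

```python
def apparent_temperature(t_air_c, vapor_pressure_hpa, wind_ms=0.0):
    """Steadman-type apparent temperature (deg C) from air temperature
    (deg C), water-vapor pressure (hPa) and wind speed (m/s)."""
    return t_air_c + 0.33 * vapor_pressure_hpa - 0.70 * wind_ms - 4.00
```

A humid 30 °C afternoon (e = 20 hPa, calm) then feels like roughly 32.6 °C, while the same air temperature with dry air (e = 8 hPa) feels closer to 28.6 °C, illustrating why temperature alone understates humid heat stress.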
Pressurized heat treatment of glass-ceramic to control thermal expansion
Kramer, Daniel P.
1985-01-01
A method of producing a glass-ceramic having a specified thermal expansion value is disclosed. The method includes the step of pressurizing the parent glass material to a predetermined pressure during heat treatment so that the glass-ceramic produced has a specified thermal expansion value. Preferably, the glass-ceramic material is isostatically pressed. A method for forming a strong glass-ceramic to metal seal is also disclosed in which the glass-ceramic is fabricated to have a thermal expansion value equal to that of the metal. The determination of the thermal expansion value of a parent glass material placed in a high-temperature environment is also used to determine the pressure in the environment.
Fuel expansion system with preheater and EMI-heated fuel injector
Goldsberry, J.
1989-09-05
This patent describes a fuel expansion, pre-combustion treatment device for use with carburated or fuel injected combustion engines. It comprises sonic heating and fuel line cleansing means; foraminous dispersing means; EMI field generating means for essentially irradiating the dispersing means.
Debye temperature, thermal expansion, and heat capacity of TcC up to 100 GPa
Song, T.; Ma, Q.; Tian, J.H.; Liu, X.B.; Ouyang, Y.H.; Zhang, C.L.; Su, W.F.
2015-01-15
Highlights:
• A number of thermodynamic properties of rocksalt TcC are investigated for the first time.
• The quasi-harmonic Debye model is applied to take the thermal effect into account.
• Pressure and temperature are considered up to about 100 GPa and 3000 K, respectively.

Abstract: The Debye temperature, thermal expansion coefficient, and heat capacity of ideal stoichiometric TcC in the rocksalt structure have been studied systematically using the ab initio plane-wave pseudopotential density functional theory method within the generalized gradient approximation. Through the quasi-harmonic Debye model, in which phononic effects are considered, the dependences of the Debye temperature, thermal expansion coefficient, constant-volume heat capacity, and constant-pressure heat capacity on pressure and temperature are predicted. All the thermodynamic properties of rocksalt TcC have been predicted over the entire temperature range from 300 to 3000 K and pressures up to 100 GPa.
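The quasi-harmonic Debye model builds on the standard Debye heat-capacity integral. A minimal constant-volume version can be sketched as follows; note the paper's quasi-harmonic treatment additionally makes the Debye temperature volume (pressure) dependent, which this sketch omits.

```python
import numpy as np

R = 8.314462618  # molar gas constant, J mol^-1 K^-1

def debye_cv(T, theta_D, n_atoms=1):
    """Constant-volume heat capacity (J mol^-1 K^-1) in the Debye model:
    C_v = 9 n R (T/theta_D)^3 * integral_0^{theta_D/T} x^4 e^x/(e^x-1)^2 dx."""
    x = np.linspace(1e-8, theta_D / T, 20001)
    integrand = x ** 4 * np.exp(x) / np.expm1(x) ** 2
    # trapezoidal quadrature of the Debye integral
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
    return 9.0 * n_atoms * R * (T / theta_D) ** 3 * integral
```

At high temperature this approaches the Dulong-Petit limit 3nR per mole; at low temperature it falls off as T³.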
Negative thermal expansion and anomalies of heat capacity of LuB50 at low temperatures
Novikov, V. V.; Zhemoedov, N. A.; Matovnikov, A. V.; Mitroshenkov, N. V.; Kuznetsov, S. V.; Bud'ko, S. L.
2015-07-20
Heat capacity and thermal expansion of LuB50 boride were experimentally studied in the 2–300 K temperature range. The data reveal an anomalous contribution to the heat capacity at low temperatures. The value of this contribution is proportional to the first power of temperature. This anomaly in the heat capacity was identified as arising from disorder in the LuB50 crystalline structure, and it can be described by the soft atomic potential (SAP) model. The parameters of the approximation were determined. The temperature dependence of the LuB50 heat capacity over the whole temperature range was approximated by the sum of the SAP contribution, a Debye component and two Einstein components. The parameters of the SAP contribution for LuB50 were compared with the corresponding values for LuB66, which was studied earlier. Negative thermal expansion at low temperatures was experimentally observed for LuB50. Analysis of the experimental temperature dependence of the Grüneisen parameter of LuB50 suggests that the low-frequency oscillations described in the SAP model are responsible for the negative thermal expansion. As a result, the glasslike character of the behavior of the LuB50 thermal characteristics at low temperatures was confirmed.
Ha, Jae-Won; Kang, Dong-Hyun
2015-01-01
The aim of this study was to investigate the efficacy of near-infrared radiation (NIR) heating combined with lactic acid (LA) sprays for inactivating Salmonella enterica serovar Enteritidis on almond and pine nut kernels and to elucidate the mechanisms of the lethal effect of the NIR-LA combined treatment. Also, the effect of the combination treatment on product quality was determined. Separately prepared S. Enteritidis phage type (PT) 30 and non-PT 30 S. Enteritidis cocktails were inoculated onto almond and pine nut kernels, respectively, followed by treatments with NIR or 2% LA spray alone, NIR with distilled water spray (NIR-DW), and NIR with 2% LA spray (NIR-LA). Although surface temperatures of nuts treated with NIR were higher than those subjected to NIR-DW or NIR-LA treatment, more S. Enteritidis survived after NIR treatment alone. The effectiveness of NIR-DW and NIR-LA was similar, but significantly more sublethally injured cells were recovered from NIR-DW-treated samples. We confirmed that the enhanced bactericidal effect of the NIR-LA combination may not be attributable to cell membrane damage per se. NIR heat treatment might allow S. Enteritidis cells to become permeable to applied LA solution. The NIR-LA treatment (5 min) did not significantly (P > 0.05) cause changes in the lipid peroxidation parameters, total phenolic contents, color values, moisture contents, and sensory attributes of nut kernels. Given the results of the present study, NIR-LA treatment may be a potential intervention for controlling food-borne pathogens on nut kernel products. PMID:25911473
Heat capacity and thermal expansion of icosahedral lutetium boride LuB66
Novikov, V V; Avdashchenko, D V; Matovnikov, A V; Mitroshenkov, N V; Bud’ko, S L
2014-01-07
The experimental values of heat capacity and thermal expansion for lutetium boride LuB66 in the temperature range of 2-300 K were analysed in the Debye-Einstein approximation. It was found that the vibration of the boron sub-lattice can be considered within the Debye model with high characteristic temperatures; low-frequency vibration of weakly connected metal atoms is described by the Einstein model.
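The Debye-Einstein decomposition used here adds Einstein terms, each describing a set of weakly coupled oscillators of a single characteristic frequency, to a Debye background. A sketch of one Einstein contribution (per mole of three-dimensional oscillators, with an assumed Einstein temperature) is:

```python
import numpy as np

R = 8.314462618  # molar gas constant, J mol^-1 K^-1

def einstein_cv(T, theta_E):
    """Einstein-model heat capacity (J mol^-1 K^-1):
    C_E = 3 R x^2 e^x / (e^x - 1)^2 with x = theta_E / T."""
    x = theta_E / T
    return 3.0 * R * x ** 2 * np.exp(x) / np.expm1(x) ** 2
```

This term freezes out exponentially for T much below θ_E, which is consistent with the abstract's picture: weakly bound metal atoms (low Einstein temperatures) dominate the low-temperature heat capacity, while the stiff boron sub-lattice (high Debye temperature) contributes little there.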
Microwave heating of a Ba photoplasma in free expansion into a vacuum
Furtlehner, J.P.; Blanchet, A.; Leloutre, B.
1995-12-31
The microwave heating of a pulsed Ba photoplasma and its free expansion into a vacuum is studied theoretically and experimentally. The vapor production apparatus and the two-step photoionization scheme have been described in a previous paper. The heating experimental device is essentially a microwave loop working in a self-tuning oscillator mode, composed of a transmission rectangular microwave resonator associated with a TWT power amplifier. The amplifier is coupled to the rectangular resonator by two coaxial-line probes: with a coupling coefficient very close to 1 at the input and a coefficient of about 10⁻³ at the output.
Claudio Filippone, Ph.D.
1999-06-01
Thermal-hydraulic analysis of a specially designed steam expansion device (heat cavity) was performed to prove the feasibility of steam expansions at elevated rates for power generation with higher efficiency. The steam expansion process inside the heat cavity greatly depends on the gap within which the steam expands and accelerates. This system can be seen as a miniaturized boiler integrated inside the expander where steam (or the proper fluid) is generated almost instantaneously prior to its expansion in the work-producing unit. Relatively cold water is pulsed inside the heat cavity, where the heat transferred causes the water to flash to steam, thereby increasing its specific volume by a large factor. The gap inside the heat cavity forms a special nozzle-shaped system in which the fluid expands rapidly, accelerating toward the system outlet. The expansion phenomenon is the cause of ever-increasing fluid speed inside the cavity system, eliminating the need for moving parts (pumps, valves, etc.). In fact, the subsequent velocity induced by the sudden fluid expansion causes turbulent conditions, forcing accelerating Reynolds and Nusselt numbers which, in turn, increase the convective heat transfer coefficient. When the combustion of fossil fuels constitutes the heat source, the heat cavity concept can be applied directly inside the stator of conventional turbines, thereby greatly increasing the overall system efficiency.
Evaluating the Gradient of the Thin Wire Kernel
NASA Technical Reports Server (NTRS)
Wilton, Donald R.; Champagne, Nathan J.
2008-01-01
Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.
Heat Transfer and Fluid Dynamics Measurements in the Expansion Space of a Stirling Cycle Engine
NASA Technical Reports Server (NTRS)
Jiang, Nan; Simon, Terrence W.
2006-01-01
The heater (or acceptor) of a Stirling engine, where most of the thermal energy is accepted into the engine by heat transfer, is the hottest part of the engine. Almost as hot is the adjacent expansion space of the engine. In the expansion space, the flow is oscillatory, impinging on a two-dimensional concavely-curved surface. Knowing the heat transfer on the inside surface of the engine head is critical to the engine design for efficiency and reliability. However, the flow in this region is not well understood and support is required to develop the CFD codes needed to design modern Stirling engines of high efficiency and power output. The present project is to experimentally investigate the flow and heat transfer in the heater head region. Flow fields and heat transfer coefficients are measured to characterize the oscillatory flow as well as to supply experimental validation for the CFD Stirling engine design codes. Presented also is a discussion of how these results might be used for heater head and acceptor region design calculations.
Thermal Expansion, Specific Heat and Magnetostriction Measurements on R-Copper
NASA Astrophysics Data System (ADS)
Chien, Teh-Shih
The RCu (R = Gd, Tb, Dy and Ho) and R2In (R = Gd and Tb) alloys have been systematically studied by thermal expansion, specific heat and magnetostriction measurements in order to investigate their magnetic and physical behavior. GdCu and TbCu alloys undergo martensitic transformations at high and low temperatures. The Néel temperature of the GdCu alloy is 141.3 K from thermal expansion measurements. The Néel temperature T_N and martensitic transformation temperature M_s are 113.6 K and 116 K, respectively, for the TbCu alloy. This is the first study to distinguish T_N from M_s using thermal expansion and specific heat measurements as well as a large thermal hysteresis. Both GdCu and TbCu alloys have a first-order structural transformation and a second-order magnetic phase transition. The DyCu alloy has T_N = 60.5 K. The magnetic specific heat C_m varies as T³, in accordance with spin-wave theory. The HoCu alloy has T_N = 26 K and a spin reorientation at 14.1 K. The YCu alloy has a Debye temperature of 230 K and C_e = 0.002T J/(mol K). The Debye temperature is 160 K for all RCu alloys except DyCu, which has θ = 150 K. The Gd2In alloy has T_N = 97 K and T_C = 190.3 K, associated with the antiferromagnetic and ferromagnetic transitions, respectively, from thermal expansion and magnetostriction measurements. Gd2In is a metamagnet with a critical magnetic field H = 8 kOe. The volume magnetostriction ω_V varies as H^(2/3) in the ferromagnetic state, and as H², as expected, in the antiferromagnetic and paramagnetic states. The Curie temperature is 167.5 K for Tb2In, as given by the thermal expansion and specific heat measurements; ω_V varies as H in the ferromagnetic state and as H², as expected, in the paramagnetic state.
ERIC Educational Resources Information Center
Forsyth Technical Inst., Winston-Salem, NC.
This vocational physics individualized student instructional module on thermometers consists of three units: temperature and heat, expansion thermometers, and electrical thermometers. Designed with a laboratory orientation, experiments are included on linear expansion; making a bimetallic thermometer, a liquid-in-glass thermometer, and a gas…
On the calculation of turbulent heat and mass transport downstream from an abrupt pipe expansion
NASA Technical Reports Server (NTRS)
Amano, R. S.
1982-01-01
A numerical study is reported of heat/mass transfer in the separated flow region created by an abrupt pipe expansion. The computations employed a hybrid of central and upwind finite differencing to solve the full Navier-Stokes equations with the k-ε turbulence model. The study gave its main attention to simulating the region in the immediate vicinity of the wall, formulating a near-wall model for evaluating the mean generation and destruction rates in the ε equation. The computed results were compared with experimental data and showed generally encouraging agreement with the measurements.
NASA Astrophysics Data System (ADS)
Medvedev, Grigori; Lee, Eun-Woong; Caruthers, James
2011-03-01
An observation that different experimental methods give different values of Tg is part of the lore of the field of glassy polymers. We report a careful study of a series of polymeric systems, both thermoplastic and thermoset, including PMMA, PC, PS, and 3,3'-DDS Epon 825, conducted using DSC and TMA techniques. We found that for the same thermal history the heat capacity and the coefficient of thermal expansion (both measured upon heating), as functions of temperature, transition from the glassy asymptote to the equilibrium asymptote at significantly different temperatures; the difference ranged from 8 to 17 degrees, depending on the system. We argue that such a large difference between the enthalpy and volume responses during the same thermal history is inconsistent with the commonly used material-clock models, but is consistent with the view of glassy materials as containing dynamically heterogeneous regions.
X-ray radiographic expansion measurements of isochorically heated thin wire targets
Hochhaus, D. C.; Aurand, B.; Basko, M.; Ecker, B.; Kühl, T.; Ma, T.; Rosmej, F.; Zielbauer, B.; Neumayer, P.
2013-06-15
Solid density matter at temperatures ranging from 150 eV to below 5 eV has been created by irradiating thin wire targets with high-energy laser pulses at intensities ≈10¹⁸ W/cm². Energy deposition and transport of the laser-produced fast electrons are inferred from spatially resolved Kα spectroscopy. Time-resolved x-ray radiography is employed to image the target mass density up to solid density and proves isochoric heating. The subsequent hydrodynamic evolution of the target is observed for up to 3 ns and is compared to radiation-hydrodynamic simulations. At distances of several hundred micrometers from the laser interaction region, where temperatures of 5–20 eV and small temperature gradients are found, the hydrodynamic evolution of the wire is a nearly axially symmetric isentropic expansion, and good agreement between simulations and radiography data confirms heating of the wire over hundreds of micrometers.
Hemingway, B.S.; Evans, H.T., Jr.; Nord, G.L., Jr.; Haselton, H.T., Jr.; Robie, R.A.; McGee, J.J.
1986-01-01
A small but sharp anomaly in the heat capacity of akermanite at 357.9 K, and a discontinuity in its thermal expansion at 693 K, as determined by XRD, have been found. The enthalpy and entropy assigned to the heat-capacity anomaly, for the purpose of tabulation, are 679 J/mol and 1.9 J/(mol·K), respectively. They were determined from the difference between the measured values of the heat capacity in the T interval 320-365 K and that obtained from an equation which fits the heat-capacity and heat-content data for akermanite from 290 to 1731 K. Heat-capacity measurements are reported for the T range from 9 to 995 K. The entropy and enthalpy of formation of akermanite at 298.15 K and 1 bar are 212.5 ± 0.4 J/(mol·K) and -3864.5 ± 4.0 kJ/mol, respectively. Weak satellite reflections have been observed in hk0 single-crystal X-ray precession photographs and electron-diffraction patterns of this material at room T. With in situ heating by TEM, the satellite reflections decreased significantly in intensity above 358 K, disappeared at about 580 K and, on cooling, reappeared. These observations suggest that the anomalies in the thermal behaviour of akermanite are associated with local displacements of Ca ions from the mirror plane (space group P421m) and accompanying distortion of the MgSi2O7 framework. - L.C.C.
Are heat waves susceptible to mitigate the expansion of a species progressing with global warming?
Robinet, Christelle; Rousselet, Jérôme; Pineau, Patrick; Miard, Florie; Roques, Alain
2013-09-01
A number of organisms, especially insects, are extending their range in response to the increasing trend of warmer temperatures. However, the effects of more frequent climatic anomalies on these species are not clearly known. The pine processionary moth, Thaumetopoea pityocampa, is a forest pest that is currently extending its geographical distribution in Europe in response to climate warming. However, its population density decreased markedly in its northern expansion range (near Paris, France) in the year following the 2003 heat wave. In this study, we tested whether the 2003 heat wave could have killed a large proportion of egg masses. First, the local heat wave intensity was determined. Then, an outdoor experiment was conducted to measure the deviation between the temperatures recorded by weather stations and those observed within sun-exposed egg masses. A second experiment was conducted under laboratory conditions to simulate heat wave conditions (with night/day temperatures of 20/32°C and 20/40°C compared to the control treatment of 13/20°C) and measure the potential effects of such a heat wave on egg masses. No effects on egg development were observed. Larvae hatched from these egg masses were then reared under mild conditions until the third instar, and no delayed effects on larval development were found. Rather than the eggs, the 2003 heat wave probably affected, directly or indirectly, the young larvae that had already hatched when it occurred. Our results suggest that the effects of extreme climatic anomalies occurring over narrow time windows are difficult to determine because they strongly depend on the life stage of the species exposed to these anomalies. However, these effects could potentially reduce or enhance the average warming effects. As extreme weather conditions are predicted to become more frequent in the future, it is necessary to disentangle the effects of the warming trend from the effects of climatic anomalies when predicting the response of a
High Enthalpy Studies of Capsule Heating in an Expansion Tunnel Facility
NASA Technical Reports Server (NTRS)
Dufrene, Aaron; MacLean, Matthew; Holden, Michael
2012-01-01
Measurements were made on an Orion heat shield model to demonstrate the capability of the new LENS-XX expansion tunnel facility to make high-quality measurements of heat transfer distributions at flow velocities from 3 km/s (h0 = 5 MJ/kg) to 8.4 km/s (h0 = 36 MJ/kg). Thirty-nine heat transfer gauges, including both thin-film and thermocouple instruments, as well as four pressure gauges and high-speed Schlieren, were used to assess the aerothermal environment on the capsule heat shield. Only results from laminar boundary-layer runs are reported. A major finding of this test series is that the high-enthalpy, low-density flows displayed surface heating behavior consistent with some finite-rate recombination process occurring on the surface of the model. It is too early to speculate on the nature of the mechanism, but the response of the gauges on the surface seems generally repeatable and consistent over a range of conditions. This result is an important milestone in developing and proving a capability to make measurements in a ground test environment and extrapolate them to flight for conditions with extreme non-equilibrium effects. Additionally, no significant, isolated stagnation-point augmentation ("bump") was observed in the tests in this facility. Cases at higher Reynolds number showed the greatest overall increase in heating on the windward side of the model, which may in part be due to small-scale particulate.
Assessment of bulk modulus, thermal expansion and heat capacity of minerals
NASA Astrophysics Data System (ADS)
Saxena, S. K.
1989-04-01
Since the heat capacity of a solid at constant pressure (CP) is related to the isothermal bulk modulus (KT) and isobaric thermal expansion (αP), an assessment of the experimental data on these properties is necessary to establish the internal consistency of a thermodynamic data set. Through suitable formulations of the temperature dependence of bulk modulus, thermal expansion and heat capacity at constant volume (CV), and the application of non-linear programming techniques, it is possible to assess the internal consistency of these data and the measured heat capacity at constant pressure. Such optimization of the data on periclase has been performed with the following results:
αP = 0.3754 × 10⁻⁴ + 0.791 × 10⁻⁸ T − 0.784 T⁻² + 0.9148 T⁻³ (11)
KT = 1.684 × 10⁶ − 241 T − 0.056 T² + 0.167 × 10⁻⁴ T³ (bar) (12)
CV = 48.02 − 0.572 × 10⁶ T⁻² − 0.4876 × 10¹¹ T⁻⁴ − 0.1502 × 10¹² T⁻⁶ + 0.9836 × 10²⁰ T⁻⁸ (13)
V(1, 298) = 11.245 cm³/mol. (14)
If appropriate CP data are available, it is possible to estimate the temperature dependence of αP and KT for any solid. In suitable cases, the method may be used through a combination of the data on CP and phase equilibrium to calculate KT, its pressure derivative and thermal expansion. Such optimized data for brucite are: H0f(1, 298.15) = −924620, S0(1, 298.15) = 64.08,
αP = 0.1002 × 10⁻⁴ + 0.1468 × 10⁻⁷ T + 1.8606 T⁻² (18)
KT = 0.5712 Mbar, (∂KT/∂P) = 4.712,
CV = 118.58 − 0.639 × 10⁷ T⁻² + 0.34574 × 10¹² T⁻⁴ − 0.10538 × 10¹⁷ T⁻⁶. (19)
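The fitted periclase expressions quoted in this abstract can be evaluated directly once transcribed. A minimal sketch follows; the coefficients are taken from the (OCR-damaged) source text, so treat the numerical values as indicative rather than authoritative:

```python
def alpha_P(T):
    """Isobaric thermal expansion of periclase, 1/K (fit quoted in the
    abstract; coefficients transcribed from the source, OCR caveats apply)."""
    return 3.754e-5 + 0.791e-8 * T - 0.784 / T**2 + 0.9148 / T**3

def K_T(T):
    """Isothermal bulk modulus of periclase, bar (same caveat)."""
    return 1.684e6 - 241.0 * T - 0.056 * T**2 + 0.167e-4 * T**3

# Example evaluation at 1000 K
a = alpha_P(1000.0)   # ~4.5e-5 per K
k = K_T(1000.0)       # ~1.4e6 bar, i.e. ~140 GPa
```

The magnitudes at 1000 K (expansivity of a few 10⁻⁵ K⁻¹, bulk modulus near 140 GPa) are consistent with measured values for MgO, which is a quick sanity check on the transcription.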
Effects of heat shock protein gp96 on human dendritic cell maturation and CTL expansion.
Zhang, Yuxia; Zan, Yanlu; Shan, Ming; Liu, Changmei; Shi, Ming; Li, Wei; Zhang, Zhixin; Liu, Na; Wang, Fusheng; Zhong, Weidong; Liao, Fulian; Gao, George F; Tien, Po
2006-06-01
We previously reported that heat shock protein gp96 and its N-terminal fragment were able to stimulate expansion of CTLs specific for an HBV peptide (SYVNTNMGL) in BALB/c mice. Here we characterized the adjuvant effects of gp96 on human HLA-A2-restricted T cells. Full-length gp96 isolated from healthy human liver and recombinant fragments from both prokaryotic and eukaryotic cells were analyzed for their ability to stimulate maturation of human dendritic cells. These proteins were found to induce maturation in vitro of human monocyte-derived dendritic cells (MDDC) isolated from healthy donors as well as from HBV-positive hepatocellular carcinoma (HCC) patients. In HLA-A2.1/Kb transgenic mice, gp96 and the recombinant fragments were found to augment the CTL response specific for the HBcAg(18-27) FLPSDFFPSV peptide of hepatitis B virus. PMID:16630554
Boundary-layer computational model for predicting the flow and heat transfer in sudden expansions
NASA Technical Reports Server (NTRS)
Lewis, J. P.; Pletcher, R. H.
1986-01-01
Fully developed turbulent and laminar flows through symmetric planar and axisymmetric expansions with heat transfer were modeled using a finite-difference discretization of the boundary-layer equations. Using the boundary-layer equations in place of the Navier-Stokes equations to model separated flow reduced the computational effort, permitting turbulence-modelling studies to be carried out economically. For laminar flow, the reattachment length was well predicted for Reynolds numbers as low as 20, and the details of the trapped eddy were well predicted for Reynolds numbers above 200. For turbulent flows, the Boussinesq assumption was used to express the Reynolds stresses in terms of a turbulent viscosity. Near-wall algebraic turbulence models based on Prandtl's mixing-length model and on the maximum Reynolds shear stress were compared.
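The mixing-length closure referred to above relates the eddy viscosity to the local mean-velocity gradient, ν_t = l_m²|du/dy|. A minimal sketch follows; the Van Driest damping form and the constants used here are a common textbook variant, not necessarily the paper's specific near-wall calibration:

```python
import numpy as np

KAPPA = 0.41     # von Karman constant
A_PLUS = 26.0    # Van Driest damping constant

def eddy_viscosity(y, dudy, u_tau, nu):
    """nu_t = l_m^2 * |du/dy| with l_m = kappa*y*(1 - exp(-y+/A+))."""
    y_plus = y * u_tau / nu
    l_m = KAPPA * y * (1.0 - np.exp(-y_plus / A_PLUS))
    return l_m**2 * np.abs(dudy)

# Example: log-law velocity gradient du/dy = u_tau / (kappa * y)
u_tau, nu = 0.05, 1.5e-5          # friction velocity, kinematic viscosity
y = np.linspace(1e-4, 0.01, 50)   # wall-normal distances, m
nu_t = eddy_viscosity(y, u_tau / (KAPPA * y), u_tau, nu)
```

In the log-law region the damping factor approaches one and ν_t grows roughly as κ·y·u_τ, which is the expected near-wall behavior of such algebraic models.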
Surface urban heat island effect and its relationship with urban expansion in Nanjing, China
NASA Astrophysics Data System (ADS)
Tu, Lili; Qin, Zhihao; Li, Wenjuan; Geng, Jun; Yang, Lechan; Zhao, Shuhe; Zhan, Wenfeng; Wang, Fei
2016-04-01
Nanjing, a typical megacity in eastern China, has undergone dramatic expansion during the past decade. The surface urban heat island (SUHI) effect is an important indicator of the environmental consequences of urbanization and has rapidly changed the dynamics of Nanjing. Accurate measurements of the effects and changes resulting from the SUHI effect may provide useful information for urban planning. Index, centroid transfer, and correlation analyses were conducted to measure the dynamics of the SUHI and elucidate the relationship between the SUHI and urban expansion in Nanjing over the past decade. Overall, the results indicated that (1) the region affected by the SUHI effect gradually expanded southward and eastward from 2000 to 2012; (2) the centroid of the SUHI moved gradually southeastward and then southward and southwestward, which is consistent with the movement of the urban centroid; (3) the trajectory of the level-3 SUHI centroid did not correspond with the urban mass or SUHI centroids during the study period and (4) the SUHI intensity and urban fractal characteristics were negatively correlated. In addition, we presented insights regarding the minimization of the SUHI effect in cities such as Nanjing, China.
Heat kernels on cone of AdS2 and k-wound circular Wilson loop in AdS5 × S5 superstring
NASA Astrophysics Data System (ADS)
Bergamin, R.; Tseytlin, A. A.
2016-04-01
We compute the one-loop world-sheet correction to the partition function of the AdS5 × S5 superstring that should represent the k-fundamental circular Wilson loop in the planar limit. The 2d metric of the minimal surface ending on the k-wound circle at the boundary is that of a cone of AdS2 with deficit angle 2π(1-k). We compute the determinants of the 2d fluctuation operators by first constructing the heat kernels of the scalar and spinor Laplacians on the cone using the Sommerfeld formula. The final expression for the k-dependent part of the one-loop correction has a simple integral representation but differs from earlier results.
NASA Astrophysics Data System (ADS)
Speranza, Giulio; Vona, Alessandro; Di Genova, Danilo; Romano, Claudia
2015-04-01
The overall characteristics and peculiarities of rocksalt are well known and have made rocksalt bodies one of the most favorable choices for nuclear waste storage. Low- to medium-temperature effects related to nuclear waste heat generation have been studied by several authors. However, high-temperature salt behavior has been poorly investigated, as has the effect of temperature increase on fluids contained in halite. Here we present the results of thermal expansion experiments in the range 50-700°C on halite single crystals with different fluid contents. Our results show that thermally unaltered halite is subject, upon heating, to thermal instability around 300-450°C, with a sudden increase in expansivity, sample cracking, and fluid emission. Moreover, thermal expansion is higher for fluid-rich salts. In contrast, thermally altered halite lacks this instability, showing a constant linear thermal expansion regardless of its fluid content. Rocksalt thermal instability, which is likely due to fluid overpressure developing upon heating, also leads to a reduction in bulk density. Thus, unaltered salt heated to temperatures around 300°C or more could suffer damage, fluid emission, and a density drop, increasing the salt mobility. For this reason, a detailed and quantitative study of fluid type, abundance, and arrangement within crystals, as well as their response to stress and thermal changes, is fundamental for both scientific and applicative purposes regarding halite.
A STUDY ON DEF-RELATED EXPANSION IN HEAT-CURED CONCRETE
NASA Astrophysics Data System (ADS)
Kawabata, Yuichiro; Matsushita, Hiromichi
This paper reports the requirements for deleterious expansion due to delayed ettringite formation (DEF), based on field experience. In recent years, deleterious expansion of concrete has been reported. The affected concrete is characterized by expansion and cracking after several years of service in wet environments. In many cases, the concrete consists of white cement, limestone and copper slag, and it has been manufactured at elevated temperatures for early shipment. Detailed analysis made clear that the cause of the deleterious expansion was DEF. The gaps that are characteristic of DEF-damaged concrete were observed around limestone aggregate. There is a possibility that the use of limestone aggregate affects DEF-related expansion, although the steam-curing condition was the most influential factor. Based on the experimental data, the mechanism of DEF-related expansion and a methodology for diagnosing DEF-deteriorated concrete structures are discussed in this paper.
Negative thermal expansion and anomalies of heat capacity of LuB50 at low temperatures
Novikov, V. V.; Zhemoedov, N. A.; Matovnikov, A. V.; Mitroshenkov, N. V.; Kuznetsov, S. V.; Bud'ko, S. L.
2015-07-20
Heat capacity and thermal expansion of LuB50 boride were experimentally studied in the 2-300 K temperature range. The data reveal an anomalous contribution to the heat capacity at low temperatures, whose value is proportional to the first power of temperature. This anomaly in the heat capacity was identified as being caused by disorder in the LuB50 crystalline structure, and it can be described by the soft atomic potential (SAP) model. The parameters of the approximation were determined. The temperature dependence of the LuB50 heat capacity over the whole temperature range was approximated by the sum of the SAP contribution, a Debye component, and two Einstein components. The parameters of the SAP contribution for LuB50 were compared to the corresponding values for LuB66, which was studied earlier. Negative thermal expansion at low temperatures was experimentally observed for LuB50. Analysis of the experimental temperature dependence of the Grüneisen parameter of LuB50 suggested that the low-frequency oscillations described by the SAP model are responsible for the negative thermal expansion. As a result, the glasslike character of the low-temperature thermal behavior of LuB50 was confirmed.
NASA Astrophysics Data System (ADS)
Terekhov, V. I.; Bogatko, T. V.
2016-06-01
The results of a numerical study of the influence of the thicknesses of the dynamic and thermal boundary layers on turbulent separation and heat transfer in a tube with sudden expansion are presented. The first part of this work studies the influence of the thickness of the dynamic boundary layer, which was varied by changing the length of the stabilization section from zero to half the tube diameter. In the second part, the flow before separation was hydrodynamically stabilized, and the thermal layer before the expansion could change its thickness from 0 to D1/2. The Reynolds number was varied in the range Re_D1 = 6.7 × 10³ to 1.33 × 10⁵, and the tube expansion ratio remained constant at ER = (D2/D1)² = 1.78. A significant effect of the thickness of the separated boundary layer on both the dynamic and thermal characteristics of the flow is shown. In particular, it was found that with an increase in the boundary-layer thickness the recirculation zone grows and the maximal Nusselt number decreases. Growth of the thermal-layer thickness was found not to affect the hydrodynamic characteristics of the flow after separation, but it does reduce the heat transfer intensity in the separation region and moves the location of maximal heat transfer away from the point of tube expansion. A generalized correlation for the maximal Nusselt number at various thermal-layer thicknesses is given. Comparison with experimental data confirmed the main trends in the behavior of heat and mass transfer in separated flows behind a step with different thermal prehistories.
NASA Astrophysics Data System (ADS)
Oon, Cheen Sean; Nee Yew, Sin; Chew, Bee Teng; Salim Newaz, Kazi Md; Al-Shamma'a, Ahmed; Shaw, Andy; Amiri, Ahmad
2015-05-01
Flow separation and reattachment of a 0.2% TiO2 nanofluid in an asymmetric abrupt expansion are studied in this paper. Such flows occur in various engineering and heat transfer applications. A computational fluid dynamics package (FLUENT) is used to investigate turbulent nanofluid flow in a horizontal double-tube heat exchanger. The mesh for this model consists of 43383 nodes and 74891 elements. Only a quarter of the annular pipe is modeled, as the geometry is symmetric. The standard k-epsilon model with a second-order implicit, pressure-based solver is applied. Reynolds numbers between 17050 and 44545, step-height ratios of 1 and 1.82, and a constant heat flux of 49050 W/m² were used in the simulation. Water was used as the working fluid to benchmark the heat transfer enhancement. The numerical results show that increasing the Reynolds number increases the heat transfer coefficient and Nusselt number of the flowing fluid. Moreover, the surface temperature drops to its lowest value just after the expansion and then gradually increases along the pipe. Finally, the chaotic movement and higher thermal conductivity of the TiO2 nanoparticles contribute to the overall heat transfer enhancement of the nanofluid compared to water.
NASA Astrophysics Data System (ADS)
Capra, B. R.; Morgan, R. G.; Leyland, P.
2005-02-01
The present study focused on simulating a trajectory point towards the end of the first experimental heatshield of the FIRE II vehicle, at a total flight time of 1639.53 s. Scale replicas were sized according to binary scaling and instrumented with thermocouples for testing in the X1 expansion tube, located at The University of Queensland. Correlation of flight to experimental data was achieved through the separation and independent treatment of the heat-transfer modes. Preliminary investigation indicates that the absolute value of the radiant surface flux is conserved between two binary-scaled models, whereas convective heat transfer increases with the length scale. This difference in scaling results in the overall contribution of radiative heat transfer diminishing to less than 1% in expansion tubes, from a flight value of approximately 9-17%. Empirical correlations show that the St√Re number decreases, under special circumstances, in expansion tubes by the percentage of radiation present on the flight vehicle. The results obtained in this study give a strong indication that the relative radiative heat transfer contribution in the expansion tube tests is less than that in flight, supporting the analysis that the absolute value remains constant with binary scaling. Key words: heat transfer, FIRE II flight vehicle, expansion tubes, binary scaling. NOMENCLATURE: dA, elemental surface area, m²; H0, stagnation enthalpy, MJ/kg; L, arbitrary length, m; ls, scale factor equal to Lf/Le; M, Mach number; ṁ, mass flow rate, kg/s; p, pressure, kPa; q̇, heat transfer rate, W/m²; q̄, averaged heat transfer rate, W/m²; RN, nose radius, m; Re, Reynolds number, equal to ρURN/µ; s/RD, radial distance from symmetry axis over radius of forebody (D/2); St, Stanton number, equal to q̇/(ρUH0); St√Re = q̇RN^(1/2)/((ρU)^(1/2)µ^(1/2)H0); T, temperature, K; U, velocity, m/s; Ue, equivalent velocity, m/s, equal to √(2H0); U1, primary shock speed, m/s; U2, secondary shock speed, m/s; ρ, density, kg/m³; ρL, binary
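The St√Re parameter defined in the nomenclature is an algebraic combination of the free-stream quantities and the measured heat flux. A minimal evaluation sketch follows; the numerical inputs are illustrative placeholders, not the paper's data:

```python
import math

def stanton_sqrt_re(q_dot, rho, U, H0, R_N, mu):
    """St*sqrt(Re) with St = q_dot/(rho*U*H0) and Re = rho*U*R_N/mu,
    equivalent to q_dot*sqrt(R_N) / (sqrt(rho*U)*sqrt(mu)*H0)."""
    St = q_dot / (rho * U * H0)
    Re = rho * U * R_N / mu
    return St * math.sqrt(Re)

# Illustrative expansion-tube-like magnitudes (placeholders only)
val = stanton_sqrt_re(q_dot=5.0e6,   # W/m^2
                      rho=0.01,      # kg/m^3
                      U=9000.0,      # m/s
                      H0=40.0e6,     # J/kg
                      R_N=0.05,      # m
                      mu=2.0e-5)     # Pa*s
```

Because both the Stanton and Reynolds numbers are dimensionless, St√Re is a convenient correlation parameter for comparing blunt-body stagnation heating between flight and sub-scale expansion-tube conditions.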
ERIC Educational Resources Information Center
Moore, William M.
1984-01-01
Describes the procedures and equipment for an experiment on the adiabatic expansion of gases suitable for demonstration and discussion in the physical chemistry laboratory. The expansion produced shows how the process can change the temperature and still return to a different location on an isotherm. (JN)
Shen, Bo
2011-01-01
This paper describes steady-state performance simulations of a 3-ton R-22 split heat pump in heating mode. In total, 150 steady-state points were simulated, covering refrigerant charge levels from 70% to 130% of the nominal value; outdoor temperatures of 17 °F (-8.3 °C), 35 °F (1.7 °C), and 47 °F (8.3 °C); indoor air flow rates from 60% to 150% of the rated value; and two types of expansion devices (fixed orifice and thermostatic expansion valve). A charge-tuning method, which calibrates the charge-inventory model against measurements at two operating conditions, was applied and shown to improve the system simulation accuracy significantly over an extensive range of charge levels. In addition, we discuss the effects of the suction-line accumulator in modeling a heat pump system using either a fixed orifice or a thermostatic expansion valve. Last, we identify the issue of refrigerant mass-flow maldistribution at low charge levels and propose an improved modeling approach.
Huang, Lulu; Massa, Lou
2010-01-01
The Kernel Energy Method (KEM) provides a way to calculate the ab initio energy of very large biological molecules. The results are accurate, and the computational time is reduced. By using a list of double-kernel interactions, a significant additional reduction of computational effort may be achieved while still retaining ab initio accuracy. A numerical comparison of the indices that name the known double interactions allows one to list higher-order interactions having the property of topological continuity within the full molecule of interest. When that list of interactions is unpacked as a kernel expansion, which weights the relative importance of each kernel in an expression for the total molecular energy, high accuracy and a further significant reduction in computational effort result. A KEM molecular energy calculation based on the HF/STO-3G chemical model is applied to the protein insulin as an illustration. PMID:21243065
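The double-kernel expansion described above combines single- and double-kernel energies; the standard KEM formula is E ≈ Σ_{i<j} E_ij − (n−2) Σ_i E_i for n kernels. A toy sketch follows, using a pairwise-additive model energy in place of a quantum-chemical calculation (for such an energy the formula is exact, which makes a convenient check; real ab initio energies are only approximately recovered):

```python
from itertools import combinations

def pair_energy(a, b):
    """Toy Coulomb-like interaction between two 'atoms' (charge, position)."""
    qa, xa = a
    qb, xb = b
    return qa * qb / abs(xa - xb)

def fragment_energy(atoms):
    """Total pairwise energy inside one fragment (kernel)."""
    return sum(pair_energy(a, b) for a, b in combinations(atoms, 2))

def kem_energy(kernels):
    """KEM estimate: sum of double-kernel energies minus (n-2) times
    the sum of single-kernel energies."""
    n = len(kernels)
    doubles = sum(fragment_energy(ki + kj)
                  for ki, kj in combinations(kernels, 2))
    singles = sum(fragment_energy(k) for k in kernels)
    return doubles - (n - 2) * singles

# Three kernels of a toy 'molecule': (charge, 1D coordinate) pairs
kernels = [[(1.0, 0.0), (-1.0, 1.0)],
           [(0.5, 3.0), (0.5, 4.0)],
           [(-0.5, 6.0), (1.0, 7.5)]]
exact = fragment_energy([atom for k in kernels for atom in k])
approx = kem_energy(kernels)
```

The subtraction removes the single-kernel energies that are double-counted when every pair of kernels is evaluated together, which is why the estimate is exact whenever the energy is strictly pairwise additive.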
NASA Technical Reports Server (NTRS)
Moore, J. A.
1975-01-01
A general description of the Langley 6-inch expansion tube is presented along with discussion of the basic components, internal resistance heater, arc-discharge assemblies, instrumentation, and operating procedure. Preliminary results using unheated and resistance-heated helium as the driver gas are presented. The driver-gas pressure ranged from approximately 17 to 59 MPa and its temperature ranged from 300 to 510 K. Interface velocities of approximately 3.8 to 6.7 km/sec were generated between the test gas and the acceleration gas using air as the test gas and helium as the acceleration gas. Test flow quality and comparison of measured and predicted expansion-tube flow quantities are discussed.
Chemical path of ettringite formation in heat-cured mortar and its relationship to expansion
NASA Astrophysics Data System (ADS)
Shimada, Yukie
Delayed ettringite formation (DEF) refers to a deterioration process of cementitious materials that have been exposed to high temperatures and subsequent moist conditions, often resulting in damaging expansion. The occurrence of DEF-related damage may lead to severe economic consequences. While concerns of related industries continue to raise the need for reliable and practical test methods for DEF assessment, the mechanism(s) involved in DEF remains controversial. In order to provide a better understanding of the DEF phenomenon, the present study investigated mortar systems made with various mixing and curing parameters for detailed changes in pore solution chemistry and solid phase development, while corresponding changes in physical properties were also closely monitored. This approach enabled the development of a correlation between the chemical and physical changes and provided the opportunity for a holistic analysis. The present study revealed that there exist relationships between the physical properties and expansive behavior. The normal aging process of the cementitious systems involves dissolution of ettringite crystals finely distributed within the hardened cement paste and subsequent recrystallization as innocuous crystals in the largest accessible spaces. This process, known as Ostwald ripening, facilitates relaxation of any expansive pressure developed within the paste. The rate of Ostwald ripening is rather slow in a well-compacted, dense microstructure containing few flaws. Thus, an increase in mechanical strength accompanied by a reduction in diffusion rate by altering the mortar parameters increases the risk of DEF-related expansion and vice versa. Introduction of the Ostwald ripening process as a stress relief mechanism to the previously proposed paste expansion hypothesis provides a comprehensive description of the observed expansive behavior. Chemical analyses provided semi-quantitative information on the stability of ettringite during high
Mihaila, Bogden; Zubelewicz, Aleksander; Stan, Marius; Ramirez, Juan
2008-01-01
We study the thermal expansion of a UO2+x nuclear fuel rod in the context of a model coupling heat transfer and oxygen diffusion discussed previously by J.C. Ramirez, M. Stan and P. Cristea [J. Nucl. Mater. 359 (2006) 174]. We report results of simulations performed for steady-state and time-dependent regimes in one-dimensional configurations. A variety of initial- and boundary-value scenarios are considered. We use material properties obtained from previously published correlations or from analysis of previously published data. All simulations were performed using the commercial code COMSOL Multiphysics and are readily extendable to include multidimensional effects.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable to large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis of why the proposed approximation model works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallel approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle to using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. Empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance, in terms of clustering precision, than related approximate clustering approaches. PMID:25528318
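A common way to make kernel methods scalable via sampling, as the subspace idea above suggests, is the Nyström approximation: compute kernel features against m sampled landmarks and run ordinary competitive (winner-take-all) learning in that low-dimensional space. This is only a generic sketch of the sampling idea, not the authors' exact AKCL construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_features(X, m=20, gamma=0.5):
    """Map X to an m-dim space whose inner products approximate the kernel."""
    idx = rng.choice(len(X), size=m, replace=False)
    L = X[idx]                                   # sampled landmarks
    Kmm = rbf(L, L, gamma)
    w, V = np.linalg.eigh(Kmm)
    inv_sqrt = np.where(w > 1e-8, 1.0 / np.sqrt(np.maximum(w, 1e-8)), 0.0)
    W = (V * inv_sqrt) @ V.T                     # Kmm^{-1/2} (pseudo-inverse)
    return rbf(X, L, gamma) @ W

def competitive_learning(F, k=2, epochs=20, lr=0.1):
    """Online winner-take-all updates of k centroids in feature space."""
    # seed one centroid at a random point, the second at the farthest point
    c = [F[rng.integers(len(F))]]
    c.append(F[np.argmax(((F - c[0]) ** 2).sum(1))])
    C = np.array(c)
    for _ in range(epochs):
        for f in F[rng.permutation(len(F))]:
            j = np.argmin(((C - f) ** 2).sum(1))  # winner takes the update
            C[j] += lr * (f - C[j])
    return np.argmin(((F[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)

# Two well-separated blobs as a toy dataset
X = np.vstack([rng.normal(0, 0.3, (60, 2)), rng.normal(5, 0.3, (60, 2))])
labels = competitive_learning(nystrom_features(X))
```

The full n×n kernel matrix is never formed: only the n×m landmark block is needed, which is the source of the memory and runtime savings that motivate this family of approximations.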
NASA Astrophysics Data System (ADS)
Tomarov, G. V.; Nikolaev, A. E.; Semenov, V. N.; Shipkov, A. A.; Shepelev, S. V.
2015-06-01
Results are presented from laboratory studies of material properties and from numerical and analytical investigations assessing the stress-strain state of the metal of bellows expansion joints, used in the MOEK district heating system pipelines, that were subjected to corrosion failure. The main causes and the dominant mechanisms of failure of the expansion joints have been identified. The influence of the initial crevice defects and of the operating conditions on the features and intensity of the destruction processes in these expansion joints has been established.
Nakanishi, Koichi; Kogure, Akinori; Deuchi, Keiji; Kuwana, Ritsuko; Takamatsu, Hiromu; Ito, Kiyoshi
2015-01-01
We previously developed a method for evaluating the heat resistance of microorganisms by measuring the transition temperature at which the coefficient of linear expansion of a cell changes. Here, we performed heat resistance measurements using a scanning probe microscope with a nano thermal analysis system. The microorganisms studied included six strains of the genus Bacillus or related genera, one strain each of the thermophilic obligate anaerobic bacterial genera Thermoanaerobacter and Moorella, two strains of heat-resistant mold, two strains of non-sporulating bacteria, and one strain of yeast. Both vegetative cells and spores were evaluated. The transition temperature at which the coefficient of linear expansion due to heating changed from a positive value to a negative value correlated strongly with the heat resistance of the microorganism as estimated from the D value. The microorganisms with greater heat resistance exhibited higher transition temperatures. There was also a strong negative correlation between the coefficient of linear expansion and heat resistance in bacteria and yeast, such that microorganisms with greater heat resistance showed lower coefficients of linear expansion. These findings suggest that our method could be useful for evaluating the heat resistance of microorganisms. PMID:26699861
Dalir, Nemat
2014-01-01
An exact analytical solution is obtained for the problem of three-dimensional transient heat conduction in a multilayered sphere. The sphere has multiple layers in the radial direction and, in each layer, time-dependent and spatially nonuniform volumetric internal heat sources are considered. To obtain the temperature distribution, the eigenfunction expansion method is used. An arbitrary combination of homogeneous boundary conditions of the first or second kind can be applied in the angular and azimuthal directions. In addition, the solution is valid for nonhomogeneous boundary conditions of the third kind (convection) in the radial direction. A case-study problem for a three-layer quarter-spherical region is solved and the results are discussed.
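As a toy illustration of the eigenfunction expansion method in its simplest setting, consider a one-dimensional slab with zero end temperatures rather than the paper's three-dimensional multilayer sphere; the diffusivity, series length, and quadrature grid below are assumptions.

```python
import numpy as np

def slab_temperature(f, L, a, x, t, n_terms=50):
    """Eigenfunction-expansion solution of u_t = a*u_xx on [0, L]
    with u(0, t) = u(L, t) = 0 and initial temperature u(x, 0) = f(x):
        u(x, t) = sum_n b_n sin(n*pi*x/L) exp(-a*(n*pi/L)^2 * t)
    """
    xs = np.linspace(0.0, L, 2001)
    dx = xs[1] - xs[0]
    u = np.zeros_like(np.asarray(x, dtype=float))
    for n in range(1, n_terms + 1):
        phi = np.sin(n * np.pi * xs / L)                          # eigenfunction
        g = f(xs) * phi
        b_n = (2.0 / L) * dx * (g.sum() - 0.5 * (g[0] + g[-1]))   # trapezoid rule
        lam = a * (n * np.pi / L) ** 2                            # eigenvalue: decay rate
        u += b_n * np.sin(n * np.pi * np.asarray(x) / L) * np.exp(-lam * t)
    return u
```

Each boundary-condition combination mentioned in the abstract changes only the eigenfunctions and eigenvalues; the expansion machinery stays the same.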
Ning, F L; Glavatskiy, K; Ji, Z; Kjelstrup, S; H Vlugt, T J
2015-01-28
Understanding the thermal and mechanical properties of CH4 and CO2 hydrates is essential for the replacement of CH4 with CO2 in natural hydrate deposits as well as for CO2 sequestration and storage. In this work, we present the isothermal compressibility, isobaric thermal expansion coefficient, and specific heat capacity of fully occupied single-crystal sI CH4 hydrates, CO2 hydrates, and hydrates of their mixtures using molecular dynamics simulations. Eight rigid/nonpolarisable water interaction models and three CH4 and CO2 interaction potentials were selected to examine the atomic interactions in the sI hydrate structure. The TIP4P/2005 water model combined with the DACNIS united-atom CH4 potential and the TraPPE rigid CO2 potential were found to be suitable molecular interaction models. Using these molecular models, the results indicate that both the lattice parameters and the compressibility of the sI hydrates agree with experimental measurements. The calculated bulk modulus for any mixture ratio of CH4 and CO2 hydrates varies between 8.5 and 10.4 GPa at 271.15 K over the pressure range 10-100 MPa. The calculated thermal expansion and specific heat capacities of CH4 hydrates are also comparable with experimental values above approximately 260 K. The compressibility and expansion coefficient of guest-gas-mixture hydrates increase with an increasing ratio of CO2 to CH4, while the bulk modulus and specific heat capacity exhibit the opposite trend. The presented specific heat capacities of 2220-2699 J kg⁻¹ K⁻¹ for any mixture ratio of CH4 and CO2 hydrates are the first reported so far. These computational results provide a useful database for practical natural gas recovery from CH4 hydrates in deep oceans where CO2 is considered to replace CH4, as well as for phase equilibrium and mechanical stability of gas hydrate-bearing sediments. The computational schemes also provide an appropriate balance between computational accuracy and cost for predicting
Ritchie, R.H.; Sakakura, A.Y.
1956-01-01
The formal solutions of problems involving transient heat conduction in infinite internally bounded cylindrical solids may be obtained by the Laplace transform method. Asymptotic series representing the solutions for large values of time are given in terms of functions related to the derivatives of the reciprocal gamma function. The results are applied to the case of the internally bounded infinite cylindrical medium with (a) the boundary held at constant temperature, (b) constant heat flow over the boundary, and (c) the "radiation" boundary condition. A problem in the flow of gas through a porous medium is considered in detail.
Electron and ion dynamics during the expansion of a laser-heated plasma under vacuum
NASA Astrophysics Data System (ADS)
Bellei, C.; Foord, M. E.; Bartal, T.; Key, M. H.; McLean, H. S.; Patel, P. K.; Stephens, R. B.; Beg, F. N.
2012-03-01
The trajectories of electrons and ions when a hot plasma expands under vacuum are studied in detail from a theoretical point of view and with the aid of numerical simulations. Exact analytic solutions are obtained in multi-dimensions, starting from the solution for the expansion of a quasi-neutral, Gaussian, collisionless plasma in vacuum [D. S. Dorozhkina and V. E. Semenov, Phys. Rev. Lett. 81, 2691 (1998)]. Focusing of laser-accelerated ions with concave targets is investigated with the hybrid particle-in-cell code Lsp. For a given laser energy and pulse duration, a larger laser focal spot is found to be beneficial to focus the ion beam to a smaller focal spot, due both to a geometric effect and to the decrease in the transverse gradient of the hot electron pressure.
Effect of surface expansion on transient heat transfer from a cylinder in cross flow
Youssef, F.A.
1996-12-31
A transformation that fixes the integration domain in time is used to numerically solve the problem of forced convection heat transfer from an impulsively started translating, rotating, and expanding circular cylinder. A previously derived stability condition for the expanding cylinder surface is also used. This condition makes it possible to use the classical explicit forward-in-time, centered-in-space scheme to solve the vorticity and energy equations with a reasonable time step. The fast Fourier transform is used to solve the Poisson equation for the stream function. The time development of the temperature field is obtained and discussed. The evolution of the local Nusselt number over the circular cylinder surface with time is explored. The effects of different parameters related to the flow field, surface expansion, and rotation are investigated.
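As a minimal illustration of the explicit scheme this abstract relies on, the sketch below applies forward-in-time, centered-in-space (FTCS) differencing to one-dimensional diffusion with fixed end values. The abstract's problem is the full two-dimensional vorticity/energy system on an expanding cylinder, so this shows only the core update and its classical stability limit, with all parameter values assumed.

```python
import numpy as np

def ftcs_diffusion(u0, alpha, dx, dt, steps):
    """Forward-time, centered-space update for u_t = alpha * u_xx
    with fixed (Dirichlet) end values."""
    r = alpha * dt / dx**2
    # classical explicit stability limit for 1D diffusion
    assert r <= 0.5, "FTCS stability requires alpha*dt/dx^2 <= 1/2"
    u = u0.astype(float).copy()
    for _ in range(steps):
        # centered second difference on interior points only
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u
```

The stability condition quoted in the assert is the reason the abstract emphasizes that a "reasonable time step" is achievable: on an expanding surface, dx changes with time and the admissible dt changes with it.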
Enzyme Activities of Starch and Sucrose Pathways and Growth of Apical and Basal Maize Kernels
Ou-Lee, Tsai-Mei; Setter, Tim Lloyd
1985-01-01
Apical kernels of maize (Zea mays L.) ears have smaller size and lower growth rates than basal kernels. To improve our understanding of this difference, the developmental patterns of starch-synthesis-pathway enzyme activities and the accumulation of sugars and starch were determined in apical- and basal-kernel endosperm of greenhouse-grown maize (cultivar Cornell 175) plants. Plants were synchronously pollinated, kernels were sampled from apical and basal ear positions throughout kernel development, and enzyme activities were measured in crude preparations. Several factors were correlated with the higher dry matter accumulation rate and larger mature kernel size of basal-kernel endosperm. During the period of cell expansion (7 to 19 days after pollination), the activity of insoluble (acid) invertase and the sucrose concentration in endosperm of basal kernels exceeded those in apical kernels. Soluble (alkaline) invertase was also high during this stage but was the same in endosperm of basal and apical kernels, while glucose concentration was higher in apical-kernel endosperm. During the period of maximal starch synthesis, the activities of sucrose synthase, ADP-Glc-pyrophosphorylase, and insoluble (granule-bound) ADP-Glc-starch synthase were higher in endosperm of basal than apical kernels. Soluble ADP-Glc-starch synthase, which was maximal during the early stage before starch accumulated, was the same in endosperm from apical and basal kernels. It appeared that differences in metabolic potential between apical and basal kernels were established at an early stage in kernel development. PMID:16664503
The role of turbulence in coronal heating and solar wind expansion.
Cranmer, Steven R; Asgari-Targhi, Mahboubeh; Miralles, Mari Paz; Raymond, John C; Strachan, Leonard; Tian, Hui; Woolsey, Lauren N
2015-05-13
Plasma in the Sun's hot corona expands into the heliosphere as a supersonic and highly magnetized solar wind. This paper provides an overview of our current understanding of how the corona is heated and how the solar wind is accelerated. Recent models of magnetohydrodynamic turbulence have progressed to the point of successfully predicting many observed properties of this complex, multi-scale system. However, it is not clear whether the heating in open-field regions comes mainly from the dissipation of turbulent fluctuations that are launched from the solar surface, or whether the chaotic 'magnetic carpet' in the low corona energizes the system via magnetic reconnection. To help pin down the physics, we also review some key observational results from ultraviolet spectroscopy of the collisionless outer corona. PMID:25848083
NASA Astrophysics Data System (ADS)
Plug, L. J.; West, J. J.
2009-03-01
Thaw lakes, widespread in permafrost lowlands, expand their basins by conduction of heat from warm lake water into adjacent permafrost, subsidence of icy permafrost on thawing, and movement of thawed sediment from lake margins into basins by diffusive and advective mass wasting. We describe a cross-sectional numerical model with thermal processes and mass wasting. To test the model and provide an initial investigation of its utility, the model is driven using historical daily temperatures and permafrost conditions for the northern Seward Peninsula, Alaska (NSP; thick syngenetic ice, mean annual air temperature (MAAT) -6°C) and the Yukon coastal plain (YCP; thin epigenetic ice, MAAT -10°C). In the model, lakes develop dynamic equilibrium profiles that are independent of initial morphology. These profiles migrate outward episodically and match the morphology of profiles from lakes that were measured at each site. Modeled NSP lakes expand more rapidly than YCP lakes (0.26 versus 0.10 m a⁻¹) under the respective modern climates. When identical climates are imposed, NSP lakes still grow more rapidly because their deeper basins and steeper bathymetric slopes move thawed insulating sediment away from the lake margin. In sensitivity tests, an increase of 3°C in MAAT causes 2.5× (NSP) and 1.6× (YCP) faster expansion of lakes. An 8°C decrease essentially halts expansion at both sites, consistent with paleostudies that attribute basins to postglacial warming. In the model, basins expand monotonically but lakes do not. The 1σ interannual variability of lake expansion is 0.51 (NSP) and 0.44 m a⁻¹ (YCP), with single-year rates of up to ±8 m occurring because of instabilities from thermal/mass-movement coupling even under a stationary climate. This variability is likely a minimum estimate compared to natural variability, and suggests that long measurement time series, of basins rather than lake surfaces, would best detect thermokarst acceleration resulting from climate change.
NASA Astrophysics Data System (ADS)
Allen, Philip B.
2015-08-01
The quasiharmonic (QH) approximation uses harmonic vibrational frequencies ω_{Q,H}(V) computed at volumes V near the volume V_0 where the Born-Oppenheimer (BO) energy E_el(V) is minimum. When this is used in the harmonic free energy, the QH approximation gives a good zeroth-order theory of thermal expansion and first-order theory of the bulk modulus, where nth-order means smaller than the leading term by ε^n, with ε = ħω_vib/E_el or k_B T/E_el, and E_el an electronic energy scale, typically 2 to 10 eV. Experiment often shows evidence for next-order corrections. When such corrections are needed, anharmonic interactions must be included. The most accessible measure of anharmonicity is the quasiparticle (QP) energy ω_Q(V,T) seen experimentally by vibrational spectroscopy. However, this cannot simply be inserted into the harmonic free energy F_H. In this paper, a free energy is found that corrects the double counting of anharmonic interactions made when F is approximated by F_H(ω_Q(V,T)). The term "QP thermodynamics" is used for this way of treating anharmonicity. It enables (n+1)th-order corrections if QH theory is accurate to order n. This procedure is used to give corrections to the specific heat and volume thermal expansion. The QH formulas for the isothermal (B_T) and adiabatic (B_S) bulk moduli are clarified, and the route to higher-order corrections is indicated.
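For context, the quasiharmonic free energy underlying this discussion takes the standard form below; the notation (mode sum over Q, Grüneisen parameters) is textbook QH theory and is an assumption on our part, not an equation quoted from the paper.

```latex
% Quasiharmonic free energy: BO energy plus harmonic phonon free energy
% evaluated with volume-dependent harmonic frequencies
F_{QH}(V,T) = E_{el}(V) + \sum_{Q}\left[\frac{\hbar\omega_{Q,H}(V)}{2}
            + k_B T \ln\!\left(1 - e^{-\hbar\omega_{Q,H}(V)/k_B T}\right)\right]

% Minimizing F_QH over V at each T gives the Grueneisen form of the
% volume thermal expansion, with mode parameters
% \gamma_Q = -\,d\ln\omega_{Q,H}/d\ln V:
\alpha(T) = \frac{1}{B_T V}\sum_{Q}\gamma_Q\, c_{V,Q}(T)
```

The paper's point is that replacing ω_{Q,H}(V) by the measured quasiparticle ω_Q(V,T) in F_QH double-counts anharmonicity, which its corrected free energy removes.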
Sparse representation with kernels.
Gao, Shenghua; Tsang, Ivor Wai-Hung; Chia, Liang-Tien
2013-02-01
Recent research has shown the initial success of sparse coding (Sc) in solving many computer vision tasks. Motivated by the fact that the kernel trick can capture the nonlinear similarity of features, which helps in finding a sparse representation of nonlinear features, we propose kernel sparse representation (KSR). Essentially, KSR is a sparse coding technique in a high-dimensional feature space mapped by an implicit mapping function. We apply KSR to feature coding in image classification, face recognition, and kernel matrix approximation. More specifically, by incorporating KSR into spatial pyramid matching (SPM), we develop KSRSPM, which achieves good performance for image classification. Moreover, KSR-based feature coding can be shown to be a generalization of the efficient match kernel and an extension of Sc-based SPM. We further show that our proposed KSR using a histogram intersection kernel (HIK) can be considered a soft-assignment extension of HIK-based feature quantization in the feature coding process. Besides feature coding, compared with sparse coding, KSR can learn more discriminative sparse codes and achieve higher accuracy for face recognition. Moreover, KSR can also be applied to kernel matrix approximation in large-scale learning tasks, where it demonstrates robustness, especially when only a small fraction of the data is used. Extensive experimental results demonstrate promising results for KSR in image classification, face recognition, and kernel matrix approximation. All these applications prove the effectiveness of KSR in computer vision and machine learning tasks. PMID:23014744
Duff, I.
1994-12-31
This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: "Current status of user-level sparse BLAS"; "Current status of the sparse BLAS toolkit"; and "Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit".
Heating of the Upper Atmosphere and the Expansion of the Corona of Titan
NASA Astrophysics Data System (ADS)
Michael, M.; Johnson, R. E.; Shematovich, V. I.; La Haye, V. D.; Waite, H.; Wong, M. C.; Sittler, E. C.; Ledvina, S.; Luhmann, J. G.; Leblanc, F.
2005-12-01
The atmosphere of Titan and its plasma environment are of much interest due to the recent observations by the Cassini spacecraft. It is well established that the upper atmosphere of Titan is continuously bombarded by pick-up ions and deflected ambient magnetospheric ions (Shematovich et al. 2003). The deposition of energy, the escape of atoms and molecules, and the heating of the upper atmosphere of Titan are studied using a direct simulation Monte Carlo method (Michael et al. 2005). It is found that the globally averaged flux of deflected magnetospheric ions and pick-up ions deposits more energy in the exobase region of Titan than solar radiation. The energy deposition in this region determines the non-thermal corona, the atmospheric loss, and the production of a neutral torus. It is found that the inclusion of the molecular pick-up ions is critical to accurately determining the amount of energy deposited close to the exobase (Michael and Johnson 2005). Depending on the nature of the local interaction with the magnetosphere, the plasma flow through the exobase region and heating of the exobase region can increase the content of the corona (Michael and Johnson 2005). We compare the model results with the observational data of a number of instruments onboard the Cassini spacecraft. References: Michael, M., Johnson, R.E., Leblanc, F., Liu, M., Luhmann, J.G., Shematovich, V.I., Ejection of nitrogen from Titan's atmosphere by magnetospheric ions and pick-up ions, Icarus, 175, 263-267, 2005. Michael, M., Johnson, R.E., Energy deposition of pick-up ions and heating of Titan's atmosphere, Planet. Space Sci., in press, 2005. Shematovich, V.I., Johnson, R.E., Michael, M., Luhmann, J.G., Nitrogen loss from Titan, J. Geophys. Res., 108, 5086, 10.1029/2003JE002096, 2003.
NASA Astrophysics Data System (ADS)
Artemov, V. I.; Minko, K. B.; Yan'kov, G. G.
2015-12-01
Homogeneous equilibrium and nonequilibrium (relaxation) models are used to simulate flash-boiling flows in nozzles. The simulations were performed using the authors' CFD code ANES. Existing experimental data are used to test the implemented mathematical model and the modified algorithms of the ANES code. The results of test calculations are presented, together with data obtained for the nozzle and expansion unit of the steam generator and separator in the waste-heat system at ZAO NPVP Turbokon. The SIMPLE algorithm may be used for transonic and supersonic flashing liquid flows. The relaxation model yields better agreement with experimental data regarding the distribution of void fraction along the nozzle axis. For the given class of flows, the difference between one- and two-dimensional models is slight.
Internal Thermal Control System Hose Heat Transfer Fluid Thermal Expansion Evaluation Test Report
NASA Technical Reports Server (NTRS)
Wieland, P. O.; Hawk, H. D.
2001-01-01
During assembly of the International Space Station, the Internal Thermal Control Systems in adjacent modules are connected by jumper hoses referred to as integrated hose assemblies (IHAs). A test of an IHA has been performed at the Marshall Space Flight Center to determine whether the pressure in an IHA filled with heat transfer fluid would exceed the maximum design pressure when subjected to elevated temperatures (up to 60 C (140 F)) that may be experienced during storage or transportation. The results of the test show that the pressure in the IHA remains below 227 kPa (33 psia) (well below the 689 kPa (100 psia) maximum design pressure) even at a temperature of 71 C (160 F), with no indication of leakage or damage to the hose. Therefore, based on the results of this test, the IHA can safely be filled with coolant prior to launch. The test and results are documented in this Technical Memorandum.
Online Sequential Extreme Learning Machine With Kernels.
Scardapane, Simone; Comminiello, Danilo; Scarpiniti, Michele; Uncini, Aurelio
2015-09-01
The extreme learning machine (ELM) was recently proposed as a unifying framework for different families of learning algorithms. The classical ELM model consists of a linear combination of a fixed number of nonlinear expansions of the input vector. Learning in ELM is hence equivalent to finding the optimal weights that minimize the error on a dataset. The update works in batch mode, either with explicit feature mappings or with implicit mappings defined by kernels. Although an online version has been proposed for the former, no work has been done up to this point for the latter, and whether an efficient learning algorithm for online kernel-based ELM exists remains an open problem. By explicating some connections between nonlinear adaptive filtering and ELM theory, in this brief, we present an algorithm for this task. In particular, we propose a straightforward extension of the well-known kernel recursive least-squares, belonging to the kernel adaptive filtering (KAF) family, to the ELM framework. We call the resulting algorithm the kernel online sequential ELM (KOS-ELM). Moreover, we consider two different criteria used in the KAF field to obtain sparse filters and extend them to our context. We show that KOS-ELM, with these criteria integrated, can result in a highly efficient algorithm, in terms of both generalization error and training time. Empirical evaluations demonstrate interesting results on some benchmarking datasets. PMID:25561597
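To make the KAF connection concrete, here is a pared-down online kernel recursive least-squares sketch: every sample joins the dictionary and the inverse of the regularized kernel matrix is grown by block inversion. It deliberately omits the sparsification criteria that KOS-ELM adds; the RBF kernel, regularization value, and class names are assumptions, not the paper's implementation.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KRLS:
    """Online kernel recursive least squares, no sparsification:
    every sample enters the dictionary (the growth that ALD-style
    criteria are designed to curb)."""
    def __init__(self, gamma=0.5, lam=0.1):
        self.gamma, self.lam = gamma, lam
        self.D = None       # dictionary of samples seen so far
        self.Kinv = None    # inverse of (K_D + lam * I)
        self.y = []

    def update(self, x, target):
        x = np.atleast_2d(x)
        self.y.append(target)
        if self.D is None:
            self.D = x
            self.Kinv = np.array([[1.0 / (rbf(x, x, self.gamma)[0, 0] + self.lam)]])
        else:
            k = rbf(self.D, x, self.gamma)[:, 0]
            ktt = rbf(x, x, self.gamma)[0, 0] + self.lam
            a = self.Kinv @ k
            denom = ktt - k @ a          # Schur complement (positive: K + lam*I is PD)
            n = len(self.D)
            Kinv = np.empty((n + 1, n + 1))
            Kinv[:n, :n] = self.Kinv + np.outer(a, a) / denom
            Kinv[:n, n] = Kinv[n, :n] = -a / denom
            Kinv[n, n] = 1.0 / denom
            self.Kinv = Kinv
            self.D = np.vstack([self.D, x])
        self.alpha = self.Kinv @ np.array(self.y)   # expansion coefficients

    def predict(self, X):
        return rbf(np.atleast_2d(X), self.D, self.gamma) @ self.alpha
```

After n updates the coefficients equal the batch kernel ridge solution (K + λI)⁻¹y, which is what the rank-one block inversion buys: O(n²) per sample instead of O(n³).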
Technology Transfer Automated Retrieval System (TEKTRAN)
The current US corn grading system accounts for the portion of damaged kernels, which is measured by time-consuming and inaccurate visual inspection. Near infrared spectroscopy (NIRS), a non-destructive and fast analytical method, was tested as a tool for discriminating corn kernels with heat and f...
NASA Astrophysics Data System (ADS)
Ma, Hongyun; Shao, Haiyan; Song, Jie
2014-02-01
Rapid urbanization has intensified summer heat waves in recent decades in Beijing, China. In this study, the effectiveness of applying high-reflectance roofs in mitigating the warming effects caused by urban expansion and foehn wind was simulated for a record-breaking heat wave that occurred in Beijing during July 13-15, 2002. Simulation experiments were performed using the Weather Research and Forecasting (WRF version 3.0) model coupled with an urban canopy model. The modeled diurnal air temperatures agreed well with station observations in the city, and the wind convergence caused by the urban heat island (UHI) effect was simulated clearly. By increasing urban roof albedo, the simulated UHI effect was reduced due to decreased net radiation, and the simulated wind convergence in the urban area was weakened. Using the WRF3.0 model, the warming effects caused by urban expansion and foehn wind were quantified separately and were compared with the cooling effect due to the increased roof albedo. Results illustrated that the foehn warming effect under the northwesterly wind contributed greatly to this heat wave event in Beijing, while the contribution from urban expansion accompanied by anthropogenic heating was secondary and was mostly evident at night. Increasing roof albedo could reduce air temperature both in the day and at night, and could more than offset the urban expansion effect. The combined warming caused by the urban expansion and the foehn wind could potentially be offset with high-reflectance roofs by 58.8 %, or cooled by 1.4 °C, in the early afternoon on July 14, 2002, the hottest day during the heat wave.
Robotic Intelligence Kernel: Communications
Walton, Mike C.
2009-09-16
The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.
Robotic Intelligence Kernel: Driver
2009-09-16
The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.
NASA Astrophysics Data System (ADS)
Hofmeister, A.
2010-12-01
Many measurements and models of heat transport in lower mantle candidate phases contain systematic errors: (1) conventional methods for insulators involve thermal losses that are pressure (P) and temperature (T) dependent due to physical contact with metal thermocouples, (2) measurements frequently contain unwanted ballistic radiative transfer, which increases hugely with T, (3) spectroscopic measurements of dense samples in diamond anvil cells involve strong refraction, which has not been accounted for in analyzing transmission data, (4) the role of grain boundary scattering in impeding heat and light transfer has largely been overlooked, and (5) essentially harmonic physical properties have been used to predict anharmonic behavior. Improving our understanding of the physics of heat transport requires accurate data, especially as a function of temperature, where anharmonicity is the key factor. My laboratory provides thermal diffusivity (D) at temperature from laser flash analysis, which lacks the above experimental errors. Measuring a plethora of chemical compositions in diverse dense structures (most recently, perovskites, B1, B2, and glasses) as a function of temperature provides a firm basis for understanding microscopic behavior. Given accurate measurements for all quantities: (1) D is inversely proportional to [T × α(T)] from ~0 K to melting, where α is thermal expansivity, and (2) the damped harmonic oscillator model matches measured D(T) using only two parameters (average infrared dielectric peak width and compressional velocity), both acquired at temperature. These discoveries pertain to the anharmonic aspects of heat transport. I have previously discussed the easily understood quasi-harmonic pressure dependence of D. Universal behavior makes application to the Earth straightforward: due to the stiffness and slow motions of the plates and interior, and present-day slow planetary cooling rates, Earth can be approximated as being in quasi
Linearized Kernel Dictionary Learning
NASA Astrophysics Data System (ADS)
Golts, Alona; Elad, Michael
2016-06-01
In this paper we present a new approach of incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), which has been introduced recently, shows an improvement in classification performance relative to its linear counterpart K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost, while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first, we approximate the kernel matrix using a cleverly sampled subset of its columns via the Nyström method; second, as we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several tasks of both supervised and unsupervised classification and show the efficiency of the proposed scheme, its easy integration and performance-boosting properties.
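The column-sampling step can be sketched as follows: a generic Nyström factorization producing virtual samples V with V^T V ≈ K, so that any linear algorithm run on V approximates its kernelized counterpart. This is in the spirit of the abstract, but the RBF kernel, uniform landmark sampling, and all names are assumptions, not the authors' exact pipeline.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_virtual_samples(X, n_landmarks=20, gamma=0.5, seed=0):
    """Nystrom factorization K ~= C W^+ C^T, then 'virtual samples'
    V (one column per input) such that V^T V ~= K."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_landmarks, replace=False)
    C = rbf(X, X[idx], gamma)           # n x c block of kernel columns
    W = C[idx]                          # c x c landmark-landmark block
    U, s, _ = np.linalg.svd(W)          # W symmetric PSD: W = U diag(s) U^T
    s = np.maximum(s, 1e-12)            # guard the pseudo-inverse
    V = np.diag(s ** -0.5) @ U.T @ C.T  # c x n virtual samples
    return V
```

With all points taken as landmarks the factorization is exact (V^T V = K); with c ≪ n it trades a controlled approximation error for never materializing the full kernel matrix, which is the complaint the abstract raises against KKSVD.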
Bergfeld, D.; Vaughan, R. Greg; Evans, William C.; Olsen, Eric
2015-01-01
The Long Valley hydrothermal system supports geothermal power production from three binary plants (Casa Diablo) near the town of Mammoth Lakes, California. Development and growth of thermal ground at sites west of Casa Diablo have created concerns over the planned expansion of a new well field and the associated increases in geothermal fluid production. To ensure that all areas of ground heating are identified prior to new geothermal development, we obtained high-resolution aerial thermal infrared imagery across the region. The imagery covers the existing and proposed well fields and part of the town of Mammoth Lakes. Imagery from a predawn flight on Oct. 9, 2014 readily identified the Shady Rest thermal area (SRST), one of two large areas of ground heating west of Casa Diablo, as well as other known thermal areas smaller in size. Maximum surface temperatures at three thermal areas were 26–28 °C. Numerous small areas with ground temperatures >16 °C were also identified and slated for field investigations in summer 2015. Some thermal anomalies in the town of Mammoth Lakes clearly reflect human activity. Previously established projects to monitor impacts from geothermal power production include yearly surveys of soil temperatures and diffuse CO2 emissions at SRST, and less regular surveys to collect samples from fumaroles and gas vents across the region. Soil temperatures at 20 cm depth at SRST are well correlated with diffuse CO2 flux, and both parameters showed little variation during the 2011–14 field surveys. Maximum temperatures were between 55 and 67 °C, and the associated CO2 discharge was around 12–18 tonnes per day. The carbon isotope composition of CO2 is fairly uniform across the area, ranging between –3.7 and –4.4 ‰. The gas composition of the Shady Rest fumarole, however, has varied with time, and H2S concentrations in the gas have been increasing since 2009.
NASA Astrophysics Data System (ADS)
Liu, Yuanming; Israelsson, Ulf E.; Larson, Melora
2001-03-01
The superfluid transition in ⁴He in the presence of a heat current (Q) provides an ideal system for the study of phase transitions under non-equilibrium, dynamical conditions. Many physical properties become nonlinear and Q-dependent near the transition temperature, T_λ. For instance, the heat capacity enhancement by a heat current was predicted theoretically (R. Haussmann and V. Dohm, Phys. Rev. Lett. 72, 3060 (1994); T.C.P. Chui et al., Phys. Rev. Lett. 77, 1793 (1996)) and observed experimentally (A.W. Harter et al., Phys. Rev. Lett. 84, 2195 (2000)). Because the thermal expansion coefficient is a linear function of the specific heat near T_λ, both exhibit similar critical behaviors under equilibrium conditions. An enhancement of the thermal expansion coefficient is also expected if a similar relationship exists under non-equilibrium conditions. We report our experimental search for the enhancement of the thermal expansion of superfluid ⁴He by a heat current (0 ≤ Q ≤ 100 μW/cm²). We conducted the measurements in a thermal conductivity cell at sample pressures of SVP and 21.2 bar. The measurements were also performed in a reduced-gravity environment of 0.01g provided by the low-gravity simulator we have developed at JPL.
Expansion of a radial jet from a guillotine tube breach in a shell-and-tube heat exchanger
Velasco, F.J.S.; del Pra, C. Lopez; Herranz, Luis E.
2008-02-15
The aerodynamics of a particle-laden gas jet entering the secondary side of a shell-and-tube heat exchanger from a tube guillotine breach determines, to a large extent, radioactive retention in the break stage of the steam generator (SG) during hypothetical SGTR accident sequences in pressurized water reactors (PWRs). These scenarios were shown to be risk-dominant in PWRs. The major insights gained from a set of experiments into such aerodynamics are summarized in this paper. A scaled-down mock-up with representative dimensions of a real SG was built. A two-dimensional (2D) PIV technique was used to characterize the flow field in the space between the breach and the neighboring tubes over the gas flow range investigated (Re_D = 0.8–2.7 × 10^5). Pitot tube measurements and CFD simulations were used to discuss and complement the PIV data. The results, reported mainly in terms of velocity and turbulence intensity profiles, show that jet penetration and gas entrainment are considerably enhanced with increasing Re_D. The presence of tubes was observed to distort the jet shape and to foster gas entrainment with respect to a jet expansion free of tubes. The turbulence intensity level close to the breach increases linearly with Re_D. Accounting for this information in aerosol modeling will enhance the predictive capability of inertial impaction and turbulent deposition equations.
NASA Astrophysics Data System (ADS)
Beets, Nathan; Wake Forest CenterNanotechnology; Molecular Materials Team; Fraunhofer Institute Collaboration
2015-11-01
Two major problems with many third-generation photovoltaics are their complex structure and the greater expense required for increased efficiency. Spectral-splitting devices have been used by many groups with varying degrees of success to collect more of the spectrum, but simple, efficient, and cost-effective setups that employ spectral splitting remain elusive. This study explores this problem, presenting a solar engine that employs Stokes shifting via laser dyes to convert incident light to the bandgap wavelength of the solar cell and collects the resultant infrared radiation unused by the photovoltaic cell as heat in ethylene glycol or glycerin. When used in conjunction with microturbines, fluid expansion creates mechanical work, and the temperature difference between the cell and the environment is made available for use. The effect of focusing is also observed as a means to boost efficiency via concentration. Experimental results from spectral scans, vibrational voltage analysis of the PV itself, and temperature measurements from a thermocouple are all compared to theoretical results using a program written in Mathematica to model refraction and lensing in the devices used, a quantum efficiency test of the cells, the absorption and emission curves of the dyes used to determine the spectrum shift, and the various equations for fill factor, efficiency, and current in different setups. An efficiency increase well over 50% from the control devices is observed, and a new solar engine is proposed.
A short-time Beltrami kernel for smoothing images and manifolds.
Spira, Alon; Kimmel, Ron; Sochen, Nir
2007-06-01
We introduce a short-time kernel for the Beltrami image enhancing flow. The flow is implemented by "convolving" the image with a space dependent kernel in a similar fashion to the solution of the heat equation by a convolution with a Gaussian kernel. The kernel is appropriate for smoothing regular (flat) 2-D images, for smoothing images painted on manifolds, and for simultaneously smoothing images and the manifolds they are painted on. The kernel combines the geometry of the image and that of the manifold into one metric tensor, thus enabling a natural unified approach for the manipulation of both. Additionally, the derivation of the kernel gives a better geometrical understanding of the Beltrami flow and shows that the bilateral filter is a Euclidean approximation of it. On a practical level, the use of the kernel allows arbitrarily large time steps as opposed to the existing explicit numerical schemes for the Beltrami flow. In addition, the kernel works with equal ease on regular 2-D images and on images painted on parametric or triangulated manifolds. We demonstrate the denoising properties of the kernel by applying it to various types of images and manifolds. PMID:17547140
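The analogy invoked above, solving the heat equation by convolution with a Gaussian kernel, can be sketched in a few lines. This is a minimal illustration on a 1-D signal, not the Beltrami kernel itself (whose kernel is space-dependent); the Gaussian variance 2t plays the role of the short evolution time.

```python
import numpy as np

def heat_smooth(signal, t, dx=1.0):
    """Smooth a 1-D signal by one short-time step of the heat equation,
    i.e. convolution with a Gaussian kernel of variance 2*t."""
    sigma = np.sqrt(2.0 * t) / dx
    radius = int(4 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()          # unit mass, so constants are preserved
    return np.convolve(signal, kernel, mode="same")

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 400))
noisy = clean + 0.3 * rng.standard_normal(400)
smoothed = heat_smooth(noisy, t=4.0)
# interior mean-squared errors (edges trimmed to avoid boundary effects)
err_noisy = np.mean((noisy - clean) ** 2)
err_smooth = np.mean((smoothed[20:-20] - clean[20:-20]) ** 2)
```

A larger t corresponds to a wider kernel and stronger smoothing, exactly as in the arbitrarily large time steps the abstract attributes to the kernel formulation.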
Oil point pressure of Indian almond kernels
NASA Astrophysics Data System (ADS)
Aregbesola, O.; Olatunde, G.; Esuola, S.; Owolarafe, O.
2012-07-01
The effect of preprocessing conditions such as moisture content, heating temperature, heating time, and particle size on the oil point pressure of Indian almond kernels was investigated. Results showed that oil point pressure was significantly (P < 0.05) affected by the above-mentioned parameters. It was also observed that oil point pressure decreased with increasing heating temperature and heating time for both coarse and fine particles. Furthermore, an increase in moisture content resulted in increased oil point pressure for coarse particles, while there was a reduction in oil point pressure with increasing moisture content for fine particles.
LeFebvre, W.
1994-08-01
For many years, the popular program top has aided system administrators in examining process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also with what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.
Weighted Bergman Kernels and Quantization
NASA Astrophysics Data System (ADS)
Engliš, Miroslav
Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that -log ψ, -log φ are plurisubharmonic, and z ∈ Ω a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion
Shen, Bo
2011-01-01
This paper describes extensive tests performed on a 3-ton R-22 split heat pump in heating mode. The tests comprise 150 steady-state performance tests, 18 cyclic tests, and 18 defrost tests. During the testing work, the refrigerant charge level was varied from 70% to 130% of the nominal value; the outdoor temperature was set at three levels, 17 F (-8.3 C), 35 F (1.7 C), and 47 F (8.3 C); indoor air flow rates ranged from 60% to 150% of the rated air flow rate; and the expansion device was switched from a fixed orifice to a thermal expansion valve. Detailed performance data from the extensive steady-state, cyclic, and defrost testing are presented and compared.
NASA Astrophysics Data System (ADS)
Terekhov, V. I.; Bogatko, T. V.
2008-03-01
Results of a numerical investigation of the effect of boundary layer thickness on turbulent separation and heat transfer in a tube with an abrupt expansion are presented. The Menter shear stress transport turbulence model implemented in the Fluent package was used for the calculations. The Reynolds number ranged from 5·10^3 to 10^5. Air was used as the working fluid. The degree of tube expansion was (D_2/D_1)^2 = 1.78. A significant effect of the thickness of the separated boundary layer on both the dynamic and thermal characteristics of the flow is shown. In particular, it was found that with an increase in the boundary layer thickness the recirculation zone grows, and the maximum heat transfer coefficient decreases.
NASA Astrophysics Data System (ADS)
Takizuka, T.; Tokunaga, S.; Hoshino, K.; Shimizu, K.; Asakura, N.
2015-08-01
Edge localized modes (ELMs) in the H-mode operation of tokamak reactors may be suppressed/mitigated by the resonant magnetic perturbation (RMP), but RMP coils are considered incompatible with DEMO reactors under the strong neutron flux. We propose an innovative concept of the RMP without installing coils but inserting ferritic steels of the helical configuration. Helically perturbed field is naturally formed in the axisymmetric toroidal field through the helical ferritic steel inserts (FSIs). When ELMs are avoided, large stationary heat load on divertor plates can be reduced by adopting a flux-tube-expansion (FTE) divertor like an X divertor. Separatrix shape and divertor-plate inclination are similar to those of a simple long-leg divertor configuration. Combination of the helical FSIs and the FTE divertor is a suitable method for the heat control to avoid transient ELM heat pulse and to reduce stationary divertor heat load in a tokamak DEMO reactor.
Robotic Intelligence Kernel: Architecture
Energy Science and Technology Software Center (ESTSC)
2009-09-16
The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.
NASA Technical Reports Server (NTRS)
Spafford, Eugene H.; Mckendry, Martin S.
1986-01-01
An overview of the internal structure of the Clouds kernel is presented, and an indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.
Robotic Intelligence Kernel: Visualization
Energy Science and Technology Software Center (ESTSC)
2009-09-16
The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.
NASA Astrophysics Data System (ADS)
Hummel, Tobias; Pacheco-Vega, Arturo
2012-11-01
In the present study we use Karhunen-Loève (KL) expansions to model the dynamic behavior of a single-phase natural convection loop. The loop is filled with an incompressible fluid that exchanges heat through the walls of its toroidal shape. Influx and efflux of energy take place at different parts of the loop. The focus here is a sinusoidal variation of the heat flux exchanged with the environment for three different scenarios; i.e., stable, limit cycles and chaos. For the analysis, one-dimensional models, in which the tilt angle and the amplitude of the heat flux are used as parameters, were first developed under suitable assumptions and then solved numerically to generate the data from which the KL-based models could be constructed. The method of snapshots, along with a Galerkin projection, was then used to find the basis functions and corresponding constants of each expansion, thus producing the optimal representation of the system. Results from this study indicate that the dimension of the KL-based dynamical system depends on the linear stability of the steady states; the number of basis functions necessary to describe the system increases with increased complexity of the system operation. When compared to typical dynamical systems based on Fourier expansions the KL-based models are, in general, more compact and equally accurate in the dynamic description of the natural convection loop.
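The method of snapshots used above reduces, numerically, to a singular value decomposition of the mean-subtracted snapshot matrix: the left singular vectors are the KL basis functions. The sketch below illustrates this on synthetic rank-2 data (the convection-loop model itself is not reproduced here).

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """Karhunen-Loève / POD basis by the method of snapshots, via SVD.
    snapshots has shape (n_space, n_time); columns are solution snapshots."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    return mean, U[:, :n_modes], s

# synthetic data: two spatial structures with time-varying amplitudes
x = np.linspace(0, 1, 200)
t = np.linspace(0, 10, 80)
field = (np.outer(np.sin(np.pi * x), np.cos(t))
         + 0.2 * np.outer(np.sin(3 * np.pi * x), np.sin(2 * t)))

mean, basis, s = pod_basis(field, n_modes=2)
# Galerkin projection onto the basis, then reconstruction
coeffs = basis.T @ (field - mean)
recon = mean + basis @ coeffs
rel_err = np.linalg.norm(recon - field) / np.linalg.norm(field)
```

Because the synthetic field has exactly two spatial structures, two KL modes reconstruct it to machine precision, mirroring the paper's observation that the required number of modes tracks the complexity of the dynamics.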
Resummed memory kernels in generalized system-bath master equations
Mavros, Michael G.; Van Voorhis, Troy
2014-08-07
Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
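The simplest instance of the Padé resummation discussed above, a [1/1] approximant, can be built directly from the first three series coefficients. The sketch below uses the textbook series of 1/(1+x), not the spin-boson kernel, to show how resummation restores convergence where the truncated series diverges.

```python
def pade_1_1(c0, c1, c2):
    """[1/1] Padé approximant matched to the series c0 + c1*x + c2*x**2.
    Matching coefficients of (a0 + a1*x)/(1 + b1*x) gives:
      b1 = -c2/c1,  a0 = c0,  a1 = c1 + b1*c0."""
    b1 = -c2 / c1
    a0 = c0
    a1 = c1 + b1 * c0
    return lambda x: (a0 + a1 * x) / (1.0 + b1 * x)

# series of 1/(1+x): coefficients 1, -1, 1, -1, ...
R = pade_1_1(1.0, -1.0, 1.0)
x = 3.0
taylor = 1.0 - x + x**2   # truncated series: useless outside |x| < 1
resummed = R(x)           # Padé resummation: exact for this geometric series
```

At x = 3 the truncated series gives 7 while the resummed form returns the exact value 1/4, the same qualitative gain (and the same risk of spurious poles from the denominator) that the paper weighs against exponential resummation.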
Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.
Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I
2016-03-01
The effects of amygdaline from apricot kernel added to fodder on the growth of transplanted LYO-1 and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation produced a pronounced antitumor effect. Heat-processed apricot kernels given in 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of bacterial genome in the tumor. PMID:27021084
Prediction: Design of experiments based on approximating covariance kernels
Fedorov, V.
1998-11-01
Using Mercer's expansion to approximate the covariance kernel of an observed random function, the authors transform the prediction problem into a regression problem with random parameters. The latter is considered in the framework of convex design theory. First they formulate results in terms of the regression model with random parameters, then present the same results in terms of the original problem.
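Numerically, Mercer's expansion amounts to an eigen-decomposition of the discretized covariance kernel; truncating it yields the finite-dimensional random-parameter model. A small sketch with the Brownian-motion covariance k(s, t) = min(s, t), chosen for illustration and not taken from the paper:

```python
import numpy as np

# discretize k(s, t) = min(s, t) on a grid over (0, 1]
n = 200
t = np.linspace(0, 1, n + 1)[1:]
K = np.minimum.outer(t, t)

# eigen-decomposition of the symmetric covariance matrix,
# largest eigenvalues first (the discrete Mercer expansion)
w, V = np.linalg.eigh(K)
idx = np.argsort(w)[::-1]
w, V = w[idx], V[:, idx]

# truncated Mercer sum: K ≈ sum_{j<m} w_j v_j v_j^T
m = 10
K_trunc = (V[:, :m] * w[:m]) @ V[:, :m].T
rel_err = np.linalg.norm(K - K_trunc) / np.linalg.norm(K)
```

The eigenvalues of this kernel decay quadratically, so ten terms already approximate the covariance to well under a percent in Frobenius norm; the retained eigenpairs play the role of the random regression parameters.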
Kernel optimization in discriminant analysis.
You, Di; Hamsici, Onur C; Martinez, Aleix M
2011-03-01
Kernel mapping is one of the most used approaches to intrinsically derive nonlinear classifiers. The idea is to use a kernel function which maps the original nonlinearly separable problem to a space of intrinsically larger dimensionality where the classes are linearly separable. A major problem in the design of kernel methods is to find the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation where the Bayes classifier becomes linear. We illustrate how this result can be successfully applied in several kernel discriminant analysis algorithms. Experimental results, using a large number of databases and classifiers, demonstrate the utility of the proposed approach. The paper also shows (theoretically and experimentally) that a kernel version of Subclass Discriminant Analysis yields the highest recognition rates. PMID:20820072
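The core idea, that a kernel map can render a nonlinearly separable problem linear, is easy to demonstrate on XOR. The sketch below uses kernel regularized least squares rather than the discriminant-analysis criterion of the paper; the kernel width and regularizer are illustrative choices.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# XOR: not linearly separable in the input space
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])

# kernel regularized least squares: solve (K + lam*I) alpha = y,
# then classify with sign(K(x, X) @ alpha) -- a linear rule in the
# mapped (RKHS) representation
K = rbf_kernel(X, X, gamma=2.0)
alpha = np.linalg.solve(K + 1e-6 * np.eye(4), y)
pred = np.sign(rbf_kernel(X, X, gamma=2.0) @ alpha)
```

All four XOR points are classified correctly by a rule that is linear in the kernel-mapped space, which is precisely the situation the paper's criterion is designed to find good kernel parameters for.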
MC Kernel: Broadband Waveform Sensitivity Kernels for Seismic Tomography
NASA Astrophysics Data System (ADS)
Stähler, Simon C.; van Driel, Martin; Auer, Ludwig; Hosseini, Kasra; Sigloch, Karin; Nissen-Meyer, Tarje
2016-04-01
We present MC Kernel, a software implementation to calculate seismic sensitivity kernels on arbitrary tetrahedral or hexahedral grids across the whole observable seismic frequency band. Seismic sensitivity kernels are the basis for seismic tomography, since they map measurements to model perturbations. Their calculation over the whole frequency range was so far only possible with approximative methods (Dahlen et al. 2000). Fully numerical methods were restricted to the lower frequency range (usually below 0.05 Hz, Tromp et al. 2005). With our implementation, it is possible to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (www.axisem.info), and thus on spherically symmetric models. The advantage is that frequencies up to 0.2 Hz and higher can be accessed. Since the usage of irregular, adapted grids is an integral part of regularisation in seismic tomography, MC Kernel works in an inversion-grid-centred fashion: a Monte-Carlo integration method is used to project the kernel onto each basis function, which allows one to control the desired precision of the kernel estimation. It also means that the code concentrates calculation effort on regions of interest without prior assumptions on the kernel shape. The code makes extensive use of redundancies in calculating kernels for different receivers or frequency pass-bands for one earthquake, to facilitate its usage in large-scale global seismic tomography.
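The Monte-Carlo projection step can be illustrated in miniature: estimate the integral of a kernel-like function over one grid cell together with a standard error, the quantity one would monitor to control the desired precision. The integrand and the cell here are invented for illustration, not taken from MC Kernel.

```python
import numpy as np

def mc_project(f, lo, hi, n=200_000, seed=0):
    """Monte-Carlo estimate of the integral of f over the box [lo, hi]^d,
    together with its standard error (used to control precision)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    pts = rng.uniform(lo, hi, size=(n, lo.size))
    vals = f(pts)
    vol = np.prod(hi - lo)
    est = vol * vals.mean()
    err = vol * vals.std(ddof=1) / np.sqrt(n)   # shrinks like 1/sqrt(n)
    return est, err

# project f(x, y) = x*y onto the cell [0, 1]^2; the exact integral is 1/4
est, err = mc_project(lambda p: p[:, 0] * p[:, 1], [0., 0.], [1., 1.])
```

Because the error estimate comes for free with the samples, one can keep drawing points in a cell until the reported standard error falls below a target, which is the spirit of the precision control described in the abstract.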
Hedstrom, C S; Shearer, P W; Miller, J C; Walton, V M
2014-10-01
Halyomorpha halys Stål, the brown marmorated stink bug (Hemiptera: Pentatomidae), is an invasive pest with established populations in Oregon. The generalist feeding habits of H. halys suggest it has the potential to be a pest of many specialty crops grown in Oregon, including hazelnuts, Corylus avellana L. The objectives of this study were to: 1) characterize the damage to developing hazelnut kernels resulting from feeding by H. halys adults, 2) determine how the timing of feeding during kernel development influences damage to kernels, and 3) determine if hazelnut shell thickness has an effect on feeding frequency on kernels. Adult brown marmorated stink bugs were allowed to feed on developing nuts for 1-wk periods from initial kernel development (spring) until harvest (fall). Developing nuts not exposed to feeding by H. halys served as a control treatment. The degree of damage and diagnostic symptoms corresponded with the hazelnut kernels' physiological development. Our results demonstrated that when H. halys fed on hazelnuts before kernel expansion, development of the kernels could cease, resulting in empty shells. When stink bugs fed during kernel expansion, kernels appeared malformed. When stink bugs fed on mature nuts the kernels exhibited corky, necrotic areas. Although significant differences in shell thickness were observed among the cultivars, no significant differences occurred in the proportions of damaged kernels based on field tests and laboratory choice tests. The results of these studies demonstrated that commercial hazelnuts are susceptible to damage caused by the feeding of H. halys throughout the entire period of kernel development. PMID:26309276
NASA Technical Reports Server (NTRS)
Reddy, N. M.
1980-01-01
Convective heat transfer measurements made on the conical portion of spherically blunted cones (30 deg and 40 deg half angle) in an expansion tube are discussed. The test gases used were helium and air; flow velocities were about 6.8 km/sec for helium and about 5.1 km/sec for air. The measured heating rates are compared with calculated results using a viscous shock layer computer code. For air, various techniques to determine flow velocity yielded identical results, but for helium the flow velocity varied by as much as eight percent depending on which technique was used. The measured heating rates are in satisfactory agreement with calculation for air; for helium, assuming the lower flow velocity, the measurements are significantly greater than theory, and the discrepancy increased with increasing distance along the cone.
Lee, Myung Hee; Liu, Yufeng
2013-12-01
The continuum regression technique provides an appealing regression framework connecting ordinary least squares, partial least squares and principal component regression in one family. It offers some insight on the underlying regression model for a given application. Moreover, it helps to provide deep understanding of various regression techniques. Despite the useful framework, however, the current development on continuum regression is only for linear regression. In many applications, nonlinear regression is necessary. The extension of continuum regression from linear models to nonlinear models using kernel learning is considered. The proposed kernel continuum regression technique is quite general and can handle very flexible regression model estimation. An efficient algorithm is developed for fast implementation. Numerical examples have demonstrated the usefulness of the proposed technique. PMID:24058224
Technology Transfer Automated Retrieval System (TEKTRAN)
Solid-phase microextraction (SPME) in conjunction with GC/MS was used to distinguish non-aromatic rice (Oryza sativa, L.) kernels from aromatic rice kernels. In this method, single kernels along with 10 µl of 0.1 ng 2,4,6-trimethylpyridine (TMP) were placed in sealed vials and heated to 80 °C for 18...
ERIC Educational Resources Information Center
Fakhruddin, Hasan
1993-01-01
Describes a paradox in the equation for thermal expansion. If the calculations for heating a rod and subsequently cooling a rod are determined, the new length of the cool rod is shorter than expected. (PR)
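The paradox is easy to reproduce numerically: applying L' = L(1 + αΔT) to the current length on heating and again on cooling leaves a deficit of L(αΔT)², since (1 + αΔT)(1 − αΔT) = 1 − (αΔT)². The material constant below is an illustrative value.

```python
# Heating a rod by dT and then cooling it by dT does not return the
# original length if the linear-expansion formula is applied to the
# *current* length at each step.
alpha = 12e-6   # 1/K, roughly steel (illustrative value)
L0 = 1.0        # m
dT = 100.0      # K

L_hot = L0 * (1 + alpha * dT)        # after heating by dT
L_back = L_hot * (1 - alpha * dT)    # after cooling by the same dT
deficit = L0 - L_back                # equals L0 * (alpha*dT)**2
```

The discrepancy is second order in αΔT (here about 1.4 µm on a metre), which is why the linear formula is harmless in practice but formally inconsistent, the point the article makes.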
NASA Astrophysics Data System (ADS)
Georgescu, M.; Bierwagen, B. G.; Morefield, P.; Weaver, C. P.
2013-12-01
With population projections ranging from 380 to 690 million inhabitants for the U.S. by 2100, considerable conversion of landscapes will be necessary to meet increased demand for the built environment. Incorporating Integrated Climate and Land Use Scenarios (ICLUS) urban expansion data for 2100 as surface boundary conditions within the Weather Research and Forecasting (WRF) modeling system, we examine hydroclimatic consequences owing to built-environment expansion scenarios across the conterminous U.S. Continuous, multi-year and multi-member continental-scale numerical simulations are performed for a modern-day urban representation (Control), a worst-case (A2) and a best-case (B1) urban expansion scenario. Three adaptation approaches are explored to assess the potential offset of urban-induced warming due to growth of the built environment: (i) widespread adoption of cool roofs, (ii) a simple representation of green roofs, and (iii) a hypothetical hybrid approach integrating properties of both cool and green roofs (i.e., reflective green roofs). Widespread adoption of adaptation strategies exhibits hydroclimatic impacts that are regionally and seasonally dependent. To help prioritize region-specific adaptation strategies, the potential of each of the trio of strategies to offset urban-induced warming is examined and contrasted across the various hydrometeorological environments.
Calculation of the temperature and thermal expansion of a STM tip heated by a short laser pulse
NASA Astrophysics Data System (ADS)
Geshev, P. I.; Klein, S.; Dickmann, K.
A mathematical model for the calculation of the temperature field in a scanning tunneling microscope (STM) tip under laser illumination is developed. The duration of the laser pulse is a few nanoseconds or shorter. A Gaussian distribution of the laser light intensity in time and space is assumed. Two different mechanisms of tip heating are taken into account: (1) heating due to an enhanced electric field on the tip, and (2) heating of the side surface of the tip by the focused spot of laser light. An average tip temperature is calculated using the heat conduction equation. The enhanced electric field on the tip is calculated by the method of boundary integral equations.
Kernel Phase and Kernel Amplitude in Fizeau Imaging
NASA Astrophysics Data System (ADS)
Pope, Benjamin J. S.
2016-09-01
Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel-phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.
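The matrix-based construction can be sketched abstractly: kernel observables are rows of an operator K spanning the left null space of the phase-transfer matrix A, so any instrumental phase lying in the range of A cancels. The random matrix below stands in for a real pupil model; the dimensions are illustrative.

```python
import numpy as np

# A maps n pupil-plane phases to m measured (baseline) phases.
rng = np.random.default_rng(1)
n_pupil, m_baseline = 6, 15
A = rng.standard_normal((m_baseline, n_pupil))

# the kernel operator K spans the left null space of A, so K @ A = 0
U, s, Vt = np.linalg.svd(A, full_matrices=True)
rank = int(np.sum(s > 1e-10))
K = U[:, rank:].T                 # (m - rank) self-calibrating combinations

instrumental = A @ rng.standard_normal(n_pupil)   # pure pupil-phase error
target_phase = rng.standard_normal(m_baseline)    # astrophysical signal
observed = target_phase + instrumental

# kernel phases of the observation equal those of the target alone
kp_obs = K @ observed
kp_target = K @ target_phase
```

This is exactly the sense in which kernel phases are "self-calibrating": the pupil-phase term is projected out by construction, independent of its magnitude.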
The vector potential of a circular cylindrical antenna in terms of a toroidal harmonic expansion
NASA Astrophysics Data System (ADS)
Selvaggi, Jerry; Salon, Sheppard; Chari, M. V. K.
2008-08-01
A toroidal harmonic expansion is developed which is used to represent the vector potential due to a circular cylindrical antenna with a rectangular cross section at any arbitrary point in space. The singular part of the antenna kernel is represented by an associated toroidal harmonic expansion and the analytic part of the kernel is represented by a binomial expansion. A simple example is given to illustrate the application of the toroidal expansion.
NASA Technical Reports Server (NTRS)
Back, L. H.; Massier, P. F.; Roschke, E. J.
1972-01-01
Heat transfer and pressure measurements obtained in the separation, reattachment, and redevelopment regions along a tube and nozzle located downstream of an abrupt channel expansion are presented for a very high enthalpy flow of argon. The ionization energy fraction extended up to 0.6 at the tube inlet just downstream of the arc heater. Reattachment resulted from the growth of an instability in the vortex sheet-like shear layer between the central jet that discharged into the tube and the reverse flow along the wall at the lower Reynolds numbers, as indicated by water flow visualization studies which were found to dynamically model the high-temperature gas flow. A reasonably good prediction of the heat transfer in the reattachment region where the highest heat transfer occurred and in the redevelopment region downstream can be made by using existing laminar boundary layer theory for a partially ionized gas. In the experiments as much as 90 per cent of the inlet energy was lost by heat transfer to the tube and the nozzle wall.
Bruemmer, David J.
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors, that incorporate robot attributes and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.
Expansion of the one-loop effective action in covariant derivatives
Zuk, J.A.
1986-06-15
With an approach based on the heat-kernel representation, we show how to construct the expansion of the one-loop effective action in powers of covariant derivatives D_mu whenever it can be expressed in terms of an operator determinant of the form det(-D^2 + V), where V is some positive Hermitian matrix-valued function. We present general expressions for the contributions to the effective Lagrangian with two and four covariant derivatives in four Euclidean space-time dimensions.
Chung, Moo K; Schaefer, Stacey M; Van Reekum, Carien M; Peschke-Schmitz, Lara; Sutterer, Mattew J; Davidson, Richard J
2014-01-01
We present a new unified kernel regression framework on manifolds. Starting with a symmetric positive definite kernel, we formulate a new bivariate kernel regression framework that is related to heat diffusion, kernel smoothing and recently popular diffusion wavelets. Various properties and performance of the proposed kernel regression framework are demonstrated. The method is subsequently applied in investigating the influence of age and gender on the human amygdala and hippocampus shapes. We detected a significant age effect on the posterior regions of hippocampi while there is no gender effect present. PMID:25485452
Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings
NASA Astrophysics Data System (ADS)
Slavakis, Konstantinos; Theodoridis, Sergios
2008-12-01
Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task, by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities into the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated on the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.
NASA Astrophysics Data System (ADS)
Bakaleinikov, L. A.; Flegontova, E. Yu.; Ender, A. Ya.; Ender, I. A.
2016-04-01
A recurrence procedure for the sequential construction of the kernels G_{l1,l2}^l(c, c1, c2) appearing in the expansion of the nonlinear collision integral of the Boltzmann equation in spherical harmonics is developed. The starting kernel for this procedure is the kernel G_{0,0}^0(c, c1, c2) of the collision integral for a distribution function isotropic with respect to the velocities. Using the recurrence procedure, a set of kernels G_{l1,l2}^{+l}(c, c1, c2) for a gas consisting of hard spheres and Maxwellian molecules is constructed. It is shown that the resulting kernels exhibit similarity and symmetry properties and satisfy the relations following from the conservation laws.
Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels.
Jayasumana, Sadeep; Hartley, Richard; Salzmann, Mathieu; Li, Hongdong; Harandi, Mehrtash
2015-12-01
In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels. PMID:26539851
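As an illustration of the kind of kernel described above, here is a minimal sketch (not the authors' implementation) of a Gaussian RBF on the manifold of symmetric positive definite (SPD) matrices, using the log-Euclidean metric, one of the metrics shown in the paper to yield a positive definite kernel. The example matrices and the value of gamma are illustrative assumptions.

```python
import numpy as np

def spd_log(X):
    """Matrix logarithm of a symmetric positive definite matrix
    via its eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def log_euclidean_rbf(X, Y, gamma=1.0):
    """Gaussian RBF kernel on SPD matrices with the log-Euclidean
    metric d(X, Y) = ||log X - log Y||_F, which is known to give a
    positive definite kernel for every gamma > 0."""
    d = np.linalg.norm(spd_log(X) - spd_log(Y))
    return np.exp(-gamma * d**2)

# Illustrative SPD inputs (e.g., covariance descriptors in vision tasks)
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
k_AA = log_euclidean_rbf(A, A)   # zero distance gives kernel value 1
k_AB = log_euclidean_rbf(A, B)
```

With such a kernel in hand, any Gram-matrix-based algorithm (SVM, kernel PCA, and so on) can be run directly on the manifold-valued data.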
Enzymatic treatment of peanut kernels to reduce allergen levels
Technology Transfer Automated Retrieval System (TEKTRAN)
This study investigated the use of enzymatic treatment to reduce peanut allergens in peanut kernels under various processing conditions, such as pretreatment with heat and proteolysis at different enzyme concentrations and treatment times. Two major peanut allergens, Ara h 1 and Ara h 2, were used as indicator...
Popping the Kernel: Modeling the States of Matter
ERIC Educational Resources Information Center
Hitt, Austin; White, Orvil; Hanson, Debbie
2005-01-01
This article discusses how to use popcorn to engage students in model building and to teach them about the nature of matter. Popping kernels is a simple and effective method to connect the concepts of heat, motion, and volume with the different phases of matter. Before proceeding with the activity, the class should discuss the nature of scientific…
Shaikhislamov, I. F.; Khodachenko, M. L.; Sasunov, Yu. L.; Lammer, H.; Kislyakova, K. G.; Erkaev, N. V.
2014-11-10
In the present series of papers we propose a consistent description of the mass loss process. To study in a comprehensive way the effects of the intrinsic magnetic field of a close-orbit giant exoplanet (a so-called hot Jupiter) on atmospheric material escape and the formation of a planetary inner magnetosphere, we start with a hydrodynamic model of an upper atmosphere expansion in this paper. While considering a simple hydrogen atmosphere model, we focus on the self-consistent inclusion of the effects of radiative heating and ionization of the atmospheric gas, with its consequent expansion into outer space. Primary attention is paid to an investigation of the role of the specific conditions at the inner and outer boundaries of the simulation domain, under which different regimes of material escape (free and restricted flow) are formed. A comparative study is performed of different processes, such as X-ray and ultraviolet (XUV) heating, material ionization and recombination, H3+ cooling, adiabatic and Lyα cooling, and Lyα reabsorption. We confirm the basic consistency of the outcomes of our modeling with the results of other hydrodynamic models of expanding planetary atmospheres. In particular, we determine that, under the typical conditions of an orbital distance of 0.05 AU around a Sun-type star, a hot Jupiter plasma envelope may reach maximum temperatures up to ∼9000 K with a hydrodynamic escape speed of ∼9 km s^-1, resulting in mass loss rates of ∼(4-7) · 10^10 g s^-1. In the range of the considered stellar-planetary parameters and XUV fluxes, this is close to the mass loss rate in the energy-limited case. The inclusion of planetary intrinsic magnetic fields in the model is the subject of the follow-up paper (Paper II).
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 8 2011-01-01 2011-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...
Cusp Kernels for Velocity-Changing Collisions
NASA Astrophysics Data System (ADS)
McGuyer, B. H.; Marsland, R., III; Olsen, B. A.; Happer, W.
2012-05-01
We introduce an analytical kernel, the "cusp" kernel, to model the effects of velocity-changing collisions on optically pumped atoms in low-pressure buffer gases. Like the widely used Keilson-Storer kernel [J. Keilson and J. E. Storer, Q. Appl. Math. 10, 243 (1952)], cusp kernels are characterized by a single parameter and preserve a Maxwellian velocity distribution. Cusp kernels and their superpositions are more useful than Keilson-Storer kernels, because they are more similar to real kernels inferred from measurements or theory and are easier to invert to find steady-state velocity distributions.
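The Keilson-Storer kernel referenced above has a simple closed form in one dimension, which makes its defining property, preservation of a Maxwellian velocity distribution, easy to check numerically. The sketch below does exactly that; the parameter values (alpha, u, and the velocity grid) are illustrative assumptions.

```python
import numpy as np

def keilson_storer(v, vp, alpha=0.6, u=1.0):
    """One-dimensional Keilson-Storer collision kernel W(v' -> v);
    alpha in [0, 1) is the memory parameter, u the thermal speed."""
    s2 = u**2 * (1.0 - alpha**2)
    return np.exp(-(v - alpha * vp)**2 / s2) / np.sqrt(np.pi * s2)

def maxwellian(v, u=1.0):
    """Normalized 1-D Maxwellian velocity distribution."""
    return np.exp(-(v / u)**2) / (u * np.sqrt(np.pi))

# Defining property: integrating W(v' -> v) against a Maxwellian in v'
# returns the same Maxwellian evaluated at v.
vp = np.linspace(-8.0, 8.0, 4001)
dv = vp[1] - vp[0]
residual = max(
    abs((keilson_storer(v, vp) * maxwellian(vp)).sum() * dv - maxwellian(v))
    for v in (-1.0, 0.0, 0.7)
)
```

The residual is at the level of numerical quadrature error, confirming the preservation property that cusp kernels share.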
Source identity and kernel functions for Inozemtsev-type systems
Langmann, Edwin; Takemura, Kouichi
2012-08-15
The Inozemtsev Hamiltonian is an elliptic generalization of the differential operator defining the BC_N trigonometric quantum Calogero-Sutherland model, and its eigenvalue equation is a natural many-variable generalization of the Heun differential equation. We present kernel functions for Inozemtsev Hamiltonians and Chalykh-Feigin-Veselov-Sergeev-type deformations thereof. Our main result is a solution of a heat-type equation for a generalized Inozemtsev Hamiltonian which is the source of all these kernel functions. Applications are given, including a derivation of simple exact eigenfunctions and eigenvalues of the Inozemtsev Hamiltonian.
FUV Continuum in Flare Kernels Observed by IRIS
NASA Astrophysics Data System (ADS)
Daw, Adrian N.; Kowalski, Adam; Allred, Joel C.; Cauzzi, Gianna
2016-05-01
Fits to Interface Region Imaging Spectrograph (IRIS) spectra observed from bright kernels during the impulsive phase of solar flares are providing long-sought constraints on the UV/white-light continuum emission. Results of fits of continua plus numerous atomic and molecular emission lines to IRIS far ultraviolet (FUV) spectra of bright kernels are presented. Constraints on beam energy and cross sectional area are provided by cotemporaneous RHESSI, FERMI, ROSA/DST, IRIS slit-jaw and SDO/AIA observations, allowing for comparison of the observed IRIS continuum to calculations of non-thermal electron beam heating using the RADYN radiative-hydrodynamic loop model.
NASA Astrophysics Data System (ADS)
Cerdeiriña, C. A.; Troncoso, J.; Carballo, E.; Romaní, L.
2002-09-01
The heat capacity per unit volume Cp and density ρ of the nitromethane-1-butanol critical mixture near its upper consolute point are determined in this work. Cp data are obtained at atmospheric pressure as a function of temperature in the one-phase and two-phase regions, using a differential scanning calorimeter. The suitability of DSC for recording Cp as a function of T in the critical region is confirmed by measurements of the nitromethane-cyclohexane mixture, the results being quite consistent with reported data. By fitting the Cp data in the one-phase region, the critical exponent α is found to be 0.110 ± 0.014, consistent with the accepted universal value, and the critical amplitude A+ = 0.0606 ± 0.0006 J K^-1 cm^-3. ρ data were obtained only in the one-phase region, using a vibrating tube densimeter. The amplitude of the density anomaly was found to be C1+ = -0.017 ± 0.003 g cm^-3, which is moderately low despite the large difference between the densities of the pure liquids. The thermodynamic consistency of the A+ and C1+ values was examined in relation to the previously reported value of the slope of the critical line dTc/dp. The results of this analysis were consistent with previous work on this matter.
Boundary conditions for gas flow problems from anisotropic scattering kernels
NASA Astrophysics Data System (ADS)
To, Quy-Dong; Vu, Van-Huyen; Lauriat, Guy; Léonard, Céline
2015-10-01
The paper presents an interface model for gas flowing through a channel constituted of anisotropic wall surfaces. Using anisotropic scattering kernels and the Chapman-Enskog phase density, the boundary conditions (BCs) for velocity, temperature, and discontinuities including velocity slip and temperature jump at the wall are obtained. Two scattering kernels, the Dadzie and Méolans (DM) kernel and the generalized anisotropic Cercignani-Lampis (ACL) kernel, are examined in the present paper, yielding simple BCs at the wall-fluid interface. With these two kernels, we rigorously recover the analytical expression for orientation-dependent slip shown in our previous works [Pham et al., Phys. Rev. E 86, 051201 (2012) and To et al., J. Heat Transfer 137, 091002 (2015)], which is in good agreement with molecular dynamics simulation results. More importantly, our models include both the thermal transpiration effect and new equations for the temperature jump. While the same expression depending on the two tangential accommodation coefficients is obtained for the slip velocity, the DM and ACL temperature equations are significantly different. The derived BC equations associated with these two kernels are of interest for gas simulations since they are able to capture the direction-dependent slip behavior of anisotropic interfaces.
Domain transfer multiple kernel learning.
Duan, Lixin; Tsang, Ivor W; Xu, Dong
2012-03-01
Cross-domain learning methods have shown promising results by leveraging labeled patterns from the auxiliary domain to learn a robust classifier for the target domain which has only a limited number of labeled samples. To cope with the considerable change between feature distributions of different domains, we propose a new cross-domain kernel learning framework into which many existing kernel methods can be readily incorporated. Our framework, referred to as Domain Transfer Multiple Kernel Learning (DTMKL), simultaneously learns a kernel function and a robust classifier by minimizing both the structural risk functional and the distribution mismatch between the labeled and unlabeled samples from the auxiliary and target domains. Under the DTMKL framework, we also propose two novel methods by using SVM and prelearned classifiers, respectively. Comprehensive experiments on three domain adaptation data sets (i.e., TRECVID, 20 Newsgroups, and email spam data sets) demonstrate that DTMKL-based methods outperform existing cross-domain learning and multiple kernel learning methods. PMID:21646679
RTOS kernel in portable electrocardiograph
NASA Astrophysics Data System (ADS)
Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.
2011-12-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All digital functions of the medical device are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which a uCOS-II RTOS can be embedded. The decision to use the kernel is based on its benefits, its license for educational use, and its intrinsic time control and peripheral management. The feasibility of its use on the electrocardiograph is evaluated against the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and evaluation of the resources used by each process. After this feasibility analysis, the cyclic code was migrated to a structure based on separate processes, or tasks, able to synchronize events, resulting in an electrocardiograph running on a single Central Processing Unit (CPU) under an RTOS.
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
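The method above fits a Gaussian mixture in a kernel-induced feature space; as a much simpler point of reference (not the paper's algorithm), the classical Gaussian kernel density estimate, the most basic kernel-based density estimator, can be sketched as follows. The sample data, bandwidth, and evaluation grid are illustrative assumptions.

```python
import numpy as np

def gaussian_kde(x_query, samples, bandwidth=0.3):
    """Classical Gaussian kernel density estimate: the average of
    normalized Gaussian bumps centered on the samples."""
    diffs = (x_query[:, None] - samples[None, :]) / bandwidth
    bumps = np.exp(-0.5 * diffs**2) / (bandwidth * np.sqrt(2.0 * np.pi))
    return bumps.mean(axis=1)

# Estimate the density of 500 draws from a standard normal.
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=500)
xs = np.linspace(-5.0, 5.0, 1001)
dens = gaussian_kde(xs, samples)
mass = dens.sum() * (xs[1] - xs[0])   # total probability mass, close to 1
```

The Mercer-kernel construction generalizes this by replacing the fixed bumps with mixture components fit by EM in feature space.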
2011-01-01
The chemical composition of small organic molecules is often very similar to amino acid side chains or the bases in nucleic acids, and hence there is no a priori reason why a molecular mechanics force field could not describe both organic liquids and biomolecules with a single parameter set. Here, we devise a benchmark for force fields in order to test the ability of existing force fields to reproduce some key properties of organic liquids, namely, the density, enthalpy of vaporization, the surface tension, the heat capacity at constant volume and pressure, the isothermal compressibility, the volumetric expansion coefficient, and the static dielectric constant. Well over 1200 experimental measurements were used for comparison to the simulations of 146 organic liquids. Novel polynomial interpolations of the dielectric constant (32 molecules), heat capacity at constant pressure (three molecules), and the isothermal compressibility (53 molecules) as a function of the temperature have been made, based on experimental data, in order to be able to compare simulation results to them. To compute the heat capacities, we applied the two phase thermodynamics method (Lin et al. J. Chem. Phys.2003, 119, 11792), which allows one to compute thermodynamic properties on the basis of the density of states as derived from the velocity autocorrelation function. The method is implemented in a new utility within the GROMACS molecular simulation package, named g_dos, and a detailed exposé of the underlying equations is presented. The purpose of this work is to establish the state of the art of two popular force fields, OPLS/AA (all-atom optimized potential for liquid simulation) and GAFF (generalized Amber force field), to find common bottlenecks, i.e., particularly difficult molecules, and to serve as a reference point for future force field development. To make for a fair playing field, all molecules were evaluated with the same parameter settings, such as thermostats and barostats
ERIC Educational Resources Information Center
McArdle, Heather K.
1997-01-01
Describes a week-long activity for general to honors-level students that addresses Hubble's law and the universal expansion theory. Uses a discrepant event-type activity to lead up to the abstract principles of the universal expansion theory. (JRH)
Technology Transfer Automated Retrieval System (TEKTRAN)
Oat (Avena sativa L.) kernels appear to contain much higher polar lipid concentrations than other plant tissues. We have extracted, identified, and quantified polar lipids from 18 oat genotypes grown in replicated plots in three environments in order to determine genotypic or environmental variation...
Accelerating the Original Profile Kernel
Hamp, Tobias; Goldberg, Tatyana; Rost, Burkhard
2013-01-01
One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical applications of large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a maximal speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster, possibly rendering the kernel the top contender in terms of its speed/performance trade-off. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel. PMID:23825697
Adaptive wiener image restoration kernel
Yuan, Ding
2007-06-05
A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system's Optical Transfer Function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image with a Wiener restoration kernel.
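A minimal frequency-domain sketch of Wiener restoration along these lines, assuming a known OTF and a scalar noise-to-signal ratio (the adaptive variant described above instead estimates the noise and image spectra from the data; all example values here are assumptions):

```python
import numpy as np

def wiener_restore(blurred, otf, nsr=1e-3):
    """Wiener restoration in the frequency domain: multiply the image
    spectrum by the restoration kernel conj(H) / (|H|^2 + NSR), where
    H is the optical transfer function."""
    B = np.fft.fft2(blurred)
    W = np.conj(otf) / (np.abs(otf)**2 + nsr)
    return np.real(np.fft.ifft2(W * B))

# Illustrative round trip: blur a bright disk with a Gaussian OTF, restore it.
n = 64
y, x = np.mgrid[:n, :n]
img = (((x - n // 2)**2 + (y - n // 2)**2) < 64).astype(float)
fx, fy = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
otf = np.exp(-200.0 * (fx**2 + fy**2))       # Gaussian low-pass OTF
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * otf))
restored = wiener_restore(blurred, otf)

err_blur = np.abs(blurred - img).mean()
err_rest = np.abs(restored - img).mean()     # restoration reduces the error
```

Multiplication in the frequency domain is the same operation as the spatial convolution with the Wiener restoration kernel mentioned in the abstract.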
Local Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
2014-01-01
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
NASA Astrophysics Data System (ADS)
He, Qing; Shchekin, Alexander K.; Xie, Ming-Liang
2015-06-01
New analytical solutions in the theory of Brownian coagulation with a wide class of collision kernels have been found using the Taylor-series expansion method of moments (TEMOM). It has been shown, for the different power exponents in the collision kernels from this class and for arbitrary initial conditions, that the relative rates of change of the zeroth and second moments of the particle volume distribution have the same long-time behavior with power exponent -1, while the dimensionless particle moment related to the geometric standard deviation tends to a constant value equal to 2. The power exponent in the collision kernel affects the time needed to approach the self-preserving distribution: the smaller the exponent, the longer the time. It has also been shown that a constant collision kernel gives results for the moments in Brownian coagulation that are very close to those in the continuum regime.
Point-Kernel Shielding Code System.
Energy Science and Technology Software Center (ESTSC)
1982-02-17
Version 00 QAD-BSA is a three-dimensional, point-kernel shielding code system based upon the CCC-48/QAD series. It is designed to calculate photon dose rates and heating rates using exponential attenuation and infinite medium buildup factors. Calculational provisions include estimates of fast neutron penetration using data computed by the moments method. Included geometry routines can describe complicated source and shield geometries. An internal library contains data for many frequently used structural and shielding materials, enabling the code to solve most problems with only source strengths and problem geometry required as input. This code system adapts especially well to problems requiring multiple sources and sources with asymmetrical geometry. In addition to being edited separately, the total interaction rates from many sources may be edited at each detector point. Calculated photon interaction rates agree closely with those obtained using QAD-P5A.
PERI - Auto-tuning Memory Intensive Kernels for Multicore
Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H
2008-06-24
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
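The Stencil benchmark named above is, at its core, an explicit heat-equation sweep over a regular grid. A minimal, untuned sketch of that kernel is shown below (the auto-tuned versions apply blocking, SIMD, and other optimizations in lower-level code; the grid size, diffusion coefficient, and step count are illustrative assumptions):

```python
import numpy as np

def heat_step(u, alpha=0.1):
    """One explicit finite-difference update of the 2-D heat equation
    with unit grid spacing and fixed (Dirichlet) boundary values:
    the 5-point stencil at the heart of the Stencil benchmark."""
    un = u.copy()
    un[1:-1, 1:-1] += alpha * (
        u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
        - 4.0 * u[1:-1, 1:-1]
    )
    return un

# A hot spot on a cold plate diffuses outward over repeated sweeps.
u = np.zeros((33, 33))
u[16, 16] = 1.0
for _ in range(50):
    u = heat_step(u)

total = u.sum()    # heat is conserved while the spot stays off the boundary
peak = u[16, 16]   # the peak decays as the heat spreads
```

Each sweep reads five neighboring points per output point, which is exactly the low arithmetic-intensity access pattern that makes the kernel memory-bound and worth auto-tuning.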
Kernel Near Principal Component Analysis
MARTIN, SHAWN B.
2002-07-01
We propose a novel algorithm based on Principal Component Analysis (PCA). First, we present an interesting approximation of PCA using Gram-Schmidt orthonormalization. Next, we combine our approximation with the kernel functions from Support Vector Machines (SVMs) to provide a nonlinear generalization of PCA. After benchmarking our algorithm in the linear case, we explore its use in both the linear and nonlinear cases. We include applications to face data analysis, handwritten digit recognition, and fluid flow.
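For reference, the standard (exact) kernel PCA that the abstract's Gram-Schmidt scheme approximates can be sketched as follows; the Gaussian RBF kernel, the parameter values, and the synthetic data are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.5):
    """Standard kernel PCA with a Gaussian RBF kernel: eigendecompose
    the double-centered Gram matrix and return the projections of the
    training points onto the leading components."""
    sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    K = np.exp(-gamma * sq)                     # RBF Gram matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                              # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components] # leading eigenpairs
    vals, vecs = vals[idx], vecs[:, idx]
    return vecs * np.sqrt(np.maximum(vals, 0.0))

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
Z = kernel_pca(X, n_components=2)               # nonlinear principal coords
```

The Gram-Schmidt approximation trades the full eigendecomposition for an incremental orthonormalization, which is what makes the authors' variant cheaper on large data sets.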
Derivation of aerodynamic kernel functions
NASA Technical Reports Server (NTRS)
Dowell, E. H.; Ventres, C. S.
1973-01-01
The method of Fourier transforms is used to determine the kernel function which relates the pressure on a lifting surface to the prescribed downwash within the framework of Dowell's (1971) shear flow model. This model is intended to improve upon the potential flow aerodynamic model by allowing for the aerodynamic boundary layer effects neglected in the potential flow model. For simplicity, incompressible, steady flow is considered. The proposed method is illustrated by deriving known results from potential flow theory.
Haselton, H.T., Jr.; Hemingway, B.S.; Robie, R.A.
1984-01-01
Low-temperature heat capacities (5-380 K) have been measured by adiabatic calorimetry for synthetic CaAl2SiO6 glass and pyroxene. High-temperature unit cell parameters were measured for CaAl2SiO6 pyroxene by means of a Nonius Guinier-Lenne powder camera in order to determine the mean coefficient of thermal expansion in the temperature range 25-1200 °C. -J.A.Z.
Kernel CMAC with improved capability.
Horváth, Gábor; Szabó, Tamás
2007-02-01
The cerebellar model articulation controller (CMAC) has some attractive features, namely fast learning capability and the possibility of efficient digital hardware implementation. Although CMAC was proposed many years ago, several questions remain open even today. The most important ones concern its modeling and generalization capabilities. The limits of its modeling capability have been addressed in the literature, and recently certain questions about its generalization property have also been investigated. This paper deals with both the modeling and the generalization properties of CMAC. First, a new interpolation model is introduced. Then, a detailed analysis of the generalization error is given, and an analytical expression of this error for some special cases is presented. It is shown that this generalization error can be rather significant, and a simple regularized training algorithm to reduce this error is proposed. The results related to the modeling capability show that there are differences between the one-dimensional (1-D) and the multidimensional versions of CMAC. This paper discusses the reasons for this difference and suggests a new kernel-based interpretation of CMAC. The kernel interpretation gives a unified framework. Applying this approach, both the 1-D and the multidimensional CMACs can be constructed with similar modeling capability. Finally, this paper shows that the regularized training algorithm can be applied to the kernel interpretations too, which results in a network with significantly improved approximation capabilities. PMID:17278566
RKRD: Runtime Kernel Rootkit Detection
NASA Astrophysics Data System (ADS)
Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.
In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.
Visualizing and Interacting with Kernelized Data.
Barbosa, A; Paulovich, F V; Paiva, A; Goldenstein, S; Petronetto, F; Nonato, L G
2016-03-01
Kernel-based methods have experienced substantial progress in recent years, turning out to be an essential mechanism for data classification, clustering and pattern recognition. The effectiveness of kernel-based techniques, though, depends largely on the capability of the underlying kernel to properly embed data in the feature space associated with the kernel. However, visualizing how a kernel embeds the data in a feature space is not so straightforward, as the embedding map and the feature space are implicitly defined by the kernel. In this work, we present a novel technique to visualize the action of a kernel, that is, how the kernel embeds data into a high-dimensional feature space. The proposed methodology relies on a solid mathematical formulation to map kernelized data onto a visual space. Our approach is faster and more accurate than most existing methods while still allowing interactive manipulation of the projection layout, a game-changing trait that other kernel-based projection techniques do not have. PMID:26829242
The flare kernel in the impulsive phase
NASA Technical Reports Server (NTRS)
Dejager, C.
1986-01-01
The impulsive phase of a flare is characterized by impulsive bursts of X-ray and microwave radiation, related to impulsive footpoint heating up to 50 or 60 MK, by upward gas velocities (150 to 400 km/sec) and by a gradual increase of the flare's thermal energy content. These phenomena, as well as non-thermal effects, are all related to the impulsive energy injection into the flare. The available observations are also quantitatively consistent with a model in which energy is injected into the flare by beams of energetic electrons, causing ablation of chromospheric gas, followed by convective rise of gas. Thus, a hole is burned into the chromosphere; at the end of the impulsive phase of an average flare the lower part of that hole is situated about 1800 km above the photosphere. H alpha and other optical and UV line emission is radiated by a thin layer (approx. 20 km) at the bottom of the flare kernel. The upward rising and outward streaming gas cools down by conduction in about 45 s. The non-thermal effects in the initial phase are due to curtailing of the energy distribution function by escape of energetic electrons. The single flux tube model of a flare does not fit these observations; instead we propose the spaghetti-bundle model. Microwave and gamma-ray observations suggest the occurrence of dense flare knots of approx. 800 km diameter and of high temperature. Future observations should concentrate on locating the microwave/gamma-ray sources, and on determining the kernel's fine structure and the related multi-loop structure of the flaring area.
Niazi, A.; Bud'ko, S.L.; Schlagel, D.L.; Yan, J.Q.; Lograsso, T.A.; Kreyssig, A.; Das, S.; Nandi, S.; Goldman, A.I.; Honecker, A.; McCallum, R.W.; Reehuis, M.; Pieper, O.; Lake, B.; Johnston, D.C.
2009-05-01
The compound CaV2O4 contains V3+ cations with spin S = 1 and has an orthorhombic structure at room temperature containing zigzag chains of V atoms running along the c axis. We have grown single crystals of CaV2O4 and report crystallography, static magnetization, magnetic susceptibility χ, ac magnetic susceptibility, heat capacity Cp, and thermal expansion measurements in the temperature T range of 1.8-350 K on the single crystals and on polycrystalline samples. An orthorhombic-to-monoclinic structural distortion and a long-range antiferromagnetic (AF) transition were found at sample-dependent temperatures TS ≈ 108-145 K and TN ≈ 51-76 K, respectively. In two annealed single crystals, another transition was found at ≈200 K. In one of the crystals, this transition is mostly due to a V2O3 impurity phase that grows coherently in the crystals during annealing. However, in the other crystal the origin of this transition at 200 K is unknown. The χ(T) shows a broad maximum at ≈300 K associated with short-range AF ordering, and the anisotropy of χ above TN is small. The anisotropic χ(T→0) data below TN show that the (average) easy axis of the AF magnetic structure is the b axis. The Cp(T) data indicate strong short-range AF ordering above TN, consistent with the χ(T) data. We fitted our χ data by a J1-J2 S = 1 Heisenberg chain model, where J1 (J2) is the (next-)nearest-neighbor exchange interaction. We find J1 ≈ 230 K and, surprisingly, J2/J1 ≈ 0 (or J1/J2 ≈ 0). The interaction J⊥ between these S = 1 chains leading to long-range AF ordering at TN is estimated to be J⊥/J1 ≈ 0.04.
Nonlinear projection trick in kernel methods: an alternative to the kernel trick.
Kwak, Nojun
2013-12-01
In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so-called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named the nonlinear projection trick in contrast to the kernel trick. With this technique, the applicability of the kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses the L1-norm instead of the L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach. PMID:24805227
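The construction described in the abstract can be sketched in a few lines of numpy: the eigendecomposition of the kernel matrix yields explicit coordinates whose dot products reproduce the kernel, and the same eigenpairs map a new point into the reduced space from its kernel vector. A minimal sketch under a Gaussian kernel; the function names are illustrative, not from the paper:

```python
import numpy as np

def nonlinear_projection(K, tol=1e-10):
    """Explicit map into a reduced kernel space from an n x n Gram matrix K."""
    lam, U = np.linalg.eigh(K)            # K = U diag(lam) U^T
    keep = lam > tol                      # effective dimensionality <= n
    lam, U = lam[keep], U[:, keep]
    Y = U * np.sqrt(lam)                  # row i = coordinates of sample i
    # Projection of a new point from its kernel vector k, k_i = k(x_new, x_i):
    project = lambda k: (U / np.sqrt(lam)).T @ k
    return Y, project

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq)                           # Gaussian kernel Gram matrix
Y, project = nonlinear_projection(K)
assert np.allclose(Y @ Y.T, K)            # dot products reproduce the kernel
assert np.allclose(project(K[:, 3]), Y[3])
```

Any dot-product-based algorithm can then be run directly on the rows of `Y`, which is the point of the trick.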
Image texture analysis of crushed wheat kernels
NASA Astrophysics Data System (ADS)
Zayas, Inna Y.; Martin, C. R.; Steele, James L.; Dempster, Richard E.
1992-03-01
The development of new approaches for wheat hardness assessment may impact the grain industry in marketing, milling, and breeding. This study used image texture features for wheat hardness evaluation. Application of digital imaging to grain for grading purposes is principally based on morphometrical (shape and size) characteristics of the kernels. A composite sample of 320 kernels from 17 wheat varieties was collected after testing and crushing with a single-kernel hardness characterization meter. Six wheat classes were represented: HRW, HRS, SRW, SWW, Durum, and Club. In this study, parameters which characterize texture or spatial distribution of gray levels of an image were determined and used to classify images of crushed wheat kernels. The texture parameters of crushed wheat kernel images differed depending on class, hardness and variety of the wheat. Image texture analysis of crushed wheat kernels showed promise for use in class, hardness, milling quality, and variety discrimination.
Reducing Thermal Expansivity of Composite Panels
NASA Technical Reports Server (NTRS)
Smith, D. D.
1985-01-01
Coefficient of thermal expansion of laminated graphite/epoxy composite panels altered after panels cured by postcuring heat treatment. Postcure decreases coefficient of thermal expansion by increasing crosslinking between molecules. Treatment makes it possible to reprocess costly panels for requisite thermal expansivity instead of discarding them.
Molecular Hydrodynamics from Memory Kernels.
Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin
2016-04-01
The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^{-3/2}. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius. PMID:27104730
NASA Technical Reports Server (NTRS)
Desmarais, R. N.; Rowe, W. S.
1984-01-01
For the design of active controls to stabilize flight vehicles, which requires the use of unsteady aerodynamics that are valid for arbitrary complex frequencies, algorithms are derived for evaluating the nonelementary part of the kernel of the integral equation that relates unsteady pressure to downwash. This part of the kernel is separated into an infinite limit integral that is evaluated using Bessel and Struve functions and into a finite limit integral that is expanded in series and integrated termwise in closed form. The developed series expansions gave reliable answers for all complex reduced frequencies and executed faster than exponential approximations for many pressure stations.
A reduced volumetric expansion factor plot
NASA Technical Reports Server (NTRS)
Hendricks, R. C.
1979-01-01
A reduced volumetric expansion factor plot has been constructed for simple fluids which is suitable for engineering computations in heat transfer. Volumetric expansion factors have been found useful in correlating heat transfer data over a wide range of operating conditions including liquids, gases and the near critical region.
KERNEL PHASE IN FIZEAU INTERFEROMETRY
Martinache, Frantz
2010-11-20
The detection of high contrast companions at small angular separation appears feasible in conventional direct images using the self-calibration properties of interferometric observable quantities. The friendly notion of closure phase, which is key to the recent observational successes of non-redundant aperture masking interferometry used with adaptive optics, appears to be one example of a wide family of observable quantities that are not contaminated by phase noise. In the high-Strehl regime, soon to be available thanks to the coming generation of extreme adaptive optics systems on ground-based telescopes, and already available from space, closure-phase-like information can be extracted from any direct image, even one taken with a redundant aperture. These new phase-noise-immune observable quantities, called kernel phases, are determined a priori from the knowledge of the geometry of the pupil only. Re-analysis of archival data acquired with the Hubble Space Telescope NICMOS instrument using this new kernel-phase algorithm demonstrates the power of the method, as it clearly detects and locates with milliarcsecond precision a known companion to a star at an angular separation less than the diffraction limit.
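The linear-algebraic core of the kernel-phase idea can be sketched as follows: if a matrix A carries pupil-plane phase errors into the observed Fourier phases, then any row of the left null space of A yields an observable immune to those errors. A toy sketch with a random A; the real phase transfer matrix is derived from the pupil geometry:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pupil, m = 6, 15                        # toy sizes; illustrative only
A = rng.normal(size=(m, n_pupil))         # maps pupil phases -> Fourier phases

# Kernel phases: rows K with K @ A = 0, i.e. the left null space of A.
U, s, Vt = np.linalg.svd(A)
K = U[:, n_pupil:].T                      # (m - n_pupil) kernel-phase projections

target = rng.normal(size=m)               # noise-free phase signal of the scene
noise = A @ rng.normal(size=n_pupil)      # instrumental pupil-plane phase errors
observed = target + noise

# Kernel phases of the noisy data equal those of the noise-free signal.
assert np.allclose(K @ observed, K @ target)
```

Closure phase is the special case where the rows of K encode baseline triangles of a non-redundant aperture.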
Code of Federal Regulations, 2011 CFR
2011-01-01
... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...
Code of Federal Regulations, 2013 CFR
2013-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...
Code of Federal Regulations, 2012 CFR
2012-01-01
... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...
Code of Federal Regulations, 2014 CFR
2014-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...
Corn kernel oil and corn fiber oil
Technology Transfer Automated Retrieval System (TEKTRAN)
Unlike most edible plant oils that are obtained directly from oil-rich seeds by either pressing or solvent extraction, corn seeds (kernels) have low levels of oil (4%) and commercial corn oil is obtained from the corn germ (embryo) which is an oil-rich portion of the kernel. Commercial corn oil cou...
Nonlocal energy-optimized kernel: Recovering second-order exchange in the homogeneous electron gas
NASA Astrophysics Data System (ADS)
Bates, Jefferson E.; Laricchia, Savio; Ruzsinszky, Adrienn
2016-01-01
In order to remedy some of the shortcomings of the random phase approximation (RPA) within adiabatic connection fluctuation-dissipation (ACFD) density functional theory, we introduce a short-ranged, exchange-like kernel that is one-electron self-correlation free and exact for two-electron systems in the high-density limit. By tuning a free parameter in our model to recover an exact limit of the homogeneous electron gas correlation energy, we obtain a nonlocal, energy-optimized kernel that reduces the errors of RPA for both homogeneous and inhomogeneous solids. Using wave-vector symmetrization for the kernel, we also implement RPA renormalized perturbation theory for extended systems, and demonstrate its capability to describe the dominant correlation effects with a low-order expansion in both metallic and nonmetallic systems. The comparison of ACFD structural properties with experiment is also shown to be limited by the choice of norm-conserving pseudopotential.
NASA Astrophysics Data System (ADS)
Zhang, Ning; Wang, Xuemei; Chen, Yan; Dai, Wei; Wang, Xueyuan
2015-08-01
Urbanization is an extreme way in which human beings change the land use/land cover of the earth surface, and anthropogenic heat release occurs at the same time. In this paper, the anthropogenic heat release parameterization scheme in the Weather Research and Forecasting model is modified to consider the spatial heterogeneity of the release, and the impacts of land use change and anthropogenic heat release on urban boundary layer structure in the Pearl River Delta, China, are studied with a series of numerical experiments. The results show that the anthropogenic heat release contributes nearly 75% to the urban heat island intensity in our studied period. The impact of anthropogenic heat release on near-surface specific humidity is very weak, but that on relative humidity is apparent due to the near-surface air temperature change. The near-surface wind speed decreases after the local land use is changed to urban type due to the increased land surface roughness, but the anthropogenic heat release leads to increased low-level wind speed and decreased wind speed aloft in the urban boundary layer, because the anthropogenic heat release reduces the boundary layer stability and enhances vertical mixing.
Bayesian Kernel Mixtures for Counts
Canale, Antonio; Dunson, David B.
2011-01-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online. PMID:22523437
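The rounding construction at the heart of this approach can be sketched as follows: draw a latent value from a continuous mixture, then round it onto the nonnegative integers. The floor-based thresholds below (negatives mapped to zero) are an illustrative choice, not necessarily the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-component Gaussian mixture for the latent continuous variable.
weights = np.array([0.3, 0.7])
means   = np.array([5.0, 6.0])
sds     = np.array([0.5, 0.4])

def sample_counts(n):
    """Sample counts from a mixture of rounded Gaussian kernels."""
    comp = rng.choice(len(weights), size=n, p=weights)
    latent = rng.normal(means[comp], sds[comp])
    return np.maximum(np.floor(latent), 0).astype(int)

y = sample_counts(100_000)
# A Poisson cannot have variance below its mean; a rounded-Gaussian
# mixture with narrow components can:
assert y.var() < y.mean()
```

This is exactly the flexibility the abstract highlights: unlike Poisson mixtures, rounded continuous kernels accommodate underdispersed count distributions.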
NASA Astrophysics Data System (ADS)
Wang, Hua; Alatancang; Huang, Jun-Jie
2009-12-01
The free vibration problem of rectangular thin plates is rewritten as a new upper triangular matrix differential system. For the associated operator matrix, we find that the two diagonal block operators are Hamiltonian. Moreover, the existence and completeness of normed symplectic orthogonal eigenfunction systems of these two block operators are demonstrated. Based on the completeness, the general solution of the free vibration of rectangular thin plates is given by double symplectic eigenfunction expansion method.
Robie, R.A.; Evans, H.T., Jr.; Hemingway, B.S.
1988-01-01
The heat capacity of ilvaite from Seriphos, Greece, was measured by adiabatic shield calorimetry (6.4 to 380.7 K) and by differential scanning calorimetry (340 to 950 K). The thermal expansion of ilvaite was also investigated, by X-ray methods, between 308 and 853 K. At 298.15 K the standard molar heat capacity and entropy for ilvaite are 298.9±0.6 and 292.3±0.6 J/(mol·K), respectively. Between 333 and 343 K ilvaite changes from monoclinic to orthorhombic. The antiferromagnetic transition is shown by a hump in Cp° with a Néel temperature of 121.9±0.5 K. A rounded hump in Cp° between 330 and 400 K may possibly arise from the thermally activated electron delocalization (hopping) known to take place in this temperature region. © 1988 Springer-Verlag.
Non-separable pairing interaction kernels applied to superconducting cuprates
NASA Astrophysics Data System (ADS)
Haley, Stephen B.; Fink, Herman J.
2014-05-01
A pairing Hamiltonian H(Γ) with a non-separable interaction kernel Γ produces HTS for relatively weak interactions. The doping and temperature dependence of Γ(x,T) and the chemical potential μ(x) is determined by a probabilistic filling of the electronic states in the cuprate unit cell. A diverse set of HTS and normal state properties is examined, including the SC phase transition boundary TC(x), SC gap Δ(x,T), entropy S(x,T), specific heat C(x,T), and spin susceptibility χs(x,T). Detailed x,T agreement with cuprate experiment is obtained for all properties.
Putting Priors in Mixture Density Mercer Kernels
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
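One simple way to obtain a Mercer kernel from a mixture density, sketched here as an illustrative reading rather than the paper's exact construction, is to represent each point by its vector of posterior component memberships p(z|x); the inner product of two such vectors is symmetric and positive semidefinite by construction:

```python
import numpy as np

# A hand-specified 1-D Gaussian mixture stands in for a fitted density model;
# in practice the mixture parameters (and hence the kernel) are learned from
# data, e.g. via EM under a Bayesian prior.
means   = np.array([-2.0, 0.0, 3.0])
sds     = np.array([1.0, 0.5, 1.0])
weights = np.array([0.3, 0.4, 0.3])

def posterior(x):
    """p(z|x) for each mixture component z, one row per sample."""
    dens = weights * np.exp(-0.5 * ((x[:, None] - means) / sds) ** 2) / sds
    return dens / dens.sum(axis=1, keepdims=True)

def mixture_kernel(x):
    P = posterior(x)
    return P @ P.T                        # Gram matrix, PSD by construction

x = np.linspace(-4, 4, 50)
K = mixture_kernel(x)
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > -1e-10   # valid Mercer kernel
```

Two points get a large kernel value when the mixture assigns them to the same components, which is how prior knowledge baked into the density model shapes the kernel.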
Chen, Fei; Tillberg, Paul W.; Boyden, Edward S.
2014-01-01
In optical microscopy, fine structural details are resolved by using refraction to magnify images of a specimen. Here we report the discovery that, by synthesizing a swellable polymer network within a specimen, it can be physically expanded, resulting in physical magnification. By covalently anchoring specific labels located within the specimen directly to the polymer network, labels spaced closer than the optical diffraction limit can be isotropically separated and optically resolved, a process we call expansion microscopy (ExM). Thus, this process can be used to perform scalable super-resolution microscopy with diffraction-limited microscopes. We demonstrate ExM with effective ~70 nm lateral resolution in both cultured cells and brain tissue, performing three-color super-resolution imaging of ~10^7 μm^3 of the mouse hippocampus with a conventional confocal microscope. PMID:25592419
Load regulating expansion fixture
Wagner, Lawrence M.; Strum, Michael J.
1998-01-01
A free standing self contained device for bonding ultra thin metallic films, such as 0.001 inch beryllium foils. The device will regulate to a predetermined load for solid state bonding when heated to a bonding temperature. The device includes a load regulating feature, whereby the expansion stresses generated for bonding are regulated and self adjusting. The load regulator comprises a pair of friction isolators with a plurality of annealed copper members located therebetween. The device, with the load regulator, will adjust to and maintain a stress level needed to successfully and economically complete a leak tight bond without damaging thin foils or other delicate components.
Kernel map compression for speeding the execution of kernel-based methods.
Arif, Omar; Vela, Patricio A
2011-06-01
The use of Mercer kernel methods in statistical learning theory provides for strong learning capabilities, as seen in kernel principal component analysis and support vector machines. Unfortunately, after learning, the computational complexity of execution through a kernel is of the order of the size of the training set, which is quite large for many applications. This paper proposes a two-step procedure for arriving at a compact and computationally efficient execution stage. After learning in the kernel space, the proposed extension exploits the universal approximation capabilities of generalized radial basis function neural networks to efficiently approximate and replace the projections onto the empirical kernel map used during execution. Sample applications demonstrate significant compression of the kernel representation with graceful performance loss. PMID:21550884
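The compression idea can be sketched as follows (an illustrative sketch, not the paper's exact procedure): after learning, a kernel projection costs one kernel evaluation per training sample, and a small RBF network fitted by least squares replaces it with far fewer evaluations:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(400, 2))              # training data
gamma = 0.2

def kernel(A, B):                                   # Gaussian kernel
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Exact kernel-PCA-style projection onto the top two components.
K = kernel(X, X)
lam, U = np.linalg.eigh(K)
alpha = U[:, -2:] / np.sqrt(lam[-2:])
def project_exact(Q):
    return kernel(Q, X) @ alpha                     # 400 kernel evals per query

# Compressed map: 30 RBF centres approximate the same projection.
centres = X[rng.choice(len(X), 30, replace=False)]
W, *_ = np.linalg.lstsq(kernel(X, centres), project_exact(X), rcond=None)
def project_fast(Q):
    return kernel(Q, centres) @ W                   # 30 kernel evals per query

Q = rng.uniform(-2, 2, size=(200, 2))
exact, fast = project_exact(Q), project_fast(Q)
rel_mse = ((fast - exact) ** 2).mean() / (exact ** 2).mean()
assert rel_mse < 0.1                                # graceful performance loss
```

The trade-off is exactly the one the abstract describes: a 13x cheaper execution map at the cost of a small approximation error.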
Point Kernel Gamma-Ray Shielding Code With Geometric Progression Buildup Factors.
Energy Science and Technology Software Center (ESTSC)
1990-11-30
Version 00 QADMOD-GP is a PC version of the mainframe code CCC-396/QADMOD-G, a point-kernel integration code for calculating gamma ray fluxes and dose rates or heating rates at specific detector locations within a three-dimensional shielding geometry configuration due to radiation from a volume-distributed source.
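The point-kernel method can be sketched as a sum over source voxels of attenuated point-source fluxes, each multiplied by a buildup factor. The geometric progression (GP) buildup factor is used here in a constant-K simplification, B(x) = 1 + (b-1)(K^x - 1)/(K-1), and the coefficients are illustrative, not tabulated values:

```python
import numpy as np

mu = 0.06          # linear attenuation coefficient of the shield, 1/cm (assumed)
b, Kgp = 2.0, 0.9  # illustrative GP buildup coefficients, not tabulated data

def buildup(x):
    """GP buildup factor with constant K, as a function of mean free paths x."""
    return 1.0 + (b - 1.0) * (Kgp**x - 1.0) / (Kgp - 1.0)

def point_kernel_flux(source_pts, strengths, detector):
    """Sum attenuated point-kernel contributions at one detector location."""
    r = np.linalg.norm(source_pts - detector, axis=1)   # cm
    mfp = mu * r                                        # mean free paths
    return np.sum(strengths * buildup(mfp) * np.exp(-mfp) / (4 * np.pi * r**2))

# Discretize a 10 cm cube source of uniform strength into point kernels.
g = np.linspace(0.5, 9.5, 10)
pts = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
flux = point_kernel_flux(pts, np.full(len(pts), 1.0),
                         detector=np.array([50.0, 5.0, 5.0]))
assert flux > 0
```

A production code like QADMOD-GP tabulates energy-dependent GP coefficients per material and ray-traces through the actual shield geometry; the structure of the sum is the same.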
A simple method for computing the relativistic Compton scattering kernel for radiative transfer
NASA Technical Reports Server (NTRS)
Prasad, M. K.; Kershaw, D. S.; Beason, J. D.
1986-01-01
Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2013-01-01 2013-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...
7 CFR 51.2296 - Three-fourths half kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296... STANDARDS) United States Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2296 Three-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more...
UPDATE OF GRAY KERNEL DISEASE OF MACADAMIA - 2006
Technology Transfer Automated Retrieval System (TEKTRAN)
Gray kernel is an important disease of macadamia that affects the quality of kernels with gray discoloration and a permeating, foul odor that can render entire batches of nuts unmarketable. We report on the successful production of gray kernel in raw macadamia kernels artificially inoculated with s...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...
KITTEN Lightweight Kernel 0.1 Beta
Energy Science and Technology Software Center (ESTSC)
2007-12-12
The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general purpose OS kernels.
Biological sequence classification with multivariate string kernels.
Kuksa, Pavel P
2013-01-01
String kernel-based machine learning methods have yielded great success in practical tasks of structured/sequential data analysis. They often exhibit state-of-the-art performance on many practical tasks of sequence analysis such as biological sequence classification, remote homology detection, or protein superfamily and fold prediction. However, typical string kernel methods rely on the analysis of discrete 1D string data (e.g., DNA or amino acid sequences). In this paper, we address the multiclass biological sequence classification problems using multivariate representations in the form of sequences of feature vectors (as in biological sequence profiles, or sequences of individual amino acid physicochemical descriptors) and a class of multivariate string kernels that exploit these representations. On three protein sequence classification tasks, the proposed multivariate representations and kernels show significant 15-20 percent improvements compared to existing state-of-the-art sequence classification methods. PMID:24384708
Biological Sequence Analysis with Multivariate String Kernels.
Kuksa, Pavel P
2013-03-01
String kernel-based machine learning methods have yielded great success in practical tasks of structured/sequential data analysis. They often exhibit state-of-the-art performance on many practical tasks of sequence analysis such as biological sequence classification, remote homology detection, or protein superfamily and fold prediction. However, typical string kernel methods rely on analysis of discrete one-dimensional (1D) string data (e.g., DNA or amino acid sequences). In this work we address the multi-class biological sequence classification problems using multivariate representations in the form of sequences of feature vectors (as in biological sequence profiles, or sequences of individual amino acid physico-chemical descriptors) and a class of multivariate string kernels that exploit these representations. On a number of protein sequence classification tasks, the proposed multivariate representations and kernels show significant 15-20% improvements compared to existing state-of-the-art sequence classification methods. PMID:23509193
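The multivariate string kernels above generalize classical 1D string kernels. As a hedged illustration of the univariate baseline they build on, here is a minimal k-mer (spectrum) kernel sketch; the function name and parameter choices are ours, not the paper's:

```python
from collections import Counter

def spectrum_kernel(s, t, k=3):
    """Count shared k-mers between two sequences (the classic spectrum kernel)."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    # Inner product of the two k-mer count vectors.
    return sum(cs[m] * ct[m] for m in cs)

print(spectrum_kernel("GATTACA", "GATTACA"))  # 5 (five distinct 3-mers, each once)
```

The multivariate kernels of the paper replace the discrete alphabet with sequences of feature vectors; this sketch only shows the 1D case they extend.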
Variational Dirichlet Blur Kernel Estimation.
Zhou, Xu; Mateos, Javier; Zhou, Fugen; Molina, Rafael; Katsaggelos, Aggelos K
2015-12-01
Blind image deconvolution involves two key objectives: 1) latent image and 2) blur estimation. For latent image estimation, we propose a fast deconvolution algorithm, which uses an image prior of nondimensional Gaussianity measure to enforce sparsity and an undetermined boundary condition methodology to reduce boundary artifacts. For blur estimation, a linear inverse problem with normalization and nonnegative constraints must be solved. However, the normalization constraint is ignored in many blind image deblurring methods, mainly because it makes the problem less tractable. In this paper, we show that the normalization constraint can be very naturally incorporated into the estimation process by using a Dirichlet distribution to approximate the posterior distribution of the blur. Making use of variational Dirichlet approximation, we provide a blur posterior approximation that considers the uncertainty of the estimate and removes noise in the estimated kernel. Experiments with synthetic and real data demonstrate that the proposed method is very competitive to the state-of-the-art blind image restoration methods. PMID:26390458
TICK: Transparent Incremental Checkpointing at Kernel Level
Energy Science and Technology Software Center (ESTSC)
2004-10-25
TICK is a software package implemented in Linux 2.6 that allows saving and restoring user processes without any change to the user code or binary. With TICK, a process can be suspended by the Linux kernel upon receiving an interrupt and saved in a file. This file can later be thawed on another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module, for Linux version 2.6.5.
Seal, R.R., II; Robie, R.A.; Hemingway, B.S.; Evans, H.T., Jr.
1996-01-01
The heat capacity of synthetic Cu3AsS4 (enargite) was measured by quasi-adiabatic calorimetry from T = 5 K to T = 355 K and by differential scanning calorimetry from T = 339 K to T = 720 K. Heat-capacity anomalies were observed at T = (58.5 ± 0.5) K (Δ_trs H°_m = 1.4·R·K; Δ_trs S°_m = 0.02·R) and at T = (66.5 ± 0.5) K (Δ_trs H°_m = 4.6·R·K; Δ_trs S°_m = 0.08·R), where R = 8.31451 J·K⁻¹·mol⁻¹. The causes of the anomalies are unknown. At T = 298.15 K, C°_p,m and S°_m(T) are (190.4 ± 0.2) J·K⁻¹·mol⁻¹ and (257.6 ± 0.6) J·K⁻¹·mol⁻¹, respectively. The superambient heat capacities are described from T = 298.15 K to T = 944 K by the least-squares regression equation: C°_p,m/(J·K⁻¹·mol⁻¹) = (196.7 ± 1.2) + (0.0499 ± 0.0016)·(T/K) − (1 918 000 ± 84 000)·(T/K)⁻². The thermal expansion of synthetic enargite was measured from T = 298.15 K to T = 573 K by powder X-ray diffraction. The thermal expansion of the unit-cell volume (Z = 2) is described from T = 298.15 K to T = 573 K by the least-squares equation: V/pm³ = 10⁶·(288.2 ± 0.2) + 10⁴·(1.49 ± 0.04)·(T/K). © 1996 Academic Press Limited.
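The superambient regression equation from this record can be evaluated directly; a minimal sketch, where the function name is ours and the central fit coefficients are taken from the abstract (uncertainties dropped):

```python
def cp_enargite(T):
    """Molar heat capacity of Cu3AsS4 in J K^-1 mol^-1, valid for 298.15 K <= T <= 944 K,
    using the central values of the least-squares fit quoted in the abstract."""
    return 196.7 + 0.0499 * T - 1.918e6 / T**2

# The fit at 298.15 K gives ~190.0, close to the measured (190.4 +/- 0.2).
print(round(cp_enargite(298.15), 1))
```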
NASA Astrophysics Data System (ADS)
Ventura, Guglielmo; Perfetti, Mauro
All solid materials, when cooled to low temperatures, experience a change in physical dimensions called "thermal contraction", which is typically less than 1% in volume in the 4-300 K temperature range. Although the effect is small, it can have a heavy impact on the design of cryogenic devices. The thermal contraction of different materials may vary by as much as an order of magnitude: since cryogenic devices are constructed at room temperature from many different materials, one of the major concerns is the effect of differential thermal contraction and the resulting thermal stress that may occur when two dissimilar materials are bonded together. In this chapter, the theory of thermal contraction is reported in Sect.
PET Image Reconstruction Using Kernel Method
Wang, Guobao; Qi, Jinyi
2014-01-01
Image reconstruction from low-count PET projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization (EM) algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4D dynamic PET patient dataset showed promising results. PMID:25095249
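The kernelized EM update described in this abstract (image modeled as x = Kα, with the EM iteration applied to the coefficients α) can be sketched on toy data. All shapes, the system matrix, and the kernel matrix below are illustrative assumptions, not the paper's actual PET setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_bins = 16, 24
A = rng.random((n_bins, n_pix))          # toy system (projection) matrix, nonnegative
K = np.exp(-rng.random((n_pix, n_pix)))  # toy kernel matrix from "prior features"
x_true = rng.random(n_pix)
y = rng.poisson(A @ x_true * 50)         # simulated low-count projection data

a = np.ones(n_pix)                       # kernel coefficients, x = K @ a
sens = K.T @ A.T @ np.ones(n_bins)       # sensitivity term of the EM update
for _ in range(50):
    ybar = A @ (K @ a) + 1e-12           # expected projection data
    a *= (K.T @ (A.T @ (y / ybar))) / sens  # multiplicative kernelized EM step
x_hat = K @ a                            # reconstructed image
```

The multiplicative form keeps the coefficients nonnegative, mirroring the standard ML-EM iteration with the kernel matrix folded into the forward model.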
PET image reconstruction using kernel method.
Wang, Guobao; Qi, Jinyi
2015-01-01
Image reconstruction from low-count positron emission tomography (PET) projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4-D dynamic PET patient dataset showed promising results. PMID:25095249
NASA Technical Reports Server (NTRS)
Liu, Y.; Israelsson, U.; Larson, M.
2001-01-01
Presentation on the transition in 4He in the presence of a heat current (Q), which provides an ideal system for the study of phase transitions under non-equilibrium, dynamical conditions. Many physical properties become nonlinear and Q-dependent near the transition temperature, T_lambda.
Wang, Qishan; Bag, Jnanankur
2008-05-23
Formation of nuclear inclusions consisting of aggregates of a polyalanine expansion mutant of nuclear poly(A)-binding protein (PABPN1) is the hallmark of oculopharyngeal muscular dystrophy (OPMD). OPMD is a late onset autosomal dominant disease. Patients with this disorder exhibit progressive swallowing difficulty and drooping of their eye lids, which starts around the age of 50. Previously we have shown that treatment of cells expressing the mutant PABPN1 with a number of chemicals such as ibuprofen, indomethacin, ZnSO4, and 8-hydroxy-quinoline induces HSP70 expression and reduces PABPN1 aggregation. In these studies we have shown that expression of additional HSPs including HSP27, HSP40, and HSP105 were induced in mutant PABPN1 expressing cells following exposure to the chemicals mentioned above. Furthermore, all three additional HSPs were translocated to the nucleus and probably helped to properly fold the mutant PABPN1 by co-localizing with this protein.
Note on trigonometric expansions of theta functions
NASA Astrophysics Data System (ADS)
Chouikha, A. Raouf
2003-04-01
We are interested in properties of coefficients of certain expansions of the classical theta functions. We show that they are solutions of a differential system derived from the heat equation. We plan to explicitly give expressions of these coefficients.
On the formation of new ignition kernels in the chemically active dispersed mixtures
NASA Astrophysics Data System (ADS)
Ivanov, M. F.; Kiverin, A. D.
2015-11-01
The specific features of the combustion waves propagating through the channels filled with chemically active gaseous mixture and non-uniformly suspended micro particles are studied numerically. It is shown that the heat radiated by the hot products, absorbed by the micro particles and then transferred to the environmental fresh mixture can be the source of new ignition kernels in the regions of particles' clusters. Herewith the spatial distribution of the particles determines the features of combustion regimes arising in these kernels. One can highlight the multi-kernel ignition in the polydisperse mixtures and ignition of the combustion regimes with shocks and detonation formation in the mixtures with pronounced gradients of microparticles concentration.
Analog forecasting with dynamics-adapted kernels
NASA Astrophysics Data System (ADS)
Zhao, Zhizhen; Giannakis, Dimitrios
2016-09-01
Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
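The core idea above, replacing a single analog with a kernel-weighted ensemble of the successors of past states, can be sketched in a few lines. The function, the Gaussian similarity kernel, and the bandwidth heuristic are our illustrative assumptions; the paper's kernels additionally use delay coordinates and directional dependence on the dynamics:

```python
import numpy as np

def analog_forecast(history, x0, tau, eps=None):
    """Forecast tau steps ahead of state x0 by kernel-weighted averaging of the
    tau-step successors of all historical analogs."""
    past = history[:-tau]                 # candidate analogs
    future = history[tau:]                # their tau-step successors
    d2 = np.sum((past - x0) ** 2, axis=1)
    if eps is None:
        eps = d2.mean()                   # simple bandwidth heuristic (our choice)
    w = np.exp(-d2 / eps)
    w /= w.sum()                          # normalized similarity-kernel weights
    return w @ future                     # weighted ensemble forecast

# Toy historical record: a 2D random walk.
history = np.cumsum(np.random.default_rng(1).normal(size=(200, 2)), axis=0)
forecast = analog_forecast(history, history[-1], tau=5)
```

Lorenz's original method corresponds to putting all weight on the single nearest analog; the kernel weights smooth over an ensemble instead.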
NASA Astrophysics Data System (ADS)
Martínez-Suástegui, Lorenzo; Barreto, Enrique; Treviño, César
2015-11-01
Transient laminar opposing mixed convection is studied experimentally in an open vertical rectangular channel with two discrete protruded heat sources subjected to uniform heat flux, simulating electronic components. Experiments are performed for a Reynolds number of Re = 700, a Prandtl number of Pr = 7, inclination angles with respect to the horizontal of γ = 0°, 45° and 90°, and different values of buoyancy strength or modified Richardson number, Ri* = Gr*/Re². From the experimental measurements, the space-averaged surface temperatures, overall Nusselt number of each simulated electronic chip, phase-space plots of the self-oscillatory system, characteristic times of temperature oscillations and spectral distribution of the fluctuating energy have been obtained. Results show that when a threshold in the buoyancy parameter is reached, strong three-dimensional secondary flow oscillations develop in the axial and spanwise directions. This research was supported by the Consejo Nacional de Ciencia y Tecnología (CONACYT), Grant number 167474, and by the Secretaría de Investigación y Posgrado del IPN, Grant number SIP 20141309.
Brodsky, N.S.; Riggins, M.; Connolly, J.; Ricci, P.
1997-09-01
Specimens were tested from four thermal-mechanical units, namely Tiva Canyon (TCw), Paintbrush Tuff (PTn), and two Topopah Spring units (TSw1 and TSw2), and from two lithologies, i.e., welded devitrified (TCw, TSw1, TSw2) and nonwelded vitric tuff (PTn). Thermal conductivities in W·(m·K)⁻¹, averaged over all boreholes, ranged (depending upon temperature and saturation state) from 1.2 to 1.9 for TCw, from 0.4 to 0.9 for PTn, from 1.0 to 1.7 for TSw1, and from 1.5 to 2.3 for TSw2. Mean coefficients of thermal expansion were highly temperature dependent, and values averaged over all boreholes ranged (depending upon temperature and saturation state) from 6.6 × 10⁻⁶ to 49 × 10⁻⁶ °C⁻¹ for TCw, from the negative range to 16 × 10⁻⁶ °C⁻¹ for PTn, from 6.3 × 10⁻⁶ to 44 × 10⁻⁶ °C⁻¹ for TSw1, and from 6.7 × 10⁻⁶ to 37 × 10⁻⁶ °C⁻¹ for TSw2. Mean values of thermal capacitance in J·cm⁻³·K⁻¹ (averaged over all specimens) ranged from 1.6 to 2.1 for TSw1 and from 1.8 to 2.5 for TSw2. In general, the lithostratigraphic classifications of rock assigned by the USGS are consistent with the mineralogical data presented in this report.
Nonparametric entropy estimation using kernel densities.
Lake, Douglas E
2009-01-01
The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation. PMID:19897106
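The quadratic (Rényi order-2) entropy mentioned above has a convenient closed form under a Gaussian kernel density estimate: the integral of the squared density reduces to a double sum of Gaussians of variance 2h² over sample pairs. A minimal sketch, where the function name, bandwidth, and test data are our assumptions:

```python
import numpy as np

def quadratic_entropy(x, h):
    """Quadratic Renyi entropy -log(integral of p_hat^2) for a 1D sample x,
    using the closed form (1/n^2) sum_ij N(x_i - x_j; 0, 2h^2)."""
    d = x[:, None] - x[None, :]
    # Gaussian pdf of variance 2h^2 evaluated at all pairwise differences.
    ic = np.exp(-d**2 / (4 * h**2)) / np.sqrt(4 * np.pi * h**2)
    return -np.log(ic.mean())  # -log of the "information potential"

x = np.random.default_rng(2).normal(size=500)
val = quadratic_entropy(x, h=0.3)  # near log(2*sqrt(pi)) ~ 1.27 for a standard normal
```

No numerical integration is needed, which is what makes the quadratic entropy (and the related Friedman-Tukey index) cheap to estimate.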
Tile-Compressed FITS Kernel for IRAF
NASA Astrophysics Data System (ADS)
Seaman, R.
2011-07-01
The Flexible Image Transport System (FITS) is a ubiquitously supported standard of the astronomical community. Similarly, the Image Reduction and Analysis Facility (IRAF), developed by the National Optical Astronomy Observatory, is a widely used astronomical data reduction package. IRAF supplies compatibility with FITS format data through numerous tools and interfaces. The most integrated of these is IRAF's FITS image kernel that provides access to FITS from any IRAF task that uses the basic IMIO interface. The original FITS kernel is a complex interface of purpose-built procedures that presents growing maintenance issues and lacks recent FITS innovations. A new FITS kernel is being developed at NOAO that is layered on the CFITSIO library from the NASA Goddard Space Flight Center. The simplified interface will minimize maintenance headaches as well as add important new features such as support for the FITS tile-compressed (fpack) format.
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
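For orientation, a naive O(n²) sampler for a random kernel graph draws each edge {u, v} independently with probability min(1, κ(u/n, v/n)/n). The kernel below and the quadratic loop are illustrative assumptions only; the cited algorithm avoids the pairwise loop to reach the ~O(n(log n)²) bound:

```python
import itertools
import random

def random_kernel_graph(n, kappa, seed=0):
    """Naive quadratic-time sampler of a sparse inhomogeneous random graph
    with edge probability min(1, kappa(u/n, v/n)/n) for each pair u < v."""
    rng = random.Random(seed)
    edges = []
    for u, v in itertools.combinations(range(1, n + 1), 2):
        if rng.random() < min(1.0, kappa(u / n, v / n) / n):
            edges.append((u, v))
    return edges

# Illustrative kernel favoring low-index ("heavy") vertices.
edges = random_kernel_graph(200, lambda x, y: 2.0 / (x * y) ** 0.25)
```

The expected edge count grows linearly in n for integrable kernels, which is the sparsity property the model class is designed around.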
Fast Generation of Sparse Random Kernel Graphs
2015-01-01
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most 𝒪(n(log n)²). As a practical example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity. PMID:26356296
Experimental study of turbulent flame kernel propagation
Mansour, Mohy; Peters, Norbert; Schrader, Lars-Uve
2008-07-15
Flame kernels in spark-ignited combustion systems dominate the flame propagation and combustion stability and performance. They are likely controlled by the spark energy, flow field and mixing field. The aim of the present work is to experimentally investigate the structure and propagation of the flame kernel in turbulent premixed methane flow using advanced laser-based techniques. The spark is generated using a pulsed Nd:YAG laser with 20 mJ pulse energy in order to avoid the effect of the electrodes on the flame kernel structure and the shot-to-shot variation of spark energy. Four flames have been investigated at equivalence ratios, φ_j, of 0.8 and 1.0 and jet velocities, U_j, of 6 and 12 m/s. A combined two-dimensional Rayleigh and LIPF-OH technique has been applied. The flame kernel structure has been collected at several time intervals from the laser ignition between 10 µs and 2 ms. The data show that the flame kernel structure starts with a spherical shape and changes gradually to peanut-like, then to mushroom-like, and finally is disturbed by the turbulence. The mushroom-like structure lasts longer in the stoichiometric and slower-jet-velocity flames. The growth rate of the average flame kernel radius is divided into two linear relations; the first one, during the first 100 µs, is almost three times faster than that at the later stage between 100 and 2000 µs. The flame propagation is slightly faster in leaner flames. The trends of the flame propagation, flame radius, flame cross-sectional area and mean flame temperature are related to the jet velocity and equivalence ratio. The relations obtained in the present work allow the prediction of any of these parameters at different conditions. (author)
Full Waveform Inversion Using Waveform Sensitivity Kernels
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
2013-04-01
We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory, which for unit material perturbations are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995) in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned); some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be done first, without inverting any data. In the context of active seismic experiments, this property may be used to investigate optimal acquisition geometry and expectable resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties compared with strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver
Volatile compound formation during argan kernel roasting.
El Monfalouti, Hanae; Charrouf, Zoubida; Giordano, Manuela; Guillaume, Dominique; Kartah, Badreddine; Harhar, Hicham; Gharby, Saïd; Denhez, Clément; Zeppa, Giuseppe
2013-01-01
Virgin edible argan oil is prepared by cold-pressing argan kernels previously roasted at 110 degrees C for up to 25 minutes. The concentration of 40 volatile compounds in virgin edible argan oil was determined as a function of argan kernel roasting time. Most of the volatile compounds begin to be formed after 15 to 25 minutes of roasting. This suggests that a strictly controlled roasting time should allow the modulation of argan oil taste and thus satisfy different types of consumers. This could be of major importance considering the present booming use of edible argan oil. PMID:23472454
Modified wavelet kernel methods for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Hsu, Pai-Hui; Huang, Xiu-Man
2015-10-01
Hyperspectral images have the capability of acquiring images of the earth's surface with several hundred spectral bands. Such abundant spectral data should increase the ability to classify land use/cover types. However, due to the high dimensionality of hyperspectral data, traditional classification methods are not suitable for hyperspectral data classification. The common way to solve this problem is dimensionality reduction, using feature extraction before classification. Kernel methods such as the support vector machine (SVM) and multiple kernel learning (MKL) have been successfully applied to hyperspectral image classification. In kernel method applications, the selection of the kernel function plays an important role. The wavelet kernel, built from multidimensional wavelet functions, can find an optimal approximation of the data in feature space for classification. The SVM with wavelet kernels (called WSVM) has also been applied to hyperspectral data and improves classification accuracy. In this study, a wavelet kernel method combining the multiple kernel learning algorithm with wavelet kernels is proposed for hyperspectral image classification. After appropriate selection of a linear combination of kernel functions, the hyperspectral data are transformed to the wavelet feature space, which should have an optimal data distribution for kernel learning and classification. Finally, the proposed methods were compared with existing methods on a real hyperspectral data set. According to the results, the proposed wavelet kernel methods perform well and would be an appropriate tool for hyperspectral image classification.
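A translation-invariant wavelet kernel of the kind used in wavelet-SVMs can be sketched from the Morlet-type mother wavelet h(t) = cos(1.75t)·exp(-t²/2), a common choice in that literature; the function name and the dilation parameter a are our assumptions, not necessarily this paper's settings:

```python
import numpy as np

def wavelet_kernel(x, y, a=1.0):
    """Product-form wavelet kernel K(x, y) = prod_i h((x_i - y_i)/a) with the
    Morlet-type mother wavelet h(t) = cos(1.75 t) exp(-t^2 / 2)."""
    t = (x - y) / a
    return np.prod(np.cos(1.75 * t) * np.exp(-t**2 / 2), axis=-1)

x = np.array([0.2, 0.5, 0.1])
print(wavelet_kernel(x, x))  # 1.0 at zero lag
```

In an MKL setting, several such kernels with different dilations a would be combined linearly, with the combination weights learned alongside the classifier.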
Kernel abortion in maize. II. Distribution of ¹⁴C among kernel carbohydrates
Hanft, J.M.; Jones, R.J.
1986-06-01
This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [¹⁴C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [¹⁴C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose.
Accuracy of Reduced and Extended Thin-Wire Kernels
Burke, G J
2008-11-24
Some results are presented comparing the accuracy of the reduced thin-wire kernel and an extended kernel with exact integration of the 1/R term of the Green's function; results are shown for simple wire structures.
Asymptotic expansions of Mellin convolution integrals: An oscillatory case
NASA Astrophysics Data System (ADS)
López, José L.; Pagola, Pedro
2010-01-01
In a recent paper [J.L. López, Asymptotic expansions of Mellin convolution integrals, SIAM Rev. 50 (2) (2008) 275-293], we have presented a new, very general and simple method for deriving asymptotic expansions of Mellin convolution integrals for small x. It contains Watson's Lemma and other classical methods, Mellin transform techniques, McClure and Wong's distributional approach and the method of analytic continuation used in this approach as particular cases. In this paper we generalize that idea to the case of oscillatory kernels, that is, to Mellin convolution integrals carrying an oscillatory factor with frequency parameter c ∈ ℝ, and we give a method as simple as the one given in the above cited reference for the case c = 0. We show that McClure and Wong's distributional approach for oscillatory kernels and the summability method for oscillatory integrals are particular cases of this method. Some examples are given as illustration.
Fabrication of Uranium Oxycarbide Kernels for HTR Fuel
Charles Barnes; Clay Richardson; Scott Nagley; John Hunn; Eric Shaber
2010-10-01
Babcock and Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated-particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) has increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.
Kernel Partial Least Squares for Nonlinear Regression and Discrimination
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate the usefulness of the method.
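As an illustrative sketch (not the authors' implementation), the kernelized NIPALS iteration behind kernel PLS fits in a few lines of NumPy; score vectors extracted from a centered Gram matrix come out mutually orthonormal:

```python
import numpy as np

def kernel_pls(K, Y, n_components, tol=1e-10, max_iter=500):
    """Extract kernel PLS score vectors by the kernelized NIPALS iteration.

    K : (n, n) centered Gram matrix; Y : (n, q) centered responses.
    Returns T, an (n, n_components) matrix of mutually orthonormal scores.
    """
    K = K.copy()
    Y = np.atleast_2d(np.asarray(Y, float))
    if Y.shape[0] == 1:
        Y = Y.T
    n = K.shape[0]
    T = np.zeros((n, n_components))
    for a in range(n_components):
        u = Y[:, 0].copy()
        for _ in range(max_iter):
            t = K @ u
            t /= np.linalg.norm(t)
            c = Y.T @ t                      # response loadings
            u_new = Y @ c
            u_new /= np.linalg.norm(u_new)
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        T[:, a] = t
        P = np.eye(n) - np.outer(t, t)       # deflate extracted direction
        K = P @ K @ P
        Y = Y - np.outer(t, t @ Y)
    return T
```

Because the scores are orthonormal, fitted responses then follow simply as `T @ (T.T @ Y)`.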
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm introduced to estimate value functions in reinforcement learning. The algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear function approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatio-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement-learning brain-machine interfaces. PMID:25866504
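A minimal sketch of the underlying idea (TD(0) with a Gaussian kernel expansion, i.e. the λ = 0 special case on scalar states, not the authors' full KTD(λ)): each visited state is appended as a kernel center weighted by the step size times the TD error.

```python
import numpy as np

class KernelTD0:
    """Toy kernel TD(0) value-function learner on scalar states.

    V(x) = sum_i alpha_i * k(c_i, x) with a Gaussian kernel; each update
    appends the visited state as a new center with weight eta * TD-error.
    """
    def __init__(self, eta=0.3, gamma=0.9, sigma=0.5):
        self.eta, self.gamma, self.sigma = eta, gamma, sigma
        self.centers, self.alphas = [], []

    def value(self, x):
        if not self.centers:
            return 0.0
        c = np.asarray(self.centers)
        a = np.asarray(self.alphas)
        return float(a @ np.exp(-(c - x) ** 2 / (2 * self.sigma ** 2)))

    def update(self, x, reward, x_next, terminal=False):
        target = reward + (0.0 if terminal else self.gamma * self.value(x_next))
        delta = target - self.value(x)       # TD error
        self.centers.append(x)
        self.alphas.append(self.eta * delta)
        return delta
```

On a short chain with a single terminal reward, the learned values decay from the goal back toward the start state, roughly as powers of gamma.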
Kernel method and linear recurrence system
NASA Astrophysics Data System (ADS)
Hou, Qing-Hu; Mansour, Toufik
2008-06-01
Based on the kernel method, we present systematic methods to solve equation systems on generating functions of two variables. Using these methods, we get the generating functions for the number of permutations which avoid 1234 and 12k(k-1)...3 and permutations which avoid 1243 and 12...k.
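The kernel method itself manipulates generating functions symbolically; as a concrete cross-check on the objects being counted (helper names below are my own), a brute-force enumeration recovers the classical counts, e.g. 123-avoiding permutations follow the Catalan numbers 1, 2, 5, 14, 42, ...

```python
from itertools import combinations, permutations

def standardize(seq):
    """Replace each entry by its rank, e.g. (5, 2, 9) -> (2, 1, 3)."""
    order = sorted(seq)
    return tuple(order.index(v) + 1 for v in seq)

def avoids(perm, pattern):
    """True if perm contains no subsequence order-isomorphic to pattern."""
    k = len(pattern)
    return all(standardize([perm[i] for i in idx]) != tuple(pattern)
               for idx in combinations(range(len(perm)), k))

def count_avoiders(n, *patterns):
    """Number of permutations of 1..n avoiding all of the given patterns."""
    return sum(all(avoids(p, pat) for pat in patterns)
               for p in permutations(range(1, n + 1)))
```

For instance, `count_avoiders(n, (1, 2, 3))` gives the Catalan numbers, while avoiding both (1, 2, 3) and (1, 3, 2) simultaneously gives 2^(n-1) (the Simion-Schmidt result).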
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
INTACT OR UNIT-KERNEL SWEET CORN
This report evaluates process and product modifications in canned and frozen sweet corn manufacture with the objective of reducing the total effluent produced in processing. In particular it evaluates the proposed replacement of process steps that yield cut or whole kernel corn w...
Arbitrary-resolution global sensitivity kernels
NASA Astrophysics Data System (ADS)
Nissen-Meyer, T.; Fournier, A.; Dahlen, F.
2007-12-01
Extracting observables out of any part of a seismogram (e.g. including diffracted phases such as Pdiff) necessitates knowledge of the 3-D time-space wavefields for the Green functions that form the backbone of Fréchet sensitivity kernels. Although recognized for some time, this idea is still computationally intractable in 3-D, facing major simulation and storage issues when high-frequency wavefields are considered at the global scale. We recently developed a new "collapsed-dimension" spectral-element method that solves the 3-D system of elastodynamic equations in a 2-D space, based on exploiting symmetry considerations of the seismic-wave radiation patterns. We will present the technical background on the computation of waveform kernels, various examples of time- and frequency-dependent sensitivity kernels, and subsequently extracted time-window kernels (e.g. banana-doughnuts). Given the computationally lightweight 2-D nature, we will explore some crucial parameters such as excitation type, source time functions, frequency, azimuth, discontinuity locations, and phase type, i.e. an a priori view into how, when, and where seismograms carry 3-D Earth signature. A once-and-for-all database of 2-D waveforms for various source depths shall then serve as a complete set of global time-space sensitivity for a given spherically symmetric background model, thereby allowing for tomographic inversions with arbitrary frequencies, observables, and phases.
Application of the matrix exponential kernel
NASA Technical Reports Server (NTRS)
Rohach, A. F.
1972-01-01
A point matrix kernel for radiation transport, developed by the transmission matrix method, has been used to develop buildup factors and energy spectra through slab layers of different materials for a point isotropic source. Combinations of lead-water slabs were chosen for examples because of the extreme differences in shielding properties of these two materials.
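The point-kernel idea the abstract builds on can be sketched as follows; the Taylor-form buildup coefficients below are illustrative placeholders only, not tabulated values for any real material or energy:

```python
import math

def uncollided_flux(S, mu, r):
    """Uncollided flux from a point isotropic source of strength S:
    phi_0 = S * exp(-mu * r) / (4 * pi * r^2)."""
    return S * math.exp(-mu * r) / (4.0 * math.pi * r ** 2)

def taylor_buildup(mu_r, A=24.6, a1=-0.0684, a2=0.0310):
    """Taylor-form buildup factor B(x) = A e^{-a1 x} + (1 - A) e^{-a2 x},
    with x = mu * r in mean free paths. Placeholder coefficients for
    illustration; real shielding work uses material- and energy-dependent
    tabulations. Note B(0) = 1 by construction."""
    return A * math.exp(-a1 * mu_r) + (1.0 - A) * math.exp(-a2 * mu_r)

def total_flux(S, mu, r):
    """Buildup-corrected flux: scattered photons only add to the uncollided dose."""
    return taylor_buildup(mu * r) * uncollided_flux(S, mu, r)
```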
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
Applying Single Kernel Sorting Technology to Developing Scab Resistant Lines
Technology Transfer Automated Retrieval System (TEKTRAN)
We are using automated single-kernel near-infrared (SKNIR) spectroscopy instrumentation to sort fusarium head blight (FHB) infected kernels from healthy kernels, and to sort segregating populations by hardness to enhance the development of scab resistant hard and soft wheat varieties. We sorted 3 r...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing... 21 Food and Drugs 3 2012-04-01 2012-04-01 false Tamarind seed kernel powder. 176.350 Section...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Tamarind seed kernel powder. 176.350 Section 176...) INDIRECT FOOD ADDITIVES: PAPER AND PAPERBOARD COMPONENTS Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing... 21 Food and Drugs 3 2013-04-01 2013-04-01 false Tamarind seed kernel powder. 176.350 Section...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 3 2011-04-01 2011-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...
Thermomechanical property of rice kernels studied by DMA
Technology Transfer Automated Retrieval System (TEKTRAN)
The thermomechanical property of the rice kernels was investigated using a dynamic mechanical analyzer (DMA). The length change of rice kernel with a loaded constant force along the major axis direction was detected during temperature scanning. The thermomechanical transition occurred in rice kernel...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
Rock expansion caused by ultrasound
NASA Astrophysics Data System (ADS)
Hedberg, C.; Gray, A.
2013-12-01
It has long been reported that materials' elastic moduli decrease when exposed to influences such as mechanical impacts, ultrasound, magnetic fields, electricity, and even humidity. Imperfect atomic structures such as rocks, concrete, or damaged metals exhibit a larger effect. This softening has most often been recorded by wave resonance measurements. The motion towards equilibrium is slow, often taking hours or days, which is why the effect is called Slow Dynamics [1]. The question has been raised whether a material expansion also occurs: 'The most fundamental parameter to consider is the volume expansion predicted to occur when positive hole charge carriers become activated, causing a decrease of the electron density in the O2- sublattice of the rock-forming minerals. This decrease of electron density should affect essentially all physical parameters, including the volume.' [2]. A new measurement configuration has detected the expansion of a rock subjected to ultrasound. A PZT was used as a pressure sensor while the combined thickness of the rock sample and the PZT sensor was held fixed. The expansion increased the stress in both the rock and the PZT, which produced an output voltage from the PZT. Knowing its material properties then made it possible to calculate the rock expansion. The equivalent strain caused by the ultrasound was approximately 3 × 10^-5. The temperature was monitored and accounted for during the tests; for the maximum expansion the temperature increase was 0.7 °C, which means the expansion is at least to some degree caused by heating of the material by the ultrasound. The fraction of bonds activated by ultrasound was estimated to be around 10^-5. References: [1] Guyer, R.A., Johnson, P.A.: Nonlinear Mesoscopic Elasticity: The Complex Behaviour of Rocks, Soils, Concrete. Wiley-VCH 2009. [2] M.M. Freund, F.F. Freund, Manipulating the Toughness of Rocks through Electric Potentials, Final Report CIF 2011 Award NNX11AJ84A, NASA Ames 2012.
Heat kernel for Newton-Cartan trace anomalies
NASA Astrophysics Data System (ADS)
Auzzi, Roberto; Nardelli, Giuseppe
2016-07-01
We compute the leading part of the trace anomaly for a free non-relativistic scalar in 2 + 1 dimensions coupled to a background Newton-Cartan metric. The anomaly is proportional to 1 /m, where m is the mass of the scalar. We comment on the implications of a conjectured a-theorem for non-relativistic theories with boost invariance.
Kernel weights optimization for error diffusion halftoning method
NASA Astrophysics Data System (ADS)
Fedoseev, Victor
2015-02-01
This paper describes a study to find the best error diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of values. WSNR was used as an objective measure of quality. The problem of multidimensional optimization was solved numerically using several well-known algorithms: Nelder-Mead, BFGS, and others. The study found a kernel that provides a quality gain of about 5% in comparison with the widely used kernel introduced by Floyd and Steinberg. Other kernels obtained allow the computational complexity of the halftoning process to be significantly reduced without loss of quality.
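For reference, the Floyd-Steinberg baseline mentioned above diffuses the quantization error of each pixel to four unprocessed neighbors with weights 7/16, 3/16, 5/16, 1/16; a minimal implementation:

```python
import numpy as np

def floyd_steinberg(img):
    """Binary halftone of a grayscale image in [0, 1] via error diffusion
    with the classic Floyd-Steinberg weights (7, 3, 5, 1)/16."""
    f = np.asarray(img, float).copy()
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            old = f[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16          # right
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16  # below-left
                f[y + 1, x] += err * 5 / 16          # below
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16  # below-right
    return out
```

On a constant mid-gray input, the binary output preserves the mean tone up to small boundary losses, which is the tone-conservation property error diffusion is prized for.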
Chare kernel; A runtime support system for parallel computations
Shu, W.; Kale, L.V.
1991-03-01
This paper presents the chare kernel system, which supports parallel computations with irregular structure. The chare kernel is a collection of primitive functions that manage chares, manipulate messages, invoke atomic computations, and coordinate concurrent activities. Programs written in the chare kernel language can be executed on different parallel machines without change. Users writing such programs concern themselves with the creation of parallel actions but not with assigning them to specific processors. The authors describe the design and implementation of the chare kernel. Performance of chare kernel programs on two hypercube machines, the Intel iPSC/2 and the NCUBE, is also given.
Difference image analysis: automatic kernel design using information criteria
NASA Astrophysics Data System (ADS)
Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.
2016-03-01
We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components: a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidates that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods, including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.
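A toy version of the delta-basis-function fit described above (not the authors' full pipeline): each kernel pixel becomes one column of a linear system built from shifted copies of the reference image, solved by least squares.

```python
import numpy as np

def correlate(ref, K):
    """Forward model: apply kernel K to ref (interior pixels only)."""
    half = K.shape[0] // 2
    h, w = ref.shape
    out = np.zeros_like(ref)
    for i, dy in enumerate(range(-half, half + 1)):
        for j, dx in enumerate(range(-half, half + 1)):
            out[half:h-half, half:w-half] += (
                K[i, j] * ref[half+dy:h-half+dy, half+dx:w-half+dx])
    return out

def solve_delta_kernel(ref, target, half=1):
    """Least-squares fit of a spatially invariant (2*half+1)^2 kernel on
    delta basis functions so that ref convolved with K matches target."""
    h, w = ref.shape
    ys, xs = np.mgrid[half:h-half, half:w-half]
    cols = [ref[ys + dy, xs + dx].ravel()
            for dy in range(-half, half + 1)
            for dx in range(-half, half + 1)]
    A = np.stack(cols, axis=1)               # one column per kernel pixel
    b = target[ys, xs].ravel()
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs.reshape(2 * half + 1, 2 * half + 1)
```

In the noise-free case this recovers the generating kernel exactly; regularization and information-criterion model selection, as studied in the paper, matter once noise enters.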
Selection and properties of alternative forming fluids for TRISO fuel kernel production
Baker, M. P.; King, J. C.; Gorman, B. P.; Marshall, Doug W.
2013-01-01
Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.
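The settling-velocity piece of the column-height estimate can be sketched with the Stokes drag law; the fluid properties below are made-up illustrative numbers, not the measured values from the study:

```python
def stokes_settling_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Stokes-regime terminal velocity of a sphere of diameter d (m):
    v = g * d^2 * (rho_p - rho_f) / (18 * mu), densities in kg/m^3,
    dynamic viscosity mu in Pa*s. Valid only at small Reynolds number."""
    return g * d ** 2 * (rho_p - rho_f) / (18.0 * mu)

def column_height(d, rho_p, rho_f, mu, gel_time):
    """Minimum column height so a droplet spends gel_time seconds falling."""
    return stokes_settling_velocity(d, rho_p, rho_f, mu) * gel_time
```

Doubling the droplet diameter quadruples the settling velocity, which is why larger kernels require a taller (or more viscous) forming column for the same residence time.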
Selection and properties of alternative forming fluids for TRISO fuel kernel production
NASA Astrophysics Data System (ADS)
Baker, M. P.; King, J. C.; Gorman, B. P.; Marshall, D. W.
2013-01-01
Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.
A meshfree unification: reproducing kernel peridynamics
NASA Astrophysics Data System (ADS)
Bessa, M. A.; Foster, J. T.; Belytschko, T.; Liu, Wing Kam
2014-06-01
This paper is the first investigation establishing the link between the meshfree state-based peridynamics method and other meshfree methods, in particular with the moving least squares reproducing kernel particle method (RKPM). It is concluded that the discretization of state-based peridynamics leads directly to an approximation of the derivatives that can be obtained from RKPM. However, state-based peridynamics obtains the same result at a significantly lower computational cost which motivates its use in large-scale computations. In light of the findings of this study, an update to the method is proposed such that the limitations regarding application of boundary conditions and the use of non-uniform grids are corrected by using the reproducing kernel approximation.
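The reproducing kernel approximation at the heart of RKPM can be sketched in 1-D: a moment-matrix correction makes a simple window function reproduce constant and linear fields exactly (the window shape and support size below are arbitrary choices for illustration):

```python
import numpy as np

def rk_shape_functions(x, nodes, support=None):
    """1-D reproducing kernel shape functions with linear basis H = [1, xi].

    Psi_i(x) = H(0)^T M(x)^{-1} H(x_i - x) w(x_i - x), where M is the
    moment matrix; by construction sum Psi_i = 1 and sum Psi_i x_i = x.
    """
    nodes = np.asarray(nodes, float)
    if support is None:
        support = 2.5 * np.max(np.diff(np.sort(nodes)))
    xi = nodes - x
    w = np.maximum(1.0 - (np.abs(xi) / support) ** 2, 0.0) ** 2  # bell window
    H = np.vstack([np.ones_like(xi), xi])
    M = (H * w) @ H.T                              # 2x2 moment matrix
    b = np.linalg.solve(M, np.array([1.0, 0.0]))   # basis evaluated at 0
    return (b @ H) * w
```

This correction is what restores polynomial completeness near boundaries and on non-uniform grids, in the same spirit as the update the paper proposes for state-based peridynamics.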
Wilson Dslash Kernel From Lattice QCD Optimization
Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.; Vaidyanathan, Karthikeyan
2015-07-01
Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in theoretical nuclear and high energy physics. LQCD is traditionally one of the first applications ported to new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal for illustrating several optimization techniques. In this chapter we detail our work on optimizing the Wilson-Dslash kernel for the Intel Xeon Phi; however, as we show, the techniques give excellent performance on regular Xeon architectures as well.
Energy efficient perlite expansion process
Jenkins, K.L.
1982-08-31
A thermally efficient process for the expansion of perlite ore is described. The inlet port and burner of a perlite expansion chamber (preferably a vertical expander) are enclosed such that no ambient air can enter the chamber. Air and fuel are metered to the burner with the amount of air being controlled such that the fuel/air premix contains at least enough air to start and maintain minimum combustion, but not enough to provide stoichiometric combustion. At a point immediately above the burner, additional air is metered into an insulated enclosure surrounding the expansion chamber where it is preheated by the heat passing through the chamber walls. This preheated additional air is then circulated back to the burner where it provides the remainder of the air needed for combustion, normally full combustion. Flow of the burner fuel/air premix and the preheated additional air is controlled so as to maintain a long luminous flame throughout a substantial portion of the expansion chamber and also to form a moving laminar layer of air on the inner surface of the expansion chamber. Preferably the burner is a delayed mixing gas burner which materially aids in the generation of the long luminous flame. The long luminous flame and the laminar layer of air at the chamber wall eliminate hot spots in the expansion chamber, result in relatively low and uniform temperature gradients across the chamber, significantly reduce the amount of fuel consumed per unit of perlite expanded, increase the yield of expanded perlite and prevent the formation of a layer of perlite sinter on the walls of the chamber.
Searching and Indexing Genomic Databases via Kernelization
Gagie, Travis; Puglisi, Simon J.
2015-01-01
The rapid advance of DNA sequencing technologies has yielded databases of thousands of genomes. To search and index these databases effectively, it is important that we take advantage of the similarity between those genomes. Several authors have recently suggested searching or indexing only one reference genome and the parts of the other genomes where they differ. In this paper, we survey the 20-year history of this idea and discuss its relation to kernelization in parameterized complexity. PMID:25710001
Multiple kernel learning for dimensionality reduction.
Lin, Yen-Yu; Liu, Tyng-Luh; Fuh, Chiou-Shann
2011-06-01
In solving complex visual learning tasks, adopting multiple descriptors to more precisely characterize the data has been a feasible way for improving performance. The resulting data representations are typically high-dimensional and assume diverse forms. Hence, finding a way of transforming them into a unified space of lower dimension generally facilitates the underlying tasks such as object recognition or clustering. To this end, the proposed approach (termed MKL-DR) generalizes the framework of multiple kernel learning for dimensionality reduction, and distinguishes itself with the following three main contributions: first, our method provides the convenience of using diverse image descriptors to describe useful characteristics of various aspects about the underlying data. Second, it extends a broad set of existing dimensionality reduction techniques to consider multiple kernel learning, and consequently improves their effectiveness. Third, by focusing on the techniques pertaining to dimensionality reduction, the formulation introduces a new class of applications with the multiple kernel learning framework to address not only the supervised learning problems but also the unsupervised and semi-supervised ones. PMID:20921580
A Fast Reduced Kernel Extreme Learning Machine.
Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua
2016-04-01
In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to work on the Support Vector Machine (SVM) or Least Squares SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving reduced kernel-based SLFNs. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support-vector sufficiency. Experimental results on a wide variety of real-world small- and large-instance-size applications in the context of binary classification, multi-class problems, and regression are then reported, showing that RKELM performs at a level of generalization performance competitive with SVM/LS-SVM at only a fraction of the computational effort. PMID:26829605
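The core of the RKELM training step described above fits in a few lines of NumPy: pick a random subset as mapping samples, form the rectangular kernel matrix, and solve a ridge-regularized least squares (hyperparameters below are illustrative, not from the paper):

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian kernel matrix between row-sample sets A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def rkelm_fit(X, y, n_support=30, lam=1e-3, sigma=0.5, seed=0):
    """Randomly select support samples, then solve the regularized normal
    equations beta = (K^T K + lam I)^{-1} K^T y for the output weights."""
    rng = np.random.default_rng(seed)
    S = X[rng.choice(len(X), size=n_support, replace=False)]
    K = rbf_kernel(X, S, sigma)
    beta = np.linalg.solve(K.T @ K + lam * np.eye(n_support), K.T @ y)
    return S, beta

def rkelm_predict(X, S, beta, sigma=0.5):
    return rbf_kernel(X, S, sigma) @ beta
```

No iterative support-vector selection is involved, which is where the training speedup over SVM/LS-SVM comes from.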
A Kernel Classification Framework for Metric Learning.
Wang, Faqiang; Zuo, Wangmeng; Zhang, Lei; Meng, Deyu; Zhang, David
2015-09-01
Learning a distance metric from the given training samples plays a crucial role in many machine learning tasks, and various models and optimization algorithms have been proposed in the past decade. In this paper, we generalize several state-of-the-art metric learning methods, such as large margin nearest neighbor (LMNN) and information theoretic metric learning (ITML), into a kernel classification framework. First, doublets and triplets are constructed from the training samples, and a family of degree-2 polynomial kernel functions is proposed for pairs of doublets or triplets. Then, a kernel classification framework is established to generalize many popular metric learning methods such as LMNN and ITML. The proposed framework can also suggest new metric learning methods, which can be efficiently implemented, interestingly, using the standard support vector machine (SVM) solvers. Two novel metric learning methods, namely, doublet-SVM and triplet-SVM, are then developed under the proposed framework. Experimental results show that doublet-SVM and triplet-SVM achieve competitive classification accuracies with state-of-the-art metric learning methods but with significantly less training time. PMID:25347887
Semi-Supervised Kernel Mean Shift Clustering.
Anand, Saket; Mittal, Sushil; Tuzel, Oncel; Meer, Peter
2014-06-01
Mean shift clustering is a powerful nonparametric technique that does not require prior knowledge of the number of clusters and does not constrain the shape of the clusters. However, being completely unsupervised, its performance suffers when the original distance metric fails to capture the underlying cluster structure. Despite recent advances in semi-supervised clustering methods, there has been little effort towards incorporating supervision into mean shift. We propose a semi-supervised framework for kernel mean shift clustering (SKMS) that uses only pairwise constraints to guide the clustering procedure. The points are first mapped to a high-dimensional kernel space where the constraints are imposed by a linear transformation of the mapped points. This is achieved by modifying the initial kernel matrix by minimizing a log det divergence-based objective function. We show the advantages of SKMS by evaluating its performance on various synthetic and real datasets while comparing with state-of-the-art semi-supervised clustering algorithms. PMID:26353281
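For context, the unsupervised base procedure SKMS builds on is plain kernel mean shift: iterate each point toward the kernel-weighted mean of the data until it settles on a density mode. A minimal Gaussian-kernel sketch, without the paper's pairwise-constraint machinery:

```python
import numpy as np

def mean_shift(X, bandwidth=1.0, n_iter=50, tol=1e-5):
    """Move every point to the Gaussian-weighted mean of the data until it
    settles; points sharing a mode belong to the same cluster."""
    X = np.asarray(X, float)
    modes = X.copy()
    for _ in range(n_iter):
        d2 = ((modes[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * bandwidth ** 2))
        new = (W @ X) / W.sum(axis=1, keepdims=True)
        if np.abs(new - modes).max() < tol:
            modes = new
            break
        modes = new
    return modes
```

Points that converge to the same mode form one cluster; SKMS additionally transforms the kernel space so that the pairwise constraints reshape those basins of attraction.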
NASA Astrophysics Data System (ADS)
Pope, Benjamin; Tuthill, Peter; Hinkley, Sasha; Ireland, Michael J.; Greenbaum, Alexandra; Latyshev, Alexey; Monnier, John D.; Martinache, Frantz
2016-01-01
At present, the principal limitation on the resolution and contrast of astronomical imaging instruments comes from aberrations in the optical path, which may be imposed by the Earth's turbulent atmosphere or by variations in the alignment and shape of the telescope optics. These errors can be corrected physically, with active and adaptive optics, and in post-processing of the resulting image. A recently developed adaptive optics post-processing technique, called kernel-phase interferometry, uses linear combinations of phases that are self-calibrating with respect to small errors, with the goal of constructing observables that are robust against the residual optical aberrations in otherwise well-corrected imaging systems. Here, we present a direct comparison between kernel phase and the more established competing techniques, aperture masking interferometry, point spread function (PSF) fitting and bispectral analysis. We resolve the α Ophiuchi binary system near periastron, using the Palomar 200-Inch Telescope. This is the first case in which kernel phase has been used with a full aperture to resolve a system close to the diffraction limit with ground-based extreme adaptive optics observations. Excellent agreement in astrometric quantities is found between kernel phase and masking, and kernel phase significantly outperforms PSF fitting and bispectral analysis, demonstrating its viability as an alternative to conventional non-redundant masking under appropriate conditions.
Reaction Kernel Structure of a Slot Jet Diffusion Flame in Microgravity
NASA Technical Reports Server (NTRS)
Takahashi, F.; Katta, V. R.
2001-01-01
Diffusion flame stabilization in normal earth gravity (1 g) has long been a fundamental research subject in combustion. Local flame-flow phenomena, including heat and species transport and chemical reactions, around the flame base in the vicinity of condensed surfaces control flame stabilization and fire spreading processes. Gravity therefore plays an important role in this topic because buoyancy induces flow in the flame zone, increasing the convective (and diffusive) oxygen transport into the flame zone and, in turn, the reaction rates. Recent computations show that a peak reactivity (heat-release or oxygen-consumption rate) spot, or reaction kernel, is formed in the flame base by back-diffusion and reactions of radical species in the incoming oxygen-abundant flow at relatively low temperatures (about 1550 K). Quasi-linear correlations were found between the peak heat-release or oxygen-consumption rate and the velocity at the reaction kernel for cases including both jet and flat-plate diffusion flames in airflow. The reaction kernel provides a stationary ignition source to incoming reactants, sustains combustion, and thus stabilizes the trailing diffusion flame. In a quiescent microgravity environment, no buoyancy-induced flow exists and thus purely diffusive transport controls the reaction rates. Flame stabilization mechanisms in such a purely diffusion-controlled regime remain largely unstudied. It will therefore be a rigorous test of the reaction kernel correlation whether it can be extended toward zero-velocity conditions in the purely diffusion-controlled regime. The objectives of this study are to reveal the structure of the flame-stabilizing region of a two-dimensional (2D) laminar jet diffusion flame in microgravity and to develop a unified diffusion flame stabilization mechanism. This paper reports recent progress in the computations and experiments performed in microgravity.
Protein interaction sentence detection using multiple semantic kernels
2011-01-01
Background Detection of sentences that describe protein-protein interactions (PPIs) in biomedical publications is a challenging and unresolved pattern recognition problem. Many state-of-the-art approaches for this task employ kernel classification methods, in particular support vector machines (SVMs). In this work we propose a novel data integration approach that utilises semantic kernels and a kernel classification method that is a probabilistic analogue to SVMs. Semantic kernels are created from statistical information gathered from large amounts of unlabelled text using lexical semantic models. Several semantic kernels are then fused into an overall composite classification space. In this initial study, we use simple features in order to examine whether the use of combinations of kernels constructed using word-based semantic models can improve PPI sentence detection. Results We show that combinations of semantic kernels lead to statistically significant improvements in recognition rates and receiver operating characteristic (ROC) scores over the plain Gaussian kernel, when applied to a well-known labelled collection of abstracts. The proposed kernel composition method also allows us to automatically infer the most discriminative kernels. Conclusions The results from this paper indicate that using semantic information from unlabelled text, and combinations of such information, can be valuable for classification of short texts such as PPI sentences. This study, however, is only a first step in evaluation of semantic kernels and probabilistic multiple kernel learning in the context of PPI detection. The method described herein is modular, and can be applied with a variety of feature types, kernels, and semantic models, in order to facilitate full extraction of interacting proteins. PMID:21569604
Alamaniotis, Miltiadis; Bargiotas, Dimitrios; Tsoukalas, Lefteri H
2016-01-01
Integration of energy systems with information technologies has facilitated the realization of smart energy systems that utilize information to optimize system operation. To that end, crucial in optimizing energy system operation is the accurate, ahead-of-time forecasting of load demand. In particular, load forecasting allows planning of system expansion and decision making for enhancing system safety and reliability. In this paper, the application of two types of kernel machines for medium term load forecasting (MTLF) is presented and their performance is recorded based on a set of historical electricity load demand data. The two kernel machine models, namely Gaussian process regression (GPR) and relevance vector regression (RVR), are utilized for making predictions over future load demand. Both models are equipped with a Gaussian kernel and are tested on daily predictions for a 30-day-ahead horizon using data from the New England area. Furthermore, their performance is compared to the ARMA(2,2) model with respect to mean absolute percentage error and squared correlation coefficient. Results demonstrate the superiority of RVR over the other forecasting models in performing MTLF. PMID:26835237
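As a small worked example of one of the comparison metrics, mean absolute percentage error (MAPE) can be sketched as:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

# A forecast that halves two load readings is off by 50% on average.
print(mape([100.0, 200.0], [50.0, 100.0]))  # 50.0
```

Because MAPE normalizes each error by the actual load, it allows forecasts to be compared across days with very different demand levels.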
Swenson, Paul F.; Moore, Paul B.
1979-01-01
An air heating and cooling system for a building includes an expansion-type refrigeration circuit and a heat engine. The refrigeration circuit includes two heat exchangers, one of which is communicated with a source of indoor air from the building and the other of which is communicated with a source of air from outside the building. The heat engine includes a heat rejection circuit having a source of rejected heat and a primary heat exchanger connected to the source of rejected heat. The heat rejection circuit also includes an evaporator in heat exchange relation with the primary heat exchanger, a heat engine indoor heat exchanger, and a heat engine outdoor heat exchanger. The indoor heat exchangers are disposed in series air flow relationship, with the heat engine indoor heat exchanger being disposed downstream from the refrigeration circuit indoor heat exchanger. The outdoor heat exchangers are also disposed in series air flow relationship, with the heat engine outdoor heat exchanger disposed downstream from the refrigeration circuit outdoor heat exchanger. A common fluid is used in both of the indoor heat exchangers and in both of the outdoor heat exchangers. In a first embodiment, the heat engine is a Rankine cycle engine. In a second embodiment, the heat engine is a non-Rankine cycle engine.
Swenson, Paul F.; Moore, Paul B.
1982-01-01
An air heating and cooling system for a building includes an expansion-type refrigeration circuit and a heat engine. The refrigeration circuit includes two heat exchangers, one of which is communicated with a source of indoor air from the building and the other of which is communicated with a source of air from outside the building. The heat engine includes a heat rejection circuit having a source of rejected heat and a primary heat exchanger connected to the source of rejected heat. The heat rejection circuit also includes an evaporator in heat exchange relation with the primary heat exchanger, a heat engine indoor heat exchanger, and a heat engine outdoor heat exchanger. The indoor heat exchangers are disposed in series air flow relationship, with the heat engine indoor heat exchanger being disposed downstream from the refrigeration circuit indoor heat exchanger. The outdoor heat exchangers are also disposed in series air flow relationship, with the heat engine outdoor heat exchanger disposed downstream from the refrigeration circuit outdoor heat exchanger. A common fluid is used in both of the indoor heat exchangers and in both of the outdoor heat exchangers. In a first embodiment, the heat engine is a Rankine cycle engine. In a second embodiment, the heat engine is a non-Rankine cycle engine.
Multiple kernel learning for sparse representation-based classification.
Shrivastava, Ashish; Patel, Vishal M; Chellappa, Rama
2014-07-01
In this paper, we propose a multiple kernel learning (MKL) algorithm that is based on the sparse representation-based classification (SRC) method. Taking advantage of the ability of nonlinear kernel SRC to efficiently represent nonlinearities in the high-dimensional feature space, we propose an MKL method based on the kernel alignment criteria. Our method uses a two-step training procedure to learn the kernel weights and sparse codes. At each iteration, the sparse codes are updated first while fixing the kernel mixing coefficients, and then the kernel mixing coefficients are updated while fixing the sparse codes. These two steps are repeated until a stopping criterion is met. The effectiveness of the proposed method is demonstrated using several publicly available image classification databases, and it is shown that this method can perform significantly better than many competitive image classification algorithms. PMID:24835226
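The kernel alignment criterion that such methods build on can be sketched as the normalized Frobenius inner product of two Gram matrices (a minimal illustration; the paper's training loop alternates this kind of criterion with sparse coding):

```python
import numpy as np

def alignment(K1, K2):
    """Kernel alignment: cosine of the angle between two Gram matrices."""
    return float(np.sum(K1 * K2) /
                 (np.linalg.norm(K1) * np.linalg.norm(K2)))

K = np.array([[1.0, 0.2], [0.2, 1.0]])
print(alignment(K, K))  # 1.0: a kernel is perfectly aligned with itself
```

Kernel mixing weights can then be chosen to maximize the alignment of the combined Gram matrix with an ideal (label-derived) kernel.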
Small convolution kernels for high-fidelity image restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1991-01-01
An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
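A one-dimensional toy version of the idea (illustrative only; the paper derives the optimal kernel from a full end-to-end system model): solve a least-squares problem for the small restoration kernel that minimizes mean-square error against the known scene.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(500)                         # "scene"
y = np.convolve(x, [0.25, 0.5, 0.25], mode="same")   # blurred observation

# Columns of A are circular shifts of y; solving A k ~= x in the
# least-squares sense gives the MSE-optimal 3-tap restoration kernel
# (circular shifts keep the toy simple).
A = np.stack([np.roll(y, s) for s in (-1, 0, 1)], axis=1)
k, *_ = np.linalg.lstsq(A, x, rcond=None)
restored = A @ k
print(np.mean((restored - x) ** 2) < np.mean((y - x) ** 2))  # True
```

The identity kernel [0, 1, 0] is always a feasible choice, so the optimized 3-tap kernel can never do worse than the blurred observation itself; the spatially constrained support is what keeps it cheap to apply by convolution.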
Monte Carlo Code System for Electron (Positron) Dose Kernel Calculations.
CHIBANI, OMAR
1999-05-12
Version 00 KERNEL performs dose kernel calculations for an electron (positron) isotropic point source in an infinite homogeneous medium. First, the auxiliary code PRELIM is used to prepare cross section data for the considered medium. Then the KERNEL code simulates the transport of electrons and bremsstrahlung photons through the medium until all particles reach their cutoff energies. The deposited energy is scored in concentric spherical shells at a radial distance ranging from zero to twice the source particle range.
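The scoring geometry can be sketched as follows (a toy histogram over concentric shells; the radii and energies below are synthetic, not from the code system):

```python
import numpy as np

rng = np.random.default_rng(3)
r = rng.random(10000) ** (1.0 / 3.0)      # toy radii of deposition events
edep = rng.random(10000)                  # toy energy deposited per event

shells = np.linspace(0.0, 1.0, 11)        # 10 concentric spherical shells
hist, _ = np.histogram(r, bins=shells, weights=edep)
# All deposited energy is accounted for across the shells.
print(bool(np.isclose(hist.sum(), edep.sum())))  # True
```

Dividing each bin by its shell volume would turn the histogram into the radial dose kernel itself.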
Scale-invariant Lipatov kernels from t-channel unitarity
Coriano, C.; White, A.R.
1994-11-14
The Lipatov equation can be regarded as a reggeon Bethe-Salpeter equation in which higher-order reggeon interactions give higher-order kernels. Infra-red singular contributions in a general kernel are produced by t-channel nonsense states and the allowed kinematic forms are determined by unitarity. Ward identity and infra-red finiteness gauge invariance constraints then determine the corresponding scale-invariant part of a general higher-order kernel.
Exact solution of a coagulation equation with a product kernel in the multicomponent case
NASA Astrophysics Data System (ADS)
Fernández-Díaz, Julio M.; Gómez-García, Germán J.
2010-03-01
In this paper, we obtain the general solution for the continuous Smoluchowski equation in the multicomponent case with a product kernel as a series expansion. The solution of the problem involves the Laplace transform in several dimensions. We obtain a nonlinear partial differential equation (PDE) of the advective kind generalizing the one previously given by other authors for the mono-component case. As in the mono-component case, gelation occurs at some point, the conditions for its occurrence being the same as in the mono-component case, though with a sum of derivatives substituted for a derivative in the Laplace-transform domain. We demonstrate that for a multicomponent particle size distribution (PSD) of multiplicative form, it is sufficient for one of the marginal PSDs to generate instantaneous gelation for instantaneous gelation to occur in the multicomponent PSD. The general solution is applied to several specific cases: a discrete case that recovers a previously known solution, and two continuous cases which can be used to check numerical methods designed to solve the Smoluchowski equation directly in more general settings. We have compared the solutions for the multicomponent PSD for constant, additive and product kernels, and we conjecture on the relation between the functional forms of the solutions in the mono-component and multicomponent cases. Finally, we have analysed the shape of the multicomponent PSD solutions for constant, additive and product kernels at very small component masses, obtaining a qualitatively different behaviour for the product kernel. This affects the mixing state of the sol phase as time passes.
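For orientation, the mono-component continuous Smoluchowski equation that the paper generalizes reads (standard textbook form; the multicomponent version replaces the scalar mass x by a composition vector):

```latex
\frac{\partial n(x,t)}{\partial t}
  = \frac{1}{2}\int_0^{x} K(y,\,x-y)\, n(y,t)\, n(x-y,t)\, \mathrm{d}y
  - n(x,t)\int_0^{\infty} K(x,y)\, n(y,t)\, \mathrm{d}y,
\qquad K(x,y) = xy \quad \text{(product kernel)}.
```

The first term counts coagulation events producing particles of mass x; the second counts events removing them.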
Robust kernel collaborative representation for face recognition
NASA Astrophysics Data System (ADS)
Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong
2015-05-01
One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to capture the varieties of high-dimensional face images caused by illumination, facial expression, and posture. When the test sample is significantly different from the training samples of the same subject, recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We believe the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. To further improve robustness, we implement the corresponding representation-based face recognition in kernel space. Notably, any kind of virtual training sample can be used in our method. We use noised face images to obtain virtual face samples; the noise can be approximately viewed as a reflection of the varieties of illumination, facial expression, and posture. Our work offers a simple and feasible way to obtain virtual face samples: imposing Gaussian noise (and other types of noise) on the original training samples to generate possible variations of those samples. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
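A minimal sketch of the virtual-sample step (the dimensions and noise level below are illustrative): Gaussian noise is imposed on the original training faces to generate plausible variations.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.random((5, 64))                            # 5 original faces, 64-dim features
virtual = X + 0.05 * rng.standard_normal(X.shape)  # noised virtual samples
augmented = np.vstack([X, virtual])                # enlarged training set
print(augmented.shape)  # (10, 64)
```

The augmented set then serves as the dictionary for the (kernel) collaborative representation.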
NASA Astrophysics Data System (ADS)
Lindemer, T. B.; Voit, S. L.; Silva, C. M.; Besmann, T. M.; Hunt, R. D.
2014-05-01
The US Department of Energy is developing a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with uranium nitride (UN) kernels with diameters near 825 μm. This effort explores factors involved in the conversion of uranium oxide-carbon microspheres into UN kernels. An analysis of previous studies with sufficient experimental details is provided. Thermodynamic calculations were made to predict pressures of carbon monoxide and other relevant gases for several reactions that can be involved in the conversion of uranium oxides and carbides into UN. Uranium oxide-carbon microspheres were heated in a microbalance with an attached mass spectrometer to determine details of calcining and carbothermic conversion in argon, nitrogen, and vacuum. A model was derived from experiments on the vacuum conversion to uranium oxide-carbide kernels. UN-containing kernels were fabricated using this vacuum conversion as part of the overall process. Carbonitride kernels of ∼89% of theoretical density were produced along with several observations concerning the different stages of the process.
Influence of wheat kernel physical properties on the pulverizing process.
Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula
2014-10-01
The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with similar protein content (11.2-12.8 % w.b.) obtained from an organic farming system were used for the analysis. The kernels (moisture content 10 % w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJkg(-1) to 159 kJkg(-1). Many significant correlations (p < 0.05) were found between the physical properties of wheat kernels and the pulverizing process; in particular, the kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel. PMID:25328207
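The final regression step can be sketched as ordinary least squares over the measured properties (the columns and coefficients below are synthetic placeholders, not the paper's fitted model):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((19, 3))                 # 19 cultivars x 3 physical properties
beta_true = np.array([0.4, -0.2, 0.1])  # synthetic coefficients
y = X @ beta_true + 0.3                 # synthetic "average particle size"

A = np.hstack([X, np.ones((19, 1))])    # design matrix with intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(coef[:3], beta_true))  # True: noise-free data is recovered
```

With real, noisy measurements the fit would of course not be exact, and the significance of each property would be judged from the regression diagnostics.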
Isolation of bacterial endophytes from germinated maize kernels.
Rijavec, Tomaz; Lapanje, Ales; Dermastia, Marina; Rupnik, Maja
2007-06-01
The germination of surface-sterilized maize kernels under aseptic conditions proved to be a suitable method for isolation of kernel-associated bacterial endophytes. Bacterial strains identified by partial 16S rRNA gene sequencing as Pantoea sp., Microbacterium sp., Frigoribacterium sp., Bacillus sp., Paenibacillus sp., and Sphingomonas sp. were isolated from kernels of 4 different maize cultivars. Genus Pantoea was associated with a specific maize cultivar. The kernels of this cultivar were often overgrown with the fungus Lecanicillium aphanocladii; however, those exhibiting Pantoea growth were never colonized with it. Furthermore, the isolated bacterium strain inhibited fungal growth in vitro. PMID:17668041
A Kernel-based Account of Bibliometric Measures
NASA Astrophysics Data System (ADS)
Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji
The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on the three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of 'relativity', or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.
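One well-known member of such a family is the von Neumann graph kernel, sketched below (our illustration; small diffusion parameters emphasize relatedness, larger ones shift toward global importance):

```python
import numpy as np

def von_neumann_kernel(A, alpha=0.1):
    """Von Neumann kernel (I - alpha A)^-1 = sum_k (alpha A)^k,
    valid for alpha below the reciprocal of A's spectral radius."""
    return np.linalg.inv(np.eye(A.shape[0]) - alpha * A)

A = np.array([[0.0, 1.0], [1.0, 0.0]])  # two documents citing each other
K = von_neumann_kernel(A)
print(bool(np.allclose(K, K.T)))  # True: symmetric, as a kernel must be
```

The geometric series interpretation makes the parameter's role concrete: alpha weights walks of increasing length between documents.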
Optimized Derivative Kernels for Gamma Ray Spectroscopy
Vlachos, D. S.; Kosmas, O. T.; Simos, T. E.
2007-12-26
In gamma ray spectroscopy, the photon detectors measure the number of photons with energy lying in an interval called a channel. This accumulation of counts produces a measured function whose deviation from the ideal one may introduce high noise into the unfolded spectrum. To deal with this problem, the ideal accumulation function is interpolated with the use of specially designed derivative kernels. Simulation results are presented which show that this approach is very effective, even in spectra with low statistics.
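A rough sketch of the idea with the simplest possible derivative kernel (a central difference); the specially designed kernels of the paper play this role with better noise behaviour:

```python
import numpy as np

counts = np.array([0.0, 2.0, 6.0, 12.0, 20.0])  # cumulative counts per channel
kernel = np.array([0.5, 0.0, -0.5])             # central-difference derivative
spectrum = np.convolve(counts, kernel, mode="valid")
print(spectrum)  # [3. 5. 7.]
```

Differentiating the cumulative (accumulation) function recovers the per-channel spectrum, which is why the choice of derivative kernel directly controls the noise in the unfolded result.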
Verification of Chare-kernel programs
Bhansali, S.; Kale, L.V. )
1989-01-01
Experience with concurrent programming has shown that concurrent programs can conceal bugs even after extensive testing. Thus, there is a need for practical techniques which can establish the correctness of parallel programs. This paper proposes a method for proving the partial correctness of programs written in the Chare-kernel language, which is a language designed to support the parallel execution of computations with irregular structures. The proof is based on the lattice proof technique and is divided into two parts. The first part is concerned with the program behavior within a single chare instance, whereas the second part captures the inter-chare interaction.
Edgeworth expansions of stochastic trading time
NASA Astrophysics Data System (ADS)
Decamps, Marc; De Schepper, Ann
2010-08-01
Under most local and stochastic volatility models the underlying forward is assumed to be a positive function of a time-changed Brownian motion. It relates nicely the implied volatility smile to the so-called activity rate in the market. Following Young and DeWitt-Morette (1986) [8], we propose to apply the Duru-Kleinert process-cum-time transformation in path integral to formulate the transition density of the forward. The method leads to asymptotic expansions of the transition density around a Gaussian kernel corresponding to the average activity in the market conditional on the forward value. The approximation is numerically illustrated for pricing vanilla options under the CEV model and the popular normal SABR model. The asymptotics can also be used for Monte Carlo simulations or backward integration schemes.
Prediction of kernel density of corn using single-kernel near infrared spectroscopy
Technology Transfer Automated Retrieval System (TEKTRAN)
Corn hardness is an important property for dry- and wet-millers, food processors and corn breeders developing hybrids for specific markets. Of the several methods used to measure hardness, kernel density measurements are among the more repeatable ways to quantify it. Near infrared spec...
Linear and kernel methods for multi- and hypervariate change detection
NASA Astrophysics Data System (ADS)
Nielsen, Allan A.; Canty, Morton J.
2010-10-01
The iteratively re-weighted multivariate alteration detection (IR-MAD) algorithm may be used both for unsupervised change detection in multi- and hyperspectral remote sensing imagery as well as for automatic radiometric normalization of multi- or hypervariate multitemporal image sequences. Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA), kernel MAF and kernel MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In image analysis the Gram matrix is often prohibitively large (its size is the number of pixels in the image squared). In this case we may sub-sample the image and carry out the kernel eigenvalue analysis on a set of training data samples only. To obtain a transformed version of the entire image we then project all pixels, which we call the test data, mapped nonlinearly onto the primal eigenvectors. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization and kernel PCA/MAF/MNF transformations have been written.
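A compact kernel PCA sketch in the spirit described above (our minimal illustration; in practice the Gram matrix would be built on a training sub-sample and the remaining pixels projected afterwards):

```python
import numpy as np

def kernel_pca_scores(X, n_components=2, gamma=1.0):
    """Training-set scores from kernel PCA with an RBF (Gaussian) kernel."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)                      # Gram matrix via kernel trick
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                               # double-centering in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

X = np.random.default_rng(0).random((10, 3))     # 10 "pixels", 3 bands
scores = kernel_pca_scores(X)
print(scores.shape)  # (10, 2)
```

All nonlinearity enters through the kernel function alone; the mapping into feature space never has to be written down explicitly.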
Scientific Computing Kernels on the Cell Processor
Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine
2007-04-04
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
Stable Local Volatility Calibration Using Kernel Splines
NASA Astrophysics Data System (ADS)
Coleman, Thomas F.; Li, Yuying; Wang, Cheng
2010-09-01
We propose an optimization formulation using L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by the support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of the support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
Transcriptome analysis of Ginkgo biloba kernels
He, Bing; Gu, Yincong; Xu, Meng; Wang, Jianwen; Cao, Fuliang; Xu, Li-an
2015-01-01
Ginkgo biloba is a dioecious species native to China with medicinally and phylogenetically important characteristics; however, genomic resources for this species are limited. In this study, we performed the first transcriptome sequencing for Ginkgo kernels at five time points using Illumina paired-end sequencing. Approximately 25.08 Gb of clean reads were obtained, and 68,547 unigenes with an average length of 870 bp were generated by de novo assembly. Of these unigenes, 29,987 (43.74%) were annotated in publicly available plant protein databases. A total of 3,869 genes were identified as significantly differentially expressed, and enrichment analysis was conducted at different time points. Furthermore, metabolic pathway analysis revealed that 66 unigenes were responsible for terpenoid backbone biosynthesis, with up to 12 up-regulated unigenes involved in the biosynthesis of ginkgolide and bilobalide. Differential gene expression analysis together with real-time PCR experiments indicated that the synthesis of bilobalide may have interfered with the ginkgolide synthesis process in the kernel. These data can remarkably expand the existing transcriptome resources of Ginkgo, and provide a valuable platform to reveal more about the developmental and metabolic mechanisms of this species. PMID:26500663
Delimiting Areas of Endemism through Kernel Interpolation
Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.
2015-01-01
We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes may be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
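The centroid-and-radius step of GIE can be sketched as follows (toy coordinates for a single species; the kernel interpolation over many species' centroids is omitted):

```python
import numpy as np

occurrences = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])  # one species
centroid = occurrences.mean(axis=0)
# Area of influence: distance from the centroid to the farthest occurrence.
radius = float(np.max(np.linalg.norm(occurrences - centroid, axis=1)))
print(centroid, round(radius, 3))
```

Overlaying such circles for all species and smoothing their density with a kernel yields the fuzzy-edged areas of endemism, with no dependence on a grid.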
Aligning Biomolecular Networks Using Modular Graph Kernels
NASA Astrophysics Data System (ADS)
Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant
Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.
Technology Transfer Automated Retrieval System (TEKTRAN)
Maize kernel density impacts milling quality of the grain due to kernel hardness. Harder kernels are correlated with higher test weight and are more resistant to breakage during harvest and transport. Softer kernels, in addition to being susceptible to mechanical damage, are also prone to pathogen ...
Femtosecond dynamics of cluster expansion
NASA Astrophysics Data System (ADS)
Gao, Xiaohui; Wang, Xiaoming; Shim, Bonggu; Arefiev, Alexey; Tushentsov, Mikhail; Breizman, Boris; Downer, Mike
2010-03-01
Noble gas clusters irradiated by an intense ultrafast laser expand quickly and become ordinary plasmas on a picosecond time scale. During the expansion, the clustered plasma demonstrates unique optical properties such as strong absorption and a positive contribution to the refractive index. Here we studied cluster expansion dynamics by fs-time-resolved refractive index and absorption measurements in cluster gas jets after ionization and heating by an intense pump pulse. The refractive index measured by frequency domain interferometry (FDI) shows a transient positive peak in the refractive index due to the clustered plasma. By separating it from the negative contribution of the monomer plasma, we are able to determine the cluster fraction. The absorption measured by a delayed probe shows the contribution from clusters of various sizes. The plasma resonances in the clusters explain the enhancement of the absorption in our isothermal expanding-cluster model. The cluster size distribution can be determined. A complete understanding of the femtosecond dynamics of cluster expansion is essential for the accurate interpretation and control of laser-cluster experiments such as phase-matched harmonic generation in cluster media.
NASA Astrophysics Data System (ADS)
Avramidi, Ivan G.; Buckman, Benjamin J.
2016-06-01
We introduce and study new invariants associated with Laplace type elliptic partial differential operators on manifolds. These invariants are constructed by using the off-diagonal heat kernel; they are not pure spectral invariants, that is, they depend not only on the eigenvalues but also on the corresponding eigenfunctions in a non-trivial way. We compute the first three low-order invariants explicitly.
Development of an efficient solution method for solving the radiative heat transfer equation
Xing Ouyang; Minardi, A.; Kassab, A.
1996-12-31
The radiative heat transfer equation in a participating medium is a Fredholm integral equation of the second kind whose kernels are formally singular at the position where the incident radiation is to be determined. A general method is developed to remove this singularity by capitalizing on the mutual interactions between the source function and the exponential integral appearing in the kernel. The method is based on an interpolation of the unknown source functions and the analytical integration of the resulting product in the integrand (source function expansion multiplied by the known exponential integral). As such, the method is considered semi-analytical. The method is superior to traditional solution techniques which employ quadratures approximating both the unknown and known functions appearing in the integrand, and which, consequently, have numerical difficulties in addressing singularities. The general approach is presented in detail for one-dimensional problems, and extensions to two-dimensional enclosures are also given. One- and two-dimensional numerical examples are considered, comparing the predictions to benchmark work. The method is shown to be computationally efficient and highly accurate. In comparison with traditional quadrature based techniques, the method readily handles the singularity of the exponential integral of first order at zero, converges rapidly under grid refinement, and provides superior prediction for radiative heat transfer. The technique is shown to be valid for a wide range of values of the scattering albedo and optical thickness. The proposed technique could be applied to a wide range of conservation problems which lend themselves to an integral formulation.
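The traditional quadrature approach that this semi-analytical method improves upon can be sketched as a Nystrom solver for a Fredholm equation of the second kind. The test kernel below is deliberately smooth, which is exactly the regime where plain quadrature works; the exponential-integral singularity at x = t in the radiative kernel is what defeats it. A hedged sketch of the baseline, not the paper's method:

```python
import numpy as np

def nystrom_solve(kernel, f, a, b, n, lam=1.0):
    """Nystrom (quadrature) solution of a Fredholm equation of the
    second kind:  phi(x) = f(x) + lam * int_a^b K(x, t) phi(t) dt.
    Uses the trapezoidal rule, i.e. it approximates both the known
    kernel and the unknown source function at the nodes."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    K = kernel(x[:, None], x[None, :])
    # Collocation at the nodes: (I - lam * K * W) phi = f
    A = np.eye(n) - lam * K * w[None, :]
    phi = np.linalg.solve(A, f(x))
    return x, phi

# Smooth separable test kernel K(x, t) = x*t on [0, 1] with f(x) = x;
# the exact solution is phi(x) = 1.2 x.
x, phi = nystrom_solve(lambda x, t: x * t, lambda x: x, 0.0, 1.0, 201, lam=0.5)
```

For the radiative problem the kernel involves the first-order exponential integral, which diverges at x = t, and the quadrature weights above cannot represent that behavior; the semi-analytical scheme instead integrates the (interpolated source) x (exponential integral) product in closed form.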
Introduction to Kernel Methods: Classification of Multivariate Data
NASA Astrophysics Data System (ADS)
Fauvel, M.
2016-05-01
In this chapter, kernel methods are presented for the classification of multivariate data. An introductory example illustrates the main idea of kernel methods. Emphasis is then placed on the Support Vector Machine. Structural risk minimization is presented, and linear and non-linear SVMs are described. Finally, a full example of SVM classification is given on simulated hyperspectral data.
Comparison of Kernel Equating and Item Response Theory Equating Methods
ERIC Educational Resources Information Center
Meng, Yu
2012-01-01
The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…
High speed sorting of Fusarium-damaged wheat kernels
Technology Transfer Automated Retrieval System (TEKTRAN)
Recent studies have found that resistance to Fusarium fungal infection can be inherited in wheat from one generation to another. However, there is not yet available a cost effective method to separate Fusarium-damaged wheat kernels from undamaged kernels so that wheat breeders can take advantage of...
Evidence-Based Kernels: Fundamental Units of Behavioral Influence
ERIC Educational Resources Information Center
Embry, Dennis D.; Biglan, Anthony
2008-01-01
This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of…
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
Integrating the Gradient of the Thin Wire Kernel
NASA Technical Reports Server (NTRS)
Champagne, Nathan J.; Wilton, Donald R.
2008-01-01
A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel differ from those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integral of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form
Polynomial Kernels for Hard Problems on Disk Graphs
NASA Astrophysics Data System (ADS)
Jansen, Bart
Kernelization is a powerful tool to obtain fixed-parameter tractable algorithms. Recent breakthroughs show that many graph problems admit small polynomial kernels when restricted to sparse graph classes such as planar graphs, bounded-genus graphs or H-minor-free graphs. We consider the intersection graphs of (unit) disks in the plane, which can be arbitrarily dense but do exhibit some geometric structure. We give the first kernelization results on these dense graph classes. Connected Vertex Cover has a kernel with 12k vertices on unit-disk graphs and with 3k^2 + 7k vertices on disk graphs with arbitrary radii. Red-Blue Dominating Set parameterized by the size of the smallest color class has a linear-vertex kernel on planar graphs, a quadratic-vertex kernel on unit-disk graphs and a quartic-vertex kernel on disk graphs. Finally we prove that H-Matching on unit-disk graphs has a linear-vertex kernel for every fixed graph H.
Optimal Bandwidth Selection in Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Häggström, Jenny; Wiberg, Marie
2014-01-01
The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…
Evidence-based Kernels: Fundamental Units of Behavioral Influence
Biglan, Anthony
2008-01-01
This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600
Sugar uptake into kernels of tunicate tassel-seed maize
Thomas, P.A.; Felker, F.C.; Crawford, C.G. )
1990-05-01
A maize (Zea mays L.) strain expressing both the tassel-seed (Ts-5) and tunicate (Tu) characters was developed which produces glume-covered kernels on the tassel, often borne on 7-10 mm pedicels. Vigorous plants produce up to 100 such kernels interspersed with additional sessile kernels. This floral unit provides a potentially valuable experimental system for studying sugar uptake into developing maize seeds. When detached kernels (with glumes and pedicel intact) are placed in incubation solution, fluid flows up the pedicel and into the glumes, entering the pedicel apoplast near the kernel base. The unusual anatomical features of this maize strain permit experimental access to the pedicel apoplast with much less possibility of kernel base tissue damage than with kernels excised from the cob. ({sup 14}C)Fructose incorporation into soluble and insoluble fractions of endosperm increased for 8 days. Endosperm uptake of sucrose, fructose, and D-glucose was significantly greater than that of L-glucose. Fructose uptake was significantly inhibited by CCCP, DNP, and PCMBS. These results suggest the presence of an active, non-diffusive component of sugar transport in maize kernels.
Direct Measurement of Wave Kernels in Time-Distance Helioseismology
NASA Technical Reports Server (NTRS)
Duvall, T. L., Jr.
2006-01-01
Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel shows features similar to the theoretical damping kernel but not to the source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.
A Robustness Testing Campaign for IMA-SP Partitioning Kernels
NASA Astrophysics Data System (ADS)
Grixti, Stephen; Lopez Trecastro, Jorge; Sammut, Nicholas; Zammit-Mangion, David
2015-09-01
With time and space partitioned architectures becoming increasingly appealing to the European space sector, the dependability of partitioning kernel technology is a key factor to its applicability in European Space Agency projects. This paper explores the potential of the data type fault model, which injects faults through the Application Program Interface, in partitioning kernel robustness testing. This fault injection methodology has been tailored to investigate its relevance in uncovering vulnerabilities within partitioning kernels and potentially contributing towards fault removal campaigns within this domain. This is demonstrated through a robustness testing case study of the XtratuM partitioning kernel for SPARC LEON3 processors. The robustness campaign exposed a number of vulnerabilities in XtratuM, exhibiting the potential benefits of using such a methodology for the robustness assessment of partitioning kernels.
OSKI: A Library of Automatically Tuned Sparse Matrix Kernels
Vuduc, R; Demmel, J W; Yelick, K A
2005-07-19
The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.
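A reference (untuned) version of the central kernel OSKI provides, sparse matrix-vector multiply with the matrix in compressed sparse row (CSR) format, can be sketched as follows. This shows only the semantics the library implements; OSKI's actual tuned kernels apply register blocking and other data-structure transformations that are not reproduced here.

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for A stored in CSR format: `data` holds the nonzero
    values, `indices` their column indices, and `indptr[i]:indptr[i+1]`
    delimits the nonzeros of row i."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# 3x3 example matrix: [[4, 0, 1], [0, 2, 0], [3, 0, 5]]
data = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr = np.array([0, 2, 3, 5])
y = csr_matvec(data, indices, indptr, np.array([1.0, 1.0, 1.0]))
```

The tuning problem OSKI hides is precisely that the best layout of `data`/`indices` (e.g., small dense blocks instead of single entries) depends on the matrix's sparsity pattern and the machine's cache hierarchy.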
PROPERTIES OF A SOLAR FLARE KERNEL OBSERVED BY HINODE AND SDO
Young, P. R.; Doschek, G. A.; Warren, H. P.; Hara, H.
2013-04-01
Flare kernels are compact features located in the solar chromosphere that are the sites of rapid heating and plasma upflow during the rise phase of flares. An example is presented from an M1.1 class flare in active region AR 11158 observed on 2011 February 16 07:44 UT, for which the location of the upflow region seen by the EUV Imaging Spectrometer (EIS) can be precisely aligned to high spatial resolution images obtained by the Atmospheric Imaging Assembly (AIA) and Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO). A string of bright flare kernels is found to be aligned with a ridge of strong magnetic field, and one kernel site is highlighted for which an upflow speed of ≈400 km s^-1 is measured in lines formed at 10-30 MK. The line-of-sight magnetic field strength at this location is ≈1000 G. Emission over a continuous range of temperatures down to the chromosphere is found, and the kernels have a similar morphology at all temperatures and are spatially coincident, with sizes at the resolution limit of the AIA instrument (≲400 km). For temperatures of 0.3-3.0 MK the EIS emission lines show multiple velocity components, with the dominant component becoming more blueshifted with temperature, from a redshift of 35 km s^-1 at 0.3 MK to a blueshift of 60 km s^-1 at 3.0 MK. Emission lines from 1.5-3.0 MK show a weak redshifted component at around 60-70 km s^-1, implying multi-directional flows at the kernel site. Significant non-thermal broadening corresponding to velocities of ≈120 km s^-1 is found at 10-30 MK, and the electron density in the kernel, measured at 2 MK, is 3.4 × 10^10 cm^-3. Finally, the Fe XXIV λ192.03/λ255.11 ratio suggests that the EIS calibration has changed since launch, with the long wavelength channel less sensitive than the short wavelength channel by around a factor of two.
Technology Transfer Automated Retrieval System (TEKTRAN)
The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. The choice of methodology was based on the principle that many biological materials exhibit fluorescenc...
Modified kernel-based nonlinear feature extraction.
Ma, J.; Perkins, S. J.; Theiler, J. P.; Ahalt, S.
2002-01-01
Feature Extraction (FE) techniques are widely used in many applications to pre-process data in order to reduce the complexity of subsequent processing. A group of kernel-based nonlinear FE (KFE) algorithms has attracted much attention due to their high performance. However, a serious limitation inherent in these algorithms -- that the maximal number of features they can extract is limited by the number of classes involved -- dramatically degrades their flexibility. Here we propose a modified version of these KFE algorithms (MKFE). This algorithm is developed from a special form of scatter matrix whose rank is not determined by the number of classes involved, and thus breaks the inherent limitation of those KFE algorithms. Experimental results suggest that the MKFE algorithm is especially useful when the training set is small.
Privacy preserving RBF kernel support vector machine.
Li, Haoran; Xiong, Li; Ohno-Machado, Lucila; Jiang, Xiaoqian
2014-01-01
Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed but they did not consider the characteristics of biomedical data and make full use of the available information. This often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid privacy-preserving differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs that are built from private data. Our method demonstrated very close performance metrics compared to nonprivate SVMs trained on the private data. PMID:25013805
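The RBF kernel the model relies on is standard; a minimal sketch of computing the RBF Gram matrix, which is what lets a kernel SVM handle nonlinearly separable data, might look like the following. This illustrates only the kernel computation, not the paper's differential-privacy mechanism or its hybrid public/private training; the data and gamma are invented.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF (Gaussian) kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2),
    computed via the expansion ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y."""
    sq = (np.sum(X ** 2, axis=1)[:, None]
          + np.sum(Y ** 2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    # Clamp tiny negative values caused by floating-point cancellation
    return np.exp(-gamma * np.maximum(sq, 0.0))

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel(X, X, gamma=0.5)
```

A kernel SVM would use such a Gram matrix (over training points, or over public plus private points in a hybrid scheme like the one described) in place of explicit feature vectors.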
Kernel density estimation using graphical processing unit
NASA Astrophysics Data System (ADS)
Sunarko, Su'ud, Zaki
2015-09-01
Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution, by assigning the calculations for equally-spaced node points to each scalar processor in the GPU. The numbers of particles, blocks and threads are varied to identify the favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
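The computation being parallelized, a bivariate Gaussian kernel density estimate evaluated at equally spaced node points, can be sketched on the CPU as follows; in the GPU version each node point's sum over particles would be assigned to a scalar processor. The grid, bandwidth, and sample below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def kde_grid(points, grid_x, grid_y, h):
    """Bivariate Gaussian KDE evaluated on a regular grid of node
    points. Each node accumulates a Gaussian contribution from every
    particle -- the embarrassingly parallel inner loop."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(gx)
    norm = 1.0 / (2.0 * np.pi * h ** 2 * len(points))
    for px, py in points:
        d2 = (gx - px) ** 2 + (gy - py) ** 2
        density += np.exp(-d2 / (2.0 * h ** 2))
    return norm * density

rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 2))       # bivariate normal sample
g = np.linspace(-4, 4, 81)            # 0.1-spaced node points
d = kde_grid(pts, g, g, h=0.4)
```

Because every node point's value is an independent sum, the work maps directly onto one GPU thread per node, which is why the speedups reported above are attainable.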
Labeled Graph Kernel for Behavior Analysis.
Zhao, Ruiqi; Martinez, Aleix M
2016-08-01
Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data. PMID:26415154
Ernst, Donald M.
1984-10-23
A specially constructed heat pipe for use in fluidized bed combustors. Two distinct coatings are spray coated onto a heat pipe casing constructed of low thermal expansion metal, each coating serving a different purpose. The first coating forms aluminum oxide to prevent hydrogen permeation into the heat pipe casing, and the second coating contains stabilized zirconium oxide to provide abrasion resistance while not substantially affecting the heat transfer characteristics of the system.
Ernst, D.M.
1984-10-23
A specially constructed heat pipe is described for use in fluidized bed combustors. Two distinct coatings are spray coated onto a heat pipe casing constructed of low thermal expansion metal, each coating serving a different purpose. The first coating forms aluminum oxide to prevent hydrogen permeation into the heat pipe casing, and the second coating contains stabilized zirconium oxide to provide abrasion resistance while not substantially affecting the heat transfer characteristics of the system.
Preliminary thermal expansion screening data for tuffs
Lappin, A.R.
1980-03-01
A major variable in evaluating the potential of silicic tuffs for use in geologic disposal of heat-producing nuclear wastes is thermal expansion. Results of ambient-pressure linear expansion measurements on a group of tuffs that vary greatly in porosity and mineralogy are presented here. Thermal expansion of devitrified welded tuffs is generally linear with increasing temperature and independent of both porosity and heating rate. Mineralogic factors affecting the behavior of these tuffs are limited to the presence or absence of cristobalite and altered biotite. The presence of cristobalite results in markedly nonlinear expansion above 200 °C. If biotite in biotite-bearing rocks alters even slightly to expandable clays, the behavior of these tuffs near the boiling point of water can be dominated by contraction of the expandable phase. Expansion of both high- and low-porosity tuffs containing hydrated silicic glass and/or expandable clays is complex. The behavior of these rocks appears to be completely dominated by dehydration of hydrous phases and, hence, should be critically dependent on fluid pressure. Valid extrapolation of the ambient-pressure results presented here to depths of interest for construction of a nuclear-waste repository will depend on a good understanding of the interaction of dehydration rates and fluid pressures, and of the effects of both micro- and macrofractures on the response of tuff masses.
Oil extraction from sheanut (Vitellaria paradoxa Gaertn C.F.) kernels assisted by microwaves.
Nde, Divine B; Boldor, Dorin; Astete, Carlos; Muley, Pranjali; Xu, Zhimin
2016-03-01
Shea butter is in high demand for cosmetics, pharmaceuticals, chocolates and biodiesel formulations. Microwave-assisted extraction (MAE) of butter from sheanut kernels was carried out using Doehlert's experimental design. The factors studied were microwave heating time, temperature and solvent/solute ratio, while the responses were the quantity of oil extracted and the acid number. Second-order models were established to describe the influence of the experimental parameters on the responses studied. Under optimum MAE conditions of heating time 23 min, temperature 75 °C and solvent/solute ratio 4:1, more than 88% of the oil, with a free fatty acid (FFA) value less than 2, was extracted, compared with the 10 h and solvent/solute ratio of 10:1 required for Soxhlet extraction. Scanning electron microscopy was used to elucidate the effect of microwave heating on the kernels' microstructure. Substantial reductions in extraction time and solvent volume, together with oil of suitable quality, are the main benefits of the MAE process. PMID:27570267
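A second-order (full quadratic) response-surface model of the kind fitted in such designed experiments can be sketched with ordinary least squares: the response is regressed on an intercept, the linear factor terms, and all squared and cross-product terms. The data below are synthetic and one-dimensional for clarity; the paper's fitted coefficients are not reproduced.

```python
import numpy as np

def fit_second_order(X, y):
    """Least-squares fit of y ~ b0 + sum_i b_i x_i + sum_{i<=j} b_ij x_i x_j,
    the second-order model form used in response-surface methodology."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Synthetic single-factor example: y = 1 + 2*x + 3*x^2 (noise-free)
X = np.linspace(-1, 1, 9).reshape(-1, 1)
y = 1 + 2 * X[:, 0] + 3 * X[:, 0] ** 2
beta = fit_second_order(X, y)
```

With three factors, as in the study, the same function produces ten coefficients (intercept, three linear, three squared, three interaction terms), and the fitted surface can then be searched for the optimum operating conditions.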
Weakly relativistic plasma expansion
Fermous, Rachid Djebli, Mourad
2015-04-15
Plasma expansion is an important physical process that takes place in laser interactions with solid targets. Within a self-similar model for the hydrodynamical multi-fluid equations, we investigated the expansion of both dense and under-dense plasmas. The weakly relativistic electrons are produced by ultra-intense laser pulses, while ions are supposed to be in a non-relativistic regime. Numerical investigations have shown that relativistic effects are important for under-dense plasma and are characterized by a finite ion front velocity. Dense plasma expansion is found to be governed mainly by quantum contributions in the fluid equations that originate from the degenerate pressure in addition to the nonlinear contributions from exchange and correlation potentials. The quantum degeneracy parameter profile provides clues to set the limit between under-dense and dense relativistic plasma expansions at a given density and temperature.
Nelson, E.A.; Christensen, E.J.; Mackey, H.E.; Sharitz, R.R.; Jensen, J.R.; Hodgson, M.E.
1984-02-01
Since 1954, cooling water discharges from K Reactor (average 370 cfs at 59 °C) to Pen Branch have altered vegetation and deposited sediment in the Savannah River Swamp, forming the Pen Branch delta. Currently, the delta covers over 300 acres and continues to expand at a rate of about 16 acres/yr. Examination of delta expansion can provide important information on environmental impacts to wetlands exposed to elevated temperature and flow conditions. To assess the current status and predict future expansion of the Pen Branch delta, historic aerial photographs were analyzed using both basic photo interpretation and computer techniques to provide the following information: (1) past and current expansion rates; (2) location and changes of impacted areas; (3) total acreage presently affected. Delta acreage changes were then compared to historic reactor discharge temperature and flow data to see if expansion rate variations could be related to reactor operations.
Gaussian kernel width optimization for sparse Bayesian learning.
Mohsenzadeh, Yalda; Sheikhzadeh, Hamid
2015-04-01
Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on the commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters. PMID:25794377
Classification of maize kernels using NIR hyperspectral imaging.
Williams, Paul J; Kucheryavskiy, Sergey
2016-10-15
NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual kernels and did not give acceptable results because of high misclassification. However by using a predefined threshold and classifying entire kernels based on the number of correctly predicted pixels, improved results were achieved (sensitivity and specificity of 0.75 and 0.97). Object-wise classification was performed using two methods for feature extraction - score histograms and mean spectra. The model based on score histograms performed better for hard kernel classification (sensitivity and specificity of 0.93 and 0.97), while that of mean spectra gave better results for medium kernels (sensitivity and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale. PMID:27173544
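The object-wise thresholding rule described above, assigning a whole kernel to a class when enough of its pixels are predicted as that class, can be sketched as follows (function names and the 0.5 threshold are illustrative assumptions, not the paper's exact values):

```python
import numpy as np

def classify_object(pixel_labels, target_class, threshold=0.5):
    """Assign the whole kernel (object) to `target_class` when at least
    `threshold` of its pixels were predicted as that class."""
    frac = np.mean(np.asarray(pixel_labels) == target_class)
    return target_class if frac >= threshold else "other"

# Noisy per-pixel predictions for one maize kernel: 70% "hard" pixels.
pixels = ["hard"] * 70 + ["soft"] * 20 + ["medium"] * 10
label = classify_object(pixels, "hard", threshold=0.5)
```

The point of the rule is that pixel-level misclassification averages out at the object level, which is why the paper's object-wise sensitivities are much higher than its pixel-wise ones.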
Effects of sample size on KERNEL home range estimates
Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.
1999-01-01
Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
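A fixed-kernel 95% home-range estimate with the reference (REF) bandwidth can be sketched in numpy. This is a minimal illustration with simulated locations; the paper's recommendation is LSCV smoothing, which would replace `ref_bandwidth` here.

```python
import numpy as np

def ref_bandwidth(points):
    # Bivariate reference ("REF") bandwidth: h = sigma * n^(-1/6),
    # with sigma the mean of the marginal standard deviations.
    n = len(points)
    sigma = np.mean(points.std(axis=0, ddof=1))
    return sigma * n ** (-1.0 / 6.0)

def home_range_area(points, h, level=0.95, grid=200, pad=4.0):
    # Evaluate the fixed Gaussian KDE on a grid, then find the smallest
    # set of cells holding `level` of the probability mass (95% isopleth).
    lo = points.min(axis=0) - pad * h
    hi = points.max(axis=0) + pad * h
    xs = np.linspace(lo[0], hi[0], grid)
    ys = np.linspace(lo[1], hi[1], grid)
    gx, gy = np.meshgrid(xs, ys)
    cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
    d2 = (gx[..., None] - points[:, 0]) ** 2 + (gy[..., None] - points[:, 1]) ** 2
    dens = np.exp(-d2 / (2 * h * h)).sum(-1) / (2 * np.pi * h * h * len(points))
    p = np.sort(dens.ravel())[::-1] * cell      # cell masses, densest first
    k = np.searchsorted(np.cumsum(p), level) + 1  # cells inside the isopleth
    return k * cell

rng = np.random.default_rng(1)
locs = rng.multivariate_normal([0, 0], [[1, 0], [0, 1]], size=50)
area = home_range_area(locs, ref_bandwidth(locs))
```

For 50 points from a standard bivariate normal, the estimated 95% area lands near the true value of about 18.8 (pi times the 95% chi-square quantile), inflated slightly by the smoothing.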
Unrestrained Expansion - A Source of Entropy
NASA Astrophysics Data System (ADS)
Michaud, L. M.
2005-12-01
The paper examines the role of unrestrained expansion in atmospheric entropy production. Lack of mechanical equilibrium is shown to be a far larger producer of internally generated entropy than other internally generated entropy production processes. Isentropic expanders are used to explain atmospheric entropy production. Unrestrained expansion can account for the discrepancy between the energy that would be produced if the heat were carried by Carnot engines and the energy actually produced. Having an expander is more important to mechanical energy production than reducing friction losses. The method of analysis is also applicable to the solar chimney and the atmospheric vortex engine.
Initial-state splitting kernels in cold nuclear matter
NASA Astrophysics Data System (ADS)
Ovanesyan, Grigory; Ringer, Felix; Vitev, Ivan
2016-09-01
We derive medium-induced splitting kernels for energetic partons that undergo interactions in dense QCD matter before a hard-scattering event at large momentum transfer Q2. Working in the framework of the effective theory SCETG, we compute the splitting kernels beyond the soft gluon approximation. We present numerical studies that compare our new results with previous findings. We expect the full medium-induced splitting kernels to be most relevant for the extension of initial-state cold nuclear matter energy loss phenomenology in both p+A and A+A collisions.
Machine learning algorithms for damage detection: Kernel-based approaches
NASA Astrophysics Data System (ADS)
Santos, Adam; Figueiredo, Eloi; Silva, M. F. M.; Sales, C. S.; Costa, J. C. W. A.
2016-02-01
This paper presents four kernel-based algorithms for damage detection under varying operational and environmental conditions, namely based on one-class support vector machine, support vector data description, kernel principal component analysis and greedy kernel principal component analysis. Acceleration time-series from an array of accelerometers were obtained from a laboratory structure and used for performance comparison. The main contribution of this study is the applicability of the proposed algorithms for damage detection, as well as the comparison of their classification performance with that of four other algorithms already considered reliable approaches in the literature. All proposed algorithms proved to have better classification performance than the previous ones.
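As a minimal example of the kernel-based detection idea, the following sketch scores test samples by their squared feature-space distance to the mean of the baseline (undamaged) data. This is a simpler relative of the four algorithms studied, not any of them exactly, and the data and `gamma` value are hypothetical.

```python
import numpy as np

def rbf(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelDistanceDetector:
    """Score = squared distance, in the kernel-induced feature space, to the
    mean of the training (baseline/undamaged) set; a large score flags damage."""
    def fit(self, X, gamma):
        self.X, self.gamma = X, gamma
        self.mean_term = rbf(X, X, gamma).mean()
        return self
    def score(self, Z):
        Kzx = rbf(Z, self.X, self.gamma)
        # k(z,z) = 1 for RBF; d^2 = 1 - 2*mean_i k(z,x_i) + mean_ij k(x_i,x_j)
        return 1.0 - 2.0 * Kzx.mean(axis=1) + self.mean_term

rng = np.random.default_rng(2)
baseline = rng.normal(0.0, 1.0, size=(200, 8))   # healthy-state features
damaged = rng.normal(3.0, 1.0, size=(20, 8))     # shifted operating state
det = KernelDistanceDetector().fit(baseline, gamma=1.0 / 16)
s_ok, s_bad = det.score(baseline), det.score(damaged)
```

The one-class SVM and SVDD algorithms of the paper refine exactly this picture by fitting a tight boundary around the baseline set instead of using its centroid.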
Monte Carlo Code System for Electron (Positron) Dose Kernel Calculations.
Energy Science and Technology Software Center (ESTSC)
1999-05-12
Version 00 KERNEL performs dose kernel calculations for an electron (positron) isotropic point source in an infinite homogeneous medium. First, the auxiliary code PRELIM is used to prepare cross section data for the considered medium. Then the KERNEL code simulates the transport of electrons and bremsstrahlung photons through the medium until all particles reach their cutoff energies. The deposited energy is scored in concentric spherical shells at a radial distance ranging from zero to twice the source particle range.
Bridging the gap between the KERNEL and RT-11
Hendra, R.G.
1981-06-01
A software package is proposed to allow users of the PL-11 language, and the LSI-11 KERNEL in general, to use their PL-11 programs under RT-11. Further, some general purpose extensions to the KERNEL are proposed that facilitate some number conversions and string manipulations. A Floating Point Package of procedures to allow full use of the hardware floating point capability of the LSI-11 computers is proposed. Extensions to the KERNEL that allow a user to read, write and delete disc files in the manner of RT-11 are also proposed. A device directory listing routine is also included.
Kernel simplex growing algorithm for hyperspectral endmember extraction
NASA Astrophysics Data System (ADS)
Zhao, Liaoying; Zheng, Junpeng; Li, Xiaorun; Wang, Lijiao
2014-01-01
In order to effectively extract endmembers for hyperspectral imagery where the linear mixing model may not be appropriate due to multiple scattering effects, this paper extends the simplex growing algorithm (SGA) to its kernel version. A new simplex volume formula without dimension reduction is used in SGA to form a new simplex growing algorithm (NSGA). The original data are nonlinearly mapped into a high-dimensional space where the scatters can be ignored. To avoid determining the complex nonlinear mapping explicitly, a kernel function is used to extend the NSGA to kernel NSGA (KNSGA). Experimental results on simulated and real data prove that the proposed KNSGA approach outperforms SGA and NSGA.
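The volume computation at the heart of a simplex growing step can be done without dimension reduction via the Gram determinant; this is the generic formula, shown here as an illustration of the idea rather than the paper's exact expression.

```python
import numpy as np
from math import factorial

def simplex_volume(vertices):
    """Volume of the k-simplex spanned by k+1 vertices in R^d (d >= k),
    computed from the Gram determinant of the edge vectors, so no
    projection to a k-dimensional subspace is needed."""
    V = np.asarray(vertices, dtype=float)
    E = V[1:] - V[0]                     # k edge vectors
    gram = E @ E.T                       # k x k Gram matrix
    k = E.shape[0]
    return np.sqrt(max(np.linalg.det(gram), 0.0)) / factorial(k)

# Unit right triangle embedded in a 5-band "spectral" space: area 1/2.
tri = [[0, 0, 0, 0, 0], [1, 0, 0, 0, 0], [0, 1, 0, 0, 0]]
area = simplex_volume(tri)
```

An SGA growing step would evaluate this volume for every candidate pixel appended to the current vertex set and keep the pixel that maximizes it; KNSGA does the same after replacing the inner products in the Gram matrix with kernel evaluations.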
Microscale Regenerative Heat Exchanger
NASA Technical Reports Server (NTRS)
Moran, Matthew E.; Stelter, Stephan; Stelter, Manfred
2006-01-01
The device described herein is designed primarily for use as a regenerative heat exchanger in a miniature Stirling engine or Stirling-cycle heat pump. A regenerative heat exchanger (sometimes called, simply, a "regenerator" in the Stirling-engine art) is basically a thermal capacitor: Its role in the Stirling cycle is to alternately accept heat from, then deliver heat to, an oscillating flow of a working fluid between compression and expansion volumes, without introducing an excessive pressure drop. These volumes are at different temperatures, and conduction of heat between these volumes is undesirable because it reduces the energy-conversion efficiency of the Stirling cycle.
Biologic fluorescence decay characteristics: determination by Laguerre expansion technique
NASA Astrophysics Data System (ADS)
Snyder, Wendy J.; Maarek, Jean-Michel I.; Papaioannou, Thanassis; Marmarelis, Vasilis Z.; Grundfest, Warren S.
1996-04-01
Fluorescence decay characteristics are used to identify biologic fluorophores and to characterize interactions with the fluorophore environment. In many studies, fluorescence lifetimes are assessed by iterative reconvolution techniques. We investigated the use of a new approach: the Laguerre expansion of kernels technique (Marmarelis, V.Z., Ann. Biomed. Eng. 1993; 21, 573-589), which yields the fluorescence impulse response function by least-squares fitting of an expansion in discrete-time Laguerre functions. Nitrogen (4 ns FWHM) and excimer (120 ns FWHM) laser pulses were used to excite the fluorescence of anthracene and of type II collagen powder. After filtering (monochromator) and detection (MCP-PMT), the fluorescence response was digitized (digital storage oscilloscope) and transferred to a personal computer. Input and output data were deconvolved by the Laguerre expansion technique to compute the impulse response function, which was then fitted to a multiexponential function for determination of the decay constants. A single exponential (time constant: 4.24 ns) best approximated the fluorescence decay of anthracene, whereas the type II collagen response was best approximated by a double exponential (time constants: 2.24 and 9.92 ns), in agreement with previously reported data. The results of the Laguerre expansion technique were compared to the least-squares iterative reconvolution technique. The Laguerre expansion technique appeared computationally efficient and robust to experimental noise in the data. Furthermore, the proposed method does not impose a set multiexponential form on the decay.
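A sketch of the technique, assuming the standard recursion for discrete-time Laguerre functions and a synthetic single-exponential decay in place of measured fluorescence (all parameter values are illustrative):

```python
import numpy as np

def discrete_laguerre_basis(alpha, n_funcs, n_samples):
    """Discrete-time Laguerre functions b_j(m), j = 0..n_funcs-1, from the
    standard recursion b_j(m) = sqrt(a)*b_j(m-1) + sqrt(a)*b_{j-1}(m)
    - b_{j-1}(m-1), with 0 < alpha < 1 setting the decay rate."""
    sa = np.sqrt(alpha)
    B = np.zeros((n_funcs, n_samples))
    B[0] = np.sqrt(1 - alpha) * sa ** np.arange(n_samples)
    for j in range(1, n_funcs):
        for m in range(n_samples):
            prev_j = B[j, m - 1] if m > 0 else 0.0
            prev_jm1 = B[j - 1, m - 1] if m > 0 else 0.0
            B[j, m] = sa * prev_j + sa * B[j - 1, m] - prev_jm1
    return B

# Estimate an impulse response by least squares in the Laguerre basis:
# y = x * h  ->  y = Phi c  with  Phi[:, j] = x * b_j  (convolutions).
rng = np.random.default_rng(3)
B = discrete_laguerre_basis(alpha=0.8, n_funcs=6, n_samples=200)
h_true = np.exp(-np.arange(200) / 10.0)      # decaying "fluorescence" IRF
x = rng.standard_normal(400)                 # broadband excitation input
y = np.convolve(x, h_true)[:400]
Phi = np.stack([np.convolve(x, bj)[:400] for bj in B], axis=1)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
h_est = B.T @ coef
```

Because the basis functions are themselves decaying, a handful of coefficients captures an exponential-like impulse response, which is what makes the fit well-conditioned compared with iterative reconvolution.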
Bilinear analysis for kernel selection and nonlinear feature extraction.
Yang, Shu; Yan, Shuicheng; Zhang, Chao; Tang, Xiaoou
2007-09-01
This paper presents a unified criterion, Fisher + kernel criterion (FKC), for feature extraction and recognition. This new criterion is intended to extract the most discriminant features in different nonlinear spaces and then fuse these features under a unified measurement. Thus, FKC can simultaneously achieve nonlinear discriminant analysis and kernel selection. In addition, we present an efficient algorithm, Fisher + kernel analysis (FKA), which utilizes bilinear analysis to optimize the new criterion. The FKA algorithm can alleviate the ill-posed problem that exists in traditional kernel discriminant analysis (KDA) and usually has no singularity problem. The effectiveness of our proposed algorithm is validated by a series of face-recognition experiments on several different databases. PMID:18220192
A kernel adaptive algorithm for quaternion-valued inputs.
Paul, Thomas K; Ogunfunmi, Tokunbo
2015-10-01
The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations. PMID:25594982
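The real-valued KLMS algorithm that Quat-KLMS generalizes can be sketched as follows (scalar Gaussian kernel and hypothetical toy data; the quaternion version replaces these reals with quaternion-valued inputs and an HR-calculus gradient):

```python
import numpy as np

def gauss(u, v, sigma=1.0):
    return np.exp(-np.sum((u - v) ** 2) / (2 * sigma ** 2))

class KLMS:
    """Kernel least-mean-square: f(u) = sum_i a_i k(u, c_i).
    Each new sample becomes a center with weight step*error,
    which is the stochastic-gradient step in the RKHS."""
    def __init__(self, step=0.5, sigma=1.0):
        self.step, self.sigma = step, sigma
        self.centers, self.alphas = [], []
    def predict(self, u):
        return sum(a * gauss(u, c, self.sigma)
                   for a, c in zip(self.alphas, self.centers))
    def update(self, u, d):
        e = d - self.predict(u)          # a-priori error
        self.centers.append(u)
        self.alphas.append(self.step * e)
        return e

rng = np.random.default_rng(4)
f = KLMS()
errs = []
for _ in range(300):
    u = rng.uniform(-2, 2, size=2)
    d = np.sin(u[0]) * np.cos(u[1])      # unknown nonlinear map to learn
    errs.append(abs(f.update(u, d)))
```

The learning curve (the `errs` sequence) decays as the expansion grows, which is the behavior the paper's simulations compare against for the quaternion and widely linear variants.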
Inheritance of Kernel Color in Corn: Explanations and Investigations.
ERIC Educational Resources Information Center
Ford, Rosemary H.
2000-01-01
Offers a new perspective on traditional problems in genetics on kernel color in corn, including information about genetic regulation, metabolic pathways, and evolution of genes. (Contains 15 references.) (ASK)
Intelligent classification methods of grain kernels using computer vision analysis
NASA Astrophysics Data System (ADS)
Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo
2011-06-01
In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.
Kernel-based Linux emulation for Plan 9.
Minnich, Ronald G.
2010-09-01
CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss CNKemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.
Constructing Bayesian formulations of sparse kernel learning methods.
Cawley, Gavin C; Talbot, Nicola L C
2005-01-01
We present here a simple technique that simplifies the construction of Bayesian treatments of a variety of sparse kernel learning algorithms. An incomplete Cholesky factorisation is employed to modify the dual parameter space, such that the Gaussian prior over the dual model parameters is whitened. The regularisation term then corresponds to the usual weight-decay regulariser, allowing the Bayesian analysis to proceed via the evidence framework of MacKay. There is in addition a useful by-product associated with the incomplete Cholesky factorisation algorithm: it also identifies a subset of the training data forming an approximate basis for the entire dataset in the kernel-induced feature space, resulting in a sparse model. Bayesian treatments of the kernel ridge regression (KRR) algorithm, with both constant and heteroscedastic (input dependent) variance structures, and kernel logistic regression (KLR) are provided as illustrative examples of the proposed method, which we hope will be more widely applicable. PMID:16085387
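A minimal pivoted incomplete Cholesky factorisation makes the by-product concrete: the pivot indices select training points that form an approximate basis in the kernel-induced feature space. This is the standard textbook version of the algorithm, with illustrative data.

```python
import numpy as np

def incomplete_cholesky(K, tol=1e-6, max_rank=None):
    """Pivoted incomplete Cholesky of a kernel (Gram) matrix:
    returns G with K ~ G @ G.T and the pivot indices, which identify
    the training points forming the approximate feature-space basis."""
    n = K.shape[0]
    d = np.diag(K).astype(float).copy()      # residual diagonal
    G = np.zeros((n, 0))
    pivots = []
    limit = n if max_rank is None else max_rank
    while len(pivots) < limit and d.max() > tol:
        i = int(np.argmax(d))                # greedy pivot choice
        pivots.append(i)
        col = (K[:, i] - G @ G[i]) / np.sqrt(d[i])
        G = np.column_stack([G, col])
        d = np.maximum(d - col ** 2, 0.0)    # clip tiny negative round-off
    return G, pivots

rng = np.random.default_rng(5)
X = rng.standard_normal((100, 2))
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-d2)                              # RBF Gram matrix, low effective rank
G, pivots = incomplete_cholesky(K, tol=1e-6)
```

Working with `G` instead of `K` is also what whitens the Gaussian prior over the dual parameters, since the transformed design has (approximately) identity Gram structure.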
Accelerating the loop expansion
Ingermanson, R.
1986-07-29
This thesis introduces a new non-perturbative technique into quantum field theory. To illustrate the method, I analyze the much-studied φ⁴ theory in two dimensions. As a prelude, I first show that the Hartree approximation is easy to obtain from the calculation of the one-loop effective potential by a simple modification of the propagator that does not affect the perturbative renormalization procedure. A further modification then suggests itself, which has the same nice property, and which automatically yields a convex effective potential. I then show that both of these modifications extend naturally to higher orders in the derivative expansion of the effective action and to higher orders in the loop expansion. The net effect is to re-sum the perturbation series for the effective action as a systematic "accelerated" non-perturbative expansion. Each term in the accelerated expansion corresponds to an infinite number of terms in the original series. Each term can be computed explicitly, albeit numerically. Many numerical graphs of the various approximations to the first two terms in the derivative expansion are given. I discuss the reliability of the results and the problem of spontaneous symmetry-breaking, as well as some potential applications to more interesting field theories. 40 refs.
Hairpin Vortex Dynamics in a Kernel Experiment
NASA Astrophysics Data System (ADS)
Meng, H.; Yang, W.; Sheng, J.
1998-11-01
A surface-mounted trapezoidal tab is known to shed hairpin-like vortices and generate a pair of counter-rotating vortices in its wake. Such a flow serves as a kernel experiment for studying the dynamics of these vortex structures. Created by and scaled with the tab, the vortex structures are more orderly and larger than those in the natural wall turbulence and thus suitable for measurement by Particle Image Velocimetry (PIV) and visualization by Planar Laser Induced Fluorescence (PLIF). Time-series PIV provides insight into the evolution, self-enhancement, regeneration, and interaction of hairpin vortices, as well as interactions of the hairpins with the pressure-induced counter-rotating vortex pair (CVP). The topology of the wake structure indicates that the hairpin "heads" are formed from lifted shear-layer instability and "legs" from stretching by the CVP, which passes the energy to the hairpins. The CVP diminishes after one tab height, while the hairpins persist until 10-20 tab heights downstream. It is concluded that the lift-up of the near-surface viscous fluids is the key to hairpin vortex dynamics. Whether from the pumping action of the CVP or the ejection by an existing hairpin, the 3D lift-up of near-surface vorticity contributes to the increase of hairpin vortex strength and creation of secondary hairpins. http://www.mne.ksu.edu/ meng/labhome.html
Kernel MAD Algorithm for Relative Radiometric Normalization
NASA Astrophysics Data System (ADS)
Bai, Yang; Tang, Ping; Hu, Changmiao
2016-06-01
The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments on both the linear CCA and KCCA versions of the MAD algorithm with the use of Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data derived from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization; it describes the nonlinear relationship between multi-temporal images well. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.
Kernel spectral clustering with memory effect
NASA Astrophysics Data System (ADS)
Langone, Rocco; Alzate, Carlos; Suykens, Johan A. K.
2013-05-01
Evolving graphs describe many natural phenomena changing over time, such as social relationships, trade markets, metabolic networks etc. In this framework, performing community detection and analyzing the cluster evolution represents a critical task. Here we propose a new model for this purpose, where the smoothness of the clustering results over time can be considered as a valid prior knowledge. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness. The latter allows the model to cluster the current data well and to be consistent with the recent history. We also propose new model selection criteria in order to carefully choose the hyper-parameters of our model, which is a crucial issue to achieve good performance. We successfully test the model on four toy problems and on a real world network. We also compare our model with Evolutionary Spectral Clustering, which is a state-of-the-art algorithm for community detection in evolving networks, illustrating that kernel spectral clustering with memory effect can achieve better or equal performance.
SCAP. Point Kernel Single or Albedo Scatter
Disney, R.K.; Bevan, S.E.
1982-08-05
SCAP solves for radiation transport in complex geometries using the single or albedo-scatter point kernel method. The program is designed to calculate the neutron or gamma-ray radiation level at detector points located within or outside a complex radiation scatter source geometry or a user-specified discrete scattering volume. The geometry is described by zones bounded by intersecting quadratic surfaces with an arbitrary maximum number of boundary surfaces per zone. The anisotropic point sources are described as point-wise energy dependent distributions of polar angles on a meridian; isotropic point sources may be specified also. The attenuation function for gamma rays is an exponential function on the primary source leg and the scatter leg with a buildup factor approximation to account for multiple scatter on the scatter leg. The neutron attenuation function is an exponential function using neutron removal cross sections on the primary source leg and scatter leg. Line or volumetric sources can be represented as distributions of isotropic point sources, with uncollided line-of-sight attenuation and buildup calculated between each source point and the detector point.
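The attenuation-with-buildup kernel at the core of such codes, and the representation of a line source as a distribution of point sources, can be sketched as follows. This is a simplified single-medium illustration with a linear buildup factor standing in for tabulated Taylor/Berger factors; all values and names are illustrative, not SCAP's.

```python
import math

def point_kernel_flux(S, mu, r, buildup=lambda mfp: 1.0 + mfp):
    """Point-kernel flux with a buildup correction:
    phi = S * B(mu*r) * exp(-mu*r) / (4*pi*r^2).
    S: source strength (photons/s), mu: linear attenuation
    coefficient (1/cm), r: source-detector distance (cm)."""
    mfp = mu * r                      # optical thickness in mean free paths
    return S * buildup(mfp) * math.exp(-mfp) / (4.0 * math.pi * r * r)

def line_source_flux(S_per_cm, mu, length, det_dist, n=200):
    """A line source represented as a distribution of isotropic point
    sources, summed along the line (detector on the perpendicular
    bisector at distance det_dist)."""
    total, dz = 0.0, length / n
    for k in range(n):
        z = (k + 0.5) * dz - length / 2.0
        r = math.hypot(det_dist, z)
        total += point_kernel_flux(S_per_cm * dz, mu, r)
    return total
```

For neutrons the same structure applies with removal cross sections in the exponential and no buildup factor, as the abstract describes.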
Local Kernel for Brains Classification in Schizophrenia
NASA Astrophysics Data System (ADS)
Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.
In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning-by-example method is introduced by combining local measurements with a nonlinear Support Vector Machine. Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points which are characterized by local region descriptors, namely the Scale-Invariant Feature Transform (SIFT). Then, matching is obtained by introducing the local kernel, for which the samples are represented by unordered sets of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed on a set of 54 patients with schizophrenia and 54 normal controls, on which regions of interest (ROI) have been manually traced by experts. Preliminary results on the Dorso-lateral PreFrontal Cortex (DLPFC) region are promising, since a successful classification rate of up to 75% has been obtained with this technique, improving to 85% when the subjects are stratified by sex.
Temporal-kernel recurrent neural networks.
Sutskever, Ilya; Hinton, Geoffrey
2010-03-01
A Recurrent Neural Network (RNN) is a powerful connectionist model that can be applied to many challenging sequential problems, including problems that naturally arise in language and speech. However, RNNs are extremely hard to train on problems that have long-term dependencies, where it is necessary to remember events for many timesteps before using them to make a prediction. In this paper we consider the problem of training RNNs to predict sequences that exhibit significant long-term dependencies, focusing on a serial recall task where the RNN needs to remember a sequence of characters for a large number of steps before reconstructing it. We introduce the Temporal-Kernel Recurrent Neural Network (TKRNN), which is a variant of the RNN that can cope with long-term dependencies much more easily than a standard RNN, and show that the TKRNN develops short-term memory that successfully solves the serial recall task by representing the input string with a stable state of its hidden units. PMID:19932002
Phoneme recognition with kernel learning algorithms
NASA Astrophysics Data System (ADS)
Namarvar, Hassan H.; Berger, Theodore W.
2004-10-01
An isolated phoneme recognition system is proposed using time-frequency domain analysis and support vector machines (SVMs). The TIMIT corpus, which contains a total of 6300 sentences, ten sentences spoken by each of 630 speakers from eight major dialect regions of the United States, was used in this experiment. The provided time-aligned phonetic transcription was used to extract phonemes from speech samples. A 55-output classifier system was designed corresponding to 55 classes of phonemes and trained with the kernel learning algorithms. The training dataset was extracted from clean training samples. A portion of the database, i.e., 65338 samples of the training dataset, was used to train the system. The performance of the system on the training dataset was 76.4%. The whole test dataset of the TIMIT corpus was used to test the generalization of the system. All samples, i.e., 55655 samples of the test dataset, were used to test the system. The performance of the system on the test dataset was 45.3%. This approach is currently under development to extend the algorithm for continuous phoneme recognition. [Work supported in part by grants from DARPA, NASA, and ONR.]
Technology Transfer Automated Retrieval System (TEKTRAN)
The effect of heat damage was estimated using Hard Red Winter (HRW) wheat varieties grown in Oklahoma. The testing was done on wheat kernels, flour, and isolated starch. Whole-wheat kernels were analyzed by Photoacoustic Spectroscopy (PAS). Flour was analyzed by DSC, Capillary Electrophoresis (CE...
Nonlinear stochastic system identification of skin using volterra kernels.
Chen, Yi; Hunter, Ian W
2013-04-01
Volterra kernel stochastic system identification is a technique that can be used to capture and model nonlinear dynamics in biological systems, including the nonlinear properties of skin during indentation. A high bandwidth and high stroke Lorentz force linear actuator system was developed and used to test the mechanical properties of bulk skin and underlying tissue in vivo using a non-white input force and measuring an output position. These short tests (5 s) were conducted in an indentation configuration normal to the skin surface and in an extension configuration tangent to the skin surface. Volterra kernel solution methods were used including a fast least squares procedure and an orthogonalization solution method. The practical modifications, such as frequency domain filtering, necessary for working with low-pass filtered inputs are also described. A simple linear stochastic system identification technique had a variance accounted for (VAF) of less than 75%. Representations using the first and second Volterra kernels had a much higher VAF (90-97%) as well as a lower Akaike information criteria (AICc) indicating that the Volterra kernel models were more efficient. The experimental second Volterra kernel matches well with results from a dynamic-parameter nonlinearity model with fixed mass as a function of depth as well as stiffness and damping that increase with depth into the skin. A study with 16 subjects showed that the kernel peak values have mean coefficients of variation (CV) that ranged from 3 to 8% and showed that the kernel principal components were correlated with location on the body, subject mass, body mass index (BMI), and gender. These fast and robust methods for Volterra kernel stochastic system identification can be applied to the characterization of biological tissues, diagnosis of skin diseases, and determination of consumer product efficacy. PMID:23264003
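A least-squares Volterra identification in the spirit described above can be sketched for a known second-order system (white rather than non-white input, and hypothetical kernel values):

```python
import numpy as np

def volterra_regressors(x, memory):
    """Regressor matrix for a 2nd-order Volterra series:
    constant term + linear lags + unique quadratic lag products."""
    n = len(x)
    lags = np.stack([np.concatenate([np.zeros(i), x[:n - i]])
                     for i in range(memory)], axis=1)
    cols = [np.ones(n)] + [lags[:, i] for i in range(memory)]
    for i in range(memory):
        for j in range(i, memory):       # symmetric kernel: keep i <= j only
            cols.append(lags[:, i] * lags[:, j])
    return np.stack(cols, axis=1)

# Identify a known quadratic system from input/output data.
rng = np.random.default_rng(6)
x = rng.standard_normal(2000)
h1 = np.array([1.0, 0.5, 0.25])          # true first-order (linear) kernel
y = np.convolve(x, h1)[:2000] + 0.3 * x * np.roll(x, 1)  # + 2nd-order term
Phi = volterra_regressors(x, memory=3)
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
h1_est = theta[1:4]                      # recovered linear kernel
```

The paper's orthogonalization and frequency-domain filtering refine this basic normal-equations fit for short records and low-pass inputs; the structure of the regression is the same.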
The Weighted Super Bergman Kernels Over the Supermatrix Spaces
NASA Astrophysics Data System (ADS)
Feng, Zhiming
2015-12-01
The purpose of this paper is threefold. Firstly, using Howe duality, we obtain integral formulas for the super Schur functions with respect to the super standard Gaussian distributions. Secondly, we give explicit expressions for the super Szegö kernels and the weighted super Bergman kernels for the Cartan superdomains of type I. Thirdly, combining these results, we obtain duality relations for integrals over the unitary groups and the Cartan superdomains, and the marginal distributions of the weighted measure.
Simple randomized algorithms for online learning with kernels.
He, Wenwu; Kwok, James T
2014-12-01
In online learning with kernels, it is vital to control the size (budget) of the support set because of the curse of kernelization. In this paper, we propose two simple and effective stochastic strategies for controlling the budget. Both algorithms have an expected regret that is sublinear in the horizon. Experimental results on a number of benchmark data sets demonstrate encouraging performance in terms of both efficacy and efficiency. PMID:25108150
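One of the simplest stochastic budget strategies, dropping a uniformly random support vector whenever the budget is exceeded, can be sketched with a kernel perceptron. This is an illustrative relative of the paper's algorithms, not their exact form; the data and budget value are hypothetical.

```python
import numpy as np

def rbf(u, v):
    return np.exp(-np.sum((u - v) ** 2))

class BudgetKernelPerceptron:
    """Online kernel perceptron with a hard budget: on a mistake the
    example joins the support set; if the budget is then exceeded, one
    support vector is removed uniformly at random."""
    def __init__(self, budget=30, seed=0):
        self.budget = budget
        self.sv, self.coef = [], []
        self.rng = np.random.default_rng(seed)
    def decide(self, u):
        return sum(c * rbf(u, v) for c, v in zip(self.coef, self.sv))
    def update(self, u, label):          # label in {-1, +1}
        mistake = label * self.decide(u) <= 0
        if mistake:
            self.sv.append(u)
            self.coef.append(float(label))
            if len(self.sv) > self.budget:
                k = int(self.rng.integers(len(self.sv)))
                self.sv.pop(k)           # random removal keeps budget fixed
                self.coef.pop(k)
        return mistake

rng = np.random.default_rng(7)
model = BudgetKernelPerceptron(budget=30)
mistakes = 0
for _ in range(500):
    u = rng.uniform(-1, 1, size=2)
    label = 1.0 if u[0] * u[1] > 0 else -1.0  # XOR-like, nonlinearly separable
    mistakes += model.update(u, label)
```

The support set never exceeds the budget regardless of stream length, which is precisely the "curse of kernelization" being controlled; the paper's contribution is proving sublinear regret for strategies of this kind.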
NASA Technical Reports Server (NTRS)
Widener, Edward L.
1992-01-01
The objective is to introduce some concepts of thermodynamics in existing heat-treating experiments using available items. The specific objectives are to define the thermal properties of materials and to visualize expansivity, conductivity, heat capacity, and the melting point of common metals. The experimental procedures are described.
Thermal expansion in nanoresonators
NASA Astrophysics Data System (ADS)
Mancardo Viotti, Agustín; Monastra, Alejandro G.; Moreno, Mariano F.; Florencia Carusela, M.
2016-08-01
Inspired by some recent experiments and numerical works related to nanoresonators, we perform classical molecular dynamics simulations to investigate the thermal expansion and the ability of the device to act as a strain sensor assisted by thermally-induced vibrations. The proposed model consists of a chain of atoms interacting anharmonically, with both ends clamped to thermal reservoirs. We analyze the thermal expansion and resonant frequency shifts as a function of temperature and the applied strain. For the transversal modes the shift is approximately linear with strain. We also present analytical results from canonical calculations in the harmonic approximation showing that thermal expansion is uniform along the device. This prediction also holds when the system operates in a nonlinear oscillation regime at moderate and high temperatures.
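The link between anharmonicity and thermal expansion can be illustrated with a one-bond canonical (Boltzmann) average rather than a full MD chain: a cubic asymmetry in the bond potential makes the mean position grow with temperature. The potential coefficients below are arbitrary illustrative values, with kB = 1.

```python
import numpy as np

def mean_position(T, grid=np.linspace(-6.0, 8.0, 4001)):
    """Canonical average <x> for one anharmonic bond with
    V(x) = x^2/2 - 0.1 x^3 + 0.01 x^4 (quartic term keeps V bounded).
    The cubic asymmetry skews the Boltzmann weight toward x > 0,
    so <x> grows with T: thermal expansion."""
    V = grid ** 2 / 2 - 0.1 * grid ** 3 + 0.01 * grid ** 4
    w = np.exp(-(V - V.min()) / T)       # Boltzmann weight on the grid
    return float((grid * w).sum() / w.sum())

expansion = [mean_position(T) for T in (0.1, 0.5, 1.0, 2.0)]
```

At low temperature this reproduces the standard perturbative result that the expansion is linear in T; in the chain model of the paper the same mechanism acts on every bond, which is why the expansion is uniform along the device.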
Novel Foraminal Expansion Technique
Ozer, Ali Fahir; Senturk, Salim; Ciplak, Mert; Oktenoglu, Tunc; Sasani, Mehdi; Egemen, Emrah; Yaman, Onur; Suzer, Tuncer
2016-01-01
The technique we describe was developed for cervical foraminal stenosis for cases in which a keyhole foraminotomy would not be effective. Many cervical stenosis cases are so severe that keyhole foraminotomy is not successful. However, the technique outlined in this study provides adequate enlargement of an entire cervical foraminal diameter. This study reports on a novel foraminal expansion technique. Linear drilling was performed in the middle of the facet joint. A small bone graft was placed between the divided lateral masses after distraction. A lateral mass stabilization was performed with screws and rods following the expansion procedure. A cervical foramen was linearly drilled medially to laterally, then expanded with small bone grafts, and a lateral mass instrumentation was added with surgery. The patient was well after the surgery. The novel foraminal expansion is an effective surgical method for severe foraminal stenosis. PMID:27559460
Optimal Electric Utility Expansion
Energy Science and Technology Software Center (ESTSC)
1989-10-10
SAGE-WASP is designed to find the optimal generation expansion policy for an electrical utility system. New units can be automatically selected from a user-supplied list of expansion candidates, which can include hydroelectric and pumped storage projects. The existing system is modeled. The calculational procedure takes into account user restrictions to limit generation configurations to an area of economic interest. The optimization program reports whether the restrictions acted as a constraint on the solution. All expansion configurations considered are required to pass a user-supplied reliability criterion. The discount rate and escalation rate are treated separately for each expansion candidate and for each fuel type. All expenditures are separated into local and foreign accounts, and a weighting factor can be applied to foreign expenditures.
Novel Foraminal Expansion Technique.
Ozer, Ali Fahir; Senturk, Salim; Ciplak, Mert; Oktenoglu, Tunc; Sasani, Mehdi; Egemen, Emrah; Yaman, Onur; Suzer, Tuncer
2016-08-01
The technique we describe was developed for cervical foraminal stenosis for cases in which a keyhole foraminotomy would not be effective. Many cervical stenosis cases are so severe that keyhole foraminotomy is not successful. However, the technique outlined in this study provides adequate enlargement of an entire cervical foraminal diameter. This study reports on a novel foraminal expansion technique. Linear drilling was performed in the middle of the facet joint. A small bone graft was placed between the divided lateral masses after distraction. A lateral mass stabilization was performed with screws and rods following the expansion procedure. A cervical foramen was linearly drilled medially to laterally, then expanded with small bone grafts, and a lateral mass instrumentation was added with surgery. The patient was well after the surgery. The novel foraminal expansion is an effective surgical method for severe foraminal stenosis. PMID:27559460
Sparse kernel learning with LASSO and Bayesian inference algorithm.
Gao, Junbin; Kwan, Paul W; Shi, Daming
2010-03-01
Kernelized LASSO (Least Absolute Shrinkage and Selection Operator) has been investigated in two separate recent papers [Gao, J., Antolovich, M., & Kwan, P. H. (2008). L1 LASSO and its Bayesian inference. In W. Wobcke, & M. Zhang (Eds.), Lecture notes in computer science: Vol. 5360 (pp. 318-324); Wang, G., Yeung, D. Y., & Lochovsky, F. (2007). The kernel path in kernelized LASSO. In International conference on artificial intelligence and statistics (pp. 580-587). San Juan, Puerto Rico: MIT Press]. This paper is concerned with learning kernels under the LASSO formulation by adopting a generative Bayesian learning and inference approach. A new robust learning algorithm is proposed which produces a sparse kernel model with the capability of learning regularized parameters and kernel hyperparameters. A comparison with state-of-the-art methods for constructing sparse regression models such as the relevance vector machine (RVM) and the local regularization assisted orthogonal least squares regression (LROLS) is given. The new algorithm is also demonstrated to possess considerable computational advantages. PMID:19604671
Enzymatic treatment of peanut kernels to reduce allergen levels.
Yu, Jianmei; Ahmedna, Mohamed; Goktepe, Ipek; Cheng, Hsiaopo; Maleki, Soheila
2011-08-01
This study investigated the use of enzymatic treatment to reduce peanut allergens in peanut kernels as affected by processing conditions. Two major peanut allergens, Ara h 1 and Ara h 2, were used as indicators of process effectiveness. Enzymatic treatment effectively reduced Ara h 1 and Ara h 2 in roasted peanut kernels by up to 100% under optimal conditions. For instance, treatment of roasted peanut kernels with α-chymotrypsin and trypsin for 1-3 h significantly increased the solubility of peanut protein while reducing Ara h 1 and Ara h 2 in peanut kernel extracts by 100% and 98%, respectively, based on ELISA readings. Ara h 1 and Ara h 2 levels in peanut protein extracts were inversely correlated with protein solubility in roasted peanut. Blanching of kernels enhanced the effectiveness of enzyme treatment in roasted peanuts but not in raw peanuts. The optimal concentration of enzyme was determined by response surface methodology to be in the range of 0.1-0.2%. No consistent results were obtained for raw peanut kernels, since Ara h 1 and Ara h 2 increased in peanut protein extracts under some treatment conditions and decreased in others. PMID:25214091
Integrodifference equations in patchy landscapes : I. Dispersal Kernels.
Musgrave, Jeffrey; Lutscher, Frithjof
2014-09-01
What is the effect of individual movement behavior in patchy landscapes on redistribution kernels? To answer this question, we derive a number of redistribution kernels from a random walk model with patch dependent diffusion, settling, and mortality rates. At the interface of two patch types, we integrate recent results on individual behavior at the interface. In general, these interface conditions result in the probability density function of the random walker being discontinuous at an interface. We show that the dispersal kernel can be characterized as the Green's function of a second-order differential operator. Using this characterization, we illustrate the kind of (discontinuous) dispersal kernels that result from our approach, using three scenarios. First, we assume that dispersal distance is small compared to patch size, so that a typical disperser crosses at most one interface during the dispersal phase. Then we consider a single bounded patch and generate kernels that will be useful to study the critical patch size problem in our sequel paper. Finally, we explore dispersal kernels in a periodic landscape and study the dependence of certain dispersal characteristics on model parameters. PMID:23907527
An Ensemble Approach to Building Mercer Kernels with Prior Information
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2005-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite, dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
Fast O(1) bilateral filtering using trigonometric range kernels.
Chaudhury, Kunal Narayan; Sage, Daniel; Unser, Michael
2011-12-01
It is well known that spatial averaging can be realized (in space or frequency domain) using algorithms whose complexity does not scale with the size or shape of the filter. These fast algorithms are generally referred to as constant-time or O(1) algorithms in the image-processing literature. Along with the spatial filter, the edge-preserving bilateral filter involves an additional range kernel. This is used to restrict the averaging to those neighborhood pixels whose intensities are similar or close to that of the pixel of interest. The range kernel operates by acting on the pixel intensities. This makes the averaging process nonlinear and computationally intensive, particularly when the spatial filter is large. In this paper, we show how the O(1) averaging algorithms can be leveraged for realizing the bilateral filter in constant time, by using trigonometric range kernels. This is done by generalizing the idea presented by Porikli, i.e., using polynomial kernels. The class of trigonometric kernels turns out to be sufficiently rich, allowing for the approximation of the standard Gaussian bilateral filter. The attractive feature of our approach is that, for a fixed number of terms, the quality of approximation achieved using trigonometric kernels is much superior to that obtained by Porikli using polynomials. PMID:21659022
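The shiftability trick behind this approach can be sketched in a few lines: approximating the Gaussian range kernel by a raised cosine, each term of its binomial expansion splits into products of modulated images, so the bilateral filter reduces to a fixed number of linear spatial filterings. A minimal illustrative sketch (the function names and the choice of a box spatial filter are ours, not the paper's code):

```python
import math
import numpy as np

def box_filter(a, r):
    """Separable box mean filter, O(1) per pixel via cumulative sums."""
    for ax in (0, 1):
        p = np.pad(a, [(r, r) if i == ax else (0, 0) for i in range(2)], mode='edge')
        c = np.cumsum(p, axis=ax)
        zero = np.zeros_like(np.take(c, [0], axis=ax))
        c = np.concatenate([zero, c], axis=ax)       # c[k] = sum of first k samples
        n = a.shape[ax]
        hi = np.take(c, np.arange(2 * r + 1, 2 * r + 1 + n), axis=ax)
        lo = np.take(c, np.arange(0, n), axis=ax)
        a = (hi - lo) / (2 * r + 1)
    return a

def trig_bilateral(img, r, sigma_r, N=8):
    """Constant-time bilateral filter with a raised-cosine range kernel:
    cos^N(g*s) ~ exp(-s^2 / (2*sigma_r^2)) for g = 1/(sigma_r*sqrt(N)),
    valid while |g*s| stays below pi/2."""
    g = 1.0 / (sigma_r * math.sqrt(N))
    num = np.zeros_like(img, dtype=float)
    den = np.zeros_like(img, dtype=float)
    for k in range(N + 1):
        c = math.comb(N, k) / 2.0 ** N               # binomial expansion of cos^N
        w = (N - 2 * k) * g
        ci, si = np.cos(w * img), np.sin(w * img)
        # cos(w*(f(y)-f(x))) splits into products -> two linear filterings per term
        num += c * (ci * box_filter(ci * img, r) + si * box_filter(si * img, r))
        den += c * (ci * box_filter(ci, r) + si * box_filter(si, r))
    return num / np.maximum(den, 1e-12)
```

On a constant image the filter is exact (the trigonometric terms recombine to 1), which makes a quick sanity check easy.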
Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.
Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K
2016-03-01
Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically. PMID:26735744
Stirling engine heating system
Johansson, L.N.; Houtman, W.H.; Percival, W.H.
1988-06-28
A hot gas engine is described wherein a working gas flows back and forth in a closed path between a relatively cooler compression-cylinder side of the engine and a relatively hotter expansion-cylinder side, and the path contains means, including a heat source and a heat sink, acting upon the gas in cooperation with the compression and expansion cylinders to cause the gas to execute a thermodynamic cycle in which useful mechanical output power is developed by the engine. The improvement in the heat source comprises a plurality of individual tubes, each forming a portion of the closed path for the working gas.
Fouss, François; Francoisse, Kevin; Yen, Luh; Pirotte, Alain; Saerens, Marco
2012-07-01
This paper presents a survey as well as an empirical comparison and evaluation of seven kernels on graphs and two related similarity matrices, that we globally refer to as "kernels on graphs" for simplicity. They are the exponential diffusion kernel, the Laplacian exponential diffusion kernel, the von Neumann diffusion kernel, the regularized Laplacian kernel, the commute-time (or resistance-distance) kernel, the random-walk-with-restart similarity matrix, and finally, a kernel first introduced in this paper (the regularized commute-time kernel) and two kernels defined in some of our previous work and further investigated in this paper (the Markov diffusion kernel and the relative-entropy diffusion matrix). The kernel-on-graphs approach is simple and intuitive. It is illustrated by applying the nine kernels to a collaborative-recommendation task, viewed as a link prediction problem, and to a semisupervised classification task, both on several databases. The methods compute proximity measures between nodes that help study the structure of the graph. Our comparisons suggest that the regularized commute-time and the Markov diffusion kernels perform best on the investigated tasks, closely followed by the regularized Laplacian kernel. PMID:22497802
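Several of the surveyed kernels have direct closed forms in terms of the graph's adjacency matrix. A sketch of two of them, using the formulas as commonly given in the literature (the function name and the `alpha` default are ours; `alpha` must be small enough that the matrices are positive definite):

```python
import numpy as np

def graph_kernels(A, alpha=0.9):
    """Closed forms for two of the surveyed kernels on graphs.
    A: symmetric adjacency matrix of an undirected graph."""
    D = np.diag(A.sum(axis=1))                       # degree matrix
    L = D - A                                        # combinatorial Laplacian
    # regularized Laplacian kernel: (I + alpha * L)^{-1}
    K_reg_laplacian = np.linalg.inv(np.eye(len(A)) + alpha * L)
    # regularized commute-time kernel: (D - alpha * A)^{-1}
    K_reg_commute = np.linalg.inv(D - alpha * A)
    return K_reg_laplacian, K_reg_commute
```

Both matrices are symmetric and positive definite, so their entries can be used directly as node-proximity measures in link prediction or semisupervised classification, as in the survey's experiments.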
Tomlinson, John J.
2006-04-18
A water-heating dehumidifier includes a refrigerant loop including a compressor, at least one condenser, an expansion device and an evaporator including an evaporator fan. The condenser includes a water inlet and a water outlet for flowing water therethrough or proximate thereto, or is affixed to the tank or immersed into the tank to effect water heating without flowing water. The immersed condenser design includes a self-insulated capillary tube expansion device for simplicity and high efficiency. In a water heating mode air is drawn by the evaporator fan across the evaporator to produce cooled and dehumidified air and heat taken from the air is absorbed by the refrigerant at the evaporator and is pumped to the condenser, where water is heated. When the tank of water heater is full of hot water or a humidistat set point is reached, the water-heating dehumidifier can switch to run as a dehumidifier.
Simulating non-Newtonian flows with the moving particle semi-implicit method with an SPH kernel
NASA Astrophysics Data System (ADS)
Xiang, Hao; Chen, Bin
2015-02-01
The moving particle semi-implicit (MPS) method and smoothed particle hydrodynamics (SPH) are commonly used mesh-free particle methods for free surface flows. The MPS method has superiority in incompressible flow simulation and simple programming. However, the crude kernel function is not accurate enough for the discretization of the divergence of the shear stress tensor, because of particle inconsistency, when the MPS method is extended to non-Newtonian flows. This paper presents an improved MPS method with an SPH kernel to simulate non-Newtonian flows. To improve the consistency of the partial derivative, the SPH cubic spline kernel and the Taylor series expansion are combined with the MPS method. This approach is suitable for all non-Newtonian fluids that can be described with τ = μ(|γ̇|)Δ (where τ is the shear stress tensor, μ is the viscosity, |γ̇| is the shear rate, and Δ is the strain-rate tensor), e.g., the Casson and Cross fluids. Two examples are simulated including the Newtonian Poiseuille flow and container filling process of the Cross fluid. The results of Poiseuille flow are more accurate than the traditional MPS method, and different filling processes are obtained with good agreement with previous results, which verified the validation of the new algorithm. For the Cross fluid, the jet fracture length can be correlated with We^0.28 Fr^0.78 (We is the Weber number, Fr is the Froude number).
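For reference, the SPH cubic spline kernel used to replace the crude MPS weight has a standard piecewise form with compact support of radius 2h. A minimal sketch with the common 2D normalization (the function name is ours):

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """SPH cubic spline kernel W(r, h) in 2D, normalization 10/(7*pi*h^2).
    Compactly supported: W = 0 for r >= 2h; continuous at q = 1 and q = 2."""
    q = np.asarray(r, dtype=float) / h
    sigma = 10.0 / (7.0 * np.pi * h * h)
    W = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,     # inner branch
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))          # outer branch
    return sigma * W
```

The kernel is monotonically decreasing in r and vanishes smoothly at the support boundary, which is what gives the improved consistency of the discretized derivatives.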
NASA Astrophysics Data System (ADS)
Bates, Jefferson; Laricchia, Savio; Ruzsinszky, Adrienn
The Random Phase Approximation (RPA) is quickly becoming a standard method beyond semi-local Density Functional Theory that naturally incorporates weak interactions and eliminates self-interaction error. RPA is not perfect, however, and suffers from self-correlation error as well as an incorrect description of short-ranged correlation typically leading to underbinding. To improve upon RPA we introduce a short-ranged, exchange-like kernel that is one-electron self-correlation free for one and two electron systems in the high-density limit. By tuning the one free parameter in our model to recover an exact limit of the homogeneous electron gas correlation energy we obtain a non-local, energy-optimized kernel that reduces the errors of RPA for both homogeneous and inhomogeneous solids. To reduce the computational cost of the standard kernel-corrected RPA, we also implement RPA renormalized perturbation theory for extended systems, and demonstrate its capability to describe the dominant correlation effects with a low-order expansion in both metallic and non-metallic systems. Furthermore we stress that for norm-conserving implementations the accuracy of RPA and beyond RPA structural properties compared to experiment is inherently limited by the choice of pseudopotential. Current affiliation: King's College London.
Kar, Arindam; Bhattacharjee, Debotosh; Basu, Dipak Kumar; Nasipuri, Mita; Kundu, Mahantapas
2012-01-01
In this paper a nonlinear Gabor Wavelet Transform (GWT) discriminant feature extraction approach for enhanced face recognition is proposed. Firstly, the low-energized blocks from Gabor wavelet transformed images are extracted. Secondly, the nonlinear discriminating features are analyzed and extracted from the selected low-energized blocks by the generalized Kernel Discriminative Common Vector (KDCV) method. The KDCV method is extended to include the cosine kernel function in the discriminating method. The KDCV with the cosine kernels is then applied to the extracted low-energized discriminating feature vectors to obtain the real component of a complex quantity for face recognition. In order to derive positive kernel discriminative vectors, we apply only those kernel discriminative eigenvectors that are associated with nonzero eigenvalues. The feasibility of the low-energized Gabor-block-based generalized KDCV method with cosine kernel function models has been successfully tested for classification using the L1 and L2 distance measures and the cosine similarity measure on both frontal and pose-angled face recognition. Experimental results on the FRAV2D and the FERET database demonstrate the effectiveness of this new approach. PMID:23365559
Probing the physical determinants of thermal expansion of folded proteins.
Dellarole, Mariano; Kobayashi, Kei; Rouget, Jean-Baptiste; Caro, José Alfredo; Roche, Julien; Islam, Mohammad M; Garcia-Moreno E, Bertrand; Kuroda, Yutaka; Royer, Catherine A
2013-10-24
The magnitude and sign of the volume change upon protein unfolding are strongly dependent on temperature. This temperature dependence reflects differences in the thermal expansivity of the folded and unfolded states. The factors that determine protein molar expansivities and the large differences in thermal expansivity for proteins of similar molar volume are not well understood. Model compound studies have suggested that a major contribution is made by differences in the molar volume of water molecules as they transfer from the protein surface to the bulk upon heating. The expansion of internal solvent-excluded voids upon heating is another possible contributing factor. Here, the contribution from hydration density to the molar thermal expansivity of a protein was examined by comparing bovine pancreatic trypsin inhibitor and variants with alanine substitutions at or near the protein-water interface. Variants of two of these proteins with an additional mutation that unfolded them under native conditions were also examined. A modest decrease in thermal expansivity was observed in both the folded and unfolded states for the alanine variants compared with the parent protein, revealing that large changes can be made to the external polarity of a protein without causing large ensuing changes in thermal expansivity. This modest effect is not surprising, given the small molar volume of the alanine residue. Contributions of the expansion of the internal void volume were probed by measuring the thermal expansion for cavity-containing variants of a highly stable form of staphylococcal nuclease. Significantly larger (2-3-fold) molar expansivities were found for these cavity-containing proteins relative to the reference protein. Taken together, these results suggest that a key determinant of the thermal expansivities of folded proteins lies in the expansion of internal solvent-excluded voids. PMID:23646824
Volcano clustering determination: Bivariate Gauss vs. Fisher kernels
NASA Astrophysics Data System (ADS)
Cañón-Tapia, Edgardo
2013-05-01
Underlying many studies of volcano clustering is the implicit assumption that vent distribution can be studied by using kernels originally devised for distributions on plane surfaces. Nevertheless, an important change in topology in the volcanic context is related to the distortion that is introduced when attempting to represent features found on the surface of a sphere that are being projected onto a plane. This work explores the extent to which different topologies of the kernel used to study the spatial distribution of vents can introduce significant changes in the obtained density functions. To this end, a planar (Gauss) and a spherical (Fisher) kernel are mutually compared. The role of the smoothing factor in these two kernels is also explored in some detail. The results indicate that the topology of the kernel is not extremely influential, and that either type of kernel can be used to characterize a planar or a spherical distribution with exactly the same detail (provided that a suitable smoothing factor is selected in each case). It is also shown that there is a limitation on the resolution of the Fisher kernel relative to the typical separation between data that can be accurately described, because data sets with separations lower than 500 km are considered as a single cluster using this method. In contrast, the Gauss kernel can provide adequate resolutions for vent distributions at a wider range of separations. In addition, this study also shows that the numerical value of the smoothing factor (or bandwidth) of both the Gauss and Fisher kernels has no unique or direct relationship with the relevant separation among data. In order to establish the relevant distance, it is necessary to take into consideration the value of the respective smoothing factor together with a level of statistical significance at which the contributions to the probability density function will be analyzed. Based on such reference level, it is possible to create a hierarchy of
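The Fisher (von Mises-Fisher) kernel referred to here is the spherical analogue of the Gaussian, with the concentration parameter κ playing the role of an inverse smoothing factor. A sketch of a Fisher-kernel density estimate on the sphere (the function name and longitude/latitude input convention are ours):

```python
import numpy as np

def fisher_kde(data_lonlat, eval_lonlat, kappa):
    """Kernel density estimate on the unit sphere with the Fisher kernel
    f(x) = C(kappa) * exp(kappa * mu . x). Larger kappa = sharper kernel
    (less smoothing). Inputs are (lon, lat) pairs in degrees."""
    def to_xyz(lonlat):
        lon, lat = np.radians(np.asarray(lonlat, dtype=float)).T
        return np.column_stack([np.cos(lat) * np.cos(lon),
                                np.cos(lat) * np.sin(lon),
                                np.sin(lat)])
    X, E = to_xyz(data_lonlat), to_xyz(eval_lonlat)
    C = kappa / (4.0 * np.pi * np.sinh(kappa))   # Fisher normalizing constant
    # average the kernel centered on each data point, evaluated at E
    return C * np.exp(kappa * (E @ X.T)).mean(axis=1)
```

Because the kernel depends only on the dot product of unit vectors, it is free of the projection distortion that affects a planar Gauss kernel applied to spherical data.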
ERIC Educational Resources Information Center
Ayoub, Ayoub B.
2006-01-01
In this article, the author takes up the special trinomial (1 + x + x^2)^n and shows that the coefficients of its expansion are entries of a Pascal-like triangle. He also shows how to calculate these entries recursively and explicitly. This article could be used in the classroom for enrichment. (Contains 1 table.)
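The Pascal-like recursion described can be sketched directly: each entry of row n is the sum of the three entries directly above it in row n-1 (the function name is ours):

```python
def trinomial_triangle(n):
    """Rows 0..n of the Pascal-like triangle: row k holds the coefficients
    of (1 + x + x^2)^k. Each entry is the sum of the three entries above it."""
    rows = [[1]]
    for k in range(1, n + 1):
        prev = [0, 0] + rows[-1] + [0, 0]      # pad so every entry has 3 parents
        rows.append([prev[i] + prev[i + 1] + prev[i + 2]
                     for i in range(2 * k + 1)])
    return rows
```

For example, row 2 is [1, 2, 3, 2, 1], matching (1 + x + x^2)^2 = 1 + 2x + 3x^2 + 2x^3 + x^4, and each row sums to 3^n (set x = 1).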
Guzek, J.C.; Lujan, R.A.
1984-01-01
Disclosed is a cooler for television cameras and other temperature sensitive equipment. The cooler uses compressed gas which is accelerated to a high velocity by passing it through flow passageways having nozzle portions which expand the gas. This acceleration and expansion causes the gas to undergo a decrease in temperature thereby cooling the cooler body and adjacent temperature sensitive equipment.
NASA Technical Reports Server (NTRS)
1985-01-01
Under an Egyptian government contract, PADCO studies urban growth in the Nile Area. They were assisted by LANDSAT survey maps and measurements provided by TAC. TAC had classified the raw LANDSAT data and processed it into various categories to detail urban expansion. PADCO crews spot checked the results, and correlations were established.
Physics suggests that the interplay of momentum, continuity, and geometry in outward radial flow must produce density and concomitant pressure reductions. In other words, this flow is intrinsically auto-expansive. It has been proposed that this process is the key to understanding...
For the Long Island, New Jersey, and southern New England region, one facet of marsh drowning as a result of accelerated sea level rise is the expansion of salt marsh ponds and pannes. Over the past century, marsh ponds and pannes have formed and expanded in areas of poor drainag...
Gabor-based kernel PCA with fractional power polynomial models for face recognition.
Liu, Chengjun
2004-05-01
This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power
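The construction described, a fractional power polynomial "kernel" whose Gram matrix need not be positive semidefinite and from which only positive-eigenvalue directions are retained, can be sketched as follows (a simplified illustration on raw feature vectors rather than the paper's Gabor pipeline; the function name and defaults are ours):

```python
import numpy as np

def kernel_pca_frac(X, d=0.8, n_comp=2):
    """Kernel PCA with a fractional power polynomial k(x, y) = sign(x.y)|x.y|^d.
    Since this need not define a PSD Gram matrix, only eigenvectors with
    positive eigenvalues are kept, as in the abstract."""
    G = X @ X.T
    K = np.sign(G) * np.abs(G) ** d          # fractional power "kernel" matrix
    n = len(K)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                           # center in feature space
    w, V = np.linalg.eigh(Kc)                # eigenvalues in ascending order
    keep = w > 1e-10                         # discard non-positive eigenvalues
    w = w[keep][::-1][:n_comp]
    V = V[:, keep][:, ::-1][:, :n_comp]
    return V * np.sqrt(w)                    # projections of the training points
```

The same positive-eigenvalue filtering is what makes sigmoid-like, non-Mercer kernels usable in practice, as the abstract notes.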
Local coding based matching kernel method for image classification.
Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong
2014-01-01
This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method. PMID:25119982
Thermal-to-visible face recognition using multiple kernel learning
NASA Astrophysics Data System (ADS)
Hu, Shuowen; Gurram, Prudhvi; Kwon, Heesung; Chan, Alex L.
2014-06-01
Recognizing faces acquired in the thermal spectrum from a gallery of visible face images is a desired capability for the military and homeland security, especially for nighttime surveillance and intelligence gathering. However, thermal-to-visible face recognition is a highly challenging problem, due to the large modality gap between thermal and visible imaging. In this paper, we propose a thermal-to-visible face recognition approach based on multiple kernel learning (MKL) with support vector machines (SVMs). We first subdivide the face into non-overlapping spatial regions or blocks using a method based on coalitional game theory. For comparison purposes, we also investigate uniform spatial subdivisions. Following this subdivision, histogram of oriented gradients (HOG) features are extracted from each block and utilized to compute a kernel for each region. We apply sparse multiple kernel learning (SMKL), which is a MKL-based approach that learns a set of sparse kernel weights, as well as the decision function of a one-vs-all SVM classifier for each of the subjects in the gallery. We also apply equal kernel weights (non-sparse) and obtain one-vs-all SVM models for the same subjects in the gallery. Only visible images of each subject are used for MKL training, while thermal images are used as probe images during testing. With subdivision generated by game theory, we achieved Rank-1 identification rate of 50.7% for SMKL and 93.6% for equal kernel weighting using a multimodal dataset of 65 subjects. With uniform subdivisions, we achieved a Rank-1 identification rate of 88.3% for SMKL, but 92.7% for equal kernel weighting.
Bounding the heat trace of a Calabi-Yau manifold
NASA Astrophysics Data System (ADS)
Fiset, Marc-Antoine; Walcher, Johannes
2015-09-01
The SCHOK bound states that the number of marginal deformations of certain two-dimensional conformal field theories is bounded linearly from above by the number of relevant operators. In conformal field theories defined via sigma models into Calabi-Yau manifolds, relevant operators can be estimated, in the point-particle approximation, by the low-lying spectrum of the scalar Laplacian on the manifold. In the strict large volume limit, the standard asymptotic expansion of Weyl and Minakshisundaram-Pleijel diverges with the higher-order curvature invariants. We propose that it would be sufficient to find an a priori uniform bound on the trace of the heat kernel for large but finite volume. As a first step in this direction, we then study the heat trace asymptotics, as well as the actual spectrum of the scalar Laplacian, in the vicinity of a conifold singularity. The eigenfunctions can be written in terms of confluent Heun functions, the analysis of which gives evidence that regions of large curvature will not prevent the existence of a bound of this type. This is also in line with general mathematical expectations about spectral continuity for manifolds with conical singularities. A sharper version of our results could, in combination with the SCHOK bound, provide a basis for a global restriction on the dimension of the moduli space of Calabi-Yau manifolds.
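For reference, the Weyl/Minakshisundaram-Pleijel expansion mentioned above is the small-proper-time asymptotics of the heat trace, whose higher coefficients are integrals of curvature invariants (for a Ricci-flat Calabi-Yau metric the scalar curvature term a_1 vanishes, so the divergence is driven by the higher-order invariants):

```latex
\operatorname{Tr} e^{-t\Delta} \;\sim\; \frac{1}{(4\pi t)^{d/2}} \sum_{k \ge 0} t^{k} \int_{M} a_{k}(x)\,\sqrt{g}\,\mathrm{d}^{d}x ,
\qquad t \to 0^{+}, \qquad a_{0} = 1, \quad a_{1} = \tfrac{R}{6}.
```

A uniform bound on the heat trace at finite volume, as proposed in the abstract, would have to control this series beyond its leading Weyl term.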
The Kernel Adaptive Autoregressive-Moving-Average Algorithm.
Li, Kan; Príncipe, José C
2016-02-01
In this paper, we present a novel kernel adaptive recurrent filtering algorithm based on the autoregressive-moving-average (ARMA) model, which is trained with recurrent stochastic gradient descent in the reproducing kernel Hilbert spaces. This kernelized recurrent system, the kernel adaptive ARMA (KAARMA) algorithm, brings together the theories of adaptive signal processing and recurrent neural networks (RNNs), extending the current theory of kernel adaptive filtering (KAF) using the representer theorem to include feedback. Compared with classical feedforward KAF methods, the KAARMA algorithm provides general nonlinear solutions for complex dynamical systems in a state-space representation, with a deferred teacher signal, by propagating forward the hidden states. We demonstrate its capabilities to provide exact solutions with compact structures by solving a set of benchmark nondeterministic polynomial-complete problems involving grammatical inference. Simulation results show that the KAARMA algorithm outperforms equivalent input-space recurrent architectures using first- and second-order RNNs, demonstrating its potential as an effective learning solution for the identification and synthesis of deterministic finite automata. PMID:25935049
Input space versus feature space in kernel-based methods.
Schölkopf, B; Mika, S; Burges, C C; Knirsch, P; Müller, K R; Rätsch, G; Smola, A J
1999-01-01
This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use it to reduce the computational complexity of SV decision functions; second, we combine it with the Kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data. PMID:18252603
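The feature-space metric discussed above can be evaluated without ever constructing the feature map explicitly, since distances expand into kernel evaluations alone. A small sketch for the inhomogeneous polynomial kernels mentioned in the abstract (helper names are ours):

```python
import numpy as np

def poly_kernel(x, y, d=2, c=1.0):
    """Inhomogeneous polynomial kernel k(x, y) = (<x, y> + c)^d."""
    return (np.dot(x, y) + c) ** d

def feature_distance(x, y, k=poly_kernel):
    """Distance between the images of x and y in feature space,
    computed purely from kernel evaluations:
    ||phi(x) - phi(y)||^2 = k(x, x) - 2 k(x, y) + k(y, y)."""
    return np.sqrt(k(x, x) - 2 * k(x, y) + k(y, y))
```

With the plain linear kernel this reduces to the ordinary Euclidean distance, which is a quick sanity check on the identity.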
Phase discontinuity predictions using a machine-learning trained kernel.
Sawaf, Firas; Groves, Roger M
2014-08-20
Phase unwrapping is one of the key steps of interferogram analysis, and its accuracy relies primarily on the correct identification of phase discontinuities. This can be especially challenging for inherently noisy phase fields, such as those produced through shearography and other speckle-based interferometry techniques. We showed in a recent work how a relatively small 10×10 pixel kernel was trained, through machine learning methods, to predict the locations of phase discontinuities within noisy wrapped phase maps. We describe here how this kernel can be applied in a sliding-window fashion, such that each pixel undergoes 100 phase-discontinuity examinations--one test for each of its possible positions relative to its neighbors within the kernel's extent. We explore how the resulting predictions can be accumulated and aggregated through a voting system, and demonstrate that the reliability of this method outperforms processing the image by segmenting it into more conventional 10×10 nonoverlapping tiles. Used in this way, our 10×10 pixel kernel proves large enough for effective processing of full-field interferograms, avoiding the substantially greater computational resources that training a significantly larger kernel would otherwise have required. PMID:25321117
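The sliding-window vote accumulation described above can be sketched directly; the `detect` function below is a simple gradient-threshold stand-in for the trained kernel, which is not reproduced here:

```python
import numpy as np

def detect(window):
    """Stand-in predictor: flag pixels whose horizontal phase jump
    exceeds pi (the trained kernel of the paper would go here)."""
    d = np.abs(np.diff(window, axis=1, prepend=window[:, :1]))
    return (d > np.pi).astype(float)

def sliding_vote(predict, phase, k=10):
    """Apply a k x k discontinuity predictor in sliding-window fashion.

    Each interior pixel is examined k*k times, once per window position
    covering it; the votes are averaged per pixel and thresholded by
    majority, as in the voting scheme of the abstract.
    """
    H, W = phase.shape
    votes = np.zeros((H, W))
    counts = np.zeros((H, W))
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            votes[i:i + k, j:j + k] += predict(phase[i:i + k, j:j + k])
            counts[i:i + k, j:j + k] += 1
    return votes / np.maximum(counts, 1) > 0.5
```

Averaging over overlapping windows is what suppresses isolated false positives relative to the nonoverlapping-tile baseline.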
Multiple kernel sparse representations for supervised and unsupervised learning.
Thiagarajan, Jayaraman J; Ramamurthy, Karthikeyan Natesan; Spanias, Andreas
2014-07-01
In complex visual recognition tasks, it is typical to adopt multiple descriptors, which describe different aspects of the images, for obtaining an improved recognition performance. Descriptors that have diverse forms can be fused into a unified feature space in a principled manner using kernel methods. Sparse models that generalize well to the test data can be learned in the unified kernel space, and appropriate constraints can be incorporated for application in supervised and unsupervised learning. In this paper, we propose to perform sparse coding and dictionary learning in the multiple kernel space, where the weights of the ensemble kernel are tuned based on graph-embedding principles such that class discrimination is maximized. In our proposed algorithm, dictionaries are inferred using multiple levels of 1D subspace clustering in the kernel space, and the sparse codes are obtained using a simple levelwise pursuit scheme. Empirical results for object recognition and image clustering show that our algorithm outperforms existing sparse coding based approaches, and compares favorably to other state-of-the-art methods. PMID:24833593
Bivariate discrete beta Kernel graduation of mortality data.
Mazza, Angelo; Punzo, Antonio
2015-07-01
Various parametric/nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, such as kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidth selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make the simulations realistic, a bivariate dataset based on probabilities of dying recorded for US males is used. The simulations confirm the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors. PMID:25084764
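The univariate idea can be sketched as follows: each discrete age receives weights from a beta density whose mode sits at that age after rescaling to (0, 1), so the kernel adapts its shape near the boundaries of the age range. This is a simplified illustration of the discrete-beta-kernel principle, not the exact estimator of the paper:

```python
import numpy as np
from math import lgamma, exp

def beta_pdf(u, a, b):
    """Beta density evaluated via log-gamma for numerical stability."""
    if u <= 0 or u >= 1:
        return 0.0
    logc = lgamma(a + b) - lgamma(a) - lgamma(b)
    return exp(logc + (a - 1) * np.log(u) + (b - 1) * np.log(1 - u))

def discrete_beta_smooth(ages, rates, h=0.05):
    """Graduate raw mortality rates over a discrete age grid.

    Simplified sketch: age x (rescaled into (0, 1)) gets weights
    proportional to a Beta(a, b) density with mode at x; h plays the
    role of the bandwidth.
    """
    m = len(ages)
    grid = (np.arange(m) + 0.5) / m         # map ages into (0, 1)
    smoothed = np.empty(m)
    for i, x in enumerate(grid):
        a, b = x / h + 1, (1 - x) / h + 1   # mode of Beta(a, b) is x
        w = np.array([beta_pdf(u, a, b) for u in grid])
        smoothed[i] = np.sum(w * rates) / np.sum(w)
    return smoothed
```

Because the weights are normalized per age, a flat mortality schedule passes through the smoother unchanged, which is a convenient correctness check.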
[Utilizable value of wild economic plant resource--acorn kernel].
He, R; Wang, K; Wang, Y; Xiong, T
2000-04-01
Peking white breeding hens were selected. The true metabolizable energy (TME) method was used to evaluate the available nutritive value of acorn kernel, with maize and rice as controls. The results showed that the contents of gross energy (GE), apparent metabolizable energy (AME), true metabolizable energy (TME) and crude protein (CP) in the acorn kernel were 16.53 MJ/kg, 11.13 MJ/kg, 11.66 MJ/kg and 10.63%, respectively. The apparent availability and true availability of crude protein were 45.55% and 49.83%. The total content of 17 amino acids was 9.23%, with essential and semiessential amino acids accounting for 4.84%. The true availability of amino acids and the content of true available amino acids were 60.85% and 6.09%. The contents of tannin and hydrocyanic acid in acorn kernel were 4.55% and 0.98%. The available nutritive value of acorn kernel is similar to, or slightly lower than, that of maize, and slightly higher than that of rice. Acorn kernel is a wild economic plant resource worth exploiting and utilizing, but it contains relatively high levels of tannin and hydrocyanic acid. PMID:11767593
Expansion tube test time predictions
NASA Technical Reports Server (NTRS)
Gourlay, Christopher M.
1988-01-01
The interaction of an interface between two gases with a strong expansion is investigated, and its effect on flow in an expansion tube is examined. Two mechanisms for the unsteady Pitot-pressure fluctuations found in the test section of an expansion tube are proposed. The first mechanism depends on the Rayleigh-Taylor instability of the driver-test gas interface in the presence of a strong expansion. The second mechanism depends on the reflection of the strong expansion from the interface. Predictions compare favorably with experimental results. The theory is expected to be independent of the absolute values of the initial expansion tube filling pressures.
Kernel Manifold Alignment for Domain Adaptation.
Tuia, Devis; Camps-Valls, Gustau
2016-01-01
The wealth of sensory data coming from different modalities has opened numerous opportunities for data analysis. The data are of increasing volume, complexity and dimensionality, thus calling for new methodological innovations towards multimodal data processing. However, multimodal architectures must rely on models able to adapt to changes in the data distribution. Differences in the density functions can be due to changes in acquisition conditions (pose, illumination), sensor characteristics (number of channels, resolution) or different views (e.g. street level vs. aerial views of the same building). We call these different acquisition modes domains, and refer to the adaptation problem as domain adaptation. In this paper, instead of adapting the trained models themselves, we alternatively focus on finding mappings of the data sources into a common, semantically meaningful, representation domain. This field of manifold alignment extends traditional techniques in statistics such as canonical correlation analysis (CCA) to deal with nonlinear adaptation and possibly non-corresponding data pairs between the domains. We introduce a kernel method for manifold alignment (KEMA) that can match an arbitrary number of data sources without needing corresponding pairs, just a few labeled examples in all domains. KEMA has interesting properties: 1) it generalizes other manifold alignment methods, 2) it can align manifolds of very different complexities, performing a discriminative alignment preserving each manifold's inner structure, 3) it can define a domain-specific metric to cope with multimodal specificities, 4) it can align data spaces of different dimensionality, 5) it is robust to strong nonlinear feature deformations, and 6) it is closed-form invertible, which allows transfer across domains and data synthesis. To the authors' knowledge this is the first method addressing all these important issues at once. We also present a reduced-rank version of KEMA for computational
Bigravity from gradient expansion
NASA Astrophysics Data System (ADS)
Yamashita, Yasuho; Tanaka, Takahiro
2016-05-01
We discuss how the ghost-free bigravity coupled with a single scalar field can be derived from a braneworld setup. We consider DGP two-brane model without radion stabilization. The bulk configuration is solved for given boundary metrics, and it is substituted back into the action to obtain the effective four-dimensional action. In order to obtain the ghost-free bigravity, we consider the gradient expansion in which the brane separation is supposed to be sufficiently small so that two boundary metrics are almost identical. The obtained effective theory is shown to be ghost free as expected, however, the interaction between two gravitons takes the Fierz-Pauli form at the leading order of the gradient expansion, even though we do not use the approximation of linear perturbation. We also find that the radion remains as a scalar field in the four-dimensional effective theory, but its coupling to the metrics is non-trivial.
Thermal Expansion of Vacuum Plasma Sprayed Coatings
NASA Technical Reports Server (NTRS)
Raj, S V.; Palczer, A. R.
2010-01-01
Metallic Cu-8%Cr, Cu-26%Cr, Cu-8%Cr-1%Al, NiAl and NiCrAlY monolithic coatings were fabricated by vacuum plasma spray deposition processes for thermal expansion property measurements between 293 and 1223 K. The corrected thermal expansion, ΔL/L0, varies with the absolute temperature, T, as ΔL/L0 = A(T - 293)^3 + B(T - 293)^2 + C(T - 293) + D, where A, B, C and D are regression constants. Excellent reproducibility was observed for all of the coatings except for data obtained on the Cu-8%Cr and Cu-26%Cr coatings in the first heat-up cycle, which deviated from those determined in the subsequent cycles. This deviation is attributed to the presence of residual stresses developed during the spraying of the coatings, which are relieved after the first heat-up cycle. In the cases of Cu-8%Cr and NiAl, the thermal expansion data were observed to be reproducible for three specimens. The linear expansion data for Cu-8%Cr and Cu-26%Cr agree extremely well with rule of mixture (ROM) predictions. Comparison of the data for the Cu-8%Cr coating with literature data for Cr and Cu revealed that the thermal expansion behavior of this alloy is determined by the Cu-rich matrix. The data for NiAl and NiCrAlY are in excellent agreement with published results irrespective of composition and the methods used for processing the materials. The implications of these results on coating GRCop-84 copper alloy combustor liners for reusable launch vehicles are discussed.
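The cubic expansion law quoted in the abstract can be fitted by ordinary least squares; a sketch with hypothetical constants (the paper's actual regression values are not reproduced here):

```python
import numpy as np

# dL/L0 = A (T - 293)^3 + B (T - 293)^2 + C (T - 293) + D, fitted to
# synthetic dilatometer data. A, B, C, D below are illustrative
# stand-ins, not values from the paper.
T = np.linspace(293.0, 1223.0, 20)
A, B, C, D = 1e-12, 2e-9, 1.6e-5, 0.0
dL = A * (T - 293) ** 3 + B * (T - 293) ** 2 + C * (T - 293) + D

# np.polyfit returns coefficients highest power first: [A, B, C, D].
coeffs = np.polyfit(T - 293, dL, 3)
```

Fitting in the shifted variable T - 293 keeps the regression consistent with the model's reference temperature of 293 K.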
China petrochemical expansion progressing
Not Available
1991-08-05
This paper reports on China's petrochemical expansion surge which is picking up speed. A worldscale petrochemical complex is emerging at Shanghai with an eye to expanding China's petrochemical exports, possibly through joint ventures with foreign companies, China Features reported. In other action, Beijing and Henan province have approved plans for a $1.2 billion chemical fibers complex at the proposed Luoyang refinery, China Daily reported.
Yaffe, Mark J.; Steinert, Yvonne
1990-01-01
Postgraduate training for family physicians has become increasingly centred on 2-year residency programs. The expansion of family medicine residency programs in Quebec raises challenges: to uphold program standards, to recruit and develop new teachers, to recognize and respect the needs of students, to balance program objectives with service requirements for house staff, and to adapt to change within family medicine centers and their affiliated hospitals. PMID:21233950
Ultraprecise thermal expansion measurements of seven low expansion materials
NASA Technical Reports Server (NTRS)
Berthold, J. W., III; Jacobs, S. F.
1976-01-01
We summarize a large number of ultraprecise thermal expansion measurements made on seven different low expansivity materials. Expansion coefficients in the -150-300 C temperature range are shown for Owens-Illinois Cer-Vit C-101, Corning ULE 7971 (titanium silicate) and fused silica 7940, Heraeus-Schott Zerodur low-expansion material and Homosil fused silica, Universal Cyclops Invar LR-35, and Simonds Saw and Steel Super Invar.
Digestibility of solvent-treated Jatropha curcas kernel by broiler chickens in Senegal.
Nesseim, Thierry Daniel Tamsir; Dieng, Abdoulaye; Mergeai, Guy; Ndiaye, Saliou; Hornick, Jean-Luc
2015-12-01
Jatropha curcas is a drought-resistant shrub belonging to the Euphorbiaceae family. The kernel contains approximately 60 % lipid in dry matter, and the meal obtained after oil extraction could be an exceptional source of protein for family poultry farming, in the absence of curcin and, especially, of the partially lipophilic diterpene derivatives known as phorbol esters. The nutrient digestibility of J. curcas kernel meal (JKM), obtained after partial physicochemical deoiling, was thus evaluated in broiler chickens. Twenty broiler chickens, 6 weeks old, were maintained in individual metabolic cages and divided into four groups of five animals, according to a 4 × 4 Latin square design where deoiled JKM was incorporated into ground corn at 0, 4, 8, and 12 % levels (diets 0, 4, 8, and 12 J), allowing measurement of nutrient digestibility by the differential method. The dry matter (DM) and organic matter (OM) digestibility of the diets was affected to a low extent by JKM (85 and 86 % in 0 J and 81 % in 12 J, respectively), such that the DM and OM digestibility of JKM was estimated to be close to 50 %. The ether extract (EE) digestibility of JKM remained high, at about 90 %, while crude protein (CP) and crude fiber (CF) digestibility were largely impacted by JKM, with values close to 40 % at the highest levels of incorporation. J. curcas kernel thus shows variable nutrient digestibility and has adverse effects on the CP and CF digestibility of the diet. The effects of an additional heat or biological treatment on JKM remain to be assessed. PMID:26255184
Improved Online Support Vector Machines Spam Filtering Using String Kernels
NASA Astrophysics Data System (ADS)
Amayri, Ola; Bouguila, Nizar
A major bottleneck in electronic communications is the enormous dissemination of spam emails. Developing suitable filters that can adequately capture those emails and achieve a high performance rate has become a main concern. Support vector machines (SVMs) have made a large contribution to the development of spam email filtering. Based on SVMs, the crucial problems in email classification are the feature mapping of input emails and the choice of the kernels. In this paper, we present a thorough investigation of several distance-based kernels, propose the use of string kernels, and demonstrate their efficiency in blocking spam emails. We detail feature mapping variants in text classification (TC) that yield improved performance for standard SVMs in the filtering task. Furthermore, to cope with real-time scenarios we propose an online active framework for spam filtering.
Recurrent kernel machines: computing with infinite echo state networks.
Hermans, Michiel; Schrauwen, Benjamin
2012-01-01
Echo state networks (ESNs) are large, random recurrent neural networks with a single trained linear readout layer. Despite the untrained nature of the recurrent weights, they are capable of performing universal computations on temporal input data, which makes them interesting for both theoretical research and practical applications. The key to their success lies in the fact that the network computes a broad set of nonlinear, spatiotemporal mappings of the input data, on which linear regression or classification can easily be performed. One could consider the reservoir as a spatiotemporal kernel, in which the mapping to a high-dimensional space is computed explicitly. In this letter, we build on this idea and extend the concept of ESNs to infinite-sized recurrent neural networks, which can be considered recursive kernels that subsequently can be used to create recursive support vector machines. We present the theoretical framework, provide several practical examples of recursive kernels, and apply them to typical temporal tasks. PMID:21851278
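A finite random reservoir makes the starting point concrete; the recursive kernels of the letter arise as the infinite-width limit of networks like the following sketch (all parameter values illustrative):

```python
import numpy as np

def esn_states(u, n_res=100, rho=0.9, seed=0):
    """Run a random tanh reservoir over a 1-D input sequence.

    Returns the state trajectory; training a linear readout on these
    states (e.g. by ridge regression) completes the echo state network.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
    w_in = rng.standard_normal(n_res)
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Ridge readout on a toy memory task: recover u[t-2] from the state.
u = np.sin(np.arange(300) * 0.2)
S = esn_states(u)
X, y = S[10:], np.roll(u, 2)[10:]
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ y)
```

Only the readout weights `w` are trained; the recurrent weights stay random, which is the defining property of reservoir computing that the recursive-kernel view formalizes.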
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562
Kernel weighted joint collaborative representation for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Du, Qian; Li, Wei
2015-05-01
Collaborative representation classifier (CRC) has been applied to hyperspectral image classification, which intends to use all the atoms in a dictionary to represent a testing pixel for label assignment. However, some atoms that are very dissimilar to the testing pixel should not participate in the representation, or their contribution should be very small. The regularized version of CRC imposes a strong penalty to prevent dissimilar atoms from having large representation coefficients. To utilize spatial information, the weighted sum of local spatial neighbors is considered as a joint spatial-spectral feature, which is then used in regularized CRC-based classification. This paper proposes its kernel version to further improve classification accuracy, which can exceed that of the traditional support vector machine with composite kernel and the kernel version of the sparse representation classifier.
A method of smoothed particle hydrodynamics using spheroidal kernels
NASA Technical Reports Server (NTRS)
Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.
1995-01-01
We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
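The spheroidal idea can be sketched with a Gaussian-type kernel carrying independent smoothing lengths along and perpendicular to the deformation axis; production SPH codes normally use compact-support spline kernels, so this is illustrative only:

```python
import numpy as np

def spheroidal_kernel(r_perp, r_par, h_perp, h_par):
    """Gaussian-type SPH kernel with independent smoothing lengths.

    h_par follows the deformation axis while h_perp tracks the
    perpendicular plane, giving the spheroidal (rather than
    spherical) support described in the abstract. The normalization
    is for three dimensions.
    """
    q2 = (r_perp / h_perp) ** 2 + (r_par / h_par) ** 2
    norm = 1.0 / (np.pi ** 1.5 * h_perp ** 2 * h_par)
    return norm * np.exp(-q2)
```

Letting h_par shrink while h_perp stays fixed concentrates resolution along the collapse direction, which is the resolution gain the method targets.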
Kernel approximate Bayesian computation in population genetic inferences.
Nakagome, Shigeki; Fukumizu, Kenji; Mano, Shuhei
2013-12-01
Approximate Bayesian computation (ABC) is a likelihood-free approach for Bayesian inferences based on a rejection algorithm method that applies a tolerance of dissimilarity between summary statistics from observed and simulated data. Although several improvements to the algorithm have been proposed, none of these improvements avoid the following two sources of approximation: 1) lack of sufficient statistics: sampling is not from the true posterior density given data but from an approximate posterior density given summary statistics; and 2) non-zero tolerance: sampling from the posterior density given summary statistics is achieved only in the limit of zero tolerance. The first source of approximation can be improved by adding a summary statistic, but an increase in the number of summary statistics could introduce additional variance caused by the low acceptance rate. Consequently, many researchers have attempted to develop techniques to choose informative summary statistics. The present study evaluated the utility of a kernel-based ABC method [Fukumizu, K., L. Song and A. Gretton (2010): "Kernel Bayes' rule: Bayesian inference with positive definite kernels," arXiv, 1009.5736 and Fukumizu, K., L. Song and A. Gretton (2011): "Kernel Bayes' rule. Advances in Neural Information Processing Systems 24." In: J. Shawe-Taylor and R. S. Zemel and P. Bartlett and F. Pereira and K. Q. Weinberger, (Eds.), pp. 1549-1557., NIPS 24: 1549-1557] for complex problems that demand many summary statistics. Specifically, kernel ABC was applied to population genetic inference. We demonstrate that, in contrast to conventional ABCs, kernel ABC can incorporate a large number of summary statistics while maintaining high performance of the inference. PMID:24150124
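The hard-threshold rejection step that kernel ABC relaxes can be sketched on a toy problem (the normal-mean example and all values below are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_abc(obs_stat, prior_draws, simulate, tol):
    """Keep parameter draws whose simulated summary statistic lands
    within tol of the observed one (plain rejection ABC). Kernel ABC
    replaces this non-zero-tolerance cutoff with kernel weights on
    the statistics, which is what lets it absorb many summary
    statistics without the acceptance rate collapsing.
    """
    kept = [theta for theta in prior_draws
            if abs(simulate(theta) - obs_stat) < tol]
    return np.array(kept)

# Toy problem: infer a normal mean from the sample-mean statistic.
observed = rng.normal(2.0, 1.0, 100)
prior = rng.uniform(-5.0, 5.0, 2000)
posterior = rejection_abc(observed.mean(), prior,
                          lambda t: rng.normal(t, 1.0, 100).mean(), 0.2)
```

Even in this one-statistic case the acceptance rate is only a few percent; adding statistics shrinks it multiplicatively, which motivates the kernel-weighted alternative.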
Broadband Waveform Sensitivity Kernels for Large-Scale Seismic Tomography
NASA Astrophysics Data System (ADS)
Nissen-Meyer, T.; Stähler, S. C.; van Driel, M.; Hosseini, K.; Auer, L.; Sigloch, K.
2015-12-01
Seismic sensitivity kernels, i.e. the basis for mapping misfit functionals to structural parameters in seismic inversions, have received much attention in recent years. Their computation has been conducted via ray-theory based approaches (Dahlen et al., 2000) or fully numerical solutions based on the adjoint-state formulation (e.g. Tromp et al., 2005). The core problem is the exorbitant computational cost due to the large number of source-receiver pairs, each of which requires solutions to the forward problem. This is exacerbated in the high-frequency regime where numerical solutions become prohibitively expensive. We present a methodology to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (abstract ID# 77891, www.axisem.info), and thus on spherically symmetric models. As a consequence of this method's numerical efficiency even in high-frequency regimes, kernels can be computed in a time- and frequency-dependent manner, thus providing the full generic mapping from perturbed waveform to perturbed structure. Such waveform kernels can then be used for a variety of misfit functions and structural parameters and refiltered into bandpasses without recomputing any wavefields. A core component of the kernel method presented here is the mapping from numerical wavefields to inversion meshes. This is achieved by a Monte-Carlo approach, allowing for convergent and controllable accuracy on arbitrarily shaped tetrahedral and hexahedral meshes. We test and validate this accuracy by comparing to reference traveltimes, show the projection onto various locally adaptive inversion meshes and discuss computational efficiency for ongoing tomographic applications in the range of millions of observed body-wave data between periods of 2-30 s.
Single aflatoxin contaminated corn kernel analysis with fluorescence hyperspectral image
NASA Astrophysics Data System (ADS)
Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Ononye, Ambrose; Brown, Robert L.; Cleveland, Thomas E.
2010-04-01
Aflatoxins are toxic secondary metabolites of the fungi Aspergillus flavus and Aspergillus parasiticus, among others. Aflatoxin contaminated corn is toxic to domestic animals when ingested in feed and is a known carcinogen associated with liver and lung cancer in humans. Consequently, aflatoxin levels in food and feed are regulated by the Food and Drug Administration (FDA) in the US, allowing 20 ppb (parts per billion) limits in food and 100 ppb in feed for interstate commerce. Currently, aflatoxin detection and quantification methods are based on analytical tests including thin-layer chromatography (TLC) and high performance liquid chromatography (HPLC). These analytical tests require the destruction of samples, and are costly and time consuming. Thus, the ability to detect aflatoxin in a rapid, nondestructive way is crucial to the grain industry, particularly to the corn industry. Hyperspectral imaging technology offers a non-invasive approach toward screening for food safety inspection and quality control based on a sample's spectral signature. The focus of this paper is to classify aflatoxin contaminated single corn kernels using fluorescence hyperspectral imagery. Field inoculated corn kernels were used in the study. Contaminated and control kernels under long wavelength ultraviolet excitation were imaged using a visible near-infrared (VNIR) hyperspectral camera. The imaged kernels were chemically analyzed to provide reference information for image analysis. This paper describes a procedure to process corn kernels located in different images for statistical training and classification. Two classification algorithms, Maximum Likelihood and Binary Encoding, were used to classify each corn kernel into "control" or "contaminated" through pixel classification. The Binary Encoding approach had a slightly better performance, with accuracies of 87% and 88% when 20 ppb or 100 ppb, respectively, was used as the classification threshold.
Studies on counterstreaming plasma expansion
NASA Technical Reports Server (NTRS)
Singh, N.; Thiemann, H.; Schunk, R. W.
1986-01-01
Recent studies on counterstreaming plasma expansions are summarized. The basic phenomenon of plasma expansion is reviewed, and results from one-dimensional simulations of counterstreaming plasma expansion are discussed. Results from simulations based on an electrostatic particle-in-cell code, in which the dynamics of both the electrons and ions are exactly followed, are discussed. The formation of electrostatic shocks is addressed. Finally, results are presented on the ionospheric plasma expansion along the geomagnetic flux tubes by solving the hydrodynamic equations.
Magnetic expansion of cosmic plasmas
NASA Technical Reports Server (NTRS)
Yang, Wei-Hong
1995-01-01
Plasma expansion is common in many astrophysical phenomena. The understanding of the driving mechanism has usually been focused on the gas pressure that implies conversion of thermal energy into flow kinetic energy. However, 'cool' expansions have been indicated in stellar/solar winds and other expanding processes. Magnetic expansion may be the principal driving mechanism. Magnetic energy in the potential form can be converted into kinetic energy during global expansion of magnetized plasmas.
A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function, and less to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896
An information theoretic approach of designing sparse kernel adaptive filters.
Liu, Weifeng; Park, Il; Principe, José C
2009-12-01
This paper discusses an information theoretic approach to designing sparse kernel adaptive filters. To determine which data are useful to learn and to remove redundant ones, a subjective information measure called surprise is introduced. Surprise captures the amount of information a datum contains that is transferable to a learning system. Based on this concept, we propose a systematic sparsification scheme that can drastically reduce the time and space complexity without harming the performance of kernel adaptive filters. Nonlinear regression, short-term chaotic time-series prediction, and long-term time-series forecasting examples are presented. PMID:19923047
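A rough illustration of surprise-based sparsification (not the authors' exact algorithm): under a Gaussian-process view of the kernel filter, the surprise of a datum can be approximated by its negative log-likelihood given the current dictionary, and only data whose surprise falls between two thresholds are admitted. The kernel bandwidth, regularizer, and thresholds below are all illustrative choices.

```python
import numpy as np

def k(a, b, s=1.0):
    """Gaussian kernel (illustrative bandwidth s)."""
    return np.exp(-(a - b) ** 2 / (2 * s ** 2))

class SurpriseFilter:
    """Sketch: admit a datum only if it is informative.
    High surprise -> abnormal (discard); moderate -> learn;
    low -> redundant (discard)."""
    def __init__(self, reg=0.1, t_low=0.3, t_high=20.0):
        self.reg, self.t_low, self.t_high = reg, t_low, t_high
        self.centers, self.targets = [], []

    def predict(self, x):
        if not self.centers:
            return 0.0, 1.0 + self.reg
        c = np.array(self.centers)
        G = k(c[:, None], c[None, :]) + self.reg * np.eye(len(c))
        G_inv = np.linalg.inv(G)
        kx = k(c, x)
        mean = kx @ G_inv @ np.array(self.targets)
        var = self.reg + 1.0 - kx @ G_inv @ kx    # k(x, x) = 1 for this kernel
        return mean, var

    def update(self, x, y):
        mean, var = self.predict(x)
        # Approximate surprise: negative log-likelihood of (x, y).
        surprise = 0.5 * np.log(2 * np.pi * var) + (y - mean) ** 2 / (2 * var)
        if self.t_low < surprise < self.t_high:   # informative: admit
            self.centers.append(x)
            self.targets.append(y)
        return surprise

sf = SurpriseFilter()
for x in np.tile([0.0, 1.0, 2.0], 10):            # highly redundant stream
    sf.update(float(x), float(np.sin(x)))
print(len(sf.centers), "centers kept out of 30 samples")
```

On this redundant stream the dictionary stops growing after the first pass, while predictions remain close to sin(x).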
Iris Image Blur Detection with Multiple Kernel Learning
NASA Astrophysics Data System (ADS)
Pan, Lili; Xie, Mei; Mao, Ling
In this letter, we analyze the influence of motion and out-of-focus blur on both the frequency spectrum and the cepstrum of an iris image. Based on their characteristics, we define two new discriminative blur features represented by the Energy Spectral Density Distribution (ESDD) and the Singular Cepstrum Histogram (SCH). To merge the two features for blur detection, a merging kernel, a linear combination of two kernels, is proposed for use with a Support Vector Machine. Extensive experiments demonstrate the validity of our method by showing improved blur detection performance on both synthetic and real datasets.
Expansion: A Plan for Success.
ERIC Educational Resources Information Center
Callahan, A.P.
This report provides selling brokers with guidelines for the successful expansion of their operations, outlining a basic method of preparing an expansion plan. Topic headings are: The Pitfalls of Expansion (The Language of Business, Timely Financial Reporting, Regulatory Agencies of Government, Preoccupation with the Facade of Business, A Business Is a…
Thermal expansion of Neapolitan Yellow Tuff
NASA Astrophysics Data System (ADS)
Aversa, S.; Evangelista, A.
1993-10-01
In saturated rocks and soils it is possible to define different coefficients of thermal expansion depending on the drainage conditions. This topic is first examined from the theoretical point of view with regard to an ideal isotropic thermo-elastic porous medium. Some special features of the behaviour of natural soils and rocks during thermal expansion tests are subsequently discussed. An experimental evaluation of some of these coefficients is presented in the second part of the paper. The material investigated is a pyroclastic rock, the so-called Neapolitan Yellow Tuff. The thermal expansion coefficient in drained conditions has been evaluated with the material saturated with water. The pore pressure increase induced by heating has been measured in undrained tests. The temperatures investigated range from room temperature up to 225°C. Different types of apparatus have been used and, when possible, a comparison between the results has been proposed. The results obtained in undrained thermal expansion tests are in agreement with theoretical predictions. This research is part of an on-going study of the complex phenomenon known as Bradyseism, which is occurring in a volcanic area a few kilometers from Naples (Italy). Some considerations on this phenomenon are drawn in the last paragraph of the paper.
Smith, Kevin W; Cain, Fred W; Talbot, Geoff
2004-08-25
Palm kernel stearin and hydrogenated palm kernel stearin can be used to prepare compound chocolate bars or coatings. The objective of this study was to characterize the chemical composition, polymorphism, and melting behavior of the bloom that develops on bars of compound chocolate prepared using these fats. Bars were stored for 1 year at 15, 20, or 25 degrees C. At 15 and 20 degrees C the bloom was enriched in cocoa butter triacylglycerols, with respect to the main fat phase, whereas at 25 degrees C the enrichment was with palm kernel triacylglycerols. The bloom consisted principally of solid fat and melted more sharply than the fat in the chocolate did. Polymorphic transitions from the initial beta' phase to the beta phase accompanied the formation of bloom at all temperatures. PMID:15315397
Chakrabarti, J.; Sajjad Zahir, M.
1985-03-01
We show that the product of local current operators in quantum chromodynamics (QCD), when expanded in terms of condensates such as $\bar{\psi}\psi$, $G^{a}_{\mu\nu}G^{a}_{\mu\nu}$, $\bar{\psi}\Gamma\psi\,\bar{\psi}\Gamma\psi$, $f_{abc}G^{a}_{\mu\nu}G^{b}_{\nu\alpha}G^{c}_{\alpha\mu}$, etc., yields a series in Planck's constant. This, however, provides no hint that the higher terms in such an expansion may be less significant.
Expansible quantum secret sharing network
NASA Astrophysics Data System (ADS)
Sun, Ying; Xu, Sheng-Wei; Chen, Xiu-Bo; Niu, Xin-Xin; Yang, Yi-Xian
2013-08-01
In practical applications, member expansion is a common demand during the development of a secret sharing network. However, there is little consideration or discussion of network expansibility in existing quantum secret sharing schemes. We propose an expansible quantum secret sharing scheme with relatively simple and economical quantum resources and show how to split and reconstruct the quantum secret among an expansible user group in our scheme. Its key trait, requiring no agent's assistance during member expansion, can help to prevent potential menaces of insider cheating. We also discuss the security of this scheme from three aspects.
Immediate versus chronic tissue expansion.
Machida, B K; Liu-Shindo, M; Sasaki, G H; Rice, D H; Chandrasoma, P
1991-03-01
A quantitative comparison of the effects on tissues is performed between chronic tissue expansion, intraoperative expansion, and load cycling in a guinea pig model. Intra-operative expansion, which was developed by Sasaki as a method of immediate tissue expansion for small- to medium-sized defects, and load cycling, which was described by Gibson as a method using intraoperative pull, are compared with chronic tissue expansion on the basis of the following four parameters: amount of skin produced, flap viability, intraoperative tissue pressures, and histological changes. The chronically expanded group, which included booster and nonbooster expansions, produced a 137% increase in surface area, or a 52% increase in flap arc length, whereas intraoperative expansion resulted in a 31% increase in surface area, or a 15% increase in flap arc length. The load-cycled group, however, resulted in an almost negligible amount of skin increase. All three techniques exhibit immediate postexpansion stretchback. Flap viability is not impaired by any of the three techniques, in spite of the elevated pressures observed during expansion. Therefore, intraoperative expansion is effective primarily for limited expansion of small defects, whereas chronic tissue expansion still provides the greatest amount of skin increase when compared with other techniques. PMID:2029132
Working fluids and expansion machines for ORC
NASA Astrophysics Data System (ADS)
Richter, Lukáš; Linhart, Jiří
2016-06-01
This paper discusses the key technical aspects of the organic Rankine-Clausius cycle (ORC), an unconventional technology with great potential for exploiting low-potential heat, geothermal and solar energy, and heat from the burning of biomass. The principle of the ORC has been known since the late 19th century. The development of new organic substances and improvements to expansion devices now allow full commercial exploitation of the ORC. The right choice of organic working substance plays the most important role in the design of an ORC and depends on the specific application. The chosen working substance and the achieved operating parameters affect the selection and construction of the expansion device. For this purpose a screw engine, an inversion of the screw compressor, can be used.
Logarithmic radiative effect of water vapor and spectral kernels
NASA Astrophysics Data System (ADS)
Bani Shahabadi, Maziar; Huang, Yi
2014-05-01
Radiative kernels have become a useful tool in climate analysis. A set of spectral kernels is calculated using the moderate-resolution atmospheric transmission code MODTRAN and applied to diagnosing spectrally decomposed changes in global outgoing longwave radiation (OLR). It is found that the effect of water vapor on the OLR is proportional to the logarithm of its concentration. Spectral analysis discloses that this logarithmic dependency mainly results from the water vapor absorption bands (0-560 cm-1 and 1250-1850 cm-1), while in the window region (800-1250 cm-1) the effect scales more linearly with concentration. The logarithmic and linear effects in the respective spectral regions are validated by calculations with the benchmark line-by-line radiative transfer model LBLRTM. Analysis based on LBLRTM-calculated second-order kernels shows that the nonlinear (logarithmic) effect results from the damping of the OLR sensitivity to layer-wise water vapor perturbation by both intra- and inter-layer effects. Given that different scaling approaches suit different spectral regions, it is advisable to apply the kernels in a hybrid manner when diagnosing the water vapor radiative effect. Applying logarithmic scaling in the water vapor absorption bands, where absorption is strong, and linear scaling in the window region, where absorption is weak, generally constrains the error to within 10% of the overall OLR change for up to eightfold water vapor perturbations.
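The contrast between logarithmic and linear kernel scaling can be reproduced with a toy single-band model in which OLR depends logarithmically on the water-vapour path; the functional form and constants below are illustrative, not MODTRAN or LBLRTM output.

```python
import numpy as np

# Toy absorbing-band model: OLR falls off with log of water-vapour path q.
def olr_band(q):
    return 240.0 - 5.0 * np.log(q)

q0 = 1.0
# Kernels diagnosed from a small (1%) perturbation, as in kernel methods:
dq = 0.01 * q0
k_lin = (olr_band(q0 + dq) - olr_band(q0)) / dq              # per unit q
k_log = (olr_band(q0 * 1.01) - olr_band(q0)) / np.log(1.01)  # per unit ln q

# Apply both kernels to a large (eightfold) perturbation:
factor = 8.0
truth = olr_band(factor * q0) - olr_band(q0)
est_lin = k_lin * (factor - 1.0) * q0
est_log = k_log * np.log(factor)
print(truth, est_lin, est_log)   # linear scaling overshoots; log is near-exact
```

In a weakly absorbing (more nearly linear) window-region model the situation reverses, which is the rationale for the hybrid application of the kernels.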
Multiobjective optimization for model selection in kernel methods in regression.
You, Di; Benitez-Quiroz, Carlos Fabian; Martinez, Aleix M
2014-10-01
Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is given by the bias-versus-variance tradeoff. If the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large. If the model is fixed (i.e., low complexity), the bias will be large. The second problem is to define an approach for selecting the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which measures the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a tradeoff between the bias and the variance of the learned function. That is, the goal is to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition, and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors as compared with methods in the state of the art. PMID:25291740
Wheat kernel black point and fumonisin contamination by Fusarium proliferatum
Technology Transfer Automated Retrieval System (TEKTRAN)
Fusarium proliferatum is a major cause of maize ear rot and fumonisin contamination and also can cause wheat kernel black point disease. The primary objective of this study was to characterize nine F. proliferatum strains from wheat from Nepal for ability to cause black point and fumonisin contamin...
Notes on a storage manager for the Clouds kernel
NASA Technical Reports Server (NTRS)
Pitts, David V.; Spafford, Eugene H.
1986-01-01
The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.
Microwave moisture meter for in-shell peanut kernels
Technology Transfer Automated Retrieval System (TEKTRAN)
. A microwave moisture meter built with off-the-shelf components was developed, calibrated and tested in the laboratory and in the field for nondestructive and instantaneous in-shell peanut kernel moisture content determination from dielectric measurements on unshelled peanut pod samples. The meter ...
Matrix kernels for MEG and EEG source localization and imaging
Mosher, J.C.; Lewis, P.S.; Leahy, R.M.
1994-12-31
The most widely used model for electroencephalography (EEG) and magnetoencephalography (MEG) assumes a quasi-static approximation of Maxwell's equations and a piecewise homogeneous conductor model. Both models contain an incremental field element that linearly relates an incremental source element (current dipole) to the field or voltage at a distant point. The explicit form of the field element is dependent on the head modeling assumptions and sensor configuration. Proper characterization of this incremental element is crucial to the inverse problem. The field element can be partitioned into the product of a vector dependent on sensor characteristics and a matrix kernel dependent only on head modeling assumptions. We present here the matrix kernels for the general boundary element model (BEM) and for MEG spherical models. We show how these kernels are easily interchanged in a linear algebraic framework that includes sensor specifics such as orientation and gradiometer configuration. We then describe how this kernel is easily applied to "gain" or "transfer" matrices used in multiple dipole and source imaging models.
Classification of oat and groat kernels using NIR hyperspectral imaging.
Serranti, Silvia; Cesare, Daniela; Marini, Federico; Bonifazi, Giuseppe
2013-01-15
An innovative procedure to classify oat and groat kernels based on coupling hyperspectral imaging (HSI) in the near infrared (NIR) range (1006-1650 nm) and chemometrics was designed, developed and validated. According to market requirements, the amount of groat, that is, the hull-less oat kernels, is one of the most important quality characteristics of oats. Hyperspectral images of oat and groat samples were acquired using a NIR spectral camera (Specim, Finland), and the resulting data hypercubes were analyzed by applying Principal Component Analysis (PCA) for exploratory purposes and Partial Least Squares-Discriminant Analysis (PLS-DA) to build the classification models to discriminate the two kernel typologies. Results showed that it is possible to accurately recognize oat and groat single kernels by HSI (prediction accuracy was almost 100%). The study also demonstrated that good classification results could be obtained using only three wavelengths (1132, 1195 and 1608 nm), selected by means of a bootstrap-VIP procedure, making it possible to speed up the classification processing for industrial applications. The developed objective and non-destructive method based on HSI can be utilized for quality control purposes and/or for the definition of innovative sorting logics for oat grains. PMID:23200388
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 7 2013-01-01 2013-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND STANDARDS FOR CERTAIN...
Estimating Filtering Errors Using the Peano Kernel Theorem
Jerome Blair
2009-02-20
The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
Stereotype Measurement and the "Kernel of Truth" Hypothesis.
ERIC Educational Resources Information Center
Gordon, Randall A.
1989-01-01
Describes a stereotype measurement suitable for classroom demonstration. Illustrates C. McCauley and C. L. Stitt's diagnostic ratio measure and examines the validity of the "kernel of truth" hypothesis. Uses this as a starting point for class discussion. Reports results and gives suggestions for discussion of related concepts. (Author/NL)
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 7 2013-01-01 2013-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND STANDARDS FOR CERTAIN...
7 CFR 981.61 - Redetermination of kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...
7 CFR 981.61 - Redetermination of kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...
7 CFR 981.61 - Redetermination of kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...
7 CFR 981.61 - Redetermination of kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...
7 CFR 981.61 - Redetermination of kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...
Music emotion detection using hierarchical sparse kernel machines.
Chin, Yu-Hao; Lin, Chang-Hong; Siahaan, Ernestasia; Wang, Jia-Ching
2014-01-01
For music emotion detection, this paper presents a music emotion verification system based on hierarchical sparse kernel machines. With the proposed system, we intend to verify whether or not a music clip possesses the happiness emotion. There are two levels in the hierarchical sparse kernel machines. In the first level, a set of acoustical features is extracted, and principal component analysis (PCA) is implemented to reduce the dimension. The acoustical features are utilized to generate the first-level decision vector, which is a vector with each element being the significant value of an emotion. The significant values of eight main emotional classes are utilized in this paper. To calculate the significant value of an emotion, we construct its 2-class SVM with the calm emotion as the global (non-target) side of the SVM. The probability distributions of the adopted acoustical features are calculated, and the probability product kernel is applied in the first-level SVMs to obtain the first-level decision vector feature. In the second level of the hierarchical system, we construct a 2-class relevance vector machine (RVM) with happiness as the target side and other emotions as the background side of the RVM. The first-level decision vector is used as the feature with a conventional radial basis function kernel. The happiness verification threshold is built on the probability value. In the experimental results, the detection error tradeoff (DET) curve shows that the proposed system performs well in verifying whether a music clip reveals the happiness emotion. PMID:24729748
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Kernel color classification. 51.1403 Section 51.1403 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Kernel color classification. 51.1403 Section 51.1403 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Kernel color classification. 51.1403 Section 51.1403 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Kernel color classification. 51.1403 Section 51.1403 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE...
Online multiple kernel similarity learning for visual search.
Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin
2014-03-01
Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in content-based image retrieval (CBIR). Despite their successes, most existing methods on distance metric learning are limited in two aspects. First, they usually assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multimodal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel similarity learning framework for learning kernel-based proximity functions which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel online multiple kernel similarity (OMKS) learning method which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets in which encouraging results show that OMKS outperforms the state-of-the-art techniques significantly. PMID:24457509
High-Speed Tracking with Kernelized Correlation Filters.
Henriques, João F; Caseiro, Rui; Martins, Pedro; Batista, Jorge
2015-03-01
The core component of most modern trackers is a discriminative classifier, tasked with distinguishing between the target and the surrounding environment. To cope with natural image changes, this classifier is typically trained with translated and scaled sample patches. Such sets of samples are riddled with redundancies: any overlapping pixels are constrained to be the same. Based on this simple observation, we propose an analytic model for datasets of thousands of translated patches. By showing that the resulting data matrix is circulant, we can diagonalize it with the discrete Fourier transform, reducing both storage and computation by several orders of magnitude. Interestingly, for linear regression our formulation is equivalent to a correlation filter, used by some of the fastest competitive trackers. For kernel regression, however, we derive a new kernelized correlation filter (KCF) that, unlike other kernel algorithms, has the exact same complexity as its linear counterpart. Building on it, we also propose a fast multi-channel extension of linear correlation filters, via a linear kernel, which we call the dual correlation filter (DCF). Both KCF and DCF outperform top-ranking trackers such as Struck or TLD on a benchmark of 50 videos, despite running at hundreds of frames per second and being implemented in a few lines of code (Algorithm 1). To encourage further developments, our tracking framework was made open-source. PMID:26353263
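The circulant/DFT observation at the heart of KCF can be checked directly for the linear-kernel case. The sketch below uses a 1-D signal rather than 2-D image patches (an illustrative simplification): ridge regression over all cyclic shifts of a base sample, solved element-wise in the Fourier domain, matches the explicit matrix solution.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 64, 0.1
x = rng.normal(size=n)                     # base sample; training set = its cyclic shifts
y = rng.normal(size=n)                     # one regression target per shift

# Direct ridge regression over the explicit data matrix of all shifts: O(n^3).
X = np.stack([np.roll(x, i) for i in range(n)])
w_direct = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# Circulant shortcut: X diagonalises under the DFT, so the same weights fall
# out of element-wise operations in the Fourier domain, O(n log n).
xf, yf = np.fft.fft(x), np.fft.fft(y)
w_fft = np.real(np.fft.ifft(xf * yf / (xf * np.conj(xf) + lam)))

print(np.allclose(w_direct, w_fft))        # the two solutions coincide
```

The kernelized case replaces the per-frequency spectrum of x with the spectrum of a kernel autocorrelation vector, keeping the same O(n log n) cost.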
Chebyshev moment problems: Maximum entropy and kernel polynomial methods
Silver, R.N.; Roeder, H.; Voter, A.F.; Kress, J.D.
1995-12-31
Two Chebyshev recursion methods are presented for calculations with very large sparse Hamiltonians: the kernel polynomial method (KPM) and the maximum entropy method (MEM). They are applicable to physical properties involving large numbers of eigenstates, such as densities of states, spectral functions, thermodynamics, total energies for Monte Carlo simulations, and forces for tight-binding molecular dynamics. This paper emphasizes efficient algorithms.
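A minimal KPM sketch for a density-of-states estimate, using the standard ingredients of Chebyshev moments, stochastic trace estimation, and Jackson damping; the matrix, probe count, and moment count below are illustrative (the Hamiltonian is kept small and dense for checking, whereas KPM's point is that only matrix-vector products are needed, so H may be huge and sparse).

```python
import numpy as np

rng = np.random.default_rng(0)
n, M, R = 100, 150, 10                     # matrix size, moments, random probes
A = rng.normal(size=(n, n))
H = (A + A.T) / 2                          # toy symmetric Hamiltonian
Hs = H / (np.linalg.norm(H, 2) * 1.05)     # rescale spectrum into (-1, 1)

mu = np.zeros(M)
for _ in range(R):                         # stochastic estimate of Tr T_m(Hs)
    v = rng.choice([-1.0, 1.0], size=n)
    t0, t1 = v, Hs @ v                     # T_0 v and T_1 v
    mu[0] += v @ t0
    mu[1] += v @ t1
    for m in range(2, M):
        t0, t1 = t1, 2 * (Hs @ t1) - t0    # Chebyshev recursion T_m = 2xT_{m-1} - T_{m-2}
        mu[m] += v @ t1
mu /= R * n

# Jackson damping factors suppress Gibbs oscillations.
ms = np.arange(M)
g = ((M - ms + 1) * np.cos(np.pi * ms / (M + 1))
     + np.sin(np.pi * ms / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)

x = np.linspace(-0.99, 0.99, 400)
T = np.cos(np.outer(np.arccos(x), ms))     # T_m(x) = cos(m arccos x)
rho = (g[0] * mu[0] + 2 * (T[:, 1:] * (g * mu)[1:]).sum(axis=1)) \
      / (np.pi * np.sqrt(1 - x ** 2))

integral = rho.sum() * (x[1] - x[0])
print(integral)                            # the estimated DOS integrates to ~1
```

For this random symmetric matrix the reconstructed density approximates the Wigner semicircle, and its integral over the rescaled interval is close to one.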
Acetolactate Synthase Activity in Developing Maize (Zea mays L.) Kernels
Muhitch, Michael J.
1988-01-01
Acetolactate synthase (EC 4.1.3.18) activity was examined in maize (Zea mays L.) endosperm and embryos as a function of kernel development. When assayed using unpurified homogenates, embryo acetolactate synthase activity appeared less sensitive to inhibition by leucine + valine and by the imidazolinone herbicide imazapyr than endosperm acetolactate synthase activity. Evidence is presented to show that pyruvate decarboxylase contributes to apparent acetolactate synthase activity in crude embryo extracts and a modification of the acetolactate synthase assay is proposed to correct for the presence of pyruvate decarboxylase in unpurified plant homogenates. Endosperm acetolactate synthase activity increased rapidly during early kernel development, reaching a maximum of 3 micromoles acetoin per hour per endosperm at 25 days after pollination. In contrast, embryo activity was low in young kernels and steadily increased throughout development to a maximum activity of 0.24 micromole per hour per embryo by 45 days after pollination. The sensitivity of both endosperm and embryo acetolactate synthase activities to feedback inhibition by leucine + valine did not change during kernel development. The results are compared to those found for other enzymes of nitrogen metabolism and discussed with respect to the potential roles of the embryo and endosperm in providing amino acids for storage protein synthesis. PMID:16665871
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken... pass through a round opening 8/64 of an inch (3.2 mm) in diameter....
Metabolite identification through multiple kernel learning on fragmentation trees
Shen, Huibin; Dührkop, Kai; Böcker, Sebastian; Rousu, Juho
2014-01-01
Motivation: Metabolite identification from tandem mass spectrometric data is a key task in metabolomics. Various computational methods have been proposed for the identification of metabolites from tandem mass spectra. Fragmentation tree methods explore the space of possible ways in which the metabolite can fragment, and base the metabolite identification on scoring of these fragmentation trees. Machine learning methods have been used to map mass spectra to molecular fingerprints; predicted fingerprints, in turn, can be used to score candidate molecular structures. Results: Here, we combine fragmentation tree computations with kernel-based machine learning to predict molecular fingerprints and identify molecular structures. We introduce a family of kernels capturing the similarity of fragmentation trees, and combine these kernels using recently proposed multiple kernel learning approaches. Experiments on two large reference datasets show that the new methods significantly improve molecular fingerprint prediction accuracy. These improvements result in better metabolite identification, doubling the number of metabolites ranked at the top position of the candidates list. Contact: huibin.shen@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931979
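The simplest way to combine several kernels, and a common baseline for the learned combinations mentioned above, is a uniform average of normalised Gram matrices. The random low-rank PSD matrices below are merely stand-ins for fragmentation-tree kernels; the normalisation and averaging are the generic part.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(n):
    """Random PSD Gram matrix standing in for one fragmentation-tree kernel."""
    B = rng.normal(size=(n, 5))
    return B @ B.T

def normalise(K):
    """Cosine-normalise so k(x, x) = 1 and no kernel dominates by scale."""
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

kernels = [normalise(random_psd(30)) for _ in range(4)]
K_comb = sum(kernels) / len(kernels)       # uniform multiple-kernel combination

# A convex combination of PSD kernels is PSD, hence itself a valid kernel.
print(np.linalg.eigvalsh(K_comb).min() >= -1e-10)
```

Learned MKL methods replace the uniform weights with weights optimised against the prediction task, but operate on exactly this kind of normalised Gram-matrix stack.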
Classification of Microarray Data Using Kernel Fuzzy Inference System
Kumar Rath, Santanu
2014-01-01
The DNA microarray classification technique has gained popularity in both research and practice. In real data analysis, such as microarray analysis, datasets contain a huge number of insignificant and irrelevant features that obscure useful information. Feature selection retains only the classes with high relevance and the feature sets with high significance, which determine the classification of samples into their respective classes. In this paper, the kernel fuzzy inference system (K-FIS) algorithm is applied to classify microarray data (leukemia) using the t-test as a feature selection method. Kernel functions are used to map original data points into a higher-dimensional (possibly infinite-dimensional) feature space defined by a (usually nonlinear) function ϕ through a mathematical process called the kernel trick. This paper also presents a comparative study of classification using K-FIS along with a support vector machine (SVM) for different sets of features (genes). Performance parameters available in the literature, such as precision, recall, specificity, F-measure, ROC curve, and accuracy, are considered to analyze the efficiency of the classification model. The results show that the K-FIS model obtains results similar to those of the SVM model, an indication that the proposed approach relies on the kernel function.
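The "kernel trick" mentioned above can be made concrete: inner products in the (possibly infinite-dimensional) feature space of an RBF kernel are computed directly from the original data points, without ever forming ϕ explicitly. This is a generic hedged sketch, not the K-FIS algorithm itself; the function names and the gamma value are illustrative assumptions.

```python
import math

# Sketch of the kernel trick: k(x, y) = <phi(x), phi(y)> is evaluated in
# closed form for the RBF kernel, with no explicit feature map.
def rbf_kernel(x, y, gamma=1.0):
    """k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def gram_matrix(points, gamma=1.0):
    """Pairwise kernel evaluations over a sample set."""
    return [[rbf_kernel(p, q, gamma) for q in points] for p in points]

X = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
G = gram_matrix(X, gamma=0.5)
print(G[0][0])  # 1.0 on the diagonal: k(x, x) = exp(0)
```

Any kernel-based classifier (K-FIS, SVM) then works purely with such Gram matrices.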
Low Cost Real-Time Sorting of in Shell Pistachio Nuts from Kernels
Technology Transfer Automated Retrieval System (TEKTRAN)
A high speed sorter for separating pistachio nuts with (in shell) and without (kernels) shells is reported. Testing indicates 95% accuracy in removing kernels from the in shell stream with no false positive results out of 1000 kernels tested. Testing with 1000 each of in shell, shell halves, and ker...
Technology Transfer Automated Retrieval System (TEKTRAN)
An automated NIR system was used over a two-month storage period to detect single wheat kernels that contained live or dead internal rice weevils at various stages of growth. Correct classification of sound kernels and kernels containing live pupae, large larvae, medium-sized larvae, and small larv...
Size distributions of different orders of kernels within the oat spikelet
Technology Transfer Automated Retrieval System (TEKTRAN)
Oat kernel size uniformity is of interest to the oat milling industry because of the importance of kernel size in the dehulling process. Previous studies have indicated that oat kernel size distributions fit a bimodal better than a normal distribution. Here we have demonstrated by spikelet dissectio...
Technology Transfer Automated Retrieval System (TEKTRAN)
The Perten Single Kernel Characterization System (SKCS) is the current reference method to determine single wheat kernel texture. However, the SKCS calibration method is based on bulk samples, and there is no method to determine the measurement error on single kernel hardness. The objective of thi...
Automated Single-Kernel Sorting to Select for Quality Traits in Wheat Breeding Lines
Technology Transfer Automated Retrieval System (TEKTRAN)
An automated single kernel near-infrared system was used to select kernels to enhance the end-use quality of hard red wheat breeder samples. Twenty breeding populations and advanced lines were sorted for hardness index, protein content, and kernel color. To determine if the phenotypic sorting was b...
Genome Mapping of Kernel Characteristics in Hard Red Spring Wheat Breeding Lines
Technology Transfer Automated Retrieval System (TEKTRAN)
Kernel characteristics, particularly kernel weight, kernel size, and grain protein content, are important components of grain yield and quality in wheat. Development of high performing wheat cultivars, with high grain yield and quality, is a major focus in wheat breeding programs worldwide. Here, we...
Increasing accuracy of dispersal kernels in grid-based population models
Slone, D.H.
2011-01-01
Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell integration method, or σ ≤ 0.22 using the cell center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
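The two discretization schemes compared above can be sketched in one dimension (the paper works with two-dimensional circular kernels; this 1-D simplification is an assumption for illustration). Cell integration assigns each cell the Gaussian mass over its extent; the cell-center method samples the density at cell centers and renormalizes. At small σ relative to the grid the two disagree noticeably, which is the error source the paper quantifies.

```python
import math

def norm_cdf(x, sigma):
    """CDF of a zero-mean Gaussian, via the error function."""
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def cell_integrated_kernel(sigma, radius):
    """Gaussian mass integrated over each unit cell [i - 1/2, i + 1/2]."""
    return [norm_cdf(i + 0.5, sigma) - norm_cdf(i - 0.5, sigma)
            for i in range(-radius, radius + 1)]

def cell_center_kernel(sigma, radius):
    """Density sampled at cell centers, renormalized to sum to one."""
    raw = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    total = sum(raw)
    return [v / total for v in raw]

sigma = 0.25                       # small kernel relative to the grid
ki = cell_integrated_kernel(sigma, 3)
kc = cell_center_kernel(sigma, 3)
print(round(ki[3], 4))  # 0.9545: center-cell mass under cell integration
print(round(kc[3], 4))  # 0.9993: the cell-center method over-concentrates
```

Repeated convolution compounds this center-cell discrepancy, which is why the paper's invasion times diverge for small kernels.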
Single-kernel NIR analysis for evaluating wheat samples for fusarium head blight resistance
Technology Transfer Automated Retrieval System (TEKTRAN)
A method to estimate bulk deoxynivalenol (DON) content of wheat grain samples using single kernel DON levels estimated by a single kernel near infrared (SKNIR) system combined with single kernel weights is described. This method estimated bulk DON levels in 90% of 160 grain samples within 6.7 ppm DO...
High-Throughput Sequencing Reveals Single Nucleotide Variants in Longer-Kernel Bread Wheat
Chen, Feng; Zhu, Zibo; Zhou, Xiaobian; Yan, Yan; Dong, Zhongdong; Cui, Dangqun
2016-01-01
The transcriptomes of bread wheat Yunong 201 and its ethyl methanesulfonate derivative Yunong 3114 were obtained by next-generation sequencing technology. Single nucleotide variants (SNVs) in the wheat strains were explored and compared. A total of 5907 and 6287 non-synonymous SNVs were acquired for Yunong 201 and 3114, respectively. A total of 4021 genes with SNVs were obtained. The genes that underwent non-synonymous SNVs were significantly involved in ATP binding, protein phosphorylation, and cellular protein metabolic process. The heat map analysis also indicated that most of these mutant genes were significantly differentially expressed at different developmental stages. The SNVs in these genes possibly contribute to the longer kernel length of Yunong 3114. Our data provide useful information on the wheat transcriptome for future studies on wheat functional genomics. This study could also help in illustrating the gene functions of the non-synonymous SNVs of Yunong 201 and 3114. PMID:27551288
Global Monte Carlo Simulation with High Order Polynomial Expansions
William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin
2007-12-13
The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as “local” piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi’s method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence.
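The functional expansion tally idea above can be sketched for the global Legendre basis: expansion coefficients of a sampled density on [-1, 1] are estimated as sample means of Legendre polynomials evaluated at the random-walk scores. This is a hedged illustration under that assumption only; the "local" hat-function bases and fission-source application of the report are not reproduced.

```python
import random

# Sketch of a global-Legendre functional expansion tally (FET).
def legendre(n, x):
    """P_n(x) via the Bonnet three-term recurrence."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def fet_coefficients(samples, order):
    """c_n = (2n + 1)/2 * (sample mean of P_n), for a density on [-1, 1]."""
    m = len(samples)
    return [(2 * n + 1) / 2.0 * sum(legendre(n, x) for x in samples) / m
            for n in range(order + 1)]

random.seed(0)
samples = [random.uniform(-1.0, 1.0) for _ in range(20000)]  # flat source
coeffs = fet_coefficients(samples, 3)
print(round(coeffs[0], 3))  # 0.5: the flat mode, i.e. the histogram tally
```

For a flat source the lowest-order coefficient is exactly the conventional histogram tally, matching the remark in the abstract; higher coefficients estimate deviations from flatness.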
FFBSKAT: fast family-based sequence kernel association test.
Svishcheva, Gulnara R; Belonogova, Nadezhda M; Axenovich, Tatiana I
2014-01-01
The kernel machine-based regression is an efficient approach to region-based association analysis aimed at identification of rare genetic variants. However, this method is computationally complex. The running time of kernel-based association analysis becomes especially long for samples with genetic (sub)structures, thus increasing the need to develop new and effective methods, algorithms, and software packages. We have developed a new R package called fast family-based sequence kernel association test (FFBSKAT) for analysis of quantitative traits in samples of related individuals. This software implements a score-based variance component test to assess the association of a given set of single nucleotide polymorphisms with a continuous phenotype. We compared the performance of our software with that of two existing software packages for family-based sequence kernel association testing, namely, ASKAT and famSKAT, using the Genetic Analysis Workshop 17 family sample. Results demonstrate that FFBSKAT is several times faster than the other available programs. In addition, the calculations of the three compared software packages were similarly accurate. With respect to the available analysis modes, we combined the advantages of both ASKAT and famSKAT and added new options to empower FFBSKAT users. The FFBSKAT package is fast, user-friendly, and provides an easy-to-use method to perform whole-exome kernel machine-based regression association analysis of quantitative traits in samples of related individuals. The FFBSKAT package, along with its manual, is available for free download at http://mga.bionet.nsc.ru/soft/FFBSKAT/. PMID:24905468
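The score statistic at the heart of sequence kernel association tests can be sketched as Q = r' K r, where r are phenotype residuals and K = G G' is a genotype kernel. This is a hedged, unweighted sketch only: the kinship adjustment, variant weights, and p-value computation that FFBSKAT actually performs are omitted, and the data below are invented for illustration.

```python
# Sketch of an (unweighted, linear-kernel) SKAT-style score statistic.
def skat_q(residuals, genotypes):
    """Q = sum_j (sum_i r_i * g_ij)^2, i.e. r' G G' r for genotype matrix G."""
    n_variants = len(genotypes[0])
    q = 0.0
    for j in range(n_variants):
        s = sum(r * row[j] for r, row in zip(residuals, genotypes))
        q += s * s
    return q

# Four individuals, two rare variants (minor-allele counts 0/1/2)
residuals = [0.5, -0.2, 0.1, -0.4]
genotypes = [[1, 0], [0, 0], [2, 1], [0, 1]]
print(round(skat_q(residuals, genotypes), 2))  # 0.58
```

Large Q indicates that the residuals align with genotype variation in the region; the real test compares Q against a mixture-of-chi-square null distribution.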
Kernel-based least squares policy iteration for reinforcement learning.
Xu, Xin; Hu, Dewen; Lu, Xicheng
2007-07-01
In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge of the dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that address the main difficulties of existing approaches. One is the better convergence and (near) optimality guarantee obtained by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is automatic feature selection using ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantees for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called the acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information on uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating
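The ALD sparsification test mentioned above admits a compact sketch: a sample joins the dictionary only if its feature-space image cannot be approximated by the current dictionary to within a threshold ν, i.e. if δ = k(x, x) − k' K⁻¹ k exceeds ν. This hedged sketch uses a naive Gaussian-elimination solve in place of the incremental inverse updates of the original algorithm, and the states and parameters are invented.

```python
import math

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * (x - y) ** 2)

def solve(A, b):
    """Solve A a = b by Gauss-Jordan elimination (small dense systems only)."""
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]
    for c in range(n):
        pivot = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[pivot] = M[pivot], M[c]
        for r in range(n):
            if r != c and M[c][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ald_dictionary(samples, nu=0.1, gamma=1.0):
    """Grow a dictionary by the approximate-linear-dependency (ALD) test."""
    dictionary = []
    for x in samples:
        if not dictionary:
            dictionary.append(x)
            continue
        K = [[rbf(d1, d2, gamma) for d2 in dictionary] for d1 in dictionary]
        k = [rbf(d, x, gamma) for d in dictionary]
        a = solve(K, k)
        delta = rbf(x, x, gamma) - sum(ai * ki for ai, ki in zip(a, k))
        if delta > nu:               # not well approximated: admit the sample
            dictionary.append(x)
    return dictionary

states = [0.0, 0.05, 1.0, 1.02, 2.5]
print(ald_dictionary(states, nu=0.1))  # near-duplicate states are filtered out
```

Only states that bring genuinely new feature-space directions survive, which is what keeps the KLSTD-Q solution sparse.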
A locally adaptive kernel regression method for facies delineation
NASA Astrophysics Data System (ADS)
Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.
2015-12-01
Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial positions and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method is demonstrated to improve significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curve performance.
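A fixed-bandwidth baseline for this classification setting can be sketched as a kernel-weighted vote over the scattered hard data. This is hedged: it is the isotropic case the paper improves upon, not the locally adaptive steering kernels themselves, and the sample coordinates and facies names are invented.

```python
import math

# Baseline sketch: isotropic Gaussian kernel vote (Nadaraya-Watson style),
# not the paper's anisotropic steering kernels.
def classify(point, samples, labels, bandwidth=1.0):
    """Return the facies label with the largest kernel-weighted vote."""
    votes = {}
    for (x, y), lab in zip(samples, labels):
        d2 = (point[0] - x) ** 2 + (point[1] - y) ** 2
        w = math.exp(-d2 / (2.0 * bandwidth ** 2))
        votes[lab] = votes.get(lab, 0.0) + w
    return max(votes, key=votes.get)

# Scattered hard data: facies "sand" clustered left, "clay" clustered right
samples = [(0.0, 0.0), (1.0, 0.5), (0.5, 1.0), (5.0, 5.0), (6.0, 4.5)]
labels = ["sand", "sand", "sand", "clay", "clay"]
print(classify((0.8, 0.8), samples, labels))  # sand
```

The paper's contribution replaces the scalar bandwidth with a per-datum anisotropic kernel whose principal axes follow the local spatial correlation.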
Pallaver, Carl B.; Morgan, Michael W.
1978-01-01
A cryogenic expansion engine includes intake and exhaust poppet valves each controlled by a cam having adjustable dwell, the valve seats for the valves being threaded inserts in the valve block. Each cam includes a cam base and a ring-shaped cam insert disposed at an exterior corner of the cam base, the cam base and cam insert being generally circular but including an enlarged cam dwell, the circumferential configuration of the cam base and cam dwell being identical, the cam insert being rotatable with respect to the cam base. CONTRACTUAL ORIGIN OF THE INVENTION: The invention described herein was made in the course of, or under, a contract with the UNITED STATES ENERGY RESEARCH AND DEVELOPMENT ADMINISTRATION.
Short distance expansion for fluctuation induced interactions
NASA Astrophysics Data System (ADS)
Emig, Thorsten; Bimonte, Giuseppe
Fluctuation induced interactions become most prominent in close proximity to surfaces. Examples include van der Waals and Casimir forces, heat transfer, and spectral shifts for atoms and molecules. In many situations, the surfaces are curved or structured, which makes the computation of the interaction complicated in general. Here we present a versatile and powerful approach to this problem which is based on a derivative expansion. It applies to distances much smaller than the radii of surface curvature. Explicit results include orientational effects for anisotropic particles, thermal effects, and spectral modifications.
High flux expansion divertor studies in NSTX
Soukhanovskii, V A; Maingi, R; Bell, R E; Gates, D A; Kaita, R; Kugel, H W; LeBlanc, B P; Maqueda, R; Menard, J E; Mueller, D; Paul, S F; Raman, R; Roquemore, A L
2009-06-29
Projections for high-performance H-mode scenarios in spherical torus (ST)-based devices assume low electron collisionality for increased efficiency of the neutral beam current drive. At lower collisionality (lower density), the mitigation techniques based on induced divertor volumetric power and momentum losses may not be capable of reducing heat and material erosion to acceptable levels in a compact ST divertor. Divertor geometry can also be used to reduce high peak heat and particle fluxes by flaring a scrape-off layer (SOL) flux tube at the divertor plate, and by optimizing the angle at which the flux tube intersects the divertor plate, or reduce heat flow to the divertor by increasing the length of the flux tube. The recently proposed advanced divertor concepts [1, 2] take advantage of these geometry effects. In a high triangularity ST plasma configuration, the magnetic flux expansion at the divertor strike point (SP) is inherently high, leading to a reduction of heat and particle fluxes and a facilitated access to the outer SP detachment, as has been demonstrated recently in NSTX [3]. The natural synergy of the highly-shaped high-performance ST plasmas with beneficial divertor properties motivated a further systematic study of the high flux expansion divertor. The National Spherical Torus Experiment (NSTX) is a mid-sized device with the aspect ratio A = 1.3-1.5 [4]. In NSTX, the graphite tile divertor has an open horizontal plate geometry. The divertor magnetic configuration geometry was systematically changed in an experiment by either (1) changing the distance between the lower divertor X-point and the divertor plate (X-point height h_X), or by (2) keeping the X-point height constant and increasing the outer SP radius. An initial analysis of the former experiment is presented below. Since in the divertor the poloidal field B_θ strength is proportional to h_X, the X-point height variation changed the divertor plasma wetted area due to
Aerodynamic heated steam generating apparatus
Kim, K.
1986-08-12
An aerodynamic heated steam generating apparatus is described which consists of: an aerodynamic heat immersion coil steam generator adapted to be located on the leading edge of an airframe of a hypersonic aircraft and being responsive to aerodynamic heating of water by a compression shock airstream to produce steam pressure; an expansion shock air-cooled condenser adapted to be located in the airframe rearward of and operatively coupled to the aerodynamic heat immersion coil steam generator to receive and condense the steam pressure; and an aerodynamic heated steam injector manifold adapted to distribute heated steam into the airstream flowing through an exterior generating channel of an air-breathing, ducted power plant.
Burial Ground Expansion Hydrogeologic Characterization
Gaughan, T.F.
1999-02-26
Sirrine Environmental Consultants provided technical oversight of the installation of eighteen groundwater monitoring wells and six exploratory borings around the location of the Burial Ground Expansion.
Cumulant expansions for atmospheric flows
NASA Astrophysics Data System (ADS)
Ait-Chaalal, Farid; Schneider, Tapio; Meyer, Bettina; Marston, J. B.
2016-02-01
Atmospheric flows are governed by the equations of fluid dynamics. These equations are nonlinear, and consequently the hierarchy of cumulant equations is not closed. But because atmospheric flows are inhomogeneous and anisotropic, the nonlinearity may manifest itself only weakly through interactions of nontrivial mean fields with disturbances such as thermals or eddies. In such situations, truncations of the hierarchy of cumulant equations hold promise as a closure strategy. Here we show how truncations at second order can be used to model and elucidate the dynamics of turbulent atmospheric flows. Two examples are considered. First, we study the growth of a dry convective boundary layer, which is heated from below, leading to turbulent upward energy transport and growth of the boundary layer. We demonstrate that a quasilinear truncation of the equations of motion, in which interactions of disturbances among each other are neglected but interactions with mean fields are taken into account, can capture the growth of the convective boundary layer. However, it does not capture important turbulent transport terms in the turbulence kinetic energy budget. Second, we study the evolution of two-dimensional large-scale waves, which are representative of waves seen in Earth's upper atmosphere. We demonstrate that a cumulant expansion truncated at second order (CE2) can capture the evolution of such waves and their nonlinear interaction with the mean flow in some circumstances, for example, when the wave amplitude is small enough or the planetary rotation rate is large enough. However, CE2 fails to capture the flow evolution when strongly nonlinear eddy-eddy interactions that generate small-scale filaments in surf zones around critical layers become important. Higher-order closures can capture these missing interactions. The results point to new ways in which the dynamics of turbulent boundary layers may be represented in climate models, and they illustrate different classes
NASA Astrophysics Data System (ADS)
Riggs, Lloyd Stephen
In this work the transient currents induced on an arbitrary system of thin linear scatterers by an electromagnetic plane wave are determined by using an electric field integral equation (EFIE) formulation. The transient analysis is carried out using the singularity expansion method (SEM). The general analysis developed here is useful for assessing the vulnerability of military aircraft to a nuclear generated electromagnetic pulse (EMP). It is also useful as a modal synthesis tool in the analysis and design of frequency selective surfaces (FSS). SEM parameters for a variety of thin cylindrical geometries have been computed. Specifically, SEM poles, modes, coupling coefficients, and transient currents are given for the two and three element planar array. Poles and modes for planar arrays with a larger number (as many as eight) of identical equally spaced elements are also considered. SEM pole-mode results are given for identical parallel elements with ends located at the vertices of a regular N-gon. Pole-mode patterns are found for symmetric (and slightly perturbed) single junction N-arm elements and for the five junction Jerusalem cross. The Jerusalem cross element has been used extensively in FSS.
Concentric tubes cold-bonded by drawing and internal expansion
NASA Technical Reports Server (NTRS)
Hymes, L. C.; Stone, C. C.
1971-01-01
Metal tubes bonded together without heat application or brazing materials retain strength at elevated temperatures, and when subjected to constant or cyclic temperature gradients. Combination drawing and expansion process produces residual tangential tensile stress in the outer tube and tangential compressive stress in the inner tube.
Technology Transfer Automated Retrieval System (TEKTRAN)
Gray kernel is an important disease of macadamia that affects the quality of kernels, causing gray discoloration and a permeating, foul odor. Gray kernel symptoms were produced in raw, in-shell kernels of three cultivars of macadamia that were inoculated with strains of Enterobacter cloacae. Koch’...
FRIT characterized hierarchical kernel memory arrangement for multiband palmprint recognition
NASA Astrophysics Data System (ADS)
Kisku, Dakshina R.; Gupta, Phalguni; Sing, Jamuna K.
2015-10-01
In this paper, we present a hierarchical kernel associative memory (H-KAM) based computational model with a Finite Ridgelet Transform (FRIT) representation for multispectral palmprint recognition. To characterize a multispectral palmprint image, the Finite Ridgelet Transform is used to achieve a very compact and distinctive representation of linear singularities, capturing the singularities along lines and edges. The proposed system uses the Finite Ridgelet Transform to represent the multispectral palmprint image, which is then modeled by kernel associative memories. Finally, the recognition scheme is thoroughly tested on the benchmark CASIA multispectral palmprint database. For recognition purposes a Bayesian classifier is used. The experimental results exhibit the robustness of the proposed system under different wavelengths of palm images.
Effective face recognition using bag of features with additive kernels
NASA Astrophysics Data System (ADS)
Yang, Shicai; Bebis, George; Chu, Yongjie; Zhao, Lindu
2016-01-01
In past decades, many techniques have been used to improve face recognition performance. The most common and well-studied approach is to use the whole face image to build a subspace based on dimensionality reduction. Differing from the methods above, we consider face recognition as an image classification problem. The face images of the same person are considered to fall into the same category. Each category and each face image can both be represented by a simple pyramid histogram. Spatial dense scale-invariant feature transform features and the bag of features method are used to build categories and face representations. In an effort to make the method more efficient, a linear support vector machine solver, Pegasos, is used for the classification in the kernel space with additive kernels instead of nonlinear SVMs. Our experimental results demonstrate that the proposed method can achieve very high recognition accuracy on the ORL, YALE, and FERET databases.
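An additive kernel of the kind used above can be sketched with the chi-square kernel on nonnegative (e.g. histogram) features: it sums an independent one-dimensional comparison per dimension, which is the property that lets linear solvers such as Pegasos approximate it efficiently. This is a hedged illustration only; the full bag-of-features pipeline (dense SIFT, vocabulary, pyramid) is not reproduced.

```python
# Sketch of an additive kernel: k(x, y) decomposes into a per-dimension sum.
def chi2_kernel(h1, h2):
    """Chi-square kernel: k(x, y) = sum_i 2 * x_i * y_i / (x_i + y_i)."""
    total = 0.0
    for a, b in zip(h1, h2):
        if a + b > 0:
            total += 2.0 * a * b / (a + b)
    return total

h = [0.2, 0.5, 0.3]                     # a toy normalized histogram
print(round(chi2_kernel(h, h), 6))      # 1.0: equals the mass when h1 == h2
```

Because each dimension contributes independently, an explicit low-dimensional feature map per dimension can approximate the kernel, turning the nonlinear SVM into a linear one.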
Some physical properties of ginkgo nuts and kernels
NASA Astrophysics Data System (ADS)
Ch'ng, P. E.; Abdullah, M. H. R. O.; Mathai, E. J.; Yunus, N. A.
2013-12-01
Some data on the physical properties of ginkgo nuts at a moisture content of 45.53% (±2.07) (wet basis) and of their kernels at 60.13% (±2.00) (wet basis) are presented in this paper. These include estimates of the mean length, width, thickness, geometric mean diameter, sphericity, aspect ratio, unit mass, surface area, volume, true density, bulk density, and porosity. The coefficient of static friction for nuts and kernels was determined by using plywood, glass, rubber, and galvanized steel sheet. The data are essential in the field of food engineering, especially in the design and development of machines and equipment for processing and handling agricultural products.
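Several of the quantities listed above follow from standard formulas on the three axial dimensions and the two densities. The sketch below uses those textbook definitions with made-up inputs; the paper's actual measurements are not reproduced.

```python
# Standard physical-property formulas (hypothetical inputs, not the paper's data).
def geometric_mean_diameter(length, width, thickness):
    """Dg = (L * W * T) ** (1/3), all dimensions in the same unit."""
    return (length * width * thickness) ** (1.0 / 3.0)

def sphericity(length, width, thickness):
    """Sphericity = Dg / L (dimensionless; 1 for a perfect sphere)."""
    return geometric_mean_diameter(length, width, thickness) / length

def porosity(bulk_density, true_density):
    """Porosity (%) = (1 - rho_bulk / rho_true) * 100."""
    return (1.0 - bulk_density / true_density) * 100.0

L, W, T = 22.0, 14.0, 12.0                # hypothetical nut dimensions, mm
print(round(geometric_mean_diameter(L, W, T), 2))
print(round(sphericity(L, W, T), 3))
print(round(porosity(550.0, 1050.0), 1))  # hypothetical densities, kg/m^3
```

The aspect ratio (W/L) and surface area (π·Dg²) are computed analogously from the same inputs.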
Analyzing Sparse Dictionaries for Online Learning With Kernels
NASA Astrophysics Data System (ADS)
Honeine, Paul
2015-12-01
Many signal processing and machine learning methods share essentially the same linear-in-the-parameter model, with as many parameters as available samples, as in kernel-based machines. Sparse approximation is essential in many disciplines, with new challenges emerging in online learning with kernels. To this end, several sparsity measures have been proposed in the literature to quantify sparse dictionaries and to construct relevant ones, the most prolific being the distance, approximation, coherence and Babel measures. In this paper, we analyze sparse dictionaries based on these measures. By conducting an eigenvalue analysis, we show that these sparsity measures share many properties, including the linear independence condition and inducing a well-posed optimization problem. Furthermore, we prove that there exists a quasi-isometry between the parameter (i.e., dual) space and the dictionary's induced feature space.
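Two of the sparsity measures named above admit a short illustration. For a dictionary of unit-norm atoms with precomputed Gram matrix, coherence is the largest off-diagonal entry, and the Babel measure is its cumulative analogue. This is a hedged sketch with an invented Gram matrix, not the paper's eigenvalue analysis.

```python
# Coherence and Babel measures of a unit-norm dictionary via its Gram matrix.
def coherence(gram):
    """mu = max |<d_i, d_j>| over distinct atom pairs."""
    n = len(gram)
    return max(abs(gram[i][j]) for i in range(n) for j in range(n) if i != j)

def babel(gram, p):
    """mu_1(p): max over atoms of the sum of its p largest cross-correlations."""
    n = len(gram)
    best = 0.0
    for i in range(n):
        others = sorted((abs(gram[i][j]) for j in range(n) if j != i),
                        reverse=True)
        best = max(best, sum(others[:p]))
    return best

G = [[1.0, 0.3, 0.1],
     [0.3, 1.0, 0.2],
     [0.1, 0.2, 1.0]]
print(coherence(G))   # 0.3
print(babel(G, 2))    # 0.5 = 0.3 + 0.2, attained at the second atom
```

Note babel(G, 1) always equals the coherence, which is the nesting the paper's comparison of measures relies on.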
Semi-supervised kernel learning based optical image recognition
NASA Astrophysics Data System (ADS)
Li, Jun-Bao; Yang, Zhi-Ming; Yu, Yang; Sun, Zhen
2012-08-01
This paper proposes semi-supervised kernel learning based optical image recognition, called Semi-supervised Graph-based Global and Local Preserving Projection (SGGLPP), which integrates graph construction with the specific dimensionality reduction (DR) process into one unified framework. SGGLPP preserves not only the positive and negative constraints but also the local and global structure of the data in the low dimensional space. In SGGLPP, the intrinsic and cost graphs are constructed using the positive and negative constraints from side-information and the k nearest neighbor criterion from unlabeled samples. Moreover, the kernel trick is applied to extend SGGLPP, yielding KSGGLPP, to improve the performance of nonlinear feature extraction. Experiments are implemented on the UCI database and two real image databases to verify the feasibility and performance of the proposed algorithm.
Born Sensitivity Kernels in Spherical Geometry for Meridional Flows
NASA Astrophysics Data System (ADS)
Jackiewicz, Jason; Boening, Vincent; Roth, Markus; Kholikov, Shukur
2016-05-01
Measuring meridional flows deep in the solar convection zone is challenging because of their small amplitudes compared to other background signals. Typically such inferences are made using a ray theory that is best suited for slowly varying flows. The implementation of finite-frequency Born theory has been shown to be more accurate for modeling flows of complex spatial structure in the near-surface region. Only recently have such sensitivity functions become available in spherical geometry, which is necessary for applications to meridional flows. Here we compare these sensitivity kernels with corresponding ray kernels in a forward and inverse problem using numerical simulations. We show that they are suitable for inverting travel-time measurements and are more sensitive to small-scale variations of deep circulations.
Undersampled dynamic magnetic resonance imaging using kernel principal component analysis.
Wang, Yanhua; Ying, Leslie
2014-01-01
Compressed sensing (CS) is a promising approach to accelerate dynamic magnetic resonance imaging (MRI). Most existing CS methods employ linear sparsifying transforms. The recent developments in non-linear or kernel-based sparse representations have been shown to outperform the linear transforms. In this paper, we present an iterative non-linear CS dynamic MRI reconstruction framework that uses the kernel principal component analysis (KPCA) to exploit the sparseness of the dynamic image sequence in the feature space. Specifically, we apply KPCA to represent the temporal profiles of each spatial location and reconstruct the images through a modified pre-image problem. The underlying optimization algorithm is based on variable splitting and fixed-point iteration method. Simulation results show that the proposed method outperforms conventional CS method in terms of aliasing artifact reduction and kinetic information preservation. PMID:25570262
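The KPCA building block behind the reconstruction above can be sketched in two steps: double-centering a precomputed Gram matrix and extracting its leading eigenvector, here by power iteration. This is hedged: the CS reconstruction, pre-image solve, and variable-splitting optimization of the paper are not reproduced, and the Gram matrix below is invented.

```python
import math

def center_gram(K):
    """Double-center a Gram matrix: K_c = K - 1K/n - K1/n + 1K1/n^2."""
    n = len(K)
    row = [sum(K[i]) / n for i in range(n)]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + tot for j in range(n)]
            for i in range(n)]

def leading_eigenpair(K, iters=200):
    """Power iteration for the top eigenpair of a PSD matrix."""
    n = len(K)
    v = [float(i == 0) for i in range(n)]   # e1: not orthogonal to the top mode
    lam = 0.0
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        if norm == 0.0:
            break
        v = [x / norm for x in w]
        lam = norm                           # converges to the top eigenvalue
    return lam, v

K = [[2.0, 0.5, 0.1],
     [0.5, 2.0, 0.5],
     [0.1, 0.5, 2.0]]
lam, v = leading_eigenpair(center_gram(K))
print(round(lam, 3))  # 1.9: leading variance direction in feature space
```

In KPCA proper, the eigenvector coefficients weight the kernel expansions of the temporal profiles; reconstruction then requires the pre-image step the paper modifies.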
Polynomial Kernels for 3-Leaf Power Graph Modification Problems
NASA Astrophysics Data System (ADS)
Bessy, Stéphane; Paul, Christophe; Perez, Anthony
A graph G = (V,E) is a 3-leaf power iff there exists a tree T whose leaf set is V and such that (u,v) ∈ E iff u and v are at distance at most 3 in T. The 3-leaf power edge modification problems, i.e. edition (also known as CLOSEST 3-LEAF POWER), completion and edge-deletion, are FPT when parameterized by the size of the edge set modification. However, a polynomial kernel was known for none of these three problems. For each of them, we provide a kernel with O(k³) vertices that can be computed in linear time. We thereby answer an open question first mentioned by Dom, Guo, Hüffner and Niedermeier [9].