Bose-Einstein condensation on a manifold with non-negative Ricci curvature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akant, Levent, E-mail: levent.akant@boun.edu.tr; Ertuğrul, Emine, E-mail: emine.ertugrul@boun.edu.tr; Tapramaz, Ferzan, E-mail: waskhez@gmail.com
The Bose-Einstein condensation for an ideal Bose gas and for a dilute weakly interacting Bose gas on a manifold with non-negative Ricci curvature is investigated using the heat kernel and eigenvalue estimates of the Laplace operator. The main focus is on the nonrelativistic gas; however, the special relativistic ideal gas is also discussed. The thermodynamic limit of the heat kernel and eigenvalue estimates is taken, and the results are used to derive bounds for the depletion coefficient. In the case of a weakly interacting gas, the Bogoliubov approximation is employed. The ground state is analyzed using heat kernel methods, and finite size effects on the ground state energy are proposed. The justification of the c-number substitution on a manifold is given.
Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K
2015-05-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
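As an aside for readers implementing this kind of smoothing: the weighted eigenfunction expansion is compact enough to sketch in a few lines. The following is a minimal Python sketch, not the authors' code, under the assumption that Laplace-Beltrami eigenvalues `lam` and eigenfunctions `Psi` (orthonormal in the discrete inner product) have already been computed on the surface mesh:

```python
import numpy as np

def heat_kernel_smooth(f, lam, Psi, t):
    """Heat kernel smoothing as a weighted eigenfunction expansion:
    sum_k exp(-lam_k * t) <f, psi_k> psi_k. Applying this is equivalent
    to diffusing the signal f on the surface for time t."""
    coeffs = Psi.T @ f                        # expansion coefficients <f, psi_k>
    return Psi @ (np.exp(-lam * t) * coeffs)  # heat-kernel-weighted resynthesis
```

Larger diffusion time t gives heavier smoothing; t plays the role of the kernel bandwidth.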
7 CFR 810.206 - Grades and grade requirements for barley.
Code of Federal Regulations, 2010 CFR
2010-01-01
... weight per bushel (pounds) Sound barley (percent) Maximum Limits of— Damaged kernels 1 (percent) Heat damaged kernels (percent) Foreign material (percent) Broken kernels (percent) Thin barley (percent) U.S... or otherwise of distinctly low quality. 1 Includes heat-damaged kernels. Injured-by-frost kernels and...
Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.
2014-01-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
Computational investigation of intense short-wavelength laser interaction with rare gas clusters
NASA Astrophysics Data System (ADS)
Bigaouette, Nicolas
Current Very High Temperature Reactor designs incorporate TRi-structural ISOtropic (TRISO) particle fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel by dropping a cold precursor solution into a column of hot trichloroethylene (TCE). The temperature difference drives the liquid precursor solution to precipitate the metal solution into gel spheres before reaching the bottom of a production column. Over time, gelation byproducts inhibit complete gelation and the TCE must be purified or discarded. The resulting mixed-waste stream is expensive to dispose of or recycle, and changing the forming fluid to a non-hazardous alternative could greatly improve the economics of kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to ten potential replacements. The physical properties of the alternatives were measured as a function of temperature between 25 °C and 80 °C. Calculated terminal velocities and heat transfer rates provided an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane were selected for further testing, and surrogate yttria-stabilized zirconia (YSZ) kernels were produced using these selected fluids. The kernels were characterized for density, geometry, composition, and crystallinity and compared to a control group of kernels produced in silicone oil. Production in 1-bromotetradecane showed positive results, producing dense (93.8% TD) and spherical (1.03 aspect ratio) kernels, but proper gelation did not occur in the other alternative forming fluids. With many of the YSZ kernels not properly gelling within the length of the column, this project further investigated the heat transfer properties of the forming fluids and precursor solution. A sensitivity study revealed that the heat transfer properties of the precursor solution have the strongest impact on gelation time. A COMSOL heat transfer model estimated an effective thermal diffusivity range for the YSZ precursor solution of 1.13 × 10⁻⁸ m²/s to 3.35 × 10⁻⁸ m²/s, which is an order of magnitude smaller than the value used in previous studies. 1-bromotetradecane is recommended for further investigation with the production of uranium-based kernels.
7 CFR 868.210 - Grades and grade requirements for the classes of Rough Rice. (See also § 868.212.)
Code of Federal Regulations, 2013 CFR
2013-01-01
... and heat-damaged kernels Total (singly or combined) (Number in 500 grams) Heat-damaged kernels and objectionable seeds (singly or combined) (Number in 500 grams) Heat-damaged kernels (Number in 500 grams) Red...
7 CFR 868.210 - Grades and grade requirements for the classes of Rough Rice. (See also § 868.212.)
Code of Federal Regulations, 2012 CFR
2012-01-01
... and heat-damaged kernels Total (singly or combined) (Number in 500 grams) Heat-damaged kernels and objectionable seeds (singly or combined) (Number in 500 grams) Heat-damaged kernels (Number in 500 grams) Red...
7 CFR 868.210 - Grades and grade requirements for the classes of Rough Rice. (See also § 868.212.)
Code of Federal Regulations, 2014 CFR
2014-01-01
... and heat-damaged kernels Total (singly or combined) (Number in 500 grams) Heat-damaged kernels and objectionable seeds (singly or combined) (Number in 500 grams) Heat-damaged kernels (Number in 500 grams) Red...
7 CFR 810.802 - Definition of other terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Damaged kernels. Kernels and pieces of grain kernels for which standards have been established under the.... (d) Heat-damaged kernels. Kernels and pieces of grain kernels for which standards have been...
7 CFR 810.602 - Definition of other terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Damaged kernels. Kernels and pieces of flaxseed kernels that are badly ground-damaged, badly weather... instructions. Also, underdeveloped, shriveled, and small pieces of flaxseed kernels removed in properly... recleaning. (c) Heat-damaged kernels. Kernels and pieces of flaxseed kernels that are materially discolored...
Code of Federal Regulations, 2012 CFR
2012-01-01
..., heat damaged, and paddy kernels (singly or combined) Total (number in 500 grams) Heat damaged kernels and objectionable seeds (number in 500 grams) Red rice and damaged kernels (singly or combined...
7 CFR 810.2202 - Definition of other terms.
Code of Federal Regulations, 2014 CFR
2014-01-01
... kernels, foreign material, and shrunken and broken kernels. The sum of these three factors may not exceed... the removal of dockage and shrunken and broken kernels. (g) Heat-damaged kernels. Kernels, pieces of... sample after the removal of dockage and shrunken and broken kernels. (h) Other grains. Barley, corn...
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to an inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel, and partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed to alternately estimate the partial map and recover the latent sharp image. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by an inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
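The partial map itself admits a very small sketch. The thresholding rule below is only one plausible heuristic (the paper's actual detection rule for reliable Fourier entries may differ), and the threshold `tau` and FFT size are hypothetical:

```python
import numpy as np

def partial_map(kernel_est, tau=0.05, size=(256, 256)):
    """Mark Fourier entries of the estimated blur kernel as 'reliable' when
    their magnitude clears a fraction tau of the peak magnitude; a partial
    deconvolution would then trust only the frequencies where mask is True."""
    K = np.fft.fft2(kernel_est, s=size)
    return np.abs(K) > tau * np.abs(K).max()
```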
Adaptive kernel function using line transect sampling
NASA Astrophysics Data System (ADS)
Albadareen, Baker; Ismail, Noriszura
2018-04-01
The estimation of f(0) is crucial in the line transect method, which is used for estimating population abundance in wildlife surveys. The classical kernel estimator of f(0) has a high negative bias. Our study proposes an adaptation of the kernel function which is shown to be more efficient than the usual kernel estimator. A simulation study is conducted to compare the performance of the proposed estimators with the classical kernel estimators.
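For context, the classical kernel estimator of f(0) that the proposed adaptation improves on can be sketched as follows; the Gaussian kernel and the reflection device for the x = 0 boundary are standard textbook choices, and the bandwidth `h` is left to the user:

```python
import numpy as np

def f0_classical(distances, h):
    """Classical kernel estimate of f(0) from perpendicular line-transect
    distances (all >= 0). Reflecting the sample about zero corrects the
    boundary bias at x = 0; the factor 2 restores unit mass on [0, inf)."""
    x = np.concatenate([distances, -distances])
    k = np.exp(-(x / h) ** 2 / 2) / np.sqrt(2 * np.pi)  # Gaussian kernel
    return 2 * k.mean() / h
```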
Heat kernel for the elliptic system of linear elasticity with boundary conditions
NASA Astrophysics Data System (ADS)
Taylor, Justin; Kim, Seick; Brown, Russell
2014-10-01
We consider the elliptic system of linear elasticity with bounded measurable coefficients in a domain where the second Korn inequality holds. We construct the heat kernel of the system subject to Dirichlet, Neumann, or mixed boundary conditions under the assumption that weak solutions of the elliptic system are Hölder continuous in the interior. Moreover, we show that if weak solutions of the mixed problem are Hölder continuous up to the boundary, then the corresponding heat kernel has a Gaussian bound. In particular, if the domain is a two-dimensional Lipschitz domain satisfying a corkscrew or non-tangential accessibility condition on the set where we specify the Dirichlet boundary condition, then we show that the heat kernel has a Gaussian bound. As an application, we construct the Green's function for the elliptic mixed problem in such a domain.
7 CFR 810.2003 - Basis of determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Basis of determination. Each determination of heat-damaged kernels, damaged kernels, material other than... shrunken and broken kernels. Other determinations not specifically provided for under the general...
NASA Astrophysics Data System (ADS)
Christe, Steven; Inglis, A.; Aschwanden, M.; Dennis, B.
2011-05-01
On 2010 October 16, SDO/AIA observed its first flare using automatic exposure control. Coincidentally, this flare also exhibited a large number of interesting features. Firstly, a large ribbon significantly to the solar west of the flare kernel was ignited and was visible in all AIA wavelengths, posing the question of how this energy was deposited and how it relates to the main flare site. A faint blast wave also emanates from the flare kernel, visible in AIA and observed traveling to the solar west at an estimated speed of 1000 km/s. This blast wave is associated with a weak white-light CME observed with STEREO B and a Type II radio burst observed by the Green Bank Solar Radio Burst Spectrometer (GBSRBS). One possibility is that this blast wave is responsible for the heating of the ribbon. However, closer scrutiny reveals that the flare site and the ribbon are in fact connected magnetically via coronal loops which are heated during the main energy release. These loops are distinct from the expected hot, post-flare loops present within the main flare kernel. RHESSI spectra indicate that these loops are heated to approximately 10 MK in the immediate flare aftermath. Using the multi-temperature capabilities of AIA in combination with RHESSI, and by employing the cross-correlation mapping technique, we are able to measure the loop temperatures as a function of time over several post-flare hours and hence measure the loop cooling rate. We find that the time delay between the appearance of loops in the hottest channel, 131 Å, and the cool 171 Å channel, is 70 minutes. Yet the causality of this event remains unclear: is the ribbon heated via these interconnected loops or via a blast wave?
7 CFR 868.203 - Basis of determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Rough Rice Principles Governing..., heat-damaged kernels, red rice and damaged kernels, chalky kernels, other types, color, and the special grade Parboiled rough rice shall be on the basis of the whole and large broken kernels of milled rice...
7 CFR 868.203 - Basis of determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Rough Rice Principles Governing..., heat-damaged kernels, red rice and damaged kernels, chalky kernels, other types, color, and the special grade Parboiled rough rice shall be on the basis of the whole and large broken kernels of milled rice...
Completeness relations for Maass Laplacians and heat kernels on the super Poincaré upper half-plane
NASA Astrophysics Data System (ADS)
Oshima, Kazuto
1990-12-01
Simple completeness relations are proposed for Maass Laplacians. With the help of these completeness relations, correct heat kernels of (super) Maass Laplacians are derived on the (super) Poincaré upper half-plane.
Bose–Einstein condensation temperature of finite systems
NASA Astrophysics Data System (ADS)
Xie, Mi
2018-05-01
In studies of the Bose–Einstein condensation of ideal gases in finite systems, the divergence problem usually arises in the equation of state. In this paper, we present a technique based on the heat kernel expansion and zeta function regularization to solve the divergence problem, and obtain the analytical expression of the Bose–Einstein condensation temperature for general finite systems. The result is represented by the heat kernel coefficients, where the asymptotic energy spectrum of the system is used. Besides the general case, for systems with exact spectra, e.g. ideal gases in an infinite slab or in a three-sphere, the sums of the spectra can be obtained exactly and the calculation of corrections to the critical temperatures is more direct. For a system confined in a bounded potential, the form of the heat kernel is different from the usual heat kernel expansion. We show that as long as the asymptotic form of the global heat kernel can be found, our method works. For Bose gases confined in three- and two-dimensional isotropic harmonic potentials, we obtain the higher-order corrections to the usual results of the critical temperatures. Our method can also be applied to the problem of generalized condensation, and we give the correction of the boundary on the second critical temperature in a highly anisotropic slab.
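For orientation, the small-t global heat kernel expansion that such calculations start from can be sketched in LaTeX as below (three-dimensional case; the coefficients B_k collect volume, boundary-area and curvature contributions, and their precise form depends on the confining geometry):

```latex
K(t) \;=\; \sum_i e^{-\lambda_i t}
\;\simeq\; \frac{1}{(4\pi t)^{3/2}}
\left( B_0 + B_{1/2}\, t^{1/2} + B_1\, t + \cdots \right),
\qquad t \to 0^{+}.
```

Substituting such an expansion into the equation of state and regularizing the resulting zeta-like sums is what yields finite-size corrections to the condensation temperature.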
7 CFR 868.253 - Basis of determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Brown Rice for Processing Principles... heat, heat-damaged kernels, parboiled kernels in nonparboiled rice, and the special grade Parboiled brown rice for processing shall be on the basis of the brown rice for processing after it has been...
7 CFR 868.253 - Basis of determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Brown Rice for Processing Principles... heat, heat-damaged kernels, parboiled kernels in nonparboiled rice, and the special grade Parboiled brown rice for processing shall be on the basis of the brown rice for processing after it has been...
Code of Federal Regulations, 2012 CFR
2012-01-01
... combined) Total (number in 500 grams) Heat-damaged kernels and objectionable seeds (number in 500 grams) Red rice and damaged kernels (singly or combined) (percent) Chalky kernels 1,3 (percent) Color...
Effects of sample size on KERNEL home range estimates
Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.
1999-01-01
Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
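The LSCV bandwidth selection recommended here is easy to state concretely. Below is an illustrative Python sketch, not the authors' code, for a fixed bivariate Gaussian kernel; it uses the closed form that the convolution of two Gaussians of bandwidth h is a Gaussian of bandwidth h√2:

```python
import numpy as np

def gauss2(d2, h):
    # isotropic bivariate Gaussian kernel evaluated from squared distances
    return np.exp(-d2 / (2 * h**2)) / (2 * np.pi * h**2)

def lscv_score(points, h):
    """Least-squares cross-validation score for a fixed-kernel home range:
    integral of the squared estimate minus twice the mean leave-one-out fit."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    int_f2 = gauss2(d2, h * np.sqrt(2)).sum() / n**2         # integral of f-hat squared
    off = ~np.eye(n, dtype=bool)
    loo = gauss2(d2[off].reshape(n, n - 1), h).mean(axis=1)  # leave-one-out f-hat at x_i
    return int_f2 - 2 * loo.mean()

# choose h by minimizing the score over a grid:
# h_best = min(np.linspace(0.1, 5.0, 50), key=lambda h: lscv_score(xy, h))
```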
7 CFR 810.202 - Definition of other terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Classes. There are two classes of barley: Malting barley and Barley. (1) Malting barley. Barley of a six...: (i) Six-rowed Malting barley. Barley that has a minimum of 95.0 percent of a six-rowed suitable...-heat kernels, and 0.1 percent heat-damaged kernels. Six-rowed Malting barley shall not be infested...
General heat kernel coefficients for massless free spin-3/2 Rarita-Schwinger field
NASA Astrophysics Data System (ADS)
Karan, Sudip; Kumar, Shashank; Panda, Binata
2018-04-01
We review the general heat kernel method for the Dirac spinor field as an elementary example in an arbitrary background. We then compute the first three Seeley-DeWitt coefficients for the massless free spin-3/2 Rarita-Schwinger field without imposing any limitations on the background geometry.
Makanza, R; Zaman-Allah, M; Cairns, J E; Eyre, J; Burgueño, J; Pacheco, Ángela; Diepenbrock, C; Magorokosho, C; Tarekegne, A; Olsen, M; Prasanna, B M
2018-01-01
Grain yield and ear and kernel attributes can help in understanding the performance of the maize plant under different environmental conditions and can be used in the variety development process to address farmers' preferences. These parameters, however, are still laborious and expensive to measure. A low-cost ear digital imaging method was developed that provides estimates of ear and kernel attributes, i.e., ear number and size, kernel number and size, as well as kernel weight, from photos of ears harvested from field trial plots. The image processing method uses a script that runs in batch mode on ImageJ, an open source software. Kernel weight was estimated using the total kernel number derived from the number of kernels visible on the image and the average kernel size. Data showed good agreement in terms of accuracy and precision between ground truth measurements and data generated through image processing. Broad-sense heritability of the estimated parameters was in the range of, or higher than, that for measured grain weight. Limitations of the method for kernel weight estimation are discussed. The method developed in this work provides an opportunity to significantly reduce the cost of selection in the breeding process, especially for resource-constrained crop improvement programs, and can be used to learn more about the genetic bases of grain yield determinants.
USDA-ARS?s Scientific Manuscript database
Solid-phase microextraction (SPME) in conjunction with GC/MS was used to distinguish non-aromatic rice (Oryza sativa, L.) kernels from aromatic rice kernels. In this method, single kernels along with 10 µl of 0.1 ng 2,4,6-Trimethylpyridine (TMP) were placed in sealed vials and heated to 80 °C for 18...
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of distribution models, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds shown in Moody's new data. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results of the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558
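A minimal sketch of the winning estimator is below, with a synthetic bimodal sample standing in for the (not reproduced) Moody's recovery data; the mixture parameters are invented for illustration only:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# hypothetical bimodal recovery rates (fractions of face value)
rates = np.clip(np.concatenate([rng.normal(0.25, 0.08, 300),
                                rng.normal(0.80, 0.10, 200)]), 0.0, 1.0)

kde = gaussian_kde(rates)            # Gaussian kernel, Scott's-rule bandwidth
grid = np.linspace(0.0, 1.0, 201)
density = kde(grid)                  # recovers both modes; a unimodal Beta fit cannot
```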
A heat kernel proof of the index theorem for deformation quantization
NASA Astrophysics Data System (ADS)
Karabegov, Alexander
2017-11-01
We give a heat kernel proof of the algebraic index theorem for deformation quantization with separation of variables on a pseudo-Kähler manifold. We use normalizations of the canonical trace density of a star product and of the characteristic classes involved in the index formula for which this formula contains no extra constant factors.
7 CFR 810.1403 - Basis of determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Basis of determination. Each determination of broken kernels and foreign material is made on the basis of the grain when free from dockage. Each determination of class, damaged kernels, heat-damaged kernels, and stones is made on the basis of the grain when free from dockage and that portion of the...
Volterra series truncation and kernel estimation of nonlinear systems in the frequency domain
NASA Astrophysics Data System (ADS)
Zhang, B.; Billings, S. A.
2017-02-01
The Volterra series model is a direct generalisation of the linear convolution integral and is capable of displaying the intrinsic features of a nonlinear system in a simple and easy-to-apply way. Nonlinear system analysis using the Volterra series is normally based on the analysis of its frequency-domain kernels and a truncated description. But the estimation of Volterra kernels and the truncation of the Volterra series are coupled with each other. In this paper, a novel complex-valued orthogonal least squares algorithm is developed. The new algorithm provides a powerful tool to determine which terms should be included in the Volterra series expansion and to estimate the kernels, and thus solves the two problems together. The estimated results are compared with those determined using the analytical expressions of the kernels to validate the method. To further evaluate the effectiveness of the method, the physical parameters of the system are also extracted from the measured kernels. Simulation studies demonstrate that the new approach not only can truncate the Volterra series expansion and estimate the kernels of a weakly nonlinear system, but also can indicate the applicability of Volterra series analysis in a severely nonlinear system case.
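The coupling between truncation and estimation is easiest to see in a time-domain simplification. The sketch below builds first- and second-order truncated Volterra regressors and fits the stacked kernels by ordinary least squares; the paper's complex-valued orthogonal least squares term selection in the frequency domain is not reproduced, and the memory length `M` is a user choice:

```python
import numpy as np

def volterra_regressors(u, M):
    """Regressor matrix for a truncated Volterra series with memory M:
    linear lags u[n-k] plus symmetric second-order products u[n-i]*u[n-j]."""
    rows = []
    for n in range(M, len(u)):
        lag = u[n - M + 1:n + 1][::-1]                 # u[n], ..., u[n-M+1]
        quad = np.outer(lag, lag)[np.triu_indices(M)]  # products with i <= j
        rows.append(np.concatenate([lag, quad]))
    return np.asarray(rows)

# stacked kernel estimate: theta holds h1 (length M), then the upper
# triangle of h2 (length M*(M+1)/2), fit against the output y
# theta, *_ = np.linalg.lstsq(volterra_regressors(u, M), y[M:], rcond=None)
```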
Selecting good regions to deblur via relative total variation
NASA Astrophysics Data System (ADS)
Li, Lerenhan; Yan, Hao; Fan, Zhihua; Zheng, Hanqing; Gao, Changxin; Sang, Nong
2018-03-01
Image deblurring is to estimate the blur kernel and to restore the latent image. It is usually divided into two stages: kernel estimation and image restoration. In kernel estimation, selecting a good region that contains structure information helps the accuracy of the estimated kernel. A good region to deblur is usually expert-chosen or found by trial and error. In this paper, we apply a metric named relative total variation (RTV) to discriminate structure regions from smooth and textured ones. Given a blurry image, we first calculate the RTV of each pixel to determine whether it is a pixel in a structure region, after which we sample the image in an overlapping way. At last, the sampled region that contains the most structure pixels is the best region to deblur. Both qualitative and quantitative experiments show that our proposed method can help to estimate the kernel accurately.
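One plausible per-pixel RTV-style measure can be sketched as follows (the exact windowing and normalization in the paper may differ). Structure regions have coherent gradients, so the windowed "inherent variation" stays large relative to the windowed total variation, whereas texture gradients cancel:

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def structure_score(img, size=9, eps=1e-3):
    """RTV-style structure map: ratio of windowed |sum of gradients| to
    windowed sum of |gradients|. Near 1 on clean edges (useful pixels for
    kernel estimation), near 0 in texture and in flat regions."""
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    inherent = np.abs(uniform_filter(gx, size)) + np.abs(uniform_filter(gy, size))
    total = uniform_filter(np.abs(gx), size) + uniform_filter(np.abs(gy), size)
    return inherent / (total + eps)
```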
Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.
Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I
2016-03-01
The effects of amygdaline from apricot kernel added to fodder on the growth of transplanted LYO-1 and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation produced a pronounced antitumor effect. Heat-processed apricot kernels given 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of bacterial genome in the tumor.
A survey of kernel-type estimators for copula and their applications
NASA Astrophysics Data System (ADS)
Sumarjaya, I. W.
2017-10-01
Copulas have been widely used to model nonlinear dependence structure. Main applications of copulas include areas such as finance, insurance, hydrology, and rainfall modelling, to name but a few. The flexibility of the copula allows researchers to model dependence structure beyond the Gaussian distribution. Basically, a copula is a function that couples a multivariate distribution function to its one-dimensional marginal distribution functions. In general, there are three methods to estimate copulas: parametric, nonparametric, and semiparametric. In this article we survey kernel-type estimators for copulas, such as the mirror reflection kernel, the beta kernel, the transformation method and the local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, albeit with variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
NASA Astrophysics Data System (ADS)
Fadly Nurullah Rasedee, Ahmad; Ahmedov, Anvarjon; Sathar, Mohammad Hasan Abdul
2017-09-01
The mathematical models of heat and mass transfer processes in ball-type solids can be solved using the theory of convergence of Fourier-Laplace series on the unit sphere. Many interesting models have divergent Fourier-Laplace series, which can be made convergent by introducing Riesz and Cesàro means of the series. Partial sums of the Fourier-Laplace series summed by the Riesz method are integral operators with a kernel known as the Riesz means of the spectral function. In order to obtain convergence results for the partial sums by Riesz means, we need to know the asymptotic behavior of the latter kernel. In this work, estimates for the Riesz means of the spectral function of the Laplace-Beltrami operator which guarantee the convergence of the Fourier-Laplace series by the Riesz method are obtained.
Asnaashari, Maryam; Hashemi, Seyed Mohammad Bagher; Mehr, Hamed Mahdavian; Yousefabad, Seyed Hossein Asadi
2015-03-01
In this study, in order to introduce a natural antioxidative vegetable oil into the food industry, kolkhoung hull oil and kernel oil were extracted. To evaluate their antioxidant efficiency, gas chromatography analysis of the fatty acid composition of kolkhoung hull and kernel oil and high-performance liquid chromatography analysis of tocopherols were done. Also, the oxidative stability of the oil was assessed based on the peroxide value and anisidine value during heating at 100, 110 and 120 °C. Gas chromatography analysis showed that oleic acid was the major fatty acid of both types of oil (hull and kernel) and, based on a low content of saturated fatty acids, a high content of monounsaturated fatty acids, and the ratio of ω-6 to ω-3 polyunsaturated fatty acids, they were nutritionally well-balanced. Moreover, both hull and kernel oil showed high oxidative stability during heating, which can be attributed to their high content of tocotrienols. Based on the results, kolkhoung hull oil performed slightly better than its kernel oil. However, both of them can be added to oxidation-sensitive oils to improve their shelf life.
Eigenfunctions and heat kernels of super Maass Laplacians on the super Poincaré upper half-plane
NASA Astrophysics Data System (ADS)
Oshima, Kazuto
1992-03-01
Heat kernels of ``super Maass Laplacians'' are explicitly constructed on super Poincaré upper half-plane by a serious treatment of a complete set of eigenfunctions. By component decomposition an explicit treatment can be done for arbitrary weight and a knowledge of classical Maass Laplacians becomes helpful. The result coincides with that of Aoki [Commun. Math. Phys. 117, 405 (1988)] which was obtained by solving differential equations.
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2017-06-01
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information in very few features (often in just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
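The entropy-sorting step that distinguishes KECA from kernel PCA fits in a few lines. The sketch below implements plain KECA on a precomputed kernel matrix `K`; OKECA's extra ICA-style rotation and its gradient-ascent optimization are not shown:

```python
import numpy as np

def keca_features(K, m):
    """Kernel entropy component analysis: keep the m eigenpairs of the
    kernel matrix that contribute most to the Renyi entropy estimate
    (1/N^2) 1^T K 1 = sum_i lam_i * (e_i^T 1)^2 / N^2."""
    lam, E = np.linalg.eigh(K)
    contrib = lam * E.sum(axis=0) ** 2     # entropy ranking, not variance
    idx = np.argsort(contrib)[::-1][:m]
    # abs() guards tiny negative eigenvalues caused by round-off
    return E[:, idx] * np.sqrt(np.abs(lam[idx]))
```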
Simultaneous multiple non-crossing quantile regression estimation using kernel constraints
Liu, Yufeng; Wu, Yichao
2011-01-01
Quantile regression (QR) is a very useful statistical tool for learning the relationship between the response variable and covariates. For many applications, one often needs to estimate multiple conditional quantile functions of the response variable given covariates. Although one can estimate multiple quantiles separately, it is of great interest to estimate them simultaneously. One advantage of simultaneous estimation is that multiple quantiles can share strength among them to gain better estimation accuracy than individually estimated quantile functions. Another important advantage of joint estimation is the feasibility of incorporating simultaneous non-crossing constraints of QR functions. In this paper, we propose a new kernel-based multiple QR estimation technique, namely simultaneous non-crossing quantile regression (SNQR). We use kernel representations for QR functions and apply constraints on the kernel coefficients to avoid crossing. Both unregularised and regularised SNQR techniques are considered. Asymptotic properties such as asymptotic normality of linear SNQR and oracle properties of the sparse linear SNQR are developed. Our numerical results demonstrate the competitive performance of our SNQR over the original individual QR estimation. PMID:22190842
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
Half-blind remote sensing image restoration with partly unknown degradation
NASA Astrophysics Data System (ADS)
Xie, Meihua; Yan, Fengxia
2017-01-01
The problem of image restoration has been extensively studied for its practical importance and theoretical interest. This paper mainly discusses the problem of image restoration with a partly unknown kernel: the form of the degradation kernel is known, but its parameters are unknown. With this model, we must estimate the parameters of the Gaussian kernel and the real image simultaneously. For this new problem, a total variation restoration model is proposed and an alternating-direction iterative algorithm is designed. Peak Signal to Noise Ratio (PSNR) and the Structural Similarity Index Measurement (SSIM) are used to measure the performance of the method. Numerical results show that we can estimate the kernel parameters accurately, and the new method achieves both much higher PSNR and much higher SSIM than the expectation maximization (EM) method in many cases. In addition, the accuracy of estimation is not sensitive to noise. Furthermore, even when the support of the kernel is unknown, we can also use this method to get accurate estimates.
Nonparametric entropy estimation using kernel densities.
Lake, Douglas E
2009-01-01
The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
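For the quadratic (order-2) Renyi entropy mentioned here, a Gaussian KDE admits a closed form with no numerical integration, since the integral of a product of two Gaussians is again a Gaussian. A one-dimensional sketch with user-chosen bandwidth `h`:

```python
import numpy as np

def quadratic_renyi_entropy(x, h):
    """H2 = -log of the integral of f-hat squared, for a Gaussian KDE f-hat
    with bandwidth h. The double sum below is the 'information potential',
    closely related to the Friedman-Tukey index."""
    d2 = (x[:, None] - x[None, :]) ** 2
    potential = np.exp(-d2 / (4 * h**2)).mean() / np.sqrt(4 * np.pi * h**2)
    return -np.log(potential)
```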
NASA Technical Reports Server (NTRS)
Kahler, S. W.; Petrasso, R. D.; Kane, S. R.
1976-01-01
The physical parameters for the kernels of three solar X-ray flare events have been deduced using photographic data from the S-054 X-ray telescope on Skylab as the primary data source and 1-8 and 8-20 A fluxes from Solrad 9 as the secondary data source. The kernels had diameters of about 5-7 seconds of arc and, in two cases, electron densities of at least 3 × 10¹¹ per cu cm. The lifetimes of the kernels were 5-10 min. The presence of thermal conduction during the decay phases is used to argue: (1) that kernels are entire, not small portions of, coronal loop structures, and (2) that flare heating must continue during the decay phase. We suggest a simple geometric model to explain the role of kernels in flares in which kernels are identified with emerging flux regions.
NASA Astrophysics Data System (ADS)
Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai
2016-07-01
Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on the kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
Oil extraction from sheanut (Vitellaria paradoxa Gaertn C.F.) kernels assisted by microwaves.
Nde, Divine B; Boldor, Dorin; Astete, Carlos; Muley, Pranjali; Xu, Zhimin
2016-03-01
Shea butter is highly solicited in cosmetics, pharmaceuticals, chocolates and biodiesel formulations. Microwave assisted extraction (MAE) of butter from sheanut kernels was carried out using the Doehlert experimental design. The factors studied were microwave heating time, temperature and solvent/solute ratio, while the responses were the quantity of oil extracted and the acid number. Second order models were established to describe the influence of the experimental parameters on the responses studied. Under optimum MAE conditions of heating time 23 min, temperature 75 °C and solvent/solute ratio 4:1, more than 88% of the oil, with a free fatty acid (FFA) value less than 2, was extracted, compared to the 10 h and solvent/solute ratio of 10:1 required for Soxhlet extraction. Scanning electron microscopy was used to elucidate the effect of microwave heating on the kernels' microstructure. Substantial reductions in extraction time and volumes of solvent used, and oil of suitable quality, are the main benefits derived from the MAE process.
Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi
2017-01-18
Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel can be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of the kernel intensity and the squared L2-norm of the intensity derivative. Once an accurate estimate of the blur kernel is obtained, the original blind deblurring simplifies to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on an L1-norm data-fidelity term and a second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons illustrate the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
Automated skin lesion segmentation with kernel density estimation
NASA Astrophysics Data System (ADS)
Pardo, A.; Real, E.; Fernandez-Barreras, G.; Madruga, F. J.; López-Higuera, J. M.; Conde, O. M.
2017-07-01
Skin lesion segmentation is a complex step in dermoscopy-based pathological diagnosis. Kernel density estimation is proposed as a segmentation technique based on the statistical distribution of color intensities in the lesion and non-lesion regions.
Fu, Guanfu; Feng, Baohua; Zhang, Caixia; Yang, Yongjie; Yang, Xueqin; Chen, Tingting; Zhao, Xia; Zhang, Xiufu; Jin, Qianyu; Tao, Longxing
2016-01-01
In general, the fertility and kernel weight of inferior spikelets of rice (Oryza sativa L.) are markedly lower than those of superior spikelets, especially under abiotic stress. However, superior and inferior spikelets appear to respond differently to heat stress, and this response is so scarcely documented that the intrinsic factors remain elusive. To reveal the underlying mechanism, two rice cultivars with different heat tolerance were subjected to heat stress of 40°C at anthesis. The results indicated that a greater decrease in fertility and kernel weight occurred in superior spikelets than in inferior spikelets. This decrease was primarily ascribed to their different organ temperatures: the temperature of the superior spikelets was significantly higher than that of the inferior spikelets. We inferred that differences in canopy temperature, light intensity and panicle type were the primary reasons for the temperature difference between superior and inferior spikelets. Under heat stress, the fertility and kernel weight of superior and inferior spikelets decreased as the panicle number per plant was reduced, which was accompanied by significantly increased canopy temperatures. Thus, it is suggested that rice plants with an upright growth habit and loose panicles may be more susceptible to heat stress owing to their higher canopy and spikelet temperatures. PMID:27877180
Devaux, C; Lavigne, C; Austerlitz, F; Klein, E K
2007-02-01
Understanding patterns of pollen movement at the landscape scale is important for establishing management rules following the release of genetically modified (GM) crops. We use here a mating model adapted to cultivated species to estimate dispersal kernels from the genotypes of the progenies of male-sterile plants positioned at different sampling sites within a 10 × 10 km oilseed rape production area. Half of the pollen clouds sampled by the male-sterile plants originated from uncharacterized pollen sources that could consist of both large volunteer and feral populations, and fields within and outside the study area. The geometric dispersal kernel was the most appropriate to predict pollen movement in the study area. It predicted a much larger proportion of long-distance pollination than previously fitted dispersal kernels. This best-fitting mating model underestimated the level of differentiation among pollen clouds but could predict its spatial structure. The estimation method was validated on simulated genotypic data, and proved to provide good estimates of both the shape of the dispersal kernel and the rate and composition of pollen issued from uncharacterized pollen sources. The best dispersal kernel fitted here, the geometric kernel, should now be integrated into models that aim at predicting gene flow at the landscape level, in particular between GM and non-GM crops.
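The fat-tailed geometric kernel favored here is commonly parameterized as below; this is the usual two-dimensional form from the pollen-dispersal literature, and the study's exact parameterization may differ slightly:

```python
import numpy as np

def geometric_kernel(r, a, b):
    """2-D geometric dispersal kernel: probability density (per unit area)
    of pollination at distance r, with scale a and shape b; b > 2 is
    required for a finite integral. Its power-law tail predicts far more
    long-distance pollination than exponential-family kernels."""
    return (b - 2) * (b - 1) / (2 * np.pi * a**2) * (1 + r / a) ** (-b)
```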
NASA Astrophysics Data System (ADS)
Azarnavid, Babak; Parand, Kourosh; Abbasbandy, Saeid
2018-06-01
This article discusses an iterative reproducing kernel method with respect to its effectiveness and capability of solving a fourth-order boundary value problem with nonlinear boundary conditions modeling beams on elastic foundations. Since there is no method of obtaining a reproducing kernel which satisfies nonlinear boundary conditions, standard reproducing kernel methods cannot be used directly to solve boundary value problems with nonlinear boundary conditions, as there is no knowledge about the existence and uniqueness of the solution. The aim of this paper is, therefore, to construct an iterative method by combining the reproducing kernel Hilbert space method with a shooting-like technique to solve the mentioned problems. Error estimation for reproducing kernel Hilbert space methods for nonlinear boundary value problems has yet to be discussed in the literature. In this paper, we present error estimation for the reproducing kernel method for solving nonlinear boundary value problems, probably for the first time. Some numerical results are given to demonstrate the applicability of the method.
Characterization of a maximum-likelihood nonparametric density estimator of kernel type
NASA Technical Reports Server (NTRS)
Geman, S.; Mcclure, D. E.
1982-01-01
Kernel-type density estimators calculated by the method of sieves are considered. Proofs are presented for the characterization theorem: let x(1), x(2), ..., x(n) be a random sample from a population with density f₀, let σ → 0, and consider estimators f̂ of f₀ defined by (1).
A Comparison of Methods for Nonparametric Estimation of Item Characteristic Curves for Binary Items
ERIC Educational Resources Information Center
Lee, Young-Sun
2007-01-01
This study compares the performance of three nonparametric item characteristic curve (ICC) estimation procedures: isotonic regression, smoothed isotonic regression, and kernel smoothing. Smoothed isotonic regression, employed along with an appropriate kernel function, provides better estimates and also satisfies the assumption of strict…
7 CFR 810.403 - Basis of determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... GRAIN United States Standards for Corn Principles Governing the Application of Standards § 810.403 Basis of determination. Each determination of class, damaged kernels, heat-damaged kernels, waxy corn, flint corn, and flint and dent corn is made on the basis of the grain after the removal of the broken...
7 CFR 810.403 - Basis of determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... GRAIN United States Standards for Corn Principles Governing the Application of Standards § 810.403 Basis of determination. Each determination of class, damaged kernels, heat-damaged kernels, waxy corn, flint corn, and flint and dent corn is made on the basis of the grain after the removal of the broken...
Ha, Jae-Won
2015-01-01
The aim of this study was to investigate the efficacy of near-infrared radiation (NIR) heating combined with lactic acid (LA) sprays for inactivating Salmonella enterica serovar Enteritidis on almond and pine nut kernels and to elucidate the mechanisms of the lethal effect of the NIR-LA combined treatment. Also, the effect of the combination treatment on product quality was determined. Separately prepared S. Enteritidis phage type (PT) 30 and non-PT 30 S. Enteritidis cocktails were inoculated onto almond and pine nut kernels, respectively, followed by treatments with NIR or 2% LA spray alone, NIR with distilled water spray (NIR-DW), and NIR with 2% LA spray (NIR-LA). Although surface temperatures of nuts treated with NIR were higher than those subjected to NIR-DW or NIR-LA treatment, more S. Enteritidis survived after NIR treatment alone. The effectiveness of NIR-DW and NIR-LA was similar, but significantly more sublethally injured cells were recovered from NIR-DW-treated samples. We confirmed that the enhanced bactericidal effect of the NIR-LA combination may not be attributable to cell membrane damage per se. NIR heat treatment might allow S. Enteritidis cells to become permeable to applied LA solution. The NIR-LA treatment (5 min) did not significantly (P > 0.05) cause changes in the lipid peroxidation parameters, total phenolic contents, color values, moisture contents, and sensory attributes of nut kernels. Given the results of the present study, NIR-LA treatment may be a potential intervention for controlling food-borne pathogens on nut kernel products. PMID:25911473
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
Dynamic characteristics of oxygen consumption.
Ye, Lin; Argha, Ahmadreza; Yu, Hairong; Celler, Branko G; Nguyen, Hung T; Su, Steven
2018-04-23
Previous studies have indicated that oxygen uptake (VO2) is one of the most accurate indices for assessing the cardiorespiratory response to exercise. In most existing studies, the response of VO2 is roughly modelled as a first-order system due to inadequate stimulation and a low signal-to-noise ratio. To overcome this difficulty, this paper proposes a novel nonparametric kernel-based method for the dynamic modelling of the VO2 response to provide a more robust estimation. Twenty healthy non-athlete participants conducted treadmill exercises with monotonous stimulation (e.g., a single step function as input). During the exercise, VO2 was measured and recorded by a popular portable gas analyser (COSMED). Based on the recorded data, a kernel-based estimation method was proposed to perform the nonparametric modelling of VO2. For the proposed method, a properly selected kernel can represent the prior modelling information and so reduce the dependence on comprehensive stimulation. Furthermore, due to the special elastic net formed by an l1 norm and a kernelised l2 norm, the estimates are smooth and concise. Additionally, the finite-impulse-response-based nonparametric model estimated by the proposed method can optimally select the model order and fits better, in terms of goodness-of-fit, than classical methods. Several kernels were introduced for the kernel-based VO2 modelling method. The results clearly indicated that the stable spline (SS) kernel has the best performance for VO2 modelling. In particular, based on the experimental data from 20 participants, the estimated response from the proposed method with the SS kernel was significantly better than the result from the benchmark method, the prediction error method (PEM). The proposed nonparametric modelling method is an effective method for estimating the impulse response of the VO2-speed system. Furthermore, the identified average nonparametric model can dynamically predict the VO2 response with acceptable accuracy during treadmill exercise.
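A minimal sketch of kernel-regularized FIR estimation with a stable-spline-type kernel is given below, assuming the simpler first-order ("TC") variant of the stable spline kernel; the hyperparameters `alpha` and `lam` are illustrative and would in practice be tuned, e.g. by marginal likelihood:

```python
import numpy as np

def tc_kernel(n, alpha):
    """First-order stable-spline / TC kernel K[i, j] = alpha**max(i, j),
    0 < alpha < 1: encodes smooth, exponentially decaying impulse responses."""
    idx = np.arange(1, n + 1)
    return alpha ** np.maximum.outer(idx, idx)

def regularized_fir(u, y, n=50, alpha=0.9, lam=1.0):
    """Kernel-regularized FIR estimate:
    g = argmin ||y - Phi g||^2 + lam * g' K^{-1} g."""
    Phi = np.array([u[t - n:t][::-1] for t in range(n, len(y))])  # lagged inputs
    K = tc_kernel(n, alpha)
    return np.linalg.solve(Phi.T @ Phi + lam * np.linalg.inv(K), Phi.T @ y[n:])
```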
Dielectric properties of almond kernels associated with radio frequency and microwave pasteurization
NASA Astrophysics Data System (ADS)
Li, Rui; Zhang, Shuang; Kou, Xiaoxi; Ling, Bo; Wang, Shaojin
2017-02-01
To develop advanced pasteurization treatments based on radio frequency (RF) or microwave (MW) energy, dielectric properties of almond kernels were measured by using an open-ended coaxial-line probe and impedance analyzer at frequencies between 10 and 3000 MHz, moisture contents between 4.2% and 19.6% w.b. and temperatures between 20 and 90 °C. The results showed that both dielectric constant and loss factor of the almond kernels decreased sharply with increasing frequency over the RF range (10-300 MHz), but gradually over the measured MW range (300-3000 MHz). Both dielectric constant and loss factor of almond kernels increased with increasing temperature and moisture content, and were largely enhanced at higher temperature and moisture levels. Quadratic polynomial equations were developed to best fit the relationship between dielectric constant or loss factor at 27, 40, 915 or 2450 MHz and sample temperature/moisture content with R2 greater than 0.967. Penetration depth of electromagnetic waves into samples decreased with increasing frequency (27-2450 MHz), moisture content (4.2-19.6% w.b.) and temperature (20-90 °C). Temperature profiles of RF heated almond kernels under three moisture levels were obtained by experiment and computer simulation based on the measured dielectric properties. Based on the results of this study, RF treatment has potential to be practically used for pasteurization of almond kernels with acceptable heating uniformity.
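The quadratic polynomial surfaces reported here (dielectric constant or loss factor as a function of temperature T and moisture M at a fixed frequency) can be fit by ordinary least squares as sketched below; the measurement values are placeholders, since the paper's data are not reproduced:

```python
import numpy as np

# placeholder measurement grid (deg C, % w.b.) and stand-in loss-factor readings
T = np.repeat([20, 40, 60, 80, 90], 5).astype(float)
M = np.tile([4.2, 8.0, 12.0, 16.0, 19.6], 5)
loss = np.random.default_rng(1).uniform(0.5, 15.0, T.size)  # fake data

# quadratic model: loss ~ b0 + b1*T + b2*M + b3*T^2 + b4*M^2 + b5*T*M
X = np.column_stack([np.ones_like(T), T, M, T**2, M**2, T * M])
coef, *_ = np.linalg.lstsq(X, loss, rcond=None)
```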
Aflatoxin and nutrient contents of peanut collected from local market and their processed foods
NASA Astrophysics Data System (ADS)
Ginting, E.; Rahmianna, A. A.; Yusnawan, E.
2018-01-01
Peanut is susceptible to aflatoxin contamination, and both the source of the peanuts and the processing method considerably affect the aflatoxin content of the products. Therefore, a study on the aflatoxin and nutrient contents of peanuts collected from a local market and of their processed foods was performed. Good kernels were prepared into fried peanut, pressed-fried peanut, peanut sauce, peanut press cake, fermented peanut press cake (tempe) and fried tempe, while blended kernels (good and poor kernels) were processed into peanut sauce and tempe, and poor kernels were processed only into tempe. The results showed that good and blended kernels, which had a high proportion of sound/intact kernels (82.46% and 62.09%), contained 9.8-9.9 ppb of aflatoxin B1, while a slightly higher level was seen in poor kernels (12.1 ppb). However, the moisture, ash, protein, and fat contents of the kernels were similar, as were those of the products. Peanut tempe and fried tempe showed the highest increase in protein content, while decreased fat contents were seen in all products. The increase in aflatoxin B1 in peanut tempe was largest when prepared from poor kernels, followed by blended kernels and then good kernels; however, it decreased by an average of 61.2% after deep-frying. Excluding peanut tempe and fried tempe, aflatoxin B1 levels in all products derived from good kernels were below the permitted level (15 ppb). This suggests that sorting peanut kernels before use as ingredients, followed by heat processing, would decrease the aflatoxin content of the products.
Reaction Kernel Structure of a Slot Jet Diffusion Flame in Microgravity
NASA Technical Reports Server (NTRS)
Takahashi, F.; Katta, V. R.
2001-01-01
Diffusion flame stabilization in normal earth gravity (1 g) has long been a fundamental research subject in combustion. Local flame-flow phenomena, including heat and species transport and chemical reactions, around the flame base in the vicinity of condensed surfaces control flame stabilization and fire spreading processes. Gravity plays an important role in this topic because buoyancy induces flow in the flame zone, thus increasing the convective (and diffusive) oxygen transport into the flame zone and, in turn, the reaction rates. Recent computations show that a peak reactivity (heat-release or oxygen-consumption rate) spot, or reaction kernel, is formed in the flame base by back-diffusion and reactions of radical species in the incoming oxygen-abundant flow at relatively low temperatures (about 1550 K). Quasi-linear correlations were found between the peak heat-release or oxygen-consumption rate and the velocity at the reaction kernel for cases including both jet and flat-plate diffusion flames in airflow. The reaction kernel provides a stationary ignition source for the incoming reactants, sustains combustion, and thus stabilizes the trailing diffusion flame. In a quiescent microgravity environment, no buoyancy-induced flow exists, so purely diffusive transport controls the reaction rates. Flame stabilization mechanisms in such a purely diffusion-controlled regime remain largely unstudied; extending the reaction kernel correlation toward zero-velocity conditions therefore provides a rigorous test of it. The objectives of this study are to reveal the structure of the flame-stabilizing region of a two-dimensional (2D) laminar jet diffusion flame in microgravity and to develop a unified diffusion flame stabilization mechanism. This paper reports recent progress in the computation and experiment performed in microgravity.
Oskoueian, Ehsan; Abdullah, Norhani; Idrus, Zulkifli; Ebrahimi, Mahdi; Goh, Yong Meng; Shakeri, Majid; Oskoueian, Armin
2014-10-02
Palm kernel cake (PKC), the most abundant by-product of the oil palm industry, is believed to contain bioactive compounds with hepatoprotective potential. These compounds may serve as hepatoprotective agents that could help the poultry industry alleviate the adverse effects of heat stress on liver function in chickens. This study was performed to evaluate the hepatoprotective potential of PKC extract against heat-induced oxidative stress in chicken hepatocytes. The nature of the active metabolites and the possible mechanism involved were also investigated. The PKC extract possessed free radical scavenging activity, with values significantly (p < 0.05) lower than those of silymarin, the reference antioxidant. Heat-induced oxidative stress in chicken hepatocytes significantly (p < 0.05) impaired total protein, lipid peroxidation and antioxidant enzyme activities. Treatment of heat-stressed hepatocytes with PKC extract (125 μg/ml), or with silymarin as a positive control, increased these values significantly (p < 0.05). Real-time PCR and western blot analyses revealed significant (p < 0.05) up-regulation of oxidative stress biomarkers, including the TNF-like, IFN-γ and IL-1β genes and the NF-κB, COX-2, iNOS and Hsp70 proteins, upon heat stress in chicken hepatocytes. The PKC extract and silymarin were able to attenuate the expression of all of these biomarkers in heat-stressed chicken hepatocytes. Gas chromatography-mass spectrometry analysis of the PKC extract showed the presence of fatty acids, phenolic compounds, sugar derivatives and other organic compounds such as furfural, which could be responsible for the observed hepatoprotective activity. Palm kernel cake extract could thus be a potential agent to protect hepatocyte function under heat-induced oxidative stress.
Popping the Kernel Modeling the States of Matter
ERIC Educational Resources Information Center
Hitt, Austin; White, Orvil; Hanson, Debbie
2005-01-01
This article discusses how to use popcorn to engage students in model building and to teach them about the nature of matter. Popping kernels is a simple and effective method to connect the concepts of heat, motion, and volume with the different phases of matter. Before proceeding with the activity, the class should discuss the nature of scientific…
An alternative covariance estimator to investigate genetic heterogeneity in populations.
Heslot, Nicolas; Jannink, Jean-Luc
2015-11-26
For genomic prediction and genome-wide association studies (GWAS) using mixed models, covariance between individuals is estimated using molecular markers. Based on the properties of mixed models, using available molecular data for prediction is optimal if this covariance is known. Under this assumption, adding individuals to the analysis should never be detrimental. However, some empirical studies have shown that increasing training population size decreased prediction accuracy. Recently, results from theoretical models indicated that even if marker density is high and the genetic architecture of traits is controlled by many loci with small additive effects, the covariance between individuals, which depends on relationships at causal loci, is not always well estimated by the whole-genome kinship. We propose an alternative covariance estimator, named K-kernel, to account for potential genetic heterogeneity between populations characterized by a lack of genetic correlation, and to limit the information flow between a priori unknown populations in a trait-specific manner. This is similar to a multi-trait model; parameters are estimated by REML, and in extreme cases the model can allow for an independent genetic architecture between populations. As such, K-kernel is useful for studying the design of training populations. K-kernel was compared to other covariance estimators or kernels on several datasets to examine its fit to the data, cross-validated accuracy and suitability for GWAS. It provides a significantly better fit to the data than the genomic best linear unbiased prediction model and, in some cases, performs better than other kernels such as the Gaussian kernel, as shown by an empirical null distribution. In GWAS simulations, alternative kernels control type I errors as well as or better than the classical whole-genome kinship and increase statistical power. No or small gains were observed in cross-validated prediction accuracy. This alternative covariance estimator can be used to gain insight into trait-specific genetic heterogeneity by identifying relevant sub-populations that lack genetic correlation between them; genetic correlation can be set to 0 between identified sub-populations by automatically selecting relevant sets of individuals for inclusion in the training population. It may also increase statistical power in GWAS.
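A schematic of the underlying idea can be sketched in a few lines: start from a whole-genome kinship (here VanRaden's estimator) and shrink the between-group blocks toward zero. The grouping and the shrinkage factor below are illustrative assumptions; in the paper these quantities are estimated by REML.

```python
# Schematic sketch of the K-kernel idea: suppress covariance between
# putative sub-populations in a whole-genome kinship matrix.
import numpy as np

def vanraden_kinship(X):
    """X: n x m matrix of marker genotypes coded 0/1/2."""
    p = X.mean(axis=0) / 2.0                   # allele frequencies
    Z = X - 2.0 * p                            # centered genotypes
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

def k_kernel(K, groups, rho=0.0):
    """Scale between-group covariance by rho (rho=0: independent groups)."""
    same = np.equal.outer(groups, groups)
    return np.where(same, K, rho * K)

X = np.random.binomial(2, 0.3, size=(10, 200)).astype(float)
groups = np.array([0] * 5 + [1] * 5)           # a priori candidate split
K = vanraden_kinship(X)
K0 = k_kernel(K, groups, rho=0.0)
```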
Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K
2017-10-17
Quantitative computed tomography has been posed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways-fall-on-the-hip loading configuration. Significant differences for all outcome measures, except integral bone volume, were observed between the 2 reconstruction kernels. Volumetric bone mineral density measured using images reconstructed by the standard kernel was significantly lower (6.7%, p < 0.001) when compared with images reconstructed using the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured in images reconstructed by the standard kernel were significantly lower (16.5%, p < 0.001, and 18.2%, p < 0.001, respectively) when compared with the images reconstructed by the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
Richardson-Lucy deblurring for the star scene under a thinning motion path
NASA Astrophysics Data System (ADS)
Su, Laili; Shao, Xiaopeng; Wang, Lin; Wang, Haixin; Huang, Yining
2015-05-01
This paper puts emphasis on how to model and correct image blur that arises from a camera's ego motion while observing a distant star scene. Given the importance of accurately estimating the point spread function (PSF), a new method is employed to obtain the blur kernel by thinning the star motion path. In particular, we present how the blurred star image can be corrected to reconstruct the clear scene using a thinned-motion-path blur model that describes the camera's path. Building the blur kernel from the thinned motion path is more effective at modeling the spatially varying motion blur introduced by the camera's ego motion than conventional blind, kernel-based PSF parameterization. To obtain the reconstructed image, an improved thinning algorithm is first used to extract the star point trajectory, and thus the blur kernel of the motion-blurred star image. We then detail how the motion blur model can be incorporated into the Richardson-Lucy (RL) deblurring algorithm and demonstrate its overall effectiveness. In addition, compared with conventional blur kernel estimation, experimental results show that the proposed thinning-based method is of lower complexity, higher efficiency and better accuracy, which contributes to better restoration of motion-blurred star images.
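The RL update itself is compact. Below is a minimal sketch that assumes the blur kernel (PSF) has already been extracted from the thinned star trajectory; the linear-motion PSF and star field are synthetic stand-ins.

```python
# Minimal Richardson-Lucy deconvolution with a known PSF.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        denom = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(denom, eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Synthetic linear motion-blur kernel (the paper derives it by thinning)
psf = np.zeros((15, 15)); psf[7, 2:13] = 1.0; psf /= psf.sum()
scene = np.zeros((64, 64)); scene[20, 30] = scene[40, 12] = 1.0  # "stars"
blurred = fftconvolve(scene, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```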
Increasing the Size of Microwave Popcorn
NASA Astrophysics Data System (ADS)
Smoyer, Justin
2005-03-01
Each year Americans consume approximately 17 billion quarts of popcorn. Since the 1940s, microwaves have been the heating source of choice for most. By treating the popping mechanism as a thermodynamic system, it has been shown mathematically and experimentally that reducing the pressure surrounding the unpopped kernels results in an increased popped volume [Quinn et al, http://xxx.lanl.gov/abs/cond-mat/0409434 v1 2004]. In this project an alternate method of popping with the microwave was used to further test and confirm this hypothesis. Numerous experimental trials were run to test the validity of the theory. The results show a significant increase in the average kernel size as well as a reduction in the number of unpopped kernels.
A fast and objective multidimensional kernel density estimation method: fastKDE
O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.; ...
2016-03-07
Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
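The conditional-PDF capability can be illustrated with any bivariate KDE. The sketch below uses scipy's Gaussian KDE as a stand-in for the self-consistent Bernacchia-Pigolotti estimator behind fastKDE, and computes p(y|x) = p(x,y)/p(x) on a grid.

```python
# Sketch: deriving a conditional PDF from a bivariate KDE.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.5 * x**2 + 0.3 * rng.normal(size=5000)   # nonlinear relationship

kde_xy = gaussian_kde(np.vstack([x, y]))       # joint p(x, y)
kde_x = gaussian_kde(x)                        # marginal p(x)

xg = np.linspace(-3, 3, 61)
yg = np.linspace(-1, 5, 61)
XX, YY = np.meshgrid(xg, yg)
p_xy = kde_xy(np.vstack([XX.ravel(), YY.ravel()])).reshape(XX.shape)
p_y_given_x = p_xy / kde_x(xg)[None, :]        # divide each column by p(x)
```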
Heat kernel and Weyl anomaly of Schrödinger invariant theory
NASA Astrophysics Data System (ADS)
Pal, Sridip; Grinstein, Benjamín
2017-12-01
We propose a method inspired by discrete light cone quantization to determine the heat kernel for a Schrödinger field theory (Galilean boost invariant with z = 2 anisotropic scaling symmetry) living in d + 1 dimensions, coupled to a curved Newton-Cartan background, starting from the heat kernel of a relativistic conformal field theory (z = 1) living in d + 2 dimensions. We use this method to show that the Schrödinger field theory of a complex scalar field cannot have any Weyl anomalies. To be precise, we show that the Weyl anomaly A_{d+1}^G for the Schrödinger theory is related to the Weyl anomaly of a free relativistic scalar CFT A_{d+2}^R via A_{d+1}^G = 2πδ(m) A_{d+2}^R, where m is the charge of the scalar field under the particle number symmetry. We provide further evidence for the vanishing anomaly by evaluating Feynman diagrams to all orders of perturbation theory. We present an explicit calculation of the anomaly using a regulated Schrödinger operator, without using the null cone reduction technique. We generalize our method to show that a similar result holds for theories with a single time derivative and with even z > 2.
Spectral imaging using consumer-level devices and kernel-based regression.
Heikkinen, Ville; Cámara, Clara; Hirvonen, Tapani; Penttinen, Niko
2016-06-01
Hyperspectral reflectance factor image estimations were performed in the 400-700 nm wavelength range using a portable consumer-level laptop display as an adjustable light source for a trichromatic camera. Targets of interest were ColorChecker Classic samples, Munsell Matte samples, geometrically challenging tempera icon paintings from the turn of the 20th century, and human hands. Measurements and simulations were performed using a Nikon D80 RGB camera and a Dell Vostro 2520 laptop screen as a light source. Estimations were performed without knowledge of the spectral characteristics of the devices, emphasizing simplicity in the training sets and in estimation model optimization. Spectral and color error images are shown for the estimations, using line-scanned hyperspectral images as the ground truth. Estimations were performed using kernel-based regression models via a first-degree inhomogeneous polynomial kernel and a Matérn kernel, where, in the latter case, the median heuristic for model optimization and a link function for bounded estimation were evaluated. Results suggest modest training-set requirements and show that, relative to a weak training set (Digital ColorChecker SG), all estimation models markedly improve accuracy with respect to the ΔE00 color distance (up to 99% for paintings and hands) and the Pearson distance (up to 98% for paintings and 99% for hands) when small representative training data are used in the estimation.
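The estimation step can be sketched as kernel ridge regression with a Matérn kernel mapping RGB responses to reflectance spectra. The shapes, length scale, and regularization below are assumptions; the paper additionally tunes the kernel via the median heuristic and uses a link function for bounded estimates.

```python
# Sketch: Matern-kernel ridge regression from RGB triplets to spectra.
import numpy as np
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)
X_train = rng.random((96, 3))        # RGB responses of training patches
Y_train = rng.random((96, 61))       # reflectance, 400-700 nm at 5 nm steps

kernel = Matern(length_scale=0.5, nu=1.5)
lam = 1e-3
K = kernel(X_train)                                    # Gram matrix
alpha = np.linalg.solve(K + lam * np.eye(len(K)), Y_train)

X_test = rng.random((10, 3))
Y_hat = kernel(X_test, X_train) @ alpha                # estimated spectra
```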
ERIC Educational Resources Information Center
Ferrando, Pere J.
2004-01-01
This study used kernel-smoothing procedures to estimate the item characteristic functions (ICFs) of a set of continuous personality items. The nonparametric ICFs were compared with the ICFs estimated (a) by the linear model and (b) by Samejima's continuous-response model. The study was based on a conditioned approach and used an error-in-variables…
Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H
2016-05-01
The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.
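To make the contrast with linear inversion concrete, here is a generic sketch of a model-based nonlinear inversion: a least-squares fit that jointly recovers abundance and a behavior parameter (mean tilt angle) from multi-frequency backscatter. The forward model sigma_bs is a toy stand-in, not one of the paper's sophisticated scattering models, and all values are illustrative.

```python
# Generic sketch of scattering-model-based nonlinear inversion.
import numpy as np
from scipy.optimize import least_squares

freqs = np.array([38e3, 70e3, 120e3, 200e3])  # Hz

def sigma_bs(freq, tilt_deg):
    """Toy backscattering cross-section: frequency- and tilt-dependent."""
    return 1e-8 * (freq / 38e3) ** 1.5 * np.cos(np.radians(tilt_deg)) ** 2

def residuals(params, sv_obs):
    abundance, tilt = params
    sv_model = abundance * sigma_bs(freqs, tilt)
    return np.log(sv_model) - np.log(sv_obs)

# Synthetic "observed" volume backscatter with multiplicative noise
sv_obs = 500.0 * sigma_bs(freqs, 20.0) * np.exp(0.05 * np.random.randn(4))
fit = least_squares(residuals, x0=[100.0, 5.0], args=(sv_obs,),
                    bounds=([1.0, 0.0], [1e5, 60.0]))
abundance_hat, tilt_hat = fit.x
```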
Kernel Wiener filter and its application to pattern recognition.
Yoshino, Hirokazu; Dong, Chen; Washizawa, Yoshikazu; Yamashita, Yukihiko
2010-11-01
The Wiener filter (WF) is widely used for inverse problems. From an observed signal, it provides the best estimated signal among linear operators, with respect to the squared error averaged over the original and observed signals. The kernel WF (KWF), extended directly from the WF, has the drawback that additive noise must be handled through samples. Since the computational complexity of kernel methods depends on the number of samples, this leads to a huge computational cost. By using a first-order approximation of the kernel functions, we realize a KWF that can handle such noise not through samples but as a random variable. We also propose an error estimation method for kernel filters using these approximations. To show the advantages of the proposed methods, we conducted experiments on image denoising and error estimation. We also apply the KWF to classification, since the KWF can provide an approximation of the maximum a posteriori classifier, which gives the best recognition accuracy. The noise term in the criterion can be used for classification in the presence of noise, or as a new regularization that suppresses changes in the input space, whereas the ordinary regularization for kernel methods suppresses changes in the feature space. To show the advantages of the proposed methods, we conducted experiments on binary and multiclass classification and on classification in the presence of noise.
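For contrast with the kernel WF, the classical linear Wiener deconvolution has a closed form in the frequency domain, F_hat = H*G/(|H|^2 + 1/SNR); below is a minimal sketch with an assumed blur kernel and SNR.

```python
# Classical (linear) Wiener deconvolution in the frequency domain.
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * G))

psf = np.outer(np.hanning(9), np.hanning(9)); psf /= psf.sum()  # assumed blur
image = np.zeros((64, 64)); image[32, 32] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) *
                               np.fft.fft2(psf, s=image.shape)))
restored = wiener_deconvolve(blurred + 0.01 * np.random.randn(64, 64), psf)
```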
Graphene-like carbon synthesized from popcorn flakes
NASA Astrophysics Data System (ADS)
Mendoza, D.; Flores, C. B.; Berrú, R. Y. Sato
2015-01-01
The synthesis of graphene-like carbon using popcorn kernels as a renewable resource is presented. In a first step, popcorn kernels were heated to produce popcorn flakes with a spongy appearance consisting of a polygonal cellular structure. In a second step, the flakes were treated at high temperature in an inert atmosphere to produce carbonization. Raman spectroscopy shows a graphene-like structure with a high degree of disorder.
Kernel and divergence techniques in high energy physics separations
NASA Astrophysics Data System (ADS)
Bouř, Petr; Kůs, Václav; Franc, Jiří
2017-10-01
Binary decision trees under the Bayesian decision technique are used for supervised classification of high-dimensional data. We demonstrate the great potential of adaptive kernel density estimation as the nested separation method of the supervised binary divergence decision tree. We also provide a proof of an alternative computing approach for kernel estimates utilizing the Fourier transform. Further, we apply our method to a Monte Carlo data set from the Tevatron particle accelerator's DØ experiment at Fermilab and provide final top-antitop signal separation results. We achieved up to 82% AUC while using the restricted feature selection entering the signal separation procedure.
Estimation of kernels mass ratio to total in-shell peanuts using low-cost RF impedance meter
USDA-ARS?s Scientific Manuscript database
In this study, the percentage of total kernel mass within a given mass of in-shell peanuts was estimated nondestructively using a low-cost RF impedance meter. Peanut samples were divided into two groups: one for calibration and the other for validation. Each group contained 25 samples...
NASA Astrophysics Data System (ADS)
Lindemer, T. B.; Voit, S. L.; Silva, C. M.; Besmann, T. M.; Hunt, R. D.
2014-05-01
The US Department of Energy is developing a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with uranium nitride (UN) kernels with diameters near 825 μm. This effort explores factors involved in the conversion of uranium oxide-carbon microspheres into UN kernels. An analysis of previous studies with sufficient experimental details is provided. Thermodynamic calculations were made to predict pressures of carbon monoxide and other relevant gases for several reactions that can be involved in the conversion of uranium oxides and carbides into UN. Uranium oxide-carbon microspheres were heated in a microbalance with an attached mass spectrometer to determine details of calcining and carbothermic conversion in argon, nitrogen, and vacuum. A model was derived from experiments on the vacuum conversion to uranium oxide-carbide kernels. UN-containing kernels were fabricated using this vacuum conversion as part of the overall process. Carbonitride kernels of ∼89% of theoretical density were produced along with several observations concerning the different stages of the process.
Accurately estimating PSF with straight lines detected by Hough transform
NASA Astrophysics Data System (ADS)
Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong
2018-04-01
This paper presents an approach to estimating the point spread function (PSF) from low resolution (LR) images. Existing techniques usually rely on accurate detection of the end points of the profile normal to edges. In practice, however, it is often a great challenge to accurately localize edge profiles in an LR image, which leads to poor estimation of the PSF of the lens that took the image. For precise PSF estimation, this paper proposes first estimating a 1-D PSF kernel from straight lines, and then robustly obtaining the 2-D PSF from the 1-D kernel by least squares techniques and random sample consensus. The Canny operator is applied to the LR image to obtain edges, and the Hough transform is then utilized to extract straight lines of all orientations. Estimating the 1-D PSF kernel from straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is investigated on both natural and synthetic images for estimating the PSF. Experimental results show that the proposed method outperforms the state-of-the-art and does not rely on accurate edge detection.
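The edge-and-line front end described here maps directly onto standard OpenCV calls. The sketch below runs Canny edge detection followed by the standard Hough transform; the thresholds and file name are assumptions, and sampling the blurred profile across each detected line to obtain the 1-D kernel is left schematic.

```python
# Sketch: Canny edges + Hough line extraction as a PSF-estimation front end.
import cv2
import numpy as np

img = cv2.imread("blurred.png", cv2.IMREAD_GRAYSCALE)  # hypothetical LR image
edges = cv2.Canny(img, 50, 150)

# rho resolution 1 px, theta resolution 1 degree, accumulator threshold 100
lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)
if lines is not None:
    for rho, theta in lines[:, 0]:
        # Each (rho, theta) pair defines one candidate straight edge whose
        # blurred profile could be sampled to estimate the 1-D PSF kernel.
        print(rho, np.degrees(theta))
```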
Vokoun, Jason C.; Rabeni, Charles F.
2005-01-01
Flathead catfish Pylodictis olivaris were radio-tracked in the Grand River and Cuivre River, Missouri, from late July until they moved to overwintering habitats in late October. Fish moved within a definable area, and although occasional long-distance movements occurred, the fish typically returned to the previously occupied area. Seasonal home range was calculated with the use of kernel density estimation, which can be interpreted as a probabilistic utilization distribution that documents the internal structure of the estimate by delineating portions of the range that were used a specified percentage of the time. A traditional linear range also was reported. Most flathead catfish (89%) had one 50% kernel-estimated core area, whereas 11% of the fish split their time between two core areas. Core areas were typically in the middle of the 90% kernel-estimated home range (58%), although several fish had core areas in upstream (26%) or downstream (16%) portions of the home range. Home-range size did not differ based on river, sex, or size and was highly variable among individuals. The median 95% kernel estimate was 1,085 m (range, 70-69,090 m) for all fish. The median 50% kernel-estimated core area was 135 m (10-2,260 m). The median linear range was 3,510 m (150-50,400 m). Fish pairs with core areas in the same and neighboring pools had static joint space use values of up to 49% (area of intersection index), indicating substantial overlap and use of the same area. However, all fish pairs had low dynamic joint space use values (<0.07; coefficient of association), indicating that fish pairs were temporally segregated, rarely occurring in the same location at the same time.
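For readers who want to reproduce this style of analysis, the sketch below computes a fixed-kernel utilization distribution from synthetic relocation points and extracts 95% home-range and 50% core-area isopleths. It is a generic illustration, not the estimator or software used in the study.

```python
# Sketch: kernel utilization distribution and 95%/50% isopleth areas.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
xy = rng.normal(loc=[0, 0], scale=[300, 120], size=(150, 2)).T  # (2, n), meters

kde = gaussian_kde(xy)
xg = np.linspace(-1500, 1500, 200)
yg = np.linspace(-600, 600, 200)
XX, YY = np.meshgrid(xg, yg)
dens = kde(np.vstack([XX.ravel(), YY.ravel()]))
cell = (xg[1] - xg[0]) * (yg[1] - yg[0])

def isopleth_area(dens, cell, level):
    """Area of the smallest region containing `level` of the total mass."""
    order = np.sort(dens)[::-1]
    cum = np.cumsum(order) * cell
    thresh = order[np.searchsorted(cum, level * cum[-1])]
    return np.sum(dens >= thresh) * cell

print("95% home range (m^2):", isopleth_area(dens, cell, 0.95))
print("50% core area  (m^2):", isopleth_area(dens, cell, 0.50))
```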
Heat damage and in vitro starch digestibility of puffed wheat kernels.
Cattaneo, Stefano; Hidalgo, Alyssa; Masotti, Fabio; Stuknytė, Milda; Brandolini, Andrea; De Noni, Ivano
2015-12-01
The effect of processing conditions on heat damage, starch digestibility, release of advanced glycation end products (AGEs) and antioxidant capacity of puffed cereals was studied. The determination of several markers arising from the Maillard reaction identified pyrraline (PYR) and hydroxymethylfurfural (HMF) as the most reliable indices of the heat load applied during puffing. The considerable heat load was evidenced by the high levels of both PYR (57.6-153.4 mg kg(-1) dry matter) and HMF (13-51.2 mg kg(-1) dry matter). For reasons of cost and simplicity, HMF appeared to be the most appropriate index for puffed cereals. Puffing influenced in vitro starch digestibility, with most of the starch (81-93%) hydrolyzed to maltotriose, maltose and glucose, whereas only limited amounts of AGEs were released. The considerable antioxidant capacity of the digested puffed kernels can be ascribed both to newly formed Maillard reaction products and to the conditions adopted during in vitro digestion. Copyright © 2015 Elsevier Ltd. All rights reserved.
Digestibility of solvent-treated Jatropha curcas kernel by broiler chickens in Senegal.
Nesseim, Thierry Daniel Tamsir; Dieng, Abdoulaye; Mergeai, Guy; Ndiaye, Saliou; Hornick, Jean-Luc
2015-12-01
Jatropha curcas is a drought-resistant shrub belonging to the Euphorbiaceae family. The kernel contains approximately 60 % lipid in dry matter, and the meal obtained after oil extraction could be an exceptional source of protein for family poultry farming, provided curcin and, especially, the partially lipophilic diterpene derivatives known as phorbol esters are absent. The nutrient digestibility of J. curcas kernel meal (JKM), obtained after partial physicochemical deoiling, was thus evaluated in broiler chickens. Twenty broiler chickens, 6 weeks old, were maintained in individual metabolic cages and divided into four groups of five animals, according to a 4 × 4 Latin square design in which deoiled JKM was incorporated into ground corn at 0, 4, 8, and 12 % levels (diets 0, 4, 8, and 12 J), allowing measurement of nutrient digestibility by the differential method. The dry matter (DM) and organic matter (OM) digestibility of the diets was affected to a low extent by JKM (85 and 86 % in 0 J and 81 % in 12 J, respectively), such that the DM and OM digestibility of JKM was estimated to be close to 50 %. The ether extract (EE) digestibility of JKM remained high, at about 90 %, while crude protein (CP) and crude fiber (CF) digestibility were largely impacted by JKM, with values close to 40 % at the highest levels of incorporation. J. curcas kernel thus presents variable nutrient digestibility and has adverse effects on the CP and CF digestibility of the diet. The effects of an additional heat or biological treatment of JKM remain to be assessed.
NASA Astrophysics Data System (ADS)
Du, Peijun; Tan, Kun; Xing, Xiaoshi
2010-12-01
Combining Support Vector Machine (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in a Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which results in time-consuming optimization and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Implications for semiparametric estimation are proposed in this paper. Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing imagery with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to evaluate the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. Compared with some traditional classifiers, including Spectral Angle Mapping (SAM) and Minimum Distance Classification (MDC), and with an SVM classifier using the Radial Basis Function kernel, the proposed wavelet SVM classifier using the wavelet kernel function in Reproducing Kernel Hilbert Space clearly improves classification accuracy.
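A wavelet kernel can be plugged into an off-the-shelf SVM as a custom Gram-matrix callable. The sketch below uses the Morlet-type mother wavelet h(x) = cos(1.75x)exp(-x^2/2) common in the wavelet-SVM literature; the dilation parameter and synthetic data are assumptions, and this is not the authors' Coiflet construction.

```python
# Sketch: SVM with a custom wavelet kernel in scikit-learn.
import numpy as np
from sklearn.svm import SVC

def wavelet_kernel(X, Y, a=1.0):
    """K(x, y) = prod_d h((x_d - y_d) / a), returned as a Gram matrix."""
    diff = (X[:, None, :] - Y[None, :, :]) / a
    h = np.cos(1.75 * diff) * np.exp(-diff**2 / 2.0)
    return np.prod(h, axis=2)

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 8))                  # e.g., 8 spectral bands
y = (X[:, 0] * X[:, 1] > 0).astype(int)        # synthetic labels

clf = SVC(kernel=wavelet_kernel, C=10.0)
clf.fit(X, y)
print(clf.score(X, y))
```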
On- and off-axis spectral emission features from laser-produced gas breakdown plasmas
NASA Astrophysics Data System (ADS)
Harilal, S. S.; Skrodzki, P. J.; Miloshevsky, A.; Brumfield, B. E.; Phillips, M. C.; Miloshevsky, G.
2017-06-01
Laser-heated gas breakdown plasmas or sparks emit profoundly in the ultraviolet and visible region of the electromagnetic spectrum with contributions from ionic, atomic, and molecular species. Laser-created kernels expand into a cold ambient at high velocities during their early lifetime, followed by confinement of the plasma kernel and eventual collapse. However, the plasma kernels produced during laser breakdown of gases are also capable of exciting and ionizing the surrounding ambient medium. Two mechanisms can be responsible for excitation and ionization of the surrounding ambient: photoexcitation and ionization by intense ultraviolet emission from the sparks produced during the early times of their creation, and/or heating by strong shocks generated by the kernel during its expansion into the ambient. In this study, an investigation is made of the spectral features of on- and off-axis emission of laser-induced plasma breakdown kernels generated in atmospheric pressure conditions, with the aim of elucidating the mechanisms leading to ambient excitation and emission. Pulses from an Nd:YAG laser emitting at 1064 nm with a pulse duration of 6 ns are used to generate plasma kernels. Laser sparks were generated in air, argon, and helium gases to provide different physical properties of expansion dynamics and plasma chemistry, considering the differences in laser absorption properties, mass density, and speciation. Point shadowgraphy and time-resolved imaging were used to evaluate the shock wave and spark self-emission morphology at early and late times, while space- and time-resolved spectroscopy is used for evaluating the emission features and for inferring plasma physical conditions at on- and off-axis positions. The structure and dynamics of the plasma kernel obtained using imaging techniques are also compared to numerical simulations using a computational fluid dynamics code. The emission from the kernel showed that spectral features from ions, atoms, and molecules are separated in time, with early-time temperatures and densities in excess of 35,000 K and 4 × 10^18 cm^-3 and with thermal equilibrium established. However, the emission from off-kernel positions in the breakdown plasmas showed enhanced ultraviolet radiation with the presence of N2 bands and is represented by non-local thermodynamic equilibrium (non-LTE) conditions. Our results also highlight that the ultraviolet radiation emitted during the early time of spark evolution is the predominant source of the photo-excitation of the surrounding medium.
Many Molecular Properties from One Kernel in Chemical Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole
We introduce property-independent kernels for machine learning modeling of arbitrarily many molecular properties. The kernels encode molecular structures for training sets of varying size, as well as similarity measures sufficiently diffuse in chemical space to sample over all training molecules. Given the corresponding molecular reference properties, they enable the instantaneous generation of ML models which can systematically be improved through the addition of more data. This idea is exemplified for single-kernel-based modeling of internal energy, enthalpy, free energy, heat capacity, polarizability, electronic spread, zero-point vibrational energy, energies of frontier orbitals, HOMO-LUMO gap, and the highest fundamental vibrational wavenumber. Models of these properties are trained and tested using 112k organic molecules of similar size. The resulting models are discussed, as well as the kernels' use for generating and using other property models.
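The computational pattern behind "one kernel, many properties" is a single factorization reused across property vectors. The sketch below uses a Gaussian kernel on random descriptor vectors as a stand-in for the molecular representation; all shapes and hyperparameters are assumptions.

```python
# Sketch: factorize the training kernel once, solve for every property.
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.spatial.distance import cdist

rng = np.random.default_rng(4)
D = rng.normal(size=(500, 30))        # descriptors for 500 molecules
P = rng.normal(size=(500, 12))        # 12 reference properties per molecule

sigma, lam = 10.0, 1e-8
K = np.exp(-cdist(D, D, "sqeuclidean") / (2 * sigma**2))
cho = cho_factor(K + lam * np.eye(len(K)))

# One Cholesky factorization; one solve per property (here, all at once)
alpha = cho_solve(cho, P)             # (500, 12): columns are property models

D_new = rng.normal(size=(5, 30))
K_new = np.exp(-cdist(D_new, D, "sqeuclidean") / (2 * sigma**2))
P_hat = K_new @ alpha                 # predicted 12 properties for 5 molecules
```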
Modeling adaptive kernels from probabilistic phylogenetic trees.
Nicotra, Luca; Micheli, Alessio
2009-01-01
Modeling phylogenetic interactions is an open issue in many computational biology problems. In the context of gene function prediction, we introduce a class of kernels for structured data leveraging a hierarchical probabilistic modeling of phylogeny among species. We derive three kernels belonging to this setting: a sufficient statistics kernel, a Fisher kernel, and a probability product kernel. The new kernels are used in the context of support vector machine learning. The kernels' adaptivity is obtained through the estimation of the parameters of a tree-structured model of evolution, using as observed data phylogenetic profiles encoding the presence or absence of specific genes in a set of fully sequenced genomes. We report results obtained in the prediction of the functional class of the proteins of the budding yeast Saccharomyces cerevisiae, which compare favorably to a standard vector-based kernel and to a non-adaptive tree kernel function. A further comparative analysis is performed in order to assess the impact of the different components of the proposed approach. We show that the key features of the proposed kernels are the adaptivity to the input domain and the ability to deal with structured data interpreted through a graphical model representation.
ERIC Educational Resources Information Center
Choi, Sae Il
2009-01-01
This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…
Bhattacharya, Abhishek; Dunson, David B.
2012-01-01
This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295
NASA Astrophysics Data System (ADS)
Abdulhameed, M.; Vieru, D.; Roslan, R.
2017-10-01
This paper investigates the electro-magneto-hydrodynamic flow of a non-Newtonian biofluid, with heat transfer, through a cylindrical microchannel. The fluid is driven by an arbitrary time-dependent pressure gradient, an external electric field and an external magnetic field. The governing equations are considered as fractional partial differential equations based on the Caputo-Fabrizio time-fractional derivative without singular kernel. The usefulness of fractional calculus in studying fluid flows and heat and mass transfer phenomena is well established, and several experimental measurements have led to the conclusion that, in such problems, models described by fractional differential equations are more suitable. The most common time-fractional derivative used in continuum mechanics is the Caputo derivative. However, two disadvantages appear when this derivative is used. First, its defining kernel is a singular function; second, the analytical solutions are expressed through generalized functions (Mittag-Leffler, Lorenzo-Hartley, Robotnov, etc.) which, generally, are not well suited to numerical calculation. The newer Caputo-Fabrizio time-fractional derivative, which has no singular kernel, is more suitable for solving various theoretical and practical problems involving fractional differential equations. Using the Caputo-Fabrizio derivative, calculations are simpler, and the solutions obtained are expressed in terms of elementary functions. Analytical solutions for the biofluid velocity and thermal transport are obtained by means of the Laplace and finite Hankel transforms. The influence of the fractional parameter, Eckert number and Joule heating parameter on the biofluid velocity and thermal transport is numerically analyzed and graphically presented. This can be important in biochip technology, making this analysis technique effective for controlling nanovolume bioliquid samples in microfluidic devices used for biological analysis and medical diagnosis.
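For reference, the two derivatives being contrasted can be written side by side; this is the standard textbook form (for 0 < α < 1, with normalization M(α) satisfying M(0) = M(1) = 1), not notation taken from this paper.

```latex
% Caputo (singular power-law kernel) vs. Caputo-Fabrizio (nonsingular
% exponential kernel), for 0 < \alpha < 1:
\[
{}^{C}D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)}
  \int_0^t \frac{f'(\tau)}{(t-\tau)^{\alpha}} \, d\tau ,
\qquad
{}^{CF}D_t^{\alpha} f(t) = \frac{M(\alpha)}{1-\alpha}
  \int_0^t f'(\tau)\,
  \exp\!\left(-\frac{\alpha\,(t-\tau)}{1-\alpha}\right) d\tau .
\]
```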
Boundary conditions for gas flow problems from anisotropic scattering kernels
NASA Astrophysics Data System (ADS)
To, Quy-Dong; Vu, Van-Huyen; Lauriat, Guy; Léonard, Céline
2015-10-01
The paper presents an interface model for gas flowing through a channel bounded by anisotropic wall surfaces. Using anisotropic scattering kernels and the Chapman-Enskog phase density, the boundary conditions (BCs) for velocity, temperature, and discontinuities including velocity slip and temperature jump at the wall are obtained. Two scattering kernels, the Dadzie-Méolans (DM) kernel and the generalized anisotropic Cercignani-Lampis (ACL) kernel, are examined in the present paper, yielding simple BCs at the wall-fluid interface. With these two kernels, we rigorously recover the analytical expression for orientation-dependent slip shown in our previous works [Pham et al., Phys. Rev. E 86, 051201 (2012) and To et al., J. Heat Transfer 137, 091002 (2015)], which is in good agreement with molecular dynamics simulation results. More importantly, our models include both the thermal transpiration effect and new equations for the temperature jump. While the same expression, depending on the two tangential accommodation coefficients, is obtained for the slip velocity, the DM and ACL temperature equations are significantly different. The derived BC equations associated with these two kernels are of interest for gas simulations, since they are able to capture the direction-dependent slip behavior of anisotropic interfaces.
How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.
Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J
2014-09-01
Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PsycINFO Database Record (c) 2014 APA, all rights reserved.
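As a minimal illustration of bandwidth selection, the sketch below contrasts Silverman's rule of thumb with a cross-validated bandwidth chosen by maximizing held-out log-likelihood (a stand-in for the SJDP plug-in, which common Python libraries do not implement directly); the data and search grid are assumptions.

```python
# Sketch: two bandwidth selectors for kernel density estimation.
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(5)
x = np.r_[rng.normal(-2, 0.5, 150), rng.normal(1, 1.0, 350)]  # bimodal sample

# Silverman's rule of thumb
n, sd = len(x), x.std(ddof=1)
iqr = np.subtract(*np.percentile(x, [75, 25]))
h_silverman = 0.9 * min(sd, iqr / 1.34) * n ** (-1 / 5)

# Cross-validated bandwidth (maximizes held-out log-likelihood)
grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                    {"bandwidth": np.logspace(-1.5, 0.5, 30)}, cv=5)
grid.fit(x[:, None])
h_cv = grid.best_params_["bandwidth"]
print(h_silverman, h_cv)
```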
KERNELHR: A program for estimating animal home ranges
Seaman, D.E.; Griffith, B.; Powell, R.A.
1998-01-01
Kernel methods are state of the art for estimating animal home-range area and utilization distribution (UD). The KERNELHR program was developed to provide researchers and managers a tool to implement this extremely flexible set of methods with many variants. KERNELHR runs interactively or from the command line on any personal computer (PC) running DOS. KERNELHR provides output of fixed and adaptive kernel home-range estimates, as well as density values in a format suitable for in-depth statistical and spatial analyses. An additional package of programs creates contour files for plotting in geographic information systems (GIS) and estimates core areas of ranges.
Rafal Podlaski; Francis A. Roesch
2014-01-01
Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghrayeb, S. Z.; Ouisloumen, M.; Ougouag, A. M.
2012-07-01
A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied.
A shock-capturing SPH scheme based on adaptive kernel estimation
NASA Astrophysics Data System (ADS)
Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime
2006-02-01
Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
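The density part of such a scheme can be illustrated compactly. The sketch below computes a 1-D SPH density summation with a per-particle smoothing length tied to the distance to the k-th nearest neighbor; it is a schematic of adaptive kernel estimation, not the paper's full shock-capturing scheme.

```python
# Sketch: 1-D SPH density summation with adaptive smoothing lengths.
import numpy as np

def cubic_spline_w(q):
    """Unnormalized 1-D cubic spline kernel; the 2/(3h) factor is applied
    by the caller."""
    return np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
           np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))

x = np.sort(np.random.default_rng(6).random(200)) ** 2   # clustered points
m = np.full_like(x, 1.0 / len(x))                        # particle masses

k = 8
dist = np.abs(x[:, None] - x[None, :])
h = np.sort(dist, axis=1)[:, k] / 2.0 + 1e-6             # adaptive h_i

q = dist / h[:, None]
rho = np.sum(m[None, :] * cubic_spline_w(q), axis=1) / (1.5 * h)
```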
Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.
Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit
2018-02-13
Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
1981-07-01
…the process is observed over all of (0,1], the reproducing kernel Hilbert space (RKHS) techniques developed by Parzen (1961a, 1961b) may be used to construct… The covariance kernel R for the process (1.1) is the reproducing kernel for a reproducing kernel Hilbert space which will be denoted H(R) (cf. …). From (2.6), it is known (cf. Eubank, Smith and Smith (1981a, 1981b)) that (i) H(R) is a Hilbert function space consisting of functions which satisfy, for f ∈ H(R), …
Seismic Imaging of VTI, HTI and TTI based on Adjoint Methods
NASA Astrophysics Data System (ADS)
Rusmanugroho, H.; Tromp, J.
2014-12-01
Recent studies show that isotropic seismic imaging based on adjoint methods reduces the low-frequency artifacts caused by diving waves, which commonly occur in two-way wave-equation migration, such as Reverse Time Migration (RTM). Here, we derive new expressions of sensitivity kernels for Vertical Transverse Isotropy (VTI) using the Thomsen parameters (ɛ, δ, γ) plus the P- and S-wave speeds (α, β), as well as via the Chen & Tromp (GJI 2005) parameters (A, C, N, L, F). For Horizontal Transverse Isotropy (HTI), these parameters depend on an azimuthal angle φ, with the tilt angle θ fixed at 90°, and for Tilted Transverse Isotropy (TTI), these parameters depend on both the azimuth and tilt angles. We calculate sensitivity kernels for each of these two approaches. Individual kernels ("images") are numerically constructed based on the interaction between the regular and adjoint wavefields in smoothed models which are in practice estimated through Full-Waveform Inversion (FWI). The final image is obtained by summing over all shots, which are well distributed to sample the target model properly. The impedance kernel, which is a sum of the sensitivity kernels of density and the Thomsen or Chen & Tromp parameters, looks crisp and promising for seismic imaging. The other kernels suffer from low-frequency artifacts, similar to traditional seismic imaging conditions. However, all sensitivity kernels are important for estimating the gradient of the misfit function, which, in combination with a standard gradient-based inversion algorithm, is used to minimize the objective function in FWI.
Omnibus Risk Assessment via Accelerated Failure Time Kernel Machine Modeling
Sinnott, Jennifer A.; Cai, Tianxi
2013-01-01
Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai et al., 2011). In this paper, we derive testing and prediction methods for KM regression under the accelerated failure time model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. PMID:24328713
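The KM framework scores nonlinear marker effects through a kernel matrix over subjects. As a rough orientation, here is a minimal kernel ridge regression sketch with an RBF kernel; it is a generic stand-in for the kernel-machine idea, not the authors' AFT-specific testing or estimation procedure, and all names and data are made up.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||x_i - z_j||^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam=1e-2, gamma=1.0):
    """Solve (K + lam*I) alpha = y; the fitted function is f(z) = K(z, X) @ alpha."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                 # 100 subjects, 5 genomic markers
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=100)
alpha = kernel_ridge_fit(X, y)
y_hat = rbf_kernel(X, X) @ alpha              # in-sample fitted risk scores
```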
An improved robust blind motion de-blurring algorithm for remote sensing images
NASA Astrophysics Data System (ADS)
He, Yulong; Liu, Jin; Liang, Yonghui
2016-10-01
Shift-invariant motion blur can be modeled as a convolution of the true latent image and the blur kernel with additive noise. Blind motion de-blurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion de-blurring algorithm that proves well suited to processing remote sensing images. We find that an inaccurate blur kernel is the main cause of low-quality restored images. To improve image quality, we make the following contributions. For robust kernel estimation, first, we adapt the multi-scale scheme to make sure that the edge map can be constructed accurately; second, an effective salient edge selection method based on RTV (Relative Total Variation) is used to extract salient structure from texture; third, an alternative iterative method is introduced to perform kernel optimization; in this step, we adopt the l1 and l0 norms as priors to remove noise and ensure the continuity of the blur kernel. For the final latent image reconstruction, an improved adaptive deconvolution algorithm based on the TV-l2 model is used to recover the latent image; we control the regularization weight adaptively in different regions according to local image characteristics in order to preserve tiny details and eliminate noise and ringing artifacts. Synthetic remote sensing images are used to test the proposed algorithm, and results demonstrate that it obtains an accurate blur kernel and achieves better de-blurring results.
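The forward model stated in the first sentence is y = k * x + n. Below is a minimal sketch of that model together with a frequency-domain Wiener deconvolution as the simplest non-blind recovery; the paper's adaptive TV-l2 reconstruction and kernel estimation are considerably more elaborate, and the toy image, kernel, and SNR here are assumptions.

```python
import numpy as np

def blur(img, kernel):
    """Circular convolution of an image with a blur kernel via FFT."""
    K = np.fft.fft2(kernel, s=img.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * K))

def wiener_deconv(obs, kernel, snr=100.0):
    """Wiener deconvolution: X = conj(K) * Y / (|K|^2 + 1/snr)."""
    K = np.fft.fft2(kernel, s=obs.shape)
    H = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(obs) * H))

rng = np.random.default_rng(1)
x = np.zeros((64, 64)); x[24:40, 24:40] = 1.0     # toy latent image
g = np.exp(-np.linspace(-2, 2, 9) ** 2)
k = np.outer(g, g); k /= k.sum()                  # separable Gaussian blur
y = blur(x, k) + 0.01 * rng.normal(size=x.shape)  # blurred + noise
x_hat = wiener_deconv(y, k)                       # recovered latent image
```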
On the heat trace of Schroedinger operators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banuelos, R.; Sa Barreto, A.
1995-12-31
Trace formulae for heat kernels of Schroedinger operators have been widely studied in connection with spectral and scattering theory. They have been used to obtain information about a potential from its spectrum, or from its scattering data, and vice versa. Using elementary Fourier transform methods we obtain a formula for the general coefficient in the asymptotic expansion of the trace of the heat kernel of the Schroedinger operator −Δ + V, as t ↓ 0, with V ∈ S(R^n), the class of functions with rapid decay at infinity. In dimension n = 1 a recurrence formula for the general coefficient in the expansion is obtained in [6]. However, the KdV methods used there do not seem to generalize to higher dimensions. Using the formula of [6] and the symmetry of some integrals, Y. Colin de Verdière has computed the first four coefficients for potentials in three space dimensions. Also in [1] a different method is used to compute heat coefficients for differential operators on manifolds. 14 refs.
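For orientation, the expansion in question has the schematic small-t form below. The integrated coefficients shown are the standard ones quoted in the heat-kernel literature for −Δ + V on R^n; they are a sketch from general theory, not copied from the paper, so signs and normalizations should be checked against [6].

```latex
% Small-t asymptotics of the relative heat trace (schematic):
\operatorname{Tr}\bigl(e^{-t(-\Delta+V)} - e^{t\Delta}\bigr)
  \sim (4\pi t)^{-n/2} \sum_{k\ge 1} a_k\, t^k , \qquad t \downarrow 0,
% with the first few integrated coefficients in standard normalization:
a_1 = -\int V \,dx, \qquad
a_2 = \tfrac12 \int V^2 \,dx, \qquad
a_3 = -\int \Bigl(\tfrac16 V^3 + \tfrac1{12}\,|\nabla V|^2\Bigr)\,dx .
```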
PERI - Auto-tuning Memory Intensive Kernels for Multicore
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H; Williams, Samuel; Datta, Kaushik
2008-06-24
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
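The Stencil benchmark referred to above applies a nearest-neighbour update of the explicit heat equation at every grid point. Here is a minimal 2D sketch in NumPy of what one such sweep computes; the paper's tuned kernels are written in C with platform-specific optimizations, so this is only the reference computation, with an assumed diffusion coefficient and fixed boundaries.

```python
import numpy as np

def heat_step(u, alpha=0.1):
    """One explicit finite-difference step of du/dt = alpha * laplacian(u)
    on a 2D grid (interior update; boundary values held fixed)."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    out = u + alpha * lap
    # Restore the boundary so the wrap-around from np.roll has no effect.
    out[0, :], out[-1, :], out[:, 0], out[:, -1] = u[0, :], u[-1, :], u[:, 0], u[:, -1]
    return out

u = np.zeros((128, 128)); u[64, 64] = 1.0   # point source of heat
for _ in range(100):
    u = heat_step(u)
```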
Famurewa, Ademola C; Nwankwo, Onyebuchi E; Folawiyo, Abiola M; Igwe, Emeka C; Epete, Michael A; Ufebe, Odomero G
2017-01-01
The literature reports that the health benefits of vegetable oil can be deteriorated by repeated heating, which leads to lipid oxidation and the formation of free radicals. Virgin coconut oil (VCO) is emerging as a functional food oil and its health benefits are attributed to its potent polyphenolic compounds. We investigated the beneficial effect of VCO supplementation on lipid profile and liver and kidney markers in rats fed repeatedly heated palm kernel oil (HPO). Rats were divided into four groups (n = 5). The control group rats were fed a normal diet; group 2 rats were fed a 10% VCO-supplemented diet; group 3 was administered 10 ml HPO/kg b.w. orally; group 4 was fed 10% VCO + 10 ml HPO/kg for 28 days. Subsequently, serum markers of liver damage (ALT, AST, ALP and albumin), kidney damage (urea, creatinine and uric acid), lipid profile and lipid ratios as cardiovascular risk indices were evaluated. HPO induced a significant increase in serum markers of liver and kidney damage as well as concomitant lipid abnormalities and a marked reduction in serum HDL-C. The lipid ratios evaluated for atherogenic and coronary risk indices in rats administered HPO only were remarkably higher than control. It was observed that VCO supplementation attenuated the biochemical alterations, including the indices of cardiovascular risks. VCO supplementation demonstrates beneficial health effects against HPO-induced biochemical alterations in rats. VCO may serve to modulate the adverse effects associated with consumption of repeatedly heated palm kernel oil.
Pearson correlation estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach would have been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were observed at the same time and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. Now, the key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches, based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel. We investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low Root-Mean Square Errors for regular, slightly irregular and very irregular time series. We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
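The key idea described above, a weighted mean of products with Gaussian weights on time separation, can be written down compactly. Below is a minimal sketch of the zero-lag estimator; the bandwidth, normalization, and the simple standardization used here are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def kernel_correlation(tx, x, ty, y, h):
    """Gaussian-kernel estimate of Pearson correlation (zero lag) for two
    irregularly sampled series. Every observation pair contributes,
    weighted by the time separation of the two samples."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    dt = tx[:, None] - ty[None, :]          # all pairwise time gaps
    w = np.exp(-0.5 * (dt / h) ** 2)        # Gaussian weights
    return (w * np.outer(x, y)).sum() / w.sum()

rng = np.random.default_rng(2)
tx = np.sort(rng.uniform(0, 100, 300))     # irregular sampling times
ty = np.sort(rng.uniform(0, 100, 250))
x = np.sin(0.3 * tx) + 0.2 * rng.normal(size=tx.size)
y = np.sin(0.3 * ty) + 0.2 * rng.normal(size=ty.size)
r = kernel_correlation(tx, x, ty, y, h=1.0)
```

For a lagged correlation function, the weights would instead depend on the difference between the observed time separation and the desired lag, as the abstract notes.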
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, Timothy P.; Kiedrowski, Brian C.; Martin, William R.
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
Lu, Zhao; Sun, Jing; Butts, Kenneth
2016-02-03
A giant leap has been made in the past couple of decades with the introduction of kernel-based learning as a mainstay for designing effective nonlinear computational learning algorithms. In view of the geometric interpretation of conditional expectation and the ubiquity of multiscale characteristics in highly complex nonlinear dynamic systems [1]-[3], this paper presents a new orthogonal projection operator wavelet kernel, aiming at developing an efficient computational learning approach for nonlinear dynamical system identification. In the framework of multiresolution analysis, the proposed projection operator wavelet kernel can fulfill the multiscale, multidimensional learning to estimate complex dependencies. The special advantage of the projection operator wavelet kernel developed in this paper lies in the fact that it has a closed-form expression, which greatly facilitates its application in kernel learning. To the best of our knowledge, it is the first closed-form orthogonal projection wavelet kernel reported in the literature. It provides a link between grid-based wavelets and mesh-free kernel-based methods. Simulation studies for identifying the parallel models of two benchmark nonlinear dynamical systems confirm its superiority in model accuracy and sparsity.
Searching for efficient Markov chain Monte Carlo proposal kernels
Yang, Ziheng; Rodríguez, Carlos E.
2013-01-01
Markov chain Monte Carlo (MCMC) or the Metropolis–Hastings algorithm is a simulation algorithm that has made modern Bayesian statistical inference possible. Nevertheless, the efficiency of different Metropolis–Hastings proposal kernels has rarely been studied except for the Gaussian proposal. Here we propose a unique class of Bactrian kernels, which avoid proposing values that are very close to the current value, and compare their efficiency with a number of proposals for simulating different target distributions, with efficiency measured by the asymptotic variance of a parameter estimate. The uniform kernel is found to be more efficient than the Gaussian kernel, whereas the Bactrian kernel is even better. When optimal scales are used for both, the Bactrian kernel is at least 50% more efficient than the Gaussian. Implementation in a Bayesian program for molecular clock dating confirms the general applicability of our results to generic MCMC algorithms. Our results refute a previous claim that all proposals had nearly identical performance and will prompt further research into efficient MCMC proposals. PMID:24218600
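A Bactrian proposal is an equal mixture of two Gaussians offset on either side of the current value, so that proposals very close to the current state are rare. The sketch below embeds such a proposal in a plain Metropolis sampler; the parameterization (offset fraction m, scale s) follows the usual description of Bactrian kernels, but treat the exact values as illustrative rather than the paper's tuned settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def bactrian_step(x, s=1.0, m=0.95):
    """Bactrian proposal: equal mixture of N(x - m*s, (1-m^2)*s^2) and
    N(x + m*s, (1-m^2)*s^2); overall variance s^2, two humps away from x."""
    sign = rng.choice([-1.0, 1.0])
    return x + sign * m * s + np.sqrt(1.0 - m * m) * s * rng.normal()

def metropolis(logp, x0, n, s=1.0, m=0.95):
    xs = np.empty(n); x = x0
    for i in range(n):
        y = bactrian_step(x, s, m)
        # The Bactrian kernel is symmetric, so the Hastings ratio
        # reduces to the target density ratio.
        if np.log(rng.uniform()) < logp(y) - logp(x):
            x = y
        xs[i] = x
    return xs

samples = metropolis(lambda z: -0.5 * z * z, 0.0, 20000)  # N(0,1) target
```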
Bazargan, Alireza; Rough, Sarah L; McKay, Gordon
2018-04-01
Palm kernel shell biochars (PKSB) ejected as residues from a gasifier have been used for solid fuel briquette production. With this approach, palm kernel shells can be used for energy production twice: first, by producing rich syngas during gasification; second, by compacting the leftover residues from gasification into high calorific value briquettes. Herein, the process parameters for the manufacture of PKSB biomass briquettes via compaction are optimized. Two possible optimum process scenarios are considered. In the first, the compaction speed is increased from 0.5 to 10 mm/s, the compaction pressure is decreased from 80 MPa to 40 MPa, the retention time is reduced from 10 s to zero, and the starch binder content of the briquette is halved from 0.1 to 0.05 kg/kg. With these adjustments, the briquette production rate increases by more than 20-fold; hence capital and operational costs can be reduced and the service life of compaction equipment can be increased. The resulting product satisfactorily passes tensile (compressive) crushing strength and impact resistance tests. The second scenario involves reducing the starch weight content to 0.03 kg/kg, while reducing the compaction pressure to a value no lower than 60 MPa. Overall, in both cases, the PKSB biomass briquettes show excellent potential as a solid fuel with calorific values on par with good-quality coal. Abbreviations: CHNS: carbon, hydrogen, nitrogen, sulfur; FFB: fresh fruit bunch(es); HHV: higher heating value [J/kg]; LHV: lower heating value [J/kg]; PKS: palm kernel shell(s); PKSB: palm kernel shell biochar(s); POME: palm oil mill effluent; RDF: refuse-derived fuel; TGA: thermogravimetric analysis.
General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.
Marmarelis, V Z; Berger, T W
2005-07-01
This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.
Surface and top-of-atmosphere radiative feedback kernels for CESM-CAM5
NASA Astrophysics Data System (ADS)
Pendergrass, Angeline G.; Conley, Andrew; Vitt, Francis M.
2018-02-01
Radiative kernels at the top of the atmosphere are useful for decomposing changes in atmospheric radiative fluxes due to feedbacks from atmosphere and surface temperature, water vapor, and surface albedo. Here we describe and validate radiative kernels calculated with the large-ensemble version of CAM5, CESM1.1.2, at the top of the atmosphere and the surface. Estimates of the radiative forcing from greenhouse gases and aerosols in RCP8.5 in the CESM large-ensemble simulations are also diagnosed. As an application, feedbacks are calculated for the CESM large ensemble. The kernels are freely available at https://doi.org/10.5065/D6F47MT6, and accompanying software can be downloaded from https://github.com/apendergrass/cam5-kernels.
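Operationally, a kernel decomposition multiplies a precomputed radiative sensitivity (W m^-2 per unit change of a field, per level) by the modeled response of that field and sums over the vertical. The toy sketch below shows only that arithmetic; the array names, shapes, and values are made up and do not reflect the layout of the published CESM-CAM5 kernel files.

```python
import numpy as np

rng = np.random.default_rng(4)
nlev, nlat, nlon = 30, 96, 144

# Hypothetical kernel: TOA flux sensitivity to air temperature,
# W m^-2 K^-1 per model level (stand-in for the CAM5 product).
K_T = -0.3 * rng.uniform(0.5, 1.5, size=(nlev, nlat, nlon)) / nlev
dT = rng.normal(1.0, 0.3, size=(nlev, nlat, nlon))  # warming response, K

# Temperature-feedback contribution to the TOA flux change:
dR_T = (K_T * dT).sum(axis=0)        # W m^-2, lat x lon map
feedback = dR_T.mean() / dT.mean()   # crude global-mean feedback, W m^-2 K^-1
```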
Measurement of Flaw Size From Thermographic Data
NASA Technical Reports Server (NTRS)
Winfree, William P.; Zalameda, Joseph N.; Howell, Patricia A.
2015-01-01
Simple methods for reducing the pulsed thermographic responses of delaminations tend to overestimate the size of the delamination, since the heat diffuses in the plane parallel to the surface. The result is a temperature profile over the delamination which is larger than the delamination size. A variational approach is presented for reducing the thermographic data to produce an estimated size for a flaw that is much closer to the true size of the delamination. The method is based on an estimate for the thermal response that is a convolution of a Gaussian kernel with the shape of the flaw. The size is determined from both the temporal and spatial thermal response of the exterior surface above the delamination and constraints on the length of the contour surrounding the delamination. Examples of the application of the technique to simulation and experimental data are presented to investigate the limitations of the technique.
Nana, Roger; Hu, Xiaoping
2010-01-01
k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced as a goodness measure of k-space-based reconstruction in parallel imaging and demonstrated. Data consistency error (DCE) is calculated as the sum of squared difference between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction and the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.
Chen, Lidong; Basu, Anup; Zhang, Maojun; Wang, Wei; Liu, Yu
2014-03-20
A complementary catadioptric imaging technique was proposed to solve the problem of low and nonuniform resolution in omnidirectional imaging. Building on this technique, our paper focuses on how to generate a high-resolution panoramic image from the captured omnidirectional image. To avoid interference between the inner and outer images while fusing the two complementary views, a cross-selection kernel regression method is proposed. First, in view of the complementarity of sampling resolution in the tangential and radial directions between the inner and the outer images, respectively, the horizontal gradients in the expected panoramic image are estimated based on the scattered neighboring pixels mapped from the outer image, while the vertical gradients are estimated using the inner image. Then, the size and shape of the regression kernel are adaptively steered based on the local gradients. Furthermore, the neighboring pixels in the next interpolation step of kernel regression are also selected based on the comparison between the horizontal and vertical gradients. In simulation and real-image experiments, the proposed method outperforms existing kernel regression methods and our previous wavelet-based fusion method in terms of both visual quality and objective evaluation.
Lesion contrast and detection using sonoelastographic shear velocity imaging: preliminary results
NASA Astrophysics Data System (ADS)
Hoyt, Kenneth; Parker, Kevin J.
2007-03-01
This paper assesses lesion contrast and detection using sonoelastographic shear velocity imaging. Shear wave interference patterns, termed crawling waves, for a two-phase medium were simulated assuming plane wave conditions. Shear velocity estimates were computed using a spatial autocorrelation algorithm that operates in the direction of shear wave propagation for a given kernel size. Contrast was determined by analyzing the shear velocity estimate transition between media. Experimental results were obtained using heterogeneous phantoms with spherical inclusions (5 or 10 mm in diameter) characterized by elevated shear velocities. Two vibration sources were applied to opposing phantom edges and scanned (orthogonal to shear wave propagation) with an ultrasound scanner equipped for sonoelastography. Demodulated data were saved and transferred to an external computer for processing shear velocity images. Simulation results demonstrate that the shear velocity transition between contrasting media is governed by both the estimator kernel size and the source vibration frequency. Experimental results from phantoms further indicate that decreasing the estimator kernel size produces a corresponding decrease in the shear velocity estimate transition between background and inclusion material, albeit with an increase in estimator noise. Overall, results demonstrate the ability to generate high contrast shear velocity images using sonoelastographic techniques and detect millimeter-sized lesions.
An Experimental Study of the Growth of Laser Spark and Electric Spark Ignited Flame Kernels.
NASA Astrophysics Data System (ADS)
Ho, Chi Ming
1995-01-01
Better ignition sources are constantly in demand for enhancing spark ignition in practical applications such as automotive and liquid rocket engines. In response to this practical challenge, the present experimental study was conducted with the major objective of obtaining a better understanding of how spark formation, and hence spark characteristics, affect flame kernel growth. Two laser sparks and one electric spark were studied in air, propane-air, propane-air-nitrogen, methane-air, and methane-oxygen mixtures that were initially at ambient pressure and temperature. The growth of the kernels was monitored by imaging the kernels with shadowgraph systems, and by imaging the planar laser-induced fluorescence of the hydroxyl radicals inside the kernels. Characteristic dimensions and kernel structures were obtained from these images. Since different energy transfer mechanisms are involved in the formation of a laser spark as compared to an electric spark, a laser spark is insensitive to changes in mixture ratio and mixture type, while an electric spark is sensitive to changes in both. The detailed structures of the kernels in air and propane-air mixtures depend primarily on the spark characteristics. But the combustion heat released rapidly in methane-oxygen mixtures significantly modifies the kernel structure. Uneven spark energy distribution causes remarkably asymmetric kernel structure. The breakdown energy of a spark creates a blast wave that shows good agreement with the numerical point blast solution, and a succeeding complex spark-induced flow that agrees reasonably well with a simple puff model. The transient growth rates of the propane-air, propane-air-nitrogen, and methane-air flame kernels can be interpreted in terms of spark effects, flame stretch, and preferential diffusion. For a given mixture, a spark with higher breakdown energy produces a greater and longer-lasting enhancing effect on the kernel growth rate. By comparing the growth rates of the appropriate mixtures, the positive and negative effects of preferential diffusion and flame stretch on the developing flame are clearly demonstrated.
Confidence set inference with a prior quadratic bound
NASA Technical Reports Server (NTRS)
Backus, George E.
1988-01-01
In the uniqueness part of a geophysical inverse problem, the observer wants to predict all likely values of P unknown numerical properties z = (z_1,...,z_P) of the earth from measurement of D other numerical properties y(0) = (y_1(0),...,y_D(0)) and knowledge of the statistical distribution of the random errors in y(0). The data space Y containing y(0) is D-dimensional, so when the model space X is infinite-dimensional the linear uniqueness problem usually is insoluble without prior information about the correct earth model x. If that information is a quadratic bound on x (e.g., energy or dissipation rate), Bayesian inference (BI) and stochastic inversion (SI) inject spurious structure into x, implied by neither the data nor the quadratic bound. Confidence set inference (CSI) provides an alternative inversion technique free of this objection. CSI is illustrated in the problem of estimating the geomagnetic field B at the core-mantle boundary (CMB) from components of B measured on or above the earth's surface. Neither the heat flow nor the energy bound is strong enough to permit estimation of B(r) at single points on the CMB, but the heat flow bound permits estimation of uniform averages of B(r) over discs on the CMB, and both bounds permit weighted disc-averages with continuous weighting kernels. Both bounds also permit estimation of low-degree Gauss coefficients at the CMB. The heat flow bound resolves them up to degree 8 if the crustal field at satellite altitudes must be treated as a systematic error, but can resolve to degree 11 under the most favorable statistical treatment of the crust. These two limits produce circles of confusion on the CMB with diameters of 25 deg and 19 deg, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.
Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
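To make the bivariate-KDE-and-conditional-PDF idea concrete, here is a minimal sketch using SciPy's Gaussian KDE as a generic stand-in; it uses Scott's-rule bandwidth rather than fastKDE's self-consistent, objective kernel choice, and the sample, grid, and conditioning point are arbitrary.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)

# Correlated bivariate sample: the covariance is what the PDF must encode.
x = rng.normal(size=20_000)
y = 0.8 * x + 0.6 * rng.normal(size=20_000)

kde = gaussian_kde(np.vstack([x, y]))          # bandwidth via Scott's rule
grid = np.linspace(-4, 4, 101)
X, Y = np.meshgrid(grid, grid)
pdf = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)

# Conditional PDF p(y | x ~ 0): take the column at x = 0, renormalize in y.
col = pdf[:, 50] / np.trapz(pdf[:, 50], grid)
```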
Phylodynamic Inference with Kernel ABC and Its Application to HIV Epidemiology.
Poon, Art F Y
2015-09-01
The shapes of phylogenetic trees relating virus populations are determined by the adaptation of viruses within each host, and by the transmission of viruses among hosts. Phylodynamic inference attempts to reverse this flow of information, estimating parameters of these processes from the shape of a virus phylogeny reconstructed from a sample of genetic sequences from the epidemic. A key challenge to phylodynamic inference is quantifying the similarity between two trees in an efficient and comprehensive way. In this study, I demonstrate that a new distance measure, based on a subset tree kernel function from computational linguistics, confers a significant improvement over previous measures of tree shape for classifying trees generated under different epidemiological scenarios. Next, I incorporate this kernel-based distance measure into an approximate Bayesian computation (ABC) framework for phylodynamic inference. ABC bypasses the need for an analytical solution of model likelihood, as it only requires the ability to simulate data from the model. I validate this "kernel-ABC" method for phylodynamic inference by estimating parameters from data simulated under a simple epidemiological model. Results indicate that kernel-ABC attained greater accuracy for parameters associated with virus transmission than leading software on the same data sets. Finally, I apply the kernel-ABC framework to study a recent outbreak of a recombinant HIV subtype in China. Kernel-ABC provides a versatile framework for phylodynamic inference because it can fit a broader range of models than methods that rely on the computation of exact likelihoods. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Omnibus risk assessment via accelerated failure time kernel machine modeling.
Sinnott, Jennifer A; Cai, Tianxi
2013-12-01
Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.
Is there a single best estimator? Selection of home range estimators using area-under-the-curve
Walter, W. David; Onorato, Dave P.; Fischer, Justin W.
2015-01-01
Comparisons of fit of home range contours with locations collected would suggest that use of VHF technology is not as accurate as GPS technology to estimate size of home range for large mammals. Estimators of home range collected with GPS technology performed better than those estimated with VHF technology regardless of estimator used. Furthermore, estimators that incorporate a temporal component (third-generation estimators) appeared to be the most reliable regardless of whether kernel-based or Brownian bridge-based algorithms were used and in comparison to first- and second-generation estimators. We defined third-generation estimators of home range as any estimator that incorporates time, space, animal-specific parameters, and habitat. Such estimators would include movement-based kernel density, Brownian bridge movement models, and dynamic Brownian bridge movement models among others that have yet to be evaluated.
Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party
NASA Astrophysics Data System (ADS)
Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi
The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from massive data collection and information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed a mediating party to help in the computation.
Calculation of the time resolution of the J-PET tomograph using kernel density estimation
NASA Astrophysics Data System (ADS)
Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
2017-06-01
In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.
Kernel analysis in TeV gamma-ray selection
NASA Astrophysics Data System (ADS)
Moriarty, P.; Samuelson, F. W.
2000-06-01
We discuss the use of kernel analysis as a technique for selecting gamma-ray candidates in Atmospheric Cherenkov astronomy. The method is applied to observations of the Crab Nebula and Markarian 501 recorded with the Whipple 10 m Atmospheric Cherenkov imaging system, and the results are compared with the standard Supercuts analysis. Since kernel analysis is computationally intensive, we examine approaches to reducing the computational load. Extension of the technique to estimate the energy of the gamma-ray primary is considered.
Lung dynamic MRI deblurring using low-rank decomposition and dictionary learning.
Gou, Shuiping; Wang, Yueyue; Wu, Jiaolong; Lee, Percy; Sheng, Ke
2015-04-01
Lung dynamic MRI (dMRI) has emerged as an appealing tool to quantify lung motion for both planning and treatment guidance purposes. However, this modality can result in blurry images due to the intrinsically low signal-to-noise ratio in the lung and spatial/temporal interpolation. The image blurring could adversely affect image processing that depends on the availability of fine landmarks. The purpose of this study is to reduce dMRI blurring using image postprocessing. To enhance the image quality and exploit the spatiotemporal continuity of dMRI sequences, a low-rank decomposition and dictionary learning (LDDL) method was employed to deblur lung dMRI and enhance the conspicuity of lung blood vessels. Fifty continuous 2D coronal dMRI frames using a steady-state free precession sequence were obtained from five subjects, including two healthy volunteers and three lung cancer patients. In LDDL, the lung dMRI was decomposed into sparse and low-rank components. Dictionary learning was employed to estimate the blurring kernel based on the whole image, the low-rank component, or the sparse component of the first image in the lung MRI sequence. Deblurring was performed on the whole image sequence using deconvolution based on the estimated blur kernel. The deblurring results were quantified using an automated blood vessel extraction method based on the classification of Hessian matrix filtered images. Accuracy of automated extraction was calculated using manual segmentation of the blood vessels as the ground truth. In the pilot study, LDDL based on the blurring kernel estimated from the sparse component led to performance superior to the other ways of kernel estimation. LDDL consistently improved image contrast and fine feature conspicuity of the original MRI without introducing artifacts. The accuracy of automated blood vessel extraction was on average increased by 16% using manual segmentation as the ground truth. Image blurring in dMRI images can be effectively reduced using a low-rank decomposition and dictionary learning method with kernels estimated from the sparse component.
Cid, Jaime A; von Davier, Alina A
2015-05-01
Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
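Continuization amounts to convolving the discrete score distribution with a kernel to obtain a smooth density. Here is a toy sketch with both Gaussian and Epanechnikov kernels; the bandwidths, score range, and random score probabilities are illustrative, and this omits the mean- and variance-preserving rescaling used in the actual KE continuization.

```python
import numpy as np

def continuize(scores, probs, grid, h, kernel="gaussian"):
    """Smooth a discrete score distribution into a density on a grid."""
    u = (grid[:, None] - scores[None, :]) / h
    if kernel == "gaussian":
        k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    else:  # Epanechnikov: compact support, less boundary smearing
        k = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)
    return (k * probs[None, :]).sum(axis=1) / h

scores = np.arange(0, 41)                           # 40-item test, scores 0..40
probs = np.random.default_rng(6).dirichlet(np.ones(41))
grid = np.linspace(-2, 42, 500)
f_gauss = continuize(scores, probs, grid, h=0.8)
f_epan = continuize(scores, probs, grid, h=1.5, kernel="epanechnikov")
```

The compact support of the Epanechnikov kernel is one reason it is considered here as an alternative for reducing bias at the score boundaries.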
USDA-ARS's Scientific Manuscript database
Effect of moisture content variation on the accuracy of single kernel deoxynivalenol (DON) prediction by near-infrared (NIR) spectroscopy was investigated. Sample moisture content (MC) considerably affected the accuracy of the current NIR DON calibration by underestimating or overestimating DON at high...
Tarjan, Lily M; Tinker, M. Tim
2016-01-01
Parametric and nonparametric kernel methods dominate studies of animal home ranges and space use. Most existing methods are unable to incorporate information about the underlying physical environment, leading to poor performance in excluding areas that are not used. Using radio-telemetry data from sea otters, we developed and evaluated a new algorithm for estimating home ranges (hereafter Permissible Home Range Estimation, or “PHRE”) that reflects habitat suitability. We began by transforming sighting locations into relevant landscape features (for sea otters, coastal position and distance from shore). Then, we generated a bivariate kernel probability density function in landscape space and back-transformed this to geographic space in order to define a permissible home range. Compared to two commonly used home range estimation methods, kernel densities and local convex hulls, PHRE better excluded unused areas and required a smaller sample size. Our PHRE method is applicable to species whose ranges are restricted by complex physical boundaries or environmental gradients and will improve understanding of habitat-use requirements and, ultimately, aid in conservation efforts.
Wang, Gang; Wang, Yalin
2017-02-15
In this paper, we propose a heat kernel based regional shape descriptor that may be capable of better exploiting volumetric morphological information than other available methods, thereby improving statistical power on brain magnetic resonance imaging (MRI) analysis. The mechanism of our analysis is driven by the graph spectrum and the heat kernel theory, to capture the volumetric geometry information in the constructed tetrahedral meshes. In order to capture profound brain grey matter shape changes, we first use the volumetric Laplace-Beltrami operator to determine the point pair correspondence between white-grey matter and CSF-grey matter boundary surfaces by computing the streamlines in a tetrahedral mesh. Secondly, we propose multi-scale grey matter morphology signatures to describe the transition probability by random walk between the point pairs, which reflects the inherent geometric characteristics. Thirdly, a point distribution model is applied to reduce the dimensionality of the grey matter morphology signatures and generate the internal structure features. With the sparse linear discriminant analysis, we select a concise morphology feature set with improved classification accuracies. In our experiments, the proposed work outperformed the cortical thickness features computed by FreeSurfer software in the classification of Alzheimer's disease and its prodromal stage, i.e., mild cognitive impairment, on publicly available data from the Alzheimer's Disease Neuroimaging Initiative. The multi-scale and physics based volumetric structure feature may bring stronger statistical power than some traditional methods for MRI-based grey matter morphology analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
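The heat kernel underlying such descriptors follows directly from the Laplacian eigendecomposition, K_t = Φ exp(−tΛ) Φᵀ. The sketch below computes it for a tiny weighted graph; the paper works with the volumetric Laplace-Beltrami operator on tetrahedral meshes, which this simple combinatorial-Laplacian example does not reproduce.

```python
import numpy as np

def heat_kernel(W, t):
    """Heat kernel exp(-t L) of a weighted graph, from the spectrum of
    its combinatorial Laplacian L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    lam, phi = np.linalg.eigh(L)              # L is symmetric, use eigh
    return (phi * np.exp(-t * lam)) @ phi.T   # phi diag(e^{-t lam}) phi^T

# Tiny path graph on 4 nodes: heat placed at node 0 diffuses rightward.
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = W[i + 1, i] = 1.0
K = heat_kernel(W, t=0.5)
u = K @ np.array([1.0, 0.0, 0.0, 0.0])        # diffused heat distribution
```

Rows of K can be read as transition probabilities of the random walk between point pairs, which is the quantity the multi-scale morphology signatures above are built from.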
Density-Aware Clustering Based on Aggregated Heat Kernel and Its Transformation
Huang, Hao; Yoo, Shinjae; Yu, Dantong; ...
2015-06-01
Current spectral clustering algorithms suffer from sensitivity to noise and parameter scaling, and may not be aware of different density distributions across clusters. If these problems are left untreated, the consequent clustering results cannot accurately represent true data patterns, in particular for complex real world datasets with heterogeneous densities. This paper aims to solve these problems by proposing a diffusion-based Aggregated Heat Kernel (AHK) to improve the clustering stability, and a Local Density Affinity Transformation (LDAT) to correct the bias originating from different cluster densities. AHK statistically models the heat diffusion traces along the entire time scale, so it ensures robustness during the clustering process, while LDAT probabilistically reveals the local density of each instance and suppresses the local density bias in the affinity matrix. Our proposed framework integrates these two techniques systematically. As a result, not only does it provide an advanced noise-resisting and density-aware spectral mapping to the original dataset, but it also remains stable while the scaling parameter (which usually controls the range of neighborhood) is tuned. Furthermore, our framework works well with the majority of similarity kernels, which ensures its applicability to many types of data and problem domains. The systematic experiments on different applications show that our proposed algorithms outperform state-of-the-art clustering algorithms for data with heterogeneous density distributions, and achieve robust clustering performance with respect to tuning the scaling parameter and handling various levels and types of noise.
Dawson, Andria; Paciorek, Christopher J.; McLachlan, Jason S.; Goring, Simon; Williams, John W.; Jackson, Stephen T.
2016-01-01
Mitigation of climate change and adaptation to its effects relies partly on how effectively land-atmosphere interactions can be quantified. Quantifying composition of past forest ecosystems can help understand processes governing forest dynamics in a changing world. Fossil pollen data provide information about past forest composition, but rigorous interpretation requires development of pollen-vegetation models (PVMs) that account for interspecific differences in pollen production and dispersal. Widespread and intensified land-use over the 19th and 20th centuries may have altered pollen-vegetation relationships. Here we use STEPPS, a Bayesian hierarchical spatial PVM, to estimate key process parameters and associated uncertainties in the pollen-vegetation relationship. We apply alternate dispersal kernels, and calibrate STEPPS using a newly developed Euro-American settlement-era calibration data set constructed from Public Land Survey data and fossil pollen samples matched to the settlement-era using expert elicitation. Models based on the inverse power-law dispersal kernel outperformed those based on the Gaussian dispersal kernel, indicating that pollen dispersal kernels are fat tailed. Pine and birch have the highest pollen productivities. Pollen productivity and dispersal estimates are generally consistent with previous understanding from modern data sets, although source area estimates are larger. Tests of model predictions demonstrate the ability of STEPPS to predict regional compositional patterns.
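The two dispersal kernels compared above have simple closed forms. Below is a small sketch of a Gaussian (thin-tailed) versus an inverse power-law (fat-tailed) kernel as functions of distance; the parameter values are illustrative assumptions, chosen only to show how the power law dominates at long range.

```python
import numpy as np

def gaussian_kernel(r, a):
    """Gaussian dispersal kernel (thin-tailed), up to normalization."""
    return np.exp(-(r / a) ** 2)

def power_law_kernel(r, a, b):
    """Inverse power-law dispersal kernel (fat-tailed), up to normalization."""
    return (1.0 + r / a) ** (-b)

r = np.linspace(0.0, 200.0, 400)          # distance from source, km
g = gaussian_kernel(r, a=30.0)
p = power_law_kernel(r, a=30.0, b=3.0)
tail_ratio = p[-1] / g[-1]                # power law is vastly larger far out
```

A fat-tailed kernel assigns non-negligible probability to long-distance pollen transport, which is why it implies larger pollen source areas, consistent with the findings reported above.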
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics in such the solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.
Snake River Plain Geothermal Play Fairway Analysis - Phase 1 Raster Files
John Shervais
2015-10-09
Snake River Plain Play Fairway Analysis - Phase 1 CRS Raster Files. This dataset contains raster files created in ArcGIS. These raster images depict Common Risk Segment (CRS) maps for HEAT, PERMEABILITY, and SEAL, as well as selected maps of Evidence Layers. These evidence layers consist of either Bayesian krige functions or kernel density functions, and include: (1) HEAT: heat flow (Bayesian krige map), heat flow standard error on the krige function (data confidence), volcanic vent distribution as a function of age and size, groundwater temperature (equal-interval and natural-breaks bins), and groundwater temperature standard error. (2) PERMEABILITY: fault and lineament maps, both as mapped and as kernel density functions, processed for both dilational tendency (TD) and slip tendency (ST), along with data confidence maps for each data type. Data types include mapped surface faults from USGS and Idaho Geological Survey databases, as well as unpublished mapping; lineations derived from maximum gradients in magnetic, deep gravity, and intermediate-depth gravity anomalies. (3) SEAL: seal maps based on the presence and thickness of lacustrine sediments and the base of the SRP aquifer. Raster size is 2 km. All files were generated in ArcGIS.
NASA Astrophysics Data System (ADS)
Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.
2014-10-01
In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel on three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to a linear kernel function, and by up to 1% in comparison to a 3rd-degree polynomial kernel function.
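For orientation, the kernel comparison described above can be reproduced schematically with scikit-learn. The sketch below uses random stand-in features and labels, so the cross-validated accuracies it prints are placeholders, not the paper's results; only the kernel choices mirror the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

# Stand-in for multi-temporal polarimetric features (e.g., alpha angles
# from three acquisition dates): 300 pixels, 3 features, 4 crop classes.
X = rng.normal(size=(300, 3))
y = rng.integers(0, 4, size=300)

for params in ({"kernel": "linear"},
               {"kernel": "poly", "degree": 3},
               {"kernel": "rbf", "gamma": "scale"}):
    acc = cross_val_score(SVC(C=10.0, **params), X, y, cv=5).mean()
    print(params["kernel"], f"OA ~ {acc:.2f}")
```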
77 FR 25375 - United States Standards for Wheat
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-30
... Grade and grade requirements (maximum percent limits, by numerical grade, U.S. Nos. 1-5):
Defects:
  Damaged kernels, heat (part of total): 0.2 / 0.2 / 0.5 / 1.0 / 3.0
  Damaged kernels, total: 2.0 / 4.0 / 7.0 / 10.0 / 15.0
  Foreign material: 0.4 / 0.7 / 1.3 / 3.0 / 5.0
  Shrunken and broken kernels: 2.0 / 4.0 / 8.0 / 12.0 / 20.0
  Defects, total \1\: 3.0 / 5.0 / 8.0 / 12.0 / 20.0
Wheat of other classes: \2\
  Contrasting classes: 1.0 / 2.0 / 3.0 / 10.0 / 10.0
  Total \3\: 3.0 / 5.0 / 10.0 / 10... [record truncated]
Özcan, Mehmet Musa; Juhaimi, Fahad Al; Uslu, Nurhan
2018-01-01
Brazil nut oil content increased with oven heating (65.08%) and decreased with the microwave heating process (61.00%). The phenolic content of untreated Brazil nut was the highest, at 68.97 mg GAE/100 g. Hazelnut (Sivri) had the highest antioxidant activity (86.52%, untreated). The results showed significant differences between the antioxidant effects and total phenol contents of Brazil nut and hazelnut (Sivri) kernels heated in the oven and microwave. Microwave heating caused a decrease in the antioxidant activity of hazelnut. Gallic acid, 3,4-dihydroxybenzoic acid and (+)-catechin were the main phenolic compounds of raw Brazil nut, with values of 5.33, 4.33 and 4.88 mg/100 g, respectively, while the dominant phenolics of raw hazelnut (Sivri) kernels were gallic acid (4.81 mg/100 g), 3,4-dihydroxybenzoic acid (4.61 mg/100 g), (+)-catechin (6.96 mg/100 g) and 1,2-dihydroxybenzene (4.14 mg/100 g). Both conventional and microwave heating caused minor reductions in phenolic compounds. The main fatty acids of Brazil nut oil were linoleic (44.39-48.18%), oleic (27.74-31.74%), palmitic (13.09-13.70%) and stearic (8.20-8.91%) acids, while the dominant fatty acid of hazelnut (Sivri) oil was oleic acid (80.84%). The heating process caused noticeable changes in the fatty acid compositions of both nut oils.
7 CFR 810.1402 - Definition of other terms.
Code of Federal Regulations, 2014 CFR
2014-01-01
... containing spots that, singly or in combination, cover 25.0 percent or less of the kernel. (4) Mixed sorghum... the 5/64 triangular-hole sieve according to procedures prescribed in FGIS instructions. (g) Heat...
USDA-ARS?s Scientific Manuscript database
Pre-harvest sprouting of wheat kernels within the grain head presents serious problems as it can greatly affect end use quality. Functional properties of wheat flour made from sprouted wheat result in poor dough and bread-making quality. This research examined the ability of two instruments to estim...
NASA Astrophysics Data System (ADS)
Chu, Weiqi; Li, Xiantao
2018-01-01
We present some estimates for the memory kernel function in the generalized Langevin equation, derived using the Mori-Zwanzig formalism from a one-dimensional lattice model in which particles interact through nearest and second-nearest neighbors. The kernel function can be expressed explicitly in matrix form. The analysis focuses on the decay properties, both spatial and temporal, revealing power-law behavior in both cases. The dependence on the level of coarse-graining is also studied.
Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.
Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan
2016-11-01
In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat (Triticum aestivum L.) and maize (Zea mays L.) data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60-68% superiority over the corresponding single-environment models for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5-6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, more complex marker main effects and marker-specific interaction effects. Copyright © 2016 Crop Science Society of America.
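A hedged sketch of the Gaussian-kernel idea with kernel averaging, using kernel ridge regression as a stand-in for the Bayesian machinery; the marker matrix, bandwidth grid, and ridge penalty are all illustrative assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import euclidean_distances

# Hypothetical marker matrix (lines x markers) and phenotypes.
rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(200, 500)).astype(float)  # 0/1/2 genotype codes
y = M[:, :10].sum(axis=1) + rng.normal(scale=2.0, size=200)

D2 = euclidean_distances(M, squared=True)
D2 /= D2.mean()  # scale distances so the bandwidths below are comparable

# Kernel averaging in the spirit of RKHS KA: average Gaussian kernels over
# a grid of bandwidths instead of tuning a single one.
bandwidths = [0.2, 1.0, 5.0]
K = sum(np.exp(-h * D2) for h in bandwidths) / len(bandwidths)

model = KernelRidge(alpha=1.0, kernel="precomputed").fit(K, y)
print("in-sample correlation:",
      np.corrcoef(model.predict(K), y)[0, 1].round(3))
```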
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
NASA Astrophysics Data System (ADS)
Troudi, Molka; Alimi, Adel M.; Saoudi, Samir
2008-12-01
The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon a functional of the second-order derivative of the pdf. As we introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. These two kinds of algorithms are tested on different random variables whose distributions are known to be difficult to estimate. Finally, they are applied to genetic data in order to better characterise the neutrality of Tunisian Berber populations.
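For orientation, a minimal normal-reference version of the plug-in recipe: the AMISE-optimal bandwidth needs the second-derivative functional of the pdf, which is approximated here under a normal reference; the paper's analytical approximation is different, so this only illustrates the general mechanics.

```python
import numpy as np

def plugin_bandwidth(x):
    """One-step plug-in bandwidth for a Gaussian kernel.

    The AMISE-optimal bandwidth is h = [R(K) / (mu2(K)^2 R(f'') n)]^(1/5).
    For a Gaussian kernel, R(K) = 1/(2*sqrt(pi)) and mu2(K) = 1; here R(f'')
    is approximated by its value under a normal reference density,
    R(f'') = 3 / (8*sqrt(pi)*sigma^5), which recovers Silverman-type rules.
    """
    n = len(x)
    sigma = min(np.std(x, ddof=1),
                (np.percentile(x, 75) - np.percentile(x, 25)) / 1.349)
    RK = 1.0 / (2.0 * np.sqrt(np.pi))
    Rf2 = 3.0 / (8.0 * np.sqrt(np.pi) * sigma ** 5)
    return (RK / (Rf2 * n)) ** 0.2

x = np.random.default_rng(2).normal(size=1000)
print("plug-in bandwidth:", plugin_bandwidth(x).round(4))
```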
A self-calibrated angularly continuous 2D GRAPPA kernel for propeller trajectories
Skare, Stefan; Newbould, Rexford D; Nordell, Anders; Holdsworth, Samantha J; Bammer, Roland
2008-01-01
The k-space readout of propeller-type sequences may be accelerated by the use of parallel imaging (PI). For PROPELLER, the main benefits are reduced blurring due to T2 decay and SAR reduction, while for EPI-based propeller acquisitions such as Turbo-PROP and SAP-EPI, the faster k-space traversal alleviates geometric distortions. In this work, the feasibility of calculating a 2D GRAPPA kernel on only the undersampled propeller blades themselves is explored, using the matching orthogonal undersampled blade. It is shown that the GRAPPA kernel varies slowly across blades; therefore, an angularly continuous 2D GRAPPA kernel is proposed, in which the angular variation of the weights is parameterized. This new angularly continuous kernel formulation greatly increases the numerical stability of the GRAPPA weight estimation, allowing the generation of fully sampled diagnostic-quality images using only the undersampled propeller data. PMID:19025911
NASA Astrophysics Data System (ADS)
Hunt, R. D.; Johnson, J. A.; Collins, J. L.; McMurray, J. W.; Reif, T. J.; Brown, D. R.
2018-01-01
A comparison study on carbon blacks and dispersing agents was performed to determine their impacts on the final properties of uranium fuel kernels with carbon. The main target compositions in this internal gelation study were 10 and 20 mol % uranium dicarbide (UC2), which is UC1.86, with the balance uranium dioxide. After heat treatment at 1900 K in flowing carbon monoxide in argon for 12 h, the density of the kernels produced using an X-energy proprietary carbon suspension, which is commercially available, ranged from 96% to 100% of theoretical density (TD), with full conversion of UC to UC2 at both carbon concentrations. However, higher carbon concentrations, such as a 2.5 mol ratio of carbon to uranium in the feed solutions, failed to produce gel spheres with the proprietary carbon suspension. The kernels using our former baseline of Mogul L carbon black and Tamol SN were 90-92% of TD, with full conversion of UC to UC2 at a variety of carbon levels. Raven 5000 carbon black and Tamol SN were used to produce 10 mol % UC2 kernels with 95% of TD. However, an increase in the Raven 5000 concentration led to a kernel density below 90% of TD. Raven 3500 carbon black and Tamol SN were used to make very dense kernels without complete conversion to UC2. The selection of the carbon black and dispersing agent is highly dependent on the desired final properties of the target kernels.
Subramanian, Sundarraman
2008-01-01
This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birmingham, D.; Kantowski, R.; Milton, K.A.
We use two methods of computing the unique logarithmically divergent part of the Casimir energy for massive scalar and spinor fields defined on even-dimensional Kaluza-Klein spaces of the form M^4 x S^{N_1} x S^{N_2} x ... . Both methods (heat kernel and direct) give identical results. The first evaluates the required internal zeta function by identifying it in the asymptotic expansion of the trace of the heat kernel, and the second evaluates the zeta function directly using the Euler-Maclaurin sum formula. In Appendix C we tabulate these energies for all spaces of total internal dimension less than or equal to 6. These methods are easily applied to the vector and tensor fields needed in computing one-loop vacuum gravitational energies on these spaces. Stable solutions are given for the internal structure S^2 x S^2.
Ebenstein, Avraham; Fan, Maoyong; Greenstone, Michael; He, Guojun; Zhou, Maigeng
2017-01-01
This paper finds that a 10-μg/m3 increase in airborne particulate matter [particulate matter smaller than 10 μm (PM10)] reduces life expectancy by 0.64 years (95% confidence interval = 0.21–1.07). This estimate is derived from quasiexperimental variation in PM10 generated by China’s Huai River Policy, which provides free or heavily subsidized coal for indoor heating during the winter to cities north of the Huai River but not to those to the south. The findings are derived from a regression discontinuity design based on distance from the Huai River, and they are robust to using parametric and nonparametric estimation methods, different kernel types and bandwidth sizes, and adjustment for a rich set of demographic and behavioral covariates. Furthermore, the shorter lifespans are almost entirely caused by elevated rates of cardiorespiratory mortality, suggesting that PM10 is the causal factor. The estimates imply that bringing all of China into compliance with its Class I standards for PM10 would save 3.7 billion life-years. PMID:28893980
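A sketch of the estimation strategy under stated assumptions: a local-linear regression discontinuity estimate with kernel weights on the running variable (signed distance to the river). The triangular/uniform kernels, bandwidths, and synthetic data are illustrative, not the paper's specification.

```python
import numpy as np

def rd_estimate(dist, y, h, kernel="triangular"):
    """Local-linear regression discontinuity estimate.

    dist: signed distance from the boundary (north of the river > 0)
    y: outcome (e.g., life expectancy); h: bandwidth for the kernel weights.
    """
    w = np.clip(1 - np.abs(dist) / h, 0, None) if kernel == "triangular" \
        else (np.abs(dist) <= h).astype(float)  # uniform kernel
    north = (dist >= 0).astype(float)
    X = np.column_stack([np.ones_like(dist), north, dist, north * dist])
    WX = X * w[:, None]
    beta, *_ = np.linalg.lstsq(X.T @ WX, X.T @ (w * y), rcond=None)
    return beta[1]  # jump in the conditional mean at the boundary

rng = np.random.default_rng(3)
dist = rng.uniform(-10, 10, 2000)
y = 70 - 0.64 * (dist >= 0) + 0.05 * dist + rng.normal(0, 1, 2000)
for h in (2.0, 5.0, 10.0):
    print(f"h={h}: estimated discontinuity = {rd_estimate(dist, y, h):+.3f}")
```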
Using kernel density estimation to understand the influence of neighbourhood destinations on BMI
King, Tania L; Bentley, Rebecca J; Thornton, Lukar E; Kavanagh, Anne M
2016-01-01
Objectives: Little is known about how the distribution of destinations in the local neighbourhood is related to body mass index (BMI). Kernel density estimation (KDE) is a spatial analysis technique that accounts for the location of features relative to each other. Using KDE, this study investigated whether individuals living near destinations (shops and service facilities) that are more intensely distributed rather than dispersed have lower BMIs.
Study design and setting: A cross-sectional study of 2349 residents of 50 urban areas in metropolitan Melbourne, Australia.
Methods: Destinations were geocoded, and kernel density estimates of destination intensity were created using kernels of 400, 800 and 1200 m. Using multilevel linear regression, the association between destination intensity (classified in quintiles, Q1 (least) to Q5 (most)) and BMI was estimated in models that adjusted for the following confounders: age, sex, country of birth, education, dominant household occupation, household type, disability/injury and area disadvantage. Separate models included a physical activity variable.
Results: For kernels of 800 and 1200 m, there was an inverse relationship between BMI and more intensely distributed destinations (compared to areas with the least destination intensity). Effects were significant at 1200 m: Q4, β −0.86, 95% CI −1.58 to −0.13, p=0.022; Q5, β −1.03, 95% CI −1.65 to −0.41, p=0.001. Inclusion of physical activity in the models attenuated the effects, although they remained marginally significant for Q5 at 1200 m: β −0.77, 95% CI −1.52 to −0.02, p=0.045.
Conclusions: This study, conducted within urban Melbourne, Australia, found that participants living in areas of greater destination intensity within 1200 m of home had lower BMIs. Effects were partly explained by physical activity. The results suggest that increasing the intensity of destination distribution could reduce BMI levels by encouraging higher levels of physical activity. PMID:26883235
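The multi-bandwidth KDE step can be sketched with scikit-learn's KernelDensity; the Gaussian kernel and synthetic coordinates are assumptions (GIS toolkits often default to a quartic kernel), so this shows only the mechanics of the 400/800/1200 m surfaces.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Hypothetical geocoded destinations and residences (x, y in metres).
rng = np.random.default_rng(4)
destinations = rng.uniform(0, 10_000, size=(500, 2))
homes = rng.uniform(0, 10_000, size=(5, 2))

# One KDE surface per kernel radius, mirroring the 400/800/1200 m analysis.
for bandwidth in (400, 800, 1200):
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth)
    kde.fit(destinations)
    intensity = np.exp(kde.score_samples(homes))  # density at each home
    print(f"{bandwidth} m kernel, intensity at homes:", intensity.round(10))
```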
Winter home-range characteristics of American Marten (Martes americana) in Northern Wisconsin
Joseph B. Dumyahn; Patrick A. Zollner
2007-01-01
We estimated home-range size for American marten (Martes americana) in northern Wisconsin during the winter months of 2001-2004, and compared the proportion of cover-type selection categories (highly used, neutral and avoided) among home ranges (95% fixed-kernel), core areas (50% fixed-kernel) and the study area. Average winter home-range size was 3....
Diversity of maize kernels from a breeding program for protein quality III: Ionome profiling
USDA-ARS?s Scientific Manuscript database
Densities of single and multiple macro- and micronutrients have been estimated in mature kernels of 1,348 accessions in 13 maize genotypes. The germplasm belonged to stiff stalk (SS) and non-stiff stalk (NS) heterotic groups (HG) with one (S1) to four (S4) years of inbreeding (IB), or open pollinati...
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics of the solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that combine kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed, and their effectiveness is illustrated via numerical simulations. PMID:22163859
Rapid scatter estimation for CBCT using the Boltzmann transport equation
NASA Astrophysics Data System (ADS)
Sun, Mingshan; Maslowski, Alex; Davis, Ian; Wareing, Todd; Failla, Gregory; Star-Lack, Josh
2014-03-01
Scatter in cone-beam computed tomography (CBCT) is a significant problem that degrades image contrast, uniformity and CT number accuracy. One means of estimating and correcting for detected scatter is through an iterative deconvolution process known as scatter kernel superposition (SKS). While the SKS approach is efficient, clinically significant errors on the order of 2-4% (20-40 HU) still remain. We have previously shown that the kernel method can be improved by perturbing the kernel parameters based on reference data provided by limited Monte Carlo simulations of a first-pass reconstruction. In this work, we replace the Monte Carlo modeling with a deterministic Boltzmann solver (AcurosCTS) to generate the reference scatter data in dramatically reduced time. In addition, the algorithm is improved so that instead of adjusting kernel parameters, we directly perturb the SKS scatter estimates. Studies were conducted on simulated data and on a large pelvis phantom scanned on a tabletop system. The new method reduced average reconstruction errors (relative to a reference scan) from 2.5% to 1.8%, and significantly improved the visualization of low-contrast objects. In total, 24 projections were simulated, with an AcurosCTS execution time of 22 sec/projection using an 8-core computer. We have ported AcurosCTS to the GPU, and current run-times are approximately 4 sec/projection using two GPUs running in parallel.
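A toy sketch of the core SKS loop as iterative deconvolution; the flat projection and separable kernel are placeholders (clinical kernels are amplitude- and shape-parameterized from measurements), and the Boltzmann-based perturbation step described above is omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

def sks_correct(projection, kernel, n_iter=5):
    """Iterative scatter kernel superposition (simplified sketch).

    Scatter is modeled as the primary signal convolved with a scatter
    kernel; since only primary + scatter is measured, the primary estimate
    is refined iteratively: primary = measured - primary (*) kernel.
    """
    primary = projection.copy()
    for _ in range(n_iter):
        scatter = fftconvolve(primary, kernel, mode="same")
        primary = np.clip(projection - scatter, 0, None)
    return primary, scatter

proj = np.ones((64, 64))                # toy flat projection
k = np.outer(*(np.hanning(15),) * 2)    # toy broad scatter kernel
k *= 0.1 / k.sum()                      # ~10% scatter fraction
corrected, scatter_est = sks_correct(proj, k)
print("mean scatter estimate:", scatter_est.mean().round(4))
```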
StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.
Li, Chenhui; Baciu, George; Han, Yu
2018-03-01
Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
Mixed kernel function support vector regression for global sensitivity analysis
NASA Astrophysics Data System (ADS)
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analyses in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimates of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal-polynomial kernel function and the Gaussian radial basis kernel function, so the MKF possesses both the global characteristics of the polynomial kernel function and the local characteristics of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
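A minimal sketch of a mixed kernel as a convex combination of a polynomial and a Gaussian RBF kernel, passed to an SVR as a callable; the exact MKF weighting and the Sobol post-processing of the SVR coefficients are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

def make_mixed_kernel(weight=0.5, degree=2, gamma=0.5):
    """Convex combination of a polynomial and a Gaussian RBF kernel,
    in the spirit of the paper's MKF (the exact form is an assumption)."""
    def mixed(X, Y):
        return (weight * polynomial_kernel(X, Y, degree=degree)
                + (1 - weight) * rbf_kernel(X, Y, gamma=gamma))
    return mixed

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(200, 3))
y = X[:, 0] ** 2 + np.sin(3 * X[:, 1]) + 0.1 * rng.normal(size=200)

svr = SVR(kernel=make_mixed_kernel(weight=0.4), C=10.0).fit(X, y)
print("train R^2:", svr.score(X, y).round(3))
```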
NASA Astrophysics Data System (ADS)
Lu, Zhanpeng; Chen, Junjie; Shoji, Tetsuo; Takeda, Yoichi; Yamazaki, Seiya
2015-10-01
With increasing distance from the weld fusion line in an Alloy 690 heat-affected zone, micro-hardness decreases, kernel average misorientation decreases and the fraction of Σ3 boundaries increases. Chromium depletion at grain boundaries in the Alloy 690 heat-affected zone is less significant than that in an Alloy 600 heat-affected zone. The Alloy 690 heat-affected zone exhibits much higher IGSCC resistance than the Alloy 600 heat-affected zone in simulated pressurized water reactor primary water. Heavily cold-worked Alloy 690 exhibits localized intergranular stress corrosion cracking. The effects of metallurgical and mechanical properties on stress corrosion cracking in Alloy 690 are discussed.
Risk Classification with an Adaptive Naive Bayes Kernel Machine Model.
Minnier, Jessica; Yuan, Ming; Liu, Jun S; Cai, Tianxi
2015-04-22
Genetic studies of complex traits have uncovered only a small number of risk markers explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single-marker methods may lack power in selecting informative markers or estimating effects. Most existing methods also typically do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure. If markers in a group tend to have similar effects, proper usage of the group structure could improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a naive Bayes kernel machine (KM) model, we estimate gene-set-specific risk models that relate each gene set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene sets via a regularization procedure. Estimation and computational efficiency are further improved with kernel principal component analysis. Asymptotic results for model estimation and gene-set selection are derived, and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models.
Power-weighted L^p-inequalities for Laguerre-Riesz transforms
NASA Astrophysics Data System (ADS)
Harboure, Eleonor; Segovia, Carlos; Torrea, José L.; Viviani, Beatriz
2008-10-01
In this paper we give a complete description of the power-weighted inequalities, of strong, weak and restricted weak type, for the pair of Riesz transforms associated with the Laguerre function system $\{\mathcal{L}_k^{\alpha}\}$, for any given $\alpha > -1$. We achieve these results by a careful estimate of the kernels: near the diagonal we show that they are local Calderón-Zygmund operators, while in the complement they are majorized by Hardy-type operators and the maximal heat-diffusion operator. We also show that in all cases our results are sharp.
Using kernel density estimates to investigate lymphatic filariasis in northeast Brazil
Medeiros, Zulma; Bonfim, Cristine; Brandão, Eduardo; Netto, Maria José Evangelista; Vasconcellos, Lucia; Ribeiro, Liany; Portugal, José Luiz
2012-01-01
After more than 10 years of the Global Program to Eliminate Lymphatic Filariasis (GPELF) in Brazil, advances have been made, but the endemic disease persists as a public health problem. The aim of this study was to describe the spatial distribution of lymphatic filariasis in the municipality of Jaboatão dos Guararapes, Pernambuco, Brazil. An epidemiological survey was conducted in the municipality, and the positive filariasis cases identified in this survey were georeferenced as points using GPS. A kernel intensity estimator was applied to identify clusters with a greater intensity of cases. We examined 23,673 individuals, and 323 individuals with microfilaremia were identified, representing a mean prevalence rate of 1.4%. Around 88% of the districts surveyed presented cases of filarial infection, with prevalences of 0-5.6%. The male population was more affected by the infection, with 63.8% of the cases (P<0.005). Positive cases were found in all age groups examined. The kernel intensity estimator identified the areas of greatest and least intensity of filarial infection cases. The case distribution was heterogeneous across the municipality. The kernel estimator identified spatial clusters of cases, thus indicating locations with greater intensity of transmission. The main advantage of this type of analysis lies in its ability to rapidly and easily show areas with the highest concentration of cases, thereby contributing towards the planning, monitoring, and surveillance of filariasis elimination actions. Incorporation of geoprocessing and spatial analysis techniques constitutes an important tool for use within the GPELF. PMID:22943547
SOMKE: kernel density estimation over data streams by sequences of self-organizing maps.
Cao, Yuan; He, Haibo; Man, Hong
2012-08-01
In this paper, we propose SOMKE, a novel method for kernel density estimation (KDE) over data streams based on sequences of self-organizing maps (SOMs). In many stream data mining applications, traditional KDE methods are infeasible because of their high computational cost, processing time, and memory requirements. To reduce the time and space complexity, we propose a SOM structure to obtain well-defined data clusters that estimate the underlying probability distributions of incoming data streams. The main idea is to build a series of SOMs over the data streams via two operations, namely creating and merging the SOM sequences. The creation phase produces SOM sequence entries for windows of the data, which capture clustering information for the incoming data streams. The size of the SOM sequences can be further reduced by combining consecutive entries in the sequence based on the Kullback-Leibler divergence. Finally, the probability density functions over arbitrary time periods along the data streams can be estimated using such SOM sequences. We compare SOMKE with two other KDE methods for data streams, the M-kernel approach and the cluster kernel approach, in terms of accuracy and processing time for various stationary data streams. Furthermore, we also investigate the use of SOMKE over nonstationary (evolving) data streams, including a synthetic nonstationary data stream, a real-world financial data stream and a group of network traffic data streams. The simulation results illustrate the effectiveness and efficiency of the proposed approach.
NASA Astrophysics Data System (ADS)
Ngono Mbarga, M. C.; Bup Nde, D.; Mohagir, A.; Kapseu, C.; Elambo Nkeng, G.
2017-01-01
The neem tree, which grows abundantly in India as well as in some regions of Asia and Africa, bears fruits whose kernels contain about 40-50% oil. This oil has high therapeutic and cosmetic value and has recently been projected to be an important raw material for the production of biodiesel. The seed is harvested at high moisture contents, which leads to high post-harvest losses. In this paper, the sorption isotherms are determined by the static gravimetric method at 40, 50, and 60°C to establish a database useful in defining drying and storage conditions for neem kernels. Five different equations are validated for modeling the sorption isotherms of neem kernels. The properties of the sorbed water, such as the monolayer moisture content, surface area of the adsorbent, number of adsorbed monolayers, and percentage of bound water, are also determined. The critical moisture content necessary for the safe storage of dried neem kernels is shown to range from 5 to 10% dry basis, which can be obtained at a relative humidity of less than 65%. The isosteric heats of sorption at 5% moisture content are 7.40 and 22.5 kJ/kg for the adsorption and desorption processes, respectively. This work is the first, to the best of our knowledge, to give the important parameters necessary for the drying and storage of neem kernels, a potential raw material for the production of oil to be used in pharmaceutics, cosmetics, and biodiesel manufacturing.
Aerosol Direct Radiative Effects and Heating in the New Era of Active Satellite Observations
NASA Astrophysics Data System (ADS)
Matus, Alexander V.
Atmospheric aerosols impact the global energy budget by scattering and absorbing solar radiation. Despite their impacts, aerosols remain a significant source of uncertainty in our ability to predict future climate. Multi-sensor observations from the A-Train satellite constellation provide valuable observational constraints necessary to reduce uncertainties in model simulations of aerosol direct effects. This study discusses recent efforts to quantify aerosol direct effects globally and regionally using CloudSat's radiative fluxes and heating rates product. Improving upon previous techniques, this approach leverages the capability of CloudSat and CALIPSO to retrieve vertically resolved estimates of the cloud and aerosol properties critical for accurately evaluating the radiative impacts of aerosols. We estimate the global annual mean aerosol direct effect to be -1.9 +/- 0.6 W/m2, which is in better agreement with previously published estimates from global models than earlier satellite-based estimates. Detailed comparisons against a fully coupled simulation of the Community Earth System Model, however, reveal that this agreement on the global annual mean masks large regional discrepancies between modeled and observed estimates of aerosol direct effects related to model biases in cloud cover. A low bias in stratocumulus cloud cover over the southeastern Pacific Ocean, for example, leads to an overestimate of the radiative effects of marine aerosols. Stratocumulus clouds over the southeastern Atlantic Ocean can enhance aerosol absorption by 50%, allowing aerosol layers to remain self-lofted in an area of subsidence. Aerosol heating is found to peak at 0.6 +/- 0.3 K/day at an altitude of 4 km in September, when biomass burning reaches a maximum. Finally, the contributions of observed aerosol components are evaluated to estimate the direct radiative forcing of anthropogenic aerosols. Aerosol forcing is computed using satellite-based radiative kernels that describe the sensitivity of shortwave fluxes to aerosol optical depth. The direct radiative forcing is estimated to be -0.21 W/m2, with the largest contributions from pollution, partially offset by a positive forcing from smoke aerosols. The results from these analyses provide new benchmarks on the global radiative effects of aerosols and offer new insights for improving future assessments.
Yao, H; Hruska, Z; Kincaid, R; Brown, R; Cleveland, T; Bhatnagar, D
2010-05-01
The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. Aflatoxin contamination in corn has been a long-standing problem plaguing the grain industry, with potentially devastating consequences for corn growers. In this study, aflatoxin-contaminated corn kernels were produced through artificial inoculation of corn ears in the field with toxigenic A. flavus spores. The kernel fluorescence emission data were taken with a fluorescence hyperspectral imaging system when corn kernels were excited with ultraviolet light. Raw fluorescence image data were preprocessed, and regions of interest in each image were created for all kernels. The regions of interest were used to extract spectral signatures and statistical information. The aflatoxin contamination level of single corn kernels was then chemically measured using affinity column chromatography. A fluorescence peak shift phenomenon was noted among different groups of kernels with different aflatoxin contamination levels. The fluorescence peak was found to shift toward longer wavelengths in the blue region for the highly contaminated kernels and toward shorter wavelengths for the clean kernels. Highly contaminated kernels were also found to have a lower fluorescence peak magnitude than the less contaminated kernels. It was also noted that a general negative correlation exists between measured aflatoxin and the fluorescence image bands in the blue and green regions. The coefficient of determination, r(2), was 0.72 for the multiple linear regression model. The multivariate analysis of variance found that the fluorescence means of the four aflatoxin groups, <1, 1-20, 20-100, and ≥100 ng g(-1) (parts per billion), were significantly different from each other at the 0.01 alpha level. Classification accuracy under a two-class schema ranged from 0.84 to 0.91 when a threshold of either 20 or 100 ng g(-1) was used. Overall, the results indicate that fluorescence hyperspectral imaging may be applicable in estimating aflatoxin content in individual corn kernels.
NASA Astrophysics Data System (ADS)
Qian, Kun; Zhou, Huixin; Rong, Shenghui; Wang, Bingjian; Cheng, Kuanhong
2017-05-01
Infrared small-target tracking plays an important role in applications including military reconnaissance, early warning and terminal guidance. In this paper, an effective algorithm based on Singular Value Decomposition (SVD) and an improved Kernelized Correlation Filter (KCF) is presented for infrared small-target tracking. First, the strength of the SVD-based step is that it takes advantage of the target's global information to obtain a background estimate of the infrared image. A dim target is enhanced by subtracting the continually updated background estimate from the original image. Second, the KCF algorithm is combined with a Gaussian Curvature Filter (GCF) to eliminate the excursion problem. The GCF technique is adopted to preserve the edges and suppress the noise of the base sample in the KCF algorithm, helping to calculate the classifier parameters for a small target. Finally, the target position is estimated with a response map, which is obtained via the kernelized classifier. Experimental results demonstrate that the presented algorithm performs favorably in terms of efficiency and accuracy, compared with several state-of-the-art algorithms.
Estimating Mixture of Gaussian Processes by Kernel Smoothing
Huang, Mian; Li, Runze; Wang, Hansheng; Yao, Weixin
2014-01-01
When functional data are not homogeneous, e.g., when there exist multiple classes of functional curves in the dataset, traditional estimation methods may fail. In this paper, we propose a new estimation procedure for the Mixture of Gaussian Processes that incorporates both the functional and inhomogeneous properties of the data. Our method can be viewed as a natural extension of high-dimensional normal mixtures. The key difference, however, is that smoothed structures are imposed on both the mean and covariance functions. The model is shown to be identifiable and can be estimated efficiently by a combination of ideas from the EM algorithm, kernel regression, and functional principal component analysis. Our methodology is empirically justified by Monte Carlo simulations and illustrated by an analysis of a supermarket dataset. PMID:24976675
Locally adaptive methods for KDE-based random walk models of reactive transport in porous media
NASA Astrophysics Data System (ADS)
Sole-Mari, G.; Fernandez-Garcia, D.
2017-12-01
Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles which is a key element to simulate nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and thereby cannot be used at each time step of the simulation; (2) it does not take advantage of the prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics, the particle history and maintains accuracy with time. The method allows particles to efficiently split and merge when necessary as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.
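A bare-bones 1D version of the RWPT-plus-KDE idea (advection-diffusion random walk, then a kernel density estimate of concentration); the adaptive splitting/merging and local kernel shaping that the abstract describes are omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Minimal RWPT step + KDE concentration estimate (1D sketch).
rng = np.random.default_rng(6)
n, v, D, dt = 5000, 1.0, 0.1, 0.01
x = np.zeros(n)                       # particles start at the origin
for _ in range(200):                  # advection + diffusion random walk
    x += v * dt + np.sqrt(2 * D * dt) * rng.normal(size=n)

kde = gaussian_kde(x)                 # kernel density -> concentration
grid = np.linspace(x.min(), x.max(), 9)
print("estimated concentration profile:", kde(grid).round(3))
```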
Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.
Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis
2017-10-16
Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporally sampled data structure for characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatio-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatio-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatio-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem.
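A sketch of SVR with a precomputed spatio-temporal kernel; the separable exponential autocorrelation model and its ranges below are hypothetical, whereas the paper estimates its kernel from the Mahalanobis covariance or the data's autocorrelation.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical (x, y, t) sample sites and a water quality variable.
rng = np.random.default_rng(7)
coords = rng.uniform(0, 1, size=(150, 3))
y = np.sin(4 * coords[:, 2]) + 0.1 * rng.normal(size=150)

def st_kernel(A, B, ls_space=0.3, ls_time=0.2):
    """Separable exponential space-time kernel (illustrative assumption)."""
    ds = np.linalg.norm(A[:, None, :2] - B[None, :, :2], axis=-1)
    dt = np.abs(A[:, None, 2] - B[None, :, 2])
    return np.exp(-ds / ls_space) * np.exp(-dt / ls_time)

K = st_kernel(coords, coords)
svr = SVR(kernel="precomputed", C=10.0).fit(K, y)
print("train R^2:", svr.score(K, y).round(3))
```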
Detoxification of Jatropha curcas kernel cake by a novel Streptomyces fimicarius strain.
Wang, Xing-Hong; Ou, Lingcheng; Fu, Liang-Liang; Zheng, Shui; Lou, Ji-Dong; Gomes-Laranjo, José; Li, Jiao; Zhang, Changhe
2013-09-15
A huge amount of kernel cake, which contains a variety of toxins including phorbol esters (tumor promoters), is projected to be generated yearly in the near future by the Jatropha biodiesel industry. We showed that the kernel cake strongly inhibited plant seed germination and root growth and was highly toxic to carp fingerlings, even though phorbol esters were undetectable by HPLC. Therefore it must be detoxified before disposal to the environment. A mathematical model was established to estimate the general toxicity of the kernel cake by determining the survival time of carp fingerlings. A new strain (Streptomyces fimicarius YUCM 310038) capable of degrading the total toxicity by more than 97% in a 9-day solid-state fermentation was screened out from 578 strains, including 198 known strains and 380 strains isolated from air and soil. The kernel cake fermented by YUCM 310038 was nontoxic to plants and carp fingerlings and significantly promoted tobacco plant growth, indicating its potential to transform the toxic kernel cake into bio-safe animal feed or organic fertilizer, removing the environmental concern and reducing the cost of the Jatropha biodiesel industry. The microbial strain profile essential for kernel cake detoxification is discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
A new discrete dipole kernel for quantitative susceptibility mapping.
Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian
2018-09-01
Most approaches to quantitative susceptibility mapping (QSM) are based on a forward-model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of this approach on forward-model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation, both with synthetic phantoms and with in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high-field MRI - a topic for future investigations. The proposed dipole kernel can be incorporated straightforwardly into existing QSM routines. Copyright © 2018 Elsevier Inc. All rights reserved.
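For intuition, a sketch contrasting the continuous k-space dipole kernel with one common discrete variant in which the squared frequencies are replaced by discrete-Laplacian eigenvalues; the paper's exact discrete formulation may differ.

```python
import numpy as np

def dipole_kernels(shape=(64, 64, 64)):
    """Continuous vs. discrete dipole kernels in k-space (a sketch).

    Continuous: D = 1/3 - kz^2 / |k|^2.
    Discrete: kz^2 and |k|^2 are replaced by discrete-Laplacian
    eigenvalues, 2 - 2*cos(2*pi*n/N), one common discretization
    (an assumption, not necessarily the paper's exact operator).
    """
    freqs = [np.fft.fftfreq(n) for n in shape]
    kx, ky, kz = np.meshgrid(*freqs, indexing="ij")
    with np.errstate(divide="ignore", invalid="ignore"):
        k2 = kx**2 + ky**2 + kz**2
        d_cont = 1.0 / 3.0 - kz**2 / k2
        lx, ly, lz = (2 - 2 * np.cos(2 * np.pi * f) for f in (kx, ky, kz))
        d_disc = 1.0 / 3.0 - lz / (lx + ly + lz)
    d_cont[0, 0, 0] = d_disc[0, 0, 0] = 0.0  # remove the k=0 singularity
    return d_cont, d_disc

dc, dd = dipole_kernels()
print("max |continuous - discrete|:", np.abs(dc - dd).max().round(4))
```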
Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan
2018-05-01
To enhance the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) and thereby modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran, but less 2-furanmethanol and 2-furanmethanol acetate, were found in the treated PKO; the correlation between their formation and the simple sugar profile was estimated using partial least squares regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from the control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby the treated PKO was more caramelic and burnt after aqueous extraction, and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.
Efficient 3D movement-based kernel density estimator and application to wildlife ecology
Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.
2014-01-01
We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
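A baseline sketch only: a plain 3D KDE over synthetic GPS fixes with scipy. The published estimator is movement-based (it conditions on the track between fixes) and far more expensive, which is what motivated the optimization work.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical GPS fixes (x, y in metres, z altitude): a 3D utilization
# distribution via a product Gaussian kernel.
rng = np.random.default_rng(8)
track = np.cumsum(rng.normal(scale=[50, 50, 10], size=(1000, 3)), axis=0)

kde = gaussian_kde(track.T)           # scipy expects rows = dimensions
density = kde(track.T)                # density at each fix
core = density >= np.quantile(density, 0.5)
print("fixes inside the 50% isopleth:", core.sum())
```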
Optimized data fusion for K-means Laplacian clustering
Yu, Shi; Liu, Xinhai; Tranchevent, Léon-Charles; Glänzel, Wolfgang; Suykens, Johan A. K.; De Moor, Bart; Moreau, Yves
2011-01-01
Motivation: We propose a novel algorithm to combine multiple kernels and Laplacians for clustering analysis. The new algorithm is formulated on a Rayleigh quotient objective function and is solved as a bi-level alternating minimization procedure. Using the proposed algorithm, the coefficients of kernels and Laplacians can be optimized automatically. Results: Three variants of the algorithm are proposed. The performance is systematically validated on two real-life data fusion applications. The proposed Optimized Kernel Laplacian Clustering (OKLC) algorithms perform significantly better than other methods. Moreover, the coefficients of kernels and Laplacians optimized by OKLC show some correlation with the rank of performance of individual data source. Though in our evaluation the K values are predefined, in practical studies, the optimal cluster number can be consistently estimated from the eigenspectrum of the combined kernel Laplacian matrix. Availability: The MATLAB code of algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/oklc.html. Contact: shiyu@uchicago.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20980271
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm that was introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatio-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and the desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain-machine interfaces. PMID:25866504
A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.
Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying
2015-09-01
Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
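A generic illustration of the low-rank idea using a Nyström approximation; fastKM's specific construction is not reproduced here, but the payoff is the same: an n x n kernel matrix is replaced by a rank-m factorization.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def nystrom(X, m=50, gamma=0.1, seed=0):
    """Rank-m Nystrom approximation K ~ C W^+ C^T (a generic stand-in
    for the low-rank kernel approximation that fastKM exploits)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma=gamma)        # n x m
    W = rbf_kernel(X[idx], X[idx], gamma=gamma)   # m x m
    return C @ np.linalg.pinv(W) @ C.T

X = np.random.default_rng(9).normal(size=(500, 20))
K = rbf_kernel(X, gamma=0.1)
err = np.linalg.norm(K - nystrom(X)) / np.linalg.norm(K)
print(f"relative Frobenius error of rank-50 Nystrom: {err:.3e}")
```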
Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta
2016-09-01
An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization-enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with varying 2-14 mm kernel sizes. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and relationships between the true local field and estimated local field in REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization-enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP, respectively. The REV-SHARP result exhibited the highest correlation between the true local field and estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In human experiments, no obvious errors due to artifacts were present in REV-SHARP. The proposed REV-SHARP is a new method that combines a varying spherical kernel size with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better accuracy in QSM. Copyright © 2016 Elsevier Inc. All rights reserved.
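A 2D toy analogue of the deconvolution step, assuming a circular (rather than spherical) mean value filter and a single fixed kernel radius; parameter values are illustrative, not the REV-SHARP settings, which also step the kernel radius from 14 mm down to 2 mm:

```python
import numpy as np

def smv_tikhonov_2d(field, mask, radius=6, lam=0.05):
    """SHARP-style background removal sketch: apply the filter
    C = delta - disk (harmonic background fields lie in its null space),
    then invert C with Tikhonov regularization in the Fourier domain."""
    ny, nx = field.shape
    y, x = np.ogrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
    disk = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
    disk /= disk.sum()
    delta = np.zeros_like(disk)
    delta[ny // 2, nx // 2] = 1.0
    C = np.fft.fft2(np.fft.ifftshift(delta - disk))
    F = np.fft.fft2(field * mask)
    # Tikhonov-regularized deconvolution of the C-filtered field:
    local = np.fft.ifft2(np.abs(C) ** 2 * F / (np.abs(C) ** 2 + lam)).real
    return local * mask
```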
Novel applications of the temporal kernel method: Historical and future radiative forcing
NASA Astrophysics Data System (ADS)
Portmann, R. W.; Larson, E.; Solomon, S.; Murphy, D. M.
2017-12-01
We present a new estimate of the historical radiative forcing derived from the observed global mean surface temperature and a model-derived kernel function. Current estimates of historical radiative forcing are usually derived from climate models. Despite large variability in these models, the multi-model mean tends to do a reasonable job of representing the Earth system and climate. One method of diagnosing the transient radiative forcing in these models requires model output of the top-of-the-atmosphere (TOA) radiative imbalance and the global mean temperature anomaly. It is difficult to apply this method to historical observations due to the lack of TOA radiative measurements before CERES. We apply the temporal kernel method (TKM) of calculating radiative forcing to the historical global mean temperature anomaly. This novel approach is compared against the current regression-based methods using model outputs and shown to produce consistent forcing estimates, giving confidence in the forcing derived from the historical temperature record. The derived TKM radiative forcing provides an estimate of the forcing time series that the average climate model needs to produce the observed temperature record. This forcing time series is found to be in good overall agreement with previous estimates but includes significant differences that will be discussed. The historical anthropogenic aerosol forcing is estimated as a residual from the TKM and found to be consistent with earlier moderate forcing estimates. In addition, this method is applied to future temperature projections to estimate the radiative forcing required to achieve given temperature goals, such as those set in the Paris Agreement.
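A toy version of the inversion, assuming the TKM can be cast as recovering forcing from a discrete convolution of forcing with a temperature-response kernel; the exponential kernel and all numbers below are ours, not the model-derived kernel of the paper:

```python
import numpy as np
from scipy.linalg import solve_triangular

n, dt = 150, 1.0                                  # years
t = np.arange(n) * dt
K = (0.5 / 8.0) * np.exp(-t / 8.0) * dt           # toy response kernel
F_true = 0.02 * t + 0.3 * np.sin(2 * np.pi * t / 11.0)  # toy forcing (W/m^2)

# Temperature as a causal convolution: T_n = sum_{m<=n} K_{n-m} F_m.
A = np.array([[K[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])
T = A @ F_true                                    # "observed" anomaly
# Forcing recovered by inverting the lower-triangular Toeplitz system.
F_rec = solve_triangular(A, T, lower=True)
print(np.max(np.abs(F_rec - F_true)))             # ~ machine precision
```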
Determination of aflatoxin risk components for in-shell Brazil nuts.
Vargas, E A; dos Santos, E A; Whitaker, T B; Slate, A B
2011-09-01
A study was conducted on the risk from aflatoxins associated with the kernels and shells of Brazil nuts. Samples were collected from processing plants in Amazonia, Brazil. A total of 54 test samples (40 kg) were taken from 13 in-shell Brazil nut lots ready for market. Each in-shell sample was shelled, and the kernels and shells were sorted into five fractions: good kernels, rotten kernels, good shells with kernel residue, good shells without kernel residue, and rotten shells, which were analysed for aflatoxins. The kernel-to-shell mass ratio (w/w) was 50.2%/49.8%. The Brazil nut shell was found to be contaminated with aflatoxin. Rotten nuts were found to be a high-risk fraction for aflatoxin in in-shell Brazil nut lots. Rotten nuts contributed only 4.2% of the sample mass (kg), but contributed 76.6% of the total aflatoxin mass (µg) in the in-shell test sample. The highest correlations were found between the aflatoxin concentration in in-shell Brazil nut samples and the aflatoxin concentration in all defective fractions (R² = 0.97). The aflatoxin mass of all defective fractions (R² = 0.90), as well as that of the rotten nuts (R² = 0.88), was also strongly correlated with the aflatoxin concentration of the in-shell test samples. Process factors of 0.17, 0.16, and 0.24 were calculated for estimating the aflatoxin concentration in the good (edible) kernels and in good nuts from the aflatoxin concentration measured in the in-shell test sample and in all kernels, respectively. © 2011 Taylor & Francis
A novel SURE-based criterion for parametric PSF estimation.
Xue, Feng; Blu, Thierry
2015-02-01
We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs, involving a scaling factor that controls the blur size. A typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality that is very similar to the one obtained with the exact PSF, when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.
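The SURE principle is easiest to see in the plain denoising setting: for a linear Fourier-domain filter applied to y = x + N(0, sigma^2 I), SURE is an unbiased estimate of the per-sample MSE and can be minimized over the filter family. A one-dimensional sketch of that simpler case (our own simplification; the paper's blur-SURE handles the deconvolution setting):

```python
import numpy as np

def sure_linear_filter(y, H, sigma):
    """SURE for a circular-convolution filter with DFT response H:
    an unbiased estimate of the per-sample MSE of ifft(H * fft(y))."""
    n = y.size
    f = np.fft.ifft(H * np.fft.fft(y)).real
    trace_F = np.sum(H.real)          # tr(F) for F = U^{-1} diag(H) U
    return (np.sum((f - y) ** 2) + 2 * sigma ** 2 * trace_F) / n - sigma ** 2

rng = np.random.default_rng(1)
n, sigma = 1024, 0.5
x = np.sin(2 * np.pi * np.arange(n) / 64)
y = x + sigma * rng.normal(size=n)
freqs = np.fft.fftfreq(n)
# Select the smoothing bandwidth that minimizes SURE over a small grid.
best = min((sure_linear_filter(y, np.exp(-(freqs / w) ** 2), sigma), w)
           for w in [0.002, 0.005, 0.01, 0.02, 0.05, 0.1])
print("selected bandwidth:", best[1])
```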
Support vector machines for nuclear reactor state estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavaljevski, N.; Gross, K. C.
2000-02-14
Validation of nuclear power reactor signals is often performed by comparing signal prototypes with the actual reactor signals. The signal prototypes are often computed based on empirical data. The implementation of an estimation algorithm which can make predictions on limited data is an important issue. A new machine learning algorithm called support vector machines (SVMs), recently developed by Vladimir Vapnik and his coworkers, enables a high level of generalization with finite high-dimensional data. The improved generalization in comparison with standard methods like neural networks is due mainly to the following characteristics of the method. The input data space is transformed into a high-dimensional feature space using a kernel function, and the learning problem is formulated as a convex quadratic programming problem with a unique solution. In this paper the authors have applied the SVM method for data-based state estimation in nuclear power reactors. In particular, they implemented and tested kernels developed at Argonne National Laboratory for the Multivariate State Estimation Technique (MSET), a nonlinear, nonparametric estimation technique with a wide range of applications in nuclear reactors. The methodology has been applied to three data sets from experimental and commercial nuclear power reactor applications. The results are promising. The combination of MSET kernels with the SVM method has better noise reduction and generalization properties than the standard MSET algorithm.
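A minimal example of the data-based estimation idea using a generic RBF-kernel support vector regressor on hypothetical sensor data (the MSET kernels of the paper are not reproduced here):

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical signal-validation setup: predict one sensor from two
# correlated ones and flag large residuals as potential sensor drift.
rng = np.random.default_rng(0)
s1 = rng.normal(size=400)
s2 = 0.8 * s1 + 0.1 * rng.normal(size=400)
target = 1.5 * s1 - 0.7 * s2 ** 2 + 0.05 * rng.normal(size=400)

X = np.column_stack([s1, s2])
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:300], target[:300])
residual = target[300:] - model.predict(X[300:])
print("max residual on held-out data:", np.abs(residual).max())
```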
NASA Astrophysics Data System (ADS)
Sardet, Laure; Patilea, Valentin
When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, like lognormal, Weibull, and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem to be quite well adapted to capture the skewness, the long tails, as well as the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture modeling, typically a two- or three-component mixture. Next, we use the mixture cumulative distribution function (CDF) to transform data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of the quantiles with simulated nonnegative data and the quantiles of the individual claims distribution in a non-life insurance application.
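A compact sketch of the pipeline with illustrative mixture parameters and bandwidth (the paper proposes its own bandwidth rule): transform claims through the mixture CDF, smooth on [0, 1] with a Chen-type beta kernel, and back-transform the density.

```python
import numpy as np
from scipy import stats

def beta_kernel_density(u, data_u, b):
    """Chen-type beta kernel density on [0, 1]: each observation
    contributes a Beta(u/b + 1, (1-u)/b + 1) density weight."""
    return np.mean(stats.beta.pdf(data_u, u / b + 1, (1 - u) / b + 1))

rng = np.random.default_rng(2)
claims = np.concatenate([rng.lognormal(7.0, 0.5, 700),
                         rng.lognormal(9.0, 0.8, 300)])   # toy claim costs

# Two-component mixture assumed fitted elsewhere; parameters illustrative.
w = [0.7, 0.3]
comp = [stats.lognorm(0.5, scale=np.exp(7.0)),
        stats.lognorm(0.8, scale=np.exp(9.0))]
cdf = lambda y: w[0] * comp[0].cdf(y) + w[1] * comp[1].cdf(y)
pdf = lambda y: w[0] * comp[0].pdf(y) + w[1] * comp[1].pdf(y)

u = cdf(claims)          # data mapped to the unit interval
b = 0.05                 # bandwidth (rule-of-thumb placeholder)
y0 = 3000.0              # evaluate the claims density here
f_y0 = beta_kernel_density(cdf(y0), u, b) * pdf(y0)   # back-transform
print(f_y0)
```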
The Casimir effect in rugby-ball type flux compactifications
NASA Astrophysics Data System (ADS)
Minamitsuji, M.
2008-04-01
We discuss volume stabilization in a 6D braneworld model based on 6D supergravity theory. The internal space is compactified by magnetic flux and contains codimension-two 3-branes (conical singularities) as its boundaries. In general the external 4D spacetime is warped, and in the unwarped limit the shape of the internal space looks like a 'rugby ball'. The size of the internal space is not fixed, due to the scale invariance of the supergravity theory. We discuss the possibility of volume stabilization by the Casimir effect for a massless, minimally coupled bulk scalar field. The main obstacle in studying this case is that the brane (conical) part of the relevant heat kernel coefficient (a6) has not been formulated. Thus, as a first step, we consider the 4D analog model with boundary codimension-two 1-branes. The spacetime structure of the 4D model is very similar to that of the original 6D model, and the relevant heat kernel coefficient is now well known. We derive the one-loop effective potential induced by a scalar field in the bulk by employing zeta function regularization with heat kernel analysis. As a result, the volume is stabilized for most possible choices of the parameters. In particular, for a larger degree of warping, our results imply that a large hierarchy between the mass scales and a tiny amount of effective cosmological constant can be realized on the brane. In the non-warped limit the ratio tends to converge to the same value, independently of the bulk gauge coupling constant. Finally, we analyze volume stabilization in the original 6D model by employing the same mode-sum technique.
NASA Astrophysics Data System (ADS)
Qiu, Xiang; Dai, Ming; Yin, Chuan-li
2017-09-01
Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the obtained images have the disadvantages of low contrast, complex texture, and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. Following Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. Then, using L0-norm sparse priors on the gradient and the dark channel to estimate the APSF blur kernel, the fast Fourier transform is used to recover the original clear image by Wiener filtering. Compared with other state-of-the-art methods, the proposed method can correctly estimate the blur kernel, effectively remove atmospheric degradation, preserve image detail information, and improve the quality evaluation indexes.
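The final restoration step is standard Wiener filtering in the Fourier domain; a sketch assuming the blur kernel has already been estimated (the L0-prior APSF estimation itself is not reproduced):

```python
import numpy as np

def wiener_deblur(image, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution with a known blur kernel.
    psf: same shape as image, centered at the array center.
    nsr: noise-to-signal power ratio (placeholder constant)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))     # kernel spectrum
    G = np.fft.fft2(image)                     # blurred-image spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```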
Selection and properties of alternative forming fluids for TRISO fuel kernel production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, M. P.; King, J. C.; Gorman, B. P.
2013-01-01
Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.
Selection and properties of alternative forming fluids for TRISO fuel kernel production
NASA Astrophysics Data System (ADS)
Baker, M. P.; King, J. C.; Gorman, B. P.; Marshall, D. W.
2013-01-01
Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ˜10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.
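The settling-velocity part of the column-height estimate can be approximated with Stokes' law at low Reynolds number; the property values below are placeholders, not the measured data of the study:

```python
def stokes_velocity(d, rho_drop, rho_fluid, mu, g=9.81):
    """Terminal velocity (m/s) of a small sphere at low Reynolds number:
    v = g d^2 (rho_drop - rho_fluid) / (18 mu). A real column design
    would also check Re and couple this with heat transfer rates."""
    return g * d ** 2 * (rho_drop - rho_fluid) / (18.0 * mu)

# Placeholder values: 1 mm droplet, drop and fluid densities in kg/m^3,
# dynamic viscosity in Pa.s.
v = stokes_velocity(d=1.0e-3, rho_drop=1300.0, rho_fluid=1000.0, mu=2.0e-3)
print(f"settling velocity: {v:.3f} m/s")   # column height ~ v * gelation time
```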
Sasipriya, Gopalakrishnan; Siddhuraju, Perumal
2012-08-01
The present study aimed to determine the antioxidant activity of raw and processed samples of the underutilized legumes Entada scandens (seed kernel) and Canavalia gladiata (seeds). Indigenous processing methods such as dry heating, autoclaving, and soaking followed by autoclaving in different solutions (plain water, ash, sugar, and sodium bicarbonate) were applied to the seed samples. All processing methods other than dry heating showed significant reductions in phenolics (2.9-63%), tannins (26-100%), and flavonoids (14-67%). However, in processed samples of E. scandens, the hydroxyl radical scavenging activity and β-carotene bleaching inhibition activity were increased, whereas 2,2-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt (ABTS·(+)), ferric reducing antioxidant power (FRAP), metal chelating, and superoxide anion scavenging activity were similar to those of unprocessed samples. In contrast, except for dry heating in C. gladiata, all other processing methods significantly (P<0.05) reduced the 2,2'-diphenyl-1-picrylhydrazyl (DPPH·) (20-35%), ABTS·(+) (22-75%), FRAP (34-74%), metal chelating (30-41%), superoxide anion radical scavenging (8-80%), hydroxyl radical scavenging (20-40%), and β-carotene bleaching inhibition activity (15-69%). In addition, the sample extracts of raw and dry-heated samples protected against DNA damage at 10 μg. All processing methods in E. scandens, and dry heating in C. gladiata, would be suitable for adoption in domestic or industrial processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
Memory behaviors of entropy production rates in heat conduction
NASA Astrophysics Data System (ADS)
Li, Shu-Nan; Cao, Bing-Yang
2018-02-01
Based on the relaxation time approximation and a first-order expansion, memory behaviors in heat conduction are found between the macroscopic and Boltzmann-Gibbs-Shannon (BGS) entropy production rates, with exponentially decaying memory kernels. In the frameworks of classical irreversible thermodynamics (CIT) and BGS statistical mechanics, the memory dependence on the integrated history is unidirectional, while for the extended irreversible thermodynamics (EIT) and BGS entropy production rates, the memory dependences are bidirectional and coexist with the linear terms. When the macroscopic and microscopic relaxation times satisfy a specific relationship, the entropic memory dependences are eliminated. There also exist initial effects in the entropic memory behaviors, which decay exponentially. The second-order term is also discussed; it can be understood as the global non-equilibrium degree. The effects of the second-order term consist of three parts: a memory dependence, an initial value, and a linear term. The corresponding memory kernels are still exponential, and the initial effects of the global non-equilibrium degree also decay exponentially.
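Schematically, relations of the type described link one entropy production rate to the exponentially weighted history of another; a generic form (the exact prefactors and signs are those derived in the paper) is

```latex
\sigma_{a}(t) \;=\; \int_{0}^{t} \frac{1}{\tau}\, e^{-(t-s)/\tau}\, \sigma_{b}(s)\, \mathrm{d}s ,
```

where the relaxation time τ sets how quickly the memory fades.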
Earth Structure, Ice Mass Changes, and the Local Dynamic Geoid
NASA Astrophysics Data System (ADS)
Harig, C.; Simons, F. J.
2014-12-01
Spherical Slepian localization functions are a useful method for studying regional mass changes observed by satellite gravimetry. By projecting data onto a sparse basis set, the local field can be estimated more easily than with the full spherical harmonic basis. We have used this method previously to estimate the ice mass change in Greenland from GRACE data, and it can also be applied to other planetary problems such as global magnetic fields. Earth's static geoid, in contrast to the time-variable field, is in large part related to the internal density and rheological structure of the Earth. Past studies have used dynamic geoid kernels to relate this density structure and the internal deformation it induces to the surface geopotential at large scales. These now classical studies of the eighties and nineties were able to estimate the mantle's radial rheological profile, placing constraints on the ratio between upper and lower mantle viscosity. By combining these two methods, spherical Slepian localization and dynamic geoid kernels, we have created local dynamic geoid kernels which are sensitive only to density variations within an area of interest. With these kernels we can estimate the approximate local radial rheological structure that best explains the locally observed geoid on a regional basis. First-order differences of the regional mantle viscosity structure are accessible to this technique. In this contribution we present our latest, as yet unpublished results on the geographical and temporal pattern of ice mass changes in Antarctica over the past decade, and we introduce a new approach to extract regional information about the internal structure of the Earth from the static global gravity field. Both sets of results are linked in terms of the relevant physics, but also in being developed from the marriage of Slepian functions and geoid kernels. We make predictions on the utility of our approach to derive fully three-dimensional rheological Earth models, to be used for corrections for glacio-isostatic adjustment, as necessary for the interpretation of time-variable gravity observations in terms of ice sheet mass-balance studies.
Bayesian kernel machine regression for estimating the health effects of multi-pollutant mixtures.
Bobb, Jennifer F; Valeri, Linda; Claus Henn, Birgit; Christiani, David C; Wright, Robert O; Mazumdar, Maitreyi; Godleski, John J; Coull, Brent A
2015-07-01
Because humans are invariably exposed to complex chemical mixtures, estimating the health effects of multi-pollutant exposures is of critical concern in environmental epidemiology, and to regulatory agencies such as the U.S. Environmental Protection Agency. However, most health effects studies focus on single agents or consider simple two-way interaction models, in part because we lack the statistical methodology to more realistically capture the complexity of mixed exposures. We introduce Bayesian kernel machine regression (BKMR) as a new approach to study mixtures, in which the health outcome is regressed on a flexible function of the mixture (e.g. air pollution or toxic waste) components that is specified using a kernel function. In high-dimensional settings, a novel hierarchical variable selection approach is incorporated to identify important mixture components and account for the correlated structure of the mixture. Simulation studies demonstrate the success of BKMR in estimating the exposure-response function and in identifying the individual components of the mixture responsible for health effects. We demonstrate the features of the method through epidemiology and toxicology applications. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
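A simplified frequentist analogue of the BKMR mean model y = h(Z) + X beta + eps, with a Gaussian kernel on the mixture components (no hierarchical variable selection or MCMC; lam and rho are placeholder tuning parameters):

```python
import numpy as np

def kernel_machine_fit(Z, X, y, lam=1.0, rho=1.0):
    """Gaussian-kernel machine with a linear covariate part.
    Z: (n, p) mixture exposures; X: (n, q) covariates; y: (n,) outcome.
    Returns the fitted exposure-response surface h and covariate effects."""
    n = len(y)
    D2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / rho)                      # kernel on mixture profiles
    P = np.linalg.inv(K + lam * np.eye(n))     # shrinkage weight matrix
    beta = np.linalg.solve(X.T @ P @ X, X.T @ P @ y)   # GLS-type step
    h = K @ P @ (y - X @ beta)                 # exposure-response surface
    return h, beta
```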
Modeling utilization distributions in space and time
Keating, K.A.; Cherry, S.
2009-01-01
W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed. © 2009 by the Ecological Society of America.
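The wrapped Cauchy density is the only nonstandard ingredient; a sketch of a product-kernel UD estimate with Gaussian kernels in space and a wrapped Cauchy kernel in circular time (bandwidths are placeholders, not the paper's choices):

```python
import numpy as np

def wrapped_cauchy(theta, mu, rho):
    """Wrapped Cauchy density on the circle (theta, mu in radians;
    0 < rho < 1 controls concentration)."""
    return (1 - rho ** 2) / (
        2 * np.pi * (1 + rho ** 2 - 2 * rho * np.cos(theta - mu)))

def ud_estimate(x, y, t_circ, data, hx=100.0, hy=100.0, rho=0.9):
    """Product-kernel UD estimate at (x, y, t_circ). data: (n, 3) array
    of observed (x, y, circular time), e.g., day of year mapped to
    radians via 2*pi*day/365."""
    xs, ys, ts = data.T
    kx = np.exp(-0.5 * ((x - xs) / hx) ** 2) / (hx * np.sqrt(2 * np.pi))
    ky = np.exp(-0.5 * ((y - ys) / hy) ** 2) / (hy * np.sqrt(2 * np.pi))
    kt = wrapped_cauchy(t_circ, ts, rho)
    return np.mean(kx * ky * kt)
```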
A comparison of skyshine computational methods.
Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J
2005-01-01
A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.
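For orientation, the point-kernel idea reduces to attenuated inverse-square transport with a buildup correction; the linear buildup factor and the numbers below are illustrative only (production analyses use tabulated buildup data and the codes named above):

```python
import numpy as np

def point_kernel_flux(S, mu, r, a=1.0):
    """Point-kernel estimate of photon flux at distance r from an
    isotropic source of strength S (photons/s) in a medium with
    attenuation coefficient mu (1/m), using a simple linear buildup
    factor B = 1 + a*mu*r: phi = S * B * exp(-mu r) / (4 pi r^2)."""
    B = 1.0 + a * mu * r
    return S * B * np.exp(-mu * r) / (4.0 * np.pi * r ** 2)

# e.g., 1 MeV photons in an air-like medium; values are placeholders.
print(point_kernel_flux(S=1e9, mu=0.008, r=100.0))
```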
Kwon, Oh-Hyun; Crnovrsanin, Tarik; Ma, Kwan-Liu
2018-01-01
Using different methods for laying out a graph can lead to very different visual appearances, with which the viewer perceives different information. Selecting a "good" layout method is thus important for visualizing a graph. The selection can be highly subjective and dependent on the given task. A common approach to selecting a good layout is to use aesthetic criteria and visual inspection. However, fully calculating various layouts and their associated aesthetic metrics is computationally expensive. In this paper, we present a machine learning approach to large graph visualization based on computing the topological similarity of graphs using graph kernels. For a given graph, our approach can show what the graph would look like in different layouts and estimate their corresponding aesthetic metrics. An important contribution of our work is the development of a new framework to design graph kernels. Our experimental study shows that our estimation calculation is considerably faster than computing the actual layouts and their aesthetic metrics. Also, our graph kernels outperform the state-of-the-art ones in both time and accuracy. In addition, we conducted a user study to demonstrate that the topological similarity computed with our graph kernel matches perceptual similarity assessed by human users.
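The paper develops its own kernel-design framework; as a baseline illustration of what a graph kernel computes, here is the standard Weisfeiler-Lehman subtree kernel for two small labeled graphs:

```python
from collections import Counter

def wl_features(adj, labels, iterations=3):
    """Weisfeiler-Lehman relabeling histogram for one graph.
    adj: list of neighbor lists; labels: initial node labels."""
    labels = list(labels)
    feats = Counter(labels)
    for _ in range(iterations):
        labels = [str(labels[v]) + "|" +
                  ",".join(sorted(str(labels[u]) for u in adj[v]))
                  for v in range(len(adj))]
        feats.update(labels)
    return feats

def wl_kernel(adj1, lab1, adj2, lab2, iterations=3):
    """WL subtree kernel: dot product of relabeling histograms."""
    f1 = wl_features(adj1, lab1, iterations)
    f2 = wl_features(adj2, lab2, iterations)
    return sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())

# Two small graphs with node degree as the initial label:
g1 = [[1], [0, 2], [1]]          # path on 3 nodes
g2 = [[1, 2], [0, 2], [0, 1]]    # triangle
print(wl_kernel(g1, [len(n) for n in g1], g2, [len(n) for n in g2]))
```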
RTOS kernel in portable electrocardiograph
NASA Astrophysics Data System (ADS)
Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.
2011-12-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which a uC/OS-II RTOS can be embedded. The decision to use the kernel is based on its benefits, its license for educational use, and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and evaluation of the resources used by each process. After this feasibility analysis, the cyclic code was migrated to a structure based on separate processes, or tasks, able to synchronize events, resulting in an electrocardiograph running on a single CPU under the RTOS.
Sensitivity Kernels for the Cross-Convolution Measure: Eliminate the Source in Waveform Tomography
NASA Astrophysics Data System (ADS)
Menke, W. H.
2017-12-01
We use the adjoint method to derive sensitivity kernels for the cross-convolution measure, a goodness-of-fit criterion that is applicable to seismic data containing closely spaced multiple arrivals, such as reverberating compressional waves and split shear waves. In addition to a general formulation, specific expressions for the sensitivity with respect to density, Lamé parameter, and shear modulus are derived for an isotropic elastic solid. As is typical of adjoint methods, the kernels depend upon an adjoint field, the source of which, in this case, is the reference displacement field, pre-multiplied by a matrix of cross-correlations of components of the observed field. We use a numerical simulation to evaluate the resolving power of a tomographic inversion that employs the cross-convolution measure. The estimated resolving kernel is point-like, indicating that the cross-convolution measure will perform well in waveform tomography settings.
Nonparametric Item Response Curve Estimation with Correction for Measurement Error
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…
NASA Astrophysics Data System (ADS)
Alcuson, J. A.; Reynolds-Barredo, J. M.; Mier, J. A.; Sanchez, Raul; Del-Castillo-Negrete, Diego; Newman, David E.; Tribaldos, V.
2015-11-01
A method to determine fractional transport exponents in systems dominated by fluid or plasma turbulence is proposed. The method is based on the estimation of the integro-differential kernel that relates values of the fluxes and gradients of the transported field, and its comparison with the family of analytical kernels of the linear fractional transport equation. Although kernels of this type have been explored before in this context, the methodology proposed here is rather unique, since the connection with specific fractional equations is exploited from the start. The procedure has been designed to be particularly well suited for application in experimental setups, taking advantage of the fact that kernel determination only requires temporal data of the transported field measured on an Eulerian grid. The simplicity and robustness of the method is tested first by using fabricated data from continuous-time random walk models built with prescribed transport characteristics. Its strengths are then illustrated on numerical Eulerian data gathered from simulations of a magnetically confined turbulent plasma in a near-critical regime, which is known to exhibit superdiffusive radial transport.
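The kernel-estimation step can be sketched on a periodic grid: if the flux is a convolution of the gradient with a kernel, regularized Fourier division recovers the kernel from Eulerian data. The Gaussian "true" kernel below is a stand-in; a fractional-transport kernel would decay algebraically instead.

```python
import numpy as np

n, dx = 512, 0.1
x = (np.arange(n) - n // 2) * dx
G_true = np.exp(-x ** 2 / 2.0)
G_true /= G_true.sum()                      # normalized stand-in kernel

rng = np.random.default_rng(3)
grad = rng.normal(size=n)                   # synthetic gradient field
# Flux as a periodic convolution q = G * grad:
q = np.fft.ifft(np.fft.fft(np.fft.ifftshift(G_true)) * np.fft.fft(grad)).real

# Regularized Fourier division to recover the kernel:
Qf, Gf = np.fft.fft(q), np.fft.fft(grad)
K_hat = Qf * np.conj(Gf) / (np.abs(Gf) ** 2 + 1e-6)
G_est = np.fft.fftshift(np.fft.ifft(K_hat).real)
print(np.max(np.abs(G_est - G_true)))       # small recovery error
```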
Quantitative comparison of noise texture across CT scanners from different manufacturers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solomon, Justin B.; Christianson, Olav; Samei, Ehsan
2012-10-15
Purpose: To quantitatively compare noise texture across computed tomography (CT) scanners from different manufacturers using the noise power spectrum (NPS). Methods: The American College of Radiology CT accreditation phantom (Gammex 464, Gammex, Inc., Middleton, WI) was imaged on two scanners: Discovery CT 750HD (GE Healthcare, Waukesha, WI) and SOMATOM Definition Flash (Siemens Healthcare, Germany), using a consistent acquisition protocol (120 kVp, 0.625/0.6 mm slice thickness, 250 mAs, and 22 cm field of view). Images were reconstructed using filtered backprojection and a wide selection of reconstruction kernels. For each image set, the 2D NPS were estimated from the uniform section of the phantom. The 2D spectra were normalized by their integral value, radially averaged, and filtered by the human visual response function. A systematic kernel-by-kernel comparison across manufacturers was performed by computing the root mean square difference (RMSD) and the peak frequency difference (PFD) between the NPS from different kernels. GE and Siemens kernels were compared and kernel pairs that minimized the RMSD and |PFD| were identified. Results: The RMSD (|PFD|) values between the NPS of GE and Siemens kernels varied from 0.01 mm² (0.002 mm⁻¹) to 0.29 mm² (0.74 mm⁻¹). The GE kernels "Soft," "Standard," "Chest," and "Lung" closely matched the Siemens kernels "B35f," "B43f," "B41f," and "B80f" (RMSD < 0.05 mm², |PFD| < 0.02 mm⁻¹, respectively). The GE "Bone," "Bone+," and "Edge" kernels all matched most closely with the Siemens "B75f" kernel, but with sizeable RMSD and |PFD| values up to 0.18 mm² and 0.41 mm⁻¹, respectively. These sizeable RMSD and |PFD| values corresponded to visually perceivable differences in the noise texture of the images. Conclusions: It is possible to use the NPS to quantitatively compare noise texture across CT systems. The degree to which similar texture across scanners could be achieved varies and is limited by the kernels available on each scanner.
Quantitative comparison of noise texture across CT scanners from different manufacturers.
Solomon, Justin B; Christianson, Olav; Samei, Ehsan
2012-10-01
To quantitatively compare noise texture across computed tomography (CT) scanners from different manufacturers using the noise power spectrum (NPS). The American College of Radiology CT accreditation phantom (Gammex 464, Gammex, Inc., Middleton, WI) was imaged on two scanners: Discovery CT 750HD (GE Healthcare, Waukesha, WI) and SOMATOM Definition Flash (Siemens Healthcare, Germany), using a consistent acquisition protocol (120 kVp, 0.625/0.6 mm slice thickness, 250 mAs, and 22 cm field of view). Images were reconstructed using filtered backprojection and a wide selection of reconstruction kernels. For each image set, the 2D NPS were estimated from the uniform section of the phantom. The 2D spectra were normalized by their integral value, radially averaged, and filtered by the human visual response function. A systematic kernel-by-kernel comparison across manufacturers was performed by computing the root mean square difference (RMSD) and the peak frequency difference (PFD) between the NPS from different kernels. GE and Siemens kernels were compared and kernel pairs that minimized the RMSD and |PFD| were identified. The RMSD (|PFD|) values between the NPS of GE and Siemens kernels varied from 0.01 mm² (0.002 mm⁻¹) to 0.29 mm² (0.74 mm⁻¹). The GE kernels "Soft," "Standard," "Chest," and "Lung" closely matched the Siemens kernels "B35f," "B43f," "B41f," and "B80f" (RMSD < 0.05 mm², |PFD| < 0.02 mm⁻¹, respectively). The GE "Bone," "Bone+," and "Edge" kernels all matched most closely with the Siemens "B75f" kernel, but with sizeable RMSD and |PFD| values up to 0.18 mm² and 0.41 mm⁻¹, respectively. These sizeable RMSD and |PFD| values corresponded to visually perceivable differences in the noise texture of the images. It is possible to use the NPS to quantitatively compare noise texture across CT systems. The degree to which similar texture across scanners could be achieved varies and is limited by the kernels available on each scanner.
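A sketch of the NPS estimation pipeline used in both studies (simple mean-subtraction detrending; the visual-response filtering step is omitted):

```python
import numpy as np

def radial_nps(rois, px):
    """2D noise power spectrum from uniform-region ROIs, then radially
    averaged. rois: array (n_roi, N, N); px: pixel size (mm).
    Returns spatial frequencies (cycles/mm) and the radial NPS (mm^2)."""
    n_roi, N, _ = rois.shape
    nps2d = np.zeros((N, N))
    for roi in rois:
        nps2d += np.abs(np.fft.fft2(roi - roi.mean())) ** 2
    nps2d *= px * px / (N * N * n_roi)     # standard NPS normalization
    nps2d = np.fft.fftshift(nps2d)
    yy, xx = np.indices((N, N)) - N // 2
    r = np.hypot(xx, yy).astype(int)
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), nps2d.ravel()) / np.maximum(counts, 1)
    freqs = np.arange(radial.size) / (N * px)
    return freqs[:N // 2], radial[:N // 2]
```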
Some physical properties of ginkgo nuts and kernels
NASA Astrophysics Data System (ADS)
Ch'ng, P. E.; Abdullah, M. H. R. O.; Mathai, E. J.; Yunus, N. A.
2013-12-01
Some data on the physical properties of ginkgo nuts at a moisture content of 45.53% (±2.07) (wet basis) and of their kernels at 60.13% (±2.00) (wet basis) are presented in this paper. These include estimates of the mean length, width, thickness, geometric mean diameter, sphericity, aspect ratio, unit mass, surface area, volume, true density, bulk density, and porosity. The coefficient of static friction for nuts and kernels was determined using plywood, glass, rubber, and galvanized steel sheet. Such data are essential in food engineering, especially in the design and development of machines and equipment for processing and handling agricultural products.
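The derived shape measures follow standard definitions computed from the three axial dimensions; the sample values below are placeholders, not the paper's data:

```python
def geometric_properties(L, W, T):
    """Standard axial-dimension measures used for nuts and kernels."""
    gmd = (L * W * T) ** (1.0 / 3.0)   # geometric mean diameter
    sphericity = gmd / L
    aspect_ratio = W / L
    return gmd, sphericity, aspect_ratio

gmd, phi, ar = geometric_properties(L=22.0, W=19.0, T=15.0)  # mm
print(f"GMD = {gmd:.2f} mm, sphericity = {phi:.3f}, aspect ratio = {ar:.3f}")
```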
Gutman, Boris; Leonardo, Cassandra; Jahanshad, Neda; Hibar, Derrek; Eschenburg, Kristian; Nir, Talia; Villalon, Julio; Thompson, Paul
2014-01-01
We present a framework for registering cortical surfaces based on tractography-informed structural connectivity. We define connectivity as a continuous kernel on the product space of the cortex, and develop a method for estimating this kernel from tractography fiber models. Next, we formulate the kernel registration problem, and present a means to non-linearly register two brains’ continuous connectivity profiles. We apply theoretical results from operator theory to develop an algorithm for decomposing the connectome into its shared and individual components. Lastly, we extend two discrete connectivity measures to the continuous case, and apply our framework to 98 Alzheimer’s patients and controls. Our measures show significant differences between the two groups. PMID:25320795
Jabbar, Ahmed Najah
2018-04-13
This letter suggests two new types of asymmetrical higher-order kernels (HOK) that are generated using the orthogonal polynomials Laguerre (positive or right skew) and Bessel (negative or left skew). These skewed HOK are implemented in the blind source separation/independent component analysis (BSS/ICA) algorithm. The tests for these proposed HOK are accomplished using three scenarios to simulate a real environment using actual sound sources, an environment of mixtures of multimodal fast-changing probability density function (pdf) sources that represent a challenge to the symmetrical HOK, and an environment of an adverse case (near gaussian). The separation is performed by minimizing the mutual information (MI) among the mixed sources. The performance of the skewed kernels is compared to the performance of the standard kernels such as Epanechnikov, bisquare, trisquare, and gaussian and the performance of the symmetrical HOK generated using the polynomials Chebyshev1, Chebyshev2, Gegenbauer, Jacobi, and Legendre to the tenth order. The gaussian HOK are generated using the Hermite polynomial and the Wand and Schucany procedure. The comparison among the 96 kernels is based on the average intersymbol interference ratio (AISIR) and the time needed to complete the separation. In terms of AISIR, the skewed kernels' performance is better than that of the standard kernels and rivals most of the symmetrical kernels' performance. The importance of these new skewed HOK is manifested in the environment of the multimodal pdf mixtures. In such an environment, the skewed HOK come in first place compared with the symmetrical HOK. These new families can substitute for symmetrical HOKs in such applications.
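For reference, the symmetric Gaussian HOK that the letter benchmarks against can be generated from Hermite polynomials via the Wand and Schucany construction; the fourth-order member is K4(u) = (3 - u^2) phi(u) / 2, which integrates to one while its second moment vanishes:

```python
import numpy as np

def gaussian_hok4(u):
    """Fourth-order Gaussian higher-order kernel (the letter's skewed
    Laguerre/Bessel kernels are new and not reproduced here)."""
    phi = np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)
    return 0.5 * (3 - u ** 2) * phi

u = np.linspace(-8, 8, 20001)
K = gaussian_hok4(u)
print(np.trapz(K, u))            # ~ 1  (zeroth moment)
print(np.trapz(u ** 2 * K, u))   # ~ 0  (vanishing second moment)
```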
Mutual information estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
For the automated, objective and joint analysis of time series, similarity measures are crucial. Used in the analysis of climate records, they allow for a complementary, unbiased view onto sparse datasets. The irregular sampling of many of these time series, however, makes it necessary to either perform signal reconstruction (e.g. interpolation) or to develop and use adapted measures. Standard linear interpolation comes with an inevitable loss of information and bias effects. We have recently developed a Gaussian kernel-based correlation algorithm with which the interpolation error can be substantially lowered, but this would not work should the functional relationship in a bivariate setting be non-linear. We therefore propose an algorithm to estimate lagged auto and cross mutual information from irregularly sampled time series. We have extended the standard and adaptive binning histogram estimators and use Gaussian distributed weights in the estimation of the (joint) probabilities. To test our method we have simulated linear and nonlinear auto-regressive processes with Gamma-distributed inter-sampling intervals. We have then performed a sensitivity analysis for the estimation of actual coupling length, the lag of coupling and the decorrelation time in the synthetic time series and contrast our results to the performance of a signal reconstruction scheme. Finally, we applied our estimator to speleothem records. We compare the estimated memory (or decorrelation time) to that from a least-squares estimator based on fitting an auto-regressive process of order 1. The calculated (cross) mutual information results are compared for the different estimators (standard or adaptive binning) and contrasted with results from signal reconstruction. We find that the kernel-based estimator has a significantly lower root mean square error and less systematic sampling bias than the interpolation-based method. It is possible that these encouraging results could be further improved by using non-histogram mutual information estimators, like k-Nearest Neighbor or Kernel-Density estimators, but for short (<1000 points) and irregularly sampled datasets the proposed algorithm is already a great improvement.
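The kernel-based idea weights all sample pairs by how close their time separation is to the requested lag; a minimal sketch for the lagged cross-correlation (the mutual information version replaces products with Gaussian-weighted joint-probability estimates):

```python
import numpy as np

def gaussian_kernel_correlation(tx, x, ty, y, lag, h):
    """Lagged correlation for irregularly sampled series: pairs are
    weighted by a Gaussian kernel on the mismatch between their time
    separation and the requested lag, after standardizing both series.
    h is the kernel bandwidth (e.g., of order the mean sampling step)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    dt = ty[None, :] - tx[:, None]          # all pairwise separations
    w = np.exp(-0.5 * ((dt - lag) / h) ** 2)
    return np.sum(w * x[:, None] * y[None, :]) / np.sum(w)
```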
Méndez, Nelson; Oviedo-Pastrana, Misael; Mattar, Salim; Caicedo-Castro, Isaac; Arrieta, German
2017-01-01
The Zika virus disease (ZVD) has had a huge impact on public health in Colombia, both for the number of people affected and for the Guillain-Barre syndrome (GBS) and microcephaly cases associated with ZVD. A retrospective descriptive study was carried out; we analyzed the epidemiological situation of ZVD and its association with microcephaly and GBS during a 21-month period, from October 2015 to June 2017. The variables studied were: (i) ZVD cases, (ii) ZVD cases in pregnant women, (iii) laboratory-confirmed ZVD in pregnant women, (iv) ZVD cases associated with microcephaly, (v) laboratory-confirmed ZVD associated with microcephaly, and (vi) ZVD-associated GBS cases. Average numbers of cases, attack rates (AR), and proportions were also calculated. The studied variables were plotted by epidemiological weeks and months. The distribution of ZVD cases in Colombia was mapped across time using a kernel density estimator and QGIS software; we adopted kernel ridge regression (KRR) with the Gaussian kernel to estimate the number of Guillain-Barre cases given the number of ZVD cases. A total of 108,087 ZVD cases had been reported in Colombia, including 19,963 (18.5%) in pregnant women, 710 (0.66%) associated with microcephaly (AR, 4.87 cases per 10,000 live births), and 453 (0.42%) ZVD-associated GBS cases (AR, 41.9 GBS cases per 10,000 ZVD cases). Cases of GBS appear to have increased in parallel with the cases of ZVD, while cases of microcephaly appeared 5 months after recognition of the outbreak. The kernel density map shows that throughout the study period, the states most affected by the Zika outbreak in Colombia were mainly the San Andrés and Providencia islands, Casanare, Norte de Santander, Arauca, and Huila. The KRR shows that there is no proportional relationship between the numbers of GBS and ZVD cases. During cross-validation, the RMSE achieved for the second-order polynomial kernel, the linear kernel, the sigmoid kernel, and the Gaussian kernel was 9.15, 9.2, 10.7, and 7.2, respectively. This study updates the epidemiological analysis of the ZVD situation in Colombia, describes the geographical distribution of ZVD, and shows the functional relationship between ZVD cases and GBS.
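The KRR step is a one-liner with standard tooling; a sketch with synthetic stand-in counts (hyperparameters are placeholders, not the study's values):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

# Stand-in data: weekly ZVD counts as the predictor, GBS counts as the
# response; the nonlinear link is illustrative only.
rng = np.random.default_rng(4)
zvd = rng.poisson(1000, size=90).astype(float).reshape(-1, 1)
gbs = 4 + 0.5 * np.sqrt(zvd.ravel()) + rng.normal(0, 2, 90)

krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-5)
scores = cross_val_score(krr, zvd, gbs,
                         scoring="neg_root_mean_squared_error", cv=5)
print("CV RMSE:", -scores.mean())
```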
Buck, Christoph; Kneib, Thomas; Tkaczick, Tobias; Konstabel, Kenn; Pigeot, Iris
2015-12-22
Built environment studies provide broad evidence that urban characteristics influence physical activity (PA). However, findings are still difficult to compare, due to inconsistent measures assessing urban point characteristics and varying definitions of spatial scale. Both were found to influence the strength of the association between the built environment and PA. We simultaneously evaluated the effect of kernel approaches and network-distances to investigate the association between urban characteristics and physical activity depending on spatial scale and intensity measure. We assessed urban measures of point characteristics such as intersections, public transit stations, and public open spaces in ego-centered network-dependent neighborhoods based on geographical data of one German study region of the IDEFICS study. We calculated point intensities using the simple intensity and kernel approaches based on fixed bandwidths, cross-validated bandwidths including isotropic and anisotropic kernel functions, and adaptive bandwidths that adjust for residential density. We distinguished six network-distances from 500 m up to 2 km to calculate each intensity measure. A log-gamma regression model was used to investigate the effect of each urban measure on the moderate-to-vigorous physical activity (MVPA) of 400 2- to 9.9-year-old children who participated in the IDEFICS study. Models were stratified by sex and age group, i.e. pre-school children (2 to <6 years) and school children (6-9.9 years), and were adjusted for age, body mass index (BMI), education and safety concerns of parents, season, and valid wear time of accelerometers. The association between intensity measures and MVPA differed strongly by network-distance, with stronger effects found for larger network-distances. The simple intensity measure yielded smaller effect estimates and poorer goodness-of-fit compared to the kernel approaches. The smallest variation in effect estimates over network-distances was found for kernel intensity measures based on isotropic and anisotropic cross-validated bandwidth selection. We found a strong variation in the association between the built environment and children's PA depending on the choice of intensity measure and network-distance. Kernel intensity measures provided stable results over various scales and improved the assessment compared to the simple intensity measure. Considering different spatial scales and kernel intensity methods might reduce methodological limitations in assessing opportunities for PA in the built environment.
Improving Genomic Prediction in Cassava Field Experiments Using Spatial Analysis.
Elias, Ani A; Rabbi, Ismail; Kulakow, Peter; Jannink, Jean-Luc
2018-01-04
Cassava (Manihot esculenta Crantz) is an important staple food in sub-Saharan Africa. Breeding experiments in cassava were conducted at the International Institute of Tropical Agriculture to select elite parents. Taking into account the heterogeneity in the field while evaluating these trials can increase the accuracy in estimation of breeding values. We used an exploratory approach using the parametric spatial kernels Power, Spherical, and Gaussian to determine the best kernel for a given scenario. The spatial kernel was fit simultaneously with a genomic kernel in a genomic selection model. Predictability of these models was tested through a 10-fold cross-validation method repeated five times. The best model was chosen as the one with the lowest prediction root mean squared error compared to that of the base model having no spatial kernel. Results from our real and simulated data studies indicated that predictability can be increased by accounting for spatial variation irrespective of the heritability of the trait. In real data scenarios we observed that the accuracy can be increased by a median value of 3.4%. Through simulations, we showed that a 21% increase in accuracy can be achieved. We also found that Range (row) directional spatial kernels, mostly Gaussian, explained the spatial variance in 71% of the scenarios when spatial correlation was significant. Copyright © 2018 Elias et al.
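A stripped-down sketch of the idea, with a Gaussian spatial kernel added to a genomic kernel in a kernel-ridge-type predictor (the paper fits variance components in a mixed model and also evaluates Power and Spherical spatial kernels; the weights here are placeholders):

```python
import numpy as np

def spatial_genomic_fit(K_g, coords, y, range_param=5.0,
                        w_g=1.0, w_s=1.0):
    """Fitted values from an additive genomic + spatial kernel model.
    K_g: (n, n) genomic kernel; coords: (n, 2) field row/column
    positions; range_param controls the spatial correlation length."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    K_s = np.exp(-d2 / (2 * range_param ** 2))   # Gaussian spatial kernel
    K = w_g * K_g + w_s * K_s
    n = len(y)
    alpha = np.linalg.solve(K + np.eye(n), y - y.mean())
    return y.mean() + K @ alpha
```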
Rogel-Castillo, Cristian; Zuskov, David; Chan, Bronte Lee; Lee, Jihyun; Huang, Guangwei; Mitchell, Alyson E
2015-09-23
Concealed damage (CD) is a brown discoloration of nutmeat that appears only after kernels are treated with moderate heat (e.g., roasting). Identifying factors that promote CD in almonds is of significant interest to the nut industry. Herein, the effect of temperature (35 and 45 °C) and moisture (<5, 8, and 11%) on the composition of volatiles in raw almonds (Prunus dulcis var. Nonpareil) was studied using HS-SPME-GC/MS. A CIE LCh colorimetric method was developed to identify raw almonds with CD. A significant increase in CD was demonstrated in almonds exposed to moisture (8% kernel moisture content) at 45 °C as compared to 35 °C. Elevated levels of volatiles related to lipid peroxidation and amino acid degradation were observed in almonds with CD. These results suggest that postharvest moisture exposure resulting in an internal kernel moisture ≥ 8% is a key factor in the development of CD in raw almonds and that CD is accelerated by temperature.
A Comparison of Strategies for Estimating Conditional DIF
ERIC Educational Resources Information Center
Moses, Tim; Miao, Jing; Dorans, Neil J.
2010-01-01
In this study, the accuracies of four strategies were compared for estimating conditional differential item functioning (DIF), including raw data, logistic regression, log-linear models, and kernel smoothing. Real data simulations were used to evaluate the estimation strategies across six items, DIF and No DIF situations, and four sample size…
Corn fiber gum and milk protein conjugates with improved emulsion stability
USDA-ARS?s Scientific Manuscript database
Corn fiber gum (CFG), an alkaline hydrogen peroxide extract of the corn kernel milling by-product “corn fiber” was covalently conjugated with Beta-lactoglobulin (Beta-LG) and whey protein isolate (WPI). Covalent coupling of CFG to protein was achieved by dry heating reaction (Maillard-type) of CFG ...
Fourier's law of heat conduction: quantum mechanical master equation analysis.
Wu, Lian-Ao; Segal, Dvira
2008-06-01
We derive the macroscopic Fourier's Law of heat conduction from the exact gain-loss time convolutionless quantum master equation under three assumptions for the interaction kernel. To second order in the interaction, we show that the first two assumptions are natural results of the long time limit. The third assumption can be satisfied by a family of interactions consisting of an exchange effect. The pure exchange model directly leads to energy diffusion in a weakly coupled spin-1/2 chain.
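For reference, the macroscopic law in question relates the heat current to the local temperature gradient,

```latex
J = -\kappa\, \nabla T ,
```

with κ the thermal conductivity; recovering this linear gradient dependence from microscopic quantum dynamics is the nontrivial step the master-equation analysis addresses.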
Efficient Strategies for Estimating the Spatial Coherence of Backscatter
Hyun, Dongwoon; Crowley, Anna Lisa C.; Dahl, Jeremy J.
2017-01-01
The spatial coherence of ultrasound backscatter has been proposed to reduce clutter in medical imaging, to measure the anisotropy of the scattering source, and to improve the detection of blood flow. These techniques rely on correlation estimates that are obtained using computationally expensive strategies. In this study, we assess existing spatial coherence estimation methods and propose three computationally efficient modifications: a reduced kernel, a downsampled receive aperture, and the use of an ensemble correlation coefficient. The proposed methods are implemented in simulation and in vivo studies. Reducing the kernel to a single sample improved both computational throughput and axial resolution. Downsampling the receive aperture was found to have a negligible effect on estimator variance, and improved computational throughput by an order of magnitude for a downsample factor of 4. The ensemble correlation estimator demonstrated lower variance than the currently used average correlation. Combining the three methods improved throughput 105-fold in simulation with a downsample factor of 4 and 20-fold in vivo with a downsample factor of 2. PMID:27913342
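One common pooled definition of the ensemble correlation (the paper's exact estimator may differ in detail): sum the numerators and signal energies over realizations before dividing, which lowers variance relative to averaging per-realization coefficients.

```python
import numpy as np

def ensemble_correlation(a, b):
    """Ensemble correlation coefficient for zero-mean signal segments.
    a, b: arrays of shape (n_realizations, n_samples)."""
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))

# Compare against the average of per-realization correlations:
rng = np.random.default_rng(5)
sig = rng.normal(size=(50, 16))
a = sig + 0.5 * rng.normal(size=sig.shape)
b = sig + 0.5 * rng.normal(size=sig.shape)
avg = np.mean([np.corrcoef(ai, bi)[0, 1] for ai, bi in zip(a, b)])
print(ensemble_correlation(a, b), avg)
```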
Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.
Echinaka, Yuki; Ozeki, Yukiyasu
2016-10-01
The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Applying this method, we introduce the bootstrap method and propose a numerical discrimination for the transition type.
A dual-input nonlinear system analysis of autonomic modulation of heart rate
NASA Technical Reports Server (NTRS)
Chon, K. H.; Mullen, T. J.; Cohen, R. J.
1996-01-01
Linear analyses of fluctuations in heart rate and other hemodynamic variables have been used to elucidate cardiovascular regulatory mechanisms. The role of nonlinear contributions to fluctuations in hemodynamic variables has not been fully explored. This paper presents a nonlinear system analysis of the effect of fluctuations in instantaneous lung volume (ILV) and arterial blood pressure (ABP) on heart rate (HR) fluctuations. To successfully employ a nonlinear analysis based on the Laguerre expansion technique (LET), we introduce an efficient procedure for broadening the spectral content of the ILV and ABP inputs to the model by adding white noise. Results from computer simulations demonstrate the effectiveness of broadening the spectral band of input signals to obtain consistent and stable kernel estimates with the use of the LET. Without broadening the band of the ILV and ABP inputs, the LET did not provide stable kernel estimates. Moreover, we extend the LET to the case of multiple inputs in order to accommodate the analysis of the combined effect of ILV and ABP on heart rate. Analyses of data based on the second-order Volterra-Wiener model reveal an important contribution of the second-order kernels to the description of the effect of lung volume and arterial blood pressure on heart rate. Furthermore, physiological effects of the autonomic blocking agents propranolol and atropine on changes in the first- and second-order kernels are also discussed.
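A sketch of the expansion idea using continuous-time Laguerre functions, which are orthonormal on [0, inf) (the LET itself uses a discrete-time Laguerre basis): the first-order kernel is estimated by regressing the output on basis-filtered inputs.

```python
import numpy as np
from numpy.polynomial import laguerre

def laguerre_functions(t, n_basis, a=1.0):
    """phi_j(t) = sqrt(2a) e^{-a t} L_j(2 a t), orthonormal on [0, inf)."""
    B = np.empty((len(t), n_basis))
    for j in range(n_basis):
        c = np.zeros(j + 1)
        c[j] = 1.0
        B[:, j] = (np.sqrt(2 * a) * np.exp(-a * t)
                   * laguerre.lagval(2 * a * t, c))
    return B

def fit_first_order_kernel(u, y, dt, n_basis=8, a=0.5, mem=200):
    """First-order Volterra kernel via a Laguerre expansion: regress the
    output y on the input u filtered by each basis function."""
    t = np.arange(mem) * dt
    B = laguerre_functions(t, n_basis, a)
    V = np.column_stack([np.convolve(u, B[:, j] * dt)[:len(u)]
                         for j in range(n_basis)])
    coef, *_ = np.linalg.lstsq(V, y, rcond=None)
    return B @ coef          # kernel samples on the lag grid t
```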
Dispersal of Engineered Male Aedes aegypti Mosquitoes.
Winskill, Peter; Carvalho, Danilo O; Capurro, Margareth L; Alphey, Luke; Donnelly, Christl A; McKemey, Andrew R
2015-11-01
Aedes aegypti, the principal vector of dengue fever, have been genetically engineered for use in a sterile insect control programme. To improve our understanding of the dispersal ecology of mosquitoes and to inform appropriate release strategies of 'genetically sterile' male Aedes aegypti, detailed knowledge of the dispersal ability of the released insects is needed. The dispersal ability of released 'genetically sterile' male Aedes aegypti at a field site in Brazil has been estimated. Dispersal kernels embedded within a generalized linear model framework were used to analyse data collected from three large scale mark release recapture studies. The methodology has been applied to previously published dispersal data to compare the dispersal ability of 'genetically sterile' male Aedes aegypti in contrasting environments. We parameterised dispersal kernels and estimated the mean distance travelled for insects in Brazil: 52.8 m (95% CI: 49.9 m, 56.8 m) and Malaysia: 58.0 m (95% CI: 51.1 m, 71.0 m). Our results provide specific, detailed estimates of the dispersal characteristics of released 'genetically sterile' male Aedes aegypti in the field. The comparative analysis indicates that despite differing environments and recapture rates, key features of the insects' dispersal kernels are conserved across the two studies. The results can be used to inform both risk assessments and release programmes using 'genetically sterile' male Aedes aegypti.
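Under a 2D exponential dispersal kernel, the recapture-distance density is p(r) = r exp(-r/beta) / beta^2, whose MLE is simply beta_hat = mean(r)/2, giving a mean dispersal distance of 2*beta_hat. The sketch below ignores trap layout and detection bias, which the paper's GLM framework accounts for; beta_true is chosen only to echo the Brazil estimate.

```python
import numpy as np

rng = np.random.default_rng(6)
beta_true = 26.4                     # metres (illustrative)
# p(r) = r exp(-r/beta) / beta^2 is a Gamma(shape=2, scale=beta) density.
r = rng.gamma(shape=2.0, scale=beta_true, size=500)   # recapture distances

beta_hat = r.mean() / 2.0            # closed-form MLE for the kernel scale
print(f"mean distance travelled: {2 * beta_hat:.1f} m")
```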
2010-01-01
Background: Discovering novel disease genes is still challenging for diseases for which no prior knowledge - such as known disease genes or disease-related pathways - is available. Performing genetic studies frequently results in large lists of candidate genes of which only few can be followed up for further investigation. We have recently developed a computational method for constitutional genetic disorders that identifies the most promising candidate genes by replacing prior knowledge by experimental data of differential gene expression between affected and healthy individuals. To improve the performance of our prioritization strategy, we have extended our previous work by applying different machine learning approaches that identify promising candidate genes by determining whether a gene is surrounded by highly differentially expressed genes in a functional association or protein-protein interaction network. Results: We have proposed three strategies scoring disease candidate genes relying on network-based machine learning approaches, such as kernel ridge regression, heat kernel, and Arnoldi kernel approximation. For comparison purposes, a local measure based on the expression of the direct neighbors is also computed. We have benchmarked these strategies on 40 publicly available knockout experiments in mice, and performance was assessed against results obtained using a standard procedure in genetics that ranks candidate genes based solely on their differential expression levels (Simple Expression Ranking). Our results showed that our four strategies could outperform this standard procedure and that the best results were obtained using the Heat Kernel Diffusion Ranking, leading to an average ranking position of 8 out of 100 genes, an AUC value of 92.3% and an error reduction of 52.8% relative to the standard procedure approach, which ranked the knockout gene on average at position 17 with an AUC value of 83.7%. Conclusion: In this study we could identify promising candidate genes using network-based machine learning approaches even if no knowledge is available about the disease or phenotype. PMID:20840752
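The best-performing strategy admits a compact sketch: diffuse differential-expression scores over the network with the graph heat kernel and rank genes by the diffused score (beta is a placeholder diffusion-time parameter):

```python
import numpy as np
from scipy.linalg import expm

def heat_kernel_ranking(adjacency, expression_scores, beta=0.5):
    """Rank candidate genes by diffusing differential-expression scores
    over the interaction network with the heat kernel H = exp(-beta*L)."""
    deg = np.diag(adjacency.sum(axis=1))
    L = deg - adjacency                  # combinatorial graph Laplacian
    H = expm(-beta * L)                  # graph heat kernel
    diffused = H @ expression_scores
    return np.argsort(diffused)[::-1]    # highest diffused score first
```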
Aflatoxin B1 levels in groundnut products from local markets in Zambia.
Njoroge, Samuel M C; Matumba, Limbikani; Kanenga, Kennedy; Siambi, Moses; Waliyar, Farid; Maruwo, Joseph; Machinjiri, Norah; Monyo, Emmanuel S
2017-05-01
In Zambia, groundnut products (milled groundnut powder, groundnut kernels) are mostly sold in under-regulated markets. Coupled with the lack of quality enforcement in such markets, consumers may be at risk to aflatoxin exposure. However, the level of aflatoxin contamination in these products is not known. Compared to groundnut kernels, milled groundnut powder obscures visual indicators of aflatoxin contamination in groundnuts such as moldiness, discoloration, insect damage or kernel damage. A survey was therefore conducted from 2012 to 2014, to estimate and compare aflatoxin levels in these products (n = 202), purchased from markets in important groundnut growing districts and in urban areas. Samples of whole groundnut kernels (n = 163) and milled groundnut powder (n = 39) were analysed for aflatoxin B1 (AFB1) by competitive enzyme-linked immunosorbent assay (cELISA). Results showed substantial AFB1 contamination levels in both types of groundnut products with maximum AFB1 levels of 11,100 μg/kg (groundnut kernels) and 3000 μg/kg (milled groundnut powder). However, paired t-test analysis showed that AFB1 contamination levels in milled groundnut powder were not always significantly higher (P > 0.05) than those in groundnut kernels. Even for products from the same vendor, AFB1 levels were not consistently higher in milled groundnut powder than in whole groundnut kernels. This suggests that vendors do not systematically sort out whole groundnut kernels of visually poor quality for milling. However, the overall contamination levels of groundnut products with AFB1 were found to be alarmingly high in all years and locations. Therefore, solutions are needed to reduce aflatoxin levels in such under-regulated markets.
Verma, Prashant; Doyley, Marvin M
2017-09-01
We derived the Cramér-Rao lower bound for 2-D estimators employed in quasi-static elastography. To illustrate the theory, we modeled the 2-D point spread function as a sinc-modulated sine pulse in the axial direction and as a sinc function in the lateral direction. We compared theoretical predictions of the variance incurred in displacements and strains when quasi-static elastography was performed under varying conditions (different scanning methods, different configurations of conventional linear array imaging, and different-size kernels) with those measured from simulated or experimentally acquired data. We performed studies to illustrate the application of the derived expressions when performing vascular elastography with plane wave and compounded plane wave imaging. Standard deviations in lateral displacements were an order of magnitude higher than those in axial displacements. Additionally, the derived expressions predicted that peak performance should occur when 2% strain is applied, the same order of magnitude as observed in simulations (1%) and experiments (1%-2%). We assessed how different configurations of conventional linear array imaging (number of active reception and transmission elements) influenced the quality of axial and lateral strain elastograms. The theoretical expressions predicted that 2-D echo tracking should be performed with wide kernels, but that the length of the kernels should be selected using knowledge of the magnitude of the applied strain: specifically, longer kernels for small strains (<5%) and shorter kernels for larger strains. Although the general trends of theoretical predictions and experimental observations were similar, biases incurred during beamforming and subsample displacement estimation produced noticeable differences. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
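A minimal sketch of the separable point-spread-function model named above: a sinc-modulated sine pulse axially and a sinc function laterally. The center frequency, envelope widths, and grid spacing are illustrative choices, not the paper's parameters.

```python
# Hedged sketch of the 2-D PSF used in the CRLB analysis; all values are
# illustrative assumptions.
import numpy as np

z = np.linspace(-2e-3, 2e-3, 401)            # axial coordinate (m)
x = np.linspace(-2e-3, 2e-3, 201)            # lateral coordinate (m)
f0, c = 5e6, 1540.0                          # center frequency (Hz), sound speed (m/s)
wavelength = c / f0

# axial: sinc envelope modulating a sine carrier (pulse-echo spatial period ~ lambda/2)
axial = np.sinc(z / (4 * wavelength)) * np.sin(2 * np.pi * z / (wavelength / 2))
lateral = np.sinc(x / (8 * wavelength))      # lateral: plain sinc profile
psf = np.outer(axial, lateral)               # separable 2-D PSF, shape (401, 201)
```

The much wider lateral profile relative to the oscillatory axial pulse is one way to see why lateral displacement variance is so much larger than axial in the reported results.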
Reservoir area of influence and implications for fisheries management
Martin, Dustin R.; Chizinski, Christopher J.; Pope, Kevin L.
2015-01-01
Understanding the spatial area that a reservoir draws anglers from, defined as the reservoir's area of influence, and the potential overlap of that area of influence between reservoirs is important for fishery managers. Our objective was to define the area of influence for reservoirs of the Salt Valley regional fishery in southeastern Nebraska using kernel density estimation. We used angler survey data obtained from in-person interviews at 17 reservoirs during 2009–2012. The area of influence, defined by the 95% kernel density, for reservoirs within the Salt Valley regional fishery varied, indicating that anglers use reservoirs differently across the regional fishery. Areas of influence reveal angler preferences in a regional context, indicating preferred reservoirs with a greater area of influence. Further, differences in areas of influences across time and among reservoirs can be used as an assessment following management changes on an individual reservoir or within a regional fishery. Kernel density estimation provided a clear method for creating spatial maps of areas of influence and provided a two-dimensional view of angler travel, as opposed to the traditional mean travel distance assessment.
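Below is a hedged sketch of a 95% kernel-density "area of influence": estimate a 2-D KDE from angler origin coordinates, then measure the area of the smallest density super-level set containing 95% of the probability mass. Coordinates and the evaluation grid are synthetic stand-ins for interview data.

```python
# Minimal sketch, assuming synthetic angler-origin coordinates on a km grid.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
xy = rng.normal(loc=[500.0, 300.0], scale=[40.0, 25.0], size=(200, 2)).T  # (2, n)

kde = gaussian_kde(xy)
grid_x, grid_y = np.mgrid[350:650:200j, 200:400:200j]
dens = kde(np.vstack([grid_x.ravel(), grid_y.ravel()]))

# 95% isopleth: smallest density level whose super-level set holds 95% of mass
order = np.sort(dens)[::-1]
cdf = np.cumsum(order) / order.sum()
level = order[np.searchsorted(cdf, 0.95)]
cell_area = (grid_x[1, 0] - grid_x[0, 0]) * (grid_y[0, 1] - grid_y[0, 0])
print("area of influence (km^2):", (dens >= level).sum() * cell_area)
```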
On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint
Zhang, Chong; Liu, Yufeng; Wu, Yichao
2015-01-01
For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in an RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance in certain situations, and comparable performance in other cases, relative to the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
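The sketch below is a generic stand-in for the idea, not the authors' exact estimator: minimize the pinball (quantile) loss of a kernel expansion f(x) = Σ_i α_i K(x_i, x) with an L1-style penalty whose proximal step soft-thresholds the kernel coefficients, producing the sparse representation described above. Kernel choice, step size, and penalty weight are assumptions.

```python
# Hedged sketch: kernel quantile regression with thresholded coefficients.
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit(X, y, tau=0.5, lam=1e-3, lr=0.01, iters=2000):
    K = rbf_kernel(X, X)
    alpha = np.zeros(len(y))
    for _ in range(iters):
        r = y - K @ alpha
        # subgradient of the pinball loss w.r.t. f: tau if residual > 0 else tau - 1
        g = np.where(r > 0, tau, tau - 1.0)
        alpha += lr * (K.T @ g) / len(y)
        # proximal soft-thresholding enforces sparse kernel coefficients
        alpha = np.sign(alpha) * np.maximum(np.abs(alpha) - lr * lam, 0.0)
    return alpha

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(80, 1))
y = np.sin(2 * X[:, 0]) + rng.normal(scale=0.3, size=80)
alpha = fit(X, y, tau=0.9)                    # 90th conditional quantile
print("nonzero kernel coefficients:", np.count_nonzero(alpha))
```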
The formulation and estimation of a spatial skew-normal generalized ordered-response model.
DOT National Transportation Integrated Search
2016-06-01
This paper proposes a new spatial generalized ordered response model with skew-normal kernel error terms and an : associated estimation method. It contributes to the spatial analysis field by allowing a flexible and parametric skew-normal : distribut...
7 CFR 868.302 - Definition of other terms.
Code of Federal Regulations, 2012 CFR
2012-01-01
... discolored or damaged by water, insects, heat, or any other means, and parboiled kernels in nonparboiled rice... red bran. (l) Seeds. Whole or broken seeds of any plant other than rice. (m) Types of rice. There are... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Milled Rice Terms Defined § 868.302...
7 CFR 868.302 - Definition of other terms.
Code of Federal Regulations, 2014 CFR
2014-01-01
... discolored or damaged by water, insects, heat, or any other means, and parboiled kernels in nonparboiled rice... red bran. (l) Seeds. Whole or broken seeds of any plant other than rice. (m) Types of rice. There are... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Milled Rice Terms Defined § 868.302...
7 CFR 868.302 - Definition of other terms.
Code of Federal Regulations, 2013 CFR
2013-01-01
... discolored or damaged by water, insects, heat, or any other means, and parboiled kernels in nonparboiled rice... red bran. (l) Seeds. Whole or broken seeds of any plant other than rice. (m) Types of rice. There are... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Milled Rice Terms Defined § 868.302...
Rogel-Castillo, Cristian; Boulton, Roger; Opastpongkarn, Arunwong; Huang, Guangwei; Mitchell, Alyson E
2016-07-27
Concealed damage (CD) is defined as a brown discoloration of the kernel interior (nutmeat) that appears only after moderate to high heat treatment (e.g., blanching, drying, roasting, etc.). Raw almonds with CD have no visible defects before heat treatment. Currently, there are no screening methods available for detecting CD in raw almonds. Herein, the feasibility of using near-infrared (NIR) spectroscopy between 1125 and 2153 nm for the detection of CD in almonds is demonstrated. Almond kernels with CD have less NIR absorbance in the regions related to oil, protein, and carbohydrates. With the use of partial least squares discriminant analysis (PLS-DA) and selection of specific wavelengths, three classification models were developed. The calibration models have false-positive and false-negative error rates ranging between 12.4 and 16.1% and between 10.6 and 17.2%, respectively. The percent error rates ranged between 8.2 and 9.2%. Second-derivative preprocessing of the selected wavelengths resulted in the most robust predictive model.
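The PLS-DA screening step can be sketched with scikit-learn: regress a 0/1 damage label on the spectra with PLS and threshold the prediction. The spectra, wavelength count, and class signal below are synthetic placeholders, not the paper's data.

```python
# Hedged sketch of PLS-DA for concealed-damage screening on NIR spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, p = 120, 200                     # kernels x wavelengths (1125-2153 nm range)
y = rng.integers(0, 2, size=n)      # 1 = concealed damage, 0 = sound (toy labels)
X = rng.normal(size=(n, p)) + 0.5 * y[:, None] * np.linspace(0, 1, p)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5).fit(Xtr, ytr)
pred = (pls.predict(Xte).ravel() > 0.5).astype(int)   # discriminant threshold

false_pos = np.mean(pred[yte == 0] == 1)
false_neg = np.mean(pred[yte == 1] == 0)
print(f"false-positive rate {false_pos:.2f}, false-negative rate {false_neg:.2f}")
```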
Preparation of UC0.07-0.10N0.90-0.93 spheres for TRISO coated fuel particles
NASA Astrophysics Data System (ADS)
Hunt, R. D.; Silva, C. M.; Lindemer, T. B.; Johnson, J. A.; Collins, J. L.
2014-05-01
The US Department of Energy is considering a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with dense uranium nitride (UN) kernels with diameters of 650 or 800 μm. The objectives of this effort are to make uranium oxide microspheres with adequately dispersed carbon nanoparticles and to convert these microspheres into UN spheres, which could then be sintered into kernels. Recent improvements to the internal gelation process were successfully applied to the production of uranium gel spheres with different concentrations of carbon black. After the spheres were washed and dried, a simple two-step heat profile was used to produce porous microspheres with a chemical composition of UC0.07-0.10N0.90-0.93. The first step involved heating the microspheres to 2023 K in a vacuum, and in the second step, the microspheres were held at 1873 K for 6 h in flowing nitrogen.
Determining the minimum required uranium carbide content for HTGR UCO fuel kernels
McMurray, Jacob W.; Lindemer, Terrence B.; Brown, Nicholas R.; ...
2017-03-10
Three important failure mechanisms must be controlled in high-temperature gas-cooled reactor (HTGR) fuel for certain higher-burnup applications: SiC layer rupture, SiC corrosion by CO, and coating compromise from kernel migration. All are related to high CO pressures stemming from the free O generated when uranium present as UO2 fissions and the O is not subsequently bound by other elements. In the HTGR UCO kernel design, CO buildup from excess O is controlled by including additional uranium in the form of a carbide, UCx. An approach for determining the minimum UCx content that ensures negligible CO formation was developed and demonstrated using CALPHAD models and the Serpent 2 reactor physics and depletion analysis tool. Our results are intended to be more accurate than previous estimates by including more nuclear and chemical factors, in particular the effect of transmutation products on the oxygen distribution as the fuel kernel composition evolves with burnup.
A Nonrigid Kernel-Based Framework for 2D-3D Pose Estimation and 2D Image Segmentation
Sandhu, Romeil; Dambreville, Samuel; Yezzi, Anthony; Tannenbaum, Allen
2013-01-01
In this work, we present a nonrigid approach to jointly solving the tasks of 2D-3D pose estimation and 2D image segmentation. In general, most frameworks that couple both pose estimation and segmentation assume that one has exact knowledge of the 3D object. However, under nonideal conditions, this assumption may be violated if only a general class to which a given shape belongs is given (e.g., cars, boats, or planes). Thus, we propose to solve the 2D-3D pose estimation and 2D image segmentation via nonlinear manifold learning of 3D embedded shapes for a general class of objects or deformations for which one may not be able to associate a skeleton model. Thus, the novelty of our method is threefold: First, we present and derive a gradient flow for the task of nonrigid pose estimation and segmentation. Second, due to the possible nonlinear structures of one’s training set, we evolve the preimage obtained through kernel PCA for the task of shape analysis. Third, we show that the derivation for shape weights is general. This allows us to use various kernels, as well as other statistical learning methodologies, with only minimal changes needing to be made to the overall shape evolution scheme. In contrast with other techniques, we approach the nonrigid problem, which is an infinite-dimensional task, with a finite-dimensional optimization scheme. More importantly, we do not explicitly need to know the interaction between various shapes such as that needed for skeleton models as this is done implicitly through shape learning. We provide experimental results on several challenging pose estimation and segmentation scenarios. PMID:20733218
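The kernel-PCA preimage step mentioned above can be sketched with scikit-learn: project shape vectors onto kernel principal components and map back to input space with a learned inverse transform. The feature vectors, kernel bandwidth, and component count below are illustrative assumptions, not the paper's shape representation.

```python
# Hedged sketch of kernel-PCA projection and preimage recovery.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(3)
shapes = rng.normal(size=(100, 64))           # stand-in for embedded shape vectors

kpca = KernelPCA(n_components=8, kernel="rbf", gamma=0.05,
                 fit_inverse_transform=True)  # enables approximate preimages
codes = kpca.fit_transform(shapes)            # coordinates in kernel feature space
preimages = kpca.inverse_transform(codes)     # approximate preimages in input space
print("preimage reconstruction MSE:", np.mean((shapes - preimages) ** 2))
```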
LoCoH: Non-parametric kernel methods for constructing home ranges and utilization distributions
Getz, Wayne M.; Fortmann-Roe, Scott; Cross, Paul C.; Lyons, Andrew J.; Ryan, Sadie J.; Wilmers, Christopher C.
2007-01-01
Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: "fixed sphere-of-influence", or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and "adaptive sphere-of-influence", or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original "fixed-number-of-points", or k-LoCoH (all kernels constructed from the k − 1 nearest neighbors of root points). We also compare these nonparametric LoCoH methods to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).
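A hedged sketch of the k-LoCoH construction: build a local convex hull from each point and its k − 1 nearest neighbors, then union the hulls from smallest to largest to form isopleths. For brevity this toy unions the smallest 95% of hulls, whereas the actual method accumulates hulls until 95% of points are covered; data and k are synthetic choices.

```python
# Minimal k-LoCoH sketch, assuming scipy and shapely are available.
import numpy as np
from scipy.spatial import ConvexHull, cKDTree
from shapely.geometry import Polygon
from shapely.ops import unary_union

rng = np.random.default_rng(4)
pts = rng.normal(size=(150, 2))              # animal relocations
k = 10

tree = cKDTree(pts)
hulls = []
for p in pts:
    _, idx = tree.query(p, k=k)              # root point plus its k-1 neighbors
    local = pts[idx]
    hull = ConvexHull(local)                 # local convex hull around the root
    hulls.append(Polygon(local[hull.vertices]))

hulls.sort(key=lambda h: h.area)             # smallest hulls = densely used areas
isopleth_95 = unary_union(hulls[: int(0.95 * len(hulls))])
print("approximate 95% isopleth area:", isopleth_95.area)
```

Because each hull is bounded by observed points, the union naturally respects hard boundaries such as rivers, which is the property the abstract emphasizes.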
Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use
Arthur, Steve M.; Schwartz, Charles C.
1999-01-01
We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (x̄ = 224) for radiotracking data and 16-130 km2 (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
de Oliveira, R L; de Carvalho, G G P; Oliveira, R L; Tosto, M S L; Santos, E M; Ribeiro, R D X; Silva, T M; Correia, B R; de Rufino, L M A
2017-10-01
The objective of this study was to evaluate the effects of the inclusion of palm kernel (Elaeis guineensis) cake in diets for goats on feeding behaviors, rectal temperature, and cardiac and respiratory frequencies. Forty crossbred, non-castrated male Boer goats (ten animals per treatment), with an average age of 90 days and an initial body weight of 15.01 ± 1.76 kg, were used. The goats were fed Tifton 85 (Cynodon spp.) hay and palm kernel cake supplemented at rates of 0, 7, 14, and 21% of dry matter (DM). The feeding behaviors (rumination, feeding, and idling times) were observed for three 24-h periods. DM and neutral detergent fiber (NDF) intake values were estimated as the difference between the total DM and NDF contents of the feed offered and the total DM and NDF contents of the orts. There was no effect of palm kernel cake inclusion in goat diets on DM intake (P > 0.05). However, palm kernel cake promoted a linear increase (P < 0.05) in NDF intake and time spent feeding and ruminating (min/day; %; period) and a linear decrease in time spent idling. Palm kernel cake had no effect (P > 0.05) on chewing, feeding, or rumination efficiency (DM and NDF) or on physiological variables. The use of up to 21% palm kernel cake in the diet of crossbred Boer goats maintained the feeding behaviors and did not change the physiological parameters of the goats; therefore, its use is recommended in the diet of these animals.
Jung, Jooyeoun; Wang, Wenjie; McGorrin, Robert J; Zhao, Yanyun
2018-02-01
Moisture adsorption isotherms and storability of dried hazelnut inshells and kernels produced in Oregon were evaluated and compared among cultivars, including Barcelona, Yamhill, and Jefferson. Experimental moisture adsorption data were fitted to the Guggenheim-Anderson-de Boer (GAB) model, showing less hygroscopic behavior in Yamhill than in the other cultivars for both inshells and kernels, due to a lower content of carbohydrate and protein but a higher content of fat. The safe levels of moisture content (MC, dry basis) of dried inshells and kernels for reaching kernel water activity (aw) ≤ 0.65 were estimated using the GAB model as 11.3% and 5.0% for Barcelona, 9.4% and 4.2% for Yamhill, and 10.7% and 4.9% for Jefferson, respectively. Storage conditions (2 °C at 85% to 95% relative humidity [RH], 10 °C at 65% to 75% RH, and 27 °C at 35% to 45% RH), times (0, 4, 8, or 12 mo), and packaging methods (atmospheric vs. vacuum) affected MC, aw, bioactive compounds, lipid oxidation, and enzyme activity of dried hazelnut inshells or kernels. For inshells packaged in woven polypropylene bags, MC and aw of inshells and kernels (inside shells) increased at 2 and 10 °C, but decreased at 27 °C during storage. For kernels, lipid oxidation and polyphenol oxidase activity also increased with extended storage time (P < 0.05), and MC and aw of vacuum-packaged samples were more stable during storage than those of atmospherically packaged ones. Principal component analysis showed correlation of kernel qualities with storage condition, time, and packaging method. This study demonstrated that the ideal storage condition or packaging method varied among cultivars due to their different moisture adsorption and physicochemical and enzymatic stability during storage. The moisture adsorption isotherm of hazelnut inshells and kernels is useful for predicting the storability of nuts. This study found that water adsorption and storability varied among the different cultivars of nuts, in which Yamhill was less hygroscopic than Barcelona and Jefferson, and thus more stable during storage. For ensuring food safety and quality of nuts during storage, each cultivar of kernels should be dried to a certain level of MC. Lipid oxidation and enzyme activity of kernels can increase with extended storage time. Vacuum packaging is recommended for kernels to reduce moisture adsorption during storage. © 2018 Institute of Food Technologists®.
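The GAB fitting step generalizes well to a short sketch: fit M = M0·C·K·aw / ((1 − K·aw)(1 − K·aw + C·K·aw)) to adsorption data with nonlinear least squares, then read off the safe moisture content at aw = 0.65. The data points and starting values below are illustrative, not the paper's measurements.

```python
# Hedged sketch: fitting the GAB isotherm to moisture-adsorption data.
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, M0, C, K):
    x = K * aw
    return M0 * C * x / ((1 - x) * (1 - x + C * x))

aw = np.array([0.11, 0.23, 0.33, 0.44, 0.53, 0.65, 0.75])
M = np.array([1.2, 1.9, 2.4, 3.0, 3.6, 4.8, 6.3])   # moisture content, % dry basis

(M0, C, K), _ = curve_fit(gab, aw, M, p0=[2.0, 10.0, 0.8])
print(f"monolayer M0 = {M0:.2f}%, C = {C:.1f}, K = {K:.2f}")
print("predicted safe MC at aw = 0.65:", round(gab(0.65, M0, C, K), 2), "%")
```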
NASA Astrophysics Data System (ADS)
Dai, Jun; Zhou, Haigang; Zhao, Shaoquan
2017-01-01
This paper considers a multi-scale futures hedging strategy that minimizes lower partial moments (LPM). To do this, wavelet analysis is adopted to decompose time series data into different components. Next, different parametric estimation methods with known distributions are applied to calculate the LPM of hedged portfolios, which is the key to determining multi-scale hedge ratios over different time scales. These parametric methods are then compared with the prevailing nonparametric kernel metric method. Empirical results indicate that in the China Securities Index 300 (CSI 300) index futures and spot markets, hedge ratios and hedge efficiency estimated by the nonparametric kernel metric method are inferior to those estimated by parametric hedging models based on the features of the sequence distributions. In addition, if minimum-LPM is selected as the hedge target, the hedging period, degree of risk aversion, and target return can each affect the multi-scale hedge ratios and hedge efficiency.
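A hedged sketch of the multi-scale minimum-LPM idea: decompose spot and futures returns by wavelet scale with PyWavelets, then grid-search the hedge ratio that minimizes the lower partial moment at each scale. The wavelet, decomposition depth, LPM order, and synthetic returns are all illustrative assumptions.

```python
# Minimal sketch of a multi-scale minimum-LPM hedge, assuming pywt is installed.
import numpy as np
import pywt

def lpm(returns, target=0.0, order=2):
    # lower partial moment: mean of (shortfall below target) ** order
    shortfall = np.maximum(target - returns, 0.0)
    return np.mean(shortfall ** order)

rng = np.random.default_rng(5)
futures = rng.normal(0, 0.01, 512)
spot = 0.8 * futures + rng.normal(0, 0.005, 512)

for level, (s, f) in enumerate(zip(pywt.wavedec(spot, "db4", level=3),
                                   pywt.wavedec(futures, "db4", level=3))):
    ratios = np.linspace(0.0, 2.0, 201)
    best = min(ratios, key=lambda h: lpm(s - h * f))   # min-LPM hedge at this scale
    print(f"scale {level}: min-LPM hedge ratio = {best:.2f}")
```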
Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric, or kernel, estimation of item response curve (IRC) is a concern theoretically and operationally. Accuracy of this estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. In this study, we investigate…
NASA Astrophysics Data System (ADS)
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.
2017-11-01
There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, td. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, td0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1-D isotropic (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √td and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
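The time-partitioning idea can be illustrated for the simplest geometry, a 1-D slab of unit half-width with its boundary value stepped to 1: use the image-source (error-function) series for small td and the eigenfunction (exponential) series for large td, switching at td0. The term counts and the switchover value below are illustrative choices, not the paper's optimized ones.

```python
# Hedged sketch of the combined early/late-time slab solution.
import numpy as np
from scipy.special import erfc

def slab_early(x, td, terms=4):
    # image-source (error-function) series; converges fast for small td
    s = 0.0
    for n in range(terms):
        s += (-1) ** n * (erfc((2 * n + 1 - x) / (2 * np.sqrt(td)))
                          + erfc((2 * n + 1 + x) / (2 * np.sqrt(td))))
    return s

def slab_late(x, td, terms=4):
    # eigenfunction (exponential) series; converges fast for large td
    s = 1.0
    for n in range(terms):
        lam = (n + 0.5) * np.pi
        s -= 2 * (-1) ** n / lam * np.cos(lam * x) * np.exp(-lam ** 2 * td)
    return s

td0 = 0.1  # illustrative switchover dimensionless time

def slab(x, td):
    return slab_early(x, td) if td < td0 else slab_late(x, td)

# the two branches agree to ~1e-3 near the switchover
print(slab_early(0.5, td0), slab_late(0.5, td0))
```

The paper's product rule for rectangular blocks then multiplies such 1-D slab solutions, one per direction, each with its own dimensionless time.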
ERIC Educational Resources Information Center
Moses, Tim; Miao, Jing; Dorans, Neil
2010-01-01
This study compared the accuracies of four differential item functioning (DIF) estimation methods, where each method makes use of only one of the following: raw data, logistic regression, loglinear models, or kernel smoothing. The major focus was on the estimation strategies' potential for estimating score-level, conditional DIF. A secondary focus…
Marmarelis, Vasilis Z.; Berger, Theodore W.
2009-01-01
Parametric and non-parametric modeling methods are combined to study the short-term plasticity (STP) of synapses in the central nervous system (CNS). The nonlinear dynamics of STP are modeled by means of: (1) previously proposed parametric models based on mechanistic hypotheses and/or specific dynamical processes, and (2) non-parametric models (in the form of Volterra kernels) that transform the presynaptic signals into postsynaptic signals. In order to use the two approaches synergistically, we estimate the Volterra kernels of the parametric models of STP for four types of synapses using synthetic broadband input-output data. Results show that the non-parametric models accurately and efficiently replicate the input-output transformations of the parametric models. Volterra kernels provide a general and quantitative representation of STP. PMID:18506609
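A hedged sketch of the non-parametric side: estimate discrete first- and second-order Volterra kernels from broadband input-output data by least squares on lagged inputs and their pairwise products. The memory length and the toy "synapse" below are illustrative assumptions, not the paper's models.

```python
# Minimal least-squares Volterra kernel estimation from white-noise data.
import numpy as np

rng = np.random.default_rng(6)
M, N = 5, 2000                                 # kernel memory, record length
x = rng.normal(size=N)                         # broadband (white) input

# toy nonlinear system standing in for a synapse model
y = np.convolve(x, [0.5, 0.3, 0.1], mode="full")[:N]
y = y + 0.2 * y ** 2 + rng.normal(scale=0.01, size=N)

# regression matrix of lagged inputs and their pairwise products
# (np.roll wraps around at the start; ignored for this sketch)
lags = np.stack([np.roll(x, k) for k in range(M)], axis=1)
pairs = np.stack([lags[:, i] * lags[:, j]
                  for i in range(M) for j in range(i, M)], axis=1)
Phi = np.hstack([np.ones((N, 1)), lags, pairs])

coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
k0, k1 = coef[0], coef[1:1 + M]                # zeroth- and first-order kernels
print("first-order kernel (≈ the linear filter):", np.round(k1, 3))
```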
Stochastic modeling of stock price process induced from the conjugate heat equation
NASA Astrophysics Data System (ADS)
Paeng, Seong-Hun
2015-02-01
Currency can be considered as a ruler for values of commodities. Then the price is the measured value by the ruler. We can suppose that inflation and variation of exchange rate are caused by variation of the scale of the ruler. In geometry, variation of the scale means that the metric is time-dependent. The conjugate heat equation is the modified heat equation which satisfies the heat conservation law for the time-dependent metric space. We propose a new model of stock prices by using the stochastic process whose transition probability is determined by the kernel of the conjugate heat equation. Our model of stock prices shows how the volatility term is affected by inflation and exchange rate. This model modifies the Black-Scholes equation in light of inflation and exchange rate.
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-08-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision-making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps, i.e. the spatial probability of a future vent opening given the past eruptive activity of a volcano. This challenging issue is generally tackled using probabilistic methods that calculate a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source Geographic Information System Quantum GIS, that is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the user to select an appropriate method for evaluating the bandwidth of the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with a Gaussian kernel. When different input datasets are available for the area, the total susceptibility map is obtained by assigning a different weight to each PDF; the PDFs are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is shown here through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
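Outside the plugin, the core workflow can be sketched generically: choose a KDE bandwidth by cross-validation, evaluate a Gaussian-kernel PDF per input dataset (e.g., past vents and fissures), and combine the PDFs with user-assigned weights. Data, grid, and weights below are synthetic assumptions.

```python
# Hedged sketch of weighted Gaussian-KDE susceptibility with CV bandwidth.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(7)
vents = rng.normal([0, 0], [1.0, 0.5], size=(60, 2))      # past vent locations
fissures = rng.normal([0.5, 0], [0.8, 0.8], size=(40, 2))

def pdf(data, grid):
    # pick the bandwidth by 5-fold cross-validated log-likelihood
    search = GridSearchCV(KernelDensity(kernel="gaussian"),
                          {"bandwidth": np.logspace(-1, 0.5, 20)}, cv=5)
    search.fit(data)
    return np.exp(search.best_estimator_.score_samples(grid))

gx, gy = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])

susceptibility = 0.7 * pdf(vents, grid) + 0.3 * pdf(fissures, grid)
susceptibility /= susceptibility.sum()        # normalize to a spatial probability
```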
Complete one-loop renormalization of the Higgs-electroweak chiral Lagrangian
NASA Astrophysics Data System (ADS)
Buchalla, G.; Catà, O.; Celis, A.; Knecht, M.; Krause, C.
2018-03-01
Employing background-field method and super-heat-kernel expansion, we compute the complete one-loop renormalization of the electroweak chiral Lagrangian with a light Higgs boson. Earlier results from purely scalar fluctuations are confirmed as a special case. We also recover the one-loop renormalization of the conventional Standard Model in the appropriate limit.
Mutant maize variety containing the glt1-1 allele
Nelson, Oliver E.; Pan, David
1994-01-01
A maize plant has in its genome a non-mutable form of a mutant allele designated vitX-8132. The allele is located at a locus designated as glt which conditions kernels having an altered starch characteristic. Maize plants including such a mutant allele produce a starch that does not increase in viscosity on cooling, after heating.
Maternal and child mortality indicators across 187 countries of the world: converging or diverging.
Goli, Srinivas; Arokiasamy, Perianayagam
2014-01-01
This study reassessed the progress achieved since 1990 in maternal and child mortality indicators to test whether the progress is converging or diverging across countries worldwide. The convergence process is examined using standard parametric and non-parametric econometric models of convergence. The results of absolute convergence estimates reveal that progress in maternal and child mortality indicators is diverging for the entire period of 1990-2010 [maternal mortality ratio (MMR) - β = .00033, p < .574; neonatal mortality rate (NNMR) - β = .04367, p < .000; post-neonatal mortality rate (PNMR) - β = .02677, p < .000; under-five mortality rate (U5MR) - β = .00828, p < .000]. In the recent period, such divergence is replaced with convergence for MMR but persists for all the child mortality indicators. The results of kernel density estimates reveal a considerable reduction in the divergence of MMR for the recent period; however, the kernel density distribution plots show more than one 'peak', which indicates the emergence of convergence clubs based on mortality levels. For the child mortality indicators, the kernel estimates suggest that divergence is in progress across countries worldwide but tends toward convergence for countries with low mortality levels. Mere progress in the global averages of maternal and child mortality indicators among a global cross-section of countries does not warrant convergence unless there is a considerable reduction in the variance, skewness and range of change.
Marine Dispersal Scales Are Congruent over Evolutionary and Ecological Time.
Pinsky, Malin L; Saenz-Agudelo, Pablo; Salles, Océane C; Almany, Glenn R; Bode, Michael; Berumen, Michael L; Andréfouët, Serge; Thorrold, Simon R; Jones, Geoffrey P; Planes, Serge
2017-01-09
The degree to which offspring remain near their parents or disperse widely is critical for understanding population dynamics, evolution, and biogeography, and for designing conservation actions. In the ocean, most estimates suggesting short-distance dispersal are based on direct ecological observations of dispersing individuals, while indirect evolutionary estimates often suggest substantially greater homogeneity among populations. Reconciling these two approaches and their seemingly competing perspectives on dispersal has been a major challenge. Here we show for the first time that evolutionary and ecological measures of larval dispersal can closely agree by using both to estimate the distribution of dispersal distances. In orange clownfish (Amphiprion percula) populations in Kimbe Bay, Papua New Guinea, we found that evolutionary dispersal kernels were 17 km (95% confidence interval: 12-24 km) wide, while an exhaustive set of direct larval dispersal observations suggested kernel widths of 27 km (19-36 km) or 19 km (15-27 km) across two years. The similarity between these two approaches suggests that ecological and evolutionary dispersal kernels can be equivalent, and that the apparent disagreement between direct and indirect measurements can be overcome. Our results suggest that carefully applied evolutionary methods, which are often less expensive, can be broadly relevant for understanding ecological dispersal across the tree of life. Copyright © 2017 Elsevier Ltd. All rights reserved.
Oddou-Muratorio, Sylvie; Klein, Etienne K; Austerlitz, Frédéric
2005-12-01
Knowing the extent of gene movements from parents to offspring is essential to understand the potential of a species to adapt rapidly to a changing environment, and to design appropriate conservation strategies. In this study, we develop a nonlinear statistical model to jointly estimate the pollen dispersal kernel and the heterogeneity in fecundity among phenotypically or environmentally defined groups of males. This model uses genotype data from a sample of fruiting plants, a sample of seeds harvested on each of these plants, and all males within a circumscribed area. We apply this model to a scattered, entomophilous woody species, Sorbus torminalis (L.) Crantz, within a natural population covering more than 470 ha. We estimate a high heterogeneity in male fecundity among ecological groups, due both to phenotype (size of the trees and flowering intensity) and to landscape factors (stand density within the neighbourhood). We also show that fat-tailed kernels are the most appropriate to depict the substantial long-distance pollen dispersal abilities of this species. Finally, our results reveal that the spatial position of a male with respect to females affects its mating success as much as the ecological determinants of male fecundity do. Our study thus stresses the value of accounting for the dispersal kernel when estimating heterogeneity in male fecundity, and vice versa.
NASA Astrophysics Data System (ADS)
Niazmardi, S.; Safari, A.; Homayouni, S.
2017-09-01
Crop mapping through classification of Satellite Image Time-Series (SITS) data can provide very valuable information for several agricultural applications, such as crop monitoring, yield estimation, and crop inventory. However, SITS data classification is not straightforward, because different images of a SITS dataset carry different levels of information regarding the classification problem. Moreover, SITS data are four-dimensional and cannot be classified using conventional classification algorithms. To address these issues, in this paper we present a classification strategy based on Multiple Kernel Learning (MKL) algorithms for SITS data classification. In this strategy, different kernels are first constructed from the different images of the SITS data and are then combined into a composite kernel using an MKL algorithm. The composite kernel, once constructed, can be used for classification with kernel-based classification algorithms. We compared the computational time and the classification performance of the proposed strategy using different MKL algorithms for the purpose of crop mapping. The considered MKL algorithms are the MKL-Sum, SimpleMKL, LPMKL and Group-Lasso MKL algorithms. Experimental tests of the proposed strategy on two SITS data sets, acquired by SPOT satellite sensors, showed that the strategy provides better performance than a standard classification algorithm. The results also showed that the optimization method of the MKL algorithm affects both the computational time and the classification accuracy of the strategy.
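The composite-kernel idea behind MKL can be sketched in its simplest form: build one RBF kernel per image in the time series and combine them with weights before a precomputed-kernel SVM. Here the weights are fixed rather than learned, which is the part an actual MKL algorithm would optimize; pixels, bands, and labels are synthetic.

```python
# Hedged sketch of a weighted composite kernel for SITS classification.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(8)
n_pixels, n_bands, n_dates = 200, 4, 6
images = [rng.normal(size=(n_pixels, n_bands)) for _ in range(n_dates)]
labels = rng.integers(0, 3, size=n_pixels)    # toy crop classes

weights = np.ones(n_dates) / n_dates          # an MKL algorithm would learn these
K = sum(w * rbf_kernel(img, img, gamma=0.5) for w, img in zip(weights, images))

clf = SVC(kernel="precomputed").fit(K, labels)
print("training accuracy:", clf.score(K, labels))
```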
Non-Gaussian probabilistic MEG source localisation based on kernel density estimation☆
Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny
2014-01-01
There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
Comparison of different methods for gender estimation from face image of various poses
NASA Astrophysics Data System (ADS)
Ishii, Yohei; Hongo, Hitoshi; Niwa, Yoshinori; Yamamoto, Kazuhiko
2003-04-01
Recently, gender estimation from face images has been studied, mainly for frontal facial images. However, it is difficult to obtain such facial images consistently in application systems for security, surveillance, and marketing research. In order to build such systems, a method is required to estimate gender from images of various facial poses. In this paper, three different classifiers are compared for appearance-based gender estimation using four directional features (FDF). The classifiers are linear discriminant analysis (LDA), Support Vector Machines (SVMs), and Sparse Network of Winnows (SNoW). Face images used for the experiments were obtained from 35 viewpoints, with the viewing direction varying +/-45 degrees horizontally and +/-30 degrees vertically at 15-degree intervals. Although LDA showed the best performance for frontal facial images, SVM with a Gaussian kernel showed the best performance (86.0%) over the facial images from all 35 viewpoints. These results suggest that SVM with a Gaussian kernel is robust to changes in viewpoint when estimating gender. Furthermore, the estimation rate at each viewpoint was quite close to the average estimation rate over the 35 viewpoints. This suggests that the method can reasonably estimate gender within the range of tested viewpoints by learning face images from multiple directions for each class.
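The LDA-versus-RBF-SVM comparison translates directly into scikit-learn; the sketch below cross-validates both classifiers on stand-in feature vectors. The features and labels are random placeholders for the four directional features, so the printed accuracies only demonstrate the mechanics, not the reported 86.0%.

```python
# Hedged sketch comparing LDA and an RBF-kernel SVM on FDF-like features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(9)
X = rng.normal(size=(300, 50))                 # stand-ins for FDF feature vectors
y = rng.integers(0, 2, size=300)               # 0 = female, 1 = male (toy labels)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM-RBF", SVC(kernel="rbf", gamma="scale"))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```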
Structured functional additive regression in reproducing kernel Hilbert spaces.
Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen
2014-06-01
Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of a data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting the nonlinear additive components has been less studied. In this work, we propose a new regularization framework for structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of functional principal components, which greatly facilitates implementation and theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages a sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application.
Alumina Concentration Detection Based on the Kernel Extreme Learning Machine.
Zhang, Sen; Zhang, Tao; Yin, Yixin; Xiao, Wendong
2017-09-01
The concentration of alumina in the electrolyte is of great significance during the production of aluminum. An improper alumina concentration may lead to unbalanced material distribution and low production efficiency, and can affect the stability of the aluminum reduction cell and the current efficiency. Existing methods cannot meet the needs for online measurement because industrial aluminum electrolysis is characterized by high temperature, strong magnetic fields, coupled parameters, and high nonlinearity. Currently, there are no sensors or equipment that can detect the alumina concentration online. Most companies obtain the alumina concentration from electrolyte samples that are analyzed with an X-ray fluorescence spectrometer. To solve this problem, the paper proposes a soft sensing model based on a kernel extreme learning machine algorithm, which incorporates a kernel function into the extreme learning machine. K-fold cross validation is used to estimate the generalization error. The proposed soft sensing algorithm can detect the alumina concentration from electrical signals such as the voltages and currents of the anode rods. The prediction results show that the proposed approach gives more accurate estimates of alumina concentration with faster learning speed compared with other methods such as the basic ELM, BP, and SVM.
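A kernel extreme learning machine has a closed-form training step: with an RBF kernel matrix K over the training set, the output weights are α = (I/C + K)⁻¹T and predictions are K(x, X)α. The sketch below implements this generic form; the regularization C, kernel width, and the toy voltage/current features are assumptions, not the paper's settings.

```python
# Hedged sketch of a kernel extreme learning machine (KELM) regressor.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

class KELM:
    def __init__(self, C=10.0, gamma=0.1):
        self.C, self.gamma = C, gamma

    def fit(self, X, T):
        self.X = X
        K = rbf_kernel(X, X, gamma=self.gamma)
        # closed-form output weights: alpha = (I/C + K)^-1 T
        self.alpha = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict(self, Xnew):
        return rbf_kernel(Xnew, self.X, gamma=self.gamma) @ self.alpha

rng = np.random.default_rng(10)
X = rng.uniform(0, 1, size=(100, 6))            # toy anode voltages/currents
T = X @ rng.normal(size=6) + 0.1 * rng.normal(size=100)  # toy alumina target
model = KELM().fit(X, T)
print("fit RMSE:", np.sqrt(np.mean((model.predict(X) - T) ** 2)))
```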
Robust Kernel-Based Object Tracking with Multiple Kernel Centers
2009-07-09
[Only equation fragments of this report survive extraction: kernel weights g^j_{i,l} = g(‖(r_l(y^j) − x_i)/h_0‖^2) and the weighted mean-shift update of Eqs. (47)-(49) for multiple kernel centers.]
Bayesian parameter estimation for the Wnt pathway: an infinite mixture models approach.
Koutroumpas, Konstantinos; Ballarini, Paolo; Votsi, Irene; Cournède, Paul-Henry
2016-09-01
Likelihood-free methods, like Approximate Bayesian Computation (ABC), have been extensively used in model-based statistical inference with intractable likelihood functions. When combined with Sequential Monte Carlo (SMC) algorithms they constitute a powerful approach for parameter estimation and model selection of mathematical models of complex biological systems. A crucial step in the ABC-SMC algorithms, significantly affecting their performance, is the propagation of a set of parameter vectors through a sequence of intermediate distributions using Markov kernels. In this article, we employ Dirichlet process mixtures (DPMs) to design optimal transition kernels and we present an ABC-SMC algorithm with DPM kernels. We illustrate the use of the proposed methodology using real data for the canonical Wnt signaling pathway. A multi-compartment model of the pathway is developed and compared to an existing model. The results indicate that DPMs are more efficient in the exploration of the parameter space and can significantly improve ABC-SMC performance. In comparison to alternative sampling schemes that are commonly used, the proposed approach can bring potential benefits in the estimation of complex multimodal distributions. The method is used to estimate the parameters and the initial state of two models of the Wnt pathway, and it is shown that the multi-compartment model fits the experimental data better. Python scripts for the Dirichlet Process Gaussian Mixture model and the Gibbs sampler are available at https://sites.google.com/site/kkoutroumpas/software. Contact: konstantinos.koutroumpas@ecp.fr. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
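One generation of the idea can be sketched loosely: instead of a fixed Gaussian random-walk kernel, fit a Dirichlet-process-style mixture to the previous particle population and propose new particles from it. The toy model, distance, tolerance, and the use of scikit-learn's variational DPM approximation are all assumptions; a full ABC-SMC would also reweight particles by the prior-to-proposal ratio, which this sketch omits.

```python
# Hedged sketch of one ABC generation with a DPM-style proposal kernel.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def simulate(theta, rng):
    return theta[0] + theta[1] * rng.normal(size=50)   # toy model

rng = np.random.default_rng(11)
observed = simulate(np.array([1.0, 0.5]), rng)

# previous ABC population (e.g., accepted at a looser tolerance)
population = rng.uniform([0, 0.1], [2, 1.0], size=(300, 2))

dpm = BayesianGaussianMixture(n_components=10,
                              weight_concentration_prior_type="dirichlet_process")
dpm.fit(population)

accepted = []
while len(accepted) < 100:
    theta = dpm.sample(1)[0][0]                 # propose from the mixture kernel
    if theta[1] <= 0:                           # respect the scale's support
        continue
    sim = simulate(theta, rng)
    if abs(sim.mean() - observed.mean()) < 0.05:   # toy distance and tolerance
        accepted.append(theta)
print("posterior mean estimate:", np.mean(accepted, axis=0))
```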
NASA Astrophysics Data System (ADS)
Lee, Kyoung-Sun; Imada, Shinsuke; Kyoko, Watanabe; Bamba, Yumi; Brooks, David H.
2016-10-01
An X1.6 flare that occurred in AR 12192 on 2014 October 22 at 14:02 UT was observed by Hinode, IRIS, SDO, and RHESSI. We analyze a bright kernel which produces a white light (WL) flare with continuum enhancement and a hard X-ray (HXR) peak. Taking advantage of the spectroscopic observations of IRIS and Hinode/EIS, we measure the temporal variation of the plasma properties in the bright kernel in the chromosphere and corona. We found that explosive evaporation was observed when the WL emission occurred, even though the intensity enhancement in hotter lines is quite weak. The temporal correlation of the WL emission, HXR peak, and evaporation flows indicates that the WL emission was produced by accelerated electrons. To understand the WL emission processes, we calculated the energy flux deposited by the non-thermal electrons observed by RHESSI and compared it to the dissipated energy estimated from the chromospheric line (Mg II triplet) observed by IRIS. The deposited energy flux from the non-thermal electrons is about 3.1 × 10^10 erg cm^-2 s^-1 for a cut-off energy of 20 keV. The energy flux estimated from the temperature changes in the chromosphere, measured from the Mg II subordinate line, is about 4.6-6.7 × 10^9 erg cm^-2 s^-1, roughly 15-22% of the deposited energy. By comparing these estimated energy fluxes we conclude that the continuum enhancement was directly produced by the non-thermal electrons.
Defining space use and movements of Canada lynx with global positioning system telemetry
Burdett, C.L.; Moen, R.A.; Niemi, G.J.; Mech, L.D.
2007-01-01
Space use and movements of Canada lynx (Lynx canadensis) are difficult to study with very-high-frequency radiocollars. We deployed global positioning system (GPS) collars on 11 lynx in Minnesota to study their seasonal space-use patterns. We estimated home ranges with minimum-convex-polygon and fixed-kernel methods and estimated core areas with area/probability curves. Fixed-kernel home ranges of males (range = 29-522 km2) were significantly larger than those of females (range = 5-95 km2) annually and during the denning season. Some male lynx increased movements during March, the month most influenced by breeding activity. Lynx core areas were predicted by the 60% fixed-kernel isopleth in most seasons. The mean core-area size of males (range = 6-190 km2) was significantly larger than that of females (range = 1-19 km2) annually and during denning. Most female lynx were reproductive animals with reduced movements, whereas males often ranged widely between Minnesota and Ontario. Sensitivity analyses examining the effect of location frequency on home-range size suggest that the home-range sizes of breeding females are less sensitive to sample size than those of males. Longer periods between locations decreased home-range and core-area overlap relative to the home range estimated from daily locations. GPS collars improve our understanding of space use and movements by lynx by increasing the spatial extent and temporal frequency of monitoring and allowing home ranges to be estimated over short periods that are relevant to life-history characteristics. © 2007 American Society of Mammalogists.
Mutant maize variety containing the glt1-1 allele
Nelson, O.E.; Pan, D.
1994-07-19
A maize plant has in its genome a non-mutable form of a mutant allele designated vitX-8132. The allele is located at a locus designated as glt which conditions kernels having an altered starch characteristic. Maize plants including such a mutant allele produce a starch that does not increase in viscosity on cooling, after heating. 2 figs.
Nawab, Anjum; Alam, Feroz; Haq, Mohammad Abdul; Haider, Mohammad Samee; Lutfi, Zubala; Kamaluddin, Sheikh; Hasnain, Abid
2018-07-15
In this study, mango kernel starch (MKS)-based heat-sealable pouches were developed for packaging red chili powder. The films were prepared by a casting technique using glycerol, sorbitol, or a 1:1 mixture of glycerol and sorbitol as plasticizer, and were then sealed. All films showed good heat-sealing capacity, but glycerol-plasticized films exhibited higher seal strength than their counterparts. The red chili powder was packed in the MKS film pouches, while commercially available polyethylene (PE) film was used as a control. The pungency and color of the red chili powder were monitored during six months of storage at 40 °C. The capsaicinoid content was extracted from the red chili with acetonitrile and evaluated quantitatively using a spectrophotometric method. The extractable color was measured by the ASTA method using acetone. The results showed significant differences in the color and pungency of chili packed in MKS and PE pouches. The highest reduction in capsaicinoid content (pungency) of chili powder was observed in the PE pouch (25.9%), while the lowest was observed in the MKS pouch containing sorbitol (15.7%). Similarly, color loss was highest in chili packed in the PE pouch and lowest in the MKS-sorbitol pouch. Copyright © 2018 Elsevier B.V. All rights reserved.
Influence of Initial Correlations on Evolution of a Subsystem in a Heat Bath and Polaron Mobility
NASA Astrophysics Data System (ADS)
Los, Victor F.
2017-08-01
A regular approach to accounting for initial correlations, which makes it possible to go beyond the unrealistic random phase (initial product state) approximation in deriving the evolution equations, is suggested. Exact homogeneous (time-convolution and time-convolutionless) equations are obtained for a relevant part of the two-time equilibrium correlation function for the dynamic variables of a subsystem interacting with a boson field (heat bath). No conventional approximation like the RPA or Bogoliubov's principle of weakening of initial correlations is used. The obtained equations take into account the initial correlations in the kernel governing their evolution. The solution to these equations is found in the second order of the kernel expansion in the electron-phonon interaction, which demonstrates that the initial correlations generally influence the correlation function's evolution in time. It is explicitly shown that this influence vanishes on a large timescale (actually at t → ∞) and the evolution process enters an irreversible kinetic regime. The developed approach is applied to the Fröhlich polaron, and the low-temperature polaron mobility (which has long been under debate) is found with a correction due to initial correlations.
Space Use and Movement of a Neotropical Top Predator: The Endangered Jaguar
Stabach, Jared A.; Fleming, Chris H.; Calabrese, Justin M.; De Paula, Rogério C.; Ferraz, Kátia M. P. M.; Kantek, Daniel L. Z.; Miyazaki, Selma S.; Pereira, Thadeu D. C.; Araujo, Gediendson R.; Paviolo, Agustin; De Angelo, Carlos; Di Bitetti, Mario S.; Cruz, Paula; Lima, Fernando; Cullen, Laury; Sana, Denis A.; Ramalho, Emiliano E.; Carvalho, Marina M.; Soares, Fábio H. S.; Zimbres, Barbara; Silva, Marina X.; Moraes, Marcela D. F.; Vogliotti, Alexandre; May, Joares A.; Haberfeld, Mario; Rampim, Lilian; Sartorello, Leonardo; Ribeiro, Milton C.; Leimgruber, Peter
2016-01-01
Accurately estimating home range and understanding movement behavior can provide important information on ecological processes. Advances in data collection and analysis have improved our ability to estimate home range and movement parameters, both of which have the potential to impact species conservation. Fitting continuous-time movement models to data and incorporating the autocorrelated kernel density estimator (AKDE), we investigated range residency of forty-four jaguars fitted with GPS collars across five biomes in Brazil and Argentina. We assessed home range and movement parameters of range resident animals and compared AKDE estimates with kernel density estimates (KDE). We accounted for differential space use and movement among individuals, sex, region, and habitat quality. Thirty-three (80%) of collared jaguars were range resident. Home range estimates using AKDE were 1.02 to 4.80 times larger than KDE estimates that did not consider autocorrelation. Males exhibited larger home ranges, more directional movement paths, and a trend towards larger distances traveled per day. Jaguars with the largest home ranges occupied the Atlantic Forest, a biome with high levels of deforestation and high human population density. Our results fill a gap in the knowledge of the species' ecology with an aim towards better conservation of this endangered/critically endangered carnivore—the top predator in the Neotropics. PMID:28030568
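The reason AKDE estimates run larger than conventional KDE can be illustrated with a crude sketch: autocorrelated fixes carry fewer independent locations, so the bandwidth should be set from the effective sample size rather than the raw count. This toy uses an AR(1) approximation of the fixes; the real AKDE fits full continuous-time movement models, so everything below is a simplified assumption.

```python
# Hedged sketch: bandwidth inflation from autocorrelated GPS fixes.
import numpy as np

rng = np.random.default_rng(12)
n = 1000
pos = np.zeros((n, 2))
for t in range(1, n):
    pos[t] = 0.95 * pos[t - 1] + rng.normal(size=2)   # OU-like correlated fixes

def effective_n(x):
    x = x - x.mean()
    rho = np.corrcoef(x[:-1], x[1:])[0, 1]            # lag-1 autocorrelation
    return len(x) * (1 - rho) / (1 + rho)             # AR(1) effective sample size

n_eff = min(effective_n(pos[:, 0]), effective_n(pos[:, 1]))
scale = pos.std(axis=0).mean()
bw_iid = scale * n ** (-1 / 6)         # Silverman-type 2-D rate with raw n
bw_akde = scale * n_eff ** (-1 / 6)    # same rate with the effective sample size
print(f"n_eff = {n_eff:.0f}, bandwidth inflation = {bw_akde / bw_iid:.2f}x")
```

A wider bandwidth spreads the density over more area, which is consistent with the 1.02- to 4.80-fold larger AKDE home ranges reported above.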
Space Use and Movement of a Neotropical Top Predator: The Endangered Jaguar.
Morato, Ronaldo G; Stabach, Jared A; Fleming, Chris H; Calabrese, Justin M; De Paula, Rogério C; Ferraz, Kátia M P M; Kantek, Daniel L Z; Miyazaki, Selma S; Pereira, Thadeu D C; Araujo, Gediendson R; Paviolo, Agustin; De Angelo, Carlos; Di Bitetti, Mario S; Cruz, Paula; Lima, Fernando; Cullen, Laury; Sana, Denis A; Ramalho, Emiliano E; Carvalho, Marina M; Soares, Fábio H S; Zimbres, Barbara; Silva, Marina X; Moraes, Marcela D F; Vogliotti, Alexandre; May, Joares A; Haberfeld, Mario; Rampim, Lilian; Sartorello, Leonardo; Ribeiro, Milton C; Leimgruber, Peter
2016-01-01
Accurately estimating home range and understanding movement behavior can provide important information on ecological processes. Advances in data collection and analysis have improved our ability to estimate home range and movement parameters, both of which have the potential to impact species conservation. Fitting continuous-time movement models to the data and applying the autocorrelated kernel density estimator (AKDE), we investigated range residency of forty-four jaguars fitted with GPS collars across five biomes in Brazil and Argentina. We assessed home range and movement parameters of range-resident animals and compared AKDE estimates with kernel density estimates (KDE). We accounted for differential space use and movement among individuals, sex, region, and habitat quality. Thirty-three (80%) of the collared jaguars were range resident. Home range estimates using AKDE were 1.02 to 4.80 times larger than KDE estimates that did not account for autocorrelation. Males exhibited larger home ranges, more directional movement paths, and a trend towards larger distances traveled per day. Jaguars with the largest home ranges occupied the Atlantic Forest, a biome with high levels of deforestation and high human population density. Our results fill a gap in the knowledge of the species' ecology with an aim towards better conservation of this endangered/critically endangered carnivore, the top predator in the Neotropics.
Comparing estimates of genetic variance across different relationship models.
Legarra, Andres
2016-02-01
Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
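As a concrete reading of the Dk statistic described above, here is a minimal Python sketch; the toy relationship matrix, the variance component value, and the function name are illustrative, not from the paper:

```python
import numpy as np

def dk_statistic(K):
    """Legarra's Dk for a relationship matrix K: the average self-relationship
    minus the average of all (self- and across-) relationships in the
    chosen reference population."""
    return np.mean(np.diag(K)) - np.mean(K)

# Toy pedigree-like relationship matrix for three individuals (hypothetical)
K = np.array([[1.00, 0.50, 0.25],
              [0.50, 1.00, 0.25],
              [0.25, 0.25, 1.00]])
sigma2_u = 2.0                     # variance component from the mixed model (assumed)
print(dk_statistic(K) * sigma2_u)  # genetic variance referred to this reference population
```

For a large population with self-relationships near 1 and average across-relationships near 0, Dk approaches 1, consistent with the abstract's remark about typical relationship models.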
Lara-Romero, Carlos; Robledo-Arnuncio, Juan J; García-Fernández, Alfredo; Iriondo, Jose M
2014-01-01
Plant recruitment depends, among other factors, on environmental conditions and their variation at different spatial scales. Characterizing dispersal in contrasting environments may thus be necessary to understand natural intraspecific variation in the processes underlying recruitment. Silene ciliata and Armeria caespitosa are two representative species of cryophilic pastures above the tree line in Mediterranean high mountains. No explicit estimates of dispersal kernels have been made so far for these or other high-mountain plants. Such data could help to predict their dispersal and recruitment patterns in a context of changing environments under ongoing global warming. We used an inverse modelling approach to analyse effective seed dispersal patterns in five populations of both Silene ciliata and Armeria caespitosa along an altitudinal gradient in Sierra de Guadarrama (Madrid, Spain). We considered four commonly employed two-dimensional seedling dispersal kernels: exponential-power, 2Dt, WALD and log-normal. No single kernel function provided the best fit across all populations, although estimated mean dispersal distances were short (<1 m) in all cases. S. ciliata did not exhibit significant among-population variation in mean dispersal distance, whereas significant differences in mean dispersal distance were found in A. caespitosa. Both S. ciliata and A. caespitosa exhibited among-population variation in the fecundity parameter and lacked significant variation in kernel shape. This study illustrates the complexity of intraspecific variation in the processes underlying recruitment, showing that effective dispersal kernels can remain relatively invariant across populations within particular species, even if there are strong variations in demographic structure and/or physical environment among populations, while the invariant-dispersal assumption may not hold for other species in the same environment. Our results call for a case-by-case analysis in a wider range of plant taxa and environments to assess the prevalence and magnitude of intraspecific dispersal variation.
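To make one of the four kernels named above concrete, here is a sketch of the two-dimensional exponential-power kernel in Python; the parameterization follows a common convention in the dispersal literature (b = 1 gives the exponential kernel, b = 2 the Gaussian), though the paper's exact parameterization may differ:

```python
import numpy as np
from scipy.special import gamma

def exp_power_kernel(r, a, b):
    """Density of the 2-D exponential-power dispersal kernel at distance r,
    with scale a and shape b; normalized to integrate to 1 over the plane."""
    norm = b / (2.0 * np.pi * a**2 * gamma(2.0 / b))
    return norm * np.exp(-(r / a) ** b)

def mean_dispersal_distance(a, b):
    """Mean dispersal distance implied by the kernel."""
    return a * gamma(3.0 / b) / gamma(2.0 / b)

# Example: a = 0.5 m, b = 1 (exponential) gives a mean distance of 1.0 m,
# the same order as the short (<1 m) distances reported above.
print(mean_dispersal_distance(0.5, 1.0))
```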
NASA Astrophysics Data System (ADS)
Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier
2015-12-01
Random walk particle tracking methods are a computationally efficient family of methods to solve reactive transport problems. While the number of particles in most realistic applications is on the order of 10^6-10^9, the number of reactive molecules even in dilute systems might be on the order of fractions of the Avogadro number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in loss of accuracy, but may also lead to an improper reproduction of the mixing process, which is limited by diffusion. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that recovers the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the locations of the molecules represented by that particle, rigorously represented by a kernel density function. The probability of reaction can be obtained by combining the kernels associated with two potentially reactive particles. We demonstrate that the deviations observed in the reaction-versus-time curves in numerical experiments reported in the literature can be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process. Our results show that KDEs are powerful tools to improve computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in dilute systems should be modeled with alternative mechanistic models rather than with a limited number of particles.
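One plausible reading of "combining the kernels associated with two potentially reactive particles" is the convolution of two Gaussian kernels evaluated at the particle separation, which is again a Gaussian; a sketch, where the bandwidth-from-diffusion rule and all parameter values are assumptions rather than the paper's calibration:

```python
import numpy as np

def co_location_density(x1, x2, h1, h2):
    """Density for the molecules represented by two particles (Gaussian
    kernels centred at x1 and x2 with bandwidths h1, h2) to meet: the
    convolution of the two kernels evaluated at the separation x1 - x2."""
    s2 = h1**2 + h2**2                                 # combined variance
    d2 = np.sum((np.asarray(x1) - np.asarray(x2))**2)  # squared separation
    dim = np.size(x1)
    return np.exp(-d2 / (2.0 * s2)) / (2.0 * np.pi * s2) ** (dim / 2.0)

D, t = 1e-9, 10.0         # diffusion coefficient and elapsed time (assumed units)
h = np.sqrt(2.0 * D * t)  # kernel size growing diffusively, as the abstract suggests
print(co_location_density([0.0, 0.0], [1e-4, 0.0], h, h))
```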
Nowcasting Cloud Fields for U.S. Air Force Special Operations
2017-03-01
The application of Bayes’ Rule offers many advantages over Kernel Density Estimation (KDE) and other commonly used statistical post-processing methods...reflectance and probability of cloud. A statistical post-processing technique is applied using Bayesian estimation to train the system from a set of past...Keywords: nowcasting, low cloud forecasting, cloud reflectance, ISR, Bayesian estimation, statistical post-processing, machine learning.
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-11-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps (i.e., the spatial probability of a future vent opening given the past eruptive activity of a volcano). This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source geographic information system Quantum GIS, which is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the selection of an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input data sets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is here shown through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
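The weighted summation of kernel-based PDFs described above can be sketched in a few lines of Python; this is not the QVAST plugin code, and the bandwidths and weights below are placeholders for what QVAST derives from the input parameters and shapefile geometry:

```python
import numpy as np
from scipy.stats import gaussian_kde

def susceptibility_map(datasets, weights, grid_xy, bandwidths):
    """Weighted sum of Gaussian-kernel PDFs, one per input data set
    (e.g. vents, fissures, dykes), evaluated on a common grid.
    datasets: list of (2, n_i) coordinate arrays; grid_xy: (2, m) grid points."""
    total = np.zeros(grid_xy.shape[1])
    for pts, w, h in zip(datasets, weights, bandwidths):
        # bw_method as a scalar factor; bandwidth selection is QVAST's core feature
        kde = gaussian_kde(pts, bw_method=h)
        total += w * kde(grid_xy)
    return total / np.sum(weights)  # keep the combined PDF normalized
```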
Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan
2016-01-01
Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, the partial volume effect (PVE) is recognized as one of the most important factors degrading imaging quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as an anisotropic Gaussian to address the resolution difference in the transverse and axial directions commonly seen in a clinical PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinical datasets with non-Hodgkin’s lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method had high performance for simultaneous image restoration, tumor segmentation and scanner blur kernel estimation. In particular, the recovery coefficients (RC) of the restored images in the phantom study were close to 1, indicating an efficient recovery of the original blurred images; for segmentation, the proposed method achieved average dice similarity indexes (DSIs) of 0.79 and 0.80 for the two clinical datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transverse direction and 7% in the axial direction. PMID:28603407
NASA Astrophysics Data System (ADS)
Hrycik, Janelle M.; Chassé, Joël; Ruddick, Barry R.; Taggart, Christopher T.
2013-11-01
Early life-stage dispersal influences recruitment and is of significance in explaining the distribution and connectivity of marine species. Motivations for quantifying dispersal range from biodiversity conservation to the design of marine reserves and the mitigation of species invasions. Here we compare estimates of real particle dispersion in a coastal marine environment with similar estimates provided by hydrodynamic modelling. We do so by using a system of magnetically attractive particles (MAPs) and a magnetic-collector array that provides measures of Lagrangian dispersion based on the time-integration of MAPs dispersing through the array. MAPs released as a point source in a coastal marine location dispersed through the collector array over a 5-7 d period. A virtual release and observed (real-time) environmental conditions were used in a high-resolution three-dimensional hydrodynamic model to estimate the dispersal of virtual particles (VPs). The number of MAPs captured throughout the collector array and the number of VPs that passed through each corresponding model location were enumerated and compared. Although VP dispersal reflected several aspects of the observed MAP dispersal, the comparisons demonstrated model sensitivity to the small-scale (random-walk) particle diffusivity parameter (Kp). The one-dimensional dispersal kernel for the MAPs had an e-folding scale estimate in the range of 5.19-11.44 km, while those from the model simulations were comparable at 1.89-6.52 km, and also demonstrated sensitivity to Kp. Variations among comparisons are related to the value of Kp used in modelling and are postulated to be related to MAP losses from the water column and (or) shear dispersion acting on the MAPs; a process that is constrained in the model. Our demonstration indicates a promising new way of 1) quantitatively and empirically estimating the dispersal kernel in aquatic systems, and 2) quantitatively assessing and (or) improving regional hydrodynamic models.
Structured functional additive regression in reproducing kernel Hilbert spaces
Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen
2013-01-01
Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The use of a data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting the nonlinear additive components has been less studied. In this work, we propose a new regularization framework for structure estimation in the context of reproducing kernel Hilbert spaces. The proposed approach takes advantage of functional principal components, which greatly facilitates implementation and theoretical analysis. Selection and estimation are achieved by penalized least squares using a penalty that encourages a sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application. PMID:25013362
Weavers, Paul T; Borisch, Eric A; Riederer, Stephen J
2015-06-01
To develop and validate a method for choosing the optimal two-dimensional CAIPIRINHA kernel for subtraction contrast-enhanced MR angiography (CE-MRA) and to estimate the degree of image quality improvement relative to a reference acceleration parameter set at R ≥ 8. A metric based on patient-specific coil calibration information was defined for evaluating the optimality of CAIPIRINHA kernels as applied to subtraction CE-MRA. Evaluation in retrospective studies using archived coil calibration data from abdomen, calf, foot, and hand CE-MRA exams was accomplished with an evaluation metric comparing the geometry factor (g-factor) histograms. Prospective calf, foot, and hand CE-MRA studies were evaluated with vessel signal-to-noise ratio (SNR). Retrospective studies showed that the g-factor improvement for the selected CAIPIRINHA kernels was significant in the feet, moderate in the abdomen, and modest in the calves and hands. Prospective CE-MRA studies using optimal CAIPIRINHA showed reduced noise amplification with identical acquisition time in studies of the feet, with minor improvements in the hands and calves. A method for selecting the optimal CAIPIRINHA kernel for highly accelerated (R ≥ 8) CE-MRA exams given a specific patient and receiver array was demonstrated. CAIPIRINHA optimization appears valuable in accelerated CE-MRA of the feet and, to a lesser extent, in the abdomen. © 2014 Wiley Periodicals, Inc.
Kebede, Aida Z; Woldemariam, Tsegaye; Reid, Lana M; Harris, Linda J
2016-01-01
Unique and co-localized chromosomal regions affecting Gibberella ear rot disease resistance and correlated agronomic traits were identified in maize. Dissecting the mechanisms underlying resistance to Gibberella ear rot (GER) disease in maize provides insight towards more informed breeding. To this end, we evaluated 410 recombinant inbred lines (RIL) for GER resistance over three testing years using silk channel and kernel inoculation techniques. RILs were also evaluated for agronomic traits such as days to silking, husk cover, and kernel drydown rate. The RILs showed significant genotypic differences for all traits, with above-average to high heritability estimates. Significant (P < 0.01) but weak genotypic correlations were observed between disease severity and agronomic traits, indicating the involvement of agronomic traits in disease resistance. Common QTLs were detected for GER resistance and kernel drydown rate, suggesting the existence of pleiotropic genes that could be exploited to improve both traits at the same time. The QTLs identified for silk and kernel resistance shared some common regions on chromosomes 1, 2, and 8 and also had some regions specific to each tissue on chromosomes 9 and 10. Thus, effective GER resistance breeding could be achieved by considering screening methods that allow exploitation of tissue-specific disease resistance mechanisms and by including kernel drydown rate either in an index or as an indirect selection criterion.
Sepsis mortality prediction with the Quotient Basis Kernel.
Ribas Ripoll, Vicent J; Vellido, Alfredo; Romero, Enrique; Ruiz-Rodríguez, Juan Carlos
2014-05-01
This paper presents an algorithm to assess the risk of death in patients with sepsis. Sepsis is a common clinical syndrome in the intensive care unit (ICU) that can lead to severe sepsis, a severe state of septic shock, or multi-organ failure. The proposed algorithm may be implemented as part of a clinical decision support system that can be used in combination with the scores deployed in the ICU to improve the accuracy, sensitivity and specificity of mortality prediction for patients with sepsis. In this paper, we used the Simplified Acute Physiology Score (SAPS) for ICU patients and the Sequential Organ Failure Assessment (SOFA) to build our kernels and algorithms. In the proposed method, we embed the available data in a suitable feature space and use algorithms based on linear algebra, geometry and statistics for inference. We present a simplified version of the Fisher kernel (a practical Fisher kernel for multinomial distributions), as well as a novel kernel that we named the Quotient Basis Kernel (QBK). These kernels are used as the basis for mortality prediction using soft-margin support vector machines. The two new kernels are compared against other generative kernels based on the Jensen-Shannon metric (centred, exponential and inverse) and other widely used kernels (linear, polynomial and Gaussian). Clinical relevance is also evaluated by comparing these results with logistic regression and the standard clinical prediction method based on the initial SAPS score. We tested the new methods via cross-validation with a cohort of 400 test patients. The results obtained using our methods compare favourably with those obtained using alternative kernels (80.18% accuracy for the QBK) and with the standard clinical prediction methods based on the basal SAPS score and logistic regression (71.32% and 71.55%, respectively). The QBK presented a sensitivity of 79.34% and a specificity of 83.24%, outperforming the other kernels analysed, logistic regression, and the standard clinical prediction method based on the basal SAPS score. Several scoring systems for patients with sepsis have been introduced and developed over the last 30 years. They allow for the assessment of the severity of disease and provide an estimate of in-hospital mortality. Physiology-based scoring systems are applied to critically ill patients and have a number of advantages over diagnosis-based systems. Severity score systems are often used to stratify critically ill patients for possible inclusion in clinical trials. In this paper, we present an effective algorithm that combines both scoring methodologies for the assessment of death in patients with sepsis and that can be used to improve the sensitivity and specificity of currently available methods. Copyright © 2014 Elsevier B.V. All rights reserved.
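The QBK itself is defined in the paper and not reproduced here, but the surrounding machinery, a custom kernel plugged into a soft-margin SVM, can be sketched with scikit-learn's precomputed-kernel interface; the stand-in linear kernel and the synthetic data are placeholders:

```python
import numpy as np
from sklearn.svm import SVC

def custom_kernel_matrix(A, B):
    """Placeholder for a generative kernel such as the QBK; the actual
    Quotient Basis Kernel is defined in the paper, not reproduced here.
    A simple linear kernel stands in so the example runs."""
    return A @ B.T

X_train = np.random.rand(40, 6)        # e.g. SAPS/SOFA-derived features (synthetic)
y_train = np.random.randint(0, 2, 40)  # survival outcome (synthetic)
X_test = np.random.rand(10, 6)

clf = SVC(kernel="precomputed", C=1.0)  # soft-margin SVM on a precomputed Gram matrix
clf.fit(custom_kernel_matrix(X_train, X_train), y_train)
pred = clf.predict(custom_kernel_matrix(X_test, X_train))
```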
Testing the causality of Hawkes processes with time reversal
NASA Astrophysics Data System (ADS)
Cordi, Marcus; Challet, Damien; Muni Toke, Ioane
2018-03-01
We show that univariate and symmetric multivariate Hawkes processes are only weakly causal: the true log-likelihoods of real and reversed event time vectors are almost equal, so parameter estimation via maximum likelihood only weakly depends on the direction of the arrow of time. In ideal (synthetic) conditions, tests of goodness of parametric fit unambiguously reject backward event times, which implies that inferring kernels from time-symmetric quantities, such as the autocovariance of the event rate, only rarely produces statistically significant fits. Finally, we find that fitting financial data with many-parameter kernels may yield significant fits for both arrows of time for the same event time vector, sometimes favouring the backward time direction. This goes to show that a significant fit of Hawkes processes to real data with flexible kernels does not imply a definite arrow of time unless one tests for it.
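A minimal illustration of the time-reversal comparison for a univariate exponential-kernel Hawkes process: compute the log-likelihood of the observed event times and of the reversed times T - t_i under the same parameters. The paper compares maximum-likelihood fits; fixed parameters are used here only to keep the sketch short, and the event times are synthetic stand-ins:

```python
import numpy as np

def hawkes_loglik(times, T, mu, alpha, beta):
    """Log-likelihood of a univariate Hawkes process on [0, T] with baseline
    mu and exponential kernel alpha*beta*exp(-beta*t) (branching ratio alpha),
    using the standard recursion for the self-excitation term."""
    ll, R, prev = 0.0, 0.0, None
    for t in times:
        if prev is not None:
            R = np.exp(-beta * (t - prev)) * (1.0 + R)
        ll += np.log(mu + alpha * beta * R)  # log intensity at each event
        prev = t
    # compensator: integral of the intensity over [0, T]
    comp = mu * T + alpha * np.sum(1.0 - np.exp(-beta * (T - np.asarray(times))))
    return ll - comp

T = 100.0
times = np.sort(np.random.uniform(0, T, 50))  # stand-in event times
forward = hawkes_loglik(times, T, mu=0.3, alpha=0.5, beta=1.0)
backward = hawkes_loglik(np.sort(T - times), T, mu=0.3, alpha=0.5, beta=1.0)
print(forward, backward)  # near-equal values illustrate weak causality
```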
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Unlike sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, based on a simple online vector quantization method. An analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
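A sketch of QKLMS as described in the abstract, assuming a Gaussian kernel; the step size, quantization threshold, and kernel width below are illustrative:

```python
import numpy as np

class QKLMS:
    """Quantized kernel least mean square, as sketched from the abstract:
    online vector quantization of the input space, where 'redundant' inputs
    update the coefficient of the closest centre instead of growing the net."""

    def __init__(self, step=0.5, eps=0.3, sigma=1.0):
        self.step, self.eps, self.sigma = step, eps, sigma
        self.centers, self.coeffs = [], []

    def _k(self, u, v):
        return np.exp(-np.sum((u - v) ** 2) / (2.0 * self.sigma ** 2))

    def predict(self, u):
        return sum(a * self._k(u, c) for a, c in zip(self.coeffs, self.centers))

    def update(self, u, d):
        err = d - self.predict(u)
        if self.centers:
            dists = [np.linalg.norm(u - c) for c in self.centers]
            j = int(np.argmin(dists))
            if dists[j] <= self.eps:          # quantize: merge into closest centre
                self.coeffs[j] += self.step * err
                return err
        self.centers.append(np.asarray(u, dtype=float))
        self.coeffs.append(self.step * err)   # otherwise allocate a new centre
        return err
```

The quantization step is what bounds network growth: inputs within eps of an existing centre update that centre's coefficient rather than adding a new radial basis unit.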
Extensive screening for herbal extracts with potent antioxidant properties
Niwano, Yoshimi; Saito, Keita; Yoshizaki, Fumihiko; Kohno, Masahiro; Ozawa, Toshihiko
2011-01-01
This paper summarizes our search for herbal extracts with potent antioxidant activity, obtained from a large-scale screening based on superoxide radical (O2•−) scavenging activity followed by characterization of antioxidant properties. First, scavenging activity against O2•− was extensively screened in ethanol extracts of approximately 1000 kinds of herbs by applying an electron spin resonance (ESR) spin-trapping method, and we chose four edible herbal extracts with markedly potent ability to scavenge O2•−: the extracts from Punica granatum (peel), Syzygium aromaticum (bud), Mangifera indica (kernel), and Phyllanthus emblica (fruit). These extracts were further examined to determine whether they also scavenge the hydroxyl radical (•OH), again by applying the ESR spin-trapping method, and whether they have heat resistance as a desirable characteristic. Experiments with the Fenton reaction and photolysis of H2O2 induced by UV irradiation demonstrated that all four extracts have potent ability to directly scavenge •OH. Furthermore, the scavenging activities against O2•− and •OH of the extracts of P. granatum (peel), M. indica (kernel) and P. emblica (fruit) proved to be heat-resistant. These results may give useful information when choosing a potent antioxidant as a foodstuff. In particular, the extracts of the three aforementioned herbs are expected to be suitable for food processing in which thermal devices are used, because of their heat resistance. PMID:21297917
Estimating epidemic arrival times using linear spreading theory
NASA Astrophysics Data System (ADS)
Chen, Lawrence M.; Holzer, Matt; Shapiro, Anne
2018-01-01
We study the dynamics of a spatially structured model of worldwide epidemics and formulate predictions for arrival times of the disease at any city in the network. The model is composed of a system of ordinary differential equations describing a meta-population susceptible-infected-recovered compartmental model defined on a network where each node represents a city and the edges represent the flight paths connecting cities. Making use of the linear determinacy of the system, we consider spreading speeds and arrival times in the system linearized about the unstable disease free state and compare these to arrival times in the nonlinear system. Two predictions are presented. The first is based upon expansion of the heat kernel for the linearized system. The second assumes that the dominant transmission pathway between any two cities can be approximated by a one dimensional lattice or a homogeneous tree and gives a uniform prediction for arrival times independent of the specific network features. We test these predictions on a real network describing worldwide airline traffic.
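A minimal sketch of the first idea, reading arrival times off the system linearized about the disease-free state; the mobility coupling, rates, and threshold below are placeholders rather than the paper's airline-network calibration:

```python
import numpy as np
from scipy.linalg import expm

def arrival_times(L, beta, gamma, seed, thresh=1e-3, dt=0.1, tmax=200.0):
    """Arrival times in a linearized SIR metapopulation model
    dI/dt = (beta - gamma) I + L I, with L a mobility coupling matrix
    (graph-Laplacian-like). Arrival at a node is the first time its
    infected fraction exceeds thresh."""
    n = L.shape[0]
    A = (beta - gamma) * np.eye(n) + L
    step = expm(A * dt)                 # propagator over one time step
    I = np.zeros(n); I[seed] = 1e-6     # small seed infection
    arrivals = np.full(n, np.inf)
    t = 0.0
    while t < tmax and np.isinf(arrivals).any():
        hit = (I >= thresh) & np.isinf(arrivals)
        arrivals[hit] = t
        I = step @ I
        t += dt
    return arrivals

# Three cities in a chain, coupled by symmetric travel (hypothetical rates)
W = 0.05 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
L = W - np.diag(W.sum(axis=1))
print(arrival_times(L, beta=0.5, gamma=0.2, seed=0))
```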
New numerical method for radiation heat transfer in nonhomogeneous participating media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howell, J.R.; Tan, Zhiqiang
A new numerical method, which solves the exact integral equations of distance-angular integration form for radiation transfer, is introduced in this paper. By constructing and prestoring the numerical integration formulas for the distance integral for appropriate kernel functions, this method eliminates the time-consuming evaluations of the kernels of the space integrals in the formal computations. In addition, when the number of elements in the system is large, the resulting coefficient matrix is quite sparse. Thus, either considerable time or much storage can be saved. A weakness of the method is discussed, and some remedies are suggested. As illustrations, some one-dimensional and two-dimensional problems in both homogeneous and inhomogeneous emitting, absorbing, and linear anisotropic scattering media are studied. Some results are compared with available data. 13 refs.
Limitations of shallow nets approximation.
Lin, Shao-Bo
2017-10-01
In this paper, we analyze the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets is attained for all functions in balls of a reproducing kernel Hilbert space with high probability, which differs from the classical minimax approximation error estimates. This result, together with the existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.
Estimating average growth trajectories in shape-space using kernel smoothing.
Hutton, Tim J; Buxton, Bernard F; Hammond, Peter; Potts, Henry W W
2003-06-01
In this paper, we show how a dense surface point distribution model of the human face can be computed and demonstrate the usefulness of the high-dimensional shape-space for expressing the shape changes associated with growth and aging. We show how average growth trajectories for the human face can be computed in the absence of longitudinal data by using kernel smoothing across a population. A training set of three-dimensional surface scans of 199 male and 201 female subjects between 0 and 50 years of age is used to build the model.
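Kernel smoothing across a cross-sectional population, as described above, amounts to a Nadaraya-Watson weighted average in age; a sketch, where the Gaussian kernel and bandwidth are assumptions:

```python
import numpy as np

def kernel_smoothed_trajectory(ages, shapes, query_ages, h=2.0):
    """Nadaraya-Watson kernel smoothing across a cross-sectional population:
    the 'average' shape at each query age is a Gaussian-weighted mean of all
    subjects' shape vectors, weighted by closeness in age.
    ages: (n,) in years; shapes: (n, d) shape-space coordinates; h: bandwidth."""
    ages = np.asarray(ages, dtype=float)
    shapes = np.asarray(shapes, dtype=float)
    out = []
    for a in query_ages:
        w = np.exp(-0.5 * ((ages - a) / h) ** 2)  # Gaussian weights in age
        out.append(w @ shapes / w.sum())
    return np.array(out)
```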
2010-09-01
and y, the axial and radial coordinates respectively. Point c lies somewhere within the mesh generated by the initial expansion (the kernel). All that...and the surface will be subjected to high heat loads restricting the choice of suitable materials. Material choice has direct implications for...Some legacy trajectory codes might not be able to deal with anything other than axial forces from engines, reflecting the class of problem they were
Motives and periods in Bianchi IX gravity models
NASA Astrophysics Data System (ADS)
Fan, Wentao; Fathizadeh, Farzad; Marcolli, Matilde
2018-05-01
We show that, when considering the anisotropic scaling factors and their derivatives as affine variables, the coefficients of the heat-kernel expansion of the Dirac-Laplacian on SU(2) Bianchi IX metrics are algebro-geometric periods of motives of complements in affine spaces of unions of quadrics and hyperplanes. We show that the motives are mixed Tate and we provide an explicit computation of their Grothendieck classes.
Effects of study area size on home range estimates of common bottlenose dolphins Tursiops truncatus
Nekolny, Samantha R; Denny, Matthew; Biedenbach, George; Howells, Elisabeth M; Mazzoil, Marilyn; Durden, Wendy N; Moreland, Lydia; David Lambert, J
2017-01-01
Knowledge of an animal’s home range is a crucial component in making informed management decisions. However, many home range studies are limited by study area size, and therefore may underestimate the size of the home range. In many cases, individuals have been shown to travel outside of the study area and utilize a larger area than estimated by the study design. In this study, data collected by multiple research groups studying bottlenose dolphins on the east coast of Florida were combined to determine how home range estimates increased with increasing study area size. Home range analyses utilized photo-identification data collected from 6 study areas throughout the St Johns River (SJR; Jacksonville, FL, USA) and adjacent waterways, extending a total of 253 km to the southern end of Mosquito Lagoon in the Indian River Lagoon Estuarine System. Univariate kernel density estimates (KDEs) were computed for individuals with 10 or more sightings (n = 20). Kernels were calculated for the primary study area (SJR) first, then additional kernels were calculated by combining the SJR and the next adjacent waterway; this continued in an additive fashion until all study areas were included. The 95% and 50% KDEs calculated for the SJR alone ranged from 21 to 35 km and 4 to 19 km, respectively. The 95% and 50% KDEs calculated for all combined study areas ranged from 116 to 217 km and 9 to 70 km, respectively. This study illustrates the degree to which home range may be underestimated by the use of limited study areas and demonstrates the benefits of conducting collaborative science. PMID:29492031
Effects of study area size on home range estimates of common bottlenose dolphins Tursiops truncatus.
Nekolny, Samantha R; Denny, Matthew; Biedenbach, George; Howells, Elisabeth M; Mazzoil, Marilyn; Durden, Wendy N; Moreland, Lydia; David Lambert, J; Gibson, Quincy A
2017-12-01
Knowledge of an animal's home range is a crucial component in making informed management decisions. However, many home range studies are limited by study area size, and therefore may underestimate the size of the home range. In many cases, individuals have been shown to travel outside of the study area and utilize a larger area than estimated by the study design. In this study, data collected by multiple research groups studying bottlenose dolphins on the east coast of Florida were combined to determine how home range estimates increased with increasing study area size. Home range analyses utilized photo-identification data collected from 6 study areas throughout the St Johns River (SJR; Jacksonville, FL, USA) and adjacent waterways, extending a total of 253 km to the southern end of Mosquito Lagoon in the Indian River Lagoon Estuarine System. Univariate kernel density estimates (KDEs) were computed for individuals with 10 or more sightings (n = 20). Kernels were calculated for the primary study area (SJR) first, then additional kernels were calculated by combining the SJR and the next adjacent waterway; this continued in an additive fashion until all study areas were included. The 95% and 50% KDEs calculated for the SJR alone ranged from 21 to 35 km and 4 to 19 km, respectively. The 95% and 50% KDEs calculated for all combined study areas ranged from 116 to 217 km and 9 to 70 km, respectively. This study illustrates the degree to which home range may be underestimated by the use of limited study areas and demonstrates the benefits of conducting collaborative science.
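A sketch of a univariate kernel density range of the kind reported above: the 95% (or 50%) isopleth length computed from sighting positions measured along the waterway. The grid construction and threshold search are generic, not the authors' implementation:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_range_length(positions_km, level=0.95, n=2000):
    """Length (km) of the `level` kernel density range from univariate
    sighting positions measured along the waterway (river-km)."""
    positions_km = np.asarray(positions_km, float)
    kde = gaussian_kde(positions_km)
    pad = 3.0 * positions_km.std()
    grid = np.linspace(positions_km.min() - pad, positions_km.max() + pad, n)
    dens = kde(grid)
    cell = grid[1] - grid[0]
    order = np.sort(dens)[::-1]                 # densities from high to low
    cum = np.cumsum(order) * cell               # cumulative probability mass
    idx = min(np.searchsorted(cum, level), len(order) - 1)
    thresh = order[idx]                         # density at the level isopleth
    return np.sum(dens >= thresh) * cell

sightings = np.random.normal(40.0, 8.0, 25)    # synthetic river-km positions
print(kde_range_length(sightings, 0.95), kde_range_length(sightings, 0.50))
```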
Adaptive learning in complex reproducing kernel Hilbert spaces employing Wirtinger's subgradients.
Bouboulis, Pantelis; Slavakis, Konstantinos; Theodoridis, Sergios
2012-03-01
This paper presents a wide framework for non-linear online supervised learning tasks in the context of complex-valued signal processing. The (complex) input data are mapped into a complex reproducing kernel Hilbert space (RKHS), where the learning phase takes place. Both pure complex kernels and real kernels (via the complexification trick) can be employed. Moreover, any convex, continuous and not necessarily differentiable function can be used to measure the loss between the output of the specific system and the desired response. The only requirement is that the subgradient of the adopted loss function be available in an analytic form. In order to derive the subgradients analytically, the principles of the (recently developed) Wirtinger's calculus in complex RKHS are exploited. Furthermore, both linear and widely linear (in RKHS) estimation filters are considered. To cope with the problem of increasing memory requirements, which is present in almost all online schemes in RKHS, a sparsification scheme based on projection onto closed balls has been adopted. We demonstrate the effectiveness of the proposed framework in a non-linear channel identification task, a non-linear channel equalization problem, and a quadrature phase shift keying equalization scheme, using both circular and non-circular synthetic signal sources.
Fission Product Release and Survivability of UN-Kernel LWR TRISO Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besmann, Theodore M; Ferber, Mattison K; Lin, Hua-Tay
2014-01-01
A thermomechanical assessment of the LWR application of TRISO fuel with UN kernels was performed. Fission product release under operational and transient temperature conditions was determined by extrapolation from range calculations and limited data from irradiated UN pellets. Both fission recoil and diffusive release were considered, and internal particle pressures were computed for both 650 and 800 μm diameter kernels as a function of buffer layer thickness. These pressures were used in conjunction with a finite element program to compute the radial and tangential stresses generated within a TRISO particle as a function of fluence. Creep and swelling of the inner and outer pyrolytic carbon layers were included in the analyses. A measure of reliability of the TRISO particle was obtained by computing the probability of survival of the SiC barrier layer and the maximum tensile stress generated in the pyrolytic carbon layers as a function of fluence. These reliability estimates were obtained as functions of the kernel diameter, buffer layer thickness, and pyrolytic carbon layer thickness. The value of the probability of survival at the end of irradiation was inversely proportional to the maximum pressure.
Fission product release and survivability of UN-kernel LWR TRISO fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
T. M. Besmann; M. K. Ferber; H.-T. Lin
2014-05-01
A thermomechanical assessment of the LWR application of TRISO fuel with UN kernels was performed. Fission product release under operational and transient temperature conditions was determined by extrapolation from fission product recoil calculations and limited data from irradiated UN pellets. Both fission recoil and diffusive release were considered, and internal particle pressures were computed for both 650 and 800 μm diameter kernels as a function of buffer layer thickness. These pressures were used in conjunction with a finite element program to compute the radial and tangential stresses generated within a TRISO particle undergoing burnup. Creep and swelling of the inner and outer pyrolytic carbon layers were included in the analyses. A measure of reliability of the TRISO particle was obtained by computing the probability of survival of the SiC barrier layer and the maximum tensile stress generated in the pyrolytic carbon layers from internal pressure and thermomechanics of the layers. These reliability estimates were obtained as functions of the kernel diameter, buffer layer thickness, and pyrolytic carbon layer thickness. The value of the probability of survival at the end of irradiation was inversely proportional to the maximum pressure.
Tricoli, Ugo; Macdonald, Callum M; Durduran, Turgut; Da Silva, Anabela; Markel, Vadim A
2018-02-01
Diffuse correlation tomography (DCT) uses the electric-field temporal autocorrelation function to measure the mean-square displacement of light-scattering particles in a turbid medium over a given exposure time. The movement of blood particles is here estimated through a Brownian-motion-like model in contrast to ordered motion as in blood flow. The sensitivity kernel relating the measurable field correlation function to the mean-square displacement of the particles can be derived by applying a perturbative analysis to the correlation transport equation (CTE). We derive an analytical expression for the CTE sensitivity kernel in terms of the Green's function of the radiative transport equation, which describes the propagation of the intensity. We then evaluate the kernel numerically. The simulations demonstrate that, in the transport regime, the sensitivity kernel provides sharper spatial information about the medium as compared with the correlation diffusion approximation. Also, the use of the CTE allows one to explore some additional degrees of freedom in the data such as the collimation direction of sources and detectors. Our results can be used to improve the spatial resolution of DCT, in particular, with applications to blood flow imaging in regions where the Brownian motion is dominant.
NASA Astrophysics Data System (ADS)
Tricoli, Ugo; Macdonald, Callum M.; Durduran, Turgut; Da Silva, Anabela; Markel, Vadim A.
2018-02-01
Diffuse correlation tomography (DCT) uses the electric-field temporal autocorrelation function to measure the mean-square displacement of light-scattering particles in a turbid medium over a given exposure time. The movement of blood particles is here estimated through a Brownian-motion-like model in contrast to ordered motion as in blood flow. The sensitivity kernel relating the measurable field correlation function to the mean-square displacement of the particles can be derived by applying a perturbative analysis to the correlation transport equation (CTE). We derive an analytical expression for the CTE sensitivity kernel in terms of the Green's function of the radiative transport equation, which describes the propagation of the intensity. We then evaluate the kernel numerically. The simulations demonstrate that, in the transport regime, the sensitivity kernel provides sharper spatial information about the medium as compared with the correlation diffusion approximation. Also, the use of the CTE allows one to explore some additional degrees of freedom in the data such as the collimation direction of sources and detectors. Our results can be used to improve the spatial resolution of DCT, in particular, with applications to blood flow imaging in regions where the Brownian motion is dominant.
Scanning Apollo Flight Films and Reconstructing CSM Trajectories
NASA Astrophysics Data System (ADS)
Speyerer, E.; Robinson, M. S.; Grunsfeld, J. M.; Locke, S. D.; White, M.
2006-12-01
Over thirty years ago, the astronauts of the Apollo program made the journey from the Earth to the Moon and back. To record their historic voyages and collect scientific observations, many thousands of photographs were acquired with handheld and automated cameras. After returning to Earth, these films were developed and stored at the film archive at Johnson Space Center (JSC), where they still reside. Due to the historical significance of the original flight films, typically only duplicate (2nd or 3rd generation) film products are studied and used to make prints. To allow full access to the original flight films for both researchers and the general public, JSC and Arizona State University are scanning the films and creating an online digital archive. A Leica photogrammetric scanner is being used to ensure geometric and radiometric fidelity. The scanning resolution will preserve the grain of the film. Color frames are being scanned and archived as 48-bit pixels to ensure capture of the full dynamic range of the film (16-bit for BW). The raw scans will consist of 70 terabytes of data (10,000 BW Hasselblad, 10,000 color Hasselblad, 10,000 Metric, 4,500 Pan, and 620 35mm frames; counts are estimates). All the scanned films will be made available for download through a searchable database. Special tools are being developed to locate images based on various search parameters. To geolocate metric and panoramic frames acquired during Apollos 15-17, prototype SPICE kernels are being generated from existing photographic support data by entering state vectors and timestamps from multiple points throughout each orbit into the NAIF toolkit to create a type 9 Spacecraft and Planet Ephemeris Kernel (SPK), a nadir-pointing C-matrix Kernel (CK), and a Spacecraft Clock Kernel (SCLK). These SPICE kernels, in addition to the Instrument Kernel (IK) and Frames Kernel (FK) that are also under development, will be archived along with the scanned images. From the generated kernels, several IDL programs have been designed to display orbital tracks, produce footprint plots, and create image projections. Using the output from these SPICE-based programs enables accurate geolocation of SIM bay photography as well as providing potential data for lunar gravitational studies.
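Once such kernels exist, querying them is routine with NAIF's toolkit (shown here via the spiceypy Python wrapper); the kernel file names and the CSM NAIF ID below are hypothetical placeholders for the reconstructed products described above:

```python
import spiceypy as spice

# Hypothetical file names: the Apollo-era SPK/CK/SCLK products described in
# the text are reconstructions in progress, not standard NAIF archives.
for k in ("naif0012.tls",        # leap seconds (generic NAIF kernel)
          "pck00010.tpc",        # planetary orientation constants
          "apollo15_csm.bsp"):   # hypothetical type 9 SPK for the CSM
    spice.furnsh(k)

et = spice.str2et("1971-07-30T12:00:00")  # an epoch during the Apollo 15 mission
# Position of the CSM (hypothetical NAIF ID "-915") relative to the Moon,
# in the lunar body-fixed frame, with no aberration correction.
pos, lt = spice.spkpos("-915", et, "IAU_MOON", "NONE", "MOON")
print(pos)  # km; the basis for geolocating metric and panoramic frames
```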
Zalay, Osbert C; Serletis, Demitre; Carlen, Peter L; Bardakjian, Berj L
2010-06-01
Most forms of epilepsy are marked by seizure episodes that arise spontaneously. The low-magnesium/high-potassium (low-Mg(2+)/high-K(+)) experimental model of epilepsy is an acute model that produces spontaneous, recurring seizure-like events (SLEs). To elucidate the nature of spontaneous seizure transitions and their relationship to neuronal excitability, whole-cell recordings from the intact hippocampus were undertaken in vitro, and the response of hippocampal CA3 neurons to Gaussian white noise injection was obtained before and after treatment with various concentrations of low-Mg(2+)/high-K(+) solution. A second-order Volterra kernel model was estimated for each of the input-output response pairs. The spectral energy of the responses was also computed, providing a quantitative measure of neuronal excitability. Changes in duration and amplitude of the first-order kernel correlated positively with the spectral energy increase following treatment with low-Mg(2+)/high-K(+) solution, suggesting that variations in neuronal excitability are coded by the system kernels, in part by differences to the profile of the first-order kernel. In particular, kernel duration was more sensitive than amplitude to changes in spectral energy, and correlated more strongly with kernel area. An oscillator network model of the hippocampal CA3 was constructed to investigate the relationship of kernel duration to network excitability, and the model was able to generate spontaneous, recurrent SLEs by increasing the duration of a mode function analogous to the first-order kernel. Results from the model indicated that disruption to the dynamic balance of feedback was responsible for seizure-like transitions and the observed intermittency of SLEs. A physiological candidate for feedback imbalance consistent with the network model is the destabilizing interaction of extracellular potassium and paroxysmal neuronal activation. Altogether, these results (1) validate a mathematical model for epileptiform activity in the hippocampus by quantifying and subsequently correlating its behavior with an experimental, in vitro model of epilepsy; (2) elucidate a possible mechanism for epileptogenesis; and (3) pave the way for control studies in epilepsy utilizing the herein proposed experimental and mathematical setup.
NASA Astrophysics Data System (ADS)
Zalay, Osbert C.; Serletis, Demitre; Carlen, Peter L.; Bardakjian, Berj L.
2010-06-01
Most forms of epilepsy are marked by seizure episodes that arise spontaneously. The low-magnesium/high-potassium (low-Mg2+/high-K+) experimental model of epilepsy is an acute model that produces spontaneous, recurring seizure-like events (SLEs). To elucidate the nature of spontaneous seizure transitions and their relationship to neuronal excitability, whole-cell recordings from the intact hippocampus were undertaken in vitro, and the response of hippocampal CA3 neurons to Gaussian white noise injection was obtained before and after treatment with various concentrations of low-Mg2+/high-K+ solution. A second-order Volterra kernel model was estimated for each of the input-output response pairs. The spectral energy of the responses was also computed, providing a quantitative measure of neuronal excitability. Changes in duration and amplitude of the first-order kernel correlated positively with the spectral energy increase following treatment with low-Mg2+/high-K+ solution, suggesting that variations in neuronal excitability are coded by the system kernels, in part by differences to the profile of the first-order kernel. In particular, kernel duration was more sensitive than amplitude to changes in spectral energy, and correlated more strongly with kernel area. An oscillator network model of the hippocampal CA3 was constructed to investigate the relationship of kernel duration to network excitability, and the model was able to generate spontaneous, recurrent SLEs by increasing the duration of a mode function analogous to the first-order kernel. Results from the model indicated that disruption to the dynamic balance of feedback was responsible for seizure-like transitions and the observed intermittency of SLEs. A physiological candidate for feedback imbalance consistent with the network model is the destabilizing interaction of extracellular potassium and paroxysmal neuronal activation. Altogether, these results (1) validate a mathematical model for epileptiform activity in the hippocampus by quantifying and subsequently correlating its behavior with an experimental, in vitro model of epilepsy; (2) elucidate a possible mechanism for epileptogenesis; and (3) pave the way for control studies in epilepsy utilizing the herein proposed experimental and mathematical setup.
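The first-order kernel of a system driven by Gaussian white noise can be estimated by Lee-Schetzen cross-correlation; a discrete-time sketch, where the synthetic linear system stands in for the recorded neuronal responses:

```python
import numpy as np

def first_order_kernel(x, y, memory):
    """Lee-Schetzen cross-correlation estimate of the first-order kernel
    from Gaussian white noise input x and response y (discrete time):
    h1[tau] = E[y[t] x[t - tau]] / var(x)."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    n = len(x)
    h1 = np.array([np.dot(y[tau:], x[:n - tau]) / (n - tau)
                   for tau in range(memory)])
    return h1 / np.var(x)

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 10000)               # Gaussian white noise input
h_true = np.exp(-np.arange(20) / 5.0)     # a decaying 'true' kernel (synthetic)
y = np.convolve(x, h_true)[:len(x)] + rng.normal(0, 0.1, len(x))
h_est = first_order_kernel(x, y, memory=20)  # recovers h_true approximately
```

Changes in the duration and amplitude of such an estimated kernel are the quantities the study correlates with spectral energy.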
Estimation of Electrically-Evoked Knee Torque from Mechanomyography Using Support Vector Regression.
Ibitoye, Morufu Olusola; Hamzaid, Nur Azah; Abdul Wahab, Ahmad Khairi; Hasnan, Nazirah; Olatunji, Sunday Olusanya; Davis, Glen M
2016-07-19
The difficulty of real-time muscle force or joint torque estimation during neuromuscular electrical stimulation (NMES) in physical therapy and exercise science has motivated recent research interest in torque estimation from other muscle characteristics. This study investigated the accuracy of a computational intelligence technique for estimating NMES-evoked knee extension torque based on the mechanomyographic (MMG) signals of contracting muscles recorded from eight healthy males. Knee torque was modelled via Support Vector Regression (SVR) due to its good generalization ability in related fields. Inputs to the proposed model were MMG amplitude characteristics, the level of electrical stimulation or contraction intensity, and knee angle. A Gaussian kernel function and its optimal parameters were identified with the best performance measure and applied as the SVR kernel function to build an effective knee torque estimation model. To train and test the model, the data were partitioned into training (70%) and testing (30%) subsets, respectively. The SVR estimation accuracy, based on the coefficient of determination (R²) between the actual and the estimated torque values, was up to 94% and 89% during the training and testing cases, with root mean square errors (RMSE) of 9.48 and 12.95, respectively. The knee torque estimates obtained using SVR modelling agreed well with the experimental data from an isokinetic dynamometer. These findings support the realization of a closed-loop NMES system for functional tasks using MMG as the feedback signal source and an SVR algorithm for joint torque estimation.
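The modelling pipeline described above maps naturally onto scikit-learn; a sketch with synthetic stand-ins for the MMG features, stimulation intensity, and knee angle (the hyperparameters are assumptions, not the study's tuned values):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-ins for the study's inputs: MMG amplitude feature,
# stimulation intensity, and knee angle -> evoked torque.
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = 30 * X[:, 0] + 20 * X[:, 1] * np.sin(np.pi * X[:, 2]) + rng.normal(0, 1, 200)

# 70/30 split as in the study; Gaussian (RBF) kernel SVR.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)

pred = model.predict(X_te)
print("R^2:", r2_score(y_te, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)))
```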
Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing
2012-01-01
Segmentation of positron emission tomography (PET) images is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method requires prior estimation of parameters such as the number of clusters and appropriate initial values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial for registering disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnoses and for estimating standardized uptake values (SUVs) of regions of interest (ROIs) in PET images. Simulation studies using spherical targets are therefore conducted to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. The PET images of a rat brain are used to compare the segmented shape and area of the cerebral cortex obtained by the K-Means method and the proposed method using volume rendering. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than the K-Means method.
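One plausible reading of the GMM/KDE hybrid is to use a kernel density estimate of the intensity histogram to choose the number of mixture components, which removes the prior parameter estimation that K-Means needs; a sketch with generic libraries, not the authors' code:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelmax
from sklearn.mixture import GaussianMixture

def segment_intensities(voxels):
    """Estimate the number of intensity classes from the peaks of a kernel
    density estimate of the voxel histogram, then cluster with a GMM."""
    v = np.asarray(voxels, float).ravel()
    grid = np.linspace(v.min(), v.max(), 512)
    dens = gaussian_kde(v)(grid)                 # smooth intensity histogram
    n_classes = max(len(argrelmax(dens)[0]), 1)  # KDE peaks -> cluster count
    gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(v[:, None])
    return gmm.predict(v[:, None]), n_classes
```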
Designing a stable feedback control system for blind image deconvolution.
Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan
2018-05-01
Blind image deconvolution is one of the main low-level vision problems with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to an undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and the sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to keep the image restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors, so the kernel estimate used for image restoration becomes more precise. Experimental results show that our system is effective for image propagation and performs favorably against state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.
Preliminary CFD study of Pebble Size and its Effect on Heat Transfer in a Pebble Bed Reactor
NASA Astrophysics Data System (ADS)
Jones, Andrew; Enriquez, Christian; Spangler, Julian; Yee, Tein; Park, Jungkyu; Farfan, Eduardo
2017-11-01
In pebble bed reactors, the typical pebble diameter used is 6 cm, and within each pebble are thousands of nuclear fuel kernels. However, the efficiency of the reactor does not depend solely on the number of fuel kernels within each graphite sphere; it also depends on the type and motion of the coolant within the voids between the spheres and within the reactor itself. In this work, a physical analysis of the pebble bed nuclear reactor's fluid dynamics is undertaken using Computational Fluid Dynamics software. The primary goal of this work is to observe the relationship between different pebble diameters in an idealized alignment and the thermal transport efficiency of the reactor. The model constructed for our idealized arrangement will consist of stacked 8-pebble columns fixed at the inlet of the reactor. Two pebble sizes, 4 cm and 6 cm, will be studied; helium will be supplied as the coolant at a fixed flow rate of 96 kg/s, and fixed pebble surface temperatures will be used. Comparisons will then be made to evaluate the efficiency with which the coolant transports heat for the two pebble sizes.
Numerical method for solving the nonlinear four-point boundary value problems
NASA Astrophysics Data System (ADS)
Lin, Yingzhen; Lin, Jinnan
2010-12-01
In this paper, a new reproducing kernel space is skillfully constructed in order to solve a class of nonlinear four-point boundary value problems. The exact solution of the linear problem can be expressed in the form of a series, and the approximate solution of the nonlinear problem is given by an iterative formula. Compared with known investigations, the advantages of our method are that the representation of the exact solution is obtained in a new reproducing kernel Hilbert space and that the accuracy of the numerical computation is higher. We also present the convergence theorem, a complexity analysis, and an error estimate. The performance of the new method is illustrated with several numerical examples.
[Spatial analysis of road traffic accidents with fatalities in Spain, 2008-2011].
Gómez-Barroso, Diana; López-Cuadrado, Teresa; Llácer, Alicia; Palmera Suárez, Rocío; Fernández-Cuenca, Rafael
2015-09-01
To estimate the areas of greatest density of road traffic accidents with fatalities within 24 hours, per km² per year, in Spain from 2008 to 2011, using a geographic information system. Accidents were geocoded using the road and kilometer points where they occurred. The average nearest neighbor distance was calculated to detect possible clusters and to obtain the bandwidth for kernel density estimation. A total of 4775 accidents were analyzed, of which 73.3% occurred on conventional roads. The estimated average distance between accidents was 1,242 meters, and the average expected distance was 10,738 meters. The nearest neighbor index was 0.11, indicating that there were spatial aggregations of accidents. A kernel density map was obtained with a resolution of 1 km², which identified the areas of highest density. This methodology allowed a better approximation to the location of accident risk by taking kilometer points into account. The map shows the areas with a greater density of accidents, which could aid decision-making by the relevant authorities. Copyright © 2014 SESPAS. Published by Elsevier Espana. All rights reserved.
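The cluster diagnostic used above, the average nearest neighbour index, is straightforward to reproduce; a sketch assuming projected coordinates in km and a known study-area size:

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_index(points_xy, study_area_km2):
    """Average nearest neighbour index R = observed / expected mean
    nearest-neighbour distance under complete spatial randomness;
    R < 1 indicates clustering (as the reported R = 0.11 does).
    Coordinates in km, area in km^2."""
    pts = np.asarray(points_xy, dtype=float)
    tree = cKDTree(pts)
    d, _ = tree.query(pts, k=2)   # k=2 because the closest point is itself
    observed = d[:, 1].mean()
    expected = 0.5 / np.sqrt(len(pts) / study_area_km2)  # Clark-Evans expectation
    return observed / expected
```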
MRI-based intelligence quotient (IQ) estimation with sparse learning.
Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang
2015-01-01
In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model for jointly considering both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of a large dataset with consistent scanning protocols for IQ estimation, we integrate multiple datasets scanned at different sites with different scanning parameters and protocols. As a result, there is large variability across these datasets. To address this issue, we design a two-step procedure for 1) first identifying the possible scanning site for each testing subject and 2) then estimating the testing subject's IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method by using the MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and an average root mean square error of 8.695 between the true IQs and the estimated ones. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. All these results show the effectiveness of using imaging data for IQ prediction, which is rarely done in the field to our knowledge.
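The multi-kernel idea can be sketched with a fixed convex combination of precomputed kernels fed to an SVR; the data, kernel widths, and equal weights below are illustrative assumptions, not the paper's learned combination.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(1)
X = rng.normal(size=(164, 50))         # stand-in for imaging features
y = 100 + 15 * rng.normal(size=164)    # stand-in for IQ scores

# Two base kernels at different scales, combined with fixed equal weights.
K = 0.5 * rbf_kernel(X, X, gamma=0.01) + 0.5 * rbf_kernel(X, X, gamma=0.1)

svr = SVR(kernel="precomputed", C=10.0)
svr.fit(K, y)
print("training correlation:", np.corrcoef(svr.predict(K), y)[0, 1].round(3))
```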
Home range and survival of breeding painted buntings on Sapelo Island, Georgia
Springborn, E.G.; Meyers, J.M.
2005-01-01
The southeastern United States population of the painted bunting (Passerina ciris) decreased approximately 75% from 1966-1996 based on Breeding Bird Survey trends. Partners in Flight guidelines recommend painted bunting conservation as a high priority with a need for management by state and federal agencies. Basic information on home range and survival of breeding painted buntings will provide managers with required habitat types and estimates of land areas necessary to maintain minimum population sizes for this species. We radiotracked after-second-year male and after-hatching-year female buntings on Sapelo Island, Georgia, during the breeding seasons (late April-early August) of 1997 and 1998. We used the animal movement extension in ArcView to determine fixed-kernel home range in an unmanaged maritime shrub and a managed 60-80-year-old pine (Pinus spp.)-oak (Quercus spp.) forest. Using the Kaplan-Meier method, we estimated an adult breeding-season survival of 1.00 for males (n = 36) and 0.94 (SE = 0.18) for females (n = 27). Painted bunting home ranges were smaller in unmanaged maritime shrub (female: kernel mean = 3.5 ha [95% CI: 2.5-4.5]; male: kernel mean = 3.1 ha [95% CI: 2.3-3.9]) than in managed pine-oak forest (female: kernel mean = 4.7 ha [95% CI: 2.8-6.6]; male: kernel mean = 7.0 ha [95% CI: 4.9-9.1]). Buntings nesting in the managed pine-oak forest flew long distances (>= 300 m) to forage in salt marshes, freshwater wetlands, and moist forest clearings. In maritime shrub, buntings occupied a compact area and rarely moved long distances. The painted bunting population of Sapelo Island requires conservation of maritime shrub as potential optimum nesting habitat and management of nesting habitat in open-canopy pine-oak sawtimber forests by periodic prescribed fire (every 4-6 years) and timber thinning within a landscape that contains salt marsh or freshwater wetland openings within 700 m of those forests.
Suitability of point kernel dose calculation techniques in brachytherapy treatment planning
Lakshminarayanan, Thilagam; Subbaiah, K. V.; Thayalan, K.; Kannan, S. E.
2010-01-01
Brachytherapy treatment planning systems (TPS) are necessary to estimate the dose to the target volume and organs at risk (OAR). A TPS is always recommended to account for the effects of the tissue, applicator, and shielding material heterogeneities that exist in applicators. However, most brachytherapy TPS software packages estimate the absorbed dose at a point from the contributions of the individual sources and the source distribution alone, neglecting the dose perturbations arising from the applicator design and construction. This leaves some degree of uncertainty in dose rate estimates under realistic clinical conditions. In this regard, an attempt is made to explore the suitability of point kernels for brachytherapy dose rate calculations and to develop a new interactive brachytherapy package, named BrachyTPS, suited to clinical conditions. BrachyTPS is an interactive point-kernel code package developed to perform independent dose rate calculations that take these heterogeneities into account, using the two-region build-up factors proposed by Kalos. The primary aim of this study is to validate the developed point-kernel code package, integrated with treatment planning computational systems, against Monte Carlo (MC) results. In the present work, three brachytherapy applicators commonly used in the treatment of uterine cervical carcinoma, namely (i) the Board of Radiation Isotope and Technology (BRIT) low dose rate (LDR) applicator, (ii) the Fletcher Green type LDR applicator, and (iii) the Fletcher-Williamson high dose rate (HDR) applicator, are studied to test the accuracy of the software. Dose rates computed using the developed code are compared with the relevant results of the MC simulations. Further, attempts are also made to study the dose rate distribution around a commercially available shielded vaginal applicator set (Nucletron). The percentage deviations of BrachyTPS-computed dose rate values from the MC results are within ±5.5% for the BRIT LDR applicator, vary from 2.6 to 5.1% for the Fletcher Green type LDR applicator, and are up to −4.7% for the Fletcher-Williamson HDR applicator. The isodose distribution plots also show good agreement with results in the previous literature. The isodose distributions around the shielded vaginal cylinder computed using the BrachyTPS code show better agreement with MC results (less than 2% deviation) in the unshielded region than in the shielded region, where deviations of up to 5% are observed. The present study implies that accurate and fast validation of complicated treatment planning calculations is possible with the point-kernel code package. PMID:20589118
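At its core, the point-kernel method sums, over source points, an inverse-square term attenuated exponentially and corrected by a build-up factor. A minimal sketch under assumed values follows; the source strength, attenuation coefficient, and linear build-up coefficient are placeholders, not BrachyTPS parameters, and Kalos's two-region build-up factors are replaced here by a simple linear form.

```python
import numpy as np

def point_kernel_dose(r_cm, S=1.0, mu=0.11, a=0.9):
    """Dose rate from an isotropic point source at distance r (cm).

    S  : source strength times dose-rate constant (placeholder units, assumed)
    mu : linear attenuation coefficient of the medium (1/cm, assumed)
    a  : coefficient of a simple linear build-up factor B = 1 + a*mu*r
    """
    mur = mu * r_cm
    buildup = 1.0 + a * mur     # crude stand-in for a two-region build-up factor
    return S * buildup * np.exp(-mur) / (4.0 * np.pi * r_cm**2)

# Dose rate falls roughly as 1/r^2, softened by build-up, hardened by attenuation.
for r in (1.0, 2.0, 5.0):
    print(f"r = {r} cm -> dose rate ~ {point_kernel_dose(r):.4e}")
```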
Gluten-containing grains skew gluten assessment in oats due to sample grind non-homogeneity.
Fritz, Ronald D; Chen, Yumin; Contreras, Veronica
2017-02-01
Oats are easily contaminated with gluten-rich kernels of wheat, rye, and barley. These contaminants act like gluten 'pills', shown here to skew gluten analysis results. Using the R-Biopharm R5 ELISA, we quantified gluten in gluten-free oatmeal servings from an in-market survey. For samples with a 5-20 ppm reading on a first test, replicate analyses gave results ranging from <5 ppm to >160 ppm. This suggests that sample grinding may disperse gluten inadequately for a single accurate gluten assessment. To ascertain this, and to characterize the distribution of 0.25 g gluten test results for kernel-contaminated oats, we analyzed twelve 50 g samples of pure oats, each spiked with a wheat kernel; the 0.25 g test results followed log-normal-like distributions. With this, we estimate the probability of mis-assessment for a single measurement per sample relative to the <20 ppm regulatory threshold, and derive an equation relating the probability of mis-assessment to the sample-average gluten content. Copyright © 2016 Elsevier Ltd. All rights reserved.
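The mis-assessment probability relative to a 20 ppm threshold can be illustrated with a small Monte Carlo over a log-normal model of single 0.25 g test results; the distribution parameters below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)
THRESHOLD_PPM = 20.0

def p_misassessment(mean_ppm, sigma_log=1.0, n=100_000):
    """P(a single 0.25 g test lands on the wrong side of 20 ppm).

    mean_ppm  : true sample-average gluten content
    sigma_log : log-scale spread of single-test results (assumed)
    """
    mu_log = np.log(mean_ppm) - 0.5 * sigma_log**2   # so E[X] = mean_ppm
    tests = rng.lognormal(mu_log, sigma_log, n)
    if mean_ppm >= THRESHOLD_PPM:                    # truly violative sample
        return np.mean(tests < THRESHOLD_PPM)        # missed detections
    return np.mean(tests >= THRESHOLD_PPM)           # false alarms

for m in (5, 15, 20, 40):
    print(f"true mean {m:>2} ppm -> P(mis-assessment) ~ {p_misassessment(m):.2f}")
```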
A Kernel Embedding-Based Approach for Nonstationary Causal Model Inference.
Hu, Shoubo; Chen, Zhitang; Chan, Laiwan
2018-05-01
Although nonstationary data are more common in the real world, most existing causal discovery methods do not take nonstationarity into consideration. In this letter, we propose a kernel embedding-based approach, ENCI, for nonstationary causal model inference where data are collected from multiple domains with varying distributions. In ENCI, we transform the complicated relation of a cause-effect pair into a linear model of variables whose observations correspond to the kernel embeddings of the cause-and-effect distributions in different domains. In this way, we are able to estimate the causal direction by exploiting the causal asymmetry of the transformed linear model. Furthermore, we extend ENCI to causal graph discovery for multiple variables by transforming the relations among them into a linear non-Gaussian acyclic model. We show that by exploiting the nonstationarity of distributions, both cause-effect pairs and two kinds of causal graphs are identifiable under mild conditions. Experiments on synthetic and real-world data are conducted to justify the efficacy of ENCI over major existing methods.
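Kernel embeddings of distributions, the building block ENCI works with, are straightforward to estimate empirically. A minimal sketch computing the squared maximum mean discrepancy (MMD) between two samples from their kernel matrices (a standard construction, not ENCI's full cause-effect transformation; data and kernel width are illustrative):

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def mmd2(X, Y, gamma=0.5):
    """Biased estimate of squared MMD between samples X and Y:
    ||mean embedding of X - mean embedding of Y||^2 in the RKHS."""
    return (rbf_kernel(X, X, gamma=gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma=gamma).mean()
            + rbf_kernel(Y, Y, gamma=gamma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(300, 1))   # domain 1
Y = rng.normal(0.5, 1.5, size=(300, 1))   # domain 2, shifted distribution
print(f"MMD^2 between domains ~ {mmd2(X, Y):.4f}")
```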
Stable Local Volatility Calibration Using Kernel Splines
NASA Astrophysics Data System (ADS)
Coleman, Thomas F.; Li, Yuying; Wang, Cheng
2010-09-01
We propose an optimization formulation using the L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances calibration accuracy with model complexity. Motivated by support vector machine learning, the unknown local volatility function is represented by a spline-generating kernel function, and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMurray, Jacob W.; Lindemer, Terrence B.; Brown, Nicholas R.
Three important failure mechanisms must be controlled in high-temperature gas-cooled reactor (HTGR) fuel for certain higher-burnup applications: SiC layer rupture, SiC corrosion by CO, and coating compromise from kernel migration. All are related to high CO pressures stemming from free O generated when uranium present as UO2 fissions and the O is not subsequently bound by other elements. Furthermore, in the HTGR UCO kernel design, CO buildup from excess O is controlled by the inclusion of additional uranium in the form of a carbide, UCx. An approach for determining the minimum UCx content to ensure negligible CO formation was developed and demonstrated using CALPHAD models and the Serpent 2 reactor physics and depletion analysis tool. Our results are intended to be more accurate than previous estimates by including more nuclear and chemical factors, in particular the effect of transmutation products on the oxygen distribution as the fuel kernel composition evolves with burnup.
Takashima, Ryoichi; Takiguchi, Tetsuya; Ariki, Yasuo
2013-02-01
This paper presents a method for discriminating the location of the sound source (talker) using only a single microphone. In a previous work, the single-channel approach for discriminating the location of the sound source was discussed, where the acoustic transfer function from a user's position is estimated by using a hidden Markov model of clean speech in the cepstral domain. In this paper, each cepstral dimension of the acoustic transfer function is newly weighted, in order to obtain the cepstral dimensions having information that is useful for classifying the user's position. Then, this paper proposes a feature-weighting method for the cepstral parameter using multiple kernel learning, defining the base kernels for each cepstral dimension of the acoustic transfer function. The user's position is trained and classified by support vector machine. The effectiveness of this method has been confirmed by sound source (talker) localization experiments performed in different room environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, N.S.V.
The classical Nadaraya-Watson estimator is shown to solve a generic sensor fusion problem where the underlying sensor error densities are not known but a sample is available. By employing Haar kernels, this estimator is shown to yield finite-sample guarantees and also to be efficiently computable. Two simulation examples, and a robotics example involving the detection of a door using arrays of ultrasonic and infrared sensors, are presented to illustrate the performance.
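A minimal sketch of the Nadaraya-Watson estimator with a Haar (box) kernel, the combination this abstract highlights; the pooled sensor data below are synthetic placeholders, not the paper's robotics measurements.

```python
import numpy as np

def nadaraya_watson_haar(x_train, y_train, x_query, h=0.2):
    """Nadaraya-Watson regression with a Haar (box-car) kernel of half-width h:
    the estimate at x is the average of y over training points within h of x."""
    x_query = np.atleast_1d(x_query)
    weights = (np.abs(x_query[:, None] - x_train[None, :]) <= h).astype(float)
    denom = weights.sum(axis=1)
    denom[denom == 0] = np.nan          # no points in the window -> undefined
    return (weights @ y_train) / denom

# Synthetic fusion setting: noisy sensors observing the same quantity.
rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 400)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 400)   # pooled sensor sample
print(nadaraya_watson_haar(x, y, [0.25, 0.5, 0.75]).round(2))
```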
Propagation phenomena in monostable integro-differential equations: Acceleration or not?
NASA Astrophysics Data System (ADS)
Alfaro, Matthieu; Coville, Jérôme
2017-11-01
We consider the homogeneous integro-differential equation ∂t u = J ∗ u − u + f(u) with a monostable nonlinearity f. Our interest is twofold: we investigate the existence/nonexistence of travelling waves, and the propagation properties of the Cauchy problem. When the dispersion kernel J is exponentially bounded, travelling waves are known to exist and solutions of the Cauchy problem typically propagate at a constant speed [7,10,11,22,26,27]. On the other hand, when the dispersion kernel J has heavy tails and the nonlinearity f is nondegenerate, i.e. f'(0) > 0, travelling waves do not exist and solutions of the Cauchy problem propagate by accelerating [14,20,27]. For a general monostable nonlinearity, a dichotomy between these two types of propagation behaviour is still not known. The originality of our work is to provide such a dichotomy by studying the interplay between the tails of the dispersion kernel and the Allee effect induced by the degeneracy of f, i.e. f'(0) = 0. First, for algebraically decaying kernels, we prove the exact separation between existence and nonexistence of travelling waves. This in turn provides the exact separation between nonacceleration and acceleration in the Cauchy problem. In the latter case, we provide a first estimate of the position of the level sets of the solution.
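For intuition, the Cauchy problem ∂t u = J ∗ u − u + f(u) is easy to integrate numerically. A minimal explicit-Euler sketch with an exponentially bounded Gaussian kernel and the degenerate monostable nonlinearity f(u) = u²(1−u) (an illustrative choice with f'(0) = 0); boundary effects of the zero-padded convolution are ignored.

```python
import numpy as np

# Grid and a normalized dispersion kernel J (discretized, unit mass).
L, n = 100.0, 401
x = np.linspace(-L, L, n)          # odd n so the kernel is centred at x = 0
dx = x[1] - x[0]
J = np.exp(-x**2)                  # exponentially bounded kernel
J /= J.sum() * dx

f = lambda u: u**2 * (1.0 - u)     # monostable, degenerate at 0

u = (x < -80).astype(float)        # front-like initial datum
dt = 0.2
for _ in range(1500):
    conv = np.convolve(u, J, mode="same") * dx   # approximates J * u
    u = u + dt * (conv - u + f(u))

# Track the level set u = 1/2 to watch the front's position.
front = x[np.argmin(np.abs(u - 0.5))]
print(f"front position at t = {1500 * dt:.0f}: x ~ {front:.1f}")
```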
Building machine learning force fields for nanoclusters
NASA Astrophysics Data System (ADS)
Zeni, Claudio; Rossi, Kevin; Glielmo, Aldo; Fekete, Ádám; Gaston, Nicola; Baletto, Francesca; De Vita, Alessandro
2018-06-01
We assess Gaussian process (GP) regression as a technique to model interatomic forces in metal nanoclusters by analyzing the performance of 2-body, 3-body, and many-body kernel functions on a set of 19-atom Ni cluster structures. We find that 2-body GP kernels fail to provide faithful force estimates, despite succeeding in bulk Ni systems. However, both 3- and many-body kernels predict forces within a ~0.1 eV/Å average error even for small training datasets and achieve high accuracy even on out-of-sample, high temperature structures. While training and testing on the same structure always provide satisfactory accuracy, cross-testing on dissimilar structures leads to higher prediction errors, posing an extrapolation problem. This can be cured using heterogeneous training on databases that contain more than one structure, which results in a good trade-off between versatility and overall accuracy. Starting from a 3-body kernel trained this way, we build an efficient non-parametric 3-body force field that allows accurate prediction of structural properties at finite temperatures, following a newly developed scheme [A. Glielmo et al., Phys. Rev. B 95, 214302 (2017)]. We use this to assess the thermal stability of Ni19 nanoclusters at a fractional cost of full ab initio calculations.
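A minimal sketch of GP force regression with a generic RBF kernel via scikit-learn; the descriptors and synthetic "forces" below are placeholders, not the 2-, 3-, or many-body kernels of the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(200, 6))                  # stand-in atomic descriptors
F = np.sin(X).sum(axis=1) + rng.normal(0, 0.05, 200)   # stand-in force component

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3),
    normalize_y=True)
gp.fit(X[:150], F[:150])

pred, std = gp.predict(X[150:], return_std=True)
mae = np.abs(pred - F[150:]).mean()
print(f"test MAE ~ {mae:.3f} (cf. the ~0.1 eV/A error scale in the paper)")
```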
Analyzing Kernel Matrices for the Identification of Differentially Expressed Genes
Xia, Xiao-Lei; Xing, Huanlai; Liu, Xueqin
2013-01-01
One of the most important applications of microarray data is the class prediction of biological samples. For this purpose, statistical tests have often been applied to identify the differentially expressed genes (DEGs), followed by the employment of state-of-the-art learning machines, including the Support Vector Machines (SVM) in particular. The SVM is a typical sample-based classifier whose performance comes down to how discriminant the samples are. However, DEGs identified by statistical tests are not guaranteed to result in a training dataset composed of discriminant samples. To tackle this problem, a novel gene ranking method, namely Kernel Matrix Gene Selection (KMGS), is proposed. The rationale of the method, which roots in the fundamental ideas of the SVM algorithm, is described. The notion of "the separability of a sample", estimated by performing -like statistics on each column of the kernel matrix, is first introduced. The separability of a classification problem is then measured, from which the significance of a specific gene is deduced. Also described is a method of Kernel Matrix Sequential Forward Selection (KMSFS), which shares the KMGS method's essential ideas but proceeds in a greedy manner. On three public microarray datasets, our proposed algorithms achieved noticeably competitive performance in terms of the B.632+ error rate. PMID:24349110
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaise Collin
The Idaho National Laboratory (INL) PARFUME (particle fuel model) code was used to assess the overall fuel performance of uranium nitride (UN) tristructural isotropic (TRISO) ceramic fuel under irradiation conditions typical of a Light Water Reactor (LWR). The dimensional changes of the fuel particle layers and kernel were calculated, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burnup. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burnup. These material properties have large uncertainties at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, a large experimental effort would be needed to establish material properties, including kernel and PyC swelling rates, under these conditions before definitive conclusions can be drawn on the behavior of UN TRISO fuel in LWRs.
A Semi-supervised Heat Kernel Pagerank MBO Algorithm for Data Classification
2016-07-01
financial predictions, etc. and is finding growing use in text mining studies. In this paper, we present an efficient algorithm for classification of high...video data, set of images, hyperspectral data, medical data, text data, etc. Moreover, the framework provides a way to analyze data whose different...also be incorporated. For text classification, one can use tf-idf (term frequency-inverse document frequency) to form feature vectors for each document
Graviton 1-loop partition function for 3-dimensional massive gravity
NASA Astrophysics Data System (ADS)
Gaberdiel, Matthias R.; Grumiller, Daniel; Vassilevich, Dmitri
2010-11-01
The graviton 1-loop partition function in Euclidean topologically massive gravity (TMG) is calculated using heat kernel techniques. The partition function does not factorize holomorphically, and at the chiral point it has the structure expected from a logarithmic conformal field theory. This gives strong evidence for the proposal that the dual conformal field theory to TMG at the chiral point is indeed logarithmic. We also generalize our results to new massive gravity.
Kernel PLS Estimation of Single-trial Event-related Potentials
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.
2004-01-01
Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative to the estimation of single-trial ERPs and improvement of ERP averages.
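Of the comparison methods, smoothing splines are the simplest to reproduce. A hedged sketch denoising a synthetic ERP-like waveform with scipy; the waveform and smoothing factor are illustrative, and this shows the comparator, not the KPLS estimator itself.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(11)
t = np.linspace(0, 1, 500)                          # 1 s epoch
erp = 5 * np.exp(-((t - 0.3) / 0.05)**2) - 3 * np.exp(-((t - 0.5) / 0.08)**2)
trial = erp + rng.normal(0, 1.0, t.size)            # single trial: ERP + "EEG" noise

# Smoothing factor s trades fidelity for smoothness (chosen by eye here).
smooth = UnivariateSpline(t, trial, s=t.size * 0.8)
rmse = np.sqrt(np.mean((smooth(t) - erp)**2))
print(f"single-trial RMSE after spline smoothing ~ {rmse:.2f} (arbitrary units)")
```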
Numerical techniques in radiative heat transfer for general, scattering, plane-parallel media
NASA Technical Reports Server (NTRS)
Sharma, A.; Cogley, A. C.
1982-01-01
The study of radiative heat transfer with scattering usually leads to the solution of singular Fredholm integral equations. The present paper presents an accurate and efficient numerical method to solve certain integral equations that govern radiative equilibrium problems in plane-parallel geometry for both grey and nongrey, anisotropically scattering media. In particular, the nongrey problem is represented by a spectral integral of a system of nonlinear integral equations in space, which has not been solved previously. The numerical technique is constructed to handle this unique nongrey governing equation as well as the difficulties caused by singular kernels. Example problems are solved and the method's accuracy and computational speed are analyzed.
Multi-PSF fusion in image restoration of range-gated systems
NASA Astrophysics Data System (ADS)
Wang, Canjin; Sun, Tao; Wang, Tingfeng; Miao, Xikui; Wang, Rui
2018-07-01
For the task of image restoration, an accurate estimate of the degrading PSF/kernel is the premise of recovering a visually superior image. The imaging process of a range-gated imaging system in the atmosphere involves many factors, such as back scattering, background radiation, the diffraction limit, and the vibration of the platform. On one hand, owing to the difficulty of constructing models for all factors, kernels from physical-model based methods are not strictly accurate or practical. On the other hand, there are few strong edges in these images, which introduces significant errors into most image-feature-based methods. Since different methods focus on different formation factors of the kernel, their results often complement each other. Therefore, we propose an approach that combines a physical model with image features. With a fusion strategy using a GCRF (Gaussian Conditional Random Fields) framework, we obtain a final kernel that is closer to the actual one. To address the difficulty of obtaining ground-truth images, we then propose a semi data-driven fusion method in which different data sets are used to train the fusion parameters. Finally, a semi-blind restoration strategy based on the EM (Expectation Maximization) and RL (Richardson-Lucy) algorithms is proposed. Our method not only models how the laser propagates in the atmosphere and forms an image on the ICCD (Intensified CCD) plane, but also quantifies other unknown degradation factors using image-based methods, revealing how multiple kernel elements interact with each other. The experimental results demonstrate that our method achieves better performance than state-of-the-art restoration approaches.
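The RL step in the proposed semi-blind strategy is the classical Richardson-Lucy iteration, which is compact enough to sketch in 1D numpy under an assumed known kernel (the EM kernel-update half of the authors' algorithm is omitted).

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Classical Richardson-Lucy deconvolution (1D, 'same'-mode convolution):
    u <- u * ((d / (u*P)) correlated with P), preserving non-negativity."""
    psf_mirror = psf[::-1]
    u = np.full_like(observed, observed.mean())   # flat initial estimate
    for _ in range(n_iter):
        est = np.convolve(u, psf, mode="same")
        ratio = observed / np.maximum(est, 1e-12)
        u *= np.convolve(ratio, psf_mirror, mode="same")
    return u

rng = np.random.default_rng(5)
scene = np.zeros(200); scene[60] = 5.0; scene[130:140] = 2.0   # sharp scene
psf = np.exp(-np.linspace(-3, 3, 21)**2); psf /= psf.sum()
blurred = np.convolve(scene, psf, mode="same")
blurred = rng.poisson(blurred * 50) / 50.0                     # Poisson-noisy frame
restored = richardson_lucy(blurred, psf, n_iter=100)
print(f"peak before: {blurred.max():.2f}, after RL: {restored.max():.2f}")
```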
(EDMUNDS, WA) WILDLAND FIRE EMISSIONS MODELING: INTEGRATING BLUESKY AND SMOKE
This presentation is a status update on the BlueSky emissions modeling system. BlueSky-EM has been coupled with the Sparse Matrix Operational Kernel Emissions (SMOKE) system and is now available as a tool for estimating emissions from wildland fires.
Locally Based Kernel PLS Regression De-noising with Application to Event-Related Potentials
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Tino, Peter
2002-01-01
Our approach exploits the close relation between signal de-noising and regression problems concerned with estimating functions that reflect the dependency between a set of inputs and dependent outputs corrupted by some level of noise.
Mixed effects modelling for glass category estimation from glass refractive indices.
Lucy, David; Zadora, Grzegorz
2011-10-10
In total, 520 glass fragments were taken from 105 glass items. Each item was either a container, a window, or glass from an automobile; these three classes of use are defined as glass categories. Refractive indices were measured both before and after a programme of re-annealing. Because the refractive index of each fragment could not itself be observed before and after re-annealing, a model-based approach was used to estimate the change in refractive index for each glass category. It was found that less complex estimation methods were equivalent to the full model, and these were subsequently used. The change in refractive index was then used to calculate a measure of the evidential value for each item belonging to each glass category. The distributions of refractive index change were considered for each glass category, and it was found that, possibly due to small samples, members of the normal family would not adequately model the refractive index changes within two of the use types considered here. Two alternative approaches to modelling the change in refractive index were therefore used: one employed the more established kernel density estimates, the other a newer approach called log-concave estimation. Either method, when applied to the change in refractive index, gave good estimates of glass category; however, on all performance metrics kernel density estimates were found to be slightly better than log-concave estimates, although the log-concave estimates possessed properties with some qualitative appeal not encapsulated in the selected measures of performance. These results and the implications of these two methods of estimating probability densities for glass refractive indices are discussed. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
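A minimal sketch of the kernel-density route to category evidence: fit one density of refractive-index change per category, then compare likelihoods for a new item. The values are synthetic, the bandwidth is scipy's default, and the paper's log-concave alternative is not shown.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(9)
# Synthetic delta-RI (refractive index change on annealing) per category.
samples = {
    "container":  rng.normal(1e-4, 4e-5, 150),
    "window":     rng.normal(3e-4, 6e-5, 200),
    "automobile": rng.normal(5e-4, 8e-5, 170),
}
kdes = {cat: gaussian_kde(x) for cat, x in samples.items()}

delta_ri = 2.8e-4                       # measurement from a questioned fragment
likes = {cat: kde(delta_ri)[0] for cat, kde in kdes.items()}
total = sum(likes.values())
for cat, like in likes.items():         # likelihoods -> normalized support
    print(f"{cat:>10}: {like / total:.2f}")
```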
Kang, Youngok; Cho, Nahye; Son, Serin
2018-01-01
The purpose of this study is to analyze how the spatiotemporal characteristics of traffic accidents involving the elderly population in Seoul are changing by time period. We applied kernel density estimation and hotspot analyses to analyze the spatial characteristics of elderly people’s traffic accidents, and the space-time cube, emerging hotspot, and space-time kernel density estimation analyses to analyze the spatiotemporal characteristics. In addition, we analyzed elderly people’s traffic accidents by dividing cases into those in which the drivers were elderly people and those in which elderly people were victims of traffic accidents, and used the traffic accidents data in Seoul for 2013 for analysis. The main findings were as follows: (1) the hotspots for elderly people’s traffic accidents differed according to whether they were drivers or victims. (2) The hourly analysis showed that the hotspots for elderly drivers’ traffic accidents are in specific areas north of the Han River during the period from morning to afternoon, whereas the hotspots for elderly victims are distributed over a wide area from daytime to evening. (3) Monthly analysis showed that the hotspots are weak during winter and summer, whereas they are strong in the hiking and climbing areas in Seoul during spring and fall. Further, elderly victims’ hotspots are more sporadic than elderly drivers’ hotspots. (4) The analysis for the entire period of 2013 indicates that traffic accidents involving elderly people are increasing in specific areas on the north side of the Han River. We expect the results of this study to aid in reducing the number of traffic accidents involving elderly people in the future. PMID:29768453
Liu, Shelley H; Bobb, Jennifer F; Lee, Kyu Ha; Gennings, Chris; Claus Henn, Birgit; Bellinger, David; Austin, Christine; Schnaas, Lourdes; Tellez-Rojo, Martha M; Hu, Howard; Wright, Robert O; Arora, Manish; Coull, Brent A
2018-07-01
The impact of neurotoxic chemical mixtures on children's health is a critical public health concern. It is well known that during early life, toxic exposures may impact cognitive function during critical time intervals of increased vulnerability, known as windows of susceptibility. Knowledge of time windows of susceptibility can help inform treatment and prevention strategies, as chemical mixtures may affect a developmental process that is operating at a specific life phase. There are several statistical challenges in estimating the health effects of time-varying exposures to multi-pollutant mixtures, such as multi-collinearity among the exposures both within and across time points, and complex exposure-response relationships. To address these concerns, we develop a flexible statistical method, called lagged kernel machine regression (LKMR). LKMR identifies critical exposure windows of chemical mixtures, and accounts for complex non-linear and non-additive effects of the mixture at any given exposure window. Specifically, LKMR estimates how the effects of a mixture of exposures change with the exposure time window using a Bayesian formulation of a grouped, fused lasso penalty within a kernel machine regression (KMR) framework. A simulation study demonstrates the performance of LKMR under realistic exposure-response scenarios, and demonstrates large gains over approaches that consider each time window separately, particularly when serial correlation among the time-varying exposures is high. Furthermore, LKMR demonstrates gains over another approach that inputs all time-specific chemical concentrations together into a single KMR. We apply LKMR to estimate associations between neurodevelopment and metal mixtures in Early Life Exposures in Mexico and Neurotoxicology, a prospective cohort study of child health in Mexico City.
1-norm support vector novelty detection and its sparseness.
Zhang, Li; Zhou, WeiDa
2013-12-01
This paper proposes a 1-norm support vector novelty detection (SVND) method and discusses its sparseness. 1-norm SVND is formulated as a linear programming problem and uses two techniques for inducing sparseness: 1-norm regularization and the hinge loss function. We also find two upper bounds on the sparseness of 1-norm SVND: the exact support vector (ESV) bound and the kernel Gram matrix rank bound. The ESV bound indicates that 1-norm SVND has a sparser representation model than SVND. The kernel Gram matrix rank bound can loosely estimate the sparseness of 1-norm SVND. Experimental results show that 1-norm SVND is feasible and effective. Copyright © 2013 Elsevier Ltd. All rights reserved.
Javed, Faizan; Savkin, Andrey V; Chan, Gregory S H; Middleton, Paul M; Malouf, Philip; Steel, Elizabeth; Mackie, James; Lovell, Nigel H
2009-11-01
This study aims to assess the blood volume and heart rate (HR) responses during haemodialysis in fluid-overloaded patients by a nonparametric nonlinear regression approach based on a support vector machine (SVM). Relative blood volume (RBV) and the electrocardiogram (ECG) were recorded from 23 haemodynamically stable renal failure patients during regular haemodialysis. Modelling was performed on 18 fluid-overloaded patients (fluid removal of >2 L). SVM-based regression was used to obtain models of RBV change with time as well as of the percentage change in HR with respect to RBV. Mean squared error (MSE) and goodness of fit (R²) were used for comparison among different kernel functions. The design parameters were estimated using a grid search approach and the selected models were validated by a k-fold cross-validation technique. For the model of HR versus RBV change, a radial basis function (RBF) kernel (MSE = 17.37, R² = 0.932) gave the lowest MSE compared to the linear (MSE = 25.97, R² = 0.898) and polynomial (MSE = 18.18, R² = 0.929) kernels. The MSE was significantly lower for the training data set when using the RBF kernel compared to the other kernels (p < 0.01). The RBF kernel also provided a slightly better fit of RBV change with time (MSE = 1.12, R² = 0.91) compared to a linear kernel (MSE = 1.46, R² = 0.88). The modelled HR response was characterized by an initial drop and a subsequent rise during progressive reduction in RBV, which may be interpreted as the reflex response to a transition from central hypervolaemia to hypovolaemia. These modelled curves can be used as references for a controller designed to regulate haemodynamic variables and ensure the stability of patients undergoing haemodialysis.
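A minimal sketch of the kernel-comparison procedure: SVR with linear, polynomial, and RBF kernels, design parameters chosen by grid search and scored by k-fold cross-validated MSE. The RBV/HR data below are synthetic stand-ins, not patient recordings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(13)
rbv = np.sort(rng.uniform(-15, 0, 120))[:, None]    # % change in blood volume
hr = -0.5 * rbv[:, 0] + 0.03 * rbv[:, 0]**2 + rng.normal(0, 1.5, 120)

for kernel in ("linear", "poly", "rbf"):
    grid = {"C": [1, 10, 100], "epsilon": [0.1, 0.5]}
    if kernel == "rbf":
        grid["gamma"] = [0.01, 0.1, 1.0]
    search = GridSearchCV(SVR(kernel=kernel), grid, cv=5,
                          scoring="neg_mean_squared_error")
    search.fit(rbv, hr)
    print(f"{kernel:>6}: CV MSE ~ {-search.best_score_:.2f}, "
          f"best params {search.best_params_}")
```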
P- and S-wave Receiver Function Imaging with Scattering Kernels
NASA Astrophysics Data System (ADS)
Hansen, S. M.; Schmandt, B.
2017-12-01
Full waveform inversion provides a flexible approach to the seismic parameter estimation problem and can account for the full physics of wave propagation using numeric simulations. However, this approach requires significant computational resources due to the demanding nature of solving the forward and adjoint problems. This issue is particularly acute for temporary passive-source seismic experiments (e.g. PASSCAL) that have traditionally relied on teleseismic earthquakes as sources resulting in a global scale forward problem. Various approximation strategies have been proposed to reduce the computational burden such as hybrid methods that embed a heterogeneous regional scale model in a 1D global model. In this study, we focus specifically on the problem of scattered wave imaging (migration) using both P- and S-wave receiver function data. The proposed method relies on body-wave scattering kernels that are derived from the adjoint data sensitivity kernels which are typically used for full waveform inversion. The forward problem is approximated using ray theory yielding a computationally efficient imaging algorithm that can resolve dipping and discontinuous velocity interfaces in 3D. From the imaging perspective, this approach is closely related to elastic reverse time migration. An energy stable finite-difference method is used to simulate elastic wave propagation in a 2D hypothetical subduction zone model. The resulting synthetic P- and S-wave receiver function datasets are used to validate the imaging method. The kernel images are compared with those generated by the Generalized Radon Transform (GRT) and Common Conversion Point stacking (CCP) methods. These results demonstrate the potential of the kernel imaging approach to constrain lithospheric structure in complex geologic environments with sufficiently dense recordings of teleseismic data. This is demonstrated using a receiver function dataset from the Central California Seismic Experiment which shows several dipping interfaces related to the tectonic assembly of this region. Figure 1. Scattering kernel examples for three receiver function phases. A) direct P-to-s (Ps), B) direct S-to-p and C) free-surface PP-to-s (PPs).
NASA Astrophysics Data System (ADS)
Diego Azcona, Juan; Barbés, Benigno; Wang, Lilie; Burguete, Javier
2016-01-01
This paper presents a method to obtain the pencil-beam kernels that characterize a megavoltage photon beam generated in a flattening filter free (FFF) linear accelerator (linac) by deconvolution from experimental measurements at different depths. The formalism is applied to perform independent dose calculations in modulated fields. In our previous work a formalism was developed for ideal flat fluences exiting the linac's head. That framework could not deal with spatially varying energy fluences, so any deviation from the ideal flat fluence was treated as a perturbation. The present work addresses the necessity of implementing an exact analysis where any spatially varying fluence can be used, such as those encountered in FFF beams. A major improvement introduced here is to handle the actual fluence in the deconvolution procedure. We studied the uncertainties associated with the kernel derivation with this method. Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from two linacs from different vendors, at depths of 5, 10, 15, and 20 cm in polystyrene (RW3 water-equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The 3D kernel for a FFF beam was obtained by deconvolution using the Hankel transform. A correction on the low-dose part of the kernel was performed to accurately reproduce the experimental output factors. Error uncertainty in the kernel derivation procedure was estimated to be within 0.2%. Eighteen modulated fields used clinically in different treatment localizations were irradiated at four measurement depths (a total of fifty-four film measurements). Comparison through the gamma-index to their corresponding calculated absolute dose distributions showed a number of passing points (3%, 3 mm) mostly above 99%. This new procedure is more reliable and robust than the previous one. Its ability to perform accurate independent dose calculations was demonstrated.
MO-G-17A-05: PET Image Deblurring Using Adaptive Dictionary Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valiollahzadeh, S; Clark, J; Mawlawi, O
2014-06-15
Purpose: The aim of this work is to deblur PET images while suppressing Poisson noise effects using adaptive dictionary learning (DL) techniques. Methods: The model that relates a blurred and noisy PET image to the desired image is described as a linear transform y = Hm + n, where m is the desired image, H is a blur kernel, n is Poisson noise, and y is the blurred image. The approach we follow to recover m involves the sparse representation of y over a learned dictionary, since the image has many repeated patterns, edges, textures, and smooth regions. The recovery is based on an optimization of a cost function having four major terms: an adaptive dictionary learning term, a sparsity term, a regularization term, and an MLEM Poisson noise estimation term. The optimization is solved by a variable splitting method that introduces additional variables. We simulated a 128×128 Hoffman brain PET image (baseline) with varying kernel types and sizes (Gaussian 9×9, σ = 5.4 mm; uniform 5×5, σ = 2.9 mm) with additive Poisson noise (blurred). Image recovery was performed once with the kernel type included in the model optimization and once with the model blinded to kernel type. The recovered image was compared to the baseline as well as to another recovery algorithm, PIDSPLIT+ (Setzer et al.), by calculating the PSNR (peak SNR) and the normalized average differences in pixel intensities (NADPI) of line profiles across the images. Results: For known kernel types, the PSNR of the Gaussian (uniform) kernel was 28.73 (25.1) for DL and 25.18 (23.4) for PIDSPLIT+. For blinded deblurring the PSNRs were 25.32 for DL and 22.86 for PIDSPLIT+. NADPI between baseline and DL, and between baseline and blurred, for the Gaussian kernel was 2.5 and 10.8, respectively. Conclusion: PET image deblurring using dictionary learning appears to be a good approach to restoring image resolution in the presence of Poisson noise. GE Health Care.
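The dictionary-learning ingredient can be sketched with scikit-learn on image patches. This shows sparse coding of patches only, not the four-term cost function or the variable-splitting solver; the image and sizes are placeholders.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(17)
image = rng.poisson(5.0, size=(64, 64)).astype(float)   # stand-in noisy PET slice

patches = extract_patches_2d(image, (8, 8), max_patches=2000, random_state=0)
patches = patches.reshape(len(patches), -1)
patches -= patches.mean(axis=1, keepdims=True)          # remove DC per patch

dl = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
codes = dl.fit_transform(patches)                       # sparse codes over atoms
sparsity = np.mean(codes != 0)
print(f"learned {dl.components_.shape[0]} atoms; "
      f"fraction of nonzero coefficients ~ {sparsity:.2f}")
```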
Radiative Heat Transfer in Finite Cylindrical Enclosures with Nonhomogeneous Participating Media
NASA Technical Reports Server (NTRS)
Hsu, Pei-Feng; Ku, Jerry C.
1994-01-01
Results of a numerical solution for radiative heat transfer in homogeneous and nonhomogeneous participating media are presented. The geometry of interest is a finite axisymmetric cylindrical enclosure. The integral formulation for radiative transport is solved by the YIX method. A three-dimensional solution scheme is applied to the two-dimensional axisymmetric geometry to simplify kernel calculations and to avoid difficulties associated with treating boundary conditions. As part of the effort to improve modeling capabilities for turbulent jet diffusion flames, predicted distributions for flame temperature and soot volume fraction are used to calculate radiative heat transfer from soot particles in such flames. It is shown that the nonhomogeneity of the radiative property has very significant effects. The peak value of the divergence of the radiative heat flux can be underestimated by a factor of 7 if a mean homogeneous radiative property is used. Since recent studies have shown that scattering by soot agglomerates is significant in flames, the effect of the magnitude of scattering is also investigated and found to be non-negligible.
Segmentation of the Speaker's Face Region with Audiovisual Correlation
NASA Astrophysics Data System (ADS)
Liu, Yuyu; Sato, Yoichi
The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against the changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on the probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to resolve a globally optimum extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background by expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
A boosted optimal linear learner for retinal vessel segmentation
NASA Astrophysics Data System (ADS)
Poletti, E.; Grisan, E.
2014-03-01
Ocular fundus images provide important information about retinal degeneration, which may be related to acute pathologies or to early signs of systemic diseases. An automatic and quantitative assessment of vessel morphological features, such as diameters and tortuosity, can improve clinical diagnosis and the evaluation of retinopathy. At variance with available methods, we propose a data-driven approach in which the system learns a set of optimal discriminative convolution kernels (a linear learner). The set is progressively built based on an AdaBoost sample-weighting scheme, providing seamless integration between linear learner estimation and classification. In order to capture the changes in vessel appearance at different scales, the kernels are estimated on a pyramidal decomposition of the training samples. The set is employed as a rotating bank of matched filters, whose response is used by the boosted linear classifier to classify each image pixel into the two classes of interest (vessel/background). We tested the approach on fundus images available from the DRIVE dataset. We show that the segmentation performance yields an accuracy of 0.94.
A linear recurrent kernel online learning algorithm with sparse updates.
Fan, Haijin; Song, Qing
2014-02-01
In this paper, we propose a recurrent kernel algorithm with selectively sparse updates for online learning. The algorithm introduces a linear recurrent term in the estimation of the current output. This makes the past information reusable for updating of the algorithm in the form of a recurrent gradient term. To ensure that the reuse of this recurrent gradient indeed accelerates the convergence speed, a novel hybrid recurrent training is proposed to switch on or off learning the recurrent information according to the magnitude of the current training error. Furthermore, the algorithm includes a data-dependent adaptive learning rate which can provide guaranteed system weight convergence at each training iteration. The learning rate is set as zero when the training violates the derived convergence conditions, which makes the algorithm updating process sparse. Theoretical analyses of the weight convergence are presented and experimental results show the good performance of the proposed algorithm in terms of convergence speed and estimation accuracy. Copyright © 2013 Elsevier Ltd. All rights reserved.
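For contrast, the non-recurrent baseline that this algorithm extends (standard kernel least-mean-squares) fits in a few lines; the Gaussian kernel width and learning rate below are illustrative choices, not the paper's settings.

```python
import numpy as np

def gauss(a, b, width=1.0):
    return np.exp(-np.sum((a - b)**2) / (2 * width**2))

class KernelLMS:
    """Standard (non-recurrent) kernel LMS: f(x) = sum_i alpha_i k(x_i, x);
    each new sample appends one centre with coefficient eta * error."""
    def __init__(self, eta=0.2):
        self.eta, self.centres, self.alphas = eta, [], []

    def predict(self, x):
        return sum(a * gauss(c, x) for c, a in zip(self.centres, self.alphas))

    def update(self, x, y):
        err = y - self.predict(x)
        self.centres.append(x)
        self.alphas.append(self.eta * err)
        return err

rng = np.random.default_rng(19)
model, errs = KernelLMS(), []
for _ in range(300):                 # online nonlinear system identification
    x = rng.uniform(-2, 2, size=2)
    y = np.sin(x[0]) * x[1] + rng.normal(0, 0.05)
    errs.append(model.update(x, y)**2)
print(f"mean squared error, last 50 steps ~ {np.mean(errs[-50:]):.3f}")
```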
Villa-Rojas, Rossana; Tang, Juming; Wang, Shaojin; Gao, Mengxiang; Kang, Dong-Hyun; Mah, Jae-Hyung; Gray, Peter; Sosa-Morales, Maria Elena; López-Malo, Aurelio
2013-01-01
Salmonellosis outbreaks related to the consumption of raw almonds have encouraged the scientific community to study the inactivation kinetics of pathogens in this dry commodity. However, the low moisture content of the product presents a challenge for thermal control, because the time required to achieve the desired thermal inactivation of microorganisms increases sharply with reduced moisture content and water activity. In this study, we explored and modeled the heat inactivation of Salmonella enterica serovar Enteritidis PT 30 in almond cultivar 'Nonpareil' kernel flour at four water activity (a_w) values (0.601, 0.720, 0.888, and 0.946) using four temperatures for each a_w. The results showed that the inactivation was well fitted by both the Weibull distribution (R² = 0.93 to 1.00) and first-order kinetics (R² = 0.82 to 0.96). At higher a_w values, the rate of inactivation increased and less time was needed to achieve the required population reduction. These results suggest that, to avoid deterioration of product quality, shorter process times at lower temperatures may be used to achieve desired inactivation levels of Salmonella Enteritidis PT 30 by simply increasing the moisture content of almonds. These goals could be achieved with existing procedures already practiced by the food industry, such as washing or prewetting scalding before heat inactivation.
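The Weibull model referenced here expresses survivors as log10(N/N0) = −(t/δ)^p. A minimal curve-fit sketch with synthetic survival data; δ, p, and the data points are placeholders, not the paper's fitted parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survivors(t, delta, p):
    """Weibull inactivation model: log10(N/N0) = -(t/delta)**p."""
    return -(t / delta)**p

# Synthetic survival data at one temperature / water activity (illustrative).
t = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 16.0])             # minutes
log_surv = np.array([0.0, -0.9, -1.6, -2.9, -3.8, -4.9])   # log10(N/N0)

(delta, p), _ = curve_fit(weibull_log_survivors, t, log_surv, p0=(5.0, 1.0))
resid = log_surv - weibull_log_survivors(t, delta, p)
r2 = 1 - resid.var() / log_surv.var()
print(f"delta ~ {delta:.2f} min, p ~ {p:.2f}, R^2 ~ {r2:.3f}")
```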
Hanft, Jonathan M.; Jones, Robert J.
1986-01-01
Kernels cultured in vitro were induced to abort by high temperature (35°C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35°C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth. PMID:16664846
Out-of-Sample Extensions for Non-Parametric Kernel Methods.
Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang
2017-02-01
Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper-reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.
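A standard point of comparison for out-of-sample extension (not the paper's hyper-RKHS regression) is the Nyström method: approximate the embedding of a new point from its kernel values against the training set and the eigendecomposition of the training kernel matrix. A minimal sketch with illustrative data:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(23)
X_train = rng.normal(size=(100, 5))
X_new = rng.normal(size=(3, 5))

K = rbf_kernel(X_train, X_train, gamma=0.2)     # in-sample kernel matrix
evals, evecs = np.linalg.eigh(K)
evals, evecs = evals[::-1], evecs[:, ::-1]      # sort eigenpairs descending
r = 10                                          # rank of the extension

# In-sample embedding: Phi = V * sqrt(Lambda); out-of-sample via Nystrom:
# phi(x) = Lambda^{-1/2} V^T k(x, X_train)
phi_train = evecs[:, :r] * np.sqrt(evals[:r])
k_new = rbf_kernel(X_new, X_train, gamma=0.2)
phi_new = k_new @ evecs[:, :r] / np.sqrt(evals[:r])

# Sanity check: reconstructed kernel values for the new points.
approx = phi_new @ phi_train.T
print("max |K_approx - K_exact| on new points:",
      np.abs(approx - k_new).max().round(4))
```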
7 CFR 810.1202 - Definition of other terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
... kernels. Kernels, pieces of rye kernels, and other grains that are badly ground-damaged, badly weather.... Also, underdeveloped, shriveled, and small pieces of rye kernels removed in properly separating the...-damaged kernels. Kernels, pieces of rye kernels, and other grains that are materially discolored and...
Chen, Jiafa; Zhang, Luyan; Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang
2016-01-01
Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width, and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components is discussed.
Mapping Fire Severity Using Imaging Spectroscopy and Kernel Based Image Analysis
NASA Astrophysics Data System (ADS)
Prasad, S.; Cui, M.; Zhang, Y.; Veraverbeke, S.
2014-12-01
Improved spatial representation of within-burn heterogeneity after wildfires is paramount to effective land management decisions and more accurate fire emissions estimates. In this work, we demonstrate the feasibility and efficacy of airborne imaging spectroscopy (hyperspectral imagery) for quantifying wildfire burn severity, using kernel based image analysis techniques. Two airborne hyperspectral datasets, acquired over the 2011 Canyon and 2013 Rim fires in California using the Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) sensor, were used in this study. The Rim Fire, covering parts of Yosemite National Park, started on August 17, 2013, and was the third largest fire in California's history. The Canyon Fire occurred in the Tehachapi Mountains and started on September 4, 2011. In addition to post-fire data for both fires, half of the Rim fire was also covered with pre-fire images. Fire severity was measured in the field using the Geo Composite Burn Index (GeoCBI). The field data were used to train and validate our models; the trained models, in conjunction with the imaging spectroscopy data, were then used for GeoCBI estimation across wide geographical regions. This work presents an approach for using remotely sensed imagery combined with GeoCBI field data to map fire scars based on a non-linear (kernel based) epsilon-Support Vector Regression (e-SVR), which was used to learn the relationship between spectra and GeoCBI in a kernel-induced feature space. Classification of healthy vegetation versus fire-affected areas based on morphological multi-attribute profiles was also studied. The availability of pre- and post-fire imaging spectroscopy data over the Rim Fire provided a unique opportunity to evaluate the performance of bi-temporal imaging spectroscopy for assessing post-fire effects. This type of data is currently constrained by limited airborne acquisitions before a fire, but will become widespread with future spaceborne sensors such as those on the planned NASA HyspIRI mission.
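The e-SVR step is standard; a minimal sketch with scikit-learn follows, using synthetic stand-ins for the AVIRIS spectra and GeoCBI plot values (array names and hyperparameters are illustrative, not those of the study).

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 50))      # stand-in: 200 field plots x 50 spectral bands
y = 3.0 * rng.random(200)      # stand-in: GeoCBI values, roughly 0..3

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
svr = SVR(kernel="rbf", C=10.0, epsilon=0.1)   # RBF-kernel epsilon-SVR
svr.fit(X_tr, y_tr)
print("held-out R^2:", svr.score(X_te, y_te))
```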
Gamado, Kokouvi; Marion, Glenn; Porphyre, Thibaud
2017-01-01
Livestock epidemics have the potential to give rise to significant economic, welfare, and social costs. Incursions of emerging and re-emerging pathogens may lead to small and repeated outbreaks. Analysis of the resulting data is statistically challenging but can inform disease preparedness reducing potential future losses. We present a framework for spatial risk assessment of disease incursions based on data from small localized historic outbreaks. We focus on between-farm spread of livestock pathogens and illustrate our methods by application to data on the small outbreak of Classical Swine Fever (CSF) that occurred in 2000 in East Anglia, UK. We apply models based on continuous time semi-Markov processes, using data-augmentation Markov Chain Monte Carlo techniques within a Bayesian framework to infer disease dynamics and detection from incompletely observed outbreaks. The spatial transmission kernel describing pathogen spread between farms, and the distribution of times between infection and detection, is estimated alongside unobserved exposure times. Our results demonstrate inference is reliable even for relatively small outbreaks when the data-generating model is known. However, associated risk assessments depend strongly on the form of the fitted transmission kernel. Therefore, for real applications, methods are needed to select the most appropriate model in light of the data. We assess standard Deviance Information Criteria (DIC) model selection tools and recently introduced latent residual methods of model assessment, in selecting the functional form of the spatial transmission kernel. These methods are applied to the CSF data, and tested in simulated scenarios which represent field data, but assume the data generation mechanism is known. Analysis of simulated scenarios shows that latent residual methods enable reliable selection of the transmission kernel even for small outbreaks whereas the DIC is less reliable. Moreover, compared with DIC, model choice based on latent residual assessment correlated better with predicted risk. PMID:28293559
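For concreteness, two commonly used functional forms for a distance-based transmission kernel are sketched below; the paper's point is precisely that the choice between such forms matters for risk assessment. The parameter values here are illustrative only.

```python
import numpy as np

def kernel_exponential(d, k0=1.0, d0=2.0):
    """Exponentially bounded kernel: transmission rate vs distance (km)."""
    return k0 * np.exp(-d / d0)

def kernel_power_law(d, k0=1.0, d0=2.0, alpha=2.0):
    """Fat-tailed power-law kernel; decays more slowly at long range."""
    return k0 / (1.0 + (d / d0) ** alpha)

d = np.linspace(0.0, 20.0, 5)
print(kernel_exponential(d))   # how quickly risk falls off under each form
print(kernel_power_law(d))
```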
Alvarez-Meza, Andres M.; Orozco-Gutierrez, Alvaro; Castellanos-Dominguez, German
2017-01-01
We introduce Enhanced Kernel-based Relevance Analysis (EKRA), which aims to support the automatic identification of brain activity patterns using electroencephalographic recordings. EKRA is a data-driven strategy that incorporates two kernel functions to take advantage of the available joint information, associating neural responses with a given stimulus condition. To this end, a Centered Kernel Alignment functional is adjusted to learn the linear projection that best discriminates the input feature set, optimizing the required free parameters automatically. Our approach is carried out in two scenarios: (i) feature selection, computing a relevance vector from the extracted neural features to facilitate the physiological interpretation of a given brain activity task, and (ii) enhanced feature selection, performing an additional transformation of the relevant features to improve overall identification accuracy. Accordingly, we provide an alternative feature relevance analysis strategy that improves system performance while favoring data interpretability. For validation, EKRA is tested on two well-known brain activity tasks: motor imagery discrimination and epileptic seizure detection. The obtained results show that EKRA estimates a relevant representation space from the provided supervised information, emphasizing the salient input features. As a result, our proposal outperforms state-of-the-art methods in brain activity discrimination accuracy, with the benefit of enhanced physiological interpretation of the task at hand. PMID:29056897
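The Centered Kernel Alignment functional at the core of EKRA has a standard definition, sketched below; the learned projection and the automatic parameter optimization of EKRA itself are not reproduced here.

```python
import numpy as np

def centered_kernel_alignment(K1, K2):
    """Centered kernel alignment between two Gram matrices:
    <K1c, K2c>_F / (||K1c||_F * ||K2c||_F), a similarity in [-1, 1]."""
    n = K1.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    K1c, K2c = H @ K1 @ H, H @ K2 @ H
    return np.sum(K1c * K2c) / (np.linalg.norm(K1c) * np.linalg.norm(K2c))
```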
Modeling and analysis of UN TRISO fuel for LWR application using the PARFUME code
NASA Astrophysics Data System (ADS)
Collin, Blaise P.
2014-08-01
The Idaho National Laboratory (INL) PARFUME (PARticle FUel ModEl) code was used to assess the overall fuel performance of uranium nitride (UN) tristructural isotropic (TRISO) ceramic fuel under irradiation conditions typical of a Light Water Reactor (LWR). The dimensional changes of the fuel particle layers and kernel were calculated, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burn-up. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burn-up. These material properties have large uncertainties at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, a large experimental effort would be needed to establish material properties, including kernel and PyC swelling rates, under these conditions before definitive conclusions can be drawn on the behavior of UN TRISO fuel in LWRs.
ERIC Educational Resources Information Center
Badger, Elizabeth
1992-01-01
Explains a set of processes that teachers might use to structure their evaluation of students' learning and understanding. Illustrates the processes of setting goals, deciding what to assess, gathering information, and using the results through a measurement task requiring students to estimate the number of popcorn kernels in a container. (MDH)
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
Fried, Itzhak; Koch, Christof
2014-01-01
Peristimulus time histograms are a widespread form of visualizing neuronal responses. Kernel convolution methods transform these histograms into a smooth, continuous probability density function, providing an improved estimate of a neuron's actual response envelope. Here we develop a classifier, called the h-coefficient, to determine whether time-locked fluctuations in the firing rate of a neuron should be classified as a response or as random noise. Unlike previous approaches, the h-coefficient takes advantage of the more precise response envelope estimation provided by the kernel convolution method. The h-coefficient quantizes the smoothed response envelope and calculates the probability of a response of a given shape occurring by chance. We tested the efficacy of the h-coefficient in a large data set of Monte Carlo simulated smoothed peristimulus time histograms with varying response amplitudes, response durations, trial numbers, and baseline firing rates. Across all these conditions, the h-coefficient significantly outperformed more classical classifiers, with a mean false alarm rate of 0.004 and a mean hit rate of 0.494. We also tested the h-coefficient's performance on a set of neuronal responses recorded in humans. The algorithm behind the h-coefficient provides various opportunities for further adaptation and the flexibility to target specific parameters in a given data set. Our findings confirm that the h-coefficient can provide a conservative and powerful tool for the analysis of peristimulus time histograms, with great potential for future development. PMID:25475352
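The h-coefficient itself is not reproduced here, but the kernel-convolution step it builds on is simple: convolving spike times with a Gaussian kernel turns a histogram into a smooth rate estimate. A minimal sketch (the bandwidth `sigma` is an illustrative choice):

```python
import numpy as np

def smoothed_rate(spike_times, t_grid, sigma=0.01, n_trials=1):
    """Convolve spike times with a Gaussian kernel to obtain a smooth
    firing-rate estimate (spikes/s); sigma is the bandwidth in seconds.
    Divide by the number of trials to get a per-trial rate."""
    diffs = t_grid[:, None] - np.asarray(spike_times)[None, :]
    k = np.exp(-0.5 * (diffs / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return k.sum(axis=1) / n_trials

t = np.linspace(0.0, 1.0, 1000)
rate = smoothed_rate([0.20, 0.21, 0.25, 0.60], t)
```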
Kan, Hirohito; Arai, Nobuyuki; Takizawa, Masahiro; Omori, Kazuyoshi; Kasai, Harumasa; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta
2018-06-11
We developed a non-regularized, variable kernel, sophisticated harmonic artifact reduction for phase data (NR-VSHARP) method to accurately estimate local tissue fields without regularization for quantitative susceptibility mapping (QSM). We then used a digital brain phantom to evaluate the accuracy of the NR-VSHARP method, and compared it with the VSHARP and iterative spherical mean value (iSMV) methods through in vivo human brain experiments. Our proposed NR-VSHARP method, which uses variable spherical mean value (SMV) kernels, minimizes L2 norms only within the volume of interest to reduce phase errors and save cortical information without regularization. In a numerical phantom study, relative local field and susceptibility map errors were determined using NR-VSHARP, VSHARP, and iSMV. Additionally, various background field elimination methods were used to image the human brain. In a numerical phantom study, the use of NR-VSHARP considerably reduced the relative local field and susceptibility map errors throughout a digital whole brain phantom, compared with VSHARP and iSMV. In the in vivo experiment, the NR-VSHARP-estimated local field could sufficiently achieve minimal boundary losses and phase error suppression throughout the brain. Moreover, the susceptibility map generated using NR-VSHARP minimized the occurrence of streaking artifacts caused by insufficient background field removal. Our proposed NR-VSHARP method yields minimal boundary losses and highly precise phase data. Our results suggest that this technique may facilitate high-quality QSM. Copyright © 2017. Published by Elsevier Inc.
A method for decoding the neurophysiological spike-response transform
Stern, Estee; García-Crescioni, Keyla; Miller, Mark W.; Peskin, Charles S.; Brezina, Vladimir
2009-01-01
Many physiological responses elicited by neuronal spikes—intracellular calcium transients, synaptic potentials, muscle contractions—are built up of discrete, elementary responses to each spike. However, the spikes occur in trains of arbitrary temporal complexity, and each elementary response not only sums with previous ones, but can itself be modified by the previous history of the activity. A basic goal in system identification is to characterize the spike-response transform in terms of a small number of functions—the elementary response kernel and additional kernels or functions that describe the dependence on previous history—that will predict the response to any arbitrary spike train. Here we do this by developing further and generalizing the “synaptic decoding” approach of Sen et al. (J Neurosci 16:6307-6318, 1996). Given the spike times in a train and the observed overall response, we use least-squares minimization to construct the best estimated response and at the same time best estimates of the elementary response kernel and the other functions that characterize the spike-response transform. We avoid the need for any specific initial assumptions about these functions by using techniques of mathematical analysis and linear algebra that allow us to solve simultaneously for all of the numerical function values treated as independent parameters. The functions are such that they may be interpreted mechanistically. We examine the performance of the method as applied to synthetic data. We then use the method to decode real synaptic and muscle contraction transforms. PMID:19695289
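A heavily simplified sketch of the least-squares idea: if the response were a pure linear sum of identical elementary responses, the kernel could be recovered by regressing the response on lagged copies of the spike train. The history-dependence functions of the full method are omitted, and all names are illustrative.

```python
import numpy as np

def estimate_elementary_kernel(spikes, response, L):
    """Least-squares estimate of an elementary response kernel under a
    purely linear (summating) model: response[t] = sum_j k[j]*spikes[t-j]."""
    T = len(spikes)
    X = np.zeros((T, L))
    for j in range(L):
        X[j:, j] = spikes[: T - j]     # column j = spike train delayed by j
    k, *_ = np.linalg.lstsq(X, response, rcond=None)
    return k
```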
Surface engineering of low enriched uranium-molybdenum
NASA Astrophysics Data System (ADS)
Leenaers, A.; Van den Berghe, S.; Detavernier, C.
2013-09-01
Recent attempts to qualify the LEU(Mo) dispersion plate fuel with Si added to the Al matrix up to high power and burn-up have not yet been successful, due to unacceptable fuel plate swelling at a local burn-up above 60% 235U. The root cause of the failures is directly related to the formation of the U(Mo)-Al(Si) interaction layer. Excessive formation of these layers around the fuel kernels severely weakens the local mechanical integrity and eventually leads to pillowing of the plate. In 2008, SCK·CEN launched the SELENIUM U(Mo) dispersion fuel development project in an attempt to find an alternative way to reduce the interaction between the U(Mo) fuel kernels and the Al matrix to a significantly lower level: by applying a coating to the U(Mo) kernels. Two fuel plates containing 8 gU/cc U(Mo), coated with 600 nm Si and 1000 nm ZrN respectively, in a pure Al matrix were manufactured. These plates were irradiated in the BR2 reactor at a maximum heat flux of up to 470 W/cm² until a maximum local burn-up of approximately 70% 235U (~50% plate average) was reached. Awaiting the PIE results, the advantages of applying a coating are discussed in this paper through annealing experiments and TRIM (the Transport of Ions in Matter) calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rank
1942-03-26
When the oven was disassembled after the test, small kernels of porous material were found in both the upper and lower portions of the oven to a depth of about 2 m. The kernels were of various sizes up to 4 mm. From 1,300 metric tons of dry coal, there were 330 kg, a residue of 0.025% of the coal input. These kernels brought to mind deposits of spheroidal material termed "caviar", since they had rounded tops; however, they were irregularly elongated. After multi-axis micrography, no growth rings were found as in Leuna's lignite caviar, so the deposits turned out to be small particles consisting almost entirely of ash. The major constituents were Al, Fe, Na, silicic acid, S and Cl. The sulfur was found to be in sulfide form and the Cl in a volatile form. The residue did not take on the caviar form because its CaO content was slight. The Al, Fe, Na, silicic acid, S and Cl were concentrated relative to coal ash and apparently originate from the catalysts (FeSO₄, Bayermasse, and Na₂S). It is notable that the Cl content was so high. 2 graphs, 1 table
Plotz, Roan D.; Grecian, W. James; Kerley, Graham I.H.; Linklater, Wayne L.
2016-01-01
Comparisons of recent estimations of home range sizes for the critically endangered black rhinoceros in Hluhluwe-iMfolozi Park (HiP), South Africa, with historical estimates led to reports of a substantial (54%) increase, attributed to over-stocking and habitat deterioration, with far-reaching implications for rhino conservation. Other reports, however, suggest the increase is more likely an artefact caused by applying various home range estimators to non-standardised datasets. We collected 1939 locations of 25 black rhino over six years (2004–2009) to estimate annual home ranges and evaluate the hypothesis that they have increased in size. A minimum of 30 and 25 locations were required for accurate 95% MCP estimation of home range of adult rhinos, during the dry and wet seasons respectively. Forty and 55 locations were required for adult female and male annual MCP home ranges, respectively, and 30 locations were necessary for estimating 90% bivariate kernel home ranges accurately. Average annual 95% bivariate kernel home ranges were 20.4 ± 1.2 km², 53 ± 1.9% larger than 95% MCP ranges (9.8 ± 0.9 km²). When home range techniques used during the late-1960s in HiP were applied to our dataset, estimates were similar, indicating that ranges have not changed substantially in 50 years. Inaccurate, non-standardised, home range estimates and their comparison have the potential to mislead black rhino population management. We recommend that more care be taken to collect adequate numbers of rhino locations within standardized time periods (i.e., season or year) and that the comparison of home ranges estimated using dissimilar procedures be avoided. Home range studies of black rhino have been data deficient and procedurally inconsistent. Standardisation of methods is required. PMID:27028728
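A minimal sketch of a bivariate kernel home-range estimate of the kind used above: fit a Gaussian KDE to relocation points and measure the area inside the isopleth containing a given fraction of the utilisation distribution. The grid resolution and padding are arbitrary choices, and this is not the authors' exact estimator.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kernel_home_range(xy, level=0.90, grid_n=200):
    """Area (map units^2) inside the `level` isopleth of a bivariate
    Gaussian KDE fitted to relocation points xy (shape: n x 2)."""
    kde = gaussian_kde(xy.T)
    pad = 0.2 * (xy.max(axis=0) - xy.min(axis=0))
    gx = np.linspace(xy[:, 0].min() - pad[0], xy[:, 0].max() + pad[0], grid_n)
    gy = np.linspace(xy[:, 1].min() - pad[1], xy[:, 1].max() + pad[1], grid_n)
    GX, GY = np.meshgrid(gx, gy)
    dens = kde(np.vstack([GX.ravel(), GY.ravel()]))
    cell = (gx[1] - gx[0]) * (gy[1] - gy[0])
    order = np.argsort(dens)[::-1]           # densest cells first
    csum = np.cumsum(dens[order]) * cell     # cumulative probability mass
    n_cells = np.searchsorted(csum, level) + 1
    return n_cells * cell
```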
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four common single kernel functions (Gaussian, polynomial, sigmoid, and wavelet kernels) are used to constitute different composite kernel functions. Moreover, the method is compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods in both precision and efficiency for gas classification. PMID:28629202
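The composite-kernel KELM at the heart of QWMK-ELM can be sketched compactly. For simplicity, this sketch combines Gaussian base kernels of different widths rather than the paper's four kernel types, and the QPSO search over the weights, kernel parameters, and regularization constant C is not reproduced; the closed form f(x) = k(x)ᵀ(I/C + K)⁻¹Y is the standard KELM solution.

```python
import numpy as np

def composite_kernel(X1, X2, weights, gammas):
    """Weighted sum of Gaussian base kernels (stand-in for the paper's
    Gaussian/polynomial/sigmoid/wavelet mix)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    K = np.zeros_like(d2)
    for w, g in zip(weights, gammas):
        K += w * np.exp(-g * d2)
    return K

def kelm_fit_predict(X_tr, Y_tr, X_te, weights, gammas, C=100.0):
    """Standard KELM closed form with a composite kernel.
    Y_tr should be one-hot targets for classification."""
    K = composite_kernel(X_tr, X_tr, weights, gammas)
    alpha = np.linalg.solve(np.eye(len(X_tr)) / C + K, Y_tr)
    return composite_kernel(X_te, X_tr, weights, gammas) @ alpha
```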
Nonparametric estimation and testing of fixed effects panel data models
Henderson, Daniel J.; Carroll, Raymond J.; Li, Qi
2009-01-01
In this paper we consider the problem of estimating nonparametric panel data models with fixed effects. We introduce an iterative nonparametric kernel estimator. We also extend the estimation method to the case of a semiparametric partially linear fixed effects model. To determine whether a parametric, semiparametric or nonparametric model is appropriate, we propose test statistics to test between the three alternatives in practice. We further propose a test statistic for testing the null hypothesis of random effects against fixed effects in a nonparametric panel data regression model. Simulations are used to examine the finite sample performance of the proposed estimators and the test statistics. PMID:19444335
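As a building block for the estimators discussed above (not the paper's iterative fixed-effects procedure itself), a local-constant Nadaraya-Watson kernel regression can be written in a few lines; the bandwidth `h` is a free smoothing parameter.

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Local-constant kernel estimate of E[y | x] on x_grid, with a
    Gaussian kernel and bandwidth h."""
    u = (x_grid[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * u ** 2)
    return (w @ y) / w.sum(axis=1)
```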
Mathematical inference in one point microrheology
NASA Astrophysics Data System (ADS)
Hohenegger, Christel; McKinley, Scott
2016-11-01
Pioneered by the work of Mason and Weitz, one-point passive microrheology has been successfully applied to obtain estimates of the loss and storage moduli of viscoelastic fluids when the mean-square displacement obeys a local power law. Using numerical simulations of a fluctuating viscoelastic fluid model, we study the problem of recovering the mechanical parameters of the fluid's memory kernel from statistics such as mean-square displacements and increment autocorrelation functions. Seeking a better understanding of the influence of the assumptions made in the inversion process, we mathematically quantify the uncertainty in traditional one-point microrheology for simulated data and demonstrate that a large family of memory kernels yields the same statistical signature. We consider simulated data obtained both from a full viscoelastic fluid simulation of the unsteady Stokes equations with fluctuations and from a generalized Langevin equation for the particle's motion described by the same memory kernel. Drawing on the theory of inverse problems, we propose an alternative method that can be used to recover information about the loss and storage moduli, and discuss its limitations and uncertainties. NSF-DMS 1412998.
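The two statistics mentioned, the mean-square displacement and the increment autocorrelation, are straightforward to compute from a recorded trajectory; a minimal 1-D sketch (function names are ours):

```python
import numpy as np

def msd(x, max_lag):
    """Mean-square displacement of a 1-D trajectory at lags 1..max_lag."""
    return np.array([np.mean((x[k:] - x[:-k]) ** 2)
                     for k in range(1, max_lag + 1)])

def increment_autocorr(x, max_lag):
    """Normalized autocorrelation of unit-lag increments, lags 0..max_lag."""
    dx = np.diff(x)
    dx = dx - dx.mean()
    c0 = np.mean(dx * dx)
    return np.array([1.0] + [np.mean(dx[k:] * dx[:-k]) / c0
                             for k in range(1, max_lag + 1)])
```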
An effective fuzzy kernel clustering analysis approach for gene expression data.
Sun, Lin; Xu, Jiucheng; Yin, Jiaojiao
2015-01-01
Fuzzy clustering is an important tool for analyzing microarray data. A major problem in applying fuzzy clustering to microarray gene expression data is the choice of parameters, namely the cluster number and centers. This paper proposes a new approach to fuzzy kernel clustering analysis (FKCA) that identifies the desired cluster number and obtains more stable results for gene expression data. First, to optimize characteristic differences and estimate the optimal cluster number, a Gaussian kernel function is introduced to improve the spectrum analysis method (SAM). By combining subtractive clustering with the max-min distance mean, a maximum distance method (MDM) is proposed to determine cluster centers. The corresponding steps of the improved SAM (ISAM) and MDM are then given, and their superiority and stability are illustrated through experimental comparisons on gene expression data. Finally, by introducing ISAM and MDM into FKCA, an effective improved FKCA algorithm is proposed. Experimental results on public gene expression data and the UCI database show that the proposed algorithms are feasible for cluster analysis, with higher clustering accuracy than other related clustering algorithms.
Delimiting Areas of Endemism through Kernel Interpolation
Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.
2015-01-01
We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between each centroid and the farthest point of occurrence of that species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
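A toy sketch of the GIE idea under our reading of the description: place a Gaussian kernel on each species' range centroid, with bandwidth set by that species' centroid-to-farthest-occurrence distance, and sum the kernels so that high values mark overlapping ranges. The details (kernel shape, normalization) are assumptions, not the published algorithm.

```python
import numpy as np

def gie_surface(centroids, radii, grid_x, grid_y):
    """Sum a Gaussian kernel per species, centered on its range centroid,
    with bandwidth = centroid-to-farthest-occurrence distance; high values
    indicate overlap of many species ranges (candidate areas of endemism)."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    surface = np.zeros_like(gx, dtype=float)
    for (cx, cy), r in zip(centroids, radii):
        d2 = (gx - cx) ** 2 + (gy - cy) ** 2
        surface += np.exp(-d2 / (2.0 * max(r, 1e-9) ** 2))
    return surface
```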
Lin, Jau-Tien; Liu, Shih-Chun; Hu, Chao-Chin; Shyu, Yung-Shin; Hsu, Chia-Ying; Yang, Deng-Jye
2016-01-01
Roasting increased the levels of unsaturated fatty acids (linoleic, oleic and elaidic acids) as well as saturated fatty acids (palmitic and stearic acids) in almond (Prunus dulcis) kernel oils with temperature (150 or 180 °C) and duration (5, 10 or 20 min). Nonetheless, roasting at higher temperature (200 °C) for longer durations (10 or 20 min) could break down fatty acids, especially unsaturated ones. Phenolic components (total phenols, flavonoids, condensed tannins and phenolic acids) of almond kernels were substantially depleted in the initial phase; afterward these components gradually increased with roasting temperature and duration. Similar trends were observed for their antioxidant activities (scavenging of DPPH and ABTS(+) radicals and ferric reducing power). The changes in phenolic acid and flavonoid compositions were also determined by HPLC. Maillard reaction products (estimated with a non-enzymatic browning index) likewise increased with roasting temperature and duration, and might also contribute to the enhanced antioxidant attributes. Copyright © 2015 Elsevier Ltd. All rights reserved.
Comparing fixed and variable-width Gaussian networks.
Kůrková, Věra; Kainen, Paul C
2014-09-01
The role of width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-functions (RBFs) where both widths and centers vary and Gaussian kernel networks which have fixed widths but varying centers. The effect of width on functional equivalence, universal approximation property, and form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described and growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described. Copyright © 2014 Elsevier Ltd. All rights reserved.
Exploring the Brighter-fatter Effect with the Hyper Suprime-Cam
NASA Astrophysics Data System (ADS)
Coulton, William R.; Armstrong, Robert; Smith, Kendrick M.; Lupton, Robert H.; Spergel, David N.
2018-06-01
The brighter-fatter effect has been postulated to arise due to the build up of a transverse electric field, produced as photocharges accumulate in the pixels’ potential wells. We investigate the brighter-fatter effect in the Hyper Suprime-Cam by examining flat fields and moments of stars. We observe deviations from the expected linear relation in the photon transfer curve (PTC), luminosity-dependent correlations between pixels in flat-field images, and a luminosity-dependent point-spread function (PSF) in stellar observations. Under the key assumptions of translation invariance and Maxwell’s equations in the quasi-static limit, we give a first-principles proof that the effect can be parameterized by a translationally invariant scalar kernel. We describe how this kernel can be estimated from flat fields and discuss how this kernel has been used to remove the brighter-fatter distortions in Hyper Suprime-Cam images. We find that our correction restores the expected linear relation in the PTCs and significantly reduces, but does not completely remove, the luminosity dependence of the PSF over a wide range of magnitudes.
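The luminosity-dependent flat-field correlations mentioned above are typically measured from pairs of equal-exposure flats; a minimal sketch of that measurement (not the kernel derivation or the full correction) follows.

```python
import numpy as np

def flat_pair_covariance(flat1, flat2, di, dj):
    """Pixel covariance at offset (di, dj) >= 0 from two equal-exposure
    flats; differencing removes fixed-pattern structure, leaving the
    flux-dependent statistical correlations."""
    diff = flat1.astype(float) - flat2.astype(float)
    diff -= diff.mean()
    h, w = diff.shape
    a = diff[: h - di, : w - dj]
    b = diff[di:, dj:]
    return np.mean(a * b)
```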
Classification With Truncated Distance Kernel.
Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas
2018-05-01
This brief proposes a truncated distance (TL1) kernel, which yields a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data, and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems that require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods remain applicable, which means the TL1 kernel can be used directly in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with its parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
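The TL1 kernel has a simple closed form, K(x, y) = max(ρ − ‖x − y‖₁, 0), and can be plugged into an SVM as a precomputed Gram matrix. A minimal sketch (ρ and the toy data are illustrative; since the kernel is indefinite, solver convergence is not formally guaranteed):

```python
import numpy as np
from sklearn.svm import SVC

def tl1_kernel(X, Y, rho):
    """Truncated L1-distance kernel: K(x, y) = max(rho - ||x - y||_1, 0)."""
    D = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=2)
    return np.maximum(rho - D, 0.0)

rng = np.random.default_rng(0)
X = rng.random((100, 5))
y = (X[:, 0] > 0.5).astype(int)
rho = 0.7 * X.shape[1]                  # a pregiven parameter (illustrative)
clf = SVC(kernel="precomputed").fit(tl1_kernel(X, X, rho), y)
acc = clf.score(tl1_kernel(X, X, rho), y)   # training accuracy
```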
Comparative habitat use of sympatric Mexican spotted and great horned owls
Joseph L. Ganey; William M. Block; Jeffrey S. Jenness; Randolph A. Wilson
1997-01-01
To provide information on comparative habitat use, we studied radiotagged Mexican spotted owls (Strix occidentalis lucida: n = 13) and great horned owls (Bubo virginianus: n = 4) in northern Arizona. Home-range size (95% adaptive kernel estimate) did not differ significantly between species during either the breeding or nonbreeding...
Detection of fragments from internal insects in wheat samples using a laboratory entoleter
USDA-ARS?s Scientific Manuscript database
A simple, rapid method was developed for estimating the number of insect fragments in flour caused by hidden, internal-feeding insects in whole grains during storage. The method uses a small mechanical rotary device (entoleter), which accelerates whole wheat kernels to high speeds and projects them...
Using a peanut drying monitoring system to estimate costs of nonbeneficial dryer operation
USDA-ARS?s Scientific Manuscript database
Presently, the peanut industry lacks a commercially available, industry-accepted solution for real-time kernel moisture content determination during peanut drying. Samples of unshelled peanuts are extracted from the semitrailer by an operator periodically, and the samples have to be cleaned and shel...
Exact RG flow equations and quantum gravity
NASA Astrophysics Data System (ADS)
de Alwis, S. P.
2018-03-01
We discuss the different forms of the functional RG equation and their relation to each other. In particular we suggest a generalized background field version that is close in spirit to the Polchinski equation as an alternative to the Wetterich equation to study Weinberg's asymptotic safety program for defining quantum gravity, and argue that the former is better suited for this purpose. Using the heat kernel expansion and proper time regularization we find evidence in support of this program in agreement with previous work.
Testing Properties of Boolean Functions
2012-01-01
Applying the Hermite decomposition of $f$ and linearity of expectation, $\mathbb{E}_{x,y}[f(x)f(y)\langle x, y\rangle] = \sum_{i=1}^{n} \sum_{S,T \in \mathbb{N}^n} \hat{f}(S)\hat{f}(T)\,\mathbb{E}_x[H_S(x)x_i]\,\mathbb{E}_y[H_T(y)y_i]$, where $\mathbb{E}_x[H_S(x)x_i] = 1$ iff $S = e_i$; otherwise it takes the value 0. Similarly, $\mathbb{E}_y[H_T(y)y_i] = 1$ iff $T = e_i$. [Testing juntas] We begin by studying the... $1 - 2e^{-6m/2} = 1 - 2e^{-O(\sqrt{n})}$. The estimate $\tilde{\nu}$ is a U-statistic with kernel $\psi^*_f$. This kernel satisfies $\|\psi^*_f - \mathbb{E}\psi^*_f\|_\infty \le 2\|\psi^*_f\|_\infty = 2\sqrt{4n \log(4n/3\dots}$
Huang, Jacob; Gholami, Behnood; Agar, Nathalie Y. R.; Norton, Isaiah; Haddad, Wassim M.; Tannenbaum, Allen R.
2013-01-01
Glioma histologies are the primary factor in prognostic estimates and are used in determining the proper course of treatment. Furthermore, due to the sensitivity of cranial environments, real-time tumor-cell classification and boundary detection can aid in the precision and completeness of tumor resection. A recent improvement to mass spectrometry known as desorption electrospray ionization operates in an ambient environment without the application of a preparation compound. This allows for a real-time acquisition of mass spectra during surgeries and other live operations. In this paper, we present a framework using sparse kernel machines to determine a glioma sample’s histopathological subtype by analyzing its chemical composition acquired by desorption electrospray ionization mass spectrometry. PMID:22256188
Gabor-based kernel PCA with fractional power polynomial models for face recognition.
Liu, Chengjun
2004-05-01
This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
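A compact sketch of the two ingredients named above: a fractional power polynomial "kernel" (a sign-preserving fractional power of the inner product, not guaranteed positive semidefinite) and a kernel PCA that keeps only eigenvectors with positive eigenvalues. The Gabor feature extraction stage is omitted.

```python
import numpy as np

def frac_poly_kernel(X, d=0.8):
    """Sign-preserving fractional power of inner products; for 0 < d < 1
    this is not guaranteed to give a positive semidefinite Gram matrix."""
    G = X @ X.T
    return np.sign(G) * np.abs(G) ** d

def kpca_positive(K, n_components):
    """Kernel PCA keeping only directions with positive eigenvalues,
    so the extracted features stay real-valued."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    vals, vecs = np.linalg.eigh(H @ K @ H)
    pos = vals > 1e-10
    vals, vecs = vals[pos][::-1], vecs[:, pos][:, ::-1]
    return vecs[:, :n_components] * np.sqrt(vals[:n_components])
```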
A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid on constructing new kernels and choosing suitable parameter values for a specific kernel function, but less on kernel selection. Furthermore, most of current kernel selection methods focus on seeking a best kernel with the highest classification accuracy via cross-validation, they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896
A general CFD framework for fault-resilient simulations based on multi-resolution information fusion
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-10-01
We develop a general CFD framework for multi-resolution simulations to target multiscale problems but also resilience in exascale simulations, where faulty processors may lead to gappy, in space-time, simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.
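A single-fidelity stand-in for the boundary-condition estimation step: the paper uses coKriging across resolutions, but a plain Gaussian process on one coarse field already illustrates the mechanics. The sample points below are made up.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Stand-in 1-D slice of a coarse solution field with a gap around x ~ 0.6
x_obs = np.array([[0.0], [0.2], [0.4], [0.8], [1.0]])
u_obs = np.sin(2 * np.pi * x_obs).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-8)
gp.fit(x_obs, u_obs)

# Estimated boundary values (with uncertainty) for a fine patch in the gap
x_bc = np.array([[0.55], [0.65]])
u_bc, u_std = gp.predict(x_bc, return_std=True)
```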
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...
Kernel K-Means Sampling for Nyström Approximation.
He, Li; Zhang, Hong
2018-05-01
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest the use of kernel K-means sampling, which is shown in our work to minimize the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the K-means error of the data points in kernel space plus a constant. Thus, the K-means centers of the data in kernel space, or the kernel K-means centers, are the optimal representative points with respect to the Frobenius-norm error upper bound. Experimental results, with both Gaussian and polynomial kernels, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.
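A minimal Nyström sketch with k-means landmarks: for simplicity the centers below are computed in input space, which for an RBF kernel is a cheap stand-in for the kernel k-means centers the paper advocates.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def nystrom_kmeans(X, m, gamma=1.0):
    """Nystrom approximation K ~ C W^+ C^T with k-means centers as
    landmarks (computed in input space for simplicity)."""
    centers = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_
    C = rbf_kernel(X, centers, gamma=gamma)        # n x m cross-kernel
    W = rbf_kernel(centers, centers, gamma=gamma)  # m x m landmark kernel
    return C @ np.linalg.pinv(W) @ C.T

X = np.random.default_rng(0).random((500, 10))
K_approx = nystrom_kmeans(X, m=50, gamma=0.5)
```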
Exploiting graph kernels for high performance biomedical relation extraction.
Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri
2018-01-30
Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods, when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task specific heuristic or rules. In comparison, the state of the art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule based system employing task specific post processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with APG kernel that attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM performed better than APG kernel for the BioInfer dataset, in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures. We demonstrate a high performance Chemical Induced Disease relation extraction, without employing external knowledge sources or task specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences. We also show that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, we showed the ASM kernel as effective for biomedical relation extraction, with comparable performance to the APG kernel for datasets such as the CID-sentence level relation extraction and BioInfer in PPI. Overall, the APG kernel is shown to be significantly more accurate than the ASM kernel, achieving better performance on most datasets.
NASA Astrophysics Data System (ADS)
Popov, Aleksey
2013-04-01
The magnetic field of the Earth is of global importance for life on Earth. Mainstream geophysics explains the origin of the field as the transformation of the kinetic energy of molten-iron motions in the Earth's liquid core into magnetic energy, and the heating of the core as due to radioactive decay of elements with release of thermal energy. Mainstream science does not explain the causes of the westward drift of the magnetic dipole by about 0.2° of longitude per year, or of the drift of lithospheric plates and continents. As an alternative to the existing "geodynamo" theory, the author proposes a theory, "The Magnetic Field of the Earth", built on physical laws. The Earth's magnetic field arises from the motion of electric charge located in the liquid core as the Earth rotates. The field is computed from the Biot-Savart law for a ring current, $d\mathbf{B} = \frac{\mu_0}{4\pi}\,\frac{I\,d\mathbf{l}\times\hat{\mathbf{r}}}{r^{2}}$, giving a magnetic induction in the Earth's core of $B = 2.58$ Gs. By Faraday's law of electromagnetic induction, rotation of the iron core in this magnetic field produces an electromotive force (emf) that drives electrons from the center of the core toward the mantle, creating radial electric currents. The field is amplified by the iron of the mantle and core. Because of the radial electric field, electrons accumulate in an outer charged layer, leaving the central part of the core with a positive electric charge that creates an inverse magnetic field $B_{\mathrm{inv}}$ and an inverse emf. When $\mathrm{emf}_{\mathrm{inv}} = \mathrm{emf}$ and $B_{\mathrm{inv}} = B$, an inversion of the Earth's magnetic field occurs. It is an observed fact that the Earth's magnetic dipole drifts westward by roughly 0.2° of longitude per year. The interaction of the radial currents with the main magnetic field exerts a torque on the core, consistent with the laws of electromagnetism: by the left-hand rule, with the magnetic field directed into the page and the currents directed toward the rotation axis, the force acts clockwise (to the west). Its magnitude follows from Ampère's force law, $F = IBl\sin\alpha$; the resulting force of $3.5\times10^{12}$ N drives the drift of the central part of the core by 0.2° of longitude per year to the west, and is also the engine of the motion of plates and continents. The core completes one westward circuit around the Earth's axis in about 2000 years, so its linear speed relative to the mantle at the core-mantle boundary is $V = \omega R \approx \omega \times 3.471\times10^{6}\ \mathrm{m} = 3.818\times10^{-4}\ \mathrm{m/s} \approx 33$ m/day, or about 12 km/year. Given the high viscosity of the mantle, the rotating core drags the mantle and lithospheric plates with great energy, and their collisions produce earthquakes and volcanoes. North and South America separate from Europe and Africa by several centimeters each year; at such speeds the Atlantic Ocean formed over 200 million years, a short time compared with the Earth's age of several billion years. The westward drift of the core is the principal cause of the slowing of the Earth's rotation.
According to the Joule-Lenz law, the flow of the radial electric currents releases a quantity of heat $Q = I^{2}Rt = IUt$, amounting to $6.92\times10^{17}$ calories of thermal energy per year. This accounts for the heating of the core and of the Earth as a whole. In the valley of the Mid-Atlantic Ridge, with its numerous volcanoes, lava flows constantly warm the waters of the Atlantic Ocean; the warm Gulf Stream current is consistent with this. The thawing of permafrost and of Arctic Ocean ice, and of the glaciers of Greenland and Antarctica, is further confirmation: the Earth's internal heat governs the melting of glaciers and permafrost. This is global warming. In the author's version, the periods of inversion of the Earth's magnetic field set the cycles of the ice ages: during an inversion, when $B = 0$, the radial electric currents are small or absent, the release of thermal energy is minimal or zero, the Earth begins to cool, an ice age sets in, and the Gulf Stream that warms northern Europe and Canada disappears. The drift of the Earth's magnetic dipole in the direction opposite to the Earth's rotation confirms the drift of the core opposite to the Earth's rotation, and thereby the theory "The Magnetic Field of the Earth". The author continues to develop the theory and invites geophysicists to participate in this work.
Local Renyi entropic profiles of DNA sequences.
Vinga, Susana; Almeida, Jonas S
2007-10-16
In a recent report the authors presented a new measure of continuous entropy for DNA sequences, which allows the estimation of their randomness level. The definition explored therein was based on the Rényi entropy of a probability density function (pdf) estimate obtained with the Parzen window method, applied to Chaos Game Representation/Universal Sequence Maps (CGR/USM). Subsequent work proposed a fractal pdf kernel as a more exact solution for the iterated map representation. This report extends the concepts of continuous entropy by defining DNA sequence entropic profiles using the new pdf estimations to refine the density estimation of motifs. The new methodology enables two results. On the one hand it shows that the entropic profiles are directly related to the statistical significance of motifs, allowing the study of under- and over-representation of segments. On the other hand, by spanning the parameters of the kernel function it is possible to extract important information about the scale of each conserved DNA region. The computational applications, developed in Matlab m-code, the corresponding binary executables and additional material and examples are made publicly available at http://kdbio.inesc-id.pt/~svinga/ep/. The ability to detect local conservation from a scale-independent representation of symbolic sequences is particularly relevant for biological applications where conserved motifs occur in multiple, overlapping scales, with significant future applications in the recognition of foreign genomic material and inference of motif structures.
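For readers who want to experiment with the underlying estimator, the following is a minimal one-dimensional Python sketch of the Rényi quadratic entropy of a Parzen (Gaussian-window) density estimate. It is not the authors' CGR/USM implementation, and it uses a plain Gaussian window rather than their fractal kernel; the sample data and bandwidth are illustrative only.

```python
import numpy as np

def renyi2_entropy(samples, sigma):
    """Renyi quadratic entropy of a Parzen (Gaussian-window) density estimate.

    Uses the closed form for the integral of the squared estimate:
    integral p_hat(x)^2 dx = (1/N^2) * sum_ij N(x_i - x_j; 2*sigma^2),
    so no numerical integration is needed.
    """
    x = np.asarray(samples, dtype=float)
    n = len(x)
    d2 = (x[:, None] - x[None, :]) ** 2        # pairwise squared distances
    var = 2.0 * sigma ** 2                     # convolution of two Gaussian windows
    gauss = np.exp(-d2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    info_potential = gauss.sum() / n ** 2      # estimate of integral of p_hat^2
    return -np.log(info_potential)

rng = np.random.default_rng(0)
# ~1.3 for N(0,1) samples; the exact value log(2*sqrt(pi)) is about 1.27
print(renyi2_entropy(rng.normal(size=500), sigma=0.3))
```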
Local Renyi entropic profiles of DNA sequences
Vinga, Susana; Almeida, Jonas S
2007-01-01
Background In a recent report the authors presented a new measure of continuous entropy for DNA sequences, which allows the estimation of their randomness level. The definition explored therein was based on the Rényi entropy of a probability density function (pdf) estimate obtained with the Parzen window method, applied to Chaos Game Representation/Universal Sequence Maps (CGR/USM). Subsequent work proposed a fractal pdf kernel as a more exact solution for the iterated map representation. This report extends the concepts of continuous entropy by defining DNA sequence entropic profiles using the new pdf estimations to refine the density estimation of motifs. Results The new methodology enables two results. On the one hand it shows that the entropic profiles are directly related to the statistical significance of motifs, allowing the study of under- and over-representation of segments. On the other hand, by spanning the parameters of the kernel function it is possible to extract important information about the scale of each conserved DNA region. The computational applications, developed in Matlab m-code, the corresponding binary executables and additional material and examples are made publicly available at http://kdbio.inesc-id.pt/~svinga/ep/. Conclusion The ability to detect local conservation from a scale-independent representation of symbolic sequences is particularly relevant for biological applications where conserved motifs occur in multiple, overlapping scales, with significant future applications in the recognition of foreign genomic material and inference of motif structures. PMID:17939871
Wartmann, Flurina M; Purves, Ross S; van Schaik, Carel P
2010-04-01
Quantification of the spatial needs of individuals and populations is vitally important for management and conservation. Geographic information systems (GIS) have recently become important analytical tools in wildlife biology, improving our ability to understand animal movement patterns, especially when very large data sets are collected. This study aims at combining the field of GIS with primatology to model and analyse space-use patterns of wild orang-utans. Home ranges of female orang-utans in the Tuanan Mawas forest reserve in Central Kalimantan, Indonesia were modelled with kernel density estimation methods. Kernel results were compared with minimum convex polygon estimates, and were found to perform better, because they were less sensitive to sample size and produced more reliable estimates. Furthermore, daily travel paths were calculated from 970 complete follow days. Annual ranges for the resident females were approximately 200 ha and remained stable over several years; total home range size was estimated to be 275 ha. On average, each female shared a third of her home range with each neighbouring female. Orang-utan females in Tuanan built their night nest on average 414 m away from the morning nest, whereas average daily travel path length was 777 m. A significant effect of fruit availability on day path length was found. Sexually active females covered longer distances per day and may also temporarily expand their ranges.
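A minimal sketch of the kernel home-range workflow described above, using SciPy's Gaussian KDE and a grid-based 95% isopleth. The relocation fixes, bandwidth rule and grid settings are hypothetical stand-ins, not the study's data or software.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical relocation fixes (x, y in metres) for one animal.
rng = np.random.default_rng(1)
fixes = rng.normal(loc=[500.0, 700.0], scale=[300.0, 200.0], size=(400, 2))

kde = gaussian_kde(fixes.T)                      # default (Scott) bandwidth

# Evaluate the utilisation distribution on a grid and find the 95% isopleth.
xs = np.linspace(fixes[:, 0].min() - 500, fixes[:, 0].max() + 500, 200)
ys = np.linspace(fixes[:, 1].min() - 500, fixes[:, 1].max() + 500, 200)
gx, gy = np.meshgrid(xs, ys)
density = kde(np.vstack([gx.ravel(), gy.ravel()]))

cell_area = (xs[1] - xs[0]) * (ys[1] - ys[0])
order = np.argsort(density)[::-1]                # densest cells first
cum = np.cumsum(density[order]) * cell_area      # cumulative probability mass
n95 = np.searchsorted(cum, 0.95) + 1             # cells inside the 95% isopleth
print(f"95% kernel home range: {n95 * cell_area / 1e4:.1f} ha")
```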
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...
7 CFR 51.1415 - Inedible kernels.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or otherwise...
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
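The following toy sketch illustrates, in the simplest one-level, one-dimensional case, the computational idea the paper exploits: a stationary kernel matrix on a regular grid is approximated by a circulant matrix, which the FFT diagonalizes, so matrix-vector products cost O(n log n) instead of O(n^2). The grid, kernel and bandwidth are illustrative; the paper's multilevel construction and selection criteria are not reproduced.

```python
import numpy as np

n = 512
idx = np.arange(n)

# Dense Gaussian kernel matrix on a regular 1-D grid: K[i, j] = k(|i - j|).
K = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 10.0) ** 2)

# Circulant approximation: the first column uses the wrapped distance, so
# the matrix is diagonalised by the DFT and matvecs cost O(n log n).
wrapped = np.minimum(idx, n - idx)
c = np.exp(-0.5 * (wrapped / 10.0) ** 2)
eig = np.fft.fft(c)                              # eigenvalues of the circulant

x = np.random.default_rng(2).normal(size=n)
y_fast = np.real(np.fft.ifft(eig * np.fft.fft(x)))
y_dense = K @ x
# The error is tiny because the kernel decays well within half the grid.
print("relative matvec error:",
      np.linalg.norm(y_fast - y_dense) / np.linalg.norm(y_dense))
```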
Unconventional protein sources: apricot seed kernels.
Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M
1981-09-01
Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter) and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2-50.91%), protein (23.74-25.70%) and fiber (15.08-18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were used for an extensive study including the qualitative determination of the amino acid constituents by acid hydrolysis, quantitative determination of some amino acids, and biological evaluation of the kernel proteins, in order to evaluate their use as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels owing to low food consumption caused by their bitterness, although no weight loss occurred. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be of higher biological value than the treated apricot seed kernels. The Net Protein Ratio data, which account for both weight maintenance and growth, showed the treated apricot seed kernels to be of higher biological value than both the Hamawy and Amar kernels; the Net Protein Ratios of the latter two kernels were nearly equal.
Haroldson, Mark A.; Schwartz, Charles C.; Thompson, Daniel J.; Bjornlie, Daniel D.; Gunther, Kerry A.; Cain, Steven L.; Tyers, Daniel B.; Frey, Kevin L.; Aber, Bryan C.
2014-01-01
The distribution of the Greater Yellowstone Ecosystem grizzly bear (Ursus arctos) population has expanded into areas unoccupied since the early 20th century. Up-to-date information on the area and extent of this distribution is crucial for federal, state, and tribal wildlife and land managers to make informed decisions regarding grizzly bear management. The most recent estimate of grizzly bear distribution (2004) utilized fixed-kernel density estimators to describe distribution. This method was complex and computationally time consuming and excluded observations of unmarked bears. Our objective was to develop a technique to estimate grizzly bear distribution that would allow for the use of all verified grizzly bear location data, as well as provide the simplicity to be updated more frequently. We placed all verified grizzly bear locations from all sources from 1990 to 2004 and 1990 to 2010 onto a 3-km × 3-km grid and used zonal analysis and ordinary kriging to develop a predicted surface of grizzly bear distribution. We compared the area and extent of the 2004 kriging surface with the previous 2004 effort and evaluated changes in grizzly bear distribution from 2004 to 2010. The 2004 kriging surface was 2.4% smaller than the previous fixed-kernel estimate, but more closely represented the data. Grizzly bear distribution increased 38.3% from 2004 to 2010, with most expansion in the northern and southern regions of the range. This technique can be used to provide a current estimate of grizzly bear distribution for management and conservation applications.
An introduction to kernel-based learning algorithms.
Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B
2001-01-01
This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
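As a concrete companion to the survey above, here is a minimal numpy-only sketch of one of the methods it covers, kernel principal component analysis: build an RBF Gram matrix, double-center it in feature space, and project onto the leading eigenvectors. The data and kernel parameter are illustrative.

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Kernel PCA with an RBF kernel: project data onto the leading
    eigenvectors of the centred Gram matrix (a minimal numpy sketch)."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # double centring in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]       # reorder: largest first
    alphas = vecs[:, :n_components] / np.sqrt(vals[:n_components])
    return Kc @ alphas                           # coordinates of training points

X = np.random.default_rng(3).normal(size=(100, 5))
print(kernel_pca(X, n_components=2).shape)       # (100, 2)
```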
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...
Design of CT reconstruction kernel specifically for clinical lung imaging
NASA Astrophysics Data System (ADS)
Cody, Dianna D.; Hsieh, Jiang; Gladish, Gregory W.
2005-04-01
In this study we developed a new reconstruction kernel specifically for chest CT imaging. An experimental flat-panel CT scanner was used on large dogs to produce "ground-truth" reference chest CT images. These dogs were also examined using a clinical 16-slice CT scanner. We concluded from the dog images acquired on the clinical scanner that the loss of subtle lung structures was due mostly to the presence of the background noise texture when using currently available reconstruction kernels. This qualitative evaluation of the dog CT images prompted the design of a new reconstruction kernel, formed by combining a low-pass and a high-pass kernel, called the "Hybrid" kernel. The performance of this Hybrid kernel fell between those of the two kernels on which it was based, as expected. The Hybrid kernel was also applied to a set of 50 patient data sets; the analysis of these clinical images is underway. We are hopeful that this Hybrid kernel will produce clinical images with an acceptable tradeoff of lung detail, reliable HU, and image noise.
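The clinical kernels themselves are proprietary, but the design idea, blending a smooth (low-pass) kernel with a sharpened (high-pass boosted) kernel so the result trades off noise against detail, can be illustrated with a toy one-dimensional filter; the kernel shapes and the blend weight below are assumptions, not the study's filters.

```python
import numpy as np

# Toy illustration: form a "hybrid" 1-D filter as a weighted blend of a
# smooth (low-pass) kernel and an edge-enhancing (high-boost) kernel.

def gaussian_kernel(size, sigma):
    t = np.arange(size) - size // 2
    g = np.exp(-0.5 * (t / sigma) ** 2)
    return g / g.sum()

smooth = gaussian_kernel(21, sigma=3.0)          # low-pass: suppresses noise
delta = np.zeros(21)
delta[10] = 1.0
sharp = delta + 2.0 * (delta - smooth)           # unsharp-mask style high boost

w = 0.5                                          # blend weight (assumed)
hybrid = w * smooth + (1 - w) * sharp

signal = np.repeat([0.0, 1.0, 0.0], 50) + \
         np.random.default_rng(4).normal(0, 0.05, 150)
for name, k in [("smooth", smooth), ("sharp", sharp), ("hybrid", hybrid)]:
    out = np.convolve(signal, k, mode="same")
    print(f"{name:7s} edge overshoot={out.max() - 1:+.3f}  "
          f"noise std={out[:40].std():.3f}")    # hybrid sits between the two
```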
Quality changes in macadamia kernel between harvest and farm-gate.
Walton, David A; Wallace, Helen M
2011-02-01
Macadamia integrifolia, Macadamia tetraphylla and their hybrids are cultivated for their edible kernels. After harvest, nuts-in-shell are partially dried on-farm and sorted to eliminate poor-quality kernels before consignment to a processor. During these operations, kernel quality may be lost. In this study, macadamia nuts-in-shell were sampled at five points of an on-farm postharvest handling chain from dehusking to the final storage silo to assess quality loss prior to consignment. Shoulder damage, weight of pieces and unsound kernel were assessed for raw kernels, and colour, mottled colour and surface damage for roasted kernels. Shoulder damage, weight of pieces and unsound kernel for raw kernels increased significantly between the dehusker and the final silo. Roasted kernels displayed a significant increase in dark colour, mottled colour and surface damage during on-farm handling. Significant loss of macadamia kernel quality occurred on a commercial farm during sorting and storage of nuts-in-shell before nuts were consigned to a processor. Nuts-in-shell should be dried as quickly as possible and on-farm handling minimised to maintain optimum kernel quality. 2010 Society of Chemical Industry.
A new discriminative kernel from probabilistic models.
Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert
2002-10-01
Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
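A minimal sketch of the Fisher-kernel construction that this work builds on, for the simplest possible generative model (a one-dimensional Gaussian): the kernel is the inner product of score vectors under the inverse Fisher information. The TOP kernel itself, built from tangent vectors of posterior log-odds, is not reproduced here.

```python
import numpy as np

# Fisher kernel for a 1-D Gaussian model p(x | mu, sigma):
# K(x, y) = U_x^T F^{-1} U_y, where U_x is the score vector (gradient of
# the log-likelihood in the parameters) and F the Fisher information,
# here estimated empirically from the data.

def score(x, mu, sigma):
    return np.stack([(x - mu) / sigma ** 2,
                     ((x - mu) ** 2 - sigma ** 2) / sigma ** 3], axis=-1)

rng = np.random.default_rng(5)
data = rng.normal(1.0, 2.0, size=200)
mu, sigma = data.mean(), data.std()              # fitted generative model

U = score(data, mu, sigma)                       # (200, 2) score vectors
F = U.T @ U / len(data)                          # empirical Fisher information
K = U @ np.linalg.solve(F, U.T)                  # Fisher kernel Gram matrix
print(K.shape, np.allclose(K, K.T))              # (200, 200) True
```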
Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.
Kwak, Nojun
2016-05-20
Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement its kernel version. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
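To make the starting point concrete, here is a sketch of the basic, batch projection trick: explicit coordinates Y with Y·Yᵀ = K are recovered from the eigendecomposition of the kernel matrix, so any linear algorithm can run on Y directly. This shows only the batch construction whose repeated recomputation the paper's incremental INPT is designed to avoid; the data and kernel are illustrative.

```python
import numpy as np

# Batch nonlinear projection trick: recover explicit coordinates Y with
# Y @ Y.T = K from a positive semi-definite kernel matrix K.

rng = np.random.default_rng(6)
X = rng.normal(size=(50, 3))
sq = np.sum(X ** 2, axis=1)
K = np.exp(-0.5 * (sq[:, None] + sq[None, :] - 2 * X @ X.T))   # RBF Gram

vals, vecs = np.linalg.eigh(K)
keep = vals > 1e-10                               # drop the numerical null space
Y = vecs[:, keep] * np.sqrt(vals[keep])           # coordinates in kernel space
print(np.allclose(Y @ Y.T, K, atol=1e-8))         # True: inner products match
```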
A shortest-path graph kernel for estimating gene product semantic similarity.
Alvarez, Marco A; Qi, Xiaojun; Yan, Changhui
2011-07-29
Existing methods for calculating semantic similarity between gene products using the Gene Ontology (GO) often rely on external resources, which are not part of the ontology. Consequently, changes in these external resources like biased term distribution caused by shifting of hot research topics, will affect the calculation of semantic similarity. One way to avoid this problem is to use semantic methods that are "intrinsic" to the ontology, i.e. independent of external knowledge. We present a shortest-path graph kernel (spgk) method that relies exclusively on the GO and its structure. In spgk, a gene product is represented by an induced subgraph of the GO, which consists of all the GO terms annotating it. Then a shortest-path graph kernel is used to compute the similarity between two graphs. In a comprehensive evaluation using a benchmark dataset, spgk compares favorably with other methods that depend on external resources. Compared with simUI, a method that is also intrinsic to GO, spgk achieves slightly better results on the benchmark dataset. Statistical tests show that the improvement is significant when the resolution and EC similarity correlation coefficient are used to measure the performance, but is insignificant when the Pfam similarity correlation coefficient is used. Spgk uses a graph kernel method in polynomial time to exploit the structure of the GO to calculate semantic similarity between gene products. It provides an alternative to both methods that use external resources and "intrinsic" methods with comparable performance.
Modeling and Analysis of FCM UN TRISO Fuel Using the PARFUME Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaise Collin
2013-09-01
The PARFUME (PARticle Fuel ModEl) modeling code was used to assess the overall fuel performance of uranium nitride (UN) tri-structural isotropic (TRISO) ceramic fuel within the design and development of Fully Ceramic Matrix (FCM) fuel. A specific model of a TRISO particle with a UN kernel was developed with PARFUME, and its behavior was assessed under irradiation conditions typical of a Light Water Reactor (LWR). The calculations were used to assess the dimensional changes of the fuel particle layers and kernel, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burn-up. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the pyrolytic carbon (PyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burn-up. These material properties are unknown at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, more effort is needed to determine them and to conclude definitively on the applicability of FCM fuel to LWRs.
Analysis of the spatial distribution of dengue cases in the city of Rio de Janeiro, 2011 and 2012
Carvalho, Silvia; Magalhães, Mônica de Avelar Figueiredo Mafra; Medronho, Roberto de Andrade
2017-01-01
ABSTRACT OBJECTIVE Analyze the spatial distribution of classical dengue and severe dengue cases in the city of Rio de Janeiro. METHODS Exploratory study, considering cases of classical dengue and severe dengue with laboratory confirmation of the infection in the city of Rio de Janeiro during the years 2011/2012. The georeferencing technique was applied to the cases notified in the Notifiable Diseases Information System in 2011 and 2012, using the "street" and "number" fields. Geocoding was performed with the automatic process of the ArcGIS 10 Geocoding tool. The spatial analysis was done through the kernel density estimator. RESULTS Kernel density pointed out hotspots for classic dengue that did not coincide geographically with severe dengue and were in or near favelas. The kernel ratio did not show a notable change in the spatial distribution pattern observed in the kernel density analysis. The georeferencing process showed a loss of 41% of classic dengue registries and 17% of severe dengue registries owing to the address field in the Notifiable Diseases Information System form. CONCLUSIONS The hotspots near the favelas suggest that the social vulnerability of these localities may be an influencing factor in the occurrence of this disease, since the supply of, and access to, essential goods and services is deficient for this population. To reduce this vulnerability, interventions must be related to macroeconomic policies. PMID:28832752
Acceleration of GPU-based Krylov solvers via data transfer reduction
Anzt, Hartwig; Tomov, Stanimire; Luszczek, Piotr; ...
2015-04-08
Krylov subspace iterative solvers are often the method of choice when solving large sparse linear systems. At the same time, hardware accelerators such as graphics processing units continue to offer significant floating point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well optimized but limited set of linear algebra operations, applications that use them often fail to reduce certain data communications, and hence fail to leverage the full potential of the accelerator. In this study, we target the acceleration of Krylov subspace iterative methods for graphics processing units, and in particular the Biconjugate Gradient Stabilized solver. We show that significant improvement can be achieved by reformulating the method to reduce data communications through application-specific kernels instead of using the generic BLAS kernels, e.g. as provided by NVIDIA's cuBLAS library, and by designing a sparse matrix-vector product kernel specific to graphics processing units that is able to use the device's computing power more efficiently. Furthermore, we derive a model estimating the performance improvement, and use experimental data to validate the expected runtime savings. Finally, considering that the derived implementation achieves significantly higher performance, we assert that similar optimizations addressing algorithm structure, as well as the sparse matrix-vector product, are crucial for the subsequent development of high-performance Krylov subspace iterative methods accelerated by graphics processing units.
Effects of GPS sampling intensity on home range analyses
Jeffrey J. Kolodzinski; Lawrence V. Tannenbaum; David A. Osborn; Mark C. Conner; W. Mark Ford; Karl V. Miller
2010-01-01
The two most common methods for determining home ranges, minimum convex polygon (MCP) and kernel analyses, can be affected by sampling intensity. Despite prior research, it remains unclear how high-intensity sampling regimes affect home range estimations. We used datasets from 14 GPS-collared, white-tailed deer (Odocoileus virginianus) to describe...
Boundary Kernel Estimation of the Two Sample Comparison Density Function
1989-05-01
Serra-Sogas, Norma; O'Hara, Patrick D; Canessa, Rosaline; Keller, Peter; Pelot, Ronald
2008-05-01
This paper examines the use of exploratory spatial analysis for identifying hotspots of shipping-based oil pollution in the Pacific Region of Canada's Exclusive Economic Zone. It makes use of data collected from fiscal years 1997/1998 to 2005/2006 by the National Aerial Surveillance Program, the primary tool for monitoring and enforcing the provisions imposed by MARPOL 73/78. First, we present oil spill data as points in a "dot map" relative to coastlines, harbors and the aerial surveillance distribution. Then, we explore the intensity of oil spill events using the Quadrat Count method and Kernel Density Estimation methods with both fixed and adaptive bandwidths. We found that oil spill hotspots were more clearly defined using Kernel Density Estimation with an adaptive bandwidth, probably because of the "clustered" distribution of oil spill occurrences. Finally, we discuss the importance of standardizing oil spill data by controlling for surveillance effort to provide a better understanding of the distribution of illegal oil spills, and how these results can ultimately benefit a monitoring program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghrayeb, Shadi Z.; Ougouag, Abderrafi M.; Ouisloumen, Mohamed
2014-01-01
A multi-group formulation for the exact neutron elastic scattering kernel is developed. It incorporates the neutron up-scattering effects stemming from the thermal motion of lattice atoms and accounts for them within the resulting effective nuclear cross-section data. The effects pertain essentially to resonant scattering off heavy nuclei. The formulation, implemented into a standalone code, produces effective nuclear scattering data that are then supplied directly into the DRAGON lattice physics code, where the effects on Doppler reactivity and neutron flux are demonstrated. Correctly accounting for the crystal lattice effects influences the estimated values of the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. The results show changes of up to -10% in the Doppler temperature feedback coefficients for UOX and MOX LWR fuels compared to the corresponding values derived using the traditional asymptotic elastic scattering kernel. This paper also summarizes the work done on this topic to date.
Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin
2014-10-21
To understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed. We applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups. 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting treatment with large effects is 10% (5-25%), and that the probability of detecting treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they discovered a new, breakthrough intervention in 16% of trials. We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Increasing accuracy of dispersal kernels in grid-based population models
Slone, D.H.
2011-01-01
Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10^-11 and invasion time error to <5%.
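The two discretization schemes compared above can be reproduced in a few lines for a one-dimensional Gaussian kernel on unit cells: sample the density at each cell center versus integrate the exact mass over each cell. The bandwidth below is chosen in the small-kernel regime where the paper reports errors above 10%; it is illustrative only.

```python
import numpy as np
from scipy.special import erf

# Two ways to discretise a 1-D Gaussian dispersal kernel onto unit grid
# cells: sample the density at the cell centre, or integrate the density
# over the cell (the cell-integration method).

sigma = 0.25                                      # small kernel relative to cells
cells = np.arange(-5, 6)                          # cell centres

centre = np.exp(-0.5 * (cells / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
centre /= centre.sum()                            # renormalise the point samples

def cdf(x):                                       # Gaussian CDF with scale sigma
    return 0.5 * (1 + erf(x / (sigma * np.sqrt(2))))

integrated = cdf(cells + 0.5) - cdf(cells - 0.5)  # exact mass in each cell

print("cell-centre :", np.round(centre, 4))
print("integrated  :", np.round(integrated, 4))
# For sigma well below the cell size the two discretisations diverge,
# which is the regime where the paper reports kernel errors above 10%.
```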
Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)
NASA Astrophysics Data System (ADS)
Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.
2016-08-01
Anthraquinones (AQS) represent a group of secondary metabolic products in plants. AQS occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by an enzymatic browning reaction in Chinese chestnut kernels. To find out whether a non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained high amounts of AQS. Thus, we confirmed that AQS can be produced during both enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.
Structural graph-based morphometry: A multiscale searchlight framework based on sulcal pits.
Takerkart, Sylvain; Auzias, Guillaume; Brun, Lucile; Coulon, Olivier
2017-01-01
Studying the topography of the cortex has proved valuable in order to characterize populations of subjects. In particular, the recent interest towards the deepest parts of the cortical sulci - the so-called sulcal pits - has opened new avenues in that regard. In this paper, we introduce the first fully automatic brain morphometry method based on the study of the spatial organization of sulcal pits - Structural Graph-Based Morphometry (SGBM). Our framework uses attributed graphs to model local patterns of sulcal pits, and further relies on three original contributions. First, a graph kernel is defined to provide a new similarity measure between pit-graphs, with few parameters that can be efficiently estimated from the data. Secondly, we present the first searchlight scheme dedicated to brain morphometry, yielding dense information maps covering the full cortical surface. Finally, a multi-scale inference strategy is designed to jointly analyze the searchlight information maps obtained at different spatial scales. We demonstrate the effectiveness of our framework by studying gender differences and cortical asymmetries: we show that SGBM can both localize informative regions and estimate their spatial scales, while providing results which are consistent with the literature. Thanks to the modular design of our kernel and the vast array of available kernel methods, SGBM can easily be extended to include a more detailed description of the sulcal patterns and solve different statistical problems. Therefore, we suggest that our SGBM framework should be useful for both reaching a better understanding of the normal brain and defining imaging biomarkers in clinical settings. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Elarab, Manal; Ticlavilca, Andres M.; Torres-Rua, Alfonso F.; Maslova, Inga; McKee, Mac
2015-12-01
Precision agriculture requires high-resolution information to enable greater precision in the management of inputs to production. Actionable information about crop and field status must be acquired at high spatial resolution and at a temporal frequency appropriate for timely responses. In this study, high spatial resolution imagery was obtained through the use of a small, unmanned aerial system called AggieAir™. Simultaneously with the AggieAir flights, intensive ground sampling for plant chlorophyll was conducted at precisely determined locations. This study reports the application of a relevance vector machine coupled with cross validation and backward elimination to a dataset composed of reflectance from high-resolution multi-spectral imagery (VIS-NIR), thermal infrared imagery, and vegetative indices, in conjunction with in situ SPAD measurements from which chlorophyll concentrations were derived, to estimate chlorophyll concentration from remotely sensed data at 15-cm resolution. The results indicate that a relevance vector machine with a thin plate spline kernel type and kernel width of 5.4, having LAI, NDVI, thermal and red bands as the selected set of inputs, can be used to spatially estimate chlorophyll concentration with a root-mean-squared error of 5.31 μg cm⁻², efficiency of 0.76, and 9 relevance vectors.
A qualitative interpretation of 7 August 1972 impulsive phase flare H alpha line profiles
NASA Technical Reports Server (NTRS)
Canfield, R. C.
1982-01-01
The considered investigation shows that existing models of the formation of the H-alpha line during flares appear to provide clear qualitative evidence that heating of the H-alpha forming regions of the flare chromosphere in the bright H-alpha kernels observed during the impulsive phase of solar flares is not due primarily to heating by Coulomb collisions of a power-law distribution of 10-100 keV electrons with chromospheric material. It appears rather that some shorter-range process, involving possibly conduction or optically thick radiative transfer, is favored. Such a conclusion is clearly relevant to collisionless confinement modelling. However, much work remains to be done before there will be a basis for quantitatively testing the consistency of the considered picture with chromospheric diagnostics.
Broken rice kernels and the kinetics of rice hydration and texture during cooking.
Saleh, Mohammed; Meullenet, Jean-Francois
2013-05-01
During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear as to how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work intended to study the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels forming treatments of 0, 40, 150, 350 or 1000 g kg(-1) broken kernels ratio. Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became increasingly softer (P < 0.05) but the unbroken kernels became significantly harder. Moisture content and moisture uptake rate were positively correlated, and cooked rice hardness was negatively correlated to the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.
Brodbeck, Christian; Presacco, Alessandro; Simon, Jonathan Z
2018-05-15
Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data, by combining linear kernel estimation with minimum norm source localization. Previous research has shown that the requirement to average data over many trials can be overcome by modeling the brain response as a linear convolution of the stimulus and a kernel, or response function, and estimating a kernel that predicts the response from the stimulus. However, such analysis has been typically restricted to sensor space. Here we demonstrate that this analysis can also be performed in neural source space. We first computed distributed minimum norm current source estimates for continuous MEG recordings, and then computed response functions for the current estimate at each source element, using the boosting algorithm with cross-validation. Permutation tests can then assess the significance of individual predictor variables, as well as features of the corresponding spatio-temporal response functions. We demonstrate the viability of this technique by computing spatio-temporal response functions for speech stimuli, using predictor variables reflecting acoustic, lexical and semantic processing. Results indicate that processes related to comprehension of continuous speech can be differentiated anatomically as well as temporally: acoustic information engaged auditory cortex at short latencies, followed by responses over the central sulcus and inferior frontal gyrus, possibly related to somatosensory/motor cortex involvement in speech perception; lexical frequency was associated with a left-lateralized response in auditory cortex and subsequent bilateral frontal activity; and semantic composition was associated with bilateral temporal and frontal brain activity. We conclude that this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG. This suggests new avenues for analyzing neural processing of naturalistic stimuli, without the necessity of averaging over artificially short or truncated stimuli. Copyright © 2018 Elsevier Inc. All rights reserved.
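A minimal sketch of the convolution-model kernel estimation underlying this approach, on synthetic data: the response is modeled as the stimulus convolved with an unknown kernel, and the kernel is recovered from a lagged design matrix. The paper estimates its response functions with boosting and cross-validation in source space; ridge regression on one synthetic channel merely keeps the illustration short, and the penalty value is an assumption.

```python
import numpy as np

# Response-function (kernel) estimation for a continuous stimulus:
# model response[t] = sum_k kernel[k] * stimulus[t - k] + noise and
# recover the kernel by ridge regression on a lagged design matrix.

rng = np.random.default_rng(7)
n, n_lags = 5000, 40
stimulus = rng.normal(size=n)
true_kernel = np.exp(-np.arange(n_lags) / 8.0) * np.sin(np.arange(n_lags) / 3.0)
response = np.convolve(stimulus, true_kernel)[:n] + rng.normal(0, 0.5, n)

# Lagged design matrix: column k holds the stimulus delayed by k samples.
X = np.zeros((n, n_lags))
for k in range(n_lags):
    X[k:, k] = stimulus[:n - k]

lam = 1.0                                         # ridge penalty (assumed)
kernel_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ response)
print("kernel correlation:",
      np.corrcoef(kernel_hat, true_kernel)[0, 1])  # close to 1
```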
On the Mean Squared Error of Nonparametric Quantile Estimators under Random Right-Censorship.
1986-09-01
fRMSDPred: Predicting Local RMSD Between Structural Fragments Using Sequence Information
2007-04-04
machine learning approaches for estimating the RMSD value of a pair of protein fragments. These estimated fragment-level RMSD values can be used to construct the alignment, assess the quality of an alignment, and identify high-quality alignment segments. We present algorithms to solve this fragment-level RMSD prediction problem using a supervised learning framework based on support vector regression and classification that incorporates protein profiles, predicted secondary structure, effective information encoding schemes, and novel second-order pairwise exponential kernel
Asymptotics of nonparametric L-1 regression models with dependent data
ZHAO, ZHIBIAO; WEI, YING; LIN, DENNIS K.J.
2013-01-01
We investigate asymptotic properties of least-absolute-deviation or median quantile estimates of the location and scale functions in nonparametric regression models with dependent data from multiple subjects. Under a general dependence structure that allows for longitudinal data and some spatially correlated data, we establish uniform Bahadur representations for the proposed median quantile estimates. The obtained Bahadur representations provide deep insights into the asymptotic behavior of the estimates. Our main theoretical development is based on studying the modulus of continuity of kernel weighted empirical process through a coupling argument. Progesterone data is used for an illustration. PMID:24955016
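A small illustrative sketch of the kind of estimator studied here: a kernel-weighted local median (least-absolute-deviation) estimate of the location function, which stays stable under heavy-tailed noise that defeats least-squares smoothing. The kernel, bandwidth and data are made up; this is the local-constant version, without the paper's dependence structure or Bahadur-representation analysis.

```python
import numpy as np

def weighted_median(values, weights):
    """Median of `values` under nonnegative `weights`."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def local_median_fit(x, y, grid, bandwidth):
    """Kernel-weighted local median estimate of the location function:
    at each grid point, the median of y weighted by a Gaussian kernel
    in x (a minimal least-absolute-deviation smoother)."""
    est = np.empty_like(grid)
    for i, g in enumerate(grid):
        w = np.exp(-0.5 * ((x - g) / bandwidth) ** 2)
        est[i] = weighted_median(y, w)
    return est

rng = np.random.default_rng(8)
x = np.sort(rng.uniform(0, 10, 400))
y = np.sin(x) + rng.standard_t(df=2, size=400) * 0.3   # heavy-tailed noise
grid = np.linspace(0, 10, 50)
fit = local_median_fit(x, y, grid, bandwidth=0.5)
print("max abs error vs sin:", np.abs(fit - np.sin(grid)).max())
```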
NASA Astrophysics Data System (ADS)
Porz, Lucas; Grombein, Thomas; Seitz, Kurt; Heck, Bernhard; Wenzel, Friedemann
2017-04-01
Regional height reference systems are generally related to individual vertical datums defined by specific tide gauges. The discrepancies of these vertical datums with respect to a unified global datum cause height system biases that range on the order of 1-2 m at a global scale. One approach for unification of height systems relates to the solution of a Geodetic Boundary Value Problem (GBVP). In particular, the fixed GBVP, using gravity disturbances as boundary values, is solved at GNSS/leveling benchmarks, whereupon height datum offsets can be estimated by least squares adjustment. In spherical approximation, the solution of the fixed GBVP is obtained by Hotine's spherical integral formula. However, this method relies on the global availability of gravity data. In practice, gravity data of the necessary resolution and accuracy is not accessible globally. Thus, the integration is restricted to an area within the vicinity of the computation points. The resulting truncation error can reach several meters in height, making height system unification without further consideration of this effect unfeasible. This study analyzes methods for reducing the truncation error by combining terrestrial gravity data with satellite-based global geopotential models and by modifying the integral kernel in order to accelerate the convergence of the resulting potential. For this purpose, EGM2008-derived gravity functionals are used as pseudo-observations to be integrated numerically. Geopotential models of different spectral degrees are implemented using a remove-restore-scheme. Three types of modification are applied to the Hotine-kernel and the convergence of the resulting potential is analyzed. In a further step, the impact of these operations on the estimation of height datum offsets is investigated within a closed loop simulation. A minimum integration radius in combination with a specific modification of the Hotine-kernel is suggested in order to achieve sub-cm accuracy for the estimation of height datum offsets.
Nonlinear Deep Kernel Learning for Image Annotation.
Jiu, Mingyuan; Sahbi, Hichem
2017-02-08
Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each involving a combination of several elementary or intermediate kernels, and resulting in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gains, compared to several shallow kernels, for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
Multineuron spike train analysis with R-convolution linear combination kernel.
Tezuka, Taro
2018-06-01
A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Haryanto, B.; Bukit, R. Br; Situmeang, E. M.; Christina, E. P.; Pandiangan, F.
2018-02-01
The purpose of this study was to determine the performance, productivity and feasibility of the operation of a palm kernel processing plant based on the Energy Productivity Ratio (EPR). EPR is expressed as the ratio of output energy and by-products to input energy. A palm kernel plant processes palm kernels into palm kernel oil. The procedure started from collecting the data needed as energy inputs, such as palm kernel prices, energy demand and depreciation of the factory. The energy output and its by-products comprise the whole production value, such as the palm kernel oil price and the prices of the remaining products, such as shells and pulp. The energy equivalent of palm kernel oil was calculated to analyze the value of the EPR based on the processing capacity per year. The investigation was carried out at the kernel oil processing plant PT-X at a Sumatera Utara plantation. The value of the EPR was 1.54 (EPR > 1), which indicated that the processing of palm kernels into palm kernel oil is feasible to operate based on energy productivity.
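The EPR calculation itself is a simple ratio; the sketch below shows the bookkeeping with made-up input and output values (chosen so the ratio lands near the reported 1.54), not the plant's actual figures.

```python
# Toy Energy Productivity Ratio (EPR) check with hypothetical numbers; the
# abstract defines EPR as output energy (main product plus by-products)
# over input energy, with EPR > 1 indicating a feasible operation.

inputs = {                      # hypothetical annual input values
    "palm_kernel_purchase": 6.0e6,
    "process_energy": 1.5e6,
    "factory_depreciation": 0.5e6,
}
outputs = {                     # hypothetical annual output values
    "palm_kernel_oil": 10.5e6,
    "shells_and_pulp": 1.8e6,
}

epr = sum(outputs.values()) / sum(inputs.values())
print(f"EPR = {epr:.2f} -> {'feasible' if epr > 1 else 'not feasible'}")
# The study itself reports EPR = 1.54 for the plant investigated.
```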
Hazard Function Estimation with Cause-of-Death Data Missing at Random.
Wang, Qihua; Dinse, Gregg E; Liu, Chunling
2012-04-01
Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.
Small-scale modification to the lensing kernel
NASA Astrophysics Data System (ADS)
Hadzhiyska, Boryana; Spergel, David; Dunkley, Joanna
2018-02-01
Calculations of the cosmic microwave background (CMB) lensing power implemented into the standard cosmological codes such as camb and class usually treat the surface of last scatter as an infinitely thin screen. However, since the CMB anisotropies are smoothed out on scales smaller than the diffusion length due to the effect of Silk damping, the photons which carry information about the small-scale density distribution come from slightly earlier times than the standard recombination time. The dominant effect is the scale dependence of the mean redshift associated with the fluctuations during recombination. We find that fluctuations at k =0.01 Mpc-1 come from a characteristic redshift of z ≈1090 , while fluctuations at k =0.3 Mpc-1 come from a characteristic redshift of z ≈1130 . We then estimate the corrections to the lensing kernel and the related power spectra due to this effect. We conclude that neglecting it would result in a deviation from the true value of the lensing kernel at the half percent level at small CMB scales. For an all-sky, noise-free experiment, this corresponds to a ˜0.1 σ shift in the observed temperature power spectrum on small scales (2500 ≲l ≲4000 ).
Rastgarpour, Maryam; Shanbehzadeh, Jamshid
2014-01-01
Researchers have recently applied an integrative approach to automate medical image segmentation, benefiting from available methods while eliminating their disadvantages. Intensity inhomogeneity is a challenging and open problem in this area which has received less attention from this approach, and it has considerable effects on segmentation accuracy. This paper proposes a new kernel-based fuzzy level set algorithm using an integrative approach to deal with this problem. It can evolve directly from the initial level set obtained by Gaussian Kernel-Based Fuzzy C-Means (GKFCM). The controlling parameters of the level set evolution are also estimated from the results of GKFCM. Moreover, the proposed algorithm is enhanced with locally regularized evolution based on an image model that describes the composition of real-world images, in which intensity inhomogeneity is assumed to be a component of an image. Such improvements make level set manipulation easier and lead to more robust segmentation under intensity inhomogeneity. The proposed algorithm has valuable benefits including automation, invariance to intensity inhomogeneity, and high accuracy. Performance evaluation of the proposed algorithm was carried out on medical images from different modalities. The results confirm its effectiveness for medical image segmentation.
Blind motion image deblurring using nonconvex higher-order total variation model
NASA Astrophysics Data System (ADS)
Li, Weihong; Chen, Rui; Xu, Shangwen; Gong, Weiguo
2016-09-01
We propose a nonconvex higher-order total variation (TV) method for blind motion image deblurring. First, we introduce a nonconvex higher-order TV differential operator to define a new model of blind motion image deblurring, which can effectively eliminate the staircase effect in the deblurred image; meanwhile, we employ an image sparsity prior to improve the quality of edge recovery. Second, to improve the accuracy of the estimated motion blur kernel, we use the L1 norm and the H1 norm as the blur kernel regularization terms, accounting for the sparsity and smoothness of the motion blur kernel. Third, because the proposed model is computationally difficult to solve owing to its intrinsic nonconvexity, we propose a binary iterative strategy, which incorporates a reweighted minimization approximation scheme in the outer iteration and a split Bregman algorithm in the inner iteration; we also discuss the convergence of this strategy. Finally, we conduct extensive experiments on both synthetic and real-world degraded images. The results demonstrate that the proposed method outperforms previous representative methods in both quality of visual perception and quantitative measurement.
2013-01-01
Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...
An SVM model with hybrid kernels for hydrological time series
NASA Astrophysics Data System (ADS)
Wang, C.; Wang, H.; Zhao, X.; Xie, Q.
2017-12-01
Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as the hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., the radial basis kernel function. Given that several kernel functions are available, each with its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to the SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of the radial basis kernel and the polynomial kernel for the forecast of monthly flowrate at two gaging stations using the SVM approach. The results indicate a significant improvement in the accuracy of the predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such a hybrid kernel approach for SVM applications.
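A minimal sketch of such a hybrid kernel in Python: a convex combination of an RBF and a polynomial kernel passed to scikit-learn's SVR as a callable (a convex combination of positive semi-definite kernels is itself a valid kernel). The weight, kernel parameters and synthetic data are illustrative; the paper's streamflow data and tuned parameters are not reproduced.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

# Hybrid kernel: a convex combination of an RBF and a polynomial kernel.
# The weight w, gamma and degree are illustrative, not the paper's values.
w, gamma, degree = 0.7, 0.5, 2

def hybrid_kernel(X, Y):
    return (w * rbf_kernel(X, Y, gamma=gamma)
            + (1 - w) * polynomial_kernel(X, Y, degree=degree))

rng = np.random.default_rng(9)
X = rng.uniform(-2, 2, size=(200, 3))             # stand-in for predictors
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2 + rng.normal(0, 0.1, 200)

model = SVR(kernel=hybrid_kernel, C=10.0).fit(X[:150], y[:150])
print("holdout R^2:", round(model.score(X[150:], y[150:]), 3))
```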
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallel approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle to using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance in terms of clustering precision than related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
Multiple kernels learning-based biological entity relationship extraction method.
Dongliang, Xu; Jingchang, Pan; Bailing, Wang
2017-09-20
Automatically extracting protein entity interaction information from the biomedical literature can help to build protein relation networks and design new drugs. MEDLINE, the most authoritative textual database in the field of biomedicine, contains more than 20 million literature abstracts, and their number grows exponentially over time. This rapid expansion of the biomedical literature is difficult to absorb or analyze manually, so efficient, automated search engines based on text mining techniques are necessary to explore it. The precision (P), recall (R), and F values of the tag graph kernel method on the AIMed corpus are 50.82, 69.76, and 58.61%, respectively, and on the other four evaluation corpora its P, R, and F values are 2-5% higher than those of the all-paths graph kernel. The two methods fusing the feature kernel with the tag graph kernel achieve P, R, and F values of 53.43, 71.62, and 61.30% and of 55.47, 70.29, and 60.37%, respectively, indicating that both kernel-fusion methods perform better than a single kernel. In comparison with the all-paths graph kernel method, the tag graph kernel method is superior in overall performance. Experiments show that the multi-kernel method performs better than the three separate single-kernel methods and the dual-mutually fused kernel method used here across the five corpus sets.
NASA Astrophysics Data System (ADS)
Sachtler, W. M. H.
1984-11-01
In equilibrium, the composition of the surface of an alloy will, in general, differ from that of the bulk. The broken-bond model is applicable to alloys with atoms of virtually equal size. If the heat of alloy formation is zero, the component with the lower heat of atomization is found enriched at the surface. If both partners have equal heats of sublimation, the surface of a dilute alloy is enriched with the minority component. Size effects can enhance or weaken the electronic effects. In general, lattice strain can be relaxed by precipitating atoms of deviating size on the surface. Two-phase alloys are described by the “cherry model”: one alloy phase, the “kernel”, is surrounded by another alloy, the “flesh”, and the surface of the outer phase, the “skin”, displays a deviating surface composition as in monophasic alloys. In the presence of molecules capable of forming chemical bonds with individual metal atoms, “chemisorption-induced surface segregation” can be observed at low temperatures, i.e., the surface becomes enriched with the metal forming the stronger chemisorption bonds.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...
Entanglement entropy of electromagnetic edge modes.
Donnelly, William; Wall, Aron C
2015-03-20
The vacuum entanglement entropy of Maxwell theory, when evaluated by standard methods, contains an unexpected term with no known statistical interpretation. We resolve this two-decade-old puzzle by showing that this term is the entanglement entropy of edge modes: classical solutions determined by the electric field normal to the entangling surface. We explain how the heat kernel regularization applied to this term leads to the negative divergent expression found by Kabat. This calculation also resolves a recent puzzle concerning the logarithmic divergences of gauge fields in 3+1 dimensions.
Nonrelativistic trace and diffeomorphism anomalies in particle number background
NASA Astrophysics Data System (ADS)
Auzzi, Roberto; Baiguera, Stefano; Nardelli, Giuseppe
2018-04-01
Using the heat kernel method, we compute nonrelativistic trace anomalies for Schrödinger theories in flat spacetime, with a generic background gauge field for the particle number symmetry, both for a free scalar and a free fermion. The result is genuinely nonrelativistic, and it has no counterpart in the relativistic case. Contrary to naive expectations, the anomaly is not gauge invariant; this is similar to the nongauge covariance of the non-Abelian relativistic anomaly. We also show that, in the same background, the gravitational anomaly for a nonrelativistic scalar vanishes.
Methods for estimating water consumption for thermoelectric power plants in the United States
Diehl, Timothy H.; Harris, Melissa; Murphy, Jennifer C.; Hutson, Susan S.; Ladd, David E.
2013-01-01
Heat budgets were constructed for the first four generation-type categories; data at solar thermal plants were insufficient for heat budgets. These heat budgets yielded estimates of the amount of heat transferred to the condenser. The ratio of evaporation to the heat discharged through the condenser was estimated using existing heat balance models that are sensitive to environmental data; this feature allows estimation of consumption under different climatic conditions. These two estimates were multiplied to yield an estimate of consumption at each power plant.
Adaptive MCMC in Bayesian phylogenetics: an application to analyzing partitioned data in BEAST.
Baele, Guy; Lemey, Philippe; Rambaut, Andrew; Suchard, Marc A
2017-06-15
Advances in sequencing technology continue to deliver increasingly large molecular sequence datasets that are often heavily partitioned in order to accurately model the underlying evolutionary processes. In phylogenetic analyses, partitioning strategies involve estimating conditionally independent models of molecular evolution for different genes and for different positions within those genes, requiring a large number of evolutionary parameters to be estimated and leading to an increased computational burden for such analyses. The past two decades have also seen the rise of multi-core processors in both the central processing unit (CPU) and graphics processing unit (GPU) markets, enabling massively parallel computations that are not yet fully exploited by many software packages for multi-partition analyses. We here propose a Markov chain Monte Carlo (MCMC) approach using an adaptive multivariate transition kernel to estimate in parallel a large number of parameters, split across partitioned data, by exploiting multi-core processing. Across several real-world examples, we demonstrate that our approach enables the estimation of these partition-specific parameters more efficiently than standard approaches that typically use a mixture of univariate transition kernels. In one case, when estimating the relative rate parameter of the non-coding partition in a heterochronous dataset, MCMC integration efficiency improves by >14-fold. Our implementation is part of the BEAST code base, a widely used open-source software package for Bayesian phylogenetic inference. guy.baele@kuleuven.be. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
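A hedged sketch of the general idea, not the BEAST implementation: a Haario-style adaptive random-walk Metropolis sampler whose multivariate proposal covariance is learned from the chain history (the target, the step scaling, and the adaptation schedule are illustrative assumptions):

```python
import numpy as np

def adaptive_metropolis(logpost, x0, n_iter=5000, adapt_start=500, seed=0):
    # adaptive multivariate random-walk Metropolis: after a burn-in,
    # the proposal covariance is re-estimated from the chain history
    rng = np.random.default_rng(seed)
    d = len(x0)
    chain = np.empty((n_iter, d))
    x, lp = np.asarray(x0, float), logpost(x0)
    cov = 0.1 * np.eye(d)
    for i in range(n_iter):
        if i > adapt_start:
            # Haario-style adaptation with a small regularizing jitter
            cov = np.cov(chain[:i].T) * (2.38 ** 2 / d) + 1e-6 * np.eye(d)
        prop = rng.multivariate_normal(x, cov)
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

samples = adaptive_metropolis(lambda x: -0.5 * np.sum(x ** 2), np.zeros(2))
print(samples[1000:].mean(0), samples[1000:].std(0))
```

Jointly proposing correlated parameters through one learned covariance is what lets a single transition kernel replace a mixture of univariate moves.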
New Estimates of Land Use Intensity of Potential Bioethanol Production in the U.S.A.
NASA Astrophysics Data System (ADS)
Kheshgi, H. S.; Song, Y.; Torkamani, S.; Jain, A. K.
2016-12-01
We estimate potential bioethanol land use intensity (the inverse of potential bioethanol yield per hectare) across the United States by modeling crop yields and conversion to bioethanol (via a fermentation pathway), based on crop field studies and conversion technology analyses. We apply the process-based land surface model, the Integrated Science Assessment Model (ISAM), to estimate the potential yield of four crops (corn, Miscanthus, and two variants of switchgrass, Cave-in-Rock and Alamo) across the U.S.A. landscape for the 14-year period from 1999 through 2012, for the case with fertilizer application but without irrigation. We estimate bioethanol yield based on recent experience with corn bioethanol production from corn kernel, and on current cellulosic bioethanol process design specifications, under the assumption of the maximum practical harvest fraction for the energy grasses (Miscanthus and switchgrasses) and a moderate (30%) harvest fraction of corn stover. We find that each of the four crops has regions where it is estimated to have the lowest land use intensity (highest potential bioethanol yield per hectare). We also find that minimizing potential land use intensity by including both corn and the energy grasses improves only incrementally on corn alone (using both harvested kernel and stover for bioethanol). Bioethanol land use intensity is one fundamental factor influencing the desirability of biofuels, but not the only one; other factors include economics, competition with food production and land use, water and climate, nitrogen runoff, life-cycle emissions, and the pace of crop and technology improvement into the future.
Code of Federal Regulations, 2014 CFR
2014-01-01
...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...
Code of Federal Regulations, 2013 CFR
2013-01-01
...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will not...
7 CFR 51.2296 - Three-fourths half kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...
The Classification of Diabetes Mellitus Using Kernel k-means
NASA Astrophysics Data System (ADS)
Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.
2018-01-01
Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia, and its automatic detection is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means was developed from the k-means algorithm: it uses kernel learning to handle data that are not linearly separable, which distinguishes it from ordinary k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the self-organizing map (SOM) algorithm. The experimental results show that kernel k-means performs well, and much better than SOM.
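A compact sketch of kernel k-means (the RBF kernel, its width, and the random initialization are assumptions; the study's SOM comparison is not reproduced). Distances to cluster centroids are computed implicitly in feature space via the kernel trick:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_kmeans(X, k, gamma=0.5, iters=20, seed=0):
    # ||phi(x) - mu_c||^2 = K(x,x) - 2*mean_j K(x,j) + mean_jj' K(j,j'),
    # evaluated without ever forming the feature map phi explicitly
    K = rbf_kernel(X, gamma=gamma)
    n = len(X)
    labels = np.random.default_rng(seed).integers(0, k, n)
    for _ in range(iters):
        D = np.full((n, k), np.inf)
        for c in range(k):
            m = labels == c
            if m.any():
                D[:, c] = (K.diagonal() - 2 * K[:, m].mean(1)
                           + K[np.ix_(m, m)].mean())
        labels = D.argmin(1)
    return labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
print(np.bincount(kernel_kmeans(X, k=2)))  # roughly 50/50 split
```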
UNICOS Kernel Internals Application Development
NASA Technical Reports Server (NTRS)
Caredo, Nicholas; Craw, James M. (Technical Monitor)
1995-01-01
An understanding of UNICOS kernel internals is valuable. However, having the knowledge is only half the value; the second half comes from knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be exploited. This paper discusses the intricacies of developing utilities that use kernel information. In addition, algorithms, logic, and code for accessing kernel information will be discussed. Code segments will be provided that demonstrate how to locate and read kernel structures. Types of applications that can make use of kernel information will also be discussed.
Detection of maize kernels breakage rate based on K-means clustering
NASA Astrophysics Data System (ADS)
Yang, Liang; Wang, Zhuo; Gao, Lei; Bai, Xiaoping
2017-04-01
In order to optimize the recognition accuracy of maize kernel breakage detection and to improve detection efficiency, this paper applies computer vision techniques and detects maize kernel breakage with the K-means clustering algorithm. First, the collected RGB images are converted to Lab images; then the clarity of the original images is evaluated with a Sobel 8-direction gradient energy function. Finally, breakage detection is carried out using different image acquisition devices and different shooting angles. In this paper, broken maize kernels are identified by the color difference between intact and broken kernels. The clarity evaluation and the varied shooting angles verify that image clarity and shooting angle directly influence feature extraction. The results show that the K-means clustering algorithm can distinguish broken maize kernels effectively.
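A hedged sketch of the color-clustering step (the cluster count and the skimage/scikit-learn tooling are assumptions; the paper's clarity evaluation and shooting-angle experiments are not reproduced):

```python
import numpy as np
from skimage import color
from sklearn.cluster import KMeans

def segment_kernels_lab(rgb_img, n_clusters=3):
    # convert RGB to Lab so color differences between intact and broken
    # kernel surfaces are closer to perceptually uniform, then cluster
    lab = color.rgb2lab(rgb_img)
    pixels = lab.reshape(-1, 3)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixels)
    return labels.reshape(rgb_img.shape[:2])

rgb = np.random.default_rng(2).random((64, 64, 3))  # stand-in image
mask = segment_kernels_lab(rgb)
print(np.bincount(mask.ravel()))  # pixel count per color cluster
```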
Graphical and Numerical Descriptive Analysis: Exploratory Tools Applied to Vietnamese Data
ERIC Educational Resources Information Center
Haughton, Dominique; Phong, Nguyen
2004-01-01
This case study covers several exploratory data analysis ideas, the histogram and boxplot, kernel density estimates, the recently introduced bagplot--a two-dimensional extension of the boxplot--as well as the violin plot, which combines a boxplot with a density shape plot. We apply these ideas and demonstrate how to interpret the output from these…
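A minimal sketch of the plots mentioned (the log-normal data are fabricated purely for illustration, not the Vietnamese survey data):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
x = np.exp(rng.normal(8.0, 0.6, 300))         # hypothetical skewed data

fig, axes = plt.subplots(1, 3, figsize=(10, 3))
axes[0].hist(x, bins=30)                       # histogram
axes[1].boxplot(x)                             # boxplot
grid = np.linspace(x.min(), x.max(), 200)
axes[2].plot(grid, gaussian_kde(x)(grid))      # kernel density estimate
plt.show()
# plt.violinplot(x) overlays the density shape on a boxplot-style summary
```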
Two specific fires from 2011 are tracked for local to regional scale contribution to ozone (O3) and fine particulate matter (PM2.5) using a freely available regulatory modeling system that includes the BlueSky wildland fire emissions tool, Sparse Matrix Operator Kernel Emissions (...
Representing uncertainty in a spatial invasion model that incorporates human-mediated dispersal
Frank H. Koch; Denys Yemshanov; Robert A. Haack
2013-01-01
Most modes of human-mediated dispersal of invasive species are directional and vector-based. Classical spatial spread models usually depend on probabilistic dispersal kernels that emphasize distance over direction and have limited ability to depict rare but influential long-distance dispersal events. These aspects are problematic if such models are used to estimate...
ERIC Educational Resources Information Center
Moses, Tim
2008-01-01
Equating functions are supposed to be population invariant, meaning that the choice of subpopulation used to compute the equating function should not matter. The extent to which equating functions are population invariant is typically assessed in terms of practical difference criteria that do not account for equating functions' sampling…
Harnessing AIA Diffraction Patterns to Determine Flare Footpoint Temperatures
NASA Astrophysics Data System (ADS)
Bain, H. M.; Schwartz, R. A.; Torre, G.; Krucker, S.; Raftery, C. L.
2014-12-01
In the "Standard Flare Model" energy from accelerated electrons is deposited at the footpoints of newly reconnected flare loops, heating the surrounding plasma. Understanding the relation between the multi-thermal nature of the footpoints and the energy flux from accelerated electrons is therefore fundamental to flare physics. Extreme ultraviolet (EUV) images of bright flare kernels, obtained from the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory, are often saturated despite the implementation of automatic exposure control. These kernels produce diffraction patterns often seen in AIA images during the most energetic flares. We implement an automated image reconstruction procedure, which utilizes diffraction pattern artifacts, to de-saturate AIA images and reconstruct the flare brightness in saturated pixels. Applying this technique to recover the footpoint brightness in each of the AIA EUV passbands, we investigate the footpoint temperature distribution. Using observations from the Ramaty High Energy Solar Spectroscopic Imager (RHESSI), we will characterize the footpoint accelerated electron distribution of the flare. By combining these techniques, we investigate the relation between the nonthermal electron energy flux and the temperature response of the flare footpoints.
A robust background regression based score estimation algorithm for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei
2016-12-01
Anomaly detection has become a hot topic in hyperspectral image analysis and processing in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression: unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may obscure the intrinsic data structure in anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to form the foundation of the regression. Furthermore, a manifold regularization term, which exploits the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by potential anomalies, are jointly appended to the RBR procedure. After this, a paired-dataset based k-NN score estimation method is applied to the robust background and potential anomaly datasets to produce the detection output. The experimental results show that RBRSE achieves better ROC curves, AUC values, and background-anomaly separation than several other state-of-the-art anomaly detection methods, and is easy to implement in practice.
Millerón, M; López de Heredia, U; Lorenzo, Z; Alonso, J; Dounavi, A; Gil, L; Nanos, N
2013-03-01
Spatial discordance between primary and effective dispersal in plant populations indicates that postdispersal processes erase the seed rain signal in recruitment patterns. Five different models were used to test the spatial concordance of the primary and effective dispersal patterns in a European beech (Fagus sylvatica) population from central Spain. An ecological method was based on classical inverse modelling (SSS), using the number of seed/seedlings as input data. Genetic models were based on direct kernel fitting of mother-to-offspring distances estimated by a parentage analysis or were spatially explicit models based on the genotype frequencies of offspring (competing sources model and Moran-Clark's Model). A fully integrated mixed model was based on inverse modelling, but used the number of genotypes as input data (gene shadow model). The potential sources of error and limitations of each seed dispersal estimation method are discussed. The mean dispersal distances for seeds and saplings estimated with these five methods were higher than those obtained by previous estimations for European beech forests. All the methods show strong discordance between primary and effective dispersal kernel parameters, and for dispersal directionality. While seed rain was released mostly under the canopy, saplings were established far from mother trees. This discordant pattern may be the result of the action of secondary dispersal by animals or density-dependent effects; that is, the Janzen-Connell effect. © 2013 Blackwell Publishing Ltd.
Parallel mutual information estimation for inferring gene regulatory networks on GPUs
2011-01-01
Background Mutual information is a measure of similarity between two variables. It has been widely used in various application domains including computational biology, machine learning, statistics, image processing, and financial computing. The simple histogram-based mutual information estimators used previously lack precision compared with kernel-based methods. The recently introduced B-spline function based mutual information estimation method is competitive with the kernel-based methods in terms of quality but at a lower computational complexity. Results We present a new approach to accelerate the B-spline function based mutual information estimation algorithm with commodity graphics hardware. To derive an efficient mapping onto this type of architecture, we have used the Compute Unified Device Architecture (CUDA) programming model to design and implement a new parallel algorithm. Our implementation, called CUDA-MI, can achieve speedups of up to 82-fold using double precision on a single GPU compared with a multi-threaded implementation on a quad-core CPU for large microarray datasets. We have used the results obtained by CUDA-MI to infer gene regulatory networks (GRNs) from microarray data. Comparisons with existing methods, including ARACNE and TINGe, show that CUDA-MI produces GRNs of higher quality in less time. Conclusions CUDA-MI is publicly available open-source software, written in the CUDA and C++ programming languages. It obtains significant speedup over a multi-threaded CPU implementation by fully exploiting the compute capability of commonly used CUDA-enabled low-cost GPUs. PMID:21672264
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
NASA Astrophysics Data System (ADS)
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
Arimboor, Ranjith; Kumar, K Sarin; Arumughan, C
2008-05-12
A RP-HPLC-DAD method was developed and validated for the simultaneous analysis of nine phenolic acids, namely gallic acid, protocatechuic acid, p-hydroxybenzoic acid, vanillic acid, salicylic acid, p-coumaric acid, cinnamic acid, caffeic acid and ferulic acid, in sea buckthorn (SB) (Hippophaë rhamnoides) berries and leaves. The method was validated in terms of linearity, LOD, precision, accuracy and recovery and found to be satisfactory. Phenolic acid derivatives in anatomical parts of SB berries and leaves were separated into free phenolic acids, phenolic acids bound as esters and phenolic acids bound as glycosides, and profiled by HPLC. Berry pulp contained a total of 1068 mg/kg phenolic acids, of which 58.8% was derived from phenolic glycosides. Free phenolic acids and phenolic acid esters constituted 20.0% and 21.2%, respectively, of total phenolic acids in SB berry pulp. The total phenolic acid content in seed kernel (5741 mg/kg) was higher than that in berry pulp and seed coat (Table 2). Phenolic acids liberated from soluble esters constituted the major fraction of phenolic acids (57.3% of total phenolic acids) in seed kernel. Free phenolic acids and phenolic acids liberated from glycosidic bonds contributed 8.4% and 34.3%, respectively, of total phenolic acids in seed kernel. The total soluble phenolic acid content in seed coat (448 mg/kg) was lower than that in seed kernel and pulp (Table 2). The proportion of free phenolic acids in total phenolic acids was higher in seed coat than in seed kernel and pulp. Phenolic acids bound as esters and glycosides contributed 49.1% and 20.3%, respectively, of total phenolic acids in seed coat. The major fraction (approximately 70%) of phenolic acids in SB berries was found to be concentrated in the seeds. Gallic acid was the predominant phenolic acid, both in free and bound forms, in SB berry parts and leaves.
Kasaragod, Deepa; Makita, Shuichi; Hong, Young-Joo; Yasuno, Yoshiaki
2017-01-01
This paper presents a noise-stochastic corrected maximum a posteriori estimator for birefringence imaging using Jones matrix optical coherence tomography. The estimator is based on the relationship between the probability distribution functions of the measured birefringence and the effective signal-to-noise ratio (ESNR), as well as the true birefringence and the true ESNR. The Monte Carlo method is used to describe this relationship numerically, and adaptive 2D kernel density estimation provides the likelihood for a posteriori estimation of the true birefringence. The new estimator, which incorporates a stochastic model of the ESNR, shows improved estimation compared with the previous estimator; both are based on the Jones matrix noise model. A comparison with the mean estimator is also performed. Numerical simulation validates the superiority of the new estimator, as does in vivo measurement of the optic nerve head. PMID:28270974
Direct Importance Estimation with Gaussian Mixture Models
NASA Astrophysics Data System (ADS)
Yamada, Makoto; Sugiyama, Masashi
The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention recently since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function exhibits high correlation. Through experiments, we show the validity of the proposed approach.
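For context, a sketch of the baseline KLIEP that GM-KLIEP extends (the Gaussian basis width, center count, step size, and iteration budget are assumptions; GM-KLIEP instead fits the mixture components, including covariances, by EM):

```python
import numpy as np

def kliep(X_nu, X_de, sigma=1.0, eps=1e-3, iters=2000, seed=0):
    # basic KLIEP: model the importance w(x) as a nonnegative mix of
    # Gaussian basis functions centered on numerator samples, maximize
    # the mean log-importance over X_nu subject to mean_de[w(x)] = 1
    rng = np.random.default_rng(seed)
    centers = X_nu[rng.choice(len(X_nu), min(100, len(X_nu)), replace=False)]
    k = lambda X: np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
                         / (2 * sigma ** 2))
    A, b = k(X_nu), k(X_de).mean(axis=0)
    alpha = np.ones(len(centers)) / b.sum()
    for _ in range(iters):
        alpha = alpha + eps * A.T @ (1.0 / (A @ alpha))  # gradient ascent
        alpha = alpha + b * (1 - b @ alpha) / (b @ b)    # equality constraint
        alpha = np.clip(alpha, 0, None)                  # nonnegativity
        alpha = alpha / (b @ alpha)                      # renormalize
    return lambda X: k(X) @ alpha                        # importance w(x)

rng = np.random.default_rng(3)
X_nu = rng.normal(0.0, 1.0, (500, 1))   # test (numerator) samples
X_de = rng.normal(0.5, 1.2, (500, 1))   # training (denominator) samples
w = kliep(X_nu, X_de)
print(w(np.array([[0.0], [2.0]])))      # estimated density ratios
```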
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...
7 CFR 51.1450 - Serious damage.
Code of Federal Regulations, 2010 CFR
2010-01-01
...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...
7 CFR 51.1450 - Serious damage.
Code of Federal Regulations, 2011 CFR
2011-01-01
...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...
7 CFR 51.1450 - Serious damage.
Code of Federal Regulations, 2012 CFR
2012-01-01
...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...
A trace ratio maximization approach to multiple kernel-based dimensionality reduction.
Jiang, Wenhao; Chung, Fu-lai
2014-01-01
Most dimensionality reduction techniques are based on a single metric or kernel; hence, it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels, which are seen as different descriptions of the data. As MKL-DR does not involve regularization, it may be ill-posed under some conditions, which hinders its application. This paper proposes a multiple kernel learning framework for dimensionality reduction based on a regularized trace ratio, termed MKL-TR. Our method aims to learn a transformation into a lower-dimensional space, together with a corresponding kernel, from the given base kernels, among which some may not suit the given data. Solutions for the proposed framework can be found via trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark datasets, including text, image and sound datasets, in supervised, unsupervised, and semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
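A sketch of the core trace-ratio subproblem that MKL-TR builds on: the classic iterative solver for maximizing tr(V'SbV)/tr(V'SwV) (the scatter matrices here are toy stand-ins, and the kernel-weight updates of MKL-TR are not shown):

```python
import numpy as np

def trace_ratio(Sb, Sw, dim, iters=50):
    # iterate: given the current ratio lam, take the top eigenvectors
    # of Sb - lam*Sw, then update lam = tr(V'SbV)/tr(V'SwV)
    lam = 0.0
    for _ in range(iters):
        _, U = np.linalg.eigh(Sb - lam * Sw)
        V = U[:, -dim:]
        lam = np.trace(V.T @ Sb @ V) / np.trace(V.T @ Sw @ V)
    return V, lam

rng = np.random.default_rng(4)
M, N = rng.normal(size=(10, 10)), rng.normal(size=(10, 10))
Sb = M @ M.T                       # toy between-class scatter
Sw = N @ N.T + 10 * np.eye(10)     # toy within-class scatter
V, lam = trace_ratio(Sb, Sw, dim=3)
print(round(lam, 4))
```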
Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar
2017-01-01
Automatic extraction of protein-protein interaction (PPI) pairs from the biomedical literature is a widely examined task in biological information extraction. Many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel, and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction that exploits both syntactic (structural) information and semantic vectors, known as the Distributed Smoothed Tree Kernel (DSTK). The DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of sentences or phrases. To generate a robust machine learning model, a feature-based kernel and the DSTK were combined using an ensemble support vector machine (SVM). Five corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used to evaluate the performance of our system. Experimental results show that our system achieves a better F-score on all five corpora compared with other state-of-the-art systems. PMID:29099838
Hadamard Kernel SVM with applications for breast cancer outcome predictions.
Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong
2017-12-21
Breast cancer is one of the leading causes of death in women, so it is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome prediction. Kernel SVM, with its discriminative power for small-sample pattern recognition problems, has attracted a lot of attention, but how to select or construct an appropriate kernel for a specific problem still needs further investigation. Here we propose a novel kernel (the Hadamard kernel) in conjunction with support vector machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard kernel outperforms the classical kernels and the correlation kernel in terms of area under the ROC curve (AUC) on a number of real-world datasets adopted to test the performance of the different methods. Hadamard kernel SVM is effective for breast cancer prediction, in terms of both prognosis and diagnosis, and may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.
NASA Astrophysics Data System (ADS)
Ermida, Sofia; DaCamara, Carlos C.; Trigo, Isabel F.; Pires, Ana C.; Ghent, Darren
2017-04-01
Land Surface Temperature (LST) is a key climatological variable and a diagnostic parameter of land surface conditions. Remote sensing constitutes the most effective method to observe LST over large areas and on a regular basis. Although LST estimation from remote sensing instruments operating in the infrared (IR) is widely used and has been performed for nearly 3 decades, there is still a list of open issues. One of these is the LST dependence on viewing and illumination geometry. This effect introduces significant discrepancies among LST estimates from different sensors, overlapping in space and time, that are not related to uncertainties in the methodologies or input data used. Furthermore, these directional effects deviate LST products from an ideally defined LST, which should represent the ensemble of directional radiometric temperatures of all surface elements within the field of view. Angular effects on LST are here conveniently estimated by means of a kernel model of the surface thermal emission, which describes the angular dependence of LST as a function of viewing and illumination geometry. The model is calibrated using LST data as provided by a wide range of sensors to optimize spatial coverage, namely: 1) a LEO sensor - the Moderate Resolution Imaging Spectroradiometer (MODIS) on-board NASA's TERRA and AQUA; and 2) 3 GEO sensors - the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on-board EUMETSAT's Meteosat Second Generation (MSG), the Japanese Meteorological Imager (JAMI) on-board the Japan Meteorological Agency (JMA) Multifunction Transport SATellite (MTSAT-2), and NASA's Geostationary Operational Environmental Satellites (GOES). As shown in our previous feasibility studies, the sampling of illumination and view angles has a high impact on the obtained model parameters. This impact may be mitigated when the sample size is increased by aggregating pixels with similar surface conditions. Here we propose a methodology where the land surface is stratified by means of a cluster analysis using information on land cover type, fraction of vegetation cover, and topography. The kernel model is then adjusted to the LST data corresponding to each cluster. It is shown that the quality of the cluster-based kernel model is very close to that of the pixel-based one. Furthermore, the reduced number of parameters (limited to the number of identified clusters, instead of a pixel-by-pixel model calibration) allows the kernel model to be improved through the incorporation of a seasonal component. The procedure discussed here is being applied to the harmonization of multi-sensor LST products in the framework of the ESA DUE GlobTemperature project.
Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila
2018-05-07
Bioinformatics studies often rely on similarity measures between sequence pairs, which can pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric and positive, always yields 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring in a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernels, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as a standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
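An illustrative sketch only, not the authors' exact kernel: one can extract the phrases that LZW inserts into its dictionary while compressing each sequence and compare sequences by set overlap, which is symmetric and yields 1.0 for self-similarity, as the abstract requires (the Jaccard-style overlap is an assumption):

```python
def lzw_phrases(s):
    # phrases added to the LZW dictionary while compressing s
    dictionary = set(s)          # start with the single characters
    phrases, w = set(), ""
    for c in s:
        wc = w + c
        if wc in dictionary:
            w = wc               # extend the current phrase
        else:
            dictionary.add(wc)   # new code word discovered
            phrases.add(wc)
            w = c
    if w:
        phrases.add(w)
    return phrases

def lzw_similarity(a, b):
    # set-overlap similarity over LZW code words: symmetric, in [0, 1],
    # and exactly 1.0 when a == b
    pa, pb = lzw_phrases(a), lzw_phrases(b)
    return len(pa & pb) / max(len(pa | pb), 1)

print(lzw_similarity("MKVLAAGMKVLAAG", "MKVLSAGMKVLSAG"))
```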
Using satellite radiotelemetry data to delineate and manage wildlife populations
Amstrup, Steven C.; McDonald, T.L.; Durner, George M.
2004-01-01
The greatest promise of radiotelemetry has always been a better understanding of animal movements. Telemetry has helped us know when animals are active, how active they are, how far and how fast they move, the geographic areas they occupy, and whether individuals vary in these traits. Unfortunately, the inability to estimate the error in animals' utilization distributions (UDs) has prevented probabilistic linkage of movement data, which are always retrospective, with future management actions. We used the example of the harvested population of polar bears (Ursus maritimus) in the Southern Beaufort Sea to illustrate a method that provides that linkage. We employed a 2-dimensional Gaussian kernel density estimator to smooth and scale frequencies of polar bear radio locations within cells of a grid overlying our study area. True 2-dimensional smoothing allowed us to create accurate descriptions of the UDs of individuals and groups of bears. We used a new method of clustering, based upon the relative use collared bears made of each cell in our grid, to assign individual animals to populations. We applied the fast Fourier transform to make bootstrapped estimates of the error in UDs computationally feasible. Clustering and kernel smoothing identified 3 populations of polar bears in the region between Wrangel Island, Russia, and Banks Island, Canada. The relative probability of occurrence of animals from each population varied significantly among grid cells distributed across the study area. We displayed occurrence probabilities as contour maps wherein each contour line corresponded to a change in relative probability. Only at the edges of our study area and in some offshore regions were bootstrapped estimates of error in occurrence probabilities too high to allow prediction. Error estimates, which were also displayed as contours, allowed us to show that occurrence probabilities did not vary by season. Near Barrow, Alaska, 50% of bears observed are predicted to be from the Chukchi Sea population and 50% from the Southern Beaufort Sea population. At Tuktoyaktuk, Northwest Territories, Canada, 50% are from the Southern Beaufort Sea and 50% from the Northern Beaufort Sea population. The methods described here will aid managers of all wildlife that can be studied by telemetry to allocate harvests and other human perturbations to the appropriate populations, make risk assessments, and predict impacts of human activities. They will aid researchers by providing the refined descriptions of study populations that are necessary for population estimation and other investigative tasks. Arctic, Beaufort Sea, boundaries, clustering, Fourier transform, kernel, management, polar bears, population delineation, radiotelemetry, satellite, smoothing, Ursus maritimus
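A minimal sketch of the 2-dimensional Gaussian kernel density step (the collar-fix coordinates are fabricated; the paper's clustering, Fourier-transform bootstrap, and error contours are not reproduced):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
lon = rng.normal(-150.0, 3.0, 500)         # hypothetical collar fixes
lat = rng.normal(71.0, 1.0, 500)
kde = gaussian_kde(np.vstack([lon, lat]))  # 2-D Gaussian kernel smoother

gx, gy = np.mgrid[-160:-140:100j, 68:74:100j]
dens = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

# density level whose upper set encloses ~50% of the sample points,
# a common approximation for home-range isopleths
vals = np.sort(kde(np.vstack([lon, lat])))
level50 = vals[int(0.5 * len(vals))]
print(dens.max(), level50)
```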
Debebe, Abel; Singh, Harijat; Tefera, Hailu
2014-01-01
This experiment was conducted at Debre Zeit and Akaki during the 2004-2005 cropping season on F2-derived F4 bulk families of three crosses, viz., DZ-01-974 x DZ-01-2786, DZ-01-974 x DZ-Cr-37, and Alba x Kaye Murri. To estimate the correlations and path coefficients between yield and yield components, 63 F4 families were taken randomly from each of the three crosses. The 189 F4 families, five parents, and two checks were space-planted in a 14 x 14 simple lattice design. Study of associations among traits indicated that, at the phenotypic level at Debre Zeit, yield was positively associated with shoot biomass, harvest index, lodging index, and panicle kernel weight. At Akaki, yield had significant positive correlations with shoot biomass, harvest index, plant height, panicle length, and panicle weight. At the genotypic level, grain yield per plot exhibited positive associations with harvest index, shoot biomass, lodging index, and panicle kernel weight at Debre Zeit. By contrast, days to heading, days to maturity, plant height, and panicle length showed negative associations with yield. At Akaki, kernel yield per plot was positively correlated at the genotypic level with all the traits considered, with lodging index showing the highest correlation, followed by shoot biomass, panicle kernel weight, and harvest index. Path coefficient analysis at both the phenotypic and genotypic levels for both locations suggested that shoot biomass and harvest index are the two most important yield-determining traits. These two traits might be useful in indirect selection for yield improvement in the material generated from the three crosses under consideration.
Larson, Nicholas B; McDonnell, Shannon; Cannon Albright, Lisa; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan E; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham G; MacInnis, Robert J; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catalona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J
2017-05-01
Next-generation sequencing technologies have afforded unprecedented characterization of low-frequency and rare genetic variation. Due to low power for single-variant testing, aggregative methods are commonly used to combine observed rare variation within a single gene. Causal variation may also aggregate across multiple genes within relevant biomolecular pathways. Kernel-machine regression and adaptive testing methods for aggregative rare-variant association testing have been demonstrated to be powerful approaches for pathway-level analysis, although these methods tend to be computationally intensive at high-variant dimensionality and require access to complete data. An additional analytical issue in scans of large pathway definition sets is multiple testing correction. Gene set definitions may exhibit substantial genic overlap, and the impact of the resultant correlation in test statistics on Type I error rate control for large agnostic gene set scans has not been fully explored. Herein, we first outline a statistical strategy for aggregative rare-variant analysis using component gene-level linear kernel score test summary statistics as well as derive simple estimators of the effective number of tests for family-wise error rate control. We then conduct extensive simulation studies to characterize the behavior of our approach relative to direct application of kernel and adaptive methods under a variety of conditions. We also apply our method to two case-control studies, respectively, evaluating rare variation in hereditary prostate cancer and schizophrenia. Finally, we provide open-source R code for public use to facilitate easy application of our methods to existing rare-variant analysis results. © 2017 WILEY PERIODICALS, INC.
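The paper derives its own simple estimators; as a hedged illustration, one common form (a Galwey-type estimator) computes the effective number of tests from the eigenvalues of the correlation matrix between gene-level statistics:

```python
import numpy as np

def effective_tests(R):
    # Galwey-type estimate of the effective number of independent tests
    # from the eigenvalues of the correlation matrix R between gene-level
    # test statistics (one of several such estimators in the literature)
    w = np.clip(np.linalg.eigvalsh(R), 0.0, None)
    return (np.sqrt(w).sum() ** 2) / w.sum()

R = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.4],
              [0.2, 0.4, 1.0]])
m_eff = effective_tests(R)
print(m_eff, 0.05 / m_eff)   # Bonferroni-style threshold using m_eff
```

The stronger the genic overlap between gene sets, the further m_eff falls below the nominal number of tests, relaxing the multiplicity correction accordingly.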
A framework for optimal kernel-based manifold embedding of medical image data.
Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma
2015-04-01
Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review of existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating the most optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data, including brain manifolds and multispectral images, to demonstrate the importance of kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
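A toy sketch of choosing a kernel from a candidate pool (the candidate set and the use of neighborhood trustworthiness as the selection measure are assumptions; the paper defines its own selection measures):

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.manifold import trustworthiness

X = np.random.default_rng(3).normal(size=(200, 10))
candidates = {
    "rbf_0.1": dict(kernel="rbf", gamma=0.1),
    "rbf_1.0": dict(kernel="rbf", gamma=1.0),
    "poly_3":  dict(kernel="poly", degree=3),
    "cosine":  dict(kernel="cosine"),
}
scores = {}
for name, params in candidates.items():
    Y = KernelPCA(n_components=2, **params).fit_transform(X)
    scores[name] = trustworthiness(X, Y)  # neighborhood preservation score
best = max(scores, key=scores.get)
print(best, {k: round(v, 3) for k, v in scores.items()})
```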
Evaluating the Gradient of the Thin Wire Kernel
NASA Technical Reports Server (NTRS)
Wilton, Donald R.; Champagne, Nathan J.
2008-01-01
Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.
Hazard Function Estimation with Cause-of-Death Data Missing at Random
Wang, Qihua; Dinse, Gregg E.; Liu, Chunling
2010-01-01
Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data. PMID:22267874
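For orientation, a sketch of the complete-data building block: a kernel-smoothed Nelson-Aalen hazard under right censoring (the bandwidth and Gaussian kernel are assumptions; the paper's three estimators additionally handle censoring indicators that are missing at random):

```python
import numpy as np

def kernel_hazard(times, events, grid, bw):
    # kernel-smooth the Nelson-Aalen increments d_i / (number at risk)
    order = np.argsort(times)
    t, d = times[order], events[order]
    at_risk = len(t) - np.arange(len(t))             # risk-set size
    inc = d / at_risk                                # Nelson-Aalen jumps
    u = (grid[:, None] - t[None, :]) / bw
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)   # Gaussian kernel
    return (K * inc).sum(axis=1) / bw

rng = np.random.default_rng(6)
death, cens = rng.exponential(1.0, 200), rng.exponential(1.5, 200)
times, events = np.minimum(death, cens), (death <= cens).astype(float)
grid = np.linspace(0.2, 1.5, 4)
print(kernel_hazard(times, events, grid, bw=0.3))  # near 1.0 for unit-rate data
```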
Kernel Machine SNP-set Testing under Multiple Candidate Kernels
Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.
2013-01-01
Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
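A hedged sketch of one composite-kernel construction (the candidate kernels and equal weights are assumptions; the perturbation procedures proposed in the paper are not shown):

```python
import numpy as np

def linear_kernel(G):
    return G @ G.T

def ibs_kernel(G):
    # identity-by-state similarity for genotypes coded 0/1/2
    p = G.shape[1]
    diff = np.abs(G[:, None, :] - G[None, :, :]).sum(-1)
    return (2.0 * p - diff) / (2.0 * p)

def composite_kernel(G, w=(0.5, 0.5)):
    # a convex combination of candidate kernels remains a valid kernel,
    # so a single KM score test can be run against the composite
    return w[0] * linear_kernel(G) + w[1] * ibs_kernel(G)

G = np.random.default_rng(7).integers(0, 3, size=(6, 20)).astype(float)
print(np.round(composite_kernel(G), 2))
```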
Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki
2014-01-01
The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enable diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels exceeding 100 Hounsfield units (HU) in both the brain and bone images took their CT values from the bone image, while all other pixels took theirs from the brain image. Three radiologists compared the improved multi-kernel images with the bone images. The improved multi-kernel images and the brain images appeared identical when displayed with brain window settings. All three radiologists agreed that the improved multi-kernel images on the bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and a reduction in the number of stored images can be expected.
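The pixel-combination rule described above is simple enough to state directly in code. The array names and stand-in data below are illustrative; real inputs would be two reconstructions of the same scan.

```python
# Sketch of the combination rule: where both the brain (low-pass) and bone
# (high-pass) reconstructions exceed 100 HU, take the bone image's CT value;
# elsewhere, keep the brain image's value.
import numpy as np

def combine_multikernel(brain_hu, bone_hu, threshold=100.0):
    mask = (brain_hu > threshold) & (bone_hu > threshold)
    return np.where(mask, bone_hu, brain_hu)

rng = np.random.default_rng(0)
brain = rng.normal(40, 10, (512, 512))          # stand-in brain-kernel image
bone = brain + rng.normal(0, 5, (512, 512))     # stand-in bone-kernel image
combined = combine_multikernel(brain, bone)
print(combined.shape)
```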
On processed splitting methods and high-order actions in path-integral Monte Carlo simulations.
Casas, Fernando
2010-10-21
Processed splitting methods are particularly well adapted to carry out path-integral Monte Carlo (PIMC) simulations: since one is mainly interested in estimating traces of operators, only the kernel of the method is necessary to approximate the thermal density matrix. Unfortunately, they suffer the same drawback as standard, nonprocessed integrators: kernels of effective order greater than two necessarily involve some negative coefficients. This problem can be circumvented, however, by incorporating modified potentials into the composition, thus rendering schemes of higher effective order. In this work we analyze a family of fourth-order schemes recently proposed in the PIMC setting, paying special attention to their linear stability properties, and justify their observed behavior in practice. We also propose a new fourth-order scheme requiring the same computational cost but with an enlarged stability interval.
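For orientation, a minimal sketch of the primitive second-order (Strang) splitting kernel composed on a grid to estimate a partition-function trace. The processed, fourth-order schemes with modified potentials analyzed in the paper go well beyond this baseline; this only shows the kernel-composition structure that PIMC estimators rely on.

```python
# Sketch: second-order splitting kernel rho_tau(x, x') =
# e^{-tau V(x)/2} <x| e^{-tau T} |x'> e^{-tau V(x')/2}, composed P times to
# approximate Tr e^{-beta H} for a 1D harmonic oscillator (hbar = m = omega = 1).
import numpy as np

beta, P = 2.0, 32
tau = beta / P
x = np.linspace(-8, 8, 401)
dx = x[1] - x[0]

V = 0.5 * x**2
# free-particle imaginary-time propagator <x|e^{-tau T}|x'>
free = np.exp(-(x[:, None] - x[None, :])**2 / (2 * tau)) / np.sqrt(2 * np.pi * tau)
rho_tau = np.exp(-tau * V / 2)[:, None] * free * np.exp(-tau * V / 2)[None, :]

B = rho_tau * dx                        # quadrature weight folded into the matrix
Z = np.trace(np.linalg.matrix_power(B, P))
exact = 1.0 / (2 * np.sinh(beta / 2))   # exact HO partition function
print(Z, exact)
```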
NASA Astrophysics Data System (ADS)
Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei
2017-03-01
The temporal focusing two-photon microscope (TFM) was developed to perform depth-resolved wide-field fluorescence imaging by capturing frames sequentially. However, due to strong, non-negligible noise and the diffraction rings surrounding particles, further analysis is extremely difficult without a precise particle localization technique. In this paper, we developed a fully automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale kernel graph cuts global and local segmentation algorithm, and a kinematic-estimation-based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with removal of unrelated pixels. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.
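One plausible reading of the denoising stage is a per-pixel scalar Kalman filter run across the frame sequence, sketched below under that assumption; the segmentation and tracking stages of the pipeline are not reproduced.

```python
# Sketch: per-pixel scalar Kalman filter over a frame stack, with a
# random-walk state model x_t = x_{t-1} + w and measurement z_t = x_t + v.
import numpy as np

def kalman_denoise(frames, q=1e-3, r=1e-1):
    """frames: (T, H, W) stack; q, r: process/measurement noise variances."""
    x = frames[0].astype(float)          # state estimate
    p = np.ones_like(x)                  # estimate variance
    out = [x.copy()]
    for z in frames[1:]:
        p = p + q                        # predict
        k = p / (p + r)                  # Kalman gain
        x = x + k * (z - x)              # update
        p = (1.0 - k) * p
        out.append(x.copy())
    return np.stack(out)

noisy = np.random.default_rng(0).normal(1.0, 0.3, (50, 64, 64))
filtered = kalman_denoise(noisy)
print(filtered.std(axis=(1, 2))[[0, -1]])  # residual noise shrinks over time
```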
7 CFR 810.202 - Definition of other terms.
Code of Federal Regulations, 2014 CFR
2014-01-01
... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...
7 CFR 810.202 - Definition of other terms.
Code of Federal Regulations, 2013 CFR
2013-01-01
... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...
7 CFR 810.202 - Definition of other terms.
Code of Federal Regulations, 2012 CFR
2012-01-01
... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...
graphkernels: R and Python packages for graph comparison
Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten
2018-01-01
Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. PMID:29028902
Aflatoxin variability in pistachios.
Mahoney, N E; Rodriguez, S B
1996-01-01
Pistachio fruit components, including hulls (mesocarps and epicarps), seed coats (testas), and kernels (seeds), all contribute to variable aflatoxin content in pistachios. Fresh pistachio kernels were individually inoculated with Aspergillus flavus and incubated 7 or 10 days. Hulled, shelled kernels were either left intact or wounded prior to inoculation. Wounded kernels, with or without the seed coat, were readily colonized by A. flavus and after 10 days of incubation contained 37 times more aflatoxin than similarly treated unwounded kernels. The aflatoxin levels in the individual wounded pistachios were highly variable. Neither fungal colonization nor aflatoxin was detected in intact kernels without seed coats. Intact kernels with seed coats had limited fungal colonization and low aflatoxin concentrations compared with their wounded counterparts. Despite substantial fungal colonization of wounded hulls, aflatoxin was not detected in hulls. Aflatoxin levels were significantly lower in wounded kernels with hulls than in kernels of hulled pistachios. Both the seed coat and a water-soluble extract of hulls suppressed aflatoxin production by A. flavus. PMID:8919781
graphkernels: R and Python packages for graph comparison.
Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten
2018-02-01
Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. © The Author(s) 2017. Published by Oxford University Press.
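A sketch of the intended workflow: compute a kernel matrix with the Python package, then feed it to any learner that accepts a precomputed kernel. The function name CalculateWLKernel, the `par` argument (number of WL iterations), and the igraph-based input follow the package documentation as I recall it; verify against the current docs before relying on them.

```python
# Sketch: graph classification with a kernel matrix from graphkernels and a
# precomputed-kernel SVM.  API details of graphkernels are assumptions here.
import numpy as np
import igraph
import graphkernels.kernels as gk
from sklearn.svm import SVC

# toy dataset: ring graphs vs. star graphs
graphs = [igraph.Graph.Ring(n) for n in range(5, 15)] + \
         [igraph.Graph.Star(n) for n in range(5, 15)]
for g in graphs:
    g.vs["label"] = g.degree()           # WL kernels expect node labels

y = np.array([0] * 10 + [1] * 10)
K = gk.CalculateWLKernel(graphs, par=3)  # Weisfeiler-Lehman, 3 iterations
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.predict(K)[:5])
```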
Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.
2013-01-01
Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
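For context, the baseline the study refines is the convolution of TERMA with a single, spatially invariant energy deposition kernel. The sketch below shows that structure with toy data; spatially variant (depth, field-size, off-axis) and material-specific kernels, the paper's contribution, break this simple convolution form and require superposition instead.

```python
# Sketch: baseline convolution/superposition dose, D = TERMA (*) EDK, with a
# single spatially invariant kernel evaluated by FFT.  All quantities are toy.
import numpy as np

nz, ny = 128, 128
mu = 0.005                                   # toy attenuation per voxel
depth = np.arange(nz)[:, None]
terma = np.exp(-mu * depth) * np.ones((nz, ny))

# toy polyenergetic point kernel: forward-peaked anisotropic Gaussian
z = np.arange(nz)[:, None] - nz // 2
yy = np.arange(ny)[None, :] - ny // 2
kernel = np.exp(-((z - 2)**2 / 18.0 + yy**2 / 6.0))
kernel /= kernel.sum()

dose = np.real(np.fft.ifft2(np.fft.fft2(terma) *
                            np.fft.fft2(np.fft.ifftshift(kernel))))
print(dose[10, 64], dose[100, 64])           # dose falls off with depth
```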
Evaluation of Statistical Downscaling Skill at Reproducing Extreme Events
NASA Astrophysics Data System (ADS)
McGinnis, S. A.; Tye, M. R.; Nychka, D. W.; Mearns, L. O.
2015-12-01
Climate model outputs usually have much coarser spatial resolution than is needed by impacts models. Although higher resolution can be achieved using regional climate models for dynamical downscaling, further downscaling is often required. The final resolution gap is often closed with a combination of spatial interpolation and bias correction, which constitutes a form of statistical downscaling. We use this technique to downscale regional climate model data and evaluate its skill in reproducing extreme events. We downscale output from the North American Regional Climate Change Assessment Program (NARCCAP) dataset from its native 50-km spatial resolution to the 4-km resolution of the University of Idaho's METDATA gridded surface meteorological dataset, which derives from the PRISM and NLDAS-2 observational datasets. We operate on the major variables used in impacts analysis at a daily timescale: daily minimum and maximum temperature, precipitation, humidity, pressure, solar radiation, and winds. To interpolate the data, we use the patch recovery method from the Earth System Modeling Framework (ESMF) regridding package. We then bias correct the data using Kernel Density Distribution Mapping (KDDM), which has been shown to exhibit superior overall performance across multiple metrics. Finally, we evaluate the skill of this technique in reproducing extreme events by comparing raw and downscaled output with meteorological station data in different bioclimatic regions, according to the skill scores defined by Perkins et al. (2013) for the evaluation of AR4 climate models. We also investigate techniques for improving bias correction of values in the tails of the distributions. These techniques include binned kernel density estimation, logspline kernel density estimation, and transfer functions constructed by fitting the tails with a generalized Pareto distribution.
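A generic sketch of the KDDM idea, assuming the standard quantile-mapping form: build smooth CDFs of the model and observed samples with Gaussian kernel density estimates, then map each model value x through F_obs^{-1}(F_mod(x)). This is a reconstruction of the concept, not the NARCCAP production code.

```python
# Sketch: kernel density distribution mapping (KDDM) style bias correction
# via KDE-based CDFs and inverse-CDF transfer.
import numpy as np
from scipy.stats import gaussian_kde

def kddm_correct(model, obs, values):
    kde_m, kde_o = gaussian_kde(model), gaussian_kde(obs)
    grid = np.linspace(min(obs.min(), model.min()) - 1,
                       max(obs.max(), model.max()) + 1, 1000)
    cdf_m = np.array([kde_m.integrate_box_1d(-np.inf, g) for g in grid])
    cdf_o = np.array([kde_o.integrate_box_1d(-np.inf, g) for g in grid])
    u = np.interp(values, grid, cdf_m)   # F_mod(x)
    return np.interp(u, cdf_o, grid)     # F_obs^{-1}(u)

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 2.0, 2000)          # "observed" precipitation-like data
model = rng.gamma(2.5, 1.5, 2000) + 1.0  # biased model output
print(kddm_correct(model, obs, model[:5]))
```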
Interaction between aerosol and the planetary boundary layer depth at sites in the US and China
NASA Astrophysics Data System (ADS)
Sawyer, V. R.
2015-12-01
The depth of the planetary boundary layer (PBL) defines a changing volume into which pollutants from the surface can disperse, which affects weather, surface air quality and radiative forcing in the lower troposphere. Model simulations have also shown that aerosol within the PBL heats the layer at the expense of the surface, changing the stability profile and therefore also the development of the PBL itself: aerosol radiative forcing within the PBL suppresses surface convection and causes shallower PBLs. However, the effect has been difficult to detect in observations. The most intensive radiosonde measurements have a temporal resolution too coarse to detect the full diurnal variability of the PBL, but remote sensing such as lidar can fill in the gaps. Using a method that combines two common PBL detection algorithms (wavelet covariance and iterative curve-fitting) PBL depth retrievals from micropulse lidar (MPL) at the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site are compared to MPL-derived PBL depths from a multiyear lidar deployment at the Hefei Radiation Observatory (HeRO). With aerosol optical depth (AOD) measurements from both sites, it can be shown that a weak inverse relationship exists between AOD and daytime PBL depth. This relationship is stronger at the more polluted HeRO site than at SGP. Figure: Mean daily AOD vs. mean daily PBL depth, with the Nadaraya-Watson estimator overlaid on the kernel density estimate. Left, SGP; right, HeRO.
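The Nadaraya-Watson estimator referenced in the figure caption is a kernel-weighted local average, m_hat(x) = sum_i K_h(x - x_i) y_i / sum_i K_h(x - x_i). A minimal sketch with synthetic AOD/PBL-depth data follows; the real analysis uses the MPL and AOD retrievals described above.

```python
# Sketch: Nadaraya-Watson kernel regression with a Gaussian kernel,
# applied to synthetic AOD vs. PBL-depth data.
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(3)
aod = rng.uniform(0.05, 1.5, 400)
pbl = 1800 - 400 * aod + rng.normal(0, 150, 400)  # weak inverse relationship
grid = np.linspace(0.1, 1.4, 30)
print(nadaraya_watson(grid, aod, pbl, h=0.1)[:5])
```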
Single image super-resolution reconstruction algorithm based on edge selection
NASA Astrophysics Data System (ADS)
Zhang, Yaolan; Liu, Yijun
2017-05-01
Super-resolution (SR) has become increasingly important because it can generate high-quality high-resolution (HR) images from low-resolution (LR) input images. At present, much work concentrates on developing sophisticated image priors to improve image quality, while paying far less attention to estimating and incorporating the blur model, which can also impact the reconstruction results. We present a new reconstruction method based on edge selection. This method takes full account of the factors that affect blur kernel estimation and accurately estimates the blur process. Compared with state-of-the-art methods, our method achieves comparable performance.
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; von Davier, Alina A.
2008-01-01
The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
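A minimal sketch of the Gaussian-kernel continuization step, assuming the standard mean- and variance-preserving form from the kernel equating literature: X(h) = a(X + hZ) + (1 - a)mu with a = sqrt(s^2/(s^2 + h^2)), where Z is standard normal.

```python
# Sketch: continuize a discrete test-score distribution with a Gaussian
# kernel, preserving its mean and variance.
import numpy as np
from scipy.stats import norm

def continuized_cdf(x, scores, probs, h):
    mu = np.sum(scores * probs)
    s2 = np.sum((scores - mu)**2 * probs)
    a = np.sqrt(s2 / (s2 + h**2))
    z = (x[:, None] - a * scores[None, :] - (1 - a) * mu) / (a * h)
    return (norm.cdf(z) * probs[None, :]).sum(axis=1)

scores = np.arange(0, 21)                 # 20-item test
probs = np.random.default_rng(4).dirichlet(np.ones(21))
x = np.linspace(-1, 22, 100)
F = continuized_cdf(x, scores, probs, h=0.6)
print(F[0], F[-1])                        # approximately 0 and 1
```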
Code of Federal Regulations, 2010 CFR
2010-01-01
...— Damaged kernels 1 (percent) Foreign material (percent) Other grains (percent) Skinned and broken kernels....0 10.0 15.0 1 Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered against sound barley. Notes: Malting barley shall not be infested in accordance with...
Code of Federal Regulations, 2013 CFR
2013-01-01
... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...
Code of Federal Regulations, 2014 CFR
2014-01-01
... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...
7 CFR 810.205 - Grades and grade requirements for Two-rowed Malting barley.
Code of Federal Regulations, 2010 CFR
2010-01-01
... (percent) Maximum limits of— Wild oats (percent) Foreign material (percent) Skinned and broken kernels... Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered...