Science.gov

Sample records for adaptive scatter kernel

  1. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach, whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS), requiring spatial domain convolutions, and (2) fast adaptive scatter kernel superposition (fASKS), where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties, and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS, and to 13 ± 21 HU with ASKS. HU accuracy and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
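
    A minimal sketch of the stationary (non-adaptive) kernel-superposition baseline that ASKS/fASKS build on, assuming Python/NumPy: scatter is modeled as a convolution of an amplitude-weighted projection with a fixed kernel, and the primary signal is recovered by fixed-point iteration. The amplitude model, kernel shape, and iteration count below are hypothetical placeholders; the adaptive variants additionally let the kernel parameters vary with local thickness.

    ```python
    import numpy as np

    def scatter_estimate(primary, kernel_fft, amplitude):
        """Stationary SKS: scatter = (amplitude-weighted primary) convolved with kernel."""
        amp = amplitude(primary)                              # per-pixel scatter amplitude
        return np.real(np.fft.ifft2(np.fft.fft2(amp) * kernel_fft))

    def sks_deconvolve(measured, kernel, amplitude, n_iter=8):
        """Iteratively remove scatter: p_{k+1} = measured - S(p_k)."""
        kernel_fft = np.fft.fft2(np.fft.ifftshift(kernel))    # move kernel peak to origin
        primary = measured.copy()
        for _ in range(n_iter):
            primary = measured - scatter_estimate(primary, kernel_fft, amplitude)
            np.clip(primary, 0.0, None, out=primary)          # keep the estimate physical
        return primary

    # Toy setup: Gaussian kernel, flat projection, hypothetical amplitude model.
    x = np.linspace(-1, 1, 256)
    X, Y = np.meshgrid(x, x)
    kernel = np.exp(-(X**2 + Y**2) / 0.05)
    kernel /= kernel.sum()
    amplitude = lambda p: 0.3 * p * np.exp(-p)                # more attenuation -> more scatter
    primary = sks_deconvolve(np.ones((256, 256)), kernel, amplitude)
    ```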

  2. Scatter correction for cone-beam computed tomography using self-adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Xie, Shi-Peng; Luo, Li-Min

    2012-06-01

    The authors propose a combined scatter reduction and correction method to improve image quality in cone-beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used occasionally in previous studies; the present method differs in that a scatter detecting blocker (SDB) is placed between the X-ray source and the tested object to model a self-adaptive scatter kernel. This study first estimates the scatter kernel parameters using the SDB and then isolates the scatter distribution based on SKS. Image quality can be improved by removing the scatter distribution. The results show that the method effectively reduces scatter artifacts and improves image quality: it increases image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique is significantly improved in our method by the use of a self-adaptive scatter kernel. The method is computationally efficient, easy to implement, and provides scatter correction from a single scan acquisition.

  3. Calculates Thermal Neutron Scattering Kernel.

    1989-11-10

    Version 00 THRUSH computes the thermal neutron scattering kernel by the phonon expansion method for both coherent and incoherent scattering processes. The calculation of the coherent part is suitable only for calculating the scattering kernel for heavy water.

  4. The collapsed cone algorithm for 192Ir dosimetry using phantom-size adaptive multiple-scatter point kernels

    NASA Astrophysics Data System (ADS)

    Carlsson Tedgren, Åsa; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-01

    …/phantom for which low doses at phantom edges can be overestimated by 2-5%. It would be possible to improve the situation by using a point kernel for multiple-scatter dose adapted to the patient/phantom dimensions at hand.

  5. Adaptive wiener image restoration kernel

    DOEpatents

    Yuan, Ding

    2007-06-05

    A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins by constructing the imaging system's optical transfer function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image using a Wiener restoration kernel.
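
    The abstract describes the classical frequency-domain construction; a hedged NumPy sketch of a (non-adaptive) Wiener restoration kernel follows. The OTF, noise-to-signal ratio, and test scene are illustrative placeholders; the patent's adaptive selection of these quantities from the data is not reproduced here.

    ```python
    import numpy as np

    def wiener_restore(image, otf, nsr):
        """Wiener restoration kernel W = conj(H) / (|H|^2 + NSR), applied via FFT."""
        W = np.conj(otf) / (np.abs(otf)**2 + nsr)
        return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

    # Toy forward model: Gaussian-blur OTF plus additive noise.
    n = 128
    fx = np.fft.fftfreq(n)
    FX, FY = np.meshgrid(fx, fx)
    otf = np.exp(-(FX**2 + FY**2) / (2 * 0.1**2))     # hypothetical system OTF
    rng = np.random.default_rng(0)
    scene = rng.random((n, n))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * otf))
    noisy = blurred + 0.01 * rng.standard_normal((n, n))
    restored = wiener_restore(noisy, otf, nsr=1e-3)
    ```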

  6. Asymmetric scatter kernels for software-based scatter correction of gridless mammography

    NASA Astrophysics Data System (ADS)

    Wang, Adam; Shapiro, Edward; Yoon, Sungwon; Ganguly, Arundhuti; Proano, Cesar; Colbeth, Rick; Lehto, Erkki; Star-Lack, Josh

    2015-03-01

    Scattered radiation remains one of the primary challenges for digital mammography, resulting in decreased image contrast and visualization of key features. While anti-scatter grids are commonly used to reduce scattered radiation in digital mammography, they are an incomplete solution that can add radiation dose, cost, and complexity. Instead, a software-based scatter correction method utilizing asymmetric scatter kernels is developed and evaluated in this work, which improves upon conventional symmetric kernels by adapting to local variations in object thickness and attenuation that result from the heterogeneous nature of breast tissue. This fast adaptive scatter kernel superposition (fASKS) method was applied to mammography by generating scatter kernels specific to the object size, x-ray energy, and system geometry of the projection data. The method was first validated with Monte Carlo simulation of a statistically-defined digital breast phantom, which was followed by initial validation on phantom studies conducted on a clinical mammography system. Results from the Monte Carlo simulation demonstrate excellent agreement between the estimated and true scatter signal, resulting in accurate scatter correction and recovery of 87% of the image contrast originally lost to scatter. Additionally, the asymmetric kernel provided more accurate scatter correction than the conventional symmetric kernel, especially at the edge of the breast. Results from the phantom studies on a clinical system further validate the ability of the asymmetric kernel correction method to accurately subtract the scatter signal and improve image quality. In conclusion, software-based scatter correction for mammography is a promising alternative to hardware-based approaches such as anti-scatter grids.

  7. Adaptive kernels for multi-fiber reconstruction.

    PubMed

    Barmpoutis, Angelos; Jian, Bing; Vemuri, Baba C

    2009-01-01

    In this paper we present a novel method for multi-fiber reconstruction given a diffusion-weighted MRI dataset. Several existing methods employ various spherical deconvolution kernels to achieve this task. However, the kernels in all of the existing methods rely on certain assumptions regarding the properties of the underlying fibers, which introduces inaccuracies and unnatural limitations. Our model is a nontrivial generalization of the spherical deconvolution model which, unlike the existing methods, does not use a fixed-shape kernel. Instead, the shape of the kernel is estimated simultaneously with the rest of the unknown parameters by employing a general adaptive model that can theoretically approximate any spherical deconvolution kernel. The performance of our model is demonstrated using simulated and real diffusion-weighted MR datasets and compared quantitatively with several existing techniques in the literature. The results obtained indicate that our model has superior performance that is close to the theoretical limit of the best achievable result.

  8. Kernel Manifold Alignment for Domain Adaptation.

    PubMed

    Tuia, Devis; Camps-Valls, Gustau

    2016-01-01

    The wealth of sensory data coming from different modalities has opened numerous opportunities for data analysis. The data are of increasing volume, complexity, and dimensionality, thus calling for new methodological innovations towards multimodal data processing. However, multimodal architectures must rely on models able to adapt to changes in the data distribution. Differences in the density functions can be due to changes in acquisition conditions (pose, illumination), sensor characteristics (number of channels, resolution), or different views (e.g., street-level vs. aerial views of the same building). We call these different acquisition modes domains and refer to the adaptation problem as domain adaptation. In this paper, instead of adapting the trained models themselves, we focus on finding mappings of the data sources into a common, semantically meaningful representation domain. This field of manifold alignment extends traditional techniques in statistics, such as canonical correlation analysis (CCA), to deal with nonlinear adaptation and possibly non-corresponding data pairs between the domains. We introduce a kernel method for manifold alignment (KEMA) that can match an arbitrary number of data sources without needing corresponding pairs, just a few labeled examples in all domains. KEMA has interesting properties: 1) it generalizes other manifold alignment methods; 2) it can align manifolds of very different complexities, performing a discriminative alignment that preserves each manifold's inner structure; 3) it can define a domain-specific metric to cope with multimodal specificities; 4) it can align data spaces of different dimensionality; 5) it is robust to strong nonlinear feature deformations; and 6) it is closed-form invertible, which allows transfer across domains and data synthesis. To the authors' knowledge this is the first method addressing all these important issues at once. We also present a reduced-rank version of KEMA for computational efficiency.

  9. Boundary conditions for gas flow problems from anisotropic scattering kernels

    NASA Astrophysics Data System (ADS)

    To, Quy-Dong; Vu, Van-Huyen; Lauriat, Guy; Léonard, Céline

    2015-10-01

    The paper presents an interface model for gas flowing through a channel constituted of anisotropic wall surfaces. Using anisotropic scattering kernels and the Chapman-Enskog phase density, the boundary conditions (BCs) for velocity and temperature, including the velocity slip and temperature jump discontinuities at the wall, are obtained. Two scattering kernels, the Dadzie and Méolans (DM) kernel and the generalized anisotropic Cercignani-Lampis (ACL) kernel, are examined in the present paper, yielding simple BCs at the wall-fluid interface. With these two kernels, we rigorously recover the analytical expression for orientation-dependent slip shown in our previous works [Pham et al., Phys. Rev. E 86, 051201 (2012) and To et al., J. Heat Transfer 137, 091002 (2015)], which is in good agreement with molecular dynamics simulation results. More importantly, our models include both the thermal transpiration effect and new equations for the temperature jump. While the same expression depending on the two tangential accommodation coefficients is obtained for the slip velocity, the DM and ACL temperature equations are significantly different. The derived BC equations associated with these two kernels are of interest for gas simulations, since they are able to capture the direction-dependent slip behavior of anisotropic interfaces.

  10. Analytical scatter kernels for portal imaging at 6 MV.

    PubMed

    Spies, L; Bortfeld, T

    2001-04-01

    X-ray photon scatter kernels for 6 MV electronic portal imaging are investigated using an analytical and a semi-analytical model. The models are tested on homogeneous phantoms for a range of uniform circular fields and scatterer-to-detector air gaps relevant for clinical use. It is found that a fully analytical model based on an exact treatment of photons undergoing a single Compton scatter event and an approximate treatment of second and higher order scatter events, assuming a multiple-scatter source at the center of the scatter volume, is accurate within 1% (i.e., the residual scatter signal is less than 1% of the primary signal) for field sizes up to 100 cm2 and air gaps over 30 cm, but shows significant discrepancies for larger field sizes. Monte Carlo results are presented showing that the effective multiple-scatter source is located toward the exit surface of the scatterer, rather than at its center. A second model is therefore investigated where second and higher-order scattering is instead modeled by fitting an analytical function describing a nonstationary isotropic point-scatter source to Monte Carlo generated data. This second model is shown to be accurate to within 1% for air gaps down to 20 cm, for field sizes up to 900 cm2 and phantom thicknesses up to 50 cm. PMID:11339752

  11. Analog forecasting with dynamics-adapted kernels

    NASA Astrophysics Data System (ADS)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
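
    A hedged sketch of the core idea, kernel-weighted analog ensembles, under simplifying assumptions (no Takens delay embedding or directional dependence, a plain Gaussian similarity kernel, synthetic data): the forecast is a weighted average of the historical successors of states resembling the initial data.

    ```python
    import numpy as np

    def kernel_analog_forecast(history, x0, lead, epsilon=0.5):
        """Forecast `lead` steps ahead as a kernel-weighted ensemble of the
        successors of historical states that resemble the initial state x0."""
        past = history[:-lead]                      # states whose successors are known
        d2 = np.sum((past - x0)**2, axis=1)         # squared distances to x0
        w = np.exp(-d2 / epsilon)                   # Gaussian similarity weights
        w /= w.sum()
        return w @ history[lead:]                   # weighted average of successors

    # Toy historical record: noisy circular dynamics.
    t = np.linspace(0, 60, 3000)
    rng = np.random.default_rng(1)
    history = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((3000, 2))
    print(kernel_analog_forecast(history, x0=np.array([1.0, 0.0]), lead=50))
    ```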

  12. Identification of nonlinear optical systems using adaptive kernel methods

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Zhang, Changjiang; Zhang, Haoran; Feng, Genliang; Xu, Xiuling

    2005-12-01

    An identification approach for nonlinear optical dynamic systems, based on adaptive kernel methods that are a modified version of the least squares support vector machine (LS-SVM), is presented in order to obtain the reference dynamic model for solving real-time applications such as adaptive signal processing of optical systems. The feasibility of this approach is demonstrated through computer simulation by identifying a Bragg acousto-optical bistable system. Unlike artificial neural networks, the adaptive kernel methods possess prominent advantages: overfitting is unlikely to occur because the structural risk minimization criterion is employed, and the globally optimal solution can be uniquely obtained because training is performed through the solution of a set of linear equations. Moreover, the adaptive kernel methods remain effective for nonlinear optical systems with a variation of the system parameter. The method is robust with respect to noise, and it constitutes another powerful tool for the identification of nonlinear optical systems.
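
    The key property claimed above, training via a set of linear equations, is easy to show concretely. Below is a minimal LS-SVM regression sketch in NumPy with hypothetical kernel and regularization parameters; it is a generic LS-SVM, not the paper's exact formulation.

    ```python
    import numpy as np

    def rbf(X, Z, gamma=1.0):
        d2 = ((X[:, None, :] - Z[None, :, :])**2).sum(-1)
        return np.exp(-gamma * d2)

    def lssvm_fit(X, y, C=100.0, gamma=1.0):
        """LS-SVM regression: the dual reduces to one (n+1)x(n+1) linear system."""
        n = len(y)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = rbf(X, X, gamma) + np.eye(n) / C   # K + I/C
        sol = np.linalg.solve(A, np.r_[0.0, y])        # unique, globally optimal
        b, alpha = sol[0], sol[1:]
        return lambda Xq: rbf(Xq, X, gamma) @ alpha + b

    # Toy identification problem: y = sin(3x) observed with noise.
    rng = np.random.default_rng(2)
    X = rng.uniform(-1, 1, (80, 1))
    y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(80)
    model = lssvm_fit(X, y)
    print(model(np.array([[0.5]])))     # approx sin(1.5)
    ```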

  13. A kernel adaptive algorithm for quaternion-valued inputs.

    PubMed

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefits of Quat-KLMS and the widely linear forms in learning nonlinear transformations of quaternion data are illustrated with simulations. PMID:25594982
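
    For intuition, a real-valued kernel LMS sketch follows (the quaternion version replaces the reals with quaternions, a quaternion RKHS, and HR-calculus gradients, which this sketch does not attempt): each sample becomes a kernel center whose coefficient is the step size times the prediction error.

    ```python
    import numpy as np

    class KLMS:
        """Real-valued kernel least-mean-squares filter (illustrative only)."""
        def __init__(self, eta=0.5, gamma=1.0):
            self.eta, self.gamma = eta, gamma
            self.centers, self.coeffs = [], []

        def _k(self, a, b):
            return np.exp(-self.gamma * np.sum((a - b)**2))

        def predict(self, x):
            return sum(c * self._k(u, x) for u, c in zip(self.centers, self.coeffs))

        def update(self, x, y):
            e = y - self.predict(x)                  # a-priori prediction error
            self.centers.append(np.asarray(x, float))
            self.coeffs.append(self.eta * e)         # LMS step stored as kernel weight
            return e

    # Online learning of a nonlinear map y = sin(2x).
    f = KLMS()
    for x in np.linspace(0, 3, 200):
        f.update(np.array([x]), np.sin(2 * x))
    print(f.predict(np.array([1.5])))                # approx sin(3.0)
    ```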

  14. An information theoretic approach of designing sparse kernel adaptive filters.

    PubMed

    Liu, Weifeng; Park, Il; Principe, José C

    2009-12-01

    This paper discusses an information theoretic approach to designing sparse kernel adaptive filters. To determine useful data to be learned and remove redundant ones, a subjective information measure called surprise is introduced. Surprise captures the amount of information a datum contains that is transferable to a learning system. Based on this concept, we propose a systematic sparsification scheme that can drastically reduce the time and space complexity without harming the performance of kernel adaptive filters. Nonlinear regression, short-term chaotic time-series prediction, and long-term time-series forecasting examples are presented. PMID:19923047
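
    A hedged sketch of the admission rule such a scheme implies, under a Gaussian predictive model: surprise is the negative log-likelihood of the new datum given the learner's current prediction, and a datum is learned only if it is neither redundant nor an outlier. The Gaussian form and the thresholds are illustrative assumptions, not the paper's exact construction.

    ```python
    import numpy as np

    def surprise(err, var):
        """Negative log-likelihood of a datum under a Gaussian predictive
        distribution with prediction error `err` and variance `var`."""
        return 0.5 * np.log(2 * np.pi * var) + err**2 / (2 * var)

    def admit(err, var, t_redundant=0.5, t_abnormal=20.0):
        """Sparsification rule: learn only informative data (hypothetical thresholds)."""
        s = surprise(err, var)
        return t_redundant < s < t_abnormal      # reject redundant and abnormal data

    print(admit(err=0.5, var=0.2), admit(err=0.001, var=0.2))   # True, False
    ```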

  15. Parametrization and application of scatter kernels for modelling scanned proton beam collimator scatter dose

    NASA Astrophysics Data System (ADS)

    Kimstrand, Peter; Traneus, Erik; Ahnesjö, Anders; Tilly, Nina

    2008-07-01

    Collimators are routinely used in proton radiotherapy to laterally confine the field and improve the penumbra. Collimator scatter contributes up to 15% of the local dose and is therefore important to include in treatment planning dose calculations. We present a method for reconstructing the collimator scatter phase space based on the parametrization of pre-calculated scatter kernels. Collimator scatter distributions, generated by the Monte Carlo (MC) package GEANT4.8.2, were scored differential in direction and energy. The distributions were then parametrized so as to enable fast reconstruction by sampling. MC-calculated dose distributions in water based on the parametrized phase space were compared to full MC simulations that included the collimator in the simulation geometry, as well as to experimental data. The experiments were performed at the scanned proton beam line at The Svedberg Laboratory (TSL) in Uppsala, Sweden. Dose calculations using the parametrization of this work and the full MC for isolated typical cases of collimator scatter were compared by means of the gamma index; in total, 96.7% (99.3%) of the voxels fulfilled the gamma 2.0%/2.0 mm (3.0%/3.0 mm) criterion. The dose distribution for a collimated field was calculated based on the phase space created by the collimator scatter model incorporated into the generation of the phase space of a scanned proton beam. Comparing these dose distributions to full MC simulations, including particle transport in the MLC, showed that, in total over 18 different collimated fields, 99.1% of the voxels satisfied the gamma 1.0%/1.0 mm criterion and no voxel exceeded the gamma 2.6%/2.6 mm criterion. The dose contribution of collimator scatter along the central axis as predicted by the model showed good agreement with experimental data.

  16. Data consistency-driven scatter kernel optimization for x-ray cone-beam CT

    NASA Astrophysics Data System (ADS)

    Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong

    2015-08-01

    Accurate and efficient scatter correction is essential for acquiring high-quality x-ray cone-beam CT (CBCT) images for various applications. This study was conducted to demonstrate the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in scatter deconvolution methods in CBCT. Because data consistency in the mid-plane of CBCT is challenged primarily by scatter, we used data consistency to confirm the degree of scatter correction and to steer the updates in iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters using a particle swarm optimization algorithm, chosen for its computational efficiency and excellent convergence. The proposed method was validated by a simulation study using the XCAT numerical phantom and by experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction. Quantitative assessment of image quality parameters revealed that the optimally selected scatter kernel improves the contrast relative to scatter-free images by up to 99.5%, 94.4%, and 84.4%, and the structural similarity (SSIM) by up to 96.7%, 90.5%, and 87.8%, in the XCAT, ACS head phantom, and pelvis phantom studies, respectively. The proposed method achieves accurate and efficient scatter correction from a single cone-beam scan without the need for auxiliary hardware or additional experimentation.
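
    A minimal particle swarm optimizer of the kind the abstract invokes, assuming Python/NumPy; the data-consistency cost below is a hypothetical stand-in (the real cost would rebin the projections and evaluate the parallel-beam DCC for each candidate kernel).

    ```python
    import numpy as np

    def pso_minimize(cost, bounds, n_particles=20, n_iter=50,
                     w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimize `cost` over box-bounded parameters with a basic PSO."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, float).T
        x = rng.uniform(lo, hi, (n_particles, len(lo)))      # particle positions
        v = np.zeros_like(x)                                 # particle velocities
        pbest, pcost = x.copy(), np.array([cost(p) for p in x])
        gbest = pbest[pcost.argmin()]
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            c = np.array([cost(p) for p in x])
            improved = c < pcost
            pbest[improved], pcost[improved] = x[improved], c[improved]
            gbest = pbest[pcost.argmin()]
        return gbest, pcost.min()

    # Hypothetical DCC-style cost over two kernel parameters (amplitude, width).
    dcc_cost = lambda p: (p[0] - 0.3)**2 + (p[1] - 12.0)**2
    params, best = pso_minimize(dcc_cost, bounds=[(0.0, 1.0), (1.0, 50.0)])
    print(params, best)
    ```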

  17. Kernel Integration Code System--Multigroup Gamma-Ray Scattering.

    1988-02-15

    GGG (G3) is the generic designation for a series of computer programs that enable the user to estimate gamma-ray scattering from a point source to a series of point detectors. Program output includes the detector response due to each source energy, as well as a grouping by scattered energy, in addition to a simple unscattered-beam result. Although G3 is basically a single-scatter program, it also includes a correction for multiple scattering by applying a buildup factor for the path segment between the point of scatter and the detector point. Results are recorded with and without the buildup factor. Surfaces, defined by quadratic equations, are used to provide a full three-dimensional description of the physical geometry. G3 evaluates scattering effects in those situations where more exact techniques are not economical. G3 was revised by Bettis and the name was changed to indicate that it was no longer identical to the G3 program. The name S3 was chosen since the scattering calculation has three steps: calculation of the flux arriving at the scatterer from the point source, calculation of the differential scattering cross section, and calculation of the scattered flux arriving at the detector.

  18. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System.

    PubMed

    Liu, Chunmei; Wang, Yirui; Gao, Shangce

    2016-01-01

    This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for a robot vision system. The question we address is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained on data with the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track object position and contour and to avoid background clutter. In the experiments, we take a walking human as an example to validate that our method is accurate and robust in tracking human position and describing the human contour. PMID:27379165

  1. A simple method for computing the relativistic Compton scattering kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Prasad, M. K.; Kershaw, D. S.; Beason, J. D.

    1986-01-01

    Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.
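
    The kernel being averaged is the Klein-Nishina differential cross section; a small sketch of that integrand (the zero-temperature case) follows. The relativistic-Maxwellian average that defines the CSK itself is not reproduced here.

    ```python
    import numpy as np

    R_E = 2.8179403262e-15    # classical electron radius (m)
    MEC2 = 0.51099895         # electron rest energy (MeV)

    def klein_nishina(E_mev, theta):
        """Klein-Nishina differential cross section dsigma/dOmega (m^2/sr)
        for a photon of energy E scattering through angle theta off an
        electron at rest (the zero-temperature limit of the CSK integrand)."""
        P = 1.0 / (1.0 + (E_mev / MEC2) * (1.0 - np.cos(theta)))
        return 0.5 * R_E**2 * P**2 * (P + 1.0 / P - np.sin(theta)**2)

    print(klein_nishina(1.0, np.pi / 2))
    ```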

  2. Scatter kernel estimation with an edge-spread function method for cone-beam computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Li, Heng; Mohan, Radhe; Zhu, X. Ronald

    2008-12-01

    The clinical applications of kilovoltage x-ray cone-beam computed tomography (CBCT) have been compromised by the limited quality of CBCT images, which typically is due to a substantial scatter component in the projection data. In this paper, we describe an experimental method of deriving the scatter kernel of a CBCT imaging system. The estimated scatter kernel can be used to remove the scatter component from the CBCT projection images, thus improving the quality of the reconstructed image. The scattered radiation was approximated as depth-dependent, pencil-beam kernels, which were derived using an edge-spread function (ESF) method. The ESF geometry was achieved with a half-beam block created by a 3 mm thick lead sheet placed on a stack of slab solid-water phantoms. Measurements for ten water-equivalent thicknesses (WET) ranging from 0 cm to 41 cm were taken with (half-blocked) and without (unblocked) the lead sheet, and corresponding pencil-beam scatter kernels or point-spread functions (PSFs) were then derived without assuming any empirical trial function. The derived scatter kernels were verified with phantom studies. Scatter correction was then incorporated into the reconstruction process to improve image quality. For a 32 cm diameter cylinder phantom, the flatness of the reconstructed image was improved from 22% to 5%. When the method was applied to CBCT images for patients undergoing image-guided therapy of the pelvis and lung, the variation in selected regions of interest (ROIs) was reduced from >300 HU to <100 HU. We conclude that the scatter reduction technique utilizing the scatter kernel effectively suppresses the artifact caused by scatter in CBCT.
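
    The core numerical step, differentiating a measured edge-spread profile to obtain a line-spread (pencil-beam) kernel, is simple to sketch. The synthetic edge below is a placeholder for the half-blocked and unblocked measurements described above; the depth dependence and normalization conventions of the actual method are not reproduced.

    ```python
    import numpy as np

    def lsf_from_esf(esf, dx=1.0):
        """Differentiate an edge-spread function to get the line-spread
        function, clip noise-induced negative lobes, normalize to unit area."""
        lsf = np.gradient(esf, dx)
        lsf = np.clip(lsf, 0.0, None)
        return lsf / (lsf.sum() * dx)

    # Toy half-blocked edge profile (smooth sigmoidal edge).
    x = np.linspace(-20.0, 20.0, 401)
    esf = 0.5 * (1.0 + np.tanh(x / 5.0))
    lsf = lsf_from_esf(esf, dx=x[1] - x[0])
    ```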

  3. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods

    PubMed Central

    Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675

  4. Risk Classification with an Adaptive Naive Bayes Kernel Machine Model

    PubMed Central

    Minnier, Jessica; Yuan, Ming; Liu, Jun S.; Cai, Tianxi

    2014-01-01

    Genetic studies of complex traits have uncovered only a small number of risk markers, explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single-marker methods may lack power in selecting informative markers or estimating effects. Most existing methods also typically do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure. If markers in a group tend to have similar effects, proper usage of the group structure could improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a naive Bayes kernel machine (KM) model, we estimate gene-set-specific risk models that relate each gene-set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene-sets via a regularization procedure. Estimation and computational efficiency are further improved with kernel principal component analysis. Asymptotic results for model estimation and gene-set selection are derived, and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models. PMID:26236061

  5. A novel kernel extreme learning machine algorithm based on self-adaptive artificial bee colony optimisation strategy

    NASA Astrophysics Data System (ADS)

    Ma, Chao; Ouyang, Jihong; Chen, Hui-Ling; Ji, Jin-Chao

    2016-04-01

    In this paper, we propose a novel learning algorithm, named SABC-MKELM, based on a kernel extreme learning machine (KELM) method for single-hidden-layer feedforward networks. In SABC-MKELM, a combination of Gaussian kernels is used as the activation function of the KELM instead of simple fixed kernel learning, where the related parameters of the kernels and the weights of the kernels can be optimised simultaneously by a novel self-adaptive artificial bee colony (SABC) approach. SABC-MKELM outperforms six other state-of-the-art approaches in general, as it can effectively determine solution updating strategies and suitable parameters to produce a flexible kernel function in SABC. Simulations demonstrate that the proposed algorithm not only self-adaptively determines suitable parameters and solution updating strategies by learning from previous experience, but also achieves better generalisation performance than several related methods, and the results show the good stability of the proposed algorithm.

  6. Support vector machine with adaptive composite kernel for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Li, Wei; Du, Qian

    2015-05-01

    With the improvement of spatial resolution of hyperspectral imagery, it is more reasonable to include spatial information in classification. The resulting spectral-spatial classification outperforms the traditional hyperspectral image classification with spectral information only. Among many spectral-spatial classifiers, support vector machine with composite kernel (SVM-CK) can provide superior performance, with one kernel for spectral information and the other for spatial information. In the original SVM-CK, the spatial information is retrieved by spatial averaging of pixels in a local neighborhood, and used in classifying the central pixel. Obviously, not all the pixels in such a local neighborhood may belong to the same class. Thus, we investigate the performance of Gaussian lowpass filter and an adaptive filter with weights being assigned based on the similarity to the central pixel. The adaptive filter can significantly improve classification accuracy while the Gaussian lowpass filter is less time-consuming and less sensitive to the window size.
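
    A minimal sketch of the composite-kernel construction, assuming Python with NumPy and scikit-learn: one RBF kernel on spectral features, one on spatially filtered features, combined with a weight mu and passed to an SVM as a precomputed Gram matrix. The data and the simple noisy stand-in for the spatial filter are hypothetical.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def rbf(A, B, gamma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-gamma * d2)

    def composite_kernel(Xspec, Xspat, mu=0.6, gamma=1.0):
        """Weighted sum of a spectral kernel and a spatial kernel."""
        return mu * rbf(Xspec, Xspec, gamma) + (1 - mu) * rbf(Xspat, Xspat, gamma)

    # Toy data: spectra plus a stand-in for their neighborhood-filtered versions.
    rng = np.random.default_rng(3)
    Xspec = rng.standard_normal((100, 10))
    Xspat = Xspec + 0.1 * rng.standard_normal((100, 10))
    y = (Xspec[:, 0] > 0).astype(int)

    K = composite_kernel(Xspec, Xspat)
    clf = SVC(kernel="precomputed").fit(K, y)    # train on the Gram matrix
    print(clf.score(K, y))
    ```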

  7. Improvement of SVM-Based Speech/Music Classification Using Adaptive Kernel Technique

    NASA Astrophysics Data System (ADS)

    Lim, Chungsoo; Chang, Joon-Hyuk

    In this paper, we propose a way to improve the classification performance of support vector machines (SVMs), especially for speech and music frames within a selectable mode vocoder (SMV) framework. A myriad of techniques have been proposed for SVMs, and most of them are employed during the training phase of SVMs. Instead, the proposed algorithm is applied during the test phase and works with existing schemes. The proposed algorithm modifies a kernel parameter in the decision function of SVMs to alter SVM decisions for better classification accuracy based on the previous outputs of SVMs. Since speech and music frames exhibit strong inter-frame correlation, the outputs of SVMs can guide the kernel parameter modification. Our experimental results show that the proposed algorithm has the potential for adaptively tuning classifications of support vector machines for better performance.

  8. Accurate palm vein recognition based on wavelet scattering and spectral regression kernel discriminant analysis

    NASA Astrophysics Data System (ADS)

    Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad

    2015-01-01

    Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations and has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress and a few practical issues, providing accurate palm vein readings has remained an unsolved issue in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of the WS-generated features is quite large, SRKDA is required to reduce the extracted features and enhance the discrimination. The results, based on two public databases (the PolyU Hyper Spectral Palmprint database and the PolyU Multi Spectral Palmprint database), show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER) = 0.1%] for the hyperspectral database, and a 99.97% identification rate and a 99.98% verification rate (EER = 0.019%) for the multispectral database.

  9. Multi-source adaptation joint kernel sparse representation for visual classification.

    PubMed

    Tao, JianWen; Hu, Wenjun; Wen, Shiting

    2016-04-01

    Most existing domain adaptation learning (DAL) methods rely on a single source domain to learn a classifier with well-generalized performance for the target domain of interest, which may lead to the so-called negative transfer problem. To this end, many multi-source adaptation methods have been proposed. While the advantages of using multi-source domains of information for establishing an adaptation model have been widely recognized, how to boost the robustness of the computational model for multi-source adaptation learning has only recently received attention. To address this issue and achieve enhanced performance, we propose in this paper a novel algorithm called multi-source Adaptation Regularization Joint Kernel Sparse Representation (ARJKSR) for robust visual classification problems. Specifically, ARJKSR jointly represents the target dataset by a sparse linear combination of the training data of each source domain in some optimal Reproducing Kernel Hilbert Space (RKHS), recovered by simultaneously minimizing the inter-domain distribution discrepancy and maximizing the local consistency, whilst constraining the observations from both target and source domains to share their sparse representations. The optimization problem of ARJKSR can be solved using an efficient alternating direction method. Under the ARJKSR framework, we further learn a robust label prediction matrix for the unlabeled instances of the target domain based on the classical graph-based semi-supervised learning (GSSL) paradigm, into which multiple Laplacian graphs constructed with ARJKSR are incorporated. The validity of our method is examined on several visual classification problems. Results demonstrate the superiority of our method in comparison to several state-of-the-art methods. PMID:26894961

  10. An adaptive kernel smoothing method for classifying Austrosimulium tillyardianum (Diptera: Simuliidae) larval instars.

    PubMed

    Cen, Guanjun; Yu, Yonghao; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao

    2015-01-01

    In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks' rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby's growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods.
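
    A hedged sketch of a variable-bandwidth (Abramson-style) kernel density estimate of the kind the record describes, with hypothetical data standing in for head-capsule widths; the paper's specific bandwidth selector and instar-quality criterion are not reproduced.

    ```python
    import numpy as np

    def adaptive_kde(x, grid, h0=None, alpha=0.5):
        """Adaptive KDE: per-sample bandwidths h_i = h0 * (pilot(x_i)/g)^(-alpha),
        so kernels widen where the data are sparse."""
        x = np.asarray(x, float)
        if h0 is None:                                   # Silverman rule for the pilot
            h0 = 1.06 * x.std() * len(x) ** (-0.2)
        pilot = (np.exp(-(x[:, None] - x[None, :])**2 / (2 * h0**2)).mean(1)
                 / (np.sqrt(2 * np.pi) * h0))
        g = np.exp(np.log(pilot).mean())                 # geometric mean of pilot density
        h = h0 * (pilot / g) ** (-alpha)                 # per-sample bandwidths
        K = np.exp(-(grid[:, None] - x[None, :])**2 / (2 * h[None, :]**2))
        return (K / (np.sqrt(2 * np.pi) * h[None, :])).mean(1)

    # Hypothetical bimodal "head-capsule width" sample: two instars.
    rng = np.random.default_rng(4)
    x = np.r_[rng.normal(0.30, 0.02, 150), rng.normal(0.42, 0.03, 150)]
    grid = np.linspace(0.20, 0.55, 400)
    density = adaptive_kde(x, grid)   # local minima suggest instar boundaries
    ```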

  11. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels.

    PubMed

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.

  12. Adaptive anisotropic kernels for nonparametric estimation of absolute configurational entropies in high-dimensional configuration spaces.

    PubMed

    Hensen, Ulf; Grubmüller, Helmut; Lange, Oliver F

    2009-07-01

    The quasiharmonic approximation is the most widely used estimate for the configurational entropy of macromolecules from configurational ensembles generated from atomistic simulations. This method, however, rests on two assumptions that severely limit its applicability: (i) that a principal component analysis yields sufficiently uncorrelated modes, and (ii) that configurational densities can be well approximated by Gaussian functions. In this paper we introduce a nonparametric density estimation method that rests on adaptive anisotropic kernels. It is shown that this method provides accurate configurational entropies for up to 45 dimensions, thus improving on the quasiharmonic approximation. When embedded in the minimally coupled subspace framework, large macromolecules of biological interest become accessible, as demonstrated for the 67-residue cold-shock protein. PMID:19658735
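
    For orientation, the simplest member of this family of estimators is the kernel-density resubstitution estimate H ≈ -mean_i log f̂(x_i); a fixed isotropic Gaussian version is sketched below (the paper's contribution is making the kernels adaptive and anisotropic, which is not attempted here).

    ```python
    import numpy as np

    def kde_entropy(X, h=0.3):
        """Resubstitution entropy estimate with a fixed isotropic Gaussian
        kernel and leave-one-out densities (in nats)."""
        n, d = X.shape
        d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
        K = np.exp(-d2 / (2 * h**2)) / ((2 * np.pi * h**2) ** (d / 2))
        f_hat = (K.sum(1) - K.diagonal()) / (n - 1)   # exclude each point's own kernel
        return -np.log(f_hat).mean()

    X = np.random.default_rng(5).standard_normal((500, 3))
    print(kde_entropy(X))   # exact entropy of a 3-D standard Gaussian is ~4.26 nats
    ```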

  13. Mapping Spatial Variations of Absorption and Scattering in the Crust: Sensitivity Kernels and Preliminary Application to the Alps

    NASA Astrophysics Data System (ADS)

    Margerin, L.; Mayor, J.; Calvet, M.

    2015-12-01

    Among the physical processes that affect the amplitude of seismic waves, attenuation is one of the most poorly understood and undetermined factors. Two basic mechanisms control seismic attenuation in the crust: scattering by small-scale heterogeneities, and absorption of seismic energy by inelastic and irreversible processes. A number of techniques have been devised to retrieve attenuation information from the modeling of direct seismic waves emitted by earthquakes. However, a major issue with the use of ballistic signals lies in the fact that their amplitude is affected by multiple factors that are difficult to disentangle in practice: radiation pattern, focussing/defocussing, and site effects. Moreover, since both scattering and absorption manifest themselves as an approximately exponential decay of direct-wave amplitude with distance, it is not possible to separate their effects using attenuation measurements based on ballistic waves only. In this work, we propose a multiple-scattering approach to map the scattering and absorption properties of the crust independently, using seismic coda waves. To this end, we introduce a model of seismic energy transport known as radiative transfer and use perturbation theory to derive sensitivity kernels for the intensity detected in the coda. Numerical evaluation of these kernels demonstrates that coda waves possess distinct spatial sensitivities to absorption and scattering. These results pave the way for the development of a genuine tomographic approach to the mapping of absorption and scattering in the crust. Preliminary results on the absorption structure of the Alps in the 1-32 Hz frequency band reveal some interesting correlations with the geology at spatial scales ranging from a few tens to a few thousand kilometers. Regions of high absorption delineate sedimentary structures such as basins, grabens, and alluvial valleys, while localized zones of weak absorption correlate with mantellic or plutonic intrusions such as the …

  14. Deriving Sensitivity Kernels of Coda-Wave Travel Times to Velocity Changes Based on the Three-Dimensional Single Isotropic Scattering Model

    NASA Astrophysics Data System (ADS)

    Nakahara, Hisashi; Emoto, Kentaro

    2016-08-01

    Recently, coda-wave interferometry has been used to monitor temporal changes in subsurface structures. Seismic velocity changes have been detected by coda-wave interferometry in association with large earthquakes and volcanic eruptions. To constrain the spatial extent of the velocity changes, spatial homogeneity is often assumed. However, it is important to locate the region of the velocity changes correctly in order to understand the physical mechanisms causing them. In this paper, we are concerned with the sensitivity kernels relating travel times of coda waves to velocity changes. In previous studies, sensitivity kernels have been formulated for two-dimensional single scattering and multiple scattering, three-dimensional multiple scattering, and diffusion. In this paper, we formulate and derive analytical expressions of the sensitivity kernels for the three-dimensional single-scattering case. These sensitivity kernels show two peaks, at the source and receiver locations, similar to previous studies using different scattering models. The two peaks are more pronounced at later lapse times. We validate our formulation by comparing it with finite-difference simulations of acoustic wave propagation. Our formulation enables us to evaluate the sensitivity kernels analytically, which is particularly useful for the analysis of body waves from deeper earthquakes.

  15. Segmentation of Brain Tissues from Magnetic Resonance Images Using Adaptively Regularized Kernel-Based Fuzzy C-Means Clustering.

    PubMed

    Elazab, Ahmed; Wang, Changmiao; Jia, Fucang; Wu, Jianhuang; Li, Guanglin; Hu, Qingmao

    2015-01-01

    An adaptively regularized kernel-based fuzzy C-means clustering framework is proposed for segmentation of brain magnetic resonance images. The framework can be in the form of three algorithms for the local average grayscale being replaced by the grayscale of the average filter, median filter, and devised weighted images, respectively. The algorithms employ the heterogeneity of grayscales in the neighborhood and exploit this measure for local contextual information and replace the standard Euclidean distance with Gaussian radial basis kernel functions. The main advantages are adaptiveness to local context, enhanced robustness to preserve image details, independence of clustering parameters, and decreased computational costs. The algorithms have been validated against both synthetic and clinical magnetic resonance images with different types and levels of noises and compared with 6 recent soft clustering algorithms. Experimental results show that the proposed algorithms are superior in preserving image details and segmentation accuracy while maintaining a low computational complexity. PMID:26793269
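
    A minimal sketch of plain kernel fuzzy C-means, the core that the adaptively regularized framework extends (the local-context regularization terms are omitted); all parameters are illustrative.

    ```python
    import numpy as np

    def kfcm(X, c=3, m=2.0, sigma=1.0, n_iter=50, seed=0):
        """Kernel fuzzy C-means: a Gaussian RBF replaces the Euclidean distance."""
        rng = np.random.default_rng(seed)
        V = X[rng.choice(len(X), c, replace=False)]           # initial centers
        for _ in range(n_iter):
            K = np.exp(-((X[:, None, :] - V[None, :, :])**2).sum(-1)
                       / (2 * sigma**2))                      # K(x_k, v_i)
            d = np.clip(1.0 - K, 1e-12, None)                 # kernel-induced distance
            U = (1.0 / d) ** (1.0 / (m - 1))
            U /= U.sum(1, keepdims=True)                      # fuzzy memberships
            W = (U ** m) * K
            V = (W.T @ X) / W.sum(0)[:, None]                 # weighted center update
        return U.argmax(1), V

    labels, centers = kfcm(np.random.default_rng(6).standard_normal((300, 2)))
    print(np.bincount(labels))
    ```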

  1. Osteoarthritis Classification Using Self Organizing Map Based on Gabor Kernel and Contrast-Limited Adaptive Histogram Equalization

    PubMed Central

    Anifah, Lilik; Purnama, I Ketut Eddy; Hariadi, Mochamad; Purnomo, Mauridhi Hery

    2013-01-01

    Localization is the first step in osteoarthritis (OA) classification, and manual classification is time-consuming, tedious, and expensive. The proposed system is designed as a decision support system to help medical doctors classify the severity of knee OA. A method is proposed here to localize the joint space area and then classify OA into KL-Grade 0, KL-Grade 1, KL-Grade 2, KL-Grade 3 and KL-Grade 4 in four steps: preprocessing, segmentation, feature extraction, and classification. In this system, right and left knee detection was performed by employing Contrast-Limited Adaptive Histogram Equalization (CLAHE) and template matching. CLAHE is used in the preprocessing step, i.e. to normalize the varied intensities. The Gabor kernel, row sum graph and moment methods were used to localize the knee joint space area, and the segmentation process was conducted using the Gabor kernel, template matching, row sum graph and the gray level center of mass method. GLCM features (contrast, correlation, energy, and homogeneity) were employed as training data. Overall, 50 data were used for training and 258 for testing. Experimental results showed the best performance using a Gabor kernel with parameters α=8, θ=0, Ψ=[0 π/2], γ=0.8, N=4, with 5000 iterations, a momentum value of 0.5 and α0=0.6 for the classification process. The run gave classification accuracy rates of 93.8% for KL-Grade 0, 70% for KL-Grade 1, 4% for KL-Grade 2, 10% for KL-Grade 3 and 88.9% for KL-Grade 4. PMID:23525188
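
    The pipeline above (CLAHE normalization, Gabor filtering, row-sum localization) can be sketched with standard OpenCV calls. This is a minimal illustration under assumed parameter values, not the authors' implementation; the file name and the mapping of the reported α=8 onto OpenCV's sigma are placeholders.

      import cv2
      import numpy as np

      # Load a knee radiograph as grayscale (file name is illustrative).
      img = cv2.imread("knee_xray.png", cv2.IMREAD_GRAYSCALE)

      # CLAHE normalizes the varied intensities before segmentation;
      # clipLimit and tileGridSize are illustrative choices.
      clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
      equalized = clahe.apply(img)

      # A Gabor kernel with theta=0, gamma=0.8, psi=0, echoing the
      # reported settings; ksize, sigma and lambd are assumptions.
      gabor = cv2.getGaborKernel(ksize=(31, 31), sigma=8.0, theta=0.0,
                                 lambd=10.0, gamma=0.8, psi=0.0)
      response = cv2.filter2D(equalized, cv2.CV_32F, gabor)

      # Row-sum graph: the joint space shows up as a low-response band
      # between femur and tibia, so its row index can be located crudely.
      row_sums = response.sum(axis=1)
      joint_row = int(np.argmin(row_sums))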

  2. Equalizing resolution in smoothed-particle hydrodynamics calculations using self-adaptive sinc kernels

    NASA Astrophysics Data System (ADS)

    García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin

    2014-10-01

    Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows the interpolation quality to be increased by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids, while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overhead. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
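
    The sinc kernel family at the heart of this scheme can be evaluated directly. Below is a minimal sketch assuming the common convention S_n(q) ∝ sinc(q/2)^n with compact support q ∈ [0, 2] and a 3D normalization computed numerically; the authors' exact normalization convention may differ.

      import numpy as np
      from scipy.integrate import quad

      def sinc_kernel_3d(q, n):
          """Sinc kernel S_n(q) = B_n * sinc(q/2)**n, q in [0, 2].

          np.sinc(x) = sin(pi*x)/(pi*x), so np.sinc(q/2) equals
          sin(pi*q/2)/(pi*q/2). B_n enforces the 3D normalization
          4*pi * integral_0^2 S_n(q) q^2 dq = 1 (an assumed convention).
          """
          integral, _ = quad(lambda x: np.sinc(x / 2.0) ** n * x ** 2, 0.0, 2.0)
          B_n = 1.0 / (4.0 * np.pi * integral)
          q = np.asarray(q, dtype=float)
          return np.where(q < 2.0, B_n * np.sinc(q / 2.0) ** n, 0.0)

      # Raising the exponent n sharpens the kernel peak, which is the
      # single knob the equalization scheme varies between regions.
      for n in (3, 5, 7):
          print(n, float(sinc_kernel_3d(0.0, n)))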

  3. ASSESSMENT OF CLINICAL IMAGE QUALITY IN PAEDIATRIC ABDOMINAL CT EXAMINATIONS: DEPENDENCY ON THE LEVEL OF ADAPTIVE STATISTICAL ITERATIVE RECONSTRUCTION (ASiR) AND THE TYPE OF CONVOLUTION KERNEL.

    PubMed

    Larsson, Joel; Båth, Magnus; Ledenius, Kerstin; Caisander, Håkan; Thilander-Klang, Anne

    2016-06-01

    The purpose of this study was to investigate the effect of different combinations of convolution kernel and level of Adaptive Statistical iterative Reconstruction (ASiR™) on diagnostic image quality, as well as on the visualisation of anatomical structures, in paediatric abdominal computed tomography (CT) examinations. Thirty-five paediatric patients with abdominal pain of unspecified pathology undergoing abdominal CT were included in the study. Transaxial stacks of 5-mm-thick images were retrospectively reconstructed at various ASiR levels, in combination with three convolution kernels. Four paediatric radiologists rated the diagnostic image quality and the delineation of six anatomical structures in a blinded randomised visual grading study. Image quality at a given ASiR level was found to depend on the kernel, and a more edge-enhancing kernel benefitted from a higher ASiR level. An ASiR level of 70% together with the Soft™ or Standard™ kernel was suggested to be the optimal combination for paediatric abdominal CT examinations.

  4. Adaptive kernel independent component analysis and UV spectrometry applied to characterize the procedure for processing prepared rhubarb roots.

    PubMed

    Wang, Guoqing; Hou, Zhenyu; Peng, Yang; Wang, Yanjun; Sun, Xiaoli; Sun, Yu-an

    2011-11-01

    By determining the number of absorptive chemical components (ACCs) in mixtures using median absolute deviation (MAD) analysis and extracting the spectral profiles of the ACCs using kernel independent component analysis (KICA), an adaptive KICA (AKICA) algorithm was proposed. The proposed AKICA algorithm was used to characterize the procedure for processing prepared rhubarb roots by resolving the measured mixed raw UV spectra of rhubarb samples collected at different steaming intervals. The results show that the spectral features of the ACCs in the mixtures can be estimated directly, without chemical or physical pre-separation or other prior information. The three estimated independent components (ICs) represent different chemical components in the mixtures: mainly polysaccharides (IC1), tannin (IC2), and anthraquinone glycosides (IC3). The variations of the relative concentrations of the ICs can account for the chemical and physical changes during the processing procedure: IC1 increases significantly during the first 5 h and is nearly invariant after 6 h; IC2 shows no significant change, or decreases slightly, during the processing procedure; IC3 decreases significantly during the first 5 h and decreases slightly after 6 h. The changes in IC1 explain why the colour blackened and darkened during processing, and the changes in IC3 explain why the processing procedure reduces the bitter and dry taste of the rhubarb roots. The endpoint of the processing procedure can be determined as 5-6 h, when the increasing or decreasing trends of the estimated ICs become insignificant. The AKICA-UV method provides an alternative approach for characterizing the processing of prepared rhubarb roots, and a novel way to determine the endpoint of the traditional Chinese medicine (TCM) processing procedure by inspecting the trends of the ICs.

  5. MC Kernel: Broadband Waveform Sensitivity Kernels for Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; van Driel, Martin; Auer, Ludwig; Hosseini, Kasra; Sigloch, Karin; Nissen-Meyer, Tarje

    2016-04-01

    We present MC Kernel, a software implementation to calculate seismic sensitivity kernels on arbitrary tetrahedral or hexahedral grids across the whole observable seismic frequency band. Seismic sensitivity kernels are the basis of seismic tomography, since they map measurements to model perturbations. Their calculation over the whole frequency range was so far only possible with approximate methods (Dahlen et al. 2000), while fully numerical methods were restricted to the lower frequency range (usually below 0.05 Hz, Tromp et al. 2005). With our implementation, it is possible to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (www.axisem.info), and thus on spherically symmetric models; the advantage is that frequencies up to 0.2 Hz and higher can be accessed. Since the use of irregular, adapted grids is an integral part of regularisation in seismic tomography, MC Kernel works in an inversion-grid-centred fashion: a Monte-Carlo integration method is used to project the kernel onto each basis function, which allows the desired precision of the kernel estimation to be controlled. It also means that the code concentrates calculation effort on regions of interest without prior assumptions on the kernel shape. The code makes extensive use of redundancies in calculating kernels for different receivers or frequency pass-bands for one earthquake, to facilitate its use in large-scale global seismic tomography.
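
    The inversion-grid-centred Monte-Carlo projection can be illustrated generically: integrate the kernel over one grid cell, drawing samples until the standard error meets a requested precision. The function below is a sketch with hypothetical names, not MC Kernel's actual interface.

      import numpy as np

      def mc_project(kernel, cell_lo, cell_hi, rel_tol=0.05,
                     batch=1000, max_samples=10**6):
          """Monte-Carlo projection of a sensitivity kernel onto one
          inversion-grid cell; `kernel` is any callable on 3-D points.
          Sampling continues until the relative standard error of the
          cell integral drops below rel_tol."""
          lo, hi = np.asarray(cell_lo, float), np.asarray(cell_hi, float)
          volume = np.prod(hi - lo)
          values = []
          while len(values) < max_samples:
              pts = np.random.uniform(lo, hi, size=(batch, len(lo)))
              values.extend(kernel(p) for p in pts)
              mean = np.mean(values)
              sem = np.std(values, ddof=1) / np.sqrt(len(values))
              if abs(mean) > 0 and sem < rel_tol * abs(mean):
                  break
          return volume * mean, volume * sem

      # Toy kernel peaked at the origin, integrated over a small cell:
      est, err = mc_project(lambda p: np.exp(-p @ p), [-1, -1, -1], [1, 1, 1])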

  6. Time-reversed adapted-perturbation (TRAP) optical focusing onto dynamic objects inside scattering media

    PubMed Central

    Ma, Cheng; Xu, Xiao; Liu, Yan; Wang, Lihong V.

    2014-01-01

    The ability to steer and focus light inside scattering media has long been sought for a multitude of applications. To form optical foci inside scattering media, the only feasible strategy at present is to guide photons by using either implanted [1] or virtual [2-4] guide stars, which can be inconvenient and limits potential applications. Here, we report a scheme for focusing light inside scattering media by employing intrinsic dynamics as guide stars. By time-reversing the perturbed component of the scattered light adaptively, we show that it is possible to focus light to the origin of the perturbation. Using the approach, we demonstrate non-invasive dynamic light focusing onto moving targets and imaging of a time-variant object obscured by highly scattering media. Anticipated applications include imaging and photoablation of angiogenic vessels in tumours as well as other biomedical uses. PMID:25530797

  7. Kernel Phase and Kernel Amplitude in Fizeau Imaging

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin J. S.

    2016-09-01

    Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.
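
    The linear-algebra core of the idea is compact: if A maps unknown pupil-plane phases to measured Fourier phases, kernel phases are projections K·φ where the rows of K span the left null space of A, so pupil aberrations cancel to first order. A toy sketch (the 3x3 matrix is illustrative, not a real pupil model):

      import numpy as np
      from scipy.linalg import null_space

      # Toy phase-transfer matrix; in practice A is built from the pupil
      # geometry. Its third row is the sum of the first two, so a left
      # null space exists.
      A = np.array([[1.0, -1.0,  0.0],
                    [0.0,  1.0, -1.0],
                    [1.0,  0.0, -1.0]])

      K = null_space(A.T).T          # rows satisfy K @ A = 0
      assert np.allclose(K @ A, 0.0)

      # Self-calibrating observables from measured Fourier phases phi:
      phi = np.array([0.3, -0.1, 0.2])
      kernel_phases = K @ phi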

  8. A versatile setup using femtosecond adaptive spectroscopic techniques for coherent anti-Stokes Raman scattering

    SciTech Connect

    Shen, Yujie; Voronine, Dmitri V.; Sokolov, Alexei V.; Scully, Marlan O.

    2015-08-15

    We report a versatile setup based on femtosecond adaptive spectroscopic techniques for coherent anti-Stokes Raman scattering. The setup uses a femtosecond Ti:Sapphire oscillator source and a folded 4f pulse shaper, in which the pulse shaping is carried out through conventional optical elements and does not require a spatial light modulator. Our setup is simple to align and can be easily switched between the collinear single-beam and noncollinear two-beam configurations. We demonstrate the capability to investigate both transparent and highly scattering samples by detecting transmitted and reflected signals, respectively.

  9. Analytical equations for CT dose profiles derived using a scatter kernel of Monte Carlo parentage with broad applicability to CT dosimetry problems

    SciTech Connect

    Dixon, Robert L.; Boone, John M.

    2011-07-15

    Purpose: Knowledge of the complete axial dose profile f(z), including its long scatter tails, provides the most complete (and flexible) description of the accumulated dose in CT scanning. The CTDI paradigm (including CTDI_vol) requires shift-invariance along z (identical dose profiles spaced at equal intervals), and is therefore inapplicable to many of the new and complex shift-variant scan protocols, e.g., high dose perfusion studies using variable (or zero) pitch. In this work, a convolution-based beam model developed by Dixon et al. [Med. Phys. 32, 3712-3728 (2005)], updated with a scatter LSF kernel (or DSF) derived from a Monte Carlo simulation by Boone [Med. Phys. 36, 4547-4554 (2009)], is used to create an analytical equation for the axial dose profile f(z) in a cylindrical phantom. Using f(z), equations are derived which provide the analytical description of conventional (axial and helical) dose, demonstrating its physical underpinnings; and likewise for the peak axial dose f(0) appropriate to stationary phantom cone beam CT (SCBCT). The methodology can also be applied to dose calculations in shift-variant scan protocols. This paper is an extension of our recent work, Dixon and Boone [Med. Phys. 37, 2703-2718 (2010)], which dealt only with the properties of the peak dose f(0), its relationship to CTDI, and its appropriateness to SCBCT. Methods: The experimental beam profile data f(z) of Mori et al. [Med. Phys. 32, 1061-1069 (2005)] from a 256-channel prototype cone beam scanner for beam widths (apertures) ranging from a = 28 to 138 mm are used to corroborate the theoretical axial profiles in a 32 cm PMMA body phantom. Results: The theoretical functions f(z) closely matched the central-axis experimental profile data for all apertures (a = 28-138 mm). Integration of f(z) likewise yields analytical equations for all the (CTDI-based) dosimetric quantities of conventional CT (including CTDI_L itself), in addition to the peak dose f(0) relevant to SCBCT.

  10. Proton dose calculation on scatter-corrected CBCT image: Feasibility study for adaptive proton therapy

    PubMed Central

    Park, Yang-Kyun; Sharp, Gregory C.; Phillips, Justin; Winey, Brian A.

    2015-01-01

    Purpose: To demonstrate the feasibility of proton dose calculation on scatter-corrected cone-beam computed tomographic (CBCT) images for the purpose of adaptive proton therapy. Methods: CBCT projection images were acquired from anthropomorphic phantoms and a prostate patient using the on-board imaging system of an Elekta Infinity linear accelerator. Two previously introduced techniques were used to correct the scattered x-rays in the raw projection images: uniform scatter correction (CBCTus) and a priori CT-based scatter correction (CBCTap). CBCT images were reconstructed using a standard FDK algorithm and a GPU-based reconstruction toolkit. Soft-tissue ROI-based HU shifting was used to improve the HU accuracy of the uncorrected CBCT images and CBCTus, while no HU change was applied to the CBCTap. The degree of equivalence of the corrected CBCT images with respect to the reference CT image (CTref) was evaluated by using angular profiles of water equivalent path length (WEPL) and passively scattered proton treatment plans. The CBCTap was further evaluated in more realistic scenarios, such as rectal filling and weight loss, to assess the effect of mismatched prior information on the corrected images. Results: The uncorrected CBCT and CBCTus images demonstrated substantial WEPL discrepancies (7.3 ± 5.3 mm and 11.1 ± 6.6 mm, respectively) with respect to the CTref, while the CBCTap images showed substantially reduced WEPL errors (2.4 ± 2.0 mm). Similarly, the CBCTap-based treatment plans demonstrated a high pass rate (96.0% ± 2.5% in 2 mm/2% criteria) in a 3D gamma analysis. Conclusions: The a priori CT-based scatter correction technique was shown to be promising for adaptive proton therapy, as it achieved equivalent proton dose distributions and water equivalent path lengths compared to those of a reference CT in a selection of anthropomorphic phantoms. PMID:26233175
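
    The WEPL comparison underlying these results reduces to a path integral of relative stopping power (RSP) along each ray. A minimal sketch, assuming a 1 mm grid, nearest-neighbour sampling, and an upstream HU-to-RSP conversion (all simplifications):

      import numpy as np

      def wepl_along_ray(rsp_volume, start, direction, step_mm=1.0, n_steps=400):
          """Water-equivalent path length: accumulate RSP * step along a
          ray through a 3-D RSP volume (indices in mm for simplicity)."""
          d = np.asarray(direction, float)
          d /= np.linalg.norm(d)
          pos = np.asarray(start, float)
          wepl = 0.0
          for _ in range(n_steps):
              idx = tuple(np.round(pos).astype(int))
              if all(0 <= i < s for i, s in zip(idx, rsp_volume.shape)):
                  wepl += rsp_volume[idx] * step_mm
              pos += d * step_mm
          return wepl

      # Angular WEPL profiles are then obtained by sweeping `direction`
      # around the isocenter for both the CBCT-derived and reference-CT
      # derived RSP volumes and differencing the results.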

  11. Forward scattering detection of a submerged moving target based on adaptive filtering technique.

    PubMed

    He, Chuanlin; Yang, Kunde; Lei, Bo; Ma, Yuanliang

    2015-09-01

    Forward scattered waves are always overwhelmed by severely intense direct blasts when a submerged target crosses the source-receiver line. A processing scheme called direct blast suppression based on adaptive filtering (DBS-AF) is proposed to suppress such blasts. A verification experiment was conducted in a lake with a vertical hydrophone array and 10 kHz CW impulses. Processing results show that the direct blast is suppressed in a single channel, and an intruding target is identified by the lobes in the detection curve. The detection performance is improved by adopting a time-delay beam-former on the array as a pre-processing technique. PMID:26428829
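
    The details of DBS-AF are not reproduced here, but the underlying principle can be illustrated with a textbook LMS canceller: adaptively subtract the component of the received signal that is correlated with a direct-blast reference, leaving the weak forward scatter in the residual. A minimal numpy sketch:

      import numpy as np

      def lms_cancel(d, x, n_taps=32, mu=0.01):
          """Generic LMS interference canceller. d: hydrophone signal
          (direct blast + weak forward scatter); x: reference correlated
          with the direct blast. Returns the error signal, in which the
          direct blast is suppressed. A textbook sketch, not DBS-AF."""
          w = np.zeros(n_taps)
          e = np.zeros(len(d))
          for n in range(n_taps, len(d)):
              u = x[n - n_taps:n][::-1]      # latest reference samples
              y = w @ u                       # direct-blast estimate
              e[n] = d[n] - y                 # residual: scatter + noise
              w += 2 * mu * e[n] * u          # stochastic-gradient update
          return e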

  12. Adaptive optics applied to coherent anti-Stokes Raman scattering microscopy

    NASA Astrophysics Data System (ADS)

    Girkin, John M.; Poland, Simon P.; Wright, Amanda J.; Freudiger, Christian; Evans, Conor L.; Xie, X. Sunney

    2008-02-01

    We report on the use of adaptive optics in coherent anti-Stokes Raman scattering (CARS) microscopy to improve image brightness and quality at increased optical penetration depths in biological material. The principle of the technique is to shape the incoming wavefront so that it counteracts the aberrations introduced by imperfect optics and the varying refractive index of the sample. In recent years adaptive optics have been implemented in multiphoton and confocal microscopy. CARS microscopy is proving to be a powerful tool for non-invasive and label-free biomedical imaging with vibrational contrast. As the contrast mechanism is based on a third-order nonlinear optical process, it is highly susceptible to aberrations; thus CARS signals are commonly lost beyond a depth of ~100 μm in tissue. We demonstrate the combination of adaptive optics and CARS microscopy for deep-tissue imaging using a deformable membrane mirror. A random search optimization algorithm using the CARS intensity as the figure of merit determined the mirror shape needed to correct the aberrations. We highlight two different methods of implementation, using a look-up table technique and performing the optimization in situ. We demonstrate a significant increase in brightness and image quality in an agarose/polystyrene-bead sample and in white chicken muscle, pushing the penetration depth beyond 200 μm.
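
    The random-search optimization loop is simple enough to sketch: perturb the actuator vector and keep the trial only if the measured CARS intensity improves. The merit function below is a synthetic stand-in for the instrument readout, and the actuator count of 37 is an assumption.

      import numpy as np

      def random_search(measure_intensity, n_actuators=37, n_iters=500, step=0.05):
          """Random-search optimization of deformable-mirror actuator
          values with the (measured) CARS intensity as figure of merit."""
          shape = np.zeros(n_actuators)
          best = measure_intensity(shape)
          for _ in range(n_iters):
              trial = shape + step * np.random.randn(n_actuators)
              merit = measure_intensity(trial)
              if merit > best:               # keep improvements only
                  shape, best = trial, merit
          return shape, best

      # Synthetic merit: intensity peaks when the mirror cancels a fixed
      # aberration vector `ab` (purely for demonstration).
      ab = 0.3 * np.random.randn(37)
      best_shape, best_merit = random_search(lambda s: np.exp(-np.sum((s + ab) ** 2)))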

  13. The context-tree kernel for strings.

    PubMed

    Cuturi, Marco; Vert, Jean-Philippe

    2005-10-01

    We propose a new kernel for strings which borrows ideas and techniques from information theory and data compression. This kernel can be used in combination with any kernel method, in particular Support Vector Machines for string classification, with notable applications in proteomics. By using a Bayesian averaging framework with conjugate priors on a class of Markovian models known as probabilistic suffix trees or context-trees, we compute the value of this kernel in linear time and space while only using the information contained in the spectrum of the considered strings. This is ensured through an adaptation of a compression method known as the context-tree weighting algorithm. Encouraging classification results are reported on a standard protein homology detection experiment, showing that the context-tree kernel performs well with respect to other state-of-the-art methods while using no biological prior knowledge.

  14. Modern industrial simulation tools: Kernel-level integration of high performance parallel processing, object-oriented numerics, and adaptive finite element analysis. Final report, July 16, 1993--September 30, 1997

    SciTech Connect

    Deb, M.K.; Kennon, S.R.

    1998-04-01

    A cooperative R&D effort between industry and the US government, this project, under the HPPP (High Performance Parallel Processing) initiative of the Dept. of Energy, started the investigation into parallel object-oriented (OO) numerics. The basic goal was to research and utilize the emerging technologies to create a physics-independent computational kernel for applications using the adaptive finite element method. The industrial team included Computational Mechanics Co., Inc. (COMCO) of Austin, TX (as the primary contractor), Scientific Computing Associates, Inc. (SCA) of New Haven, CT, Texaco and CONVEX. Sandia National Laboratory (Albq., NM) was the technology partner from the government side. COMCO was responsible for the main kernel design and development, SCA took the lead in parallel solver technology, and guidance on OO technologies was Sandia's main contribution to this venture. CONVEX and Texaco supported the partnership with hardware resources and application knowledge, respectively. As such, a minimum of fifty-percent cost-sharing was provided by the industry partnership during this project. This report describes the R&D activities and provides some details about the prototype kernel and example applications.

  15. Robotic intelligence kernel

    DOEpatents

    Bruemmer, David J.

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative, and may include multiple levels, with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative, and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.

  16. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning (KCL) has been successfully used to achieve robust clustering. However, KCL is not scalable to large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to compute and keep in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) algorithm, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation model works for kernel competitive learning and, furthermore, show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. PMID:25528318
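
    The subspace-via-sampling idea is closely related to Nyström-style column sampling, which avoids ever forming the full n x n kernel matrix. A minimal sketch of that building block (not the AKCL algorithm itself):

      import numpy as np

      def rbf(X, Y, gamma=0.5):
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def nystrom(X, m=50, gamma=0.5, seed=0):
          """Approximate K ~= C @ W_pinv @ C.T from m sampled landmarks,
          so the full kernel matrix is never computed or stored."""
          rng = np.random.default_rng(seed)
          idx = rng.choice(len(X), size=m, replace=False)
          C = rbf(X, X[idx], gamma)          # n x m cross-kernel
          W = rbf(X[idx], X[idx], gamma)     # m x m landmark kernel
          return C, np.linalg.pinv(W)

      X = np.random.randn(1000, 8)
      C, W_pinv = nystrom(X)
      # Kernel-matrix products K @ v now cost O(n*m):
      v = np.random.randn(1000)
      Kv = C @ (W_pinv @ (C.T @ v))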

  18. Putting Priors in Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms, like different versions of EM, and numeric optimization methods, like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
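
    One plausible reading of such a mixture-density kernel is sketched below: fit an ensemble of Gaussian mixture models on bootstrap resamples and let K(x, z) be the averaged inner product of posterior membership vectors, which is symmetric positive semi-definite by construction. This is an illustration, not the AUTOBAYES-generated implementation.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def mixture_density_kernel(X, Z, n_models=5, n_components=4, seed=0):
          """K(x, z) = average over ensemble members of p(x) . p(z),
          where p(.) is the vector of GMM posterior responsibilities.
          Prior knowledge enters through the fitting of each model."""
          rng = np.random.default_rng(seed)
          K = np.zeros((len(X), len(Z)))
          for m in range(n_models):
              boot = rng.choice(len(X), size=len(X), replace=True)
              gmm = GaussianMixture(n_components=n_components,
                                    random_state=m).fit(X[boot])
              K += gmm.predict_proba(X) @ gmm.predict_proba(Z).T
          return K / n_models

      X = np.random.randn(100, 3)
      K = mixture_density_kernel(X, X)   # PSD Gram matrix for any kernel method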

  19. Direct Measurement of Wave Kernels in Time-Distance Helioseismology

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.

    2006-01-01

    Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains features similar to those of a theoretical damping kernel but not of a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.

  20. Kernel simplex growing algorithm for hyperspectral endmember extraction

    NASA Astrophysics Data System (ADS)

    Zhao, Liaoying; Zheng, Junpeng; Li, Xiaorun; Wang, Lijiao

    2014-01-01

    In order to effectively extract endmembers from hyperspectral imagery, where the linear mixing model may not be appropriate due to multiple scattering effects, this paper extends the simplex growing algorithm (SGA) to a kernel version. A new simplex volume formula without dimension reduction is used in SGA to form a new simplex growing algorithm (NSGA). The original data are nonlinearly mapped into a high-dimensional space where the multiple-scattering effects can be neglected. To avoid determining the complex nonlinear mapping explicitly, a kernel function is used to extend the NSGA to a kernel NSGA (KNSGA). Experimental results on simulated and real data show that the proposed KNSGA approach outperforms SGA and NSGA.
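
    The quantity the growing step maximizes is an n-simplex volume, which can be computed without dimension reduction from a Gram determinant; kernelizing then amounts to replacing the inner products with kernel evaluations. A sketch of the volume formula (an assumed, standard convention):

      import numpy as np
      from math import factorial

      def simplex_volume(E):
          """Volume of the simplex with vertices as columns of E
          (p x (n+1), p >= n): V = sqrt(det(M.T @ M)) / n! with
          M = [e_1 - e_0, ..., e_n - e_0]. The Gram form works in any
          ambient dimension, so no dimensionality reduction is needed."""
          M = E[:, 1:] - E[:, :1]
          n = M.shape[1]
          gram = M.T @ M
          return np.sqrt(max(np.linalg.det(gram), 0.0)) / factorial(n)

      # Growing step: among candidate pixels, append the one that
      # maximizes the volume of (current endmembers + candidate).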

  1. Adaptation of the University of Wisconsin High Spectral Resolution Lidar for Polarization and Multiple Scattering Measurements

    NASA Technical Reports Server (NTRS)

    Eloranta, E. W.; Piironen, P. K.

    1996-01-01

    Quantitative lidar measurements of aerosol scattering are hampered by the need for calibrations and by the problem of correcting observed backscatter profiles for the effects of attenuation. The University of Wisconsin High Spectral Resolution Lidar (HSRL) addresses these problems by separating the molecular scattering contribution from the aerosol scattering; the molecular scattering is then used as a calibration target that is available at each point in the observed profiles. While the HSRL approach has intrinsic advantages over competing techniques, realizing these advantages requires a technically demanding system which is potentially very sensitive to changes in temperature and mechanical alignment. This paper describes a new implementation of the HSRL in an instrumented van, which allows measurements during field experiments. The HSRL was modified to measure depolarization. In addition, both the signal amplitude and the depolarization variation with receiver field of view are measured simultaneously. This allows discrimination of ice clouds from water clouds and observation of multiple scattering contributions to the lidar return.

  2. Iterative software kernels

    SciTech Connect

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.

  3. A self-adaptive method for creating high efficiency communication channels through random scattering media.

    PubMed

    Hao, Xiang; Martin-Rouault, Laure; Cui, Meng

    2014-07-29

    Controlling the propagation of electromagnetic waves is important to a broad range of applications. Recent advances in controlling wave propagation in random scattering media have enabled optical focusing and imaging inside random scattering media. In this work, we propose and demonstrate a new method to deliver optical power more efficiently through scattering media. Drastically different from the random matrix characterization approach, our method can rapidly establish high efficiency communication channels using just a few measurements, regardless of the number of optical modes, and provides a practical and robust solution to boost the signal levels in optical or short wave communications. We experimentally demonstrated analog and digital signal transmission through highly scattering media with greatly improved performance. Besides scattering, our method can also reduce the loss of signal due to absorption. Experimentally, we observed that our method forced light to go around absorbers, leading to even higher signal improvement than in the case of purely scattering media. Interestingly, the resulting signal improvement is highly directional, which provides a new means against eavesdropping.

  4. A Spectrum Tree Kernel

    NASA Astrophysics Data System (ADS)

    Kuboyama, Tetsuji; Hirata, Kouichi; Kashima, Hisashi; F. Aoki-Kinoshita, Kiyoko; Yasuda, Hiroshi

    Learning from tree-structured data has received increasing interest with the rapid growth of tree-encodable data in the World Wide Web, in biology, and in other areas. Our kernel function measures the similarity between two trees by counting the number of shared sub-patterns called tree q-grams, and runs, in effect, in linear time with respect to the number of tree nodes. We apply our kernel function with a support vector machine (SVM) to classify biological data, the glycans of several blood components. The experimental results show that our kernel function performs as well as one exclusively tailored to glycan properties.

  5. Correction for patient table-induced scattered radiation in cone-beam computed tomography (CBCT)

    SciTech Connect

    Sun Mingshan; Nagy, Tamas; Virshup, Gary; Partain, Larry; Oelhafen, Markus; Star-Lack, Josh

    2011-04-15

    Purpose: In image-guided radiotherapy, an artifact typically seen in axial slices of x-ray cone-beam computed tomography (CBCT) reconstructions is a dark region or "black hole" situated below the scan isocenter. The authors trace the cause of the artifact to scattered radiation produced by radiotherapy patient tabletops and show it is linked to the use of the offset-detector acquisition mode to enlarge the imaging field-of-view. The authors present a hybrid scatter kernel superposition (SKS) algorithm to correct for scatter from both the object-of-interest and the tabletop. Methods: Monte Carlo simulations and phantom experiments were first performed to identify the source of the black hole artifact. For correction, an SKS algorithm was developed that uses separate kernels to estimate scatter from the patient tabletop and the object-of-interest. Each projection is divided into two regions, one defined by the shadow cast by the tabletop on the imager and one defined by the unshadowed region. The region not shadowed by the tabletop is processed using the recently developed fast adaptive scatter kernel superposition (fASKS) method, which employs asymmetric kernels that best model scatter transport through bodylike objects. The shadowed region is convolved with a combination of slab-derived symmetric SKS kernels and asymmetric fASKS kernels. The composition of the hybrid kernels is projection-angle-dependent. To test the algorithm, pelvis phantom and in vivo data were acquired using a CBCT test stand, a Varian Acuity simulator, and a Varian On-Board Imager, all of which have similar geometries and components. Artifact intensities and Hounsfield unit (HU) accuracies in the reconstructions were assessed before and after the correction. Results: The hybrid kernel algorithm provided effective correction and produced substantially better scatter estimates than the symmetric SKS or asymmetric fASKS methods alone, and HU nonuniformities in the reconstructed pelvis phantom were substantially reduced.
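
    In greatly simplified form, one SKS-style correction step estimates scatter by convolving a scatter-amplitude term with a kernel and subtracting it, with the tabletop shadow handled by a different kernel mix. The sketch below assumes a crude multiplicative amplitude model and a fixed 50/50 blend; real kernels are thickness-adaptive, asymmetric, and projection-angle-dependent.

      import numpy as np
      from scipy.signal import fftconvolve

      def hybrid_sks_step(projection, table_mask, k_body, k_table, alpha=0.2):
          """Toy hybrid scatter correction: body-kernel scatter estimate
          in the unshadowed region, blended body/table kernels inside the
          tabletop shadow; alpha and the blend are placeholders."""
          amplitude = alpha * projection                 # crude scatter source
          s_body = fftconvolve(amplitude, k_body, mode="same")
          s_table = fftconvolve(amplitude, k_table, mode="same")
          scatter = np.where(table_mask, 0.5 * (s_body + s_table), s_body)
          return projection - scatter                    # corrected projection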

  6. Robotic Intelligence Kernel: Communications

    SciTech Connect

    Walton, Mike C.

    2009-09-16

    The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.

  7. Mie light-scattering granulometer with adaptive numerical filtering. I. Theory.

    PubMed

    Hespel, L; Delfour, A

    2000-12-20

    A search procedure based on a least-squares method including a regularization scheme constructed from numerical filtering is presented. This method, with the addition of a nephelometer, can be used to determine the particle-size distributions of various scattering media (aerosols, fogs, rocket exhausts, motor plumes) from angular static light-scattering measurements. For retrieval of the distribution function, the experimental data are matched with theoretical patterns derived from Mie theory. The method is numerically investigated with simulated data, and the performance of the inverse procedure is evaluated. The results show that the retrieved distribution function is quite reliable, even for strong levels of noise.
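
    The least-squares retrieval with regularization can be illustrated with a plain Tikhonov smoothing term standing in for the paper's adaptive numerical filter. Here A (angles x size bins) would come from Mie theory and b is the measured angular scattering; both are assumed given.

      import numpy as np

      def regularized_retrieval(A, b, lam=1e-2):
          """Solve min ||A f - b||^2 + lam ||L f||^2 for the size
          distribution f, with L a second-difference smoothing operator;
          non-negativity is enforced crudely by clipping."""
          n = A.shape[1]
          L = np.diff(np.eye(n), n=2, axis=0)    # (n-2) x n second differences
          f = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
          return np.clip(f, 0.0, None)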

  8. Robotic Intelligence Kernel: Driver

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.

  9. Linearized Kernel Dictionary Learning

    NASA Astrophysics Data System (ADS)

    Golts, Alona; Elad, Michael

    2016-06-01

    In this paper we present a new approach for incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), which was introduced recently, shows an improvement in classification performance relative to its linear counterpart, K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first, we approximate the kernel matrix using a cleverly sampled subset of its columns via the Nyström method; second, as we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several tasks of both supervised and unsupervised classification and show the efficiency of the proposed scheme, its easy integration and its performance-boosting properties.
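
    The "virtual samples" step has a compact form: factor the (possibly Nyström-approximated) positive semi-definite kernel matrix as K = Fᵀ F, so plain inner products of the columns of F reproduce kernel values and any linear dictionary-learning code can run on F unchanged. A sketch using an exact eigendecomposition:

      import numpy as np

      def virtual_samples(K, tol=1e-10):
          """Return F with K = F.T @ F for a PSD kernel matrix K; the
          columns of F serve as virtual samples for linear methods.
          (LKDL would first shrink K via Nystrom sampling; here the
          factorization is exact for illustration.)"""
          vals, vecs = np.linalg.eigh(K)
          keep = vals > tol
          return np.diag(np.sqrt(vals[keep])) @ vecs[:, keep].T

      X = np.random.randn(60, 5)
      K = np.exp(-0.5 * ((X[:, None] - X[None, :]) ** 2).sum(-1))
      F = virtual_samples(K)
      assert np.allclose(F.T @ F, K, atol=1e-8)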

  10. The Palomar kernel-phase experiment: testing kernel phase interferometry for ground-based astronomical observations

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin; Tuthill, Peter; Hinkley, Sasha; Ireland, Michael J.; Greenbaum, Alexandra; Latyshev, Alexey; Monnier, John D.; Martinache, Frantz

    2016-01-01

    At present, the principal limitation on the resolution and contrast of astronomical imaging instruments comes from aberrations in the optical path, which may be imposed by the Earth's turbulent atmosphere or by variations in the alignment and shape of the telescope optics. These errors can be corrected physically, with active and adaptive optics, and in post-processing of the resulting image. A recently developed adaptive optics post-processing technique, called kernel-phase interferometry, uses linear combinations of phases that are self-calibrating with respect to small errors, with the goal of constructing observables that are robust against the residual optical aberrations in otherwise well-corrected imaging systems. Here, we present a direct comparison between kernel phase and the more established competing techniques, aperture masking interferometry, point spread function (PSF) fitting and bispectral analysis. We resolve the α Ophiuchi binary system near periastron, using the Palomar 200-Inch Telescope. This is the first case in which kernel phase has been used with a full aperture to resolve a system close to the diffraction limit with ground-based extreme adaptive optics observations. Excellent agreement in astrometric quantities is found between kernel phase and masking, and kernel phase significantly outperforms PSF fitting and bispectral analysis, demonstrating its viability as an alternative to conventional non-redundant masking under appropriate conditions.

  11. Fast and adaptive method for SAR superresolution imaging based on point scattering model and optimal basis selection.

    PubMed

    Wang, Zheng-ming; Wang, Wei-wei

    2009-07-01

    A novel fast and adaptive method for synthetic aperture radar (SAR) superresolution imaging is developed. Based on the point scattering model in the phase history domain, a dictionary is constructed so that the superresolution imaging process can be converted into a problem of sparse parameter estimation. The approximate orthogonality of this dictionary is established by theoretical derivation and experimental verification. Based on this orthogonality, we propose a fast algorithm for basis selection. Meanwhile, a threshold for obtaining the number and positions of the scattering centers is determined automatically from the inner-product curves of the bases and the observed data, and the sensitivity of this threshold on estimation performance is analyzed. To reduce the computational and memory burden, a simplified superresolution imaging process is designed according to the characteristics of the imaging parameters. Experimental results on simulated images and an MSTAR image illustrate the validity of the method and its robustness at high noise levels. Compared with the traditional regularization method with a sparsity constraint, our proposed method incurs lower computational complexity and has better adaptability.
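
    The selection step can be sketched as thresholded matched filtering against an approximately orthogonal dictionary, followed by a least-squares amplitude refit. The MAD-based threshold below is a placeholder for the paper's automatic rule, which is derived from the inner-product curves.

      import numpy as np

      def select_scatterers(D, y, k_mad=5.0):
          """Pick dictionary atoms whose correlation with the data clears
          an automatic threshold, then refit amplitudes by least squares.
          D: columns ~ unit-norm atoms; y: observed phase-history data."""
          corr = np.abs(D.conj().T @ y)
          med = np.median(corr)
          mad = np.median(np.abs(corr - med))
          support = np.flatnonzero(corr > med + k_mad * mad)
          amps, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
          return support, amps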

  12. Initial-state splitting kernels in cold nuclear matter

    NASA Astrophysics Data System (ADS)

    Ovanesyan, Grigory; Ringer, Felix; Vitev, Ivan

    2016-09-01

    We derive medium-induced splitting kernels for energetic partons that undergo interactions in dense QCD matter before a hard-scattering event at large momentum transfer Q^2. Working in the framework of the effective theory SCET_G, we compute the splitting kernels beyond the soft gluon approximation. We present numerical studies that compare our new results with previous findings. We expect the full medium-induced splitting kernels to be most relevant for the extension of initial-state cold nuclear matter energy loss phenomenology in both p+A and A+A collisions.
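
    For orientation, the corresponding vacuum quark-to-quark splitting kernel and its soft-gluon limit are the standard results (the medium-induced kernels derived in the paper generalize the vacuum case); "beyond the soft gluon approximation" means keeping the full z dependence rather than only the z → 1 pole:

      P_{qq}(z) = C_F \, \frac{1 + z^2}{1 - z}
      \quad \xrightarrow{\; z \to 1 \;} \quad
      \frac{2\, C_F}{1 - z} .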

  13. Kernel mucking in top

    SciTech Connect

    LeFebvre, W.

    1994-08-01

    For many years, the popular program top has aided system administrators in examining process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also with what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.

  14. SEEDS ADAPTIVE OPTICS IMAGING OF THE ASYMMETRIC TRANSITION DISK OPH IRS 48 IN SCATTERED LIGHT

    SciTech Connect

    Follette, Katherine B.; Close, Laird M.; Grady, Carol A.; Swearingen, Jeremy R.; Sitko, Michael L.; Champney, Elizabeth H.; Van der Marel, Nienke; Maaskant, Koen; Min, Michiel; Takami, Michihiro; Kuchner, Marc J; McElwain, Michael W.; Muto, Takayuki; Mayama, Satoshi; Fukagawa, Misato; Russell, Ray W.; Kudo, Tomoyuki; Kusakabe, Nobuhiko; Hashimoto, Jun; Abe, Lyu; and others

    2015-01-10

    We present the first resolved near-infrared imagery of the transition disk Oph IRS 48 (WLY 2-48), which was recently observed with ALMA to have a strongly asymmetric submillimeter flux distribution. H-band polarized intensity images show a ∼60 AU radius scattered light cavity with two pronounced arcs of emission, one from northeast to southeast and one smaller, fainter, and more distant arc in the northwest. K-band scattered light imagery reveals a similar morphology, but with a clear third arc along the southwestern rim of the disk cavity. This arc meets the northwestern arc at nearly a right angle, revealing the presence of a spiral arm or local surface brightness deficit in the disk, and explaining the east-west brightness asymmetry in the H-band data. We also present 0.8-5.4 μm IRTF SpeX spectra of this object, which allow us to constrain the spectral class to A0 ± 1 and measure a low mass accretion rate of 10^-8.5 M_☉ yr^-1, both consistent with previous estimates. We investigate a variety of reddening laws in order to fit the multiwavelength spectral energy distribution of Oph IRS 48 and find a best fit consistent with a younger, higher luminosity star than previous estimates.

  15. Robotic Intelligence Kernel: Visualization

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.

  16. Robotic Intelligence Kernel: Architecture

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  17. Controlling Scattering Instabilities and Adapting to Unknown and Changing Plasma Conditions Using STUD Pulses

    NASA Astrophysics Data System (ADS)

    Afeyan, Bedros; Hüller, Stefan

    2012-10-01

    We will show the results of changing STUD pulse configurations in order to maintain strict control of parametric instabilities in high energy density plasmas (HEDP). Nonlinear optical processes (NLOP) in HEDP respond to changing plasma conditions which are unknown and not easily knowable by standard experimental procedures. Adapting to changing and unknown plasma conditions is one feature of STUD pulses which is absent in other beam conditioning techniques. We demonstrate this by simulating for long enough that plasma conditions change, instability gains are altered and new STUD pulse configurations become necessary. Two such configurations are spliced together or run independently and compared. All available methods of changing STUD pulse characteristics are explored, such as the duty cycle (20% vs 50%) and the modulation period (cutting hot spots in half and into quarters), as well as phase scrambling and the number of spikes before the spatial distribution of hot spots is randomized (1, 2, 3 and infinity).

  18. Twin kernel embedding.

    PubMed

    Guo, Yi; Gao, Junbin; Kwan, Paul W

    2008-08-01

    In most existing dimensionality reduction algorithms, the main objective is to preserve relational structure among objects of the input space in a low dimensional embedding space. This is achieved by minimizing the inconsistency between two similarity/dissimilarity measures, one for the input data and the other for the embedded data, via a separate matching objective function. Based on this idea, a new dimensionality reduction method called Twin Kernel Embedding (TKE) is proposed. TKE addresses the problem of visualizing non-vectorial data that is difficult for conventional methods in practice due to the lack of efficient vectorial representation. TKE solves this problem by minimizing the inconsistency between the similarity measures captured respectively by their kernel Gram matrices in the two spaces. In the implementation, by optimizing a nonlinear objective function using the gradient descent algorithm, a local minimum can be reached. The results obtained include both the optimal similarity preserving embedding and the appropriate values for the hyperparameters of the kernel. Experimental evaluation on real non-vectorial datasets confirmed the effectiveness of TKE. TKE can be applied to other types of data beyond those mentioned in this paper whenever suitable measures of similarity/dissimilarity can be defined on the input data. PMID:18566501

  19. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared the accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fits for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
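
    The LSCV criterion recommended here has a closed form for Gaussian kernels, so bandwidth selection reduces to a one-dimensional search. A minimal 2-D sketch (isotropic single bandwidth; home-range software typically allows per-axis smoothing):

      import numpy as np
      from scipy.spatial.distance import pdist, squareform

      def lscv_score(points, h):
          """LSCV(h) = integral(fhat^2) - 2 * mean leave-one-out density,
          for a d-dim Gaussian product kernel with bandwidth h. The
          integral term has the closed form (1/n^2) * sum K_{h*sqrt2}."""
          n, d = points.shape
          D2 = squareform(pdist(points, "sqeuclidean"))

          def gauss(dist2, bw):
              return np.exp(-dist2 / (2 * bw ** 2)) / ((2 * np.pi) ** (d / 2) * bw ** d)

          int_f2 = gauss(D2, h * np.sqrt(2)).sum() / n ** 2
          K = gauss(D2, h)
          loo = (K.sum(axis=1) - gauss(0.0, h)) / (n - 1)   # drop diagonal term
          return int_f2 - 2 * loo.mean()

      pts = np.random.randn(100, 2)                          # stand-in relocations
      hs = np.linspace(0.1, 1.5, 30)
      h_lscv = hs[np.argmin([lscv_score(pts, h) for h in hs])]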

  20. Facial expression recognition using local binary patterns and discriminant kernel locally linear embedding

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaoming; Zhang, Shiqing

    2012-12-01

    Given the nonlinear manifold structure of facial images, a new kernel-based supervised manifold learning algorithm based on locally linear embedding (LLE), called discriminant kernel locally linear embedding (DKLLE), is proposed for facial expression recognition. The proposed DKLLE aims to nonlinearly extract the discriminant information by maximizing the interclass scatter while minimizing the intraclass scatter in a reproducing kernel Hilbert space. DKLLE is compared with LLE, supervised locally linear embedding (SLLE), principal component analysis (PCA), linear discriminant analysis (LDA), kernel principal component analysis (KPCA), and kernel linear discriminant analysis (KLDA). Experimental results on two benchmarking facial expression databases, i.e., the JAFFE database and the Cohn-Kanade database, demonstrate the effectiveness and promising performance of DKLLE.

  1. Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Theodoridis, Sergios

    2008-12-01

    Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.
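
    The memory problem that motivates all of these sparsification schemes is easy to see in a minimal kernel LMS sketch (a simpler relative of the kernel APA/NLMS family): every new sample appends a center to the kernel expansion, so the model grows without bound unless it is constrained.

      import numpy as np

      def klms(X, y, eta=0.2, gamma=1.0):
          """Minimal kernel LMS: predictor f(x) = sum_i a_i k(x_i, x)
          with a Gaussian kernel. Each sample adds one center and one
          coefficient, which is exactly what closed-ball constraints,
          bounded subspaces, or sliding windows are designed to cap."""
          centers, alphas, errors = [], [], []
          for xn, yn in zip(X, y):
              if centers:
                  k = np.exp(-gamma * np.sum((np.array(centers) - xn) ** 2, axis=1))
                  pred = float(np.dot(alphas, k))
              else:
                  pred = 0.0
              e = yn - pred
              centers.append(xn)
              alphas.append(eta * e)       # new expansion coefficient
              errors.append(e)
          return centers, alphas, errors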

  2. A feature extraction method of the particle swarm optimization algorithm based on adaptive inertia weight and chaos optimization for Brillouin scattering spectra

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Zhao, Yu; Fu, Xinghu; Xu, Jinrui

    2016-10-01

    A novel particle swarm optimization algorithm based on an adaptive inertia weight and chaos optimization is proposed for extracting the features of Brillouin scattering spectra. First, an adaptive inertia weight parameter for the velocity is introduced into the basic particle swarm algorithm. Based on the particles' current iteration number and fitness value, the algorithm changes the weight coefficient and adjusts the speed at which particles search the space, so the local optimization ability is enhanced. Second, a logical self-mapping chaotic search is carried out using chaos optimization within the particle swarm optimization algorithm, which lets the algorithm jump out of local optima. The novel algorithm is compared with the finite element analysis-Levenberg Marquardt algorithm, the particle swarm optimization-Levenberg Marquardt algorithm and the plain particle swarm optimization algorithm while varying the linewidth, the signal-to-noise ratio and the linear weight ratio of the Brillouin scattering spectra. The algorithm is then applied to feature extraction of Brillouin scattering spectra at different temperatures. The simulation analysis and experimental results show that this algorithm achieves a high fitting degree and a small Brillouin frequency shift error for different linewidths, SNRs and linear weight ratios. Therefore, it can be applied to distributed optical fiber sensing systems based on Brillouin optical time domain reflection, effectively improving the accuracy of Brillouin frequency shift extraction.
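
    A minimal PSO sketch with an inertia weight that decays over the iterations, a simplified stand-in for the paper's adaptive weight (which also uses the particles' fitness values); the chaotic logical self-mapping restart is omitted here.

      import numpy as np

      def pso(f, lo, hi, n_particles=30, n_iters=200,
              w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
          """Particle swarm minimization of f over the box [lo, hi] with
          a linearly decaying inertia weight w."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          x = rng.uniform(lo, hi, (n_particles, len(lo)))
          v = np.zeros_like(x)
          pbest = x.copy()
          pbest_val = np.array([f(p) for p in x])
          g = pbest[np.argmin(pbest_val)].copy()
          for it in range(n_iters):
              w = w_max - (w_max - w_min) * it / n_iters   # decaying inertia
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              vals = np.array([f(p) for p in x])
              better = vals < pbest_val
              pbest[better], pbest_val[better] = x[better], vals[better]
              g = pbest[np.argmin(pbest_val)].copy()
          return g, pbest_val.min()

      # Toy use: recover the center of a parabolic pseudo-peak at 10.85.
      best, _ = pso(lambda p: (p[0] - 10.85) ** 2, [0.0], [20.0])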

  3. An Ensemble Approach to Building Mercer Kernels with Prior Information

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms, like different versions of EM, and numeric optimization methods, like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.

  4. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metric of this solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that combine kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.

  5. Dynamic Experiment Design Regularization Approach to Adaptive Imaging with Array Radar/SAR Sensor Systems

    PubMed Central

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metric of this solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that combine kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859

  6. Flexible kernel memory.

    PubMed

    Nowicki, Dimitri; Siegelmann, Hava

    2010-06-11

    This paper introduces a new model of associative memory, capable of handling both binary and continuous-valued inputs. Based on kernel theory, the memory model is, on one hand, a generalization of radial basis function networks and, on the other, analogous in feature space to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; dimensionality can increase and decrease without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, outperforming many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation, is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on a series of morphed faces.

  7. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored using the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted files.

  8. Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels.

    PubMed

    Jayasumana, Sadeep; Hartley, Richard; Salzmann, Mathieu; Li, Hongdong; Harandi, Mehrtash

    2015-12-01

    In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels. PMID:26539851
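
    For instance, the Gaussian RBF on SPD matrices under the log-Euclidean metric (one of the positive definite cases identified by this line of work) can be computed as below; names and parameters are illustrative.

    ```python
    import numpy as np

    def spd_log(M):
        """Matrix logarithm of an SPD matrix via its eigendecomposition."""
        w, V = np.linalg.eigh(M)
        return (V * np.log(w)) @ V.T

    def spd_gaussian_gram(mats, gamma=1.0):
        """Gram matrix of the Gaussian RBF kernel on SPD matrices under the
        log-Euclidean metric, k(A, B) = exp(-gamma * ||log A - log B||_F^2)."""
        logs = [spd_log(M) for M in mats]      # map each matrix to the log domain once
        n = len(logs)
        K = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                d2 = np.sum((logs[i] - logs[j]) ** 2)   # squared Frobenius distance
                K[i, j] = np.exp(-gamma * d2)
        return K

    # usage: random covariance-descriptor-like SPD matrices
    rng = np.random.default_rng(0)
    mats = []
    for _ in range(5):
        A = rng.normal(size=(3, 3))
        mats.append(A @ A.T + 3.0 * np.eye(3))          # guaranteed SPD
    K = spd_gaussian_gram(mats, gamma=0.5)
    assert np.all(np.linalg.eigvalsh(K) > -1e-10)       # positive semidefinite check
    ```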

  9. Removing blur kernel noise via a hybrid ℓp norm

    NASA Astrophysics Data System (ADS)

    Yu, Xin; Zhang, Shunli; Zhao, Xiaolin; Zhang, Li

    2015-01-01

    When estimating a sharp image from a blurred one, blur kernel noise often leads to inaccurate recovery. We develop an effective method to estimate a blur kernel that removes kernel noise while preventing an overly sparse kernel. Our method is based on an iterative framework that alternates between recovering the sharp image and estimating the blur kernel. In the image recovery step, we utilize total variation (TV) regularization to recover latent images. In solving the TV regularization, we propose a new criterion that adaptively terminates the iterations before convergence; this improves efficiency without degrading the quality of the final results. In the kernel estimation step, we develop a metric to measure the usefulness of image edges, by which we can reduce the ambiguity of kernel estimation caused by small-scale edges. We also propose a hybrid ℓp norm, composed of an ℓ2 norm and an ℓp norm with 0.7≤p<1, to construct a sparsity constraint. Using the hybrid ℓp norm, we suppress a wider range of kernel noise and recover a more accurate blur kernel. The experiments show that the proposed method achieves promising results on both synthetic and real images.

  10. Learning With Jensen-Tsallis Kernels.

    PubMed

    Ghoshdastidar, Debarghya; Adsul, Ajay P; Dukkipati, Ambedkar

    2016-10-01

    Jensen-type [Jensen-Shannon (JS) and Jensen-Tsallis] kernels were first proposed by Martins et al. (2009). These kernels are based on JS divergences that originated in the information theory. In this paper, we extend the Jensen-type kernels on probability measures to define positive-definite kernels on Euclidean space. We show that the special cases of these kernels include dot-product kernels. Since Jensen-type divergences are multidistribution divergences, we propose their multipoint variants, and study spectral clustering and kernel methods based on these. We also provide experimental studies on benchmark image database and gene expression database that show the benefits of the proposed kernels compared with the existing kernels. The experiments on clustering also demonstrate the use of constructing multipoint similarities.

  11. RTOS kernel in portable electrocardiograph

    NASA Astrophysics Data System (ADS)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All of the medical device's digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which a uC/OS-II RTOS can be embedded. The decision to use the kernel is based on its benefits: its license for educational use and its intrinsic time control and peripheral management. The feasibility of its use on the electrocardiograph is evaluated against the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and for evaluating the resources used by each process. After this feasibility analysis, the code was migrated from a cyclic structure to one based on separate processes or tasks able to synchronize on events, resulting in an electrocardiograph running on a single CPU under the RTOS.

  12. Identification of Damaged Wheat Kernels and Cracked-Shell Hazelnuts with Impact Acoustics Time-Frequency Patterns

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A new adaptive time-frequency (t-f) analysis and classification procedure is applied to impact acoustic signals for detecting hazelnuts with cracked shells and three types of damaged wheat kernels. Kernels were dropped onto a steel plate, and the resulting impact acoustic signals were recorded with ...

  13. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

  14. Point-Kernel Shielding Code System.

    1982-02-17

    Version 00 QAD-BSA is a three-dimensional, point-kernel shielding code system based upon the CCC-48/QAD series. It is designed to calculate photon dose rates and heating rates using exponential attenuation and infinite medium buildup factors. Calculational provisions include estimates of fast neutron penetration using data computed by the moments method. Included geometry routines can describe complicated source and shield geometries. An internal library contains data for many frequently used structural and shielding materials, enabling the code to solve most problems with only source strengths and problem geometry required as input. This code system adapts especially well to problems requiring multiple sources and sources with asymmetrical geometry. In addition to being edited separately, the total interaction rates from many sources may be edited at each detector point. Calculated photon interaction rates agree closely with those obtained using QAD-P5A.

  15. Broadband Waveform Sensitivity Kernels for Large-Scale Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Stähler, S. C.; van Driel, M.; Hosseini, K.; Auer, L.; Sigloch, K.

    2015-12-01

    Seismic sensitivity kernels, i.e. the basis for mapping misfit functionals to structural parameters in seismic inversions, have received much attention in recent years. Their computation has been conducted via ray-theory based approaches (Dahlen et al., 2000) or fully numerical solutions based on the adjoint-state formulation (e.g. Tromp et al., 2005). The core problem is the exorbitant computational cost due to the large number of source-receiver pairs, each of which requires solutions to the forward problem. This is exacerbated in the high-frequency regime, where numerical solutions become prohibitively expensive. We present a methodology to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (abstract ID# 77891, www.axisem.info), and thus on spherically symmetric models. As a consequence of this method's numerical efficiency even in high-frequency regimes, kernels can be computed in a time- and frequency-dependent manner, thus providing the full generic mapping from perturbed waveform to perturbed structure. Such waveform kernels can then be used for a variety of misfit functions and structural parameters, and refiltered into bandpasses without recomputing any wavefields. A core component of the kernel method presented here is the mapping from numerical wavefields to inversion meshes. This is achieved by a Monte Carlo approach, allowing for convergent and controllable accuracy on arbitrarily shaped tetrahedral and hexahedral meshes. We test and validate this accuracy by comparing to reference traveltimes, show the projection onto various locally adaptive inversion meshes, and discuss computational efficiency for ongoing tomographic applications in the range of millions of observed body-wave data at periods of 2-30 s.

  16. Wave systems with direct processes and localized losses or gains: The nonunitary Poisson kernel

    NASA Astrophysics Data System (ADS)

    Martínez-Argüello, A. M.; Méndez-Sánchez, R. A.; Martínez-Mares, M.

    2012-07-01

    We study the scattering of waves in systems with losses or gains simulated by imaginary potentials. This is done for a complex delta potential that corresponds to a spatially localized absorption or amplification. In the Argand plane the scattering matrix moves on a circle C centered on the real axis, but not at the origin, that is tangent to the unit circle. From the numerical simulations it is concluded that the distribution of the scattering matrix, when measured from the center of the circle C, agrees with the nonunitary Poisson kernel. This result is also obtained analytically by extending the analyticity condition from unitary scattering matrices to nonunitary ones. We use this nonunitary Poisson kernel to obtain the distribution of nonunitary scattering matrices when measured from the origin of the Argand plane. The obtained marginal distributions show excellent agreement with the numerical results.

  17. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  18. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  19. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  20. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  1. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  2. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  3. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  4. Four-particle decay of the Bethe-Salpeter kernel in the high-temperature Ising model

    NASA Astrophysics Data System (ADS)

    Auil, F.

    2002-12-01

    In this article we study the four-particle decay of the Bethe-Salpeter (B-S) kernel for the high-temperature Ising model. We use the hyperplane decoupling method [T. Spencer, Commun. Math. Phys. 44, 143 (1975); R. S. Schor, Nucl. Phys. B 222, 71 (1983)] to prove exponential decay in a set of variables particularly adapted to the methods of Spencer and Zirilli [Commun. Math. Phys. 49, 1 (1976)] for the analysis of scattering and bound states in QFT, transcribed to lattice theories by Auil and Barata [Ann. Henri Poincare 2, 1065 (2001)]. We study arbitrary derivatives of the general n-point correlation functions with respect to the interpolating variables, and we are able to obtain, in some cases, information about the third derivatives of the B-S kernel. As a consequence, we obtain two-body asymptotic completeness for the (massive) Euclidean lattice field theory implemented by this model. This allows us to analyze the Ornstein-Zernike behavior of four-point functions, related to the specific heat of the model.

  5. Wigner functions defined with Laplace transform kernels.

    PubMed

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels--the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase space. We show that the Laplace kernel Wigner function exhibits marginal properties similar to those of the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons.

  6. Kernel Near Principal Component Analysis

    SciTech Connect

    MARTIN, SHAWN B.

    2002-07-01

    We propose a novel algorithm based on Principal Component Analysis (PCA). First, we present an interesting approximation of PCA using Gram-Schmidt orthonormalization. Next, we combine our approximation with the kernel functions from Support Vector Machines (SVMs) to provide a nonlinear generalization of PCA. After benchmarking our algorithm in the linear case, we explore its use in both the linear and nonlinear cases. We include applications to face data analysis, handwritten digit recognition, and fluid flow.

  7. Adaptive density estimator for galaxy surveys

    NASA Astrophysics Data System (ADS)

    Saar, Enn

    2016-10-01

    Galaxy number or luminosity density serves as a basis for many structure classification algorithms, and several methods are used to estimate this density. Among them, kernel methods have probably the best statistical properties and also allow estimating the local sample errors of the estimate. We introduce a kernel density estimator with an adaptive, data-driven anisotropic kernel, describe its properties, and demonstrate the wealth of additional information it gives us about the local properties of the galaxy distribution.
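
    A sketch of one way such an adaptive anisotropic estimator can look (an illustrative data-driven scheme, not necessarily the paper's estimator): each sample point carries its own bandwidth matrix, taken here from the covariance of its k nearest neighbours.

    ```python
    import numpy as np

    def adaptive_anisotropic_kde(data, query, k=20):
        """Adaptive anisotropic Gaussian KDE: each data point gets a bandwidth
        matrix from the sample covariance of its k nearest neighbours, so the
        kernel stretches along local structures such as filaments."""
        n, d = data.shape
        dens = np.zeros(len(query))
        for x in data:
            # local covariance from the k nearest neighbours of x
            idx = np.argsort(np.sum((data - x) ** 2, axis=1))[:k]
            H = np.cov(data[idx].T) + 1e-6 * np.eye(d)   # regularized bandwidth matrix
            Hinv, det = np.linalg.inv(H), np.linalg.det(H)
            norm = 1.0 / np.sqrt((2 * np.pi) ** d * det)
            diff = query - x
            dens += norm * np.exp(-0.5 * np.einsum('ij,jk,ik->i', diff, Hinv, diff))
        return dens / n

    # usage: density of a filamentary (ring-like) point pattern
    rng = np.random.default_rng(1)
    t = rng.uniform(0, 2 * np.pi, 500)
    pts = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(500, 2))
    grid = rng.uniform(-1.5, 1.5, size=(10, 2))
    print(adaptive_anisotropic_kde(pts, grid))
    ```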

  8. Kernel-based least squares policy iteration for reinforcement learning.

    PubMed

    Xu, Xin; Hu, Dewen; Lu, Xicheng

    2007-07-01

    In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge of the dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that eliminate the main difficulties of existing results. One is the better convergence and (near) optimality guarantee obtained by using the KLSTD-Q algorithm for high-precision policy evaluation. The other is the automatic feature selection using ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantees for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called the acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information about uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating
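
    The ALD sparsification step can be sketched as follows (a from-scratch recomputation of the Gram inverse for clarity; KLSTD-Q-style implementations typically update it incrementally). The tolerance nu and the kernel are illustrative.

    ```python
    import numpy as np

    def ald_dictionary(samples, kernel, nu=0.01):
        """Sparse dictionary via the approximate linear dependency (ALD) test:
        a sample is added only if its feature-space image cannot be
        approximated by the current dictionary within tolerance nu."""
        dictionary = []
        Kinv = None                                   # inverse of the dictionary Gram matrix
        for x in samples:
            if not dictionary:
                dictionary.append(x)
                Kinv = np.array([[1.0 / kernel(x, x)]])
                continue
            k_vec = np.array([kernel(d, x) for d in dictionary])
            a = Kinv @ k_vec                          # best expansion coefficients
            delta = kernel(x, x) - k_vec @ a          # squared residual in feature space
            if delta > nu:                            # x is "novel": grow the dictionary
                dictionary.append(x)
                K = np.array([[kernel(u, v) for v in dictionary] for u in dictionary])
                Kinv = np.linalg.inv(K + 1e-10 * np.eye(len(dictionary)))
        return dictionary

    # usage: sparsify 1000 state samples down to a few representatives
    rng = np.random.default_rng(0)
    rbf = lambda u, v: np.exp(-np.sum((u - v) ** 2))
    states = rng.normal(size=(1000, 3))
    D = ald_dictionary(states, rbf, nu=0.3)
    print(len(D), "dictionary elements")
    ```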

  9. An adaptive-in-temperature method for on-the-fly sampling of thermal neutron scattering data in continuous-energy Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    Pavlou, Andrew Theodore

    The Monte Carlo simulation of full-core neutron transport requires high fidelity data to represent not only the various types of possible interactions that can occur, but also the temperature and energy regimes for which these data are relevant. For isothermal conditions, nuclear cross section data are processed in advance of running a simulation. In reality, the temperatures in a neutronics simulation are not fixed, but change with respect to the temperatures computed from an associated heat transfer or thermal hydraulic (TH) code. To account for the temperature change, a code user must either 1) compute new data at the problem temperature inline during the Monte Carlo simulation or 2) pre-compute data at a variety of temperatures over the range of possible values. Inline data processing is computationally inefficient, while pre-computing data at many temperatures can be memory expensive. An alternative on-the-fly approach to handling the temperature component of nuclear data is therefore desired. By on-the-fly we mean a procedure that adjusts cross section data to the correct temperature adaptively during the Monte Carlo random walk instead of before a simulation is run. The on-the-fly procedure should also preserve simulation runtime efficiency. While on-the-fly methods have recently been developed for higher energy regimes, the double differential scattering of thermal neutrons has not been examined in detail until now. In this dissertation, an on-the-fly sampling method is developed by investigating the temperature dependence of the thermal double differential scattering distributions. The temperature dependence is analyzed with a linear least squares regression test to develop fit coefficients that are used to sample thermal scattering data at any temperature. The amount of pre-stored thermal scattering data has been drastically reduced from around 25 megabytes per temperature per nuclide to only a few megabytes per nuclide by eliminating the need to compute data at each temperature.

  10. Nonlinear projection trick in kernel methods: an alternative to the kernel trick.

    PubMed

    Kwak, Nojun

    2013-12-01

    In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so-called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced-dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named the nonlinear projection trick, in contrast to the kernel trick. With this technique, the applicability of kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses the L1-norm instead of the L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach.
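
    Since the abstract fully specifies the construction, a short sketch is possible: eigendecompose the training Gram matrix and map training and test points into the explicit reduced-dimensional space. Variable names are illustrative.

    ```python
    import numpy as np

    def nonlinear_projection(K_train, K_test):
        """Explicit embedding into a reduced-dimensional kernel space.
        K_train: (n, n) Gram matrix of the training data; K_test: (m, n)
        kernel evaluations between test and training points. Eigendecompose
        K and map with Lambda^{-1/2} U^T, as described in the abstract."""
        w, U = np.linalg.eigh(K_train)
        keep = w > 1e-10                      # effective dimensionality <= n
        w, U = w[keep], U[:, keep]
        Y_train = U * np.sqrt(w)              # training images: Y_train @ Y_train.T = K
        Y_test = K_test @ U / np.sqrt(w)      # consistent embedding of new points
        return Y_train, Y_test

    # usage: any Euclidean algorithm now runs in the explicit kernel space
    rng = np.random.default_rng(0)
    X, Z = rng.normal(size=(50, 4)), rng.normal(size=(5, 4))
    rbf = lambda A, B: np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    Ytr, Yte = nonlinear_projection(rbf(X, X), rbf(Z, X))
    assert np.allclose(Ytr @ Ytr.T, rbf(X, X), atol=1e-6)
    ```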

  12. Stem kernels for RNA sequence analyses.

    PubMed

    Sakakibara, Yasubumi; Popendorf, Kris; Ogawa, Nana; Asai, Kiyoshi; Sato, Kengo

    2007-10-01

    Several computational methods based on stochastic context-free grammars have been developed for modeling and analyzing functional RNA sequences. These grammatical methods have succeeded in modeling typical secondary structures of RNA, and are used for structural alignment of RNA sequences. However, such stochastic models cannot sufficiently discriminate member sequences of an RNA family from nonmembers and hence cannot detect noncoding RNA regions from genome sequences. A novel kernel function, the stem kernel, is proposed for the discrimination and detection of functional RNA sequences using support vector machines (SVMs). The stem kernel is a natural extension of the string kernel, specifically the all-subsequences kernel, and is tailored to measure the similarity of two RNA sequences from the viewpoint of secondary structures. The stem kernel examines all possible common base pairs and stem structures of arbitrary lengths, including pseudoknots, between two RNA sequences, and calculates the inner product of common stem structure counts. An efficient algorithm based on dynamic programming is developed to calculate the stem kernels. The stem kernels are then applied to discriminate members of an RNA family from nonmembers using SVMs. The study indicates that the discrimination ability of the stem kernel is strong compared with conventional methods. Furthermore, the potential application of the stem kernel is demonstrated by the detection of remotely homologous RNA families in terms of secondary structures, much as the string kernel has been shown to work for remote homology detection of protein sequences. These experimental results have convinced us to apply the stem kernel to finding novel RNA families from genome sequences. PMID:17933013

  13. Predicting Protein Function Using Multiple Kernels.

    PubMed

    Yu, Guoxian; Rangwala, Huzefa; Domeniconi, Carlotta; Zhang, Guoji; Zhang, Zili

    2015-01-01

    High-throughput experimental techniques provide a wide variety of heterogeneous proteomic data sources. To exploit the information spread across multiple sources for protein function prediction, these data sources are transformed into kernels and then integrated into a composite kernel. Several methods first optimize the weights on these kernels to produce a composite kernel, and then train a classifier on the composite kernel. As such, these approaches result in an optimal composite kernel, but not necessarily in an optimal classifier. On the other hand, some approaches optimize the loss of binary classifiers and learn weights for the different kernels iteratively. For multi-class or multi-label data, these methods have to optimize the weights on the kernels for each of the labels separately, which is computationally expensive and ignores the correlation among labels. In this paper, we propose a method called Predicting Protein Function using Multiple Kernels (ProMK). ProMK iteratively alternates between learning optimal kernel weights and reducing the empirical loss of the multi-label classifier for all labels simultaneously. ProMK can integrate kernels selectively and downgrade the weights on noisy kernels. We investigate the performance of ProMK on several publicly available protein function prediction benchmarks and synthetic datasets. We show that the proposed approach performs better than previously proposed protein function prediction approaches that integrate multiple data sources and multi-label multiple kernel learning methods. The code for our proposed method is available at https://sites.google.com/site/guoxian85/promk.
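
    A toy version of the composite-kernel step (using centered kernel-target alignment as a simple stand-in for ProMK's iterative weight optimization, which the paper describes in full):

    ```python
    import numpy as np

    def composite_kernel(kernels, y):
        """Combine per-source kernels into one composite kernel, weighting
        each by its centered alignment with the labels; noisy kernels get
        weights near zero. An illustrative heuristic, not ProMK itself."""
        yy = np.outer(y, y)
        weights = []
        for K in kernels:
            # double centering: Kc = H K H with H = I - (1/n) 11^T
            Kc = K - K.mean(0) - K.mean(1)[:, None] + K.mean()
            a = np.sum(Kc * yy) / (np.linalg.norm(Kc) * np.linalg.norm(yy))
            weights.append(max(a, 0.0))
        w = np.array(weights)
        total = w.sum()
        w = w / total if total > 0 else np.full(len(w), 1.0 / len(w))
        return sum(wi * Ki for wi, Ki in zip(w, kernels)), w

    # usage: two informative kernels and one pure-noise kernel
    rng = np.random.default_rng(0)
    y = np.r_[np.ones(30), -np.ones(30)]
    X1 = rng.normal(size=(60, 5)) + y[:, None]
    X2 = rng.normal(size=(60, 3)) + y[:, None]
    noise = rng.normal(size=(60, 60)); noise = noise @ noise.T
    K, w = composite_kernel([X1 @ X1.T, X2 @ X2.T, noise], y)
    print(w)   # the noise kernel should receive a near-zero weight
    ```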

  14. Kernel earth mover's distance for EEG classification.

    PubMed

    Daliri, Mohammad Reza

    2013-07-01

    Here, we propose a new kernel approach based on the earth mover's distance (EMD) for electroencephalography (EEG) signal classification. The EEG time series are first transformed into histograms in this approach. The distance between these histograms is then computed using the EMD in a pair-wise manner. We bring the distances into a kernel form called kernel EMD. The support vector classifier can then be used for the classification of EEG signals. The experimental results on the real EEG data show that the new kernel method is very effective, and can classify the data with higher accuracy than traditional methods.
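
    For equal-mass 1-D histograms the EMD reduces to the L1 distance between cumulative sums, so the kernel EMD idea can be sketched compactly; note that exp(-EMD/sigma) is not guaranteed positive definite in general, and the parameter choices here are illustrative.

    ```python
    import numpy as np

    def emd_1d(p, q):
        """Earth mover's distance between two 1-D histograms of equal mass:
        the L1 distance between their cumulative sums."""
        return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

    def kernel_emd(histograms, sigma=1.0):
        """Pairwise kernel EMD matrix, exp(-EMD/sigma). This similarity is
        indefinite in general; in practice it is fed to an SVM anyway."""
        n = len(histograms)
        K = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                K[i, j] = np.exp(-emd_1d(histograms[i], histograms[j]) / sigma)
        return K

    # usage: histograms from two EEG-like signal families
    rng = np.random.default_rng(0)
    sigs = [rng.normal(loc=m, size=500) for m in (0, 0, 2, 2)]
    hists = [np.histogram(s, bins=20, range=(-4, 6), density=True)[0] for s in sigs]
    K = kernel_emd(hists, sigma=0.5)
    ```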

  15. Molecular Hydrodynamics from Memory Kernels.

    PubMed

    Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin

    2016-04-01

    The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^{-3/2}. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius. PMID:27104730

  16. Cross-person activity recognition using reduced kernel extreme learning machine.

    PubMed

    Deng, Wan-Yu; Zheng, Qing-Hua; Wang, Zhong-Min

    2014-05-01

    Activity recognition based on mobile embedded accelerometers is very important for developing human-centric pervasive applications such as healthcare and personalized recommendation. However, the distribution of accelerometer data is heavily affected by the user: performance degrades when a model trained on one person is applied to others. To solve this problem, we propose a fast and accurate cross-person activity recognition model, known as TransRKELM (Transfer learning Reduced Kernel Extreme Learning Machine), which uses RKELM (Reduced Kernel Extreme Learning Machine) to build the initial activity recognition model. In the online phase, OS-RKELM (Online Sequential Reduced Kernel Extreme Learning Machine) is applied to update the initial model efficiently, adapting the recognition model to new device users based on recognition results with a high confidence level. Experimental results show that the proposed model can adapt the classifier to new device users quickly and obtain good recognition performance.

  18. Improving the Bandwidth Selection in Kernel Equating

    ERIC Educational Resources Information Center

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
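
    For reference, Silverman's rule of thumb in its standard Gaussian-kernel form is shown below; the paper's adaptation of the rule to kernel equating may differ in detail.

    ```python
    import numpy as np

    def silverman_bandwidth(x):
        """Silverman's rule of thumb for a Gaussian kernel:
        h = 0.9 * min(sigma_hat, IQR / 1.34) * n**(-1/5)."""
        x = np.asarray(x, dtype=float)
        n = x.size
        iqr = np.subtract(*np.percentile(x, [75, 25]))   # p75 - p25
        return 0.9 * min(x.std(ddof=1), iqr / 1.34) * n ** (-0.2)

    # usage
    h = silverman_bandwidth(np.random.default_rng(0).normal(size=500))
    ```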

  19. Phylodynamic Inference with Kernel ABC and Its Application to HIV Epidemiology

    PubMed Central

    Poon, Art F.Y.

    2015-01-01

    The shapes of phylogenetic trees relating virus populations are determined by the adaptation of viruses within each host, and by the transmission of viruses among hosts. Phylodynamic inference attempts to reverse this flow of information, estimating parameters of these processes from the shape of a virus phylogeny reconstructed from a sample of genetic sequences from the epidemic. A key challenge to phylodynamic inference is quantifying the similarity between two trees in an efficient and comprehensive way. In this study, I demonstrate that a new distance measure, based on a subset tree kernel function from computational linguistics, confers a significant improvement over previous measures of tree shape for classifying trees generated under different epidemiological scenarios. Next, I incorporate this kernel-based distance measure into an approximate Bayesian computation (ABC) framework for phylodynamic inference. ABC bypasses the need for an analytical solution of model likelihood, as it only requires the ability to simulate data from the model. I validate this “kernel-ABC” method for phylodynamic inference by estimating parameters from data simulated under a simple epidemiological model. Results indicate that kernel-ABC attained greater accuracy for parameters associated with virus transmission than leading software on the same data sets. Finally, I apply the kernel-ABC framework to study a recent outbreak of a recombinant HIV subtype in China. Kernel-ABC provides a versatile framework for phylodynamic inference because it can fit a broader range of models than methods that rely on the computation of exact likelihoods. PMID:26006189

  20. Bayesian Kernel Mixtures for Counts

    PubMed Central

    Canale, Antonio; Dunson, David B.

    2011-01-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online. PMID:22523437
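
    The rounding construction at the heart of these mixtures can be sketched for a single Gaussian kernel; the threshold convention used here (all latent mass below 1 maps to the count 0) is a standard choice but is an assumption for this illustration.

    ```python
    import numpy as np
    from scipy.stats import norm

    def rounded_gaussian_pmf(counts, mu, sigma):
        """PMF of a rounded Gaussian kernel for counts: a latent normal
        variable is rounded onto {0, 1, 2, ...}, with count j receiving the
        latent mass on [j, j+1) and count 0 also absorbing all mass below 0."""
        counts = np.asarray(counts)
        upper = norm.cdf(counts + 1, loc=mu, scale=sigma)
        lower = np.where(counts == 0, 0.0, norm.cdf(counts, loc=mu, scale=sigma))
        return upper - lower

    # usage: a count kernel centered near 2.5; probabilities sum to ~1
    pmf = rounded_gaussian_pmf(np.arange(20), mu=2.5, sigma=1.0)
    print(pmf[:6], pmf.sum())
    ```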

  1. Bayesian Kernel Mixtures for Counts.

    PubMed

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online. PMID:22523437

  2. MULTIVARIATE KERNEL PARTITION PROCESS MIXTURES

    PubMed Central

    Dunson, David B.

    2013-01-01

    Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate random effects distributions, this article proposes a kernel partition process (KPP) in which the cluster allocation varies for different parameters. The KPP is shown to be the driving measure for a multivariate ordered Chinese restaurant process that induces a highly-flexible dependence structure in local clustering. This structure allows the relative locations of the random effects to inform the clustering process, with spatially-proximal random effects likely to be assigned the same cluster index. An exact block Gibbs sampler is developed for posterior computation, avoiding truncation of the infinite measure. The methods are applied to hormone curve data, and a dependent KPP is proposed for classification from functional predictors. PMID:24478563

  3. LoCoH: Nonparameteric Kernel Methods for Constructing Home Ranges and Utilization Distributions

    PubMed Central

    Getz, Wayne M.; Fortmann-Roe, Scott; Wilmers, Christopher C.

    2007-01-01

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: “fixed sphere-of-influence,” or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an “adaptive sphere-of-influence,” or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original “fixed-number-of-points,” or k-LoCoH (all kernels constructed from k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu). PMID:17299587
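
    A compact sketch of the k-LoCoH construction (assuming the shapely geometry library; isopleth construction by adding hulls in order of increasing area is omitted for brevity):

    ```python
    import numpy as np
    from shapely.geometry import MultiPoint
    from shapely.ops import unary_union

    def k_locoh(points, k=10):
        """k-LoCoH home range: one local convex hull from each root point
        and its k-1 nearest neighbours, merged into a single region."""
        pts = np.asarray(points)
        hulls = []
        for p in pts:
            idx = np.argsort(np.sum((pts - p) ** 2, axis=1))[:k]  # p plus k-1 neighbours
            hulls.append(MultiPoint(pts[idx]).convex_hull)
        return unary_union(hulls)

    # usage: GPS-like fixes with a hard boundary (no fixes below y = 0)
    rng = np.random.default_rng(0)
    fixes = rng.uniform([0, 0], [10, 5], size=(300, 2))
    hr = k_locoh(fixes, k=15)
    print("home-range area:", hr.area)
    ```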

  4. LoCoH: Non-parameteric kernel methods for constructing home ranges and utilization distributions

    USGS Publications Warehouse

    Getz, Wayne M.; Fortmann-Roe, Scott; Cross, Paul C.; Lyons, Andrew J.; Ryan, Sadie J.; Wilmers, Christopher C.

    2007-01-01

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: “fixed sphere-of-influence,” or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an “adaptive sphere-of-influence,” or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original “fixed-number-of-points,” or k-LoCoH (all kernels constructed from k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).

  5. LoCoH: nonparameteric kernel methods for constructing home ranges and utilization distributions.

    PubMed

    Getz, Wayne M; Fortmann-Roe, Scott; Cross, Paul C; Lyons, Andrew J; Ryan, Sadie J; Wilmers, Christopher C

    2007-02-14

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: "fixed sphere-of-influence," or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an "adaptive sphere-of-influence," or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original "fixed-number-of-points," or k-LoCoH (all kernels constructed from k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).

  6. Modeling non-stationarity of kernel weights for k-space reconstruction in partially parallel imaging

    PubMed Central

    Miao, Jun; Wong, Wilbur C. K.; Narayan, Sreenath; Huo, Donglai; Wilson, David L.

    2011-01-01

    Purpose: In partially parallel imaging, most k-space-based reconstruction algorithms such as GRAPPA adopt a single finite-size kernel to approximate the true relationship between sampled and nonsampled signals. However, the estimation of this kernel based on k-space signals is imperfect, and the authors are investigating methods dealing with local variation of k-space signals. Methods: To model nonstationarity of kernel weights, similar to performing a spatially adaptive regularization, the authors fit a set of linear functions using concepts from geographically weighted regression, a methodology used in geophysical analysis. Instead of a reconstruction with a single set of kernel weights, the authors use multiple sets. A missing signal is reconstructed with its kernel weights set determined by k-space clustering. Simulated and acquired MR data with several different types of image content and acquisition schemes, including MR tagging, were tested. A perceptual difference model (Case-PDM) was used to quantitatively evaluate the quality of over 1000 test images, and to optimize the parameters of our algorithm. Results: A MOdeling Non-stationarity of KErnel wEightS (“MONKEES”) reconstruction with two sets of kernel weights gave reconstructions with significantly better image quality than the original GRAPPA in all test images. Using more sets produced improved image quality but with diminishing returns. As a rule of thumb, at least two sets of kernel weights, one from low- and the other from high-frequency k-space, should be used. Conclusions: The authors conclude that the MONKEES can significantly and robustly improve the image quality in parallel MR imaging, particularly cardiac imaging. PMID:21928649

  7. Cross-domain question classification in community question answering via kernel mapping

    NASA Astrophysics Data System (ADS)

    Su, Lei; Hu, Zuoliang; Yang, Bin; Li, Yiyang; Chen, Jun

    2015-10-01

    An increasingly popular method for retrieving information is via community question answering (CQA) systems such as Yahoo! Answers and Baidu Knows. In CQA, question classification plays an important role in finding the answers. However, labeled training examples for a statistical question classifier are fairly expensive to obtain, as they require experienced human effort, while unlabeled data are readily available. This paper employs domain adaptation via kernel mapping to solve this problem. In detail, a kernel approach is utilized to map the target-domain data and the source-domain data into a common space, where the conditional probabilities of the two domains are closer and the question classifiers are trained. The kernel mapping function is constructed from domain knowledge. Therefore, domain knowledge can be transferred from the labeled examples in the source domain to the unlabeled ones in the target domain, and the statistical training model can be improved by using a large number of unlabeled data. Meanwhile, the Hadoop platform is used to implement the mapping mechanism and reduce the time complexity: Map/Reduce enables kernel mapping for domain adaptation to run in parallel on the Hadoop platform. Experimental results show that the accuracy of question classification is improved by the kernel mapping method. Furthermore, the parallel method on the Hadoop platform can effectively schedule computing resources to reduce the running time.

  8. Ideal regularization for learning kernels from labels.

    PubMed

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

    In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently.
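
    A minimal sketch of the idea, assuming labels in {-1, +1} with 0 marking unlabeled points: the regularizer adds a label-derived "ideal kernel" term, a linear function of the kernel matrix as the abstract states; the paper's exact formulation may differ.

    ```python
    import numpy as np

    def ideal_regularized_kernel(K, y, lam=0.5):
        """Blend a standard kernel with the ideal kernel built from labels:
        ideal entries are 1 for same-class labeled pairs, 0 otherwise, and
        rows/columns of unlabeled points are left untouched."""
        K_new = K.copy()
        labeled = np.flatnonzero(y != 0)          # y: +1/-1 labels, 0 = unlabeled
        for i in labeled:
            for j in labeled:
                ideal = 1.0 if y[i] == y[j] else 0.0
                K_new[i, j] += lam * ideal        # linear-in-K, label-aware update
        return K_new

    # usage: a partially labeled two-class toy problem
    rng = np.random.default_rng(0)
    X = rng.normal(size=(8, 3))
    K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))
    y = np.array([1, 1, -1, -1, 0, 0, 0, 0])
    K_reg = ideal_regularized_kernel(K, y, lam=0.5)
    ```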

  9. Comparison of scatter correction methods for CBCT

    NASA Astrophysics Data System (ADS)

    Suri, Roland E.; Virshup, Gary; Zurkirchen, Luis; Kaissl, Wolfgang

    2006-03-01

    In contrast to the narrow fan of clinical Computed Tomography (CT) scanners, Cone Beam scanners irradiate a much larger proportion of the object, which causes additional X-ray scattering. The most obvious scatter artefact is that the middle area of the object becomes darker than the outer area, as the density in the middle of the object is underestimated (cupping). Methods for estimating scatter were investigated that can be applied to each single projection without requiring a preliminary reconstruction. Scatter reduction by the Uniform Scatter Fraction method was implemented in the Varian CBCT software version 2.0. This scatter correction method is recommended for full fan scans using air norm. However, this method did not sufficiently correct artefacts in half fan scans and was not sufficiently robust if used in combination with a Single Norm. Therefore, a physical scatter model was developed that estimates scatter for each projection using the attenuation profile of the object. This model relied on laboratory experiments in which scatter kernels were measured for Plexiglas plates of varying thicknesses. Preliminary results suggest that this kernel model may solve the shortcomings of the Uniform Scatter Fraction model.
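
    The kernel-superposition idea behind such a physical scatter model can be sketched as follows; the attenuation coefficient, thickness binning, and kernel inputs are all illustrative assumptions, not the implementation evaluated above. The scatter-corrected primary is then projection minus the returned estimate.

    ```python
    import numpy as np

    def sks_scatter_estimate(projection, kernels, thickness_edges, mu=0.02):
        """Single-projection scatter estimate by kernel superposition: group
        pixels by water-equivalent thickness (derived from attenuation with
        an assumed coefficient mu per mm) and convolve each group's signal
        with the scatter kernel measured for that thickness bin (cf. the
        Plexiglas-plate kernels above). Kernels are assumed centered at
        pixel (0, 0) in wrap-around convention."""
        thickness = -np.log(np.clip(projection, 1e-6, None)) / mu
        scatter = np.zeros_like(projection)
        for kern, lo, hi in zip(kernels, thickness_edges[:-1], thickness_edges[1:]):
            mask = (thickness >= lo) & (thickness < hi)
            amplitude = np.where(mask, projection, 0.0)
            # FFT convolution with the thickness-binned scatter kernel
            conv = np.fft.ifft2(np.fft.fft2(amplitude) *
                                np.fft.fft2(kern, amplitude.shape))
            scatter += np.real(conv)
        return scatter
    ```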

  10. Kernel score statistic for dependent data.

    PubMed

    Malzahn, Dörthe; Friedrichs, Stefanie; Rosenberger, Albert; Bickeböller, Heike

    2014-01-01

    The kernel score statistic is a global covariance component test over a set of genetic markers. It provides a flexible modeling framework and does not collapse marker information. We generalize the kernel score statistic to allow for familial dependencies and to adjust for random confounder effects. With this extension, we adjust our analysis of real and simulated baseline systolic blood pressure for polygenic familial background. We find that the kernel score test gains appreciably in power through the use of sequencing compared to tag single-nucleotide polymorphisms for very rare single-nucleotide polymorphisms with <1% minor allele frequency.
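
    A minimal, SKAT-style version of the kernel score statistic (without the familial-dependence and confounder adjustments that the paper develops):

    ```python
    import numpy as np

    def kernel_score_statistic(y, G, X=None):
        """Variance-component (kernel) score statistic: Q = r^T K r with r
        the residuals of the null model and K = G G^T a linear kernel over
        the marker genotypes."""
        n = len(y)
        X = np.ones((n, 1)) if X is None else X
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # null model: covariates only
        r = y - X @ beta
        K = G @ G.T                                   # linear kernel over markers
        return r @ K @ r

    # usage: statistic for null data
    rng = np.random.default_rng(0)
    G = rng.integers(0, 3, size=(200, 30)).astype(float)   # genotype dosages
    y = rng.normal(size=200)
    Q = kernel_score_statistic(y, G)
    ```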

  11. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2014-01-01 2014-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...

  12. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2012-01-01 2012-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...

  13. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2013-01-01 2013-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...

  14. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2011-01-01 2011-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...

  15. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2010-01-01 2010-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...

  16. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...

  17. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...

  18. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...

  19. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  20. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  1. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a...

  2. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a...

  3. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  4. KITTEN Lightweight Kernel 0.1 Beta

    2007-12-12

    The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general-purpose OS kernels.

  5. Quantum kernel applications in medicinal chemistry.

    PubMed

    Huang, Lulu; Massa, Lou

    2012-07-01

    Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design. PMID:22857535

  6. Variational Dirichlet Blur Kernel Estimation.

    PubMed

    Zhou, Xu; Mateos, Javier; Zhou, Fugen; Molina, Rafael; Katsaggelos, Aggelos K

    2015-12-01

    Blind image deconvolution involves two key objectives: 1) latent image estimation and 2) blur estimation. For latent image estimation, we propose a fast deconvolution algorithm, which uses an image prior of nondimensional Gaussianity measure to enforce sparsity and an undetermined boundary condition methodology to reduce boundary artifacts. For blur estimation, a linear inverse problem with normalization and nonnegative constraints must be solved. However, the normalization constraint is ignored in many blind image deblurring methods, mainly because it makes the problem less tractable. In this paper, we show that the normalization constraint can be very naturally incorporated into the estimation process by using a Dirichlet distribution to approximate the posterior distribution of the blur. Making use of the variational Dirichlet approximation, we provide a blur posterior approximation that considers the uncertainty of the estimate and removes noise in the estimated kernel. Experiments with synthetic and real data demonstrate that the proposed method is very competitive with state-of-the-art blind image restoration methods. PMID:26390458

  7. Weighted Bergman Kernels and Quantization

    NASA Astrophysics Data System (ADS)

    Engliš, Miroslav

    Let Ω be a bounded pseudoconvex domain in C^N, let φ, ψ be two positive functions on Ω such that -log ψ and -log φ are plurisubharmonic, and let z ∈ Ω be a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion for x, y near z, where φ(x, y) is an almost-analytic extension of φ(x) = φ(x, x), and similarly for ψ. If in addition Ω is of finite type, φ, ψ behave reasonably at the boundary, and -log φ, -log ψ are strictly plurisubharmonic on Ω, we obtain also an analogous asymptotic expansion for the Berezin transform and give applications to the Berezin quantization. Finally, for Ω smoothly bounded and strictly pseudoconvex and φ a smooth strictly plurisubharmonic defining function for Ω, we also obtain results on the Berezin-Toeplitz quantization.

  8. TICK: Transparent Incremental Checkpointing at Kernel Level

    SciTech Connect

    Petrini, Fabrizio; Gioiosa, Roberto

    2004-10-25

    TICK is a software package, implemented in Linux 2.6, that allows user processes to be saved and restored without any change to the user code or binary. With TICK, a process can be suspended by the Linux kernel upon receiving an interrupt and saved in a file. This file can later be thawed on another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module, in Linux version 2.6.5.

  9. Formalism for neutron cross section covariances in the resonance region using kernel approximation

    SciTech Connect

    Oblozinsky, P.; Cho,Y-S.; Matoon,C.M.; Mughabghab,S.F.

    2010-04-09

    We describe analytical formalism for estimating neutron radiative capture and elastic scattering cross section covariances in the resolved resonance region. We use capture and scattering kernels as the starting point and show how to get average cross sections in broader energy bins, derive analytical expressions for cross section sensitivities, and deduce cross section covariances from the resonance parameter uncertainties in the recently published Atlas of Neutron Resonances. The formalism elucidates the role of resonance parameter correlations which become important if several strong resonances are located in one energy group. Importance of potential scattering uncertainty as well as correlation between potential scattering and resonance scattering is also examined. Practical application of the formalism is illustrated on ⁵⁵Mn(n,γ) and ⁵⁵Mn(n,el).

  10. Xyloglucans from flaxseed kernel cell wall: Structural and conformational characterisation.

    PubMed

    Ding, Huihuang H; Cui, Steve W; Goff, H Douglas; Chen, Jie; Guo, Qingbin; Wang, Qi

    2016-10-20

    The structure of the ethanol-precipitated fraction of 1 M KOH-extracted flaxseed kernel polysaccharides (KPI-EPF) was studied to better understand the molecular structures of flaxseed kernel cell wall polysaccharides. Based on methylation/GC-MS, NMR spectroscopy, and MALDI-TOF-MS analysis, the dominant sugar residues of the KPI-EPF fraction comprised (1,4,6)-linked-β-d-glucopyranose (24.1 mol%), terminal α-d-xylopyranose (16.2 mol%), (1,2)-α-d-linked-xylopyranose (10.7 mol%), (1,4)-β-d-linked-glucopyranose (10.7 mol%), and terminal β-d-galactopyranose (8.5 mol%). KPI-EPF was proposed to consist of xyloglucans: the substitution rate of the backbone is 69.3%; R1 could be T-α-d-Xylp-(1→, or none; R2 could be T-α-d-Xylp-(1→, T-β-d-Galp-(1→2)-α-d-Xylp-(1→, or T-α-l-Araf-(1→2)-α-d-Xylp-(1→; R3 could be T-α-d-Xylp-(1→, T-β-d-Galp-(1→2)-α-d-Xylp-(1→, T-α-l-Fucp-(1→2)-β-d-Galp-(1→2)-α-d-Xylp-(1→, or none. The Mw of KPI-EPF was calculated to be 1506 kDa by static light scattering (SLS). The structure-sensitive parameter (ρ) of KPI-EPF was calculated as 1.44, which confirmed the highly branched structure of the extracted xyloglucans. These new findings on flaxseed kernel xyloglucans will be helpful for understanding their fermentation properties and potential applications. PMID:27474598

  11. A kernel autoassociator approach to pattern classification.

    PubMed

    Zhang, Haihong; Huang, Weimin; Huang, Zhiyong; Zhang, Bailing

    2005-06-01

    Autoassociators are a special type of neural networks which, by learning to reproduce a given set of patterns, grasp the underlying concept that is useful for pattern classification. In this paper, we present a novel nonlinear model referred to as kernel autoassociators based on kernel methods. While conventional non-linear autoassociation models emphasize searching for the non-linear representations of input patterns, a kernel autoassociator takes a kernel feature space as the nonlinear manifold, and places emphasis on the reconstruction of input patterns from the kernel feature space. Two methods are proposed to address the reconstruction problem, using linear and multivariate polynomial functions, respectively. We apply the proposed model to novelty detection with or without novelty examples and study it on the promoter detection and sonar target recognition problems. We also apply the model to multiclass classification problems including wine recognition, glass recognition, handwritten digit recognition, and face recognition. The experimental results show that, compared with conventional autoassociators and other recognition systems, kernel autoassociators can provide better or comparable performance for concept learning and recognition in various domains. PMID:15971928

  13. PET Image Reconstruction Using Kernel Method

    PubMed Central

    Wang, Guobao; Qi, Jinyi

    2014-01-01

    Image reconstruction from low-count PET projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization (EM) algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4D dynamic PET patient dataset showed promising results. PMID:25095249
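
    The kernelized EM update can be stated compactly: with image model x = K·alpha, ordinary MLEM is run on the coefficients alpha using the composite system matrix P·K. A minimal dense-matrix sketch follows (real systems use sparse or on-the-fly projectors; matrix names are illustrative):

        import numpy as np

        def kernelized_mlem(P, K, y, n_iter=50, eps=1e-12):
            A = P @ K                                  # composite forward model
            alpha = np.ones(K.shape[1])
            sens = A.T @ np.ones(P.shape[0]) + eps     # sensitivity (normalization)
            for _ in range(n_iter):
                ratio = y / (A @ alpha + eps)          # measured / expected counts
                alpha *= (A.T @ ratio) / sens          # multiplicative EM update
            return K @ alpha                           # reconstructed image x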

  14. Kernels and point processes associated with Whittaker functions

    NASA Astrophysics Data System (ADS)

    Blower, Gordon; Chen, Yang

    2016-09-01

    This article considers Whittaker's confluent hypergeometric function W_{κ,μ} where κ is real and μ is real or purely imaginary. Then φ(x) = x^{-μ-1/2} W_{κ,μ}(x) arises as the scattering function of a continuous time linear system with state space L²(1/2, ∞) and input and output spaces C. The Hankel operator Γ_φ on L²(0, ∞) is expressed as a matrix with respect to the Laguerre basis and gives the Hankel matrix of moments of a Jacobi weight w₀(x) = x^b (1-x)^a. The operation of translating φ is equivalent to deforming w₀ to give w_t(x) = e^{-t/x} x^b (1-x)^a. The determinant of the Hankel matrix of moments of w_t satisfies the σ form of Painlevé's transcendental differential equation P_V. It is shown that Γ_φ gives rise to the Whittaker kernel from random matrix theory, as studied by Borodin and Olshanski [Commun. Math. Phys. 211, 335-358 (2000)]. Whittaker kernels are closely related to systems of orthogonal polynomials for a Pollaczek-Jacobi type weight lying outside the usual Szegö class.

  15. A tensor-product-kernel framework for multiscale neural activity decoding and control.

    PubMed

    Li, Lin; Brockmeier, Austin J; Choi, John S; Francis, Joseph T; Sanchez, Justin C; Príncipe, José C

    2014-01-01

    Brain machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings, including spike trains and local field potentials (LFPs), brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity in data type (spike timing versus continuous-amplitude signals) and spatiotemporal scale complicates the integration of multiscale neural activity into a single model. In this paper, we propose a tensor-product-kernel-based framework to integrate the multiscale activity and exploit the complementary information available in multiscale neural activity. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problem of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation. PMID:24829569
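
    The tensor-product construction itself is simple: a joint kernel over paired inputs is the product of the per-modality kernels, so on a common set of trials the joint Gram matrix is the elementwise product of the modality Gram matrices. A hedged sketch, assuming K_spike and K_lfp are precomputed with modality-appropriate kernels (e.g., a spike-train kernel and an RBF kernel on LFP features):

        import numpy as np

        def tensor_product_gram(K_spike, K_lfp):
            # K((s1,l1),(s2,l2)) = K_spike(s1,s2) * K_lfp(l1,l2), evaluated on
            # aligned trials as a Schur (elementwise) product of Gram matrices.
            return K_spike * K_lfp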

  16. Fast generation of sparse random kernel graphs

    SciTech Connect

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
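
    For reference, the model being sampled connects vertices i and j independently with probability min(1, kappa(x_i, x_j)/n); the naive sampler below makes that definition concrete but costs O(n²), which is exactly the cost the paper's algorithm avoids. The vertex attributes x and kernel kappa are illustrative assumptions.

        import numpy as np

        def naive_kernel_graph(x, kappa, rng=None):
            rng = np.random.default_rng() if rng is None else rng
            n = len(x)
            edges = []
            for i in range(n):
                for j in range(i + 1, n):
                    # Independent Bernoulli edge with kernel-defined probability.
                    if rng.random() < min(1.0, kappa(x[i], x[j]) / n):
                        edges.append((i, j))
            return edges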

  17. Fast generation of sparse random kernel graphs

    DOE PAGES

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.

  18. Kernel bandwidth estimation for nonparametric modeling.

    PubMed

    Bors, Adrian G; Nasios, Nikolaos

    2009-12-01

    Kernel density estimation is a nonparametric procedure for probability density modeling, which has found several applications in various fields. The smoothness and modeling ability of the functional approximation are controlled by the kernel bandwidth. In this paper, we describe a Bayesian estimation method for finding the bandwidth from a given data set. The proposed bandwidth estimation method is applied in three different computational-intelligence methods that rely on kernel density estimation: 1) scale space; 2) mean shift; and 3) quantum clustering. The third method is a novel approach that relies on the principles of quantum mechanics. This method is based on the analogy between data samples and quantum particles and uses the Schrödinger potential as a cost function. The proposed methodology is used for blind-source separation of modulated signals and for terrain segmentation based on topography information.
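
    The role of the bandwidth is easiest to see in the estimator itself: the kernel density estimate below is a sum of Gaussians of width h, so h directly controls smoothness. The paper's Bayesian selection of h is not reproduced here; this sketch only fixes the notation, for 1-D data.

        import numpy as np

        def kde_gauss(x_query, data, h):
            # f_hat(x) = (1/(n*h)) * sum_i N((x - x_i)/h)
            u = (x_query[:, None] - data[None, :]) / h
            return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))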

  19. Experimental study of turbulent flame kernel propagation

    SciTech Connect

    Mansour, Mohy; Peters, Norbert; Schrader, Lars-Uve

    2008-07-15

    Flame kernels in spark-ignited combustion systems dominate the flame propagation and combustion stability and performance. They are likely controlled by the spark energy, flow field and mixing field. The aim of the present work is to experimentally investigate the structure and propagation of the flame kernel in turbulent premixed methane flow using advanced laser-based techniques. The spark is generated using a pulsed Nd:YAG laser with 20 mJ pulse energy in order to avoid the effect of the electrodes on the flame kernel structure and the variation of spark energy from shot to shot. Four flames have been investigated at equivalence ratios, φ_j, of 0.8 and 1.0 and jet velocities, U_j, of 6 and 12 m/s. A combined two-dimensional Rayleigh and LIPF-OH technique has been applied. The flame kernel structure has been collected at several time intervals from the laser ignition between 10 µs and 2 ms. The data show that the flame kernel structure starts with a spherical shape, changes gradually to peanut-like, then to mushroom-like, and is finally disturbed by the turbulence. The mushroom-like structure lasts longer in the stoichiometric and slower jet velocity. The growth rate of the average flame kernel radius is divided into two linear relations; the first one, during the first 100 µs, is almost three times faster than that at the later stage between 100 and 2000 µs. The flame propagation is slightly faster in leaner flames. The trends of the flame propagation, flame radius, flame cross-sectional area and mean flame temperature are related to the jet velocity and equivalence ratio. The relations obtained in the present work allow the prediction of any of these parameters at different conditions.

  20. On Bayesian adaptive video super resolution.

    PubMed

    Liu, Ce; Sun, Deqing

    2014-02-01

    Although multiframe super resolution has been extensively studied in past decades, super resolving real-world video sequences still remains challenging. In existing systems, either the motion models are oversimplified or important factors such as blur kernel and noise level are assumed to be known. Such models cannot capture the intrinsic characteristics that may differ from one sequence to another. In this paper, we propose a Bayesian approach to adaptive video super resolution via simultaneously estimating underlying motion, blur kernel, and noise level while reconstructing the original high-resolution frames. As a result, our system not only produces very promising super resolution results outperforming the state of the art, but also adapts to a variety of noise levels and blur kernels. To further analyze the effect of noise and blur kernel, we perform a two-step analysis using the Cramer-Rao bounds. We study how blur kernel and noise influence motion estimation with aliasing signals, how noise affects super resolution with perfect motion, and finally how blur kernel and noise influence super resolution with unknown motion. Our analysis results confirm empirical observations, in particular that an intermediate size blur kernel achieves the optimal image reconstruction results.

  1. Volatile compound formation during argan kernel roasting.

    PubMed

    El Monfalouti, Hanae; Charrouf, Zoubida; Giordano, Manuela; Guillaume, Dominique; Kartah, Badreddine; Harhar, Hicham; Gharby, Saïd; Denhez, Clément; Zeppa, Giuseppe

    2013-01-01

    Virgin edible argan oil is prepared by cold-pressing argan kernels previously roasted at 110 degrees C for up to 25 minutes. The concentration of 40 volatile compounds in virgin edible argan oil was determined as a function of argan kernel roasting time. Most of the volatile compounds begin to be formed after 15 to 25 minutes of roasting. This suggests that a strictly controlled roasting time should allow the modulation of argan oil taste and thus satisfy different types of consumers. This could be of major importance considering the present booming use of edible argan oil.

  2. Reduced multiple empirical kernel learning machine.

    PubMed

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs a high time and space complexity in contrast to single kernel learning, which is undesirable in real-world applications. Meanwhile, it is known that the kernel mapping ways of MKL generally have two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, it is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts Gaussian elimination to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings a simpler computation and meanwhile needs less storage space, especially in the processing of testing. Finally, the experimental results show that RMEKLM offers efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3
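
    For context, the empirical kernel mapping (EKM) that RMEKLM builds on embeds the training set so that dot products reproduce the Gram matrix; a hedged sketch is below. The Gaussian-elimination reduction that gives RMEKLM its savings is not reproduced here.

        import numpy as np

        def empirical_kernel_map(K, tol=1e-10):
            # With K = V diag(w) V', the rows of Z = K V w^{-1/2} embed the
            # data so that Z Z' = K for a PSD Gram matrix K.
            w, V = np.linalg.eigh(K)
            keep = w > tol                 # drop numerically null directions
            return K @ V[:, keep] / np.sqrt(w[keep])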

  3. Utilizing Kernelized Advection Schemes in Ocean Models

    NASA Astrophysics Data System (ADS)

    Zadeh, N.; Balaji, V.

    2008-12-01

    There has been a recent effort in the ocean model community to use a set of generic FORTRAN library routines for advection of scalar tracers in the ocean. In a collaborative project called Hybrid Ocean Model Environment (HOME), vastly different advection schemes (space-differencing schemes for the advection equation) become available to modelers in the form of subroutine calls (kernels). In this talk we explore the possibility of utilizing ESMF data structures in wrapping these kernels so that they can be readily used in ESMF gridded components.

  4. Kernel abortion in maize. II. Distribution of ¹⁴C among kernel carbohydrates

    SciTech Connect

    Hanft, J.M.; Jones, R.J.

    1986-06-01

    This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35 °C were transferred to (¹⁴C)sucrose media 10 days after pollination. Kernels cultured at 35 °C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on (¹⁴C)sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35 °C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35 °C compared to kernels cultured at 30 °C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35 °C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30 °C (89%). Kernels cultured at 35 °C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose.

  5. Accuracy of Reduced and Extended Thin-Wire Kernels

    SciTech Connect

    Burke, G J

    2008-11-24

    Some results are presented comparing the accuracy of the reduced thin-wire kernel and an extended kernel with exact integration of the 1/R term of the Green's function; results are shown for simple wire structures.

  6. Gabor-based kernel PCA with doubly nonlinear mapping for face recognition with a single face image.

    PubMed

    Xie, Xudong; Lam, Kin-Man

    2006-09-01

    In this paper, a novel Gabor-based kernel principal component analysis (PCA) with doubly nonlinear mapping is proposed for human face recognition. In our approach, the Gabor wavelets are used to extract facial features, and then a doubly nonlinear mapping kernel PCA (DKPCA) is proposed to perform feature transformation and face recognition. The conventional kernel PCA nonlinearly maps an input image into a high-dimensional feature space in order to make the mapped features linearly separable. However, this method does not consider the structural characteristics of the face images, and it is difficult to determine which nonlinear mapping is more effective for face recognition. In this paper, a new method of nonlinear mapping, which is performed in the original feature space, is defined. The proposed nonlinear mapping not only considers the statistical property of the input features, but also adopts an eigenmask to emphasize important facial feature points. Therefore, after this mapping, the transformed features have a higher discriminating power, and the relative importance of the features adapts to the spatial importance of the face images. This new nonlinear mapping is combined with the conventional kernel PCA, giving the "doubly" nonlinear mapping kernel PCA. The proposed algorithm is evaluated on the Yale database, the AR database, the ORL database and the YaleB database against different face recognition methods such as PCA, Gabor wavelets plus PCA, and Gabor wavelets plus kernel PCA with fractional power polynomial models. Experiments show that consistent and promising results are obtained.

  7. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  8. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    SciTech Connect

    Charles Barnes; CLay Richardson; Scott Nagley; John Hunn; Eric Shaber

    2010-10-01

    Babcock and Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated-particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) has increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.

  9. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... the separated half of a kernel with not more than one-eighth broken off....

  10. Kernel Temporal Differences for Neural Decoding

    PubMed Central

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
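
    A minimal sketch of the flavor of kernel TD with λ = 0, under simplifying assumptions (fixed step size, no dictionary sparsification): the value function is a kernel expansion over visited states, and each transition appends the current state as a center weighted by the TD error.

        class KernelTD:
            def __init__(self, kernel, eta=0.1, gamma=0.9):
                self.kernel, self.eta, self.gamma = kernel, eta, gamma
                self.centers, self.alphas = [], []

            def value(self, x):
                # V(x) = sum_i alpha_i * k(c_i, x)
                return sum(a * self.kernel(c, x)
                           for c, a in zip(self.centers, self.alphas))

            def update(self, x, reward, x_next):
                # The TD error sets the coefficient of the newly added center.
                delta = reward + self.gamma * self.value(x_next) - self.value(x)
                self.centers.append(x)
                self.alphas.append(self.eta * delta)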

  11. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  12. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  13. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  14. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  15. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  16. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...

  17. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 3 2011-04-01 2011-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...

  18. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 3 2012-04-01 2012-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...

  19. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 3 2013-04-01 2013-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...

  20. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a..., packaging, transporting, or holding food, subject to the provisions of this section. (a) Tamarind...

  1. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...

  2. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 7 2013-01-01 2013-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...

  3. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...

  4. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 7 2013-01-01 2013-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...

  5. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...

  6. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...

  7. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...

  8. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of...

  9. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...

  10. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of...

  11. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...

  12. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...

  13. Rapid scatter estimation for CBCT using the Boltzmann transport equation

    NASA Astrophysics Data System (ADS)

    Sun, Mingshan; Maslowski, Alex; Davis, Ian; Wareing, Todd; Failla, Gregory; Star-Lack, Josh

    2014-03-01

    Scatter in cone-beam computed tomography (CBCT) is a significant problem that degrades image contrast, uniformity and CT number accuracy. One means of estimating and correcting for detected scatter is through an iterative deconvolution process known as scatter kernel superposition (SKS). While the SKS approach is efficient, clinically significant errors on the order of 2-4% (20-40 HU) still remain. We have previously shown that the kernel method can be improved by perturbing the kernel parameters based on reference data provided by limited Monte Carlo simulations of a first-pass reconstruction. In this work, we replace the Monte Carlo modeling with a deterministic Boltzmann solver (AcurosCTS) to generate the reference scatter data in a dramatically reduced time. In addition, the algorithm is improved so that instead of adjusting kernel parameters, we directly perturb the SKS scatter estimates. Studies were conducted on simulated data and on a large pelvis phantom scanned on a tabletop system. The new method reduced average reconstruction errors (relative to a reference scan) from 2.5% to 1.8%, and significantly improved visualization of low contrast objects. In total, 24 projections were simulated with an AcurosCTS execution time of 22 sec/projection using an 8-core computer. We have ported AcurosCTS to the GPU, and current run-times are approximately 4 sec/projection using two GPUs running in parallel.
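
    As background, the SKS step that the proposed method perturbs can be caricatured with a stationary kernel: scatter is estimated by convolving the (estimated) primary signal with a normalized kernel, and the primary is recovered by fixed-point iteration. The kernel shape and amplitude below are illustrative placeholders for the thickness-adaptive kernels used in practice.

        import numpy as np
        from scipy.signal import fftconvolve

        def scatter_estimate(primary, kernel, amplitude=0.3):
            # Stationary SKS: scatter ~ amplitude * (primary (*) normalized kernel).
            return amplitude * fftconvolve(primary, kernel / kernel.sum(), mode="same")

        def deconvolve_primary(projection, kernel, n_iter=5):
            # Fixed point of: primary = measured - scatter(primary).
            primary = projection.copy()
            for _ in range(n_iter):
                primary = projection - scatter_estimate(primary, kernel)
            return primary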

  14. Chare kernel; A runtime support system for parallel computations

    SciTech Connect

    Shu, W. ); Kale, L.V. )

    1991-03-01

    This paper presents the chare kernel system, which supports parallel computations with irregular structure. The chare kernel is a collection of primitive functions that manage chares, manipulate messages, invoke atomic computations, and coordinate concurrent activities. Programs written in the chare kernel language can be executed on different parallel machines without change. Users writing such programs concern themselves with the creation of parallel actions but not with assigning them to specific processors. The authors describe the design and implementation of the chare kernel. Performance of chare kernel programs on two hypercube machines, the Intel iPSC/2 and the NCUBE, is also given.

  15. Kernel weights optimization for error diffusion halftoning method

    NASA Astrophysics Data System (ADS)

    Fedoseev, Victor

    2015-02-01

    This paper describes a study to find the best error diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of values. As an objective measure of quality, WSNR was used. The problem of multidimensional optimization was solved numerically using several well-known algorithms: Nelder-Mead, BFGS, and others. The study found a kernel function that provides a quality gain of about 5% in comparison with the best of the commonly used kernels, introduced by Floyd and Steinberg. The other kernels obtained allow a significant reduction in the computational complexity of the halftoning process without reducing its quality.
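
    For reference, the baseline that the optimized kernels are compared against is Floyd-Steinberg error diffusion: quantize each pixel in raster order and distribute the quantization error to unprocessed neighbors with fixed weights. A straightforward sketch for a grayscale image in [0, 1]:

        import numpy as np

        def error_diffusion(img):
            # Floyd-Steinberg weights: right 7/16; next row (-1, 0, +1): 3, 5, 1 /16.
            img = img.astype(float).copy()
            h, w = img.shape
            out = np.zeros_like(img)
            weights = [(0, 1, 7/16), (1, -1, 3/16), (1, 0, 5/16), (1, 1, 1/16)]
            for y in range(h):
                for x in range(w):
                    out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
                    err = img[y, x] - out[y, x]
                    for dy, dx, wt in weights:
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w:
                            img[yy, xx] += err * wt
            return out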

  16. Online kernel principal component analysis: a reduced-order model.

    PubMed

    Honeine, Paul

    2012-09-01

    Kernel principal component analysis (kernel-PCA) is an elegant nonlinear extension of one of the most used data analysis and dimensionality reduction techniques, the principal component analysis. In this paper, we propose an online algorithm for kernel-PCA. To this end, we examine a kernel-based version of Oja's rule, initially put forward to extract a linear principal axis. As with most kernel-based machines, the model order equals the number of available observations. To provide an online scheme, we propose to control the model order. We discuss theoretical results, such as an upper bound on the error of approximating the principal functions with the reduced-order model. We derive a recursive algorithm to discover the first principal axis, and extend it to multiple axes. Experimental results demonstrate the effectiveness of the proposed approach, both on a synthetic data set and on images of handwritten digits, in comparison with classical kernel-PCA and iterative kernel-PCA.
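
    For contrast with the online scheme, classical batch kernel-PCA is a small computation once the Gram matrix is in hand, but its cost and memory grow with the number of observations, which is what motivates the reduced-order online rule. A hedged baseline sketch:

        import numpy as np

        def batch_kernel_pca(K, n_components):
            n = K.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n
            Kc = J @ K @ J                      # double-center the Gram matrix
            w, V = np.linalg.eigh(Kc)
            idx = np.argsort(w)[::-1][:n_components]
            # Projections of the training points onto the leading components.
            return Kc @ V[:, idx] / np.sqrt(w[idx])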

  17. A Novel Framework for Learning Geometry-Aware Kernels.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Xu, Chen; Chen, Bo

    2016-05-01

    The data from real world usually have nonlinear geometric structure, which are often assumed to lie on or close to a low-dimensional manifold in a high-dimensional space. How to detect this nonlinear geometric structure of the data is important for the learning algorithms. Recently, there has been a surge of interest in utilizing kernels to exploit the manifold structure of the data. Such kernels are called geometry-aware kernels and are widely used in the machine learning algorithms. The performance of these algorithms critically relies on the choice of the geometry-aware kernels. Intuitively, a good geometry-aware kernel should utilize additional information other than the geometric information. In many applications, it is required to compute the out-of-sample data directly. However, most of the geometry-aware kernel methods are restricted to the available data given beforehand, with no straightforward extension for out-of-sample data. In this paper, we propose a framework for more general geometry-aware kernel learning. The proposed framework integrates multiple sources of information and enables us to develop flexible and effective kernel matrices. Then, we theoretically show how the learned kernel matrices are extended to the corresponding kernel functions, in which the out-of-sample data can be computed directly. Under our framework, a novel family of geometry-aware kernels is developed. Especially, some existing geometry-aware kernels can be viewed as instances of our framework. The performance of the kernels is evaluated on dimensionality reduction, classification, and clustering tasks. The empirical results show that our kernels significantly improve the performance.

  18. Quark-hadron duality: Pinched kernel approach

    NASA Astrophysics Data System (ADS)

    Dominguez, C. A.; Hernandez, L. A.; Schilcher, K.; Spiesberger, H.

    2016-08-01

    Hadronic spectral functions measured by the ALEPH collaboration in the vector and axial-vector channels are used to study potential quark-hadron duality violations (DV). This is done entirely in the framework of pinched kernel finite energy sum rules (FESR), i.e. in a model independent fashion. The kinematical range of the ALEPH data is effectively extended up to s = 10 GeV² by using an appropriate kernel, and assuming that in this region the spectral functions are given by perturbative QCD. Support for this assumption is obtained by using e⁺e⁻ annihilation data in the vector channel. Results in both channels show a good saturation of the pinched FESR, without further need of explicit models of DV.
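
    Schematically, a pinched FESR equates a weighted integral over the measured spectral function to a QCD contour integral, with a kernel chosen to vanish at s = s₀ so that duality violations near the timelike axis are suppressed. A generic form is shown below in LaTeX; the specific kernels used in the paper may differ.

        \int_0^{s_0} ds\, K(s)\, \frac{1}{\pi}\,\mathrm{Im}\,\Pi(s)\Big|_{\mathrm{data}}
          = -\frac{1}{2\pi i} \oint_{|s|=s_0} ds\, K(s)\, \Pi(s)\Big|_{\mathrm{QCD}},
        \qquad K(s) = 1 - \frac{s}{s_0}.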

  19. Wilson Dslash Kernel From Lattice QCD Optimization

    SciTech Connect

    Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.; Vaidyanathan, Karthikeyan

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we detail our work in optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the technique gives excellent performance on the regular Xeon architecture as well.

  20. Searching and Indexing Genomic Databases via Kernelization

    PubMed Central

    Gagie, Travis; Puglisi, Simon J.

    2015-01-01

    The rapid advance of DNA sequencing technologies has yielded databases of thousands of genomes. To search and index these databases effectively, it is important that we take advantage of the similarity between those genomes. Several authors have recently suggested searching or indexing only one reference genome and the parts of the other genomes where they differ. In this paper, we survey the 20-year history of this idea and discuss its relation to kernelization in parameterized complexity. PMID:25710001

  1. Random scattering matrices for Andreev quantum dots with nonideal leads

    NASA Astrophysics Data System (ADS)

    Béri, B.

    2009-06-01

    We calculate the distribution of the scattering matrix at the Fermi level for chaotic normal-superconducting systems for the case of arbitrary coupling of the scattering region to the scattering channels. The derivation is based on the assumption of uniformly distributed scattering matrices at ideal coupling, which holds in the absence of a gap in the quasiparticle excitation spectrum. The resulting distribution is the analog of the Poisson kernel for the nonstandard symmetry classes introduced by Altland and Zirnbauer. We show that unlike the Poisson kernel, the analyticity-ergodicity constraint does not apply to our result. As a simple application, we calculate the distribution of the conductance for a single-channel chaotic Andreev quantum dot in a magnetic field.

  2. A Fast Reduced Kernel Extreme Learning Machine.

    PubMed

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to the work on the Support Vector Machine (SVM) or Least Squares SVM (LS-SVM), which identify the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support-vector sufficiency. Experimental results on a wide variety of real-world small- and large-instance-size applications covering binary classification, multi-class problems and regression show that RKELM performs at a competitive level of generalization performance to the SVM/LS-SVM at only a fraction of the computational effort.
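
    A minimal sketch of the core RKELM idea, randomly chosen support samples followed by a single regularized least-squares solve, is given below; the RBF kernel, the regularization term and all names are illustrative assumptions, not the authors' exact formulation:

      import numpy as np

      def rbf(A, B, gamma=0.5):
          """Gaussian kernel matrix between row-sample matrices A and B."""
          d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
          return np.exp(-gamma * d2)

      def rkelm_fit(X, Y, n_support=100, lam=1e-3, seed=0):
          """Randomly pick support samples, then solve one ridge system."""
          rng = np.random.default_rng(seed)
          S = X[rng.choice(len(X), size=n_support, replace=False)]  # support set
          K = rbf(X, S)                                             # n x m kernel map
          beta = np.linalg.solve(K.T @ K + lam * np.eye(len(S)), K.T @ Y)
          return S, beta

      def rkelm_predict(X, S, beta):
          return rbf(X, S) @ beta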

  3. Semi-Supervised Kernel Mean Shift Clustering.

    PubMed

    Anand, Saket; Mittal, Sushil; Tuzel, Oncel; Meer, Peter

    2014-06-01

    Mean shift clustering is a powerful nonparametric technique that does not require prior knowledge of the number of clusters and does not constrain the shape of the clusters. However, being completely unsupervised, its performance suffers when the original distance metric fails to capture the underlying cluster structure. Despite recent advances in semi-supervised clustering methods, there has been little effort towards incorporating supervision into mean shift. We propose a semi-supervised framework for kernel mean shift clustering (SKMS) that uses only pairwise constraints to guide the clustering procedure. The points are first mapped to a high-dimensional kernel space where the constraints are imposed by a linear transformation of the mapped points. This is achieved by modifying the initial kernel matrix by minimizing a log det divergence-based objective function. We show the advantages of SKMS by evaluating its performance on various synthetic and real datasets while comparing with state-of-the-art semi-supervised clustering algorithms. PMID:26353281

  4. Kernel methods for phenotyping complex plant architecture.

    PubMed

    Kawamura, Koji; Hibrand-Saint Oyant, Laurence; Foucher, Fabrice; Thouroude, Tatiana; Loustau, Sébastien

    2014-02-01

    The Quantitative Trait Loci (QTL) mapping of plant architecture is a critical step for understanding the genetic determinism of plant architecture. Previous studies adopted simple measurements, such as plant height, stem diameter and branching intensity, for QTL mapping of plant architecture. Many of these quantitative traits are generally correlated with each other, which gives rise to statistical problems in the detection of QTL. We aim to test the applicability of kernel methods to phenotyping inflorescence architecture and its QTL mapping. We first test Kernel Principal Component Analysis (KPCA) and Support Vector Machines (SVM) on an artificial dataset of simulated inflorescences with different types of flower distribution, each coded as a sequence of flower numbers per node along a shoot. The ability of SVM and KPCA to discriminate the different inflorescence types is illustrated. We then apply the KPCA representation to a real dataset of rose inflorescence shoots (n=1460) obtained from a mapping population of 98 F1 hybrids. We find kernel principal components with high heritability (>0.7), and the QTL analysis identifies a new QTL which was not detected by a trait-by-trait analysis of simple architectural measurements. The main tools developed in this paper could be used to tackle the general problem of QTL mapping of complex (sequences, 3D structures, graphs) phenotypic traits.
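
    A minimal sketch of the KPCA step, assuming shoots are coded as fixed-length vectors of flower counts per node (the file name and all parameters are hypothetical):

      import numpy as np
      from sklearn.decomposition import KernelPCA

      # shoots: n x p matrix, one row per shoot, columns = flower counts per node
      shoots = np.loadtxt("flower_counts_per_node.csv", delimiter=",")

      kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.1)
      scores = kpca.fit_transform(shoots)   # kernel principal components, usable
                                            # as quantitative traits for QTL mapping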

  5. A Fast Reduced Kernel Extreme Learning Machine.

    PubMed

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to the work on the Support Vector Machine (SVM) or Least Squares SVM (LS-SVM), which identify the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support-vector sufficiency. Experimental results on a wide variety of real-world small- and large-instance-size applications covering binary classification, multi-class problems and regression show that RKELM performs at a competitive level of generalization performance to the SVM/LS-SVM at only a fraction of the computational effort. PMID:26829605

  6. Kernel-based machine learning techniques for infrasound signal classification

    NASA Astrophysics Data System (ADS)

    Tuma, Matthias; Igel, Christian; Mialle, Pierrick

    2014-05-01

    Infrasound monitoring is one of four remote sensing technologies continuously employed by the CTBTO Preparatory Commission. The CTBTO's infrasound network is designed to monitor the Earth for potential evidence of atmospheric or shallow underground nuclear explosions. Upon completion, it will comprise 60 infrasound array stations distributed around the globe, of which 47 were certified in January 2014. Three stages can be identified in CTBTO infrasound data processing: automated processing at the level of single array stations, automated processing at the level of the overall global network, and interactive review by human analysts. At station level, the cross correlation-based PMCC algorithm is used for initial detection of coherent wavefronts. It produces estimates for trace velocity and azimuth of incoming wavefronts, as well as other descriptive features characterizing a signal. Detected arrivals are then categorized into potentially treaty-relevant versus noise-type signals by a rule-based expert system. This corresponds to a binary classification task at the level of station processing. In addition, incoming signals may be grouped according to their travel path in the atmosphere. The present work investigates automatic classification of infrasound arrivals by kernel-based pattern recognition methods. It aims to explore the potential of state-of-the-art machine learning methods vis-a-vis the current rule-based and task-tailored expert system. To this purpose, we first address the compilation of a representative, labeled reference benchmark dataset as a prerequisite for both classifier training and evaluation. Data representation is based on features extracted by the CTBTO's PMCC algorithm. As classifiers, we employ support vector machines (SVMs) in a supervised learning setting. Different SVM kernel functions are used and adapted through different hyperparameter optimization routines. The resulting performance is compared to several baseline classifiers. All
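
    A minimal sketch of RBF-SVM training with the kind of hyperparameter search described, using synthetic data in place of the PMCC-derived features (all parameter grids illustrative):

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import GridSearchCV, train_test_split
      from sklearn.datasets import make_classification

      # stand-in for PMCC-derived features (trace velocity, azimuth, etc.)
      X, y = make_classification(n_samples=500, n_features=8, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1.0]}
      search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)   # 5-fold CV tuning
      search.fit(X_tr, y_tr)
      print(search.best_params_, search.score(X_te, y_te))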

  7. SU-E-J-135: Feasibility of Using Quantitative Cone Beam CT for Proton Adaptive Planning

    SciTech Connect

    Jingqian, W; Wang, Q; Zhang, X; Wen, Z; Zhu, X; Frank, S; Li, H; Tsui, T; Zhu, L; Wei, J

    2015-06-15

    Purpose: To investigate the feasibility of using scatter-corrected cone beam CT (CBCT) for proton adaptive planning. Methods: A phantom study was used to evaluate the CT number difference between the planning CT (pCT), quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units using the adaptive scatter kernel superposition (ASKS) technique, and raw CBCT (rCBCT). After confirming the CT number accuracy, prostate patients, each with a pCT and several sets of weekly CBCT, were investigated for this study. Spot scanning proton treatment plans were independently generated on pCT, qCBCT and rCBCT. The treatment plans were then recalculated on all images. Dose-volume-histogram (DVH) parameters and gamma analysis were used to compare the dose distributions. Results: The phantom study suggested that Hounsfield unit accuracy for different materials is within 20 HU for qCBCT and over 250 HU for rCBCT. For prostate patients, proton dose could be calculated accurately on qCBCT but not on rCBCT. When the original plan was recalculated on qCBCT, tumor coverage was maintained when the anatomy was consistent with pCT. However, large dose variations were observed when the patient anatomy changed. An adaptive plan using qCBCT was able to recover tumor coverage and reduce dose to normal tissue. Conclusion: It is feasible to use quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units for proton dose calculation and adaptive planning in proton therapy. Partly supported by Varian Medical Systems.

  8. Nonlinear feature extraction using kernel principal component analysis with non-negative pre-image.

    PubMed

    Kallas, Maya; Honeine, Paul; Richard, Cedric; Amoud, Hassan; Francis, Clovis

    2010-01-01

    The inherent physical characteristics of many real-life phenomena, including biological and physiological aspects, require adapted nonlinear tools. Moreover, the additive nature of some situations involves solutions expressed as positive combinations of data. In this paper, we propose a nonlinear feature extraction method with a non-negativity constraint. To this end, kernel principal component analysis is considered to define the most relevant features in the reproducing kernel Hilbert space. These features are the nonlinear principal components with high-order correlations between input variables. A pre-image technique is required to get back to the input space. We show that, with a non-negativity constraint, the pre-image problem can be solved efficiently using a simple iterative scheme. Furthermore, the constrained solution contributes to the stability of the algorithm. Experimental results on event-related potentials (ERP) illustrate the efficiency of the proposed method.

  9. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.

  10. Multiple kernel learning for sparse representation-based classification.

    PubMed

    Shrivastava, Ashish; Patel, Vishal M; Chellappa, Rama

    2014-07-01

    In this paper, we propose a multiple kernel learning (MKL) algorithm that is based on the sparse representation-based classification (SRC) method. Taking advantage of the ability of nonlinear kernel SRC to efficiently represent the nonlinearities in the high-dimensional feature space, we propose an MKL method based on the kernel alignment criteria. Our method uses a two-step training procedure to learn the kernel weights and sparse codes. At each iteration, the sparse codes are updated first while fixing the kernel mixing coefficients, and then the kernel mixing coefficients are updated while fixing the sparse codes. These two steps are repeated until a stopping criterion is met. The effectiveness of the proposed method is demonstrated using several publicly available image classification databases, and it is shown that this method can perform significantly better than many competitive image classification algorithms. PMID:24835226
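
    A minimal sketch of the kernel alignment score commonly used as such a criterion (the authors' exact objective may differ):

      import numpy as np

      def alignment(K1, K2):
          """Frobenius alignment <K1,K2>_F / (||K1||_F ||K2||_F) of Gram matrices."""
          return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

      # e.g. score each base kernel K_m against the ideal kernel y y^T built
      # from labels y in {-1, +1}:  w_m = alignment(K_m, np.outer(y, y))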

  11. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps.

    PubMed

    Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Lund, Torben Ellegaard; Hansen, Lars Kai

    2011-04-01

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM) are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification models. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. We show that the performance of linear models is reduced for certain scan labelings/categorizations in this data set, while the nonlinear models provide more flexibility. We show that the sensitivity map can be used to visualize nonlinear versions of kernel logistic regression, the kernel Fisher discriminant, and the SVM, and conclude that the sensitivity map is a versatile and computationally efficient tool for visualization of nonlinear kernel models in neuroimaging.
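
    A minimal numerical sketch of a sensitivity map, here computed as the mean squared finite-difference gradient of a trained model's decision function with respect to each input feature (function and variable names hypothetical):

      import numpy as np

      def sensitivity_map(decision_fn, X, eps=1e-4):
          """One sensitivity value per input feature (e.g. voxel): the mean
          squared partial derivative of the decision function over the data."""
          n, p = X.shape
          sens = np.zeros(p)
          for j in range(p):
              Xp, Xm = X.copy(), X.copy()
              Xp[:, j] += eps
              Xm[:, j] -= eps
              g = (decision_fn(Xp) - decision_fn(Xm)) / (2 * eps)
              sens[j] = np.mean(g**2)
          return sens

      # usage: sens = sensitivity_map(trained_svm.decision_function, X)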

  12. Monte Carlo Code System for Electron (Positron) Dose Kernel Calculations.

    SciTech Connect

    CHIBANI, OMAR

    1999-05-12

    Version 00 KERNEL performs dose kernel calculations for an electron (positron) isotropic point source in an infinite homogeneous medium. First, the auxiliary code PRELIM is used to prepare cross section data for the considered medium. Then the KERNEL code simulates the transport of electrons and bremsstrahlung photons through the medium until all particles reach their cutoff energies. The deposited energy is scored in concentric spherical shells at a radial distance ranging from zero to twice the source particle range.

  13. A Kernel-based Account of Bibliometric Measures

    NASA Astrophysics Data System (ADS)

    Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji

    The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on the three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of `relativity,' or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.
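
    One member of such a graph-kernel family is the von Neumann kernel, sketched below under the assumption that A is a (co-)citation matrix; the parameter alpha plays the role of the bias between relatedness (small alpha) and global importance (alpha near its convergence limit). This is an illustration, not necessarily the exact family used in the paper:

      import numpy as np

      def neumann_kernel(A, alpha):
          """Von Neumann graph kernel sum_{n>=1} alpha^(n-1) A^n = A (I - alpha A)^(-1).
          Converges for 0 <= alpha < 1/lambda_max(A)."""
          n = A.shape[0]
          return A @ np.linalg.inv(np.eye(n) - alpha * A)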

  14. Embedded real-time operating system micro kernel design

    NASA Astrophysics Data System (ADS)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require a real-time character. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section handling, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed rationally among tasks according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and has quick response while operating in an application system.

  15. Robust visual tracking via speedup multiple kernel ridge regression

    NASA Astrophysics Data System (ADS)

    Qian, Cheng; Breckon, Toby P.; Li, Hui

    2015-09-01

    Most tracking methods attempt to build up feature spaces to represent the appearance of a target. However, limited by the complex structure of the distribution of features, feature spaces constructed in a linear manner cannot characterize the nonlinear structure well. We propose an appearance model based on kernel ridge regression for visual tracking. Dense sampling is performed around the target image patches to collect the training samples. In order to obtain a kernel space well suited to describing the target appearance, multiple kernel learning is introduced into the selection of kernels. Under this framework, instead of a single kernel, a linear combination of kernels is learned from the training samples to create a kernel space. Resorting to the circulant property of a kernel matrix, a fast iterative interpolation algorithm is developed to seek the coefficients assigned to these kernels so as to give an optimal combination. After the regression function is learned, all candidate image patches gathered are taken as the input of the function, and the candidate with the maximal response is regarded as the object image patch. Extensive experimental results demonstrate that the proposed method outperforms other state-of-the-art tracking methods.
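
    A hedged sketch of how the circulant property makes kernel ridge regression solvable in the Fourier domain, in the style of kernelized correlation filters (the paper's fast iterative interpolation scheme and multiple-kernel weighting are not reproduced here; a symmetric kernel vector is assumed):

      import numpy as np

      def krr_circulant(k_first_row, y, lam=1e-2):
          """Solve (K + lam I) a = y when K is circulant with the given first row:
          in the Fourier basis the system is diagonal, so one FFT per vector suffices."""
          return np.real(np.fft.ifft(np.fft.fft(y) / (np.fft.fft(k_first_row) + lam)))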

  16. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We posit that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training sample can be used in our method. We use noised face images to obtain virtual face samples; the noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Imposing Gaussian (and other types of) noise on the original training samples is thus a simple and feasible way to obtain virtual face samples conveying possible variations of the originals. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
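
    For reference, a minimal sketch of plain (linear, no virtual samples) collaborative representation classification, which the paper extends to kernel space with virtual training samples:

      import numpy as np

      def crc_classify(X, labels, y, lam=1e-2):
          """Collaborative representation: code test sample y over all training
          samples X (d x n), then assign the class whose samples give the
          smallest reconstruction residual."""
          A = X.T @ X + lam * np.eye(X.shape[1])
          rho = np.linalg.solve(A, X.T @ y)          # representation coefficients
          best, best_res = None, np.inf
          for c in np.unique(labels):
              mask = labels == c
              res = np.linalg.norm(y - X[:, mask] @ rho[mask])
              if res < best_res:
                  best, best_res = c, res
          return best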

  17. LFK. Livermore FORTRAN Kernel Computer Test

    SciTech Connect

    McMahon, F.H.

    1990-05-01

    LFK, the Livermore FORTRAN Kernels, is a computer performance test that measures a realistic floating-point performance range for FORTRAN applications. Informally known as the Livermore Loops test, the LFK test may be used as a computer performance test, as a test of compiler accuracy (via checksums) and efficiency, or as a hardware endurance test. The LFK test, which focuses on FORTRAN as used in computational physics, measures the joint performance of the computer CPU, the compiler, and the computational structures in units of Megaflops/sec or Mflops. A C language version of subroutine KERNEL is also included which executes 24 samples of C numerical computation. The 24 kernels are a hydrodynamics code fragment, a fragment from an incomplete Cholesky conjugate gradient code, the standard inner product function of linear algebra, a fragment from a banded linear equations routine, a segment of a tridiagonal elimination routine, an example of a general linear recurrence equation, an equation of state fragment, part of an alternating direction implicit integration code, an integrate predictor code, a difference predictor code, a first sum, a first difference, a fragment from a two-dimensional particle-in-cell code, a part of a one-dimensional particle-in-cell code, an example of how casually FORTRAN can be written, a Monte Carlo search loop, an example of an implicit conditional computation, a fragment of a two-dimensional explicit hydrodynamics code, a general linear recurrence equation, part of a discrete ordinates transport program, a simple matrix calculation, a segment of a Planckian distribution procedure, a two-dimensional implicit hydrodynamics fragment, and determination of the location of the first minimum in an array.

  18. Oil point pressure of Indian almond kernels

    NASA Astrophysics Data System (ADS)

    Aregbesola, O.; Olatunde, G.; Esuola, S.; Owolarafe, O.

    2012-07-01

    The effect of preprocessing conditions such as moisture content, heating temperature, heating time and particle size on the oil point pressure of Indian almond kernels was investigated. Results showed that oil point pressure was significantly (P < 0.05) affected by the above-mentioned parameters. It was also observed that oil point pressure decreased with increasing heating temperature and heating time for both coarse and fine particles. Furthermore, an increase in moisture content resulted in increased oil point pressure for coarse particles, while there was a reduction in oil point pressure with increasing moisture content for fine particles.

  19. Verification of Chare-kernel programs

    SciTech Connect

    Bhansali, S.; Kale, L.V. )

    1989-01-01

    Experience with concurrent programming has shown that concurrent programs can conceal bugs even after extensive testing. Thus, there is a need for practical techniques which can establish the correctness of parallel programs. This paper proposes a method for proving the partial correctness of programs written in the Chare-kernel language, which is a language designed to support the parallel execution of computations with irregular structures. The proof is based on the lattice proof technique and is divided into two parts. The first part is concerned with the program behavior within a single chare instance, whereas the second part captures the inter-chare interaction.

  20. Prediction of kernel density of corn using single-kernel near infrared spectroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Corn hardness is an important property for dry- and wet-millers, food processors and corn breeders developing hybrids for specific markets. Of the several methods used to measure hardness, kernel density measurements are one of the more repeatable methods to quantify hardness. Near infrared spec...

  1. TH-C-BRD-04: Beam Modeling and Validation with Triple and Double Gaussian Dose Kernel for Spot Scanning Proton Beams

    SciTech Connect

    Hirayama, S; Takayanagi, T; Fujii, Y; Fujimoto, R; Fujitaka, S; Umezawa, M; Nagamine, Y; Hosaka, M; Yasui, K; Toshito, T

    2014-06-15

    Purpose: To present the validity of our beam modeling with double and triple Gaussian dose kernels for spot scanning proton beams at the Nagoya Proton Therapy Center. This study investigates the conformance between measurements and calculation results in absolute dose for the two types of beam kernel. Methods: A dose kernel is one of the important input data required by the treatment planning software. The dose kernel is the 3D dose distribution of an infinitesimal pencil beam of protons in water and consists of integral depth doses and lateral distributions. We have adopted double and triple Gaussian models as the lateral distribution in order to account for the large-angle scattering due to nuclear reactions, by fitting simulated in-water lateral dose profiles for a needle proton beam at various depths. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table for each beam energy. The process of beam modeling is based on the method of MDACC [X.R.Zhu 2013]. Results: Comparing the absolute doses calculated by the double Gaussian model with those measured at the center of the SOBP, the difference increases up to 3.5% in the high-energy region because the large-angle scattering due to nuclear reactions is not sufficiently accounted for at intermediate depths in the double Gaussian model. When the triple Gaussian dose kernel is employed, the measured absolute dose at the center of the SOBP agrees with calculation within ±1% regardless of the SOBP width and maximum range. Conclusion: We have demonstrated beam modeling of dose distributions employing double and triple Gaussian dose kernels. The treatment planning system with the triple Gaussian dose kernel has been successfully verified and applied to patient treatment with the spot scanning technique at the Nagoya Proton Therapy Center.
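
    The generic form of such a lateral kernel, hedged here as an N-Gaussian model with depth-dependent weights and widths (symbols generic, not the authors' fitted parameterization), is

      D(r, z) = \sum_{i=1}^{N} \frac{w_i(z)}{2\pi \sigma_i^2(z)} \exp\!\left( -\frac{r^2}{2 \sigma_i^2(z)} \right), \qquad \sum_{i=1}^{N} w_i(z) = 1,

    with N = 2 or 3; the wider Gaussians absorb the large-angle scattering from nuclear reactions.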

  2. Linear and kernel methods for multi- and hypervariate change detection

    NASA Astrophysics Data System (ADS)

    Nielsen, Allan A.; Canty, Morton J.

    2010-10-01

    The iteratively re-weighted multivariate alteration detection (IR-MAD) algorithm may be used both for unsupervised change detection in multi- and hyperspectral remote sensing imagery as well as for automatic radiometric normalization of multi- or hypervariate multitemporal image sequences. Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA), kernel MAF and kernel MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In image analysis the Gram matrix is often prohibitively large (its size is the number of pixels in the image squared). In this case we may sub-sample the image and carry out the kernel eigenvalue analysis on a set of training data samples only. To obtain a transformed version of the entire image we then project all pixels, which we call the test data, mapped nonlinearly onto the primal eigenvectors. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization and kernel PCA/MAF/MNF transformations have been written.

  3. Fructan metabolism in developing wheat (Triticum aestivum L.) kernels.

    PubMed

    Verspreet, Joran; Cimini, Sara; Vergauwen, Rudy; Dornez, Emmie; Locato, Vittoria; Le Roy, Katrien; De Gara, Laura; Van den Ende, Wim; Delcour, Jan A; Courtin, Christophe M

    2013-12-01

    Although fructans play a crucial role in wheat kernel development, their metabolism during kernel maturation is far from being understood. In this study, all major fructan-metabolizing enzymes together with fructan content, fructan degree of polymerization and the presence of fructan oligosaccharides were examined in developing wheat kernels (Triticum aestivum L. var. Homeros) from anthesis until maturity. Fructan accumulation occurred mainly in the first 2 weeks after anthesis, and a maximal fructan concentration of 2.5 ± 0.3 mg fructan per kernel was reached at 16 days after anthesis (DAA). Fructan synthesis was catalyzed by 1-SST (sucrose:sucrose 1-fructosyltransferase) and 6-SFT (sucrose:fructan 6-fructosyltransferase), and to a lesser extent by 1-FFT (fructan:fructan 1-fructosyltransferase). Despite the presence of 6G-kestotriose in wheat kernel extracts, the measured 6G-FFT (fructan:fructan 6G-fructosyltransferase) activity levels were low. During kernel filling, which lasted from 2 to 6 weeks after anthesis, kernel fructan content decreased from 2.5 ± 0.3 to 1.31 ± 0.12 mg fructan per kernel (42 DAA) and the average fructan degree of polymerization decreased from 7.3 ± 0.4 (14 DAA) to 4.4 ± 0.1 (42 DAA). FEH (fructan exohydrolase) reached maximal activity between 20 and 28 DAA. No fructan-metabolizing enzyme activities were registered during the final phase of kernel maturation, and fructan content and structure remained unchanged. This study provides insight into the complex metabolism of fructans during wheat kernel development and relates fructan turnover to the general phases of kernel development.

  4. Aligning Biomolecular Networks Using Modular Graph Kernels

    NASA Astrophysics Data System (ADS)

    Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant

    Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.

  5. Bergman kernel, balanced metrics and black holes

    NASA Astrophysics Data System (ADS)

    Klevtsov, Semyon

    In this thesis we explore the connections between the Kahler geometry and Landau levels on compact manifolds. We rederive the expansion of the Bergman kernel on Kahler manifolds developed by Tian, Yau, Zelditch, Lu and Catlin, using path integral and perturbation theory. The physics interpretation of this result is as an expansion of the projector of wavefunctions on the lowest Landau level, in the special case that the magnetic field is proportional to the Kahler form. This is a geometric expansion, somewhat similar to the DeWitt-Seeley-Gilkey short time expansion for the heat kernel, but in this case describing the long time limit, without depending on supersymmetry. We also generalize this expansion to supersymmetric quantum mechanics and more general magnetic fields, and explore its applications. These include the quantum Hall effect in curved space, the balanced metrics and Kahler gravity. In particular, we conjecture that for a probe in a BPS black hole in type II strings compactified on Calabi-Yau manifolds, the moduli space metric is the balanced metric.

  6. Delimiting Areas of Endemism through Kernel Interpolation

    PubMed Central

    Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units. PMID:25611971

  7. Pareto-path multitask multiple kernel learning.

    PubMed

    Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2015-01-01

    A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches. PMID:25532155

  8. Scientific Computing Kernels on the Cell Processor

    SciTech Connect

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  9. Pareto-path multitask multiple kernel learning.

    PubMed

    Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2015-01-01

    A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.

  10. Stable Local Volatility Calibration Using Kernel Splines

    NASA Astrophysics Data System (ADS)

    Coleman, Thomas F.; Li, Yuying; Wang, Cheng

    2010-09-01

    We propose an optimization formulation using the L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances calibration accuracy with model complexity. Motivated by support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines, and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.

  11. Transcriptome analysis of Ginkgo biloba kernels

    PubMed Central

    He, Bing; Gu, Yincong; Xu, Meng; Wang, Jianwen; Cao, Fuliang; Xu, Li-an

    2015-01-01

    Ginkgo biloba is a dioecious species native to China with medicinally and phylogenetically important characteristics; however, genomic resources for this species are limited. In this study, we performed the first transcriptome sequencing of Ginkgo kernels at five time points using Illumina paired-end sequencing. Approximately 25.08 Gb of clean reads were obtained, and 68,547 unigenes with an average length of 870 bp were generated by de novo assembly. Of these unigenes, 29,987 (43.74%) were annotated in publicly available plant protein databases. A total of 3,869 genes were identified as significantly differentially expressed, and enrichment analysis was conducted at different time points. Furthermore, metabolic pathway analysis revealed that 66 unigenes are responsible for terpenoid backbone biosynthesis, with up to 12 up-regulated unigenes involved in the biosynthesis of ginkgolide and bilobalide. Differential gene expression analysis together with real-time PCR experiments indicated that the synthesis of bilobalide may have interfered with the ginkgolide synthesis process in the kernel. These data substantially expand the existing transcriptome resources of Ginkgo and provide a valuable platform for revealing more about the developmental and metabolic mechanisms of this species. PMID:26500663

  12. Analysis of maize (Zea mays) kernel density and volume using micro-computed tomography and single-kernel near infrared spectroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Maize kernel density impacts milling quality of the grain due to kernel hardness. Harder kernels are correlated with higher test weight and are more resistant to breakage during harvest and transport. Softer kernels, in addition to being susceptible to mechanical damage, are also prone to pathogen ...

  13. Comparison of Kernel Equating and Item Response Theory Equating Methods

    ERIC Educational Resources Information Center

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  14. Evidence-based kernels: fundamental units of behavioral influence.

    PubMed

    Embry, Dennis D; Biglan, Anthony

    2008-09-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior.

  15. Evidence-based Kernels: Fundamental Units of Behavioral Influence

    PubMed Central

    Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600

  16. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of...

  17. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of...

  18. Evidence-Based Kernels: Fundamental Units of Behavioral Influence

    ERIC Educational Resources Information Center

    Embry, Dennis D.; Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of…

  19. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  20. Sugar uptake into kernels of tunicate tassel-seed maize

    SciTech Connect

    Thomas, P.A.; Felker, F.C.; Crawford, C.G. )

    1990-05-01

    A maize (Zea mays L.) strain expressing both the tassel-seed (Ts-5) and tunicate (Tu) characters was developed which produces glume-covered kernels on the tassel, often borne on 7-10 mm pedicels. Vigorous plants produce up to 100 such kernels interspersed with additional sessile kernels. This floral unit provides a potentially valuable experimental system for studying sugar uptake into developing maize seeds. When detached kernels (with glumes and pedicel intact) are placed in incubation solution, fluid flows up the pedicel and into the glumes, entering the pedicel apoplast near the kernel base. The unusual anatomical features of this maize strain permit experimental access to the pedicel apoplast with much less possibility of kernel-base tissue damage than with kernels excised from the cob. [¹⁴C]Fructose incorporation into soluble and insoluble fractions of endosperm increased for 8 days. Endosperm uptake of sucrose, fructose, and D-glucose was significantly greater than that of L-glucose. Fructose uptake was significantly inhibited by CCCP, DNP, and PCMBS. These results suggest the presence of an active, non-diffusion component of sugar transport in maize kernels.

  1. Introduction to Kernel Methods: Classification of Multivariate Data

    NASA Astrophysics Data System (ADS)

    Fauvel, M.

    2016-05-01

    In this chapter, kernel methods are presented for the classification of multivariate data. An introductory example is given to illustrate the main idea of kernel methods. Emphasis is then placed on the Support Vector Machine: structural risk minimization is presented, and linear and non-linear SVMs are described. Finally, a full example of SVM classification is given on simulated hyperspectral data.
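
    In the spirit of the chapter's worked example, a minimal linear-versus-non-linear SVM sketch on synthetic two-class data (standing in for the simulated hyperspectral data):

      from sklearn.svm import SVC
      from sklearn.datasets import make_moons

      X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
      linear = SVC(kernel="linear").fit(X, y)               # linear SVM
      rbf = SVC(kernel="rbf", C=1.0, gamma=2.0).fit(X, y)   # non-linear SVM
      # the RBF kernel separates the interleaved classes; the linear SVM cannot
      print(linear.score(X, y), rbf.score(X, y))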

  2. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  3. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  4. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  5. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  6. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  7. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density estimation reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
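
    A minimal sketch of the space-time kernel density estimate each subdomain evaluates (separable Epanechnikov kernels; normalization constants simplified and bandwidths illustrative):

      import numpy as np

      def stkde(grid_xyt, pts_xyt, hs=1.0, ht=1.0):
          """Space-time kernel density at m evaluation points, given n events.
          grid_xyt: m x 3 array of (x, y, t); pts_xyt: n x 3 array of events."""
          ds = np.linalg.norm(grid_xyt[:, None, :2] - pts_xyt[None, :, :2], axis=2) / hs
          dt = np.abs(grid_xyt[:, None, 2] - pts_xyt[None, :, 2]) / ht
          ks = np.where(ds < 1, 0.75 * (1 - ds**2), 0.0)   # spatial kernel
          kt = np.where(dt < 1, 0.75 * (1 - dt**2), 0.0)   # temporal kernel
          return (ks * kt).sum(axis=1) / (len(pts_xyt) * hs**2 * ht)

    Because both kernels vanish beyond one bandwidth, each subdomain only needs the events inside its spatial and temporal buffers, which is what makes the decomposition embarrassingly parallel.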

  8. Scale Space Graph Representation and Kernel Matching for Non Rigid and Textured 3D Shape Retrieval.

    PubMed

    Garro, Valeria; Giachetti, Andrea

    2016-06-01

    In this paper we introduce a novel framework for 3D object retrieval that relies on tree-based shape representations (TreeSha) derived from the analysis of the scale-space of the Auto Diffusion Function (ADF), and on specialized graph kernels designed for their comparison. By coupling maxima of the Auto Diffusion Function with the related basins of attraction, we can link information at different scales, encoding spatial relationships in a graph description that is isometry invariant and can easily incorporate texture and additional geometric information as node and edge features. Using custom graph kernels it is then possible to estimate shape dissimilarities adapted to different specific tasks and different categories of models, making the procedure a powerful and flexible tool for shape recognition and retrieval. Experimental results demonstrate that the method provides retrieval scores similar to or better than the state of the art on textured and non-textured shape retrieval benchmarks, and give interesting insights into the effectiveness of different shape descriptors and graph kernels.

  9. Accumulation of storage products in oat during kernel development.

    PubMed

    Banaś, A; Dahlqvist, A; Debski, H; Gummeson, P O; Stymne, S

    2000-12-01

    Lipids, proteins and starch are the main storage products in oat seeds. As a first step in elucidating the regulatory mechanisms behind the deposition of these compounds, two different oat varieties, 'Freja' and 'Matilda', were analysed during kernel development. In both cultivars, the majority of the lipids accumulated at a very early stage of development, but Matilda accumulated about twice the amount of lipids compared with Freja. Accumulation of proteins and starch also started at an early stage of kernel development but, in contrast to lipids, continued over a considerably longer period. The high-oil variety Matilda also accumulated higher amounts of proteins than Freja. The starch content of Freja kernels was higher than that of Matilda kernels, and the difference was most pronounced during the early stage of development, when oil synthesis was most active. Oleosin accumulation continued during the whole period of kernel development.

  10. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
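
    A minimal sketch of a kernelized ML-EM update of the form x = K·alpha, following the published kernel-method formulation in outline; the system matrix A and kernel matrix K here are placeholders:

      import numpy as np

      def kernel_em(A, K, y, n_iter=50, eps=1e-12):
          """ML-EM on kernel coefficients: the image is x = K @ alpha.
          A: system matrix (projections x voxels); K: kernel matrix built from
          the anatomical image; y: measured projection data."""
          alpha = np.ones(K.shape[1])
          sens = K.T @ (A.T @ np.ones(A.shape[0]))       # sensitivity term
          for _ in range(n_iter):
              ybar = A @ (K @ alpha) + eps               # forward projection
              alpha *= (K.T @ (A.T @ (y / ybar))) / sens # multiplicative EM update
          return K @ alpha                               # reconstructed image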

  11. OSKI: A Library of Automatically Tuned Sparse Matrix Kernels

    SciTech Connect

    Vuduc, R; Demmel, J W; Yelick, K A

    2005-07-19

    The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.
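    OSKI itself is a C library and its API is not reproduced here; the sketch below only shows, in plain Python, the kind of computational kernel such libraries tune: a compressed sparse row (CSR) matrix-vector multiply. Autotuners replace this generic loop with register-blocked or reordered variants chosen for the specific matrix and machine.

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Reference CSR sparse matrix-vector multiply, y = A @ x.

    data/indices/indptr follow the standard CSR layout; libraries such
    as OSKI tune exactly this loop (register blocking, reordering, etc.)
    for a particular matrix and machine at run time.
    """
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = np.dot(data[lo:hi], x[indices[lo:hi]])
    return y
```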

  12. Feasibility of near infrared spectroscopy for analyzing corn kernel damage and viability of soybean and corn kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The current US corn grading system accounts for the portion of damaged kernels, which is measured by time-consuming and inaccurate visual inspection. Near infrared spectroscopy (NIRS), a non-destructive and fast analytical method, was tested as a tool for discriminating corn kernels with heat and f...

  13. A visualization tool for the kernel-driven model with improved ability in data analysis and kernel assessment

    NASA Astrophysics Data System (ADS)

    Dong, Yadong; Jiao, Ziti; Zhang, Hu; Bai, Dongni; Zhang, Xiaoning; Li, Yang; He, Dandan

    2016-10-01

    The semi-empirical, kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model has been widely used for many aspects of remote sensing. With the development of the kernel-driven model, there is a need to further assess the performance of newly developed kernels. The use of visualization tools can facilitate the analysis of model results and the assessment of newly developed kernels. However, the current version of the kernel-driven model does not contain a visualization function. In this study, a user-friendly visualization tool, named MaKeMAT, was developed specifically for the kernel-driven model. The POLDER-3 and CAR BRDF datasets were used to demonstrate the applicability of MaKeMAT. The visualization of the input multi-angle measurements enhances understanding of the measurements and allows those with good representativeness to be chosen. The visualization of modeling results facilitates the assessment of newly developed kernels. The study shows that MaKeMAT can promote the widespread application of the kernel-driven model.
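    For context, the kernel-driven model expresses bidirectional reflectance as a weighted sum R = f_iso + f_vol * K_vol + f_geo * K_geo, so assessing a new kernel largely amounts to fitting the weights against multi-angle observations. A minimal sketch, assuming the kernel values (e.g. RossThick/LiSparse) have already been computed for each sun-view geometry:

```python
import numpy as np

def fit_kernel_weights(refl, k_vol, k_geo):
    """Least-squares fit of the kernel-driven BRDF model for one band:
    R = f_iso + f_vol * K_vol + f_geo * K_geo.

    refl, k_vol, k_geo : length-n arrays of multi-angle reflectances and
    the corresponding (precomputed) volumetric and geometric kernel
    values for each observation geometry.
    Returns (f_iso, f_vol, f_geo).
    """
    X = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    coeffs, *_ = np.linalg.lstsq(X, refl, rcond=None)
    return coeffs
```

    The fitted weights can then be used to forward-model reflectance at any geometry, which is the kind of result a tool like MaKeMAT visualizes.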

  14. Kernel-machine-based classification in multi-polarimetric SAR data

    NASA Astrophysics Data System (ADS)

    Middelmann, Wolfgang; Ebert, Alfons; Thoennessen, Ulrich

    2005-05-01

    The focus of this paper is the classification of military vehicles in multi-polarimetric high-resolution spotlight SAR images in an ATR framework. Kernel machines, as robust classification methods, are the basis of our approach. A novel kernel machine, the Relevance Vector Machine with integrated Generator (RVMG), which controls the trade-off between classification quality and computational effort, is used. It combines the high classification quality of the Support Vector Machine, achieved by margin maximization, with the low effort of the Relevance Vector Machine, which stems from its statistical approach. Moreover, multi-class classification capability is provided by an efficient decision heuristic, an adaptive feature extraction based on Fourier coefficients enables real-time execution, and a parameterized reject criterion is proposed in this paper. Investigations with a nine-class data set from QinetiQ deal with fully polarimetric SAR data. The objective is to assess polarimetric features in combination with several kernel machines. Tests confirm the high potential of RVMG. Moreover, it is shown that polarimetric features can improve the classification quality for hard targets. Among these, the simple energy-based features prove more favorable than complex ones. The two coplanar polarizations in particular carry the essential information, but using all four channels yields better generalizability. An important property of a classifier used in the ATR framework is the capability to reject objects not belonging to any of the trained classes. Therefore, the QinetiQ data are divided into four training classes and five classes of confusion objects. The classification module with reject criterion is controlled by the reject parameter and the kernel parameter. Both parameters are varied to determine ROC curves related to different polarimetric features.

  15. Privacy preserving RBF kernel support vector machine.

    PubMed

    Li, Haoran; Xiong, Li; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2014-01-01

    Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed, but they neither considered the characteristics of biomedical data nor made full use of the available information, which often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid privacy-preserving differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs that are built from private data. Our method demonstrated very close performance metrics compared with nonprivate SVMs trained on the private data. PMID:25013805
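    The paper's differentially private mechanism cannot be reconstructed from this record; the hedged sketch below only sets up two of the reference points it compares against (public-only and pooled non-private training), using scikit-learn's RBF-kernel SVC on hypothetical stand-in arrays.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical arrays standing in for the two data sources.
rng = np.random.default_rng(0)
X_pub, y_pub = rng.normal(size=(50, 5)), rng.integers(0, 2, 50)
X_priv, y_priv = rng.normal(size=(500, 5)), rng.integers(0, 2, 500)

# Baseline: public data only.
svm_public = SVC(kernel="rbf", gamma="scale").fit(X_pub, y_pub)

# Non-private reference: pooled data (the performance the hybrid method
# tries to approach while still protecting the private records; the DP
# noise mechanism itself is not reproduced here).
svm_pooled = SVC(kernel="rbf", gamma="scale").fit(
    np.vstack([X_pub, X_priv]), np.concatenate([y_pub, y_priv]))
```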

  16. Kernel density estimation using graphical processing unit

    NASA Astrophysics Data System (ADS)

    Sunarko; Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution, by assigning the calculations for equally-spaced node points to the scalar processors of the GPU. The numbers of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared with the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
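    A CPU/NumPy reference for the computation being parallelized, evaluating a Gaussian KDE at equally spaced nodes over bivariate-normal samples; the bandwidth and problem sizes are illustrative (the paper maps one node point per GPU scalar processor instead of vectorizing on the CPU).

```python
import numpy as np

def kde_2d_grid(particles, nodes, h=0.1):
    # Gaussian KDE at each node: (1/N) * sum_i N(node - particle_i; h^2 I)
    d2 = ((nodes[:, None, :] - particles[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h**2)).sum(axis=1) / (
        len(particles) * 2 * np.pi * h**2)

rng = np.random.default_rng(1)
particles = rng.normal(size=(2000, 2))               # bivariate normal samples
gx, gy = np.meshgrid(np.linspace(-3, 3, 32), np.linspace(-3, 3, 32))
nodes = np.column_stack([gx.ravel(), gy.ravel()])    # equally spaced nodes
density = kde_2d_grid(particles, nodes)              # one value per node
```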

  17. Single scattering by red blood cells.

    PubMed

    Hammer, M; Schweitzer, D; Michel, B; Thamm, E; Kolb, A

    1998-11-01

    A highly diluted suspension of red blood cells (hematocrit 0.01) was illuminated with an Ar or a dye laser in the wavelength range of 458-660 nm. The extinction and the angle-resolved intensity of scattered light were measured and compared with the predictions of Mie theory, the Rayleigh-Gans approximation, and the anomalous diffraction approximation. Furthermore, empirical phase functions were fitted to the measurements. The measurements were in satisfactory agreement with the predictions of Mie theory. However, better agreement was found with the anomalous diffraction model. In the Rayleigh-Gans approximation, only small-angle scattering is described appropriately. The scattering phase function of erythrocytes may be represented by the Gegenbauer kernel phase function. PMID:18301575
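    The Gegenbauer kernel (Reynolds-McCormick) phase function mentioned here has a closed form; below is a sketch with illustrative parameter values (erythrocytes are strongly forward-scattering, so g is typically close to 1; for alpha = 0.5 the function reduces to the Henyey-Greenstein phase function).

```python
import numpy as np

def gegenbauer_kernel_phase(cos_theta, g=0.95, alpha=1.0):
    """Gegenbauer kernel (Reynolds-McCormick) phase function.

    p(theta) = K * (1 + g^2 - 2*g*cos(theta))**-(alpha + 1), with K
    chosen so that p integrates to 1 over the full solid angle.
    Parameter values are illustrative, not fitted to erythrocytes.
    """
    K = (alpha * g * (1 - g**2) ** (2 * alpha)
         / (np.pi * ((1 + g) ** (2 * alpha) - (1 - g) ** (2 * alpha))))
    return K * (1 + g**2 - 2 * g * cos_theta) ** -(alpha + 1)

theta = np.linspace(0, np.pi, 181)
p = gegenbauer_kernel_phase(np.cos(theta))   # strongly forward-peaked
```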

  18. The flare kernel in the impulsive phase

    NASA Technical Reports Server (NTRS)

    Dejager, C.

    1986-01-01

    The impulsive phase of a flare is characterized by impulsive bursts of X-ray and microwave radiation, related to impulsive footpoint heating up to 50 or 60 MK, by upward gas velocities (150 to 400 km/sec) and by a gradual increase of the flare's thermal energy content. These phenomena, as well as non-thermal effects, are all related to the impulsive energy injection into the flare. The available observations are also quantitatively consistent with a model in which energy is injected into the flare by beams of energetic electrons, causing ablation of chromospheric gas, followed by convective rise of gas. Thus, a hole is burned into the chromosphere; at the end of the impulsive phase of an average flare the lower part of that hole is situated about 1800 km above the photosphere. H alpha and other optical and UV line emission is radiated by a thin layer (approx. 20 km) at the bottom of the flare kernel. The upward-rising and outward-streaming gas cools down by conduction in about 45 s. The non-thermal effects in the initial phase are due to curtailing of the energy distribution function by the escape of energetic electrons. The single-flux-tube model of a flare does not fit these observations; instead we propose the spaghetti-bundle model. Microwave and gamma-ray observations suggest the occurrence of dense flare knots of approx. 800 km diameter and of high temperature. Future observations should concentrate on locating the microwave/gamma-ray sources, and on determining the kernel's fine structure and the related multi-loop structure of the flaring area.

  19. Labeled Graph Kernel for Behavior Analysis.

    PubMed

    Zhao, Ruiqi; Martinez, Aleix M

    2016-08-01

    Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data.

  20. Equivalence of kernel machine regression and kernel distance covariance for multidimensional phenotype association studies.

    PubMed

    Hua, Wen-Yu; Ghosh, Debashis

    2015-09-01

    Associating genetic markers with a multidimensional phenotype is an important yet challenging problem. In this work, we establish the equivalence between two popular methods: kernel-machine regression (KMR), and kernel distance covariance (KDC). KMR is a semiparametric regression framework that models covariate effects parametrically and genetic markers non-parametrically, while KDC represents a class of methods that include distance covariance (DC) and the Hilbert-Schmidt independence criterion (HSIC), which are nonparametric tests of independence. We show that the equivalence between the score test of KMR and the KDC statistic under certain conditions can lead to a novel generalization of the KDC test that incorporates covariates. Our contributions are 3-fold: (1) establishing the equivalence between KMR and KDC; (2) showing that the principles of KMR can be applied to the interpretation of KDC; (3) the development of a broader class of KDC statistics, where the class members are statistics corresponding to different kernel combinations. Finally, we perform simulation studies and an analysis of real data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. The ADNI analysis suggests that SNPs of FLJ16124 exhibit pairwise interaction effects that are strongly correlated with changes in brain region volumes. PMID:25939365
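    A hedged sketch of the simplest member of the KDC family discussed here, the (biased) HSIC estimator with Gaussian kernels on markers X and phenotypes Y; the covariate-adjusted generalization developed in the paper is not reproduced, and the bandwidths are illustrative.

```python
import numpy as np

def hsic(X, Y, gamma_x=1.0, gamma_y=1.0):
    """Biased HSIC estimator: trace(K H L H) / (n - 1)**2.

    X : (n, p) genetic markers, Y : (n, q) multidimensional phenotype.
    Gaussian kernels are used for both; other PSD kernels yield other
    members of the KDC class. Larger values indicate stronger dependence.
    """
    def gram(Z, gamma):
        sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    K, L = gram(X, gamma_x), gram(Y, gamma_y)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```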

  2. Probability-confidence-kernel-based localized multiple kernel learning with lp norm.

    PubMed

    Han, Yina; Liu, Guizhong

    2012-06-01

    Localized multiple kernel learning (LMKL) is an attractive strategy for combining multiple heterogeneous features in terms of their discriminative power for each individual sample. However, models that fit a specific sample too closely hinder generalization to unseen data, while a more general form is often insufficient to characterize diverse localities. Hence, both learning sample-specific local models for each training datum and extending the learned models to unseen test data should be equally addressed in designing an LMKL algorithm. In this paper, for an integrative solution, we propose a probability confidence kernel (PCK), which measures per-sample similarity with respect to a probabilistic-prediction-based class attribute: the class attribute similarity complements the spatial-similarity-based base kernels for more reasonable locality characterization, and the predefined form of the involved class probability density function facilitates the extension to the whole input space and ensures its statistical meaning. Incorporating PCK into a support-vector-machine-based LMKL framework, we propose a new PCK-LMKL with an arbitrary l(p)-norm constraint implied in the definition of PCKs, where both the parameters in PCK and the final classifier can be efficiently optimized in a joint manner. Evaluations of PCK-LMKL on both benchmark machine learning data sets (ten University of California Irvine (UCI) data sets) and challenging computer vision data sets (the 15-scene data set and the Caltech-101 data set) have been shown to achieve state-of-the-art performances.

  3. Gaussian kernel width optimization for sparse Bayesian learning.

    PubMed

    Mohsenzadeh, Yalda; Sheikhzadeh, Hamid

    2015-04-01

    Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on the commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters. PMID:25794377

  4. Correlation and classification of single kernel fluorescence hyperspectral data with aflatoxin concentration in corn kernels inoculated with Aspergillus flavus spores.

    PubMed

    Yao, H; Hruska, Z; Kincaid, R; Brown, R; Cleveland, T; Bhatnagar, D

    2010-05-01

    The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. Aflatoxin contamination in corn has been a long-standing problem plaguing the grain industry with potentially devastating consequences to corn growers. In this study, aflatoxin-contaminated corn kernels were produced through artificial inoculation of corn ears in the field with toxigenic A. flavus spores. The kernel fluorescence emission data were taken with a fluorescence hyperspectral imaging system when corn kernels were excited with ultraviolet light. Raw fluorescence image data were preprocessed and regions of interest in each image were created for all kernels. The regions of interest were used to extract spectral signatures and statistical information. The aflatoxin contamination level of single corn kernels was then chemically measured using affinity column chromatography. A fluorescence peak shift phenomenon was noted among different groups of kernels with different aflatoxin contamination levels. The fluorescence peak was found to shift toward longer wavelengths in the blue region for the highly contaminated kernels and toward shorter wavelengths for the clean kernels. Highly contaminated kernels also had a lower fluorescence peak magnitude than the less contaminated kernels. It was also noted that a general negative correlation exists between measured aflatoxin and the fluorescence image bands in the blue and green regions. The coefficient of determination, r(2), was 0.72 for the multiple linear regression model. The multivariate analysis of variance found that the fluorescence means of the four aflatoxin groups, <1, 1-20, 20-100, and >=100 ng g(-1) (parts per billion), were significantly different from each other at the 0.01 level of alpha. Classification accuracy under a two-class schema ranged from 0.84 to

  5. Wilson loops and QCD/string scattering amplitudes

    SciTech Connect

    Makeenko, Yuri; Olesen, Poul

    2009-07-15

    We generalize modern ideas about the duality between Wilson loops and scattering amplitudes in N=4 super Yang-Mills theory to large N QCD by deriving a general relation between QCD meson scattering amplitudes and Wilson loops. We then investigate properties of the open-string disk amplitude integrated over reparametrizations. When the Wilson loop is approximated by the area behavior, we find that the QCD scattering amplitude is a convolution of the standard Koba-Nielsen integrand and a kernel. As usual, poles originate from the first factor, whereas no (momentum-dependent) poles can arise from the kernel. We show that the kernel becomes a constant when the number of external particles becomes large. The usual Veneziano amplitude then emerges in the kinematical regime where the Wilson loop can be reliably approximated by the area behavior. In this case, we obtain a direct duality between Wilson loops and scattering amplitudes when spatial variables and momenta are interchanged, in analogy with the N=4 super Yang-Mills theory case.

  6. Scattering problems in elastodynamics

    NASA Astrophysics Data System (ADS)

    Diatta, Andre; Kadic, Muamer; Wegener, Martin; Guenneau, Sebastien

    2016-09-01

    In electromagnetism, acoustics, and quantum mechanics, scattering problems can routinely be solved numerically by virtue of perfectly matched layers (PMLs) at simulation domain boundaries. Unfortunately, the same has not been possible for general elastodynamic wave problems in continuum mechanics. In this Rapid Communication, we introduce a corresponding scattered-field formulation for the Navier equation. We derive PMLs based on complex-valued coordinate transformations leading to Cosserat elasticity-tensor distributions not obeying the minor symmetries. These layers are shown to work in two dimensions, for all polarizations, and all directions. By adaptive choice of the decay length, the deep subwavelength PMLs can be used all the way to the quasistatic regime. As demanding examples, we study the effectiveness of cylindrical elastodynamic cloaks of the Cosserat type and approximations thereof.

  7. Tracking diffusion of conditioning water in single wheat kernels of different hardnesses by near infrared hyperspectral imaging.

    PubMed

    Manley, Marena; du Toit, Gerida; Geladi, Paul

    2011-02-01

    The combination of near infrared (NIR) hyperspectral imaging and chemometrics was used to follow the diffusion of conditioning water over time in wheat kernels of different hardnesses. Conditioning was attempted with deionised water (dH(2)O) and deuterium oxide (D(2)O). The images were recorded at different conditioning times (0-36 h) from 1000 to 2498 nm with a line scan imaging system. After multivariate cleaning and spectral pre-processing (either multiplicative scatter correction or standard normal variate and Savitzky-Golay smoothing) six principal components (PCs) were calculated. These were studied interactively as score images and score plots. As no clear clusters were present in the score plots, changes in the score plots were investigated by means of classification gradients made within the respective PCs. Classes were selected in the direction of a PC (from positive to negative or negative to positive score values) in almost equal segments. Subsequently, loading line plots were used to provide a spectroscopic explanation of the classification gradients. It was shown that the first PC explained kernel curvature. PC3 was shown to be related to a moisture-starch contrast and could explain the progress of water uptake. The positive influence of protein was also observed. The behaviour of soft, hard and very hard kernels was different in this respect, with the uptake of water observed much earlier in the soft kernels than in the harder ones. The harder kernels also showed a stronger influence of protein in the loading line plots. Difference spectra showed interpretable changes over time for water but not for D(2)O, whose signal was too low in the wavelength range used. NIR hyperspectral imaging together with exploratory chemometrics, as detailed in this paper, may have wider applications than merely conditioning studies. PMID:21237309

  8. Bridging the gap between the KERNEL and RT-11

    SciTech Connect

    Hendra, R.G.

    1981-06-01

    A software package is proposed to allow users of the PL-11 language, and the LSI-11 KERNEL in general, to use their PL-11 programs under RT-11. Further, some general-purpose extensions to the KERNEL are proposed that facilitate number conversions and string manipulations. A Floating Point Package of procedures to allow full use of the hardware floating-point capability of the LSI-11 computers is proposed. Extensions to the KERNEL that allow a user to read, write and delete disc files in the manner of RT-11 are also proposed. A device directory listing routine is also included.

  9. Spectrophotometric method for determination of phosphine residues in cashew kernels.

    PubMed

    Rangaswamy, J R

    1988-01-01

    A spectrophotometric method reported for determination of phosphine (PH3) residues in wheat has been extended for determination of these residues in cashew kernels. Unlike the spectrum for wheat, the spectrum of PH3 residue-AgNO3 chromophore from cashew kernels does not show an absorption maximum at 400 nm; nevertheless, reading the absorbance at 400 nm afforded good recoveries of 90-98%. No interference occurred from crop materials, and crop controls showed low absorbance; the method can be applied for determinations as low as 0.01 ppm PH3 residue in cashew kernels.

  10. Multitasking kernel for the C and Fortran programming languages

    SciTech Connect

    Brooks, E.D. III

    1984-09-01

    A multitasking kernel for the C and Fortran programming languages which runs on the Unix operating system is presented. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the coding, debugging and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessors. The performance evaluation features require no changes in the source code of the application and are implemented as a set of compile and run time options in the kernel.

  11. Monte Carlo Code System for Electron (Positron) Dose Kernel Calculations.

    1999-05-12

    Version 00 KERNEL performs dose kernel calculations for an electron (positron) isotropic point source in an infinite homogeneous medium. First, the auxiliary code PRELIM is used to prepare cross section data for the considered medium. Then the KERNEL code simulates the transport of electrons and bremsstrahlung photons through the medium until all particles reach their cutoff energies. The deposited energy is scored in concentric spherical shells at a radial distance ranging from zero to twice the source particle range.

  12. Estimation of biological parameters of marine organisms using linear and nonlinear acoustic scattering model-based inversion methods.

    PubMed

    Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H

    2016-05-01

    The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel. PMID:27250181

  13. Kernel-based Linux emulation for Plan 9.

    SciTech Connect

    Minnich, Ronald G.

    2010-09-01

    CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss CNKemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.

  14. Inheritance of Kernel Color in Corn: Explanations and Investigations.

    ERIC Educational Resources Information Center

    Ford, Rosemary H.

    2000-01-01

    Offers a new perspective on traditional problems in genetics on kernel color in corn, including information about genetic regulation, metabolic pathways, and evolution of genes. (Contains 15 references.) (ASK)

  15. Intelligent classification methods of grain kernels using computer vision analysis

    NASA Astrophysics Data System (ADS)

    Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo

    2011-06-01

    In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.
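    A hedged scikit-learn sketch of the described pipeline: linear discriminant analysis for feature reduction followed by a back-propagation neural network. An MLP classifier stands in for the paper's four-layer network, and the feature arrays, layer sizes and split ratio are illustrative rather than the authors' settings.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def build_classifier(X, y):
    """X: 17 color/morphological features per kernel image; y: one of
    7 variety labels. Data loading is omitted here."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    clf = make_pipeline(
        LinearDiscriminantAnalysis(n_components=6),  # at most n_classes-1
        MLPClassifier(hidden_layer_sizes=(32, 16),   # back-propagation net
                      max_iter=2000, random_state=0))
    clf.fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)
```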

  16. Isolation and purification of D-mannose from palm kernel.

    PubMed

    Zhang, Tao; Pan, Ziguo; Qian, Chao; Chen, Xinzhi

    2009-09-01

    An economically viable procedure for the isolation and purification of D-mannose from palm kernel was developed in this research. The palm kernel was catalytically hydrolyzed with sulfuric acid at 100 degrees C and then fermented by mannan-degrading enzymes. The solution after fermentation underwent filtration in a silica gel column, desalination by ion-exchange resin, and crystallization in ethanol to produce pure D-mannose in a total yield of 48.4% (based on the weight of the palm kernel). Different enzymes were investigated, and the results indicated that endo-beta-mannanase was the best enzyme to promote the hydrolysis of the oligosaccharides isolated from the palm kernel. The pure D-mannose sample was characterized by FTIR, (1)H NMR, and (13)C NMR spectra.

  17. Scatter of X-rays on polished surfaces

    NASA Technical Reports Server (NTRS)

    Hasinger, G.

    1981-01-01

    In investigating the dispersion properties of telescope mirrors used in X-ray astronomy, the weak scattering of X-ray radiation by statistically rough surfaces was examined. The mathematics and geometry of scattering theory are described. The measurement test assembly is described and results of measurements on samples of plane mirrors are given. The measurement results are evaluated: the direct beam, the convolution of the direct beam and the scattering halo, curve fitting by the method of least squares, various autocorrelation functions, results of the fitting procedure for small scattering, and deviations in the kernel of the scattering distribution are presented. A procedure for quality testing of mirror systems through the diagnosis of rough surfaces is described.

  18. Interferogram interpolation method research on TSMFTIS based on kernel regression with relative deviation

    NASA Astrophysics Data System (ADS)

    Huang, Fengzhen; Li, Jingzhen; Cao, Jun

    2015-02-01

    The Temporally and Spatially Modulated Fourier Transform Imaging Spectrometer (TSMFTIS) is a new imaging spectrometer without moving mirrors or slits. As applied in remote sensing, TSMFTIS relies on the push-broom motion of the flying platform to obtain the interferogram of the detected target; if the motion state of the platform changes during imaging, the target interferogram picked up from the remote sensing image sequence deviates from the ideal interferogram, and the recovered target spectrum no longer reflects the real characteristics of the ground target. Therefore, to achieve high-precision spectrum recovery of the detected target, the geometric position of the target point on the TSMFTIS image surface can be calculated with a sub-pixel image registration method, and the true point interferogram of the target can be obtained by image interpolation. The core idea of interpolation methods (nearest-neighbour, bilinear, cubic, etc.) is to obtain the grey value of the point to be interpolated by weighting the grey values of the surrounding pixels with a kernel function constructed from the distances between those pixels and the interpolation point. This paper adopts a Gauss-based kernel regression model and presents a kernel function that combines grey-level information, through the relative deviation, with distance information; the kernel is controlled by the degree of deviation between the grey values of the surrounding pixels and their mean, so that the weights adjust themselves adaptively. The simulation adopts partial spectrum data obtained by the pushbroom hyperspectral imager (PHI) as the target spectrum, generates the successively push-broomed motion-error images using the parameters of an actual aviation platform, obtains the interferogram of the target point with the above interpolation method and, finally, recovers the spectrogram with the nonuniform fast
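    The paper's exact kernel is not given in this record; the sketch below only follows the stated idea in a bilateral-filter-like form: a spatial Gaussian weight multiplied by a term that down-weights pixels whose grey value deviates strongly, in relative terms, from the local mean. Function name, bandwidths and patch size are all illustrative.

```python
import numpy as np

def adaptive_kernel_interpolate(img, x, y, radius=2, h_s=1.0, h_g=0.1):
    """Interpolate img at sub-pixel (x, y): Gauss distance weights times
    relative-deviation weights. Interior points are assumed, i.e. the
    patch around (x, y) must lie inside the image."""
    i0, j0 = int(round(y)), int(round(x))
    js, is_ = np.meshgrid(np.arange(j0 - radius, j0 + radius + 1),
                          np.arange(i0 - radius, i0 + radius + 1))
    patch = img[is_, js]
    mean = patch.mean()
    rel_dev = np.abs(patch - mean) / (np.abs(mean) + 1e-12)
    w = (np.exp(-((is_ - y) ** 2 + (js - x) ** 2) / (2 * h_s**2))   # distance
         * np.exp(-rel_dev**2 / (2 * h_g**2)))                      # grey term
    return float((w * patch).sum() / w.sum())
```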

  19. The Dynamic Kernel Scheduler-Part 1

    NASA Astrophysics Data System (ADS)

    Adelmann, Andreas; Locans, Uldis; Suter, Andreas

    2016-10-01

    Emerging processor architectures such as GPUs and Intel MICs provide a huge performance potential for high performance computing. However, developing software that uses these hardware accelerators introduces additional challenges for the developer. These challenges may include exposing increased parallelism, handling different hardware designs, and using multiple development frameworks in order to utilise devices from different vendors. The Dynamic Kernel Scheduler (DKS) is being developed in order to provide a software layer between the host application and different hardware accelerators. DKS handles the communication between the host and the device, schedules task execution, and provides a library of built-in algorithms. Algorithms available in the DKS library will be written in CUDA, OpenCL, and OpenMP. Depending on the available hardware, DKS can select the appropriate implementation of the algorithm. The first DKS version was created using CUDA for the Nvidia GPUs and OpenMP for Intel MIC. DKS was further integrated into OPAL (Object-oriented Parallel Accelerator Library) in order to speed up a parallel FFT based Poisson solver and Monte Carlo simulations for particle-matter interaction used for proton therapy degrader modelling. DKS was also used together with Minuit2 for parameter fitting, where χ2 and max-log-likelihood functions were offloaded to the hardware accelerator. The concepts of the DKS, first results, and plans for the future will be shown in this paper.

  20. Protoribosome by quantum kernel energy method.

    PubMed

    Huang, Lulu; Krupkin, Miri; Bashan, Anat; Yonath, Ada; Massa, Lou

    2013-09-10

    Experimental evidence suggests the existence of an RNA molecular prebiotic entity, called by us the "protoribosome," which may have evolved in the RNA world before the evolution of the genetic code and proteins. This vestige of the RNA world, which possesses all of the capabilities required for peptide bond formation, seems to be still functioning in the heart of all contemporary ribosomes. Within the modern ribosome this remnant includes the peptidyl transferase center. Its highly conserved nucleotide sequence is suggestive of its robustness under diverse environmental conditions, and hence of its prebiotic origin. Its twofold pseudosymmetry suggests that this entity could have been a dimer of self-folding RNA units that formed a pocket within which two activated amino acids might be accommodated, similar to the binding mode of modern tRNA molecules that carry amino acids or peptidyl moieties. Using quantum mechanics and crystal coordinates, this work studies the question of whether the putative protoribosome has the properties necessary to function as an evolutionary precursor to the modern ribosome. The quantum model used in the calculations is density functional theory (B3LYP/3-21G*), implemented using the kernel energy method to make the computations practical and efficient. It turns out that the necessary conditions that would characterize a practicable protoribosome, namely (i) energetic structural stability and (ii) energetically stable attachment to substrates, are both well satisfied.

  1. Local Kernel for Brains Classification in Schizophrenia

    NASA Astrophysics Data System (ADS)

    Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.

    In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning-by-example method is introduced that combines local measurements with a non-linear Support Vector Machine. Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points which are characterized by local region descriptors, namely the Scale Invariant Feature Transform (SIFT). Matching is then obtained by introducing a local kernel for which the samples are represented by unordered sets of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed on a set of 54 patients with schizophrenia and 54 normal controls, on which regions of interest (ROIs) have been manually traced by experts. Preliminary results on the Dorso-lateral PreFrontal Cortex (DLPFC) region are promising, since a successful classification rate of up to 75% has been obtained with this technique, and the performance improves to up to 85% when the subjects are stratified by sex.

  2. Kernel MAD Algorithm for Relative Radiometric Normalization

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Tang, Ping; Hu, Changmiao

    2016-06-01

    The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments with both the linear CCA and the KCCA version of the MAD algorithm, using Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data derived from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization and describes the nonlinear relationship between multi-temporal images well. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.

  3. Kernel spectral clustering with memory effect

    NASA Astrophysics Data System (ADS)

    Langone, Rocco; Alzate, Carlos; Suykens, Johan A. K.

    2013-05-01

    Evolving graphs describe many natural phenomena changing over time, such as social relationships, trade markets, metabolic networks etc. In this framework, performing community detection and analyzing the cluster evolution represent critical tasks. Here we propose a new model for this purpose, where the smoothness of the clustering results over time is treated as valid prior knowledge. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness. The latter allows the model to cluster the current data well and to be consistent with the recent history. We also propose new model selection criteria in order to carefully choose the hyper-parameters of our model, which is a crucial issue to achieve good performances. We successfully test the model on four toy problems and on a real world network. We also compare our model with Evolutionary Spectral Clustering, which is a state-of-the-art algorithm for community detection of evolving networks, illustrating that the kernel spectral clustering with memory effect can achieve better or equal performances.

  4. Resummed memory kernels in generalized system-bath master equations

    SciTech Connect

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
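    A hedged illustration of the two resummations contrasted above, applied to the first two non-vanishing kernel orders. The closed forms follow the generic [1/1]-Padé and exponential constructions (both reduce to k2 + k4 at lowest order); the kernel values are toy functions, not the spin-boson results.

```python
import numpy as np

def pade_resum(k2, k4):
    """[1/1]-style Padé resummation of K = k2 + k4 + ...:
    K ~ k2 / (1 - k4/k2) = k2**2 / (k2 - k4).
    Diverges where k4 approaches k2, the kind of singularity the paper
    associates with strong electronic coupling."""
    return k2**2 / (k2 - k4)

def exponential_resum(k2, k4):
    """Non-divergent exponential resummation: K ~ k2 * exp(k4 / k2)."""
    return k2 * np.exp(k4 / k2)

# Toy illustration on a time grid with made-up kernel values.
t = np.linspace(0.0, 5.0, 200)
k2 = np.exp(-t)
k4 = 0.3 * np.exp(-1.5 * t)
K_pade = pade_resum(k2, k4)
K_exp = exponential_resum(k2, k4)
```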

  5. The Weighted Super Bergman Kernels Over the Supermatrix Spaces

    NASA Astrophysics Data System (ADS)

    Feng, Zhiming

    2015-12-01

    The purpose of this paper is threefold. Firstly, using Howe duality for , we obtain integral formulas of the super Schur functions with respect to the super standard Gaussian distributions. Secondly, we give explicit expressions of the super Szegö kernels and the weighted super Bergman kernels for the Cartan superdomains of type I. Thirdly, combining these results, we obtain duality relations of integrals over the unitary groups and the Cartan superdomains, and the marginal distributions of the weighted measure.

  6. Kernel approximation for solving few-body integral equations

    NASA Astrophysics Data System (ADS)

    Christie, I.; Eyre, D.

    1986-06-01

    This paper investigates an approximate method for solving integral equations that arise in few-body problems. The method is to replace the kernel by a degenerate kernel defined on a finite dimensional subspace of piecewise Lagrange polynomials. Numerical accuracy of the method is tested by solving the two-body Lippmann-Schwinger equation with non-separable potentials, and the three-body Amado-Lovelace equation with separable two-body potentials.
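    A minimal sketch of the degenerate kernel idea for a one-dimensional Fredholm equation phi(x) = f(x) + lam * \int K(x,y) phi(y) dy, using piecewise linear hat functions as a simple stand-in for the piecewise Lagrange subspaces of the paper; all discretization choices below are illustrative.

```python
import numpy as np

def solve_degenerate(kernel, f, lam, a=0.0, b=1.0, n_nodes=20, n_quad=400):
    y = np.linspace(a, b, n_nodes)
    q = np.linspace(a, b, n_quad)            # quadrature abscissae
    w = np.full(n_quad, (b - a) / n_quad)    # crude uniform weights

    def hat(j, t):                           # piecewise linear basis at node j
        out = np.zeros_like(t)
        if j > 0:
            m = (t >= y[j - 1]) & (t <= y[j])
            out[m] = (t[m] - y[j - 1]) / (y[j] - y[j - 1])
        if j < n_nodes - 1:
            m = (t >= y[j]) & (t <= y[j + 1])
            out[m] = (y[j + 1] - t[m]) / (y[j + 1] - y[j])
        return out

    # Replacing K(x, t) by sum_j K(x, y_j) * hat_j(t) reduces the integral
    # equation to the linear system (I - lam * B) c = rhs, where
    # B_ij = int hat_i(x) K(x, y_j) dx and c_j = int hat_j(t) phi(t) dt.
    B = np.array([[np.sum(w * hat(i, q) * kernel(q, np.full(n_quad, y[j])))
                   for j in range(n_nodes)] for i in range(n_nodes)])
    rhs = np.array([np.sum(w * hat(i, q) * f(q)) for i in range(n_nodes)])
    c = np.linalg.solve(np.eye(n_nodes) - lam * B, rhs)
    return lambda x: f(x) + lam * (kernel(x[:, None], y[None, :]) @ c)

# Example: K(x,y) = x*y, f(x) = x, lam = 0.5; exact solution is 1.2*x.
phi = solve_degenerate(lambda x, t: x * t, lambda x: x, 0.5)
```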

  7. Modeling reactive transport with particle tracking and kernel estimators

    NASA Astrophysics Data System (ADS)

    Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-04-01

    Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth systems. The Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species. For a well-mixed system, the chemical reactions are controlled by the classical thermodynamic rate coefficient. Each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate this system, an infinite number of particles would be required, which is computationally unfeasible. On the other hand, a finite number of particles leads to a poorly mixed system which is limited by diffusion. Recent works have used this effect to model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect should in most cases be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle mass that expand its region of influence, hence providing a wider region for chemical reactions with time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in diluted systems should be modeled based on alternative conceptual models and not on a limited number of particles.

  8. Rapid detection of kernel rots and mycotoxins in maize by near-infrared reflectance spectroscopy.

    PubMed

    Berardo, Nicola; Pisacane, Vincenza; Battilani, Paola; Scandolara, Andrea; Pietri, Amedeo; Marocco, Adriano

    2005-10-19

    Near-infrared (NIR) spectroscopy is a practical spectroscopic procedure for the detection of organic compounds in matter. It is particularly useful because of its nondestructiveness, accuracy, rapid response, and easy operation. This work assesses the applicability of NIR for the rapid identification of mycotoxigenic fungi and their toxic metabolites produced in naturally and artificially contaminated products. Two hundred and eighty maize samples were collected both from naturally contaminated maize crops grown in 16 areas in north-central Italy and from ears artificially inoculated with Fusarium verticillioides. All samples were analyzed for fungal infection, ergosterol, and fumonisin B1 content. The results obtained indicated that NIR could accurately predict the incidence of kernels infected by fungi, and by F. verticillioides in particular, as well as the quantity of ergosterol and fumonisin B1 in the meal. The statistics of the calibration and of the cross-validation for mold infection and for ergosterol and fumonisin B1 contents were significant. The best predictive ability for the percentage of global fungal infection and for F. verticillioides was obtained using a calibration model utilizing maize kernels (r2 = 0.75 and SECV = 7.43) and maize meals (r2 = 0.79 and SECV = 10.95), respectively. This predictive performance was confirmed by the scatter plot of measured F. verticillioides infection versus NIR-predicted values in maize kernel samples (r2 = 0.80). The NIR methodology can be applied for monitoring mold contamination in postharvest maize, in particular the presence of F. verticillioides and fumonisin, to distinguish contaminated lots from clean ones and to avoid cross-contamination with other material during storage, and may become a powerful tool for monitoring the safety of the food supply.

  9. Fast discontinuous Galerkin lattice-Boltzmann simulations on GPUs via maximal kernel fusion

    NASA Astrophysics Data System (ADS)

    Mazzeo, Marco D.

    2013-03-01

    A GPU implementation of the discontinuous Galerkin lattice-Boltzmann method with square spectral elements, highly optimised for speed and precision of calculations, is presented. An extensive analysis of the numerous variants of the fluid solver unveils that best performance is obtained by maximising CUDA kernel fusion and by arranging the resulting kernel tasks so as to trigger memory-coherent and scattered loads in a specific manner, albeit at the cost of introducing cross-thread load unbalancing. Surprisingly, any attempt to eliminate this imbalance, to maximise thread occupancy, or to adopt conventional work tiling or distinct custom kernels highly tuned via ad hoc data and computation layouts invariably deteriorates performance. As such, this work sheds light on the possibility to hide fetch latencies of workloads involving heterogeneous loads in a way that is more effective than frequently suggested techniques. When simulating the lid-driven cavity on a NVIDIA GeForce GTX 480 via a 5-stage 4th-order Runge-Kutta (RK) scheme, the first four digits of the obtained centreline velocity values, or more, converge to those of the state-of-the-art literature data at a simulation speed of 7.0G primitive variable updates per second during the collision stage and 4.4G ones during each RK step of the advection, by employing double-precision arithmetic (DPA) and a computational grid of 642 4×4-point elements only. The new programming engine leads to about 2× performance w.r.t. the best programming guidelines in the field. The new fluid solver on the above GPU is also 20-30 times faster than a highly optimised version running on a single core of an Intel Xeon X5650 2.66 GHz.

  10. Scaling limit of deeply virtual Compton scattering

    SciTech Connect

    A. Radyushkin

    2000-07-01

    The author outlines a perturbative QCD approach to the analysis of the deeply virtual Compton scattering process γ*p → γp′ in the limit of vanishing momentum transfer t = (p′ − p)². The DVCS amplitude in this limit exhibits a scaling behavior described by two-argument distributions F(x,y) which specify the fractions of the initial momentum p and the momentum transfer r ≡ p′ − p carried by the constituents of the nucleon. The kernel R(x,y;ξ,η) governing the evolution of the non-forward distributions F(x,y) has a remarkable property: it produces the GLAPD evolution kernel P(x/ξ) when integrated over y and reduces to the Brodsky-Lepage evolution kernel V(y,η) after the x-integration. This property is used to construct the solution of the one-loop evolution equation for the flavor non-singlet part of the non-forward quark distribution.

  11. Enzymatic treatment of peanut kernels to reduce allergen levels.

    PubMed

    Yu, Jianmei; Ahmedna, Mohamed; Goktepe, Ipek; Cheng, Hsiaopo; Maleki, Soheila

    2011-08-01

    This study investigated the use of enzymatic treatment to reduce peanut allergens in peanut kernels as affected by processing conditions. Two major peanut allergens, Ara h 1 and Ara h 2, were used as indicators of process effectiveness. Enzymatic treatment effectively reduced Ara h 1 and Ara h 2 in roasted peanut kernels by up to 100% under optimal conditions. For instance, treatment of roasted peanut kernels with α-chymotrypsin and trypsin for 1-3 h significantly increased the solubility of peanut protein while reducing Ara h 1 and Ara h 2 in peanut kernel extracts by 100% and 98%, respectively, based on ELISA readings. Ara h 1 and Ara h 2 levels in peanut protein extracts were inversely correlated with protein solubility in roasted peanut. Blanching of kernels enhanced the effectiveness of enzyme treatment in roasted peanuts but not in raw peanuts. The optimal concentration of enzyme was determined by response surface to be in the range of 0.1-0.2%. No consistent results were obtained for raw peanut kernels, since Ara h 1 and Ara h 2 increased in peanut protein extracts under some treatment conditions and decreased in others. PMID:25214091

  12. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks

    PubMed Central

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-01-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparse rate of model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparse rate of model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298
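    The distributed in-network algorithm cannot be reconstructed from this record; as a hedged centralized reference, the ℓ1-regularized kernel least-squares objective it builds on can be minimized with plain ISTA, where the soft-thresholding step is what produces the sparse model the nodes exchange. Names and parameters below are illustrative.

```python
import numpy as np

def l1_kernel_machine(K, y, lam=0.1, n_iters=500):
    """Centralized l1-regularized kernel least-squares reference:
    minimize 0.5 * ||y - K a||^2 + lam * ||a||_1 via ISTA.
    K is the (n, n) kernel Gram matrix, y the training targets."""
    a = np.zeros(K.shape[1])
    step = 1.0 / np.linalg.norm(K, 2) ** 2     # 1/L, L = ||K||_2^2
    for _ in range(n_iters):
        grad = K.T @ (K @ a - y)               # gradient of the smooth part
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return a                                   # sparse coefficient vector
```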

  13. A Distributed Learning Method for ℓ 1 -Regularized Kernel Machine over Wireless Sensor Networks.

    PubMed

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-07-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ 1 norm regularization ( ℓ 1 -regularized) is investigated, and a novel distributed learning algorithm for the ℓ 1 -regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparse rate of model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparse rate of model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost.

  14. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks.

    PubMed

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-01-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1-norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to prediction accuracy, model sparsity, communication cost, and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of model sparsity and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298

  15. A Gabor-Block-Based Kernel Discriminative Common Vector Approach Using Cosine Kernels for Human Face Recognition

    PubMed Central

    Kar, Arindam; Bhattacharjee, Debotosh; Basu, Dipak Kumar; Nasipuri, Mita; Kundu, Mahantapas

    2012-01-01

    In this paper a nonlinear Gabor Wavelet Transform (GWT) discriminant feature extraction approach for enhanced face recognition is proposed. First, the low-energized blocks from Gabor wavelet transformed images are extracted. Second, the nonlinear discriminating features are analyzed and extracted from the selected low-energized blocks by the generalized Kernel Discriminative Common Vector (KDCV) method. The KDCV method is extended to include a cosine kernel function in the discriminating method. The KDCV with the cosine kernel is then applied to the extracted low-energized discriminating feature vectors to obtain the real component of a complex quantity for face recognition. In order to derive positive kernel discriminative vectors, we apply only those kernel discriminative eigenvectors that are associated with nonzero eigenvalues. The feasibility of the low-energized Gabor-block-based generalized KDCV method with the cosine kernel function has been successfully tested for classification using the L1 and L2 distance measures and the cosine similarity measure on both frontal and pose-angled face recognition. Experimental results on the FRAV2D and FERET databases demonstrate the effectiveness of this new approach. PMID:23365559
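
    The cosine kernel referred to here can be read as the normalization k_cos(x, y) = k(x, y) / sqrt(k(x, x) k(y, y)), i.e., the cosine of the angle between the feature-space images of x and y. A brief sketch, assuming a polynomial base kernel (an illustrative choice, not necessarily the authors' exact one):

    ```python
    import numpy as np

    def poly_kernel(X, Y, degree=2, coef0=1.0):
        """Base kernel whose feature-space angles the cosine kernel measures."""
        return (X @ Y.T + coef0) ** degree

    def cosine_kernel(X, Y, base=poly_kernel):
        """k_cos(x, y) = k(x, y) / sqrt(k(x, x) * k(y, y))."""
        Kxy = base(X, Y)
        kx = np.diag(base(X, X))
        ky = np.diag(base(Y, Y))
        return Kxy / np.sqrt(np.outer(kx, ky))
    ```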

  16. Volcano clustering determination: Bivariate Gauss vs. Fisher kernels

    NASA Astrophysics Data System (ADS)

    Cañón-Tapia, Edgardo

    2013-05-01

    Underlying many studies of volcano clustering is the implicit assumption that vent distribution can be studied using kernels originally devised for distributions on plane surfaces. Nevertheless, an important change in topology in the volcanic context is related to the distortion introduced when features found on the surface of a sphere are projected onto a plane. This work explores the extent to which different topologies of the kernel used to study the spatial distribution of vents can introduce significant changes in the obtained density functions. To this end, a planar (Gauss) kernel and a spherical (Fisher) kernel are compared. The role of the smoothing factor in these two kernels is also explored in some detail. The results indicate that the topology of the kernel is not extremely influential, and that either type of kernel can be used to characterize a planar or a spherical distribution with exactly the same detail (provided that a suitable smoothing factor is selected in each case). It is also shown that there is a limitation on the resolution of the Fisher kernel relative to the typical separation between data that can be accurately described: data sets with separations smaller than 500 km are treated as a single cluster by this method. In contrast, the Gauss kernel can provide adequate resolution for vent distributions over a wider range of separations. In addition, this study shows that the numerical value of the smoothing factor (or bandwidth) of both the Gauss and Fisher kernels has no unique or direct relationship with the relevant separation among data. In order to establish the relevant distance, it is necessary to consider the value of the respective smoothing factor together with a level of statistical significance at which the contributions to the probability density function will be analyzed. Based on such a reference level, it is possible to create a hierarchy of
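
    To make the comparison concrete: the planar estimator sums isotropic 2-D Gaussians around each vent, while the spherical one sums von Mises-Fisher (Fisher) kernels on the unit sphere, with h and kappa playing the role of the smoothing factor discussed above. A minimal sketch under those assumptions (function names are illustrative):

    ```python
    import numpy as np

    def gauss_kde(points, grid, h):
        """Planar Gaussian kernel density; points, grid: (n, 2) arrays (e.g., km)."""
        d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / h**2).sum(1) / (2 * np.pi * h**2 * len(points))

    def fisher_kde(points, grid, kappa):
        """Spherical Fisher (von Mises-Fisher) kernel density on unit vectors."""
        c = kappa / (4 * np.pi * np.sinh(kappa))   # vMF normalizing constant on S^2
        return c * np.exp(kappa * (grid @ points.T)).mean(1)
    ```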

  17. Unified connected theory of few-body reaction mechanisms in N-body scattering theory

    NASA Technical Reports Server (NTRS)

    Polyzou, W. N.; Redish, E. F.

    1978-01-01

    A unified treatment of different reaction mechanisms in nonrelativistic N-body scattering is presented. The theory is based on connected kernel integral equations that are expected to become compact for reasonable constraints on the potentials. The operators $T_{\pm}^{ab}(A)$ are approximate transition operators that describe the scattering proceeding through an arbitrary reaction mechanism A. These operators are uniquely determined by a connected kernel equation and satisfy an optical theorem consistent with the choice of reaction mechanism. Connected kernel equations relating $T_{\pm}^{ab}(A)$ to the full $T_{\pm}^{ab}$ allow correction of the approximate solutions for any ignored process to any order. This theory gives a unified treatment of all few-body reaction mechanisms with the same dynamic simplicity of a model calculation, but can include complicated reaction mechanisms involving overlapping configurations where it is difficult to formulate models.

  18. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power
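
    The two ingredients are easy to sketch: a fractional power polynomial map, which need not yield a positive semidefinite Gram matrix, and a kernel PCA step that therefore keeps only eigenvectors with positive eigenvalues. A hedged illustration (the exponent d and the names are assumptions):

    ```python
    import numpy as np

    def frac_poly_kernel(X, Y, d=0.8):
        """Fractional power polynomial: sign(<x,y>) * |<x,y>|^d (may be indefinite)."""
        s = X @ Y.T
        return np.sign(s) * np.abs(s) ** d

    def kernel_pca_train(K, n_comp=10):
        """Kernel PCA retaining only components with positive eigenvalues."""
        n = K.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        Kc = J @ K @ J                              # double-center the Gram matrix
        w, V = np.linalg.eigh(Kc)                   # eigenvalues in ascending order
        keep = w > 1e-10                            # drop zero/negative directions
        w, V = w[keep][::-1], V[:, keep][:, ::-1]   # reorder to descending
        alphas = V[:, :n_comp] / np.sqrt(w[:n_comp])
        return Kc @ alphas                          # projections of training data
    ```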

  19. An adaptive lidar

    NASA Astrophysics Data System (ADS)

    Oshlakov, V. G.; Andreev, M. I.; Malykh, D. D.

    2009-09-01

    Using the polarization characteristics of a target and of its underlying surface, one can change the target contrast range. The target may be a compact or a discrete structure with distinct electromagnetic reflection characteristics. An important problem solved by the adaptive polarization lidar is detecting and identifying targets, based on their polarization characteristics, against the background of an underlying surface whose polarization characteristics are unknown. Another important problem is searching for objects whose polarization characteristics are unknown against the background of an underlying surface whose polarization characteristics are known. The adaptive polarization lidar also makes it possible to detect the presence of impurities in sea water. The characteristics of the adaptive polarization lidar are varied depending on the problem to be solved, i.e., the polarization characteristics of the sensing signal and of the receiver are adapted. One version of the construction of such a lidar is considered. A numerical experiment demonstrated the increase in contrast achieved by the adaptive lidar when sensing hydrosols against the background of Rayleigh scattering from clear water, and likewise when sensing dry haze and dense haze at two wavelengths against the background of Rayleigh scattering from the clear atmosphere. The most effective wavelength was chosen.

  20. Thermal-to-visible face recognition using multiple kernel learning

    NASA Astrophysics Data System (ADS)

    Hu, Shuowen; Gurram, Prudhvi; Kwon, Heesung; Chan, Alex L.

    2014-06-01

    Recognizing faces acquired in the thermal spectrum from a gallery of visible face images is a desired capability for the military and homeland security, especially for nighttime surveillance and intelligence gathering. However, thermal-to-visible face recognition is a highly challenging problem due to the large modality gap between thermal and visible imaging. In this paper, we propose a thermal-to-visible face recognition approach based on multiple kernel learning (MKL) with support vector machines (SVMs). We first subdivide the face into non-overlapping spatial regions or blocks using a method based on coalitional game theory. For comparison purposes, we also investigate uniform spatial subdivisions. Following this subdivision, histogram of oriented gradients (HOG) features are extracted from each block and used to compute a kernel for each region. We apply sparse multiple kernel learning (SMKL), an MKL-based approach that learns a set of sparse kernel weights, as well as the decision function of a one-vs-all SVM classifier for each of the subjects in the gallery. We also apply equal (non-sparse) kernel weights and obtain one-vs-all SVM models for the same subjects in the gallery. Only visible images of each subject are used for MKL training, while thermal images are used as probe images during testing. With the subdivision generated by game theory, we achieved a Rank-1 identification rate of 50.7% for SMKL and 93.6% for equal kernel weighting using a multimodal dataset of 65 subjects. With uniform subdivisions, we achieved a Rank-1 identification rate of 88.3% for SMKL, but 92.7% for equal kernel weighting.
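
    With per-block kernels precomputed, the equal-weight variant amounts to summing the block kernels and training a standard SVM on the result, while SMKL would learn the weights instead. A sketch using scikit-learn's precomputed-kernel interface (block features, gamma, and names are illustrative assumptions):

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def rbf(A, B, gamma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def summed_kernel(blocks_a, blocks_b, weights=None, gamma=1.0):
        """K = sum_m w_m K_m over per-block HOG features; equal weights by default."""
        if weights is None:
            weights = [1.0 / len(blocks_a)] * len(blocks_a)
        return sum(w * rbf(Fa, Fb, gamma)
                   for w, Fa, Fb in zip(weights, blocks_a, blocks_b))

    # blocks_train: list of (n_samples, n_features) HOG arrays, one per block
    # clf = SVC(kernel="precomputed").fit(summed_kernel(blocks_train, blocks_train), y)
    ```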

  1. Protein fold recognition using geometric kernel data fusion

    PubMed Central

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-01-01

    Motivation: Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. Results: We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. Availability and implementation: The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/ Contact: pooyapaydar@gmail.com or yves
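
    One geometry-inspired alternative to a convex linear combination is a matrix geometric mean of the base Gram matrices. The sketch below shows the log-Euclidean mean, a simple member of that family; the paper studies several such means, so this should be read as an illustration rather than the authors' exact fusion rule:

    ```python
    import numpy as np

    def spd_log(K, eps=1e-8):
        """Matrix logarithm of a symmetric PSD kernel matrix via eigendecomposition."""
        w, V = np.linalg.eigh(K)
        return (V * np.log(np.maximum(w, eps))) @ V.T

    def spd_exp(S):
        w, V = np.linalg.eigh(S)
        return (V * np.exp(w)) @ V.T

    def log_euclidean_mean(kernels):
        """Fused kernel: exp of the average of the matrix logs of the base kernels."""
        return spd_exp(np.mean([spd_log(K) for K in kernels], axis=0))
    ```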

  2. Stimulated Brillouin Scattering Microscopic Imaging

    PubMed Central

    Ballmann, Charles W.; Thompson, Jonathan V.; Traverso, Andrew J.; Meng, Zhaokai; Scully, Marlan O.; Yakovlev, Vladislav V.

    2015-01-01

    Two-dimensional stimulated Brillouin scattering microscopy is demonstrated for the first time using low-power continuous-wave lasers tunable around 780 nm. Spontaneous Brillouin spectroscopy has much potential for probing viscoelastic properties remotely and non-invasively on a microscopic scale. Nonlinear Brillouin scattering spectroscopy and microscopy may provide a way to tremendously accelerate the data acquisition and improve spatial resolution. This general imaging setup can be easily adapted for specific applications in biology and materials science. The low power and optical wavelengths in the water transparency window used in this setup provide a powerful bioimaging technique for probing the mechanical properties of hard and soft tissue. PMID:26691398

  3. Stimulated Brillouin Scattering Microscopic Imaging.

    PubMed

    Ballmann, Charles W; Thompson, Jonathan V; Traverso, Andrew J; Meng, Zhaokai; Scully, Marlan O; Yakovlev, Vladislav V

    2015-01-01

    Two-dimensional stimulated Brillouin scattering microscopy is demonstrated for the first time using low-power continuous-wave lasers tunable around 780 nm. Spontaneous Brillouin spectroscopy has much potential for probing viscoelastic properties remotely and non-invasively on a microscopic scale. Nonlinear Brillouin scattering spectroscopy and microscopy may provide a way to tremendously accelerate the data acquisition and improve spatial resolution. This general imaging setup can be easily adapted for specific applications in biology and materials science. The low power and optical wavelengths in the water transparency window used in this setup provide a powerful bioimaging technique for probing the mechanical properties of hard and soft tissue.

  4. Stimulated Brillouin Scattering Microscopic Imaging

    NASA Astrophysics Data System (ADS)

    Ballmann, Charles W.; Thompson, Jonathan V.; Traverso, Andrew J.; Meng, Zhaokai; Scully, Marlan O.; Yakovlev, Vladislav V.

    2015-12-01

    Two-dimensional stimulated Brillouin scattering microscopy is demonstrated for the first time using low-power continuous-wave lasers tunable around 780 nm. Spontaneous Brillouin spectroscopy has much potential for probing viscoelastic properties remotely and non-invasively on a microscopic scale. Nonlinear Brillouin scattering spectroscopy and microscopy may provide a way to tremendously accelerate the data acquisition and improve spatial resolution. This general imaging setup can be easily adapted for specific applications in biology and materials science. The low power and optical wavelengths in the water transparency window used in this setup provide a powerful bioimaging technique for probing the mechanical properties of hard and soft tissue.

  5. Nucleon-nucleon scattering within a multiple subtractive renormalization approach

    SciTech Connect

    Timoteo, V. S.; Frederico, T.; Delfino, A.; Tomio, Lauro

    2011-06-15

    We present a methodology to renormalize the nucleon-nucleon interaction in momentum space, using a recursive multiple subtraction approach that dispenses with cutoff regularization, to construct the kernel of the scattering equation. The subtracted scattering equation is solved with the next-to-leading-order and next-to-next-to-leading-order interactions. The results are presented for all partial waves up to j = 2, fitted to low-energy experimental data. In this renormalization-group-invariant approach, the subtraction energy emerges as a renormalization scale, and the momentum associated with it comes out to be about the QCD scale ($\Lambda_{\mathrm{QCD}}$), irrespective of the partial wave.

  6. Kernel Machine Testing for Risk Prediction with Stratified Case Cohort Studies

    PubMed Central

    Payne, Rebecca; Neykov, Matey; Jensen, Majken Karoline; Cai, Tianxi

    2015-01-01

    Large assembled cohorts with banked biospecimens offer valuable opportunities to identify novel markers for risk prediction. When the outcome of interest is rare, an effective strategy to conserve limited biological resources while maintaining reasonable statistical power is the case cohort (CCH) sampling design, in which expensive markers are measured on a subset of cases and controls. However, the CCH design introduces significant analytical complexity due to outcome-dependent, finite-population sampling. Current methods for analyzing CCH studies focus primarily on the estimation of simple survival models with linear effects; testing and estimation procedures that can efficiently capture complex non-linear marker effects for CCH data remain elusive. In this paper, we propose inverse probability weighted (IPW) variance component type tests for identifying important marker sets through a Cox proportional hazards kernel machine (CoxKM) regression framework previously considered for full cohort studies (Cai et al., 2011). The optimal choice of kernel, while vitally important to attain high power, is typically unknown for a given dataset. Thus we also develop robust testing procedures that adaptively combine information from multiple kernels. The proposed IPW test statistics have complex null distributions that cannot easily be approximated explicitly. Furthermore, due to the correlation induced by CCH sampling, standard resampling methods such as the bootstrap fail to approximate the distribution correctly. We therefore propose a novel perturbation resampling scheme that can effectively recover the induced correlation structure. Results from extensive simulation studies suggest that the proposed IPW CoxKM testing procedures work well in finite samples. The proposed methods are further illustrated by application to a Danish CCH study of Apolipoprotein C-III markers on the risk of coronary heart disease. PMID:26692376

  7. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima Blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) represent a group of secondary metabolic products in plants. AQS occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by the enzymatic browning reaction in Chinese chestnut kernels. To find out whether the non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained a high amount of AQS. Thus, we confirmed that AQS could be produced during both enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.

  8. Lesser grain borers, Rhyzopertha dominica, select rough rice kernels with cracked hulls for reproduction.

    PubMed

    Kavallieratos, Nickolas G; Athanassiou, Christos G; Arthur, Frank H; Throne, James E

    2012-01-01

    Tests were conducted to determine whether the lesser grain borer, Rhyzopertha dominica (F.) (Coleoptera: Bostrichidae), selects rough rice (Oryza sativa L. (Poales: Poaceae)) kernels with cracked hulls for reproduction when these kernels are mixed with intact kernels. Differing amounts of kernels with cracked hulls (0, 5, 10, and 20%) of the varieties Francis and Wells were mixed with intact kernels, and the number of adult progeny emerging from intact kernels and from kernels with cracked hulls was determined. The Wells variety had previously been classified as tolerant to R. dominica, while the Francis variety was classified as moderately susceptible. Few F1 progeny were produced in Wells regardless of the percentage of kernels with cracked hulls, few of the kernels with cracked hulls had emergence holes, and little frass was produced from feeding damage. At 10 and 20% kernels with cracked hulls, progeny production, the number of emergence holes in kernels with cracked hulls, and the amount of frass were greater in Francis than in Wells. The proportion of progeny emerging from kernels with cracked hulls increased as the proportion of kernels with cracked hulls increased. The results indicate that R. dominica selects kernels with cracked hulls for reproduction.

  9. Modularized seismic full waveform inversion based on waveform sensitivity kernels - The software package ASKI

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang; Lamara, Samir; Gutt, Phillip; Paffrath, Marcel

    2015-04-01

    We present a seismic full waveform inversion concept for applications ranging from seismological to engineering contexts, based on sensitivity kernels for full waveforms. The kernels are derived from Born scattering theory as the Fréchet derivatives of linearized frequency-domain full waveform data functionals, quantifying the influence of elastic earth model parameters and density on the data values. For a specific source-receiver combination, the kernel is computed from the displacement and strain field spectrum originating from the source evaluated throughout the inversion domain, as well as the Green function spectrum and its strains originating from the receiver. By storing the wavefield spectra of specific sources/receivers, they can be re-used for kernel computation for different specific source-receiver combinations, optimizing the total number of required forward simulations. In the iterative inversion procedure, the solution of the forward problem, the computation of sensitivity kernels and the derivation of a model update are held completely separate. In particular, the model description for the forward problem and the description of the inverted model update are kept independent. Hence, the resolution of the inverted model as well as the complexity of solving the forward problem can be iteratively increased (with increasing frequency content of the inverted data subset). This may regularize the overall inverse problem and optimize the computational effort of both solving the forward problem and computing the model update. The required interconnection of arbitrary unstructured volume and point grids is realized by generalized high-order integration rules and 3D-unstructured interpolation methods. The model update is inferred by solving a minimization problem in a least-squares sense, resulting in Gauss-Newton convergence of the overall inversion process. The inversion method was implemented in the modularized software package ASKI (Analysis of Sensitivity
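
    Stripped of the elastic tensor machinery, the kernel for one source-receiver pair is assembled pointwise from the stored source wavefield and receiver Green's function spectra. A deliberately simplified acoustic sketch (the actual kernels involve strains and several elastic parameters; the slowness-squared parameterization and names are assumptions):

    ```python
    import numpy as np

    def scalar_born_kernel(u_src, g_rec, omegas):
        """Acoustic Born sensitivity kernel rows, one per frequency.
        u_src, g_rec: (n_freq, n_points) spectra on the inversion grid."""
        return (omegas[:, None] ** 2) * u_src * g_rec
    ```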

  10. Travel-time sensitivity kernels in long-range propagation.

    PubMed

    Skarsoulis, E K; Cornuelle, B D; Dzieciuch, M A

    2009-11-01

    Wave-theoretic travel-time sensitivity kernels (TSKs) are calculated in two-dimensional (2D) and three-dimensional (3D) environments, and their behavior with increasing propagation range is studied and compared to that of ray-theoretic TSKs and the corresponding Fresnel volumes. The differences between the 2D and 3D TSKs average out when horizontal or cross-range marginals are considered, which indicates that they are not important in the case of range-independent sound-speed perturbations or perturbations of large scale compared to the lateral TSK extent. With increasing range, the wave-theoretic TSKs expand in the horizontal cross-range direction, their cross-range extent being comparable to that of the corresponding free-space Fresnel zone, whereas they remain bounded in the vertical. Vertical travel-time sensitivity kernels (VTSKs), one-dimensional kernels describing the effect of horizontally uniform sound-speed changes on travel times, are calculated analytically using a perturbation approach, and also numerically, as horizontal marginals of the corresponding TSKs. Good agreement between analytical and numerical VTSKs, as well as between 2D and 3D VTSKs, is found. As an alternative method to obtain wave-theoretic sensitivity kernels, the parabolic approximation is used; the resulting TSKs and VTSKs are in good agreement with normal-mode results. With increasing range, the wave-theoretic VTSKs approach the corresponding ray-theoretic sensitivity kernels.

  11. Characterization of the desiccation of wheat kernels by multivariate imaging.

    PubMed

    Jaillais, B; Perrin, E; Mangavel, C; Bertrand, D

    2011-06-01

    Variations in the quality of wheat kernels can be an important problem in the cereal industry. In particular, desiccation conditions play an essential role in both the technological characteristics of the kernel and its ability to sprout. In planta desiccation constitutes a key stage in determining the functional properties of seeds. The impact of desiccation on the endosperm texture of the seed is presented in this work. A simple imaging system had previously been developed to acquire multivariate images to characterize the heterogeneity of food materials. A special algorithm based on principal component analysis (PCA) was developed to process the acquired multivariate images. Wheat grains were collected at physiological maturity and subjected to two types of drying conditions that induced different kinetics of water loss. A data set containing 24 images (702 × 524 pixels each) corresponding to the different desiccation stages of wheat kernels was acquired at different wavelengths and then analyzed. A comparison of the images of kernel sections highlighted changes in kernel texture as a function of drying conditions. Slow drying led to a floury texture, whereas fast drying caused a glassy texture. The automated imaging system thus developed is sufficiently rapid and economical to enable the characterization of grain texture in large collections as a function of time and water content.
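
    The computation underneath treats each pixel's multi-wavelength spectrum as one observation and runs PCA over all pixels, producing score images in which texture differences become visible. A minimal sketch (array layout and names are assumptions):

    ```python
    import numpy as np

    def image_pca_scores(stack, n_comp=3):
        """PCA score images for a multivariate image stack of shape (H, W, B)."""
        H, W, B = stack.shape
        X = stack.reshape(-1, B).astype(float)
        X -= X.mean(axis=0)                      # center each spectral band
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        return (X @ Vt[:n_comp].T).reshape(H, W, n_comp)
    ```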

  12. [Utilizable value of wild economic plant resource--acorn kernel].

    PubMed

    He, R; Wang, K; Wang, Y; Xiong, T

    2000-04-01

    Peking White breeding hens were selected, and the true metabolizable energy (TME) method was used to evaluate the available nutritive value of acorn kernels, with maize and rice as controls. The results showed that the contents of gross energy (GE), apparent metabolizable energy (AME), true metabolizable energy (TME), and crude protein (CP) in the acorn kernel were 16.53 MJ/kg, 11.13 MJ/kg, 11.66 MJ/kg, and 10.63%, respectively. The apparent and true availabilities of crude protein were 45.55% and 49.83%. The total content of 17 amino acids was 9.23%, of which essential and semi-essential amino acids accounted for 4.84%. The true availability of amino acids was 60.85%, and the content of true available amino acids was 6.09%. The contents of tannin and hydrocyanic acid in the acorn kernel were 4.55% and 0.98%. The available nutritive value of the acorn kernel is similar to or slightly lower than that of maize, but slightly higher than that of rice. The acorn kernel is a wild economic plant resource worth exploiting and utilizing, although it contains relatively high levels of tannin and hydrocyanic acid. PMID:11767593

  13. Aleurone cell identity is suppressed following connation in maize kernels.

    PubMed

    Geisler-Lee, Jane; Gallie, Daniel R

    2005-09-01

    Expression of the cytokinin-synthesizing isopentenyl transferase enzyme under the control of the Arabidopsis (Arabidopsis thaliana) SAG12 senescence-inducible promoter reverses the normal abortion of the lower floret from a maize (Zea mays) spikelet. Following pollination, the upper and lower floret pistils fuse, producing a connated kernel with two genetically distinct embryos and the endosperms fused along their abgerminal face. Therefore, ectopic synthesis of cytokinin was used to position two independent endosperms within a connated kernel to determine how the fused endosperm would affect the development of the two aleurone layers along the fusion plane. Examination of the connated kernel revealed that aleurone cells were present for only a short distance along the fusion plane whereas starchy endosperm cells were present along most of the remainder of the fusion plane, suggesting that aleurone development is suppressed when positioned between independent starchy endosperms. Sporadic aleurone cells along the fusion plane were observed and may have arisen from late or imperfect fusion of the endosperms of the connated kernel, supporting the observation that a peripheral position at the surface of the endosperm and not proximity to maternal tissues such as the testa and pericarp are important for aleurone development. Aleurone mosaicism was observed in the crown region of nonconnated SAG12-isopentenyl transferase kernels, suggesting that cytokinin can also affect aleurone development.

  14. Kernel Methods for Mining Instance Data in Ontologies

    NASA Astrophysics Data System (ADS)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and metadata available on the Web is constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology, which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data show promising results and demonstrate the usefulness of our approach.
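
    The decomposition idea can be sketched in a few lines: specialized kernels compare instances on one ontology characteristic each (asserted classes, property values, and so on) and are then combined with tunable weights. The instances and weights below are hypothetical:

    ```python
    def set_kernel(a, b):
        """Intersection kernel on sets, e.g., the asserted classes of two instances."""
        return len(a & b)

    def instance_kernel(x, y, weights):
        """Weighted combination of specialized kernels over chosen characteristics."""
        return sum(w * set_kernel(x[p], y[p]) for p, w in weights.items())

    alice = {"classes": {"Person", "Researcher"}, "worksWith": {"bob"}}
    bob = {"classes": {"Person"}, "worksWith": {"alice", "carol"}}
    print(instance_kernel(alice, bob, {"classes": 1.0, "worksWith": 0.5}))
    ```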

  15. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    PubMed Central

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques make it possible to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted-sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (accessed 2012 Jun 25). PMID:22936970

  16. The Modularized Software Package ASKI - Full Waveform Inversion Based on Waveform Sensitivity Kernels Utilizing External Seismic Wave Propagation Codes

    NASA Astrophysics Data System (ADS)

    Schumacher, F.; Friederich, W.

    2015-12-01

    We present the modularized software package ASKI, which is a flexible and extendable toolbox for seismic full waveform inversion (FWI) as well as sensitivity or resolution analysis operating on the sensitivity matrix. It utilizes established wave propagation codes for solving the forward problem and offers an alternative to the monolithic, inflexible, and hard-to-modify codes that have typically been written for solving inverse problems. It is available under the GPL at www.rub.de/aski. The Gauss-Newton FWI method for 3D-heterogeneous elastic earth models is based on waveform sensitivity kernels and can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. The kernels are derived in the frequency domain from Born scattering theory as the Fréchet derivatives of linearized full waveform data functionals, quantifying the influence of elastic earth model parameters on the particular waveform data values. As an important innovation, we keep two independent spatial descriptions of the earth model - one for solving the forward problem and one representing the inverted model updates. Thereby we account for the different spatial resolution requirements of the forward and inverse problems. Due to pre-integration of the kernels over the (in general much coarser) inversion grid, storage requirements for the sensitivity kernels are dramatically reduced. ASKI can be flexibly extended to other forward codes by providing it with specific interface routines that contain knowledge about forward-code-specific file formats and auxiliary information provided by the new forward code. In order to sustain flexibility, the ASKI tools must communicate via file output/input, thus large storage capacities need to be accessible in a convenient way. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full

  17. Rayleigh's Scattering Revised

    NASA Astrophysics Data System (ADS)

    Kolomiets, Sergey; Gorelik, Andrey

    This report is devoted to a discussion of the applicability limits of Rayleigh's scattering model. Implicitly, Rayleigh's ideas are used in a wide range of remote sensing applications. It must first be noted that most techniques developed to date for measurements by active remote sensing instruments, in the case where the target is a set of distributed moving scatterers, are only aspirations to measurement per se. The problem is that almost all such techniques rely on a priori information about the microstructure of the object of interest throughout the whole measurement session. As one can find in the literature, this approach can be applied successfully to systems of identical particles. However, it fails for scattering targets that consist of particles of different kinds or that have a particle size distribution. It must be especially noted that the microstructure of most such targets changes significantly with time and/or space. Therefore, true measurement techniques designed to be applicable in such conditions must not only be adaptable, in order to take into account a variety of echo-interpretation models, but must also have a well-developed set of clear-cut applicability criteria and exact means of accuracy estimation. Such techniques will require many more parameters to be measured. Although there is still room for improvement within classical models and approaches, the multiwavelength approach may be seen as the most promising way toward obtaining an adequate set of measured parameters for true measurement techniques. At the same time, according to the currently dominant point of view, Rayleigh scattering is invariant with respect to a change of wavelength. In light of this idea, the synergy between multiwavelength measurements may be achieved - to a certain extent - by means of the synchronous usage of Rayleigh's and

  18. Evolutionary Metabolomics Reveals Domestication-Associated Changes in Tetraploid Wheat Kernels.

    PubMed

    Beleggia, Romina; Rau, Domenico; Laidò, Giovanni; Platani, Cristiano; Nigro, Franca; Fragasso, Mariagiovanna; De Vita, Pasquale; Scossa, Federico; Fernie, Alisdair R; Nikoloski, Zoran; Papa, Roberto

    2016-07-01

    Domestication and breeding have influenced the genetic structure of plant populations due to selection for adaptation from natural habitats to agro-ecosystems. Here, we investigate the effects of selection on the contents of 51 primary kernel metabolites and their relationships in three Triticum turgidum L. subspecies (i.e., wild emmer, emmer, durum wheat) that represent the major steps of tetraploid wheat domestication. We present a methodological pipeline to identify the signature of selection for molecular phenotypic traits (e.g., metabolites and transcripts). Following the approach, we show that a reduction in unsaturated fatty acids was associated with selection during domestication of emmer (primary domestication). We also show that changes in the amino acid content due to selection mark the domestication of durum wheat (secondary domestication). These effects were found to be partially independent of the associations that unsaturated fatty acids and amino acids have with other domestication-related kernel traits. Changes in contents of metabolites were also highlighted by alterations in the metabolic correlation networks, indicating wide metabolic restructuring due to domestication. Finally, evidence is provided that wild and exotic germplasm can have a relevant role for improvement of wheat quality and nutritional traits. PMID:27189559

  19. Evolutionary Metabolomics Reveals Domestication-Associated Changes in Tetraploid Wheat Kernels.

    PubMed

    Beleggia, Romina; Rau, Domenico; Laidò, Giovanni; Platani, Cristiano; Nigro, Franca; Fragasso, Mariagiovanna; De Vita, Pasquale; Scossa, Federico; Fernie, Alisdair R; Nikoloski, Zoran; Papa, Roberto

    2016-07-01

    Domestication and breeding have influenced the genetic structure of plant populations due to selection for adaptation from natural habitats to agro-ecosystems. Here, we investigate the effects of selection on the contents of 51 primary kernel metabolites and their relationships in three Triticum turgidum L. subspecies (i.e., wild emmer, emmer, durum wheat) that represent the major steps of tetraploid wheat domestication. We present a methodological pipeline to identify the signature of selection for molecular phenotypic traits (e.g., metabolites and transcripts). Following the approach, we show that a reduction in unsaturated fatty acids was associated with selection during domestication of emmer (primary domestication). We also show that changes in the amino acid content due to selection mark the domestication of durum wheat (secondary domestication). These effects were found to be partially independent of the associations that unsaturated fatty acids and amino acids have with other domestication-related kernel traits. Changes in contents of metabolites were also highlighted by alterations in the metabolic correlation networks, indicating wide metabolic restructuring due to domestication. Finally, evidence is provided that wild and exotic germplasm can have a relevant role for improvement of wheat quality and nutritional traits.

  20. Evolutionary Metabolomics Reveals Domestication-Associated Changes in Tetraploid Wheat Kernels

    PubMed Central

    Beleggia, Romina; Rau, Domenico; Laidò, Giovanni; Platani, Cristiano; Nigro, Franca; Fragasso, Mariagiovanna; De Vita, Pasquale; Scossa, Federico; Fernie, Alisdair R.; Nikoloski, Zoran; Papa, Roberto

    2016-01-01

    Domestication and breeding have influenced the genetic structure of plant populations due to selection for adaptation from natural habitats to agro-ecosystems. Here, we investigate the effects of selection on the contents of 51 primary kernel metabolites and their relationships in three Triticum turgidum L. subspecies (i.e., wild emmer, emmer, durum wheat) that represent the major steps of tetraploid wheat domestication. We present a methodological pipeline to identify the signature of selection for molecular phenotypic traits (e.g., metabolites and transcripts). Following the approach, we show that a reduction in unsaturated fatty acids was associated with selection during domestication of emmer (primary domestication). We also show that changes in the amino acid content due to selection mark the domestication of durum wheat (secondary domestication). These effects were found to be partially independent of the associations that unsaturated fatty acids and amino acids have with other domestication-related kernel traits. Changes in contents of metabolites were also highlighted by alterations in the metabolic correlation networks, indicating wide metabolic restructuring due to domestication. Finally, evidence is provided that wild and exotic germplasm can have a relevant role for improvement of wheat quality and nutritional traits. PMID:27189559

  1. Quantification and classification of neuronal responses in kernel-smoothed peristimulus time histograms.

    PubMed

    Hill, Michael R H; Fried, Itzhak; Koch, Christof

    2015-02-15

    Peristimulus time histograms are a widespread form of visualizing neuronal responses. Kernel convolution methods transform these histograms into a smooth, continuous probability density function, providing an improved estimate of a neuron's actual response envelope. We here develop a classifier, called the h-coefficient, to determine whether time-locked fluctuations in the firing rate of a neuron should be classified as a response or as random noise. Unlike previous approaches, the h-coefficient takes advantage of the more precise response envelope estimation provided by the kernel convolution method. The h-coefficient quantizes the smoothed response envelope and calculates the probability of a response of a given shape occurring by chance. We tested the efficacy of the h-coefficient on a large data set of Monte Carlo simulated smoothed peristimulus time histograms with varying response amplitudes, response durations, trial numbers, and baseline firing rates. Across all these conditions, the h-coefficient significantly outperformed more classical classifiers, with a mean false alarm rate of 0.004 and a mean hit rate of 0.494. We also tested the h-coefficient's performance on a set of neuronal responses recorded in humans. The algorithm behind the h-coefficient provides various opportunities for further adaptation and the flexibility to target specific parameters in a given data set. Our findings confirm that the h-coefficient is a conservative and powerful tool for the analysis of peristimulus time histograms, with great potential for future development. PMID:25475352
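
    The kernel-convolution step the h-coefficient builds on is simple to illustrate: summing a Gaussian kernel over the pooled spike times yields a smooth trial-averaged firing-rate estimate in place of a binned histogram. A sketch (bandwidth and names are illustrative):

    ```python
    import numpy as np

    def smoothed_psth(spike_times, n_trials, t_grid, sigma=0.02):
        """Gaussian-kernel PSTH: trial-averaged spike density in spikes/s."""
        rate = np.zeros_like(t_grid)
        for t in spike_times:                        # spikes pooled over trials
            rate += np.exp(-0.5 * ((t_grid - t) / sigma) ** 2)
        return rate / (n_trials * sigma * np.sqrt(2 * np.pi))
    ```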

  2. Scalar heat kernel with boundary in the worldline formalism

    NASA Astrophysics Data System (ADS)

    Bastianelli, Fiorenzo; Corradini, Olindo; Pisani, Pablo A. G.; Schubert, Christian

    2008-10-01

    The worldline formalism has in recent years emerged as a powerful tool for the computation of effective actions and heat kernels. However, implementing nontrivial boundary conditions in this formalism has turned out to be a difficult problem. Recently, such a generalization was developed for the case of a scalar field on the half-space $\mathbb{R}_+ \times \mathbb{R}^{D-1}$, based on an extension of the associated worldline path integral to the full $\mathbb{R}^D$ using image charges. We present here an improved version of this formalism which allows us to write down non-recursive master formulas for the n-point contribution to the heat kernel trace of a scalar field on the half-space with Dirichlet or Neumann boundary conditions. These master formulas are suitable for computerization. We demonstrate the efficiency of the formalism by a calculation of two new heat-kernel coefficients for the half-space, $a_4$ and $a_{9/2}$.

  3. Weighted Feature Gaussian Kernel SVM for Emotion Recognition

    PubMed Central

    Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method utilizing subregion recognition rates to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight of each. Then, we form a weighted-feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel function performs well in terms of the correct-classification rate in emotion recognition. Experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods. PMID:27807443
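
    A plausible reading of the weighted-feature Gaussian kernel is an anisotropic RBF whose per-feature weights come from the subregion recognition rates. A sketch under that assumption (the exact weighting rule is the paper's own):

    ```python
    import numpy as np

    def weighted_gaussian_kernel(X, Y, w, sigma=1.0):
        """K(x, y) = exp(-sum_i w_i (x_i - y_i)^2 / (2 sigma^2))."""
        d2 = (w * (X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    # The resulting Gram matrix can be passed to an SVM via SVC(kernel="precomputed").
    ```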

  4. Improved Online Support Vector Machines Spam Filtering Using String Kernels

    NASA Astrophysics Data System (ADS)

    Amayri, Ola; Bouguila, Nizar

    A major bottleneck in electronic communications is the enormous dissemination of spam emails. Developing suitable filters that can adequately capture those emails and achieve a high performance rate has become a main concern. Support vector machines (SVMs) have made a large contribution to the development of spam email filtering. Based on SVMs, the crucial problems in email classification are the feature mapping of input emails and the choice of kernels. In this paper, we present a thorough investigation of several distance-based kernels, propose the use of string kernels, and demonstrate their efficiency in blocking spam emails. We detail feature mapping variants in text classification (TC) that yield improved performance for standard SVMs in the filtering task. Furthermore, to cope with real-time scenarios, we propose an online active framework for spam filtering.
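
    A standard concrete instance of a string kernel is the k-spectrum kernel: the inner product of the k-mer count vectors of two texts. The paper investigates several string kernels, so this particular choice is an illustrative assumption:

    ```python
    from collections import Counter

    def spectrum_kernel(s, t, k=3):
        """k-spectrum string kernel: inner product of k-mer count vectors."""
        cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
        ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
        return sum(cs[g] * ct[g] for g in cs.keys() & ct.keys())

    print(spectrum_kernel("cheap meds online now", "buy cheap meds online"))
    ```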

  5. A method of smoothed particle hydrodynamics using spheroidal kernels

    NASA Technical Reports Server (NTRS)

    Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.

    1995-01-01

    We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
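
    The essential change relative to a spherical kernel is that the smoothing length along the deformation axis evolves independently of the in-plane one. A sketch with a Gaussian kernel (the same idea applies to the spline kernels commonly used in SPH; names are illustrative):

    ```python
    import numpy as np

    def spheroidal_gaussian_kernel(dx, h_perp, h_axis):
        """Anisotropic SPH-style kernel with independent smoothing lengths:
        h_perp in the (x, y) plane and h_axis along the deformation axis z."""
        q2 = ((dx[..., 0] ** 2 + dx[..., 1] ** 2) / h_perp ** 2
              + dx[..., 2] ** 2 / h_axis ** 2)
        norm = np.pi ** 1.5 * h_perp ** 2 * h_axis   # integrates to one in 3-D
        return np.exp(-q2) / norm
    ```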

  6. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing

    PubMed Central

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562

  7. Recurrent kernel machines: computing with infinite echo state networks.

    PubMed

    Hermans, Michiel; Schrauwen, Benjamin

    2012-01-01

    Echo state networks (ESNs) are large, random recurrent neural networks with a single trained linear readout layer. Despite the untrained nature of the recurrent weights, they are capable of performing universal computations on temporal input data, which makes them interesting for both theoretical research and practical applications. The key to their success lies in the fact that the network computes a broad set of nonlinear, spatiotemporal mappings of the input data, on which linear regression or classification can easily be performed. One could consider the reservoir as a spatiotemporal kernel, in which the mapping to a high-dimensional space is computed explicitly. In this letter, we build on this idea and extend the concept of ESNs to infinite-sized recurrent neural networks, which can be considered recursive kernels that subsequently can be used to create recursive support vector machines. We present the theoretical framework, provide several practical examples of recursive kernels, and apply them to typical temporal tasks.
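
    The recursion can be made concrete with an arc-cosine kernel iterated over time: at each step, the previous kernel values stand in for inner products of the (infinite-dimensional) recurrent state. The order-1 arc-cosine form below is one illustrative choice among the recursive kernels such a construction admits:

    ```python
    import numpy as np

    def arccos1(kxx, kyy, kxy):
        """Order-1 arc-cosine kernel from inner products (infinite ReLU-like layer)."""
        c = np.clip(kxy / np.sqrt(kxx * kyy + 1e-12), -1.0, 1.0)
        theta = np.arccos(c)
        return np.sqrt(kxx * kyy) * (np.sin(theta)
                                     + (np.pi - theta) * np.cos(theta)) / np.pi

    def recurrent_kernel(xs, ys, s_in=1.0, s_rec=0.9):
        """Recursive kernel of an infinite echo state network on two sequences."""
        kxx = kyy = kxy = 0.0
        for xt, yt in zip(xs, ys):
            axx = s_rec**2 * kxx + s_in**2 * float(xt @ xt)
            ayy = s_rec**2 * kyy + s_in**2 * float(yt @ yt)
            axy = s_rec**2 * kxy + s_in**2 * float(xt @ yt)
            kxx = arccos1(axx, axx, axx)
            kyy = arccos1(ayy, ayy, ayy)
            kxy = arccos1(axx, ayy, axy)
        return kxy
    ```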

  8. Compression loading behaviour of sunflower seeds and kernels

    NASA Astrophysics Data System (ADS)

    Selvam, Thasaiya A.; Manikantan, Musuvadi R.; Chand, Tarsem; Sharma, Rajiv; Seerangurayar, Thirupathi

    2014-10-01

    The present study was carried out to investigate the compression loading behaviour of five Indian sunflower varieties (NIRMAL-196, NIRMAL-303, CO-2, KBSH-41, and PSH-996) under four different moisture levels (6-18% d.b.). The initial cracking force, mean rupture force, and rupture energy were measured as a function of moisture content. The observed results showed that the initial cracking force decreased linearly with an increase in moisture content for all varieties. The mean rupture force also decreased linearly with an increase in moisture content. However, the rupture energy was found to increase linearly with moisture content for both seed and kernel. NIRMAL-196 and PSH-996 had the maximum and minimum values of all the attributes studied for both seed and kernel, respectively. The values of all the studied attributes were higher for seed than kernel in all the varieties at all moisture levels. There was a significant effect of moisture and variety on compression loading behaviour.

  9. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.

  10. Single aflatoxin contaminated corn kernel analysis with fluorescence hyperspectral image

    NASA Astrophysics Data System (ADS)

    Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Ononye, Ambrose; Brown, Robert L.; Cleveland, Thomas E.

    2010-04-01

    Aflatoxins are toxic secondary metabolites of the fungi Aspergillus flavus and Aspergillus parasiticus, among others. Aflatoxin contaminated corn is toxic to domestic animals when ingested in feed and is a known carcinogen associated with liver and lung cancer in humans. Consequently, aflatoxin levels in food and feed are regulated by the Food and Drug Administration (FDA) in the US, allowing 20 ppb (parts per billion) limits in food and 100 ppb in feed for interstate commerce. Currently, aflatoxin detection and quantification methods are based on analytical tests including thin-layer chromatography (TLC) and high performance liquid chromatography (HPLC). These analytical tests require the destruction of samples, and are costly and time consuming. Thus, the ability to detect aflatoxin in a rapid, nondestructive way is crucial to the grain industry, particularly to the corn industry. Hyperspectral imaging technology offers a non-invasive approach toward screening for food safety inspection and quality control based on its spectral signature. The focus of this paper is to classify aflatoxin contaminated single corn kernels using fluorescence hyperspectral imagery. Field-inoculated corn kernels were used in the study. Contaminated and control kernels under long-wavelength ultraviolet excitation were imaged using a visible near-infrared (VNIR) hyperspectral camera. The imaged kernels were chemically analyzed to provide reference information for image analysis. This paper describes a procedure to process corn kernels located in different images for statistical training and classification. Two classification algorithms, Maximum Likelihood and Binary Encoding, were used to classify each corn kernel into "control" or "contaminated" through pixel classification. The Binary Encoding approach had a slightly better performance, with accuracy equal to 87% or 88% when 20 ppb or 100 ppb was used as the classification threshold, respectively.
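
    Of the two pixel classifiers compared, Binary Encoding is the simpler to sketch: each spectrum is reduced to one bit per band (above or below its own mean), and a pixel is assigned to the class whose encoded reference signature is nearest in Hamming distance. A sketch under those assumptions (names illustrative):

    ```python
    import numpy as np

    def binary_encode(spectra):
        """One bit per band: 1 where the band exceeds the spectrum's own mean."""
        return (spectra > spectra.mean(axis=-1, keepdims=True)).astype(np.uint8)

    def binary_encoding_classify(pixels, class_means):
        """Assign pixels (n, B) to the class (c, B) nearest in Hamming distance."""
        codes = binary_encode(pixels)[:, None, :]
        refs = binary_encode(class_means)[None, :, :]
        return (codes != refs).sum(-1).argmin(1)
    ```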

  11. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function, and less to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences among the numbers of support vectors and the CPU times of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  12. The Effects of Kernel Feeding by Halyomorpha halys (Hemiptera: Pentatomidae) on Commercial Hazelnuts.

    PubMed

    Hedstrom, C S; Shearer, P W; Miller, J C; Walton, V M

    2014-10-01

    Halyomorpha halys Stål, the brown marmorated stink bug (Hemiptera: Pentatomidae), is an invasive pest with established populations in Oregon. The generalist feeding habits of H. halys suggest it has the potential to be a pest of many specialty crops grown in Oregon, including hazelnuts, Corylus avellana L. The objectives of this study were to: 1) characterize the damage to developing hazelnut kernels resulting from feeding by H. halys adults, 2) determine how the timing of feeding during kernel development influences damage to kernels, and 3) determine if hazelnut shell thickness has an effect on feeding frequency on kernels. Adult brown marmorated stink bugs were allowed to feed on developing nuts for 1-wk periods from initial kernel development (spring) until harvest (fall). Developing nuts not exposed to feeding by H. halys served as a control treatment. The degree of damage and diagnostic symptoms corresponded with the hazelnut kernels' physiological development. Our results demonstrated that when H. halys fed on hazelnuts before kernel expansion, development of the kernels could cease, resulting in empty shells. When stink bugs fed during kernel expansion, kernels appeared malformed. When stink bugs fed on mature nuts the kernels exhibited corky, necrotic areas. Although significant differences in shell thickness were observed among the cultivars, no significant differences occurred in the proportions of damaged kernels based on field tests and laboratory choice tests. The results of these studies demonstrated that commercial hazelnuts are susceptible to damage caused by the feeding of H. halys throughout the entire period of kernel development.

  13. Heat kernel smoothing using Laplace-Beltrami eigenfunctions.

    PubMed

    Seo, Seongho; Chung, Moo K; Vorperian, Houri K

    2010-01-01

    We present a novel surface smoothing framework using the Laplace-Beltrami eigenfunctions. The Green's function of an isotropic diffusion equation on a manifold is constructed as a linear combination of the eigenfunctions of the Laplace-Beltrami operator. The Green's function is then used in constructing heat kernel smoothing. Unlike many previous approaches, diffusion is analytically represented as a series expansion, avoiding numerical instability and inaccuracy issues. The proposed framework is illustrated with mandible surfaces, and is compared to a widely used iterative kernel smoothing technique in computational anatomy. The MATLAB source code is freely available at http://brainimaging.waisman.wisc.edu/~chung/lb.
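
    A minimal numerical sketch of the series expansion, using a graph Laplacian matrix as a stand-in for the Laplace-Beltrami operator of a triangulated surface; the truncation order k is an assumption:

        import numpy as np

        def heat_kernel_smooth(L, f, t, k=100):
            # Smooth a signal f on the mesh at diffusion time t via
            #   f_t = sum_i exp(-lambda_i * t) <f, psi_i> psi_i,
            # where (lambda_i, psi_i) are the k smallest eigenpairs of L.
            w, psi = np.linalg.eigh(L)      # ascending eigenvalues
            w, psi = w[:k], psi[:, :k]      # truncate the expansion
            coeffs = psi.T @ f              # generalized Fourier coefficients
            return psi @ (np.exp(-w * t) * coeffs)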

  14. Optical remote sensor for peanut kernel abortion classification.

    PubMed

    Ozana, Nisan; Buchsbaum, Stav; Bishitz, Yael; Beiderman, Yevgeny; Schmilovitch, Zeev; Schwarz, Ariel; Shemer, Amir; Keshet, Joseph; Zalevsky, Zeev

    2016-05-20

    In this paper, we propose a simple, inexpensive optical device for remote measurement of various agricultural parameters. The sensor is based on temporal tracking of backreflected secondary speckle patterns generated when illuminating a plant with a laser and while applying periodic acoustic-based pressure stimulation. By analyzing different parameters using a support-vector-machine-based algorithm, peanut kernel abortion can be detected remotely. This paper presents experimental tests which are the first step toward an implementation of a noncontact device for the detection of agricultural parameters such as kernel abortion. PMID:27411126

  15. Source identity and kernel functions for Inozemtsev-type systems

    NASA Astrophysics Data System (ADS)

    Langmann, Edwin; Takemura, Kouichi

    2012-08-01

    The Inozemtsev Hamiltonian is an elliptic generalization of the differential operator defining the BC_N trigonometric quantum Calogero-Sutherland model, and its eigenvalue equation is a natural many-variable generalization of the Heun differential equation. We present kernel functions for Inozemtsev Hamiltonians and Chalykh-Feigin-Veselov-Sergeev-type deformations thereof. Our main result is a solution of a heat-type equation for a generalized Inozemtsev Hamiltonian which is the source of all these kernel functions. Applications are given, including a derivation of simple exact eigenfunctions and eigenvalues of the Inozemtsev Hamiltonian.

  16. FUV Continuum in Flare Kernels Observed by IRIS

    NASA Astrophysics Data System (ADS)

    Daw, Adrian N.; Kowalski, Adam; Allred, Joel C.; Cauzzi, Gianna

    2016-05-01

    Fits to Interface Region Imaging Spectrograph (IRIS) spectra observed from bright kernels during the impulsive phase of solar flares are providing long-sought constraints on the UV/white-light continuum emission. Results of fits of continua plus numerous atomic and molecular emission lines to IRIS far ultraviolet (FUV) spectra of bright kernels are presented. Constraints on beam energy and cross sectional area are provided by cotemporaneous RHESSI, FERMI, ROSA/DST, IRIS slit-jaw and SDO/AIA observations, allowing for comparison of the observed IRIS continuum to calculations of non-thermal electron beam heating using the RADYN radiative-hydrodynamic loop model.

  17. Iris Image Blur Detection with Multiple Kernel Learning

    NASA Astrophysics Data System (ADS)

    Pan, Lili; Xie, Mei; Mao, Ling

    In this letter, we analyze the influence of motion and out-of-focus blur on both frequency spectrum and cepstrum of an iris image. Based on their characteristics, we define two new discriminative blur features represented by Energy Spectral Density Distribution (ESDD) and Singular Cepstrum Histogram (SCH). To merge the two features for blur detection, a merging kernel which is a linear combination of two kernels is proposed when employing Support Vector Machine. Extensive experiments demonstrate the validity of our method by showing the improved blur detection performance on both synthetic and real datasets.
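
    A minimal sketch of the kernel-merging step, assuming scikit-learn's precomputed-kernel interface; the RBF base kernels, the weight beta, and the feature matrices standing in for the ESDD and SCH features are all illustrative:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics.pairwise import rbf_kernel

        def merged_kernel(F1, F2, G1, G2, beta=0.5, gamma=1.0):
            # K = beta * K1 + (1 - beta) * K2, one base kernel per feature set.
            return (beta * rbf_kernel(F1, G1, gamma=gamma)
                    + (1.0 - beta) * rbf_kernel(F2, G2, gamma=gamma))

        # Training on ESDD features F1_tr and SCH features F2_tr:
        #   clf = SVC(kernel="precomputed")
        #   clf.fit(merged_kernel(F1_tr, F2_tr, F1_tr, F2_tr), y_tr)
        # Prediction on test features:
        #   clf.predict(merged_kernel(F1_te, F2_te, F1_tr, F2_tr))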

  18. Patient-specific scatter correction for flat-panel detector-based cone-beam CT imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Brunner, Stephen; Niu, Kai; Schafer, Sebastian; Royalty, Kevin; Chen, Guang-Hong

    2015-02-01

    A patient-specific scatter correction algorithm is proposed to mitigate scatter artefacts in cone-beam CT (CBCT). The approach belongs to the category of convolution-based methods, in which a scatter potential function is convolved with a convolution kernel to estimate the scatter profile. A key step in this method is to determine the free parameters introduced in both the scatter potential and the convolution kernel using a so-called calibration process, which seeks the optimal parameters such that both models optimally fit previously obtained coarse estimates of the scatter profiles of the image object. Both direct measurements and Monte Carlo (MC) simulations have been proposed by other investigators to obtain these rough estimates. In the present paper, a novel method is proposed and validated to generate the needed coarse scatter profile for parameter calibration in the convolution method. The method is based upon an image segmentation of the scatter-contaminated CBCT image volume, followed by a reprojection of the segmented image volume using a given x-ray spectrum. The reprojected data are subtracted from the scatter-contaminated projection data to generate a coarse estimate of the scatter profile used in parameter calibration. The method was qualitatively and quantitatively evaluated using numerical simulations and experimental CBCT data acquired on a clinical CBCT imaging system. Results show that the proposed algorithm can significantly reduce scatter artefacts and recover the correct CT number. Numerical simulation results show the method is patient specific, can accurately estimate the scatter, and is robust with respect to the segmentation procedure. For experimental and in vivo human data, the results show the CT number can be successfully recovered and anatomical structure visibility can be significantly improved.
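
    A minimal sketch of the two ingredients described above; the scatter-potential form, the Gaussian kernel, and the free parameters (a, b, sigma) are illustrative stand-ins for the calibrated models of the paper:

        import numpy as np
        from scipy.signal import fftconvolve

        def scatter_estimate(primary, a, b, sigma):
            # Convolve a (hypothetical) scatter potential with a kernel whose
            # free parameters (a, b, sigma) come from the calibration step.
            potential = a * primary * np.exp(-b * primary)
            x = np.arange(-32, 33)
            g = np.exp(-(x[:, None]**2 + x[None, :]**2) / (2 * sigma**2))
            return fftconvolve(potential, g / g.sum(), mode="same")

        def coarse_scatter(measured_proj, reprojected_primary):
            # Coarse profile for calibration: measured projection minus the
            # scatter-free reprojection of the segmented CBCT volume.
            return np.clip(measured_proj - reprojected_primary, 0.0, None)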

  19. Validation tests of an improved kernel density estimation method for identifying disease clusters

    NASA Astrophysics Data System (ADS)

    Cai, Qiang; Rushton, Gerard; Bhaduri, Budhendra

    2012-07-01

    The spatial filter method, which belongs to the class of kernel density estimation methods, has been used to make morbidity and mortality maps in several recent studies. We propose improvements in the method to include spatially adaptive filters to achieve constant standard error of the relative risk estimates; a staircase weight method for weighting observations to reduce estimation bias; and a parameter selection tool to enhance disease cluster detection performance, measured by sensitivity, specificity, and false discovery rate. We test the performance of the method using Monte Carlo simulations of hypothetical disease clusters over a test area of four counties in Iowa. The simulations include different types of spatial disease patterns and high-resolution population distribution data. Results confirm that the new features of the spatial filter method do substantially improve its performance in realistic situations comparable to those where the method is likely to be used.
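
    A minimal sketch of a spatially adaptive filter of the kind described, assuming point data with population and case counts; the population base of 5000 and all names are illustrative:

        import numpy as np

        def adaptive_rates(grid_xy, pop_xy, pop, cases, base=5000):
            # Grow the filter at each grid point until it covers `base` people,
            # so the standard error of the rate stays roughly constant.
            rates = np.empty(len(grid_xy))
            for i, g in enumerate(grid_xy):
                d = np.hypot(*(pop_xy - g).T)       # distances to data points
                order = np.argsort(d)
                cum = np.cumsum(pop[order])
                k = np.searchsorted(cum, base) + 1  # smallest adequate filter
                idx = order[:k]
                rates[i] = cases[idx].sum() / pop[idx].sum()
            return rates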

  20. Higher-order Lipatov kernels and the QCD Pomeron

    SciTech Connect

    White, A.R.

    1994-08-12

    Three closely related topics are covered: the derivation of O(g^4) Lipatov kernels in pure-glue QCD, the significance of quarks for the physical Pomeron in QCD, and the possible inter-relation of Pomeron dynamics with electroweak symmetry breaking.

  1. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    SciTech Connect

    Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
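
    A minimal sketch of the search loop at the heart of this approach; the parameter space and the generate/compile_and_run callables are hypothetical placeholders for the per-kernel code generators described above:

        import itertools

        def autotune(generate, compile_and_run, param_space):
            # Exhaustively enumerate variants, benchmark each, keep the fastest.
            best_cfg, best_time = None, float("inf")
            for values in itertools.product(*param_space.values()):
                cfg = dict(zip(param_space.keys(), values))
                source = generate(cfg)              # emit one kernel variant
                elapsed = compile_and_run(source)   # wall-clock time on target
                if elapsed < best_time:
                    best_cfg, best_time = cfg, elapsed
            return best_cfg, best_time

        # Example search space for a stencil kernel (illustrative):
        # param_space = {"unroll": [1, 2, 4, 8], "block": [16, 32, 64],
        #                "prefetch": [False, True]}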

  2. Metabolite identification through multiple kernel learning on fragmentation trees

    PubMed Central

    Shen, Huibin; Dührkop, Kai; Böcker, Sebastian; Rousu, Juho

    2014-01-01

    Motivation: Metabolite identification from tandem mass spectrometric data is a key task in metabolomics. Various computational methods have been proposed for the identification of metabolites from tandem mass spectra. Fragmentation tree methods explore the space of possible ways in which the metabolite can fragment, and base the metabolite identification on scoring of these fragmentation trees. Machine learning methods have been used to map mass spectra to molecular fingerprints; predicted fingerprints, in turn, can be used to score candidate molecular structures. Results: Here, we combine fragmentation tree computations with kernel-based machine learning to predict molecular fingerprints and identify molecular structures. We introduce a family of kernels capturing the similarity of fragmentation trees, and combine these kernels using recently proposed multiple kernel learning approaches. Experiments on two large reference datasets show that the new methods significantly improve molecular fingerprint prediction accuracy. These improvements result in better metabolite identification, doubling the number of metabolites ranked at the top position of the candidates list. Contact: huibin.shen@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931979

  3. Enzymatic treatment of peanut kernels to reduce allergen levels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study investigated the use of enzymatic treatment to reduce peanut allergens in peanut kernel by processing conditions, such as, pretreatment with heat and proteolysis at different enzyme concentrations and treatment times. Two major peanut allergens, Ara h 1 and Ara h 2, were used as indicator...

  4. Popping the Kernel Modeling the States of Matter

    ERIC Educational Resources Information Center

    Hitt, Austin; White, Orvil; Hanson, Debbie

    2005-01-01

    This article discusses how to use popcorn to engage students in model building and to teach them about the nature of matter. Popping kernels is a simple and effective method to connect the concepts of heat, motion, and volume with the different phases of matter. Before proceeding with the activity the class should discuss the nature of scientific…

  5. Music emotion detection using hierarchical sparse kernel machines.

    PubMed

    Chin, Yu-Hao; Lin, Chang-Hong; Siahaan, Ernestasia; Wang, Jia-Ching

    2014-01-01

    For music emotion detection, this paper presents a music emotion verification system based on hierarchical sparse kernel machines. With the proposed system, we intend to verify whether or not a music clip possesses the happiness emotion. There are two levels in the hierarchical sparse kernel machines. In the first level, a set of acoustical features is extracted, and principal component analysis (PCA) is implemented to reduce the dimension. The acoustical features are utilized to generate the first-level decision vector, which is a vector with each element being a significance value for an emotion. The significance values of eight main emotional classes are utilized in this paper. To calculate the significance value of an emotion, we construct its 2-class SVM with the calm emotion as the global (non-target) side of the SVM. The probability distributions of the adopted acoustical features are calculated and the probability product kernel is applied in the first-level SVMs to obtain the first-level decision vector feature. In the second level of the hierarchical system, we construct a single 2-class relevance vector machine (RVM) with happiness as the target side and other emotions as the background side of the RVM. The first-level decision vector is used as the feature with a conventional radial basis function kernel. The happiness verification threshold is built on the probability value. In the experimental results, the detection error tradeoff (DET) curve shows that the proposed system has a good performance on verifying whether a music clip reveals the happiness emotion.
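
    A minimal sketch of the probability product kernel for the common case of diagonal Gaussians fitted to each clip's acoustical features; the closed-form Gaussian overlap integral is used and rho = 1 is assumed:

        import numpy as np

        def ppk_diag_gaussians(mu1, var1, mu2, var2):
            # integral of N(x; mu1, var1) N(x; mu2, var2) dx, evaluated per
            # dimension and multiplied: a Gaussian in (mu1 - mu2) with
            # summed variances.
            v = var1 + var2
            return np.prod(np.exp(-0.5 * (mu1 - mu2)**2 / v)
                           / np.sqrt(2.0 * np.pi * v))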

  6. High-Speed Tracking with Kernelized Correlation Filters.

    PubMed

    Henriques, João F; Caseiro, Rui; Martins, Pedro; Batista, Jorge

    2015-03-01

    The core component of most modern trackers is a discriminative classifier, tasked with distinguishing between the target and the surrounding environment. To cope with natural image changes, this classifier is typically trained with translated and scaled sample patches. Such sets of samples are riddled with redundancies: any overlapping pixels are constrained to be the same. Based on this simple observation, we propose an analytic model for datasets of thousands of translated patches. By showing that the resulting data matrix is circulant, we can diagonalize it with the discrete Fourier transform, reducing both storage and computation by several orders of magnitude. Interestingly, for linear regression our formulation is equivalent to a correlation filter, used by some of the fastest competitive trackers. For kernel regression, however, we derive a new kernelized correlation filter (KCF) that, unlike other kernel algorithms, has the exact same complexity as its linear counterpart. Building on it, we also propose a fast multi-channel extension of linear correlation filters, via a linear kernel, which we call the dual correlation filter (DCF). Both KCF and DCF outperform top-ranking trackers such as Struck or TLD on a 50-video benchmark, despite running at hundreds of frames per second and being implemented in a few lines of code (Algorithm 1). To encourage further developments, our tracking framework was made open-source. PMID:26353263
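
    A minimal single-channel sketch of the Fourier-domain training and detection steps with a Gaussian kernel, following the published KCF formulation; cosine windowing, multi-channel features, and the online model update are omitted:

        import numpy as np

        def gaussian_correlation(x, z, sigma=0.5):
            # Kernel values between x and all cyclic shifts of z via the FFT.
            c = np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(z))).real
            d = (x**2).sum() + (z**2).sum() - 2.0 * c
            return np.exp(-np.maximum(d, 0.0) / (sigma**2 * x.size))

        def train(x, y, lam=1e-4):
            # Ridge regression over all shifts: alpha_hat = y_hat / (k_hat +
            # lambda), element-wise in the Fourier domain.
            k = gaussian_correlation(x, x)
            return np.fft.fft2(y) / (np.fft.fft2(k) + lam)

        def detect(alphaf, x, z):
            # Response map over all translations of the new patch z.
            k = gaussian_correlation(z, x)
            return np.fft.ifft2(alphaf * np.fft.fft2(k)).real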

  7. Notes on a storage manager for the Clouds kernel

    NASA Technical Reports Server (NTRS)

    Pitts, David V.; Spafford, Eugene H.

    1986-01-01

    The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.

  8. Uniqueness Result in the Cauchy Dirichlet Problem via Mehler Kernel

    NASA Astrophysics Data System (ADS)

    Dhungana, Bishnu P.

    2014-09-01

    Using the Mehler kernel, a uniqueness theorem in the Cauchy Dirichlet problem for the Hermite heat equation with homogeneous Dirichlet boundary conditions on a class P of bounded functions U(x, t) with certain growth on U_x(x, t) is established.

  9. High-Speed Tracking with Kernelized Correlation Filters.

    PubMed

    Henriques, João F; Caseiro, Rui; Martins, Pedro; Batista, Jorge

    2015-03-01

    The core component of most modern trackers is a discriminative classifier, tasked with distinguishing between the target and the surrounding environment. To cope with natural image changes, this classifier is typically trained with translated and scaled sample patches. Such sets of samples are riddled with redundancies: any overlapping pixels are constrained to be the same. Based on this simple observation, we propose an analytic model for datasets of thousands of translated patches. By showing that the resulting data matrix is circulant, we can diagonalize it with the discrete Fourier transform, reducing both storage and computation by several orders of magnitude. Interestingly, for linear regression our formulation is equivalent to a correlation filter, used by some of the fastest competitive trackers. For kernel regression, however, we derive a new kernelized correlation filter (KCF) that, unlike other kernel algorithms, has the exact same complexity as its linear counterpart. Building on it, we also propose a fast multi-channel extension of linear correlation filters, via a linear kernel, which we call the dual correlation filter (DCF). Both KCF and DCF outperform top-ranking trackers such as Struck or TLD on a 50-video benchmark, despite running at hundreds of frames per second and being implemented in a few lines of code (Algorithm 1). To encourage further developments, our tracking framework was made open-source.

  10. PERI - auto-tuning memory-intensive kernels for multicore

    NASA Astrophysics Data System (ADS)

    Williams, S.; Datta, K.; Carter, J.; Oliker, L.; Shalf, J.; Yelick, K.; Bailey, D.

    2008-07-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to sparse matrix vector multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the high-performance computing literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4× improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.

  11. Microwave moisture meter for in-shell peanut kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A microwave moisture meter built with off-the-shelf components was developed, calibrated and tested in the laboratory and in the field for nondestructive and instantaneous in-shell peanut kernel moisture content determination from dielectric measurements on unshelled peanut pod samples. The meter ...

  12. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2008-03-01

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.

  13. Stereotype Measurement and the "Kernel of Truth" Hypothesis.

    ERIC Educational Resources Information Center

    Gordon, Randall A.

    1989-01-01

    Describes a stereotype measurement suitable for classroom demonstration. Illustrates C. McCauley and C. L. Stitt's diagnostic ratio measure and examines the validity of the "kernel of truth" hypothesis. Uses this as a starting point for class discussion. Reports results and gives suggestions for discussion of related concepts. (Author/NL)

  14. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    ERIC Educational Resources Information Center

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…

  15. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  16. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  17. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  18. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  19. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  20. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2009-02-20

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.

  1. Matrix kernels for MEG and EEG source localization and imaging

    SciTech Connect

    Mosher, J.C.; Lewis, P.S.; Leahy, R.M.

    1994-12-31

    The most widely used model for electroencephalography (EEG) and magnetoencephalography (MEG) assumes a quasi-static approximation of Maxwell's equations and a piecewise homogeneous conductor model. Both models contain an incremental field element that linearly relates an incremental source element (current dipole) to the field or voltage at a distant point. The explicit form of the field element is dependent on the head modeling assumptions and sensor configuration. Proper characterization of this incremental element is crucial to the inverse problem. The field element can be partitioned into the product of a vector dependent on sensor characteristics and a matrix kernel dependent only on head modeling assumptions. We present here the matrix kernels for the general boundary element model (BEM) and for MEG spherical models. We show how these kernels are easily interchanged in a linear algebraic framework that includes sensor specifics such as orientation and gradiometer configuration. We then describe how this kernel is easily applied to "gain" or "transfer" matrices used in multiple dipole and source imaging models.

  2. The Stokes problem for the ellipsoid using ellipsoidal kernels

    NASA Technical Reports Server (NTRS)

    Zhu, Z.

    1981-01-01

    A brief review of Stokes' problem for the ellipsoid as a reference surface is given. Another solution of the problem using an ellipsoidal kernel, which represents an iterative form of Stokes' integral, is suggested with a relative error of the order of the flattening. Rapp's method is studied in detail and procedures for improving its convergence are discussed.

  3. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10^-11 and invasion time error to <5%.
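
    A minimal 1-D sketch contrasting the two discretization schemes discussed above (the study itself treats 2-D kernels); grid cells are taken to have unit size:

        import numpy as np
        from math import erf, sqrt

        def kernel_cell_center(sigma, ncells):
            # Sample the Gaussian at cell centers, then renormalize.
            x = np.arange(-ncells, ncells + 1)
            k = np.exp(-x**2 / (2.0 * sigma**2))
            return k / k.sum()

        def kernel_cell_integrated(sigma, ncells):
            # Integrate the Gaussian over each unit cell via its CDF.
            edges = np.arange(-ncells, ncells + 2) - 0.5
            cdf = np.array([0.5 * (1.0 + erf(e / (sigma * sqrt(2.0))))
                            for e in edges])
            return np.diff(cdf) / (cdf[-1] - cdf[0])

        # For sigma well below the cell size the two kernels diverge sharply,
        # which is the regime where the reported >10% errors arise.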

  4. Genome Mapping of Kernel Characteristics in Hard Red Spring Wheat Breeding Lines

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Kernel characteristics, particularly kernel weight, kernel size, and grain protein content, are important components of grain yield and quality in wheat. Development of high performing wheat cultivars, with high grain yield and quality, is a major focus in wheat breeding programs worldwide. Here, we...

  5. Low Cost Real-Time Sorting of in Shell Pistachio Nuts from Kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A high speed sorter for separating pistachio nuts with (in shell) and without (kernels) shells is reported. Testing indicates 95% accuracy in removing kernels from the in shell stream with no false positive results out of 1000 kernels tested. Testing with 1000 each of in shell, shell halves, and ker...

  6. nd Scattering Observables Derived from the Quark-Model Baryon-Baryon Interaction

    SciTech Connect

    Fujiwara, Y.; Fukukawa, K.

    2010-05-12

    We solve the nd scattering in the Faddeev formalism, employing the NN sector of the quark-model baryon-baryon interaction fss2. The energy dependence of the NN interaction, inherent to the (3q)-(3q) resonating-group formulation, is eliminated by the standard off-shell transformation utilizing the 1/√N factor, where N is the normalization kernel for the (3q)-(3q) system. This procedure yields an extra nonlocality, whose effect is very important for reproducing all the scattering observables at energies E_n ≤ 65 MeV. The off-shell properties, which differ from those of the standard meson-exchange potentials and are related to the non-locality of the quark-exchange kernel, yield appreciable effects in the differential cross sections and polarization observables of the nd elastic scattering, which are usually attributed to the specific properties of three-body forces.

  7. Sub-second pencil beam dose calculation on GPU for adaptive proton therapy.

    PubMed

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-06-21

    Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.
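
    A minimal sketch of the kernel superposition step for a single depth layer, written as an FFT convolution of the beamlet fluence with a lateral kernel; the separable kernel construction is a simplification, and no GPU-specific optimizations are shown:

        import numpy as np
        from scipy.signal import fftconvolve

        def superpose_layer(fluence, radial_profile_1d):
            # Build a 2-D lateral kernel from a 1-D profile (a simplification;
            # clinical pencil beam kernels are depth- and medium-dependent),
            # then spread every beamlet's energy into the dose grid.
            k2d = np.outer(radial_profile_1d, radial_profile_1d)
            k2d /= k2d.sum()
            return fftconvolve(fluence, k2d, mode="same")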

  8. Sub-second pencil beam dose calculation on GPU for adaptive proton therapy

    NASA Astrophysics Data System (ADS)

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-06-01

    Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.

  9. Coupling the use of anti-scatter grid with analytical scatter estimation in cone beam CT

    NASA Astrophysics Data System (ADS)

    Rinkel, J.; Gerfault, L.; Estève, F.; Dinten, J.-M.

    2007-03-01

    Cone-Beam Computed Tomography (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a big challenge for quantitative CBCT imaging: even in the presence of anti-scatter grid, the scatter level is significantly higher on cone beam systems compared to collimated fan beam systems. The effects of this scattered radiation include cupping artifacts, streaks, and quantification inaccuracies. In this paper, a scatter management process for tomographic projections, without supplementary on-line acquisition, is presented. The scattered radiation is corrected using a method based on scatter calibration through off-line acquisitions. This is combined with on-line analytical transformation based on physical equations, to perform an estimation adapted to the object observed. This approach has been previously applied to a system without anti-scatter grid. The focus of this paper is to show how to combine this approach with an anti-scatter grid. First, the interest of the grid is evaluated in terms of noise to signal ratio and scatter rejection. Then, the method of scatter correction is evaluated by testing it on an anthropomorphic phantom of thorax. The reconstructed volume of the phantom is compared to that obtained with a strongly collimated conventional multi-slice CT scanner. The new method provides results that closely agree with the conventional CT scanner, eliminating cupping artifacts and significantly improving quantification.

  10. Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images

    PubMed Central

    Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.

    2014-01-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435

  11. Fourier deconvolution reveals the role of the Lorentz function as the convolution kernel of narrow photon beams

    NASA Astrophysics Data System (ADS)

    Djouguela, Armand; Harder, Dietrich; Kollhoff, Ralf; Foschepoth, Simon; Kunth, Wolfgang; Rühmann, Antje; Willborn, Kay; Poppe, Björn

    2009-05-01

    The two-dimensional lateral dose profiles D(x, y) of narrow photon beams, typically used for beamlet-based IMRT, stereotactic radiosurgery and tomotherapy, can be regarded as resulting from the convolution of a two-dimensional rectangular function R(x, y), which represents the photon fluence profile within the field borders, with a rotation-symmetric convolution kernel K(r). This kernel accounts not only for the lateral transport of secondary electrons and small-angle scattered photons in the absorber, but also for the 'geometrical spread' of each pencil beam due to the phase-space distribution of the photon source. The present investigation of the convolution kernel was based on an experimental study of the associated line-spread function K(x). Systematic cross-plane scans of rectangular and quadratic fields of variable side lengths were made by utilizing the linear current versus dose rate relationship and small energy dependence of the unshielded Si diode PTW 60012 as well as its narrow spatial resolution function. By application of the Fourier convolution theorem, it was observed that the values of the Fourier transform of K(x) could be closely fitted by an exponential function exp(-2πλν_x) of the spatial frequency ν_x. Thereby, the line-spread function K(x) was identified as the Lorentz function K(x) = (λ/π)[1/(x² + λ²)], a single-parameter, bell-shaped but non-Gaussian function with a narrow core, wide curve tail, full half-width 2λ and convenient convolution properties. The variation of the 'kernel width parameter' λ with the photon energy, field size and thickness of a water-equivalent absorber was systematically studied. The convolution of a rectangular fluence profile with K(x) in the local space results in a simple equation accurately reproducing the measured lateral dose profiles. The underlying 2D convolution kernel (point-spread function) was identified as K(r) = (λ/2π)[1/(r² + λ²)^(3/2)], fitting experimental results as well.
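
    A short numerical check of the stated convolution property, assuming a field half-width a and kernel width parameter lambda in millimetres: convolving a rectangular profile with the Lorentz line-spread function reproduces the closed-form arctangent profile.

        import numpy as np

        lam, a = 2.0, 10.0                      # kernel width, field half-width
        x = np.linspace(-40.0, 40.0, 8001)
        dx = x[1] - x[0]
        rect = (np.abs(x) <= a).astype(float)   # rectangular fluence profile
        lorentz = (lam / np.pi) / (x**2 + lam**2)
        numeric = np.convolve(rect, lorentz, mode="same") * dx
        analytic = (np.arctan((a - x) / lam) + np.arctan((a + x) / lam)) / np.pi
        mid = np.abs(x) <= 20.0                 # away from truncation effects
        assert np.max(np.abs(numeric[mid] - analytic[mid])) < 5e-3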

  12. Black Hole Scattering via Spectral Methods

    NASA Astrophysics Data System (ADS)

    Clemente, P. C. M.; de Oliveira, H. P.; Rodrigues, E. L.

    2013-12-01

    We present an alternative method to solve the problem of scattering by a black hole by adapting the spectral code originally developed by Boyd (Comp Phys 4:83, 1990). In order to show the effectiveness and versatility of the algorithm, we solve the scattering by Schwarzschild, standard acoustic, and charged black holes. We recover the partial and total absorption cross sections and, in the case of charged black holes, the conversion factor between electromagnetic and gravitational waves. We also study the exponential decay of the reflection coefficient, which is a general feature of any scattering problem.

  13. Dose uncertainties in photon pencil kernel calculations at off-axis positions

    SciTech Connect

    Olofsson, Joergen; Nyholm, Tufve; Ahnesjoe, Anders; Karlsson, Mikael

    2006-09-15

    The purpose of this study was to investigate the specific problems associated with photon dose calculations at points located at a distance from the central beam axis. These problems are related to laterally inhomogeneous energy fluence distributions and spectral variations causing a lateral shift in the beam quality, commonly referred to as off-axis softening (OAS). We have examined how the dose calculation accuracy is affected when enabling and disabling explicit modeling of these two effects. The calculations were performed using a pencil kernel dose calculation algorithm that facilitates modeling of OAS through laterally varying kernel properties. Together with a multisource model that provides the lateral energy fluence distribution, this generates the total dose output, i.e., the dose per monitor unit, at an arbitrary point of interest. The dose calculation accuracy was evaluated through comparisons with 264 measured output factors acquired at 5, 10, and 20 cm depth in four different megavoltage photon beams. The measurements were performed up to 18 cm from the central beam axis, inside square fields of varying size and position. The results show that calculations including explicit modeling of OAS were considerably more accurate, by up to 4%, than those ignoring the lateral beam quality shift. The deviations caused by simplified head scatter modeling were smaller, but near the field edges additional errors close to 1% occurred. When enabling full physics modeling in the dose calculations, the deviations display a mean value of -0.1%, a standard deviation of 0.7%, and a maximum deviation of -2.2%. Finally, the results were analyzed in order to quantify and model the inherent uncertainties that are present when leaving the central beam axis. The off-axis uncertainty component was shown to increase with both off-axis distance and depth, reaching 1% (1 standard deviation) at 20 cm depth.

  14. Q-branch Raman scattering and modern kinetic theory

    SciTech Connect

    Monchick, L.

    1993-12-01

    The program is an extension of previous APL work whose general aim was to calculate line shapes of nearly resonant isolated line transitions with solutions of a popular quantum kinetic equation, the Waldmann-Snider equation, using well-known advanced solution techniques developed for the classical Boltzmann equation. The advanced techniques explored have been a BGK-type approximation, termed the Generalized Hess Method (GHM), and conversion of the collision operator to a block-diagonal matrix of symmetric collision kernels, which can then be approximated by discrete ordinate methods. The latter method, termed the Collision Kernel method (CC), is capable of the highest accuracy and has been used quite successfully for Q-branch Raman scattering. The GHM method, not quite as accurate, is applicable over a wider range of pressures and has proven quite useful.

  15. Choosing parameters of kernel subspace LDA for recognition of face images under pose and illumination variations.

    PubMed

    Huang, Jian; Yuen, Pong C; Chen, Wen-Sheng; Lai, Jian Huang

    2007-08-01

    This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distributions by mapping the input space to a high-dimensional feature space. Some recognition algorithms such as kernel principal component analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA have been developed in the last five years. The experimental results show that the kernel-based method is a good and feasible approach to tackle the pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which highly affects the generalization capability and stability of the kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which builds on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin maximization criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation on the generalization performance on pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with the existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.

  16. Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.

    PubMed

    Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I

    2016-03-01

    The effects of amygdaline from apricot kernel added to fodder on the growth of transplanted LYO-1 and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation produced a pronounced antitumor effect. Heat-processed apricot kernels given in 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of bacterial genome in the tumor.

  17. Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.

    PubMed

    Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I

    2016-03-01

    The effects of amygdaline from apricot kernel added to fodder on the growth of transplanted LYO-1 and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation produced a pronounced antitumor effect. Heat-processed apricot kernels given in 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of bacterial genome in the tumor. PMID:27021084

  18. Characterizing intimate mixtures of materials in hyperspectral imagery with albedo-based and kernel-based approaches

    NASA Astrophysics Data System (ADS)

    Rand, Robert S.; Resmini, Ronald G.; Allen, David W.

    2015-09-01

    Linear mixtures of materials in a scene often occur because the pixel size of a sensor is relatively large and consequently they contain patches of different materials within them. This type of mixing can be thought of as areal mixing and modeled by a linear mixture model with certain constraints on the abundances. The solution to these models has received a lot of attention. However, there are more complex situations, such as scattering that occurs in mixtures of vegetation and soil, or intimate mixing of granular materials like soils. Such multiple scattering and microscopic mixtures within pixels have varying degrees of non-linearity. In such cases, a linear model is not sufficient. Furthermore, often enough, scenes may contain cases of both linear and non-linear mixing on a pixel-by-pixel basis. This study considers two approaches for use as generalized methods for un-mixing pixels in a scene that may be linear (areal mixed) or non-linear (intimately mixed). The first method is based on earlier studies that indicate non-linear mixtures in reflectance space are approximately linear in albedo space. The method converts reflectance to single-scattering albedo (SSA) according to Hapke theory assuming bidirectional scattering at nadir look angles and uses a constrained linear model on the computed albedo values. The second method is motivated by the same idea, but uses a kernel that seeks to capture the linear behavior of albedo in non-linear mixtures of materials. The behavior of the kernel method is dependent on the value of a parameter, gamma. Furthermore, both methods are dependent on the choice of endmembers, and also on RMSE (root mean square error) as a performance metric. This study compares the two approaches and pays particular attention to these dependencies. Both laboratory and aerial collections of hyperspectral imagery are used to validate the methods.

  19. Kernel-imbedded Gaussian processes for disease classification using microarray gene expression data

    PubMed Central

    Zhao, Xin; Cheung, Leo Wang-Kit

    2007-01-01

    Background: Designing appropriate machine learning methods for identifying genes that have significant discriminating power for disease outcomes has become more and more important for our understanding of diseases at the genomic level. Although many machine learning methods have been developed and applied to the area of microarray gene expression data analysis, the majority of them are based on linear models, which however are not necessarily appropriate for the underlying connection between the target disease and its associated explanatory genes. Linear model based methods usually also bring in false positive significant features more easily. Furthermore, linear model based algorithms often involve calculating the inverse of a matrix that is possibly singular when the number of potentially important genes is relatively large. This leads to problems of numerical instability. To overcome these limitations, a few non-linear methods have recently been introduced to the area. Many of the existing non-linear methods have a couple of critical problems, the model selection problem and the model parameter tuning problem, that remain unsolved or even untouched. In general, a unified framework that allows model parameters of both linear and non-linear models to be easily tuned is always preferred in real-world applications. Kernel-induced learning methods form a class of approaches that show promising potential to achieve this goal. Results: A hierarchical statistical model named kernel-imbedded Gaussian process (KIGP) is developed under a unified Bayesian framework for binary disease classification problems using microarray gene expression data. In particular, based on a probit regression setting, an adaptive algorithm with a cascading structure is designed to find the appropriate kernel, to discover the potentially significant genes, and to make the optimal class prediction accordingly. A Gibbs sampler is built as the core of the algorithm to make Bayesian inferences

  20. Effective face recognition using bag of features with additive kernels

    NASA Astrophysics Data System (ADS)

    Yang, Shicai; Bebis, George; Chu, Yongjie; Zhao, Lindu

    2016-01-01

    In past decades, many techniques have been used to improve face recognition performance. The most common and well-studied ways are to use the whole face image to build a subspace based on the reduction of dimensionality. Differing from methods above, we consider face recognition as an image classification problem. The face images of the same person are considered to fall into the same category. Each category and each face image could be both represented by a simple pyramid histogram. Spatial dense scale-invariant feature transform features and bag of features method are used to build categories and face representations. In an effort to make the method more efficient, a linear support vector machine solver, Pegasos, is used for the classification in the kernel space with additive kernels instead of nonlinear SVMs. Our experimental results demonstrate that the proposed method can achieve very high recognition accuracy on the ORL, YALE, and FERET databases.
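
    A minimal sketch of the classification stage, assuming scikit-learn: an explicit feature map approximates an additive kernel (here the chi-squared kernel) so that a Pegasos-style stochastic linear SVM can be trained; the histogram inputs stand in for the spatial pyramid bag-of-features vectors:

        from sklearn.kernel_approximation import AdditiveChi2Sampler
        from sklearn.linear_model import SGDClassifier

        def train_face_classifier(histograms, labels):
            # Map histograms so that a linear SVM approximates the additive
            # chi-squared kernel, then fit with a hinge-loss SGD solver.
            fmap = AdditiveChi2Sampler(sample_steps=2)
            Z = fmap.fit_transform(histograms)
            clf = SGDClassifier(loss="hinge", alpha=1e-4)
            clf.fit(Z, labels)
            return fmap, clf

        def predict(fmap, clf, histograms):
            return clf.predict(fmap.transform(histograms))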

  1. Some physical properties of ginkgo nuts and kernels

    NASA Astrophysics Data System (ADS)

    Ch'ng, P. E.; Abdullah, M. H. R. O.; Mathai, E. J.; Yunus, N. A.

    2013-12-01

    Some data of the physical properties of ginkgo nuts at a moisture content of 45.53% (±2.07) (wet basis) and of their kernels at 60.13% (± 2.00) (wet basis) are presented in this paper. It consists of the estimation of the mean length, width, thickness, the geometric mean diameter, sphericity, aspect ratio, unit mass, surface area, volume, true density, bulk density, and porosity measures. The coefficient of static friction for nuts and kernels was determined by using plywood, glass, rubber, and galvanized steel sheet. The data are essential in the field of food engineering especially dealing with design and development of machines, and equipment for processing and handling agriculture products.

  2. Reproducing kernel particle method for free and forced vibration analysis

    NASA Astrophysics Data System (ADS)

    Zhou, J. X.; Zhang, H. Y.; Zhang, L.

    2005-01-01

    A reproducing kernel particle method (RKPM) is presented to analyze the natural frequencies of Euler-Bernoulli beams as well as Kirchhoff plates. In addition, RKPM is also used to predict the forced vibration responses of buried pipelines due to longitudinal travelling waves. Two different approaches, Lagrange multipliers and the transformation method, are employed to enforce essential boundary conditions. Based on the reproducing kernel approximation, the domain of interest is discretized by a set of particles without the employment of a structured mesh, which constitutes an advantage over the finite element method. Meanwhile, RKPM also exhibits advantages over the classical Rayleigh-Ritz method and its counterparts. Numerical results presented here demonstrate the effectiveness of this novel approach for both free and forced vibration analysis.

  3. Undersampled dynamic magnetic resonance imaging using kernel principal component analysis.

    PubMed

    Wang, Yanhua; Ying, Leslie

    2014-01-01

    Compressed sensing (CS) is a promising approach to accelerate dynamic magnetic resonance imaging (MRI). Most existing CS methods employ linear sparsifying transforms. The recent developments in non-linear or kernel-based sparse representations have been shown to outperform the linear transforms. In this paper, we present an iterative non-linear CS dynamic MRI reconstruction framework that uses kernel principal component analysis (KPCA) to exploit the sparseness of the dynamic image sequence in the feature space. Specifically, we apply KPCA to represent the temporal profiles of each spatial location and reconstruct the images through a modified pre-image problem. The underlying optimization algorithm is based on variable splitting and a fixed-point iteration method. Simulation results show that the proposed method outperforms the conventional CS method in terms of aliasing artifact reduction and kinetic information preservation. PMID:25570262
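
    A minimal sketch of the KPCA step on voxel temporal profiles, using scikit-learn's KernelPCA; its built-in inverse_transform stands in for the paper's modified pre-image problem, and the surrounding variable-splitting CS iteration is omitted:

        from sklearn.decomposition import KernelPCA

        def kpca_denoise_profiles(profiles, n_components=8, gamma=0.1):
            # profiles: (n_voxels, n_frames) array of temporal signals.
            kpca = KernelPCA(n_components=n_components, kernel="rbf",
                             gamma=gamma, fit_inverse_transform=True)
            coeffs = kpca.fit_transform(profiles)
            return kpca.inverse_transform(coeffs)   # approximate pre-images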

  4. Hydroxocobalamin treatment of acute cyanide poisoning from apricot kernels.

    PubMed

    Cigolini, Davide; Ricci, Giogio; Zannoni, Massimo; Codogni, Rosalia; De Luca, Manuela; Perfetti, Paola; Rocca, Giampaolo

    2011-05-24

    Clinical experience with hydroxocobalamin in acute cyanide poisoning via ingestion remains limited. This case concerns a 35-year-old mentally ill woman who consumed more than 20 apricot kernels. Published literature suggests each kernel would have contained cyanide concentrations ranging from 0.122 to 4.09 mg/g (average 2.92 mg/g). On arrival, the woman appeared asymptomatic with a raised pulse rate and slight metabolic acidosis. Forty minutes after admission (approximately 70 min postingestion), the patient experienced headache, nausea and dyspnoea, and was hypotensive, hypoxic and tachypnoeic. Following treatment with amyl nitrite and sodium thiosulphate, her methaemoglobin level was 10%. This prompted the administration of oxygen, which evoked a slight improvement in her vital signs. Hydroxocobalamin was then administered. After 24 h, she was completely asymptomatic with normalised blood pressure and other haemodynamic parameters. This case reinforces the safety and effectiveness of hydroxocobalamin in acute cyanide poisoning by ingestion.

  5. Realistic dispersion kernels applied to cohabitation reaction-dispersion equations

    NASA Astrophysics Data System (ADS)

    Isern, Neus; Fort, Joaquim; Pérez-Losada, Joaquim

    2008-10-01

    We develop front spreading models for several jump distance probability distributions (dispersion kernels). We derive expressions for a cohabitation model (cohabitation of parents and children) and a non-cohabitation model, and apply them to the Neolithic using data from real human populations. The speeds that we obtain are consistent with observations of the Neolithic transition. The correction due to the cohabitation effect is up to 38%.

  6. Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications

    SciTech Connect

    Jones, Terry R

    2011-01-01

    This paper describes a kernel scheduling algorithm that is based on co-scheduling principles and that is intended for parallel applications running on 1000 cores or more where inter-node scalability is key. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  7. Multilevel image recognition using discriminative patches and kernel covariance descriptor

    NASA Astrophysics Data System (ADS)

    Lu, Le; Yao, Jianhua; Turkbey, Evrim; Summers, Ronald M.

    2014-03-01

    Computer-aided diagnosis of medical images has emerged as an important tool to objectively improve the performance, accuracy and consistency of the clinical workflow. Computerizing medical image diagnostic recognition involves three fundamental problems: where to look (i.e., where is the region of interest in the whole image/volume), image feature description/encoding, and similarity metrics for classification or matching. In this paper, we present the motivation, implementation and performance evaluation of task-driven iterative, discriminative image patch mining; a covariance-matrix-based descriptor built from intensity, gradient and spatial layout; and a log-Euclidean distance kernel for support vector machines, to address these three aspects respectively. To cope with the often visually ambiguous image patterns for the region of interest in medical diagnosis, discovery of multilabel selective discriminative patches is desired. The covariance of several image statistics summarizes their second-order interactions within an image patch and has proved to be an effective image descriptor, with low dimensionality compared with joint statistics and fast computation regardless of the patch size. We extensively evaluate two extended Gaussian kernels using the affine-invariant Riemannian metric or the log-Euclidean metric with support vector machines (SVM), on two medical image classification problems: degenerative disc disease (DDD) detection on cortical shell unwrapped CT maps and colitis detection on CT key images. The proposed approach is validated with promising quantitative results on these challenging tasks. Our experimental findings and discussion also unveil some interesting insights on the covariance feature composition with or without spatial layout for classification and retrieval, and on different kernel constructions for SVM. This should also shed some light on future work using covariance features and kernel classification for medical image analysis.
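
    A minimal sketch of the two core ingredients named above, the covariance descriptor of an image patch and the log-Euclidean distance that can be plugged into an SVM kernel. The feature set (intensity, gradients, spatial layout) follows the description in the abstract, but the patch data and the regularization constant are illustrative.

    ```python
    import numpy as np

    def covariance_descriptor(patch):
        """Covariance of per-pixel features: intensity, |gx|, |gy|, x, y."""
        gy, gx = np.gradient(patch.astype(float))
        ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
        F = np.stack([patch.ravel(), np.abs(gx).ravel(), np.abs(gy).ravel(),
                      xs.ravel(), ys.ravel()], axis=1)
        # A small ridge keeps the matrix symmetric positive definite.
        return np.cov(F, rowvar=False) + 1e-6 * np.eye(5)

    def spd_log(C):
        """Matrix logarithm of an SPD matrix via its eigendecomposition."""
        w, V = np.linalg.eigh(C)
        return (V * np.log(w)) @ V.T

    def log_euclidean_dist(C1, C2):
        return np.linalg.norm(spd_log(C1) - spd_log(C2), "fro")

    rng = np.random.default_rng(1)
    p1, p2 = rng.random((32, 32)), rng.random((32, 32))
    d = log_euclidean_dist(covariance_descriptor(p1), covariance_descriptor(p2))
    # A log-Euclidean SVM kernel would then be k(p1, p2) = exp(-gamma * d**2).
    print("log-Euclidean distance:", d)
    ```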

  8. Cassane diterpenes from the seed kernels of Caesalpinia sappan.

    PubMed

    Nguyen, Hai Xuan; Nguyen, Nhan Trung; Dang, Phu Hoang; Thi Ho, Phuoc; Nguyen, Mai Thanh Thi; Van Can, Mao; Dibwe, Dya Fita; Ueda, Jun-Ya; Awale, Suresh

    2016-02-01

    Eight structurally diverse cassane diterpenes named tomocins A-H were isolated from the seed kernels of Vietnamese Caesalpinia sappan Linn. Their structures were determined by extensive NMR and CD spectroscopic analysis. Among the isolated compounds, tomocin A and phanginins A, F, and H exhibited mild preferential cytotoxicity against PANC-1 human pancreatic cancer cells under nutrient-deprived conditions without causing toxicity under normal nutrient-rich conditions.

  9. Instantaneous Bethe-Salpeter kernel for the lightest pseudoscalar mesons

    NASA Astrophysics Data System (ADS)

    Lucha, Wolfgang; Schöberl, Franz F.

    2016-05-01

    Starting from a phenomenologically successful, numerical solution of the Dyson-Schwinger equation that governs the quark propagator, we reconstruct in detail the interaction kernel that has to enter the instantaneous approximation to the Bethe-Salpeter equation to allow us to describe the lightest pseudoscalar mesons as quark-antiquark bound states exhibiting the (almost) masslessness necessary for them to be interpretable as the (pseudo) Goldstone bosons related to the spontaneous chiral symmetry breaking of quantum chromodynamics.

  10. Benchmarking NWP Kernels on Multi- and Many-core Processors

    NASA Astrophysics Data System (ADS)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.

  11. Mapping quantitative trait loci for kernel composition in almond

    PubMed Central

    2012-01-01

    Background Almond breeding is increasingly taking into account kernel quality as a breeding objective. Information on the parameters to be considered in evaluating almond quality, such as protein and oil content, as well as oleic acid and tocopherol concentration, has been recently compiled. The genetic control of these traits has not yet been studied in almond, although this information would improve the efficiency of almond breeding programs. Results A map with 56 simple sequence repeat or microsatellite (SSR) markers was constructed for an almond population showing a wide range of variability for the chemical components of the almond kernel. A total of 12 putative quantitative trait loci (QTL) controlling these chemical traits have been detected in this analysis, corresponding to seven genomic regions of the eight almond linkage groups (LG). Some QTL were clustered in the same region or shared the same molecular markers, according to the correlations already found between the chemical traits. The logarithm of the odds (LOD) values for any given trait ranged from 2.12 to 4.87, explaining from 11.0 to 33.1% of the phenotypic variance of the trait. Conclusions The results produced in the study offer the opportunity to include the new genetic information in almond breeding programs. Increases in the positive traits of kernel quality may be looked for simultaneously whenever they are genetically independent, even if they are negatively correlated. We have provided the first genetic framework for the chemical components of the almond kernel, with twelve QTL in agreement with the large number of genes controlling their metabolism. PMID:22720975

  12. Equilibrium studies of copper ion adsorption onto palm kernel fibre.

    PubMed

    Ofomaja, Augustine E

    2010-07-01

    The equilibrium sorption of copper ions from aqueous solution using a new adsorbent, palm kernel fibre, has been studied. Palm kernel fibre is obtained in large amounts as a waste product of palm oil production. Batch equilibrium studies were carried out and system variables such as solution pH, sorbent dose, and sorption temperature were varied. The equilibrium sorption data were then analyzed using the Langmuir, Freundlich, Dubinin-Radushkevich (D-R) and Temkin isotherms. The fit of these isotherm models to the equilibrium sorption data was determined using the linear coefficient of determination, r², and the non-linear Chi-square, χ², error analysis. The results revealed that sorption was pH dependent and increased with increasing solution pH above the pH_PZC of the palm kernel fibre, with an optimum dose of 10 g/dm³. The equilibrium data were found to fit the Langmuir isotherm model best, with a monolayer capacity of 3.17 × 10⁻⁴ mol/g at 339 K. The sorption equilibrium constant, K_a, increased with increasing temperature, indicating that bond strength between sorbate and sorbent increased with temperature and sorption was endothermic. This was confirmed by the increase in the values of the Temkin isotherm constant, B_1, with increasing temperature. The Dubinin-Radushkevich (D-R) isotherm parameter, free energy, E, was in the range of 15.7-16.7 kJ/mol, suggesting that the sorption mechanism was ion exchange. Desorption studies showed that a high percentage of the copper was desorbed from the adsorbent using acid solutions (HCl, HNO₃ and CH₃COOH) and the desorption percentage increased with acid concentration. The thermodynamics of the copper ions/palm kernel fibre system indicate that the process is spontaneous and endothermic. PMID:20346574
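
    As a worked illustration of the isotherm fitting used here, the sketch below estimates the Langmuir parameters by non-linear least squares. The data points are invented for illustration and are not the paper's measurements; only the functional form q_e = q_max K_a C_e / (1 + K_a C_e) is the standard Langmuir isotherm referred to above.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(Ce, qmax, Ka):
        """Langmuir isotherm: q_e = qmax * Ka * Ce / (1 + Ka * Ce)."""
        return qmax * Ka * Ce / (1.0 + Ka * Ce)

    # Illustrative equilibrium data (Ce in mol/dm^3, qe in mol/g), not the paper's.
    Ce = np.array([2e-5, 5e-5, 1e-4, 2e-4, 5e-4, 1e-3])
    qe = np.array([0.9e-4, 1.6e-4, 2.1e-4, 2.6e-4, 3.0e-4, 3.1e-4])

    (qmax, Ka), _ = curve_fit(langmuir, Ce, qe, p0=(3e-4, 1e4))
    ss_res = np.sum((qe - langmuir(Ce, qmax, Ka)) ** 2)
    ss_tot = np.sum((qe - qe.mean()) ** 2)
    print(f"qmax = {qmax:.3e} mol/g, Ka = {Ka:.3e}, r2 = {1 - ss_res/ss_tot:.4f}")
    ```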

  13. Deproteinated palm kernel cake-derived oligosaccharides: A preliminary study

    NASA Astrophysics Data System (ADS)

    Fan, Suet Pin; Chia, Chin Hua; Fang, Zhen; Zakaria, Sarani; Chee, Kah Leong

    2014-09-01

    A preliminary study on the microwave-assisted hydrolysis of deproteinated palm kernel cake (DPKC) with succinic acid to produce oligosaccharides was performed. Three important factors, i.e., temperature, acid concentration and reaction time, were selected for the hydrolysis processes. Results showed that the highest yield of DPKC-derived oligosaccharides was obtained at 170 °C, 0.2 N succinic acid and 20 min of reaction time.

  14. Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism

    SciTech Connect

    Jones, Terry R

    2012-01-01

    This paper describes a kernel scheduling algorithm that is based on coscheduling principles and that is intended for parallel applications running on 1000 cores or more. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  15. Kernel Feature Cross-Correlation for Unsupervised Quantification of Damage from Windthrow in Forests

    NASA Astrophysics Data System (ADS)

    Pirotti, F.; Travaglini, D.; Giannetti, F.; Kutchartt, E.; Bottalico, F.; Chirici, G.

    2016-06-01

    In this study, estimation of tree damage from a windthrow event using feature detection on high-resolution RGB imagery is assessed. An accurate quantitative assessment of the damage in terms of volume is important and can be done by ground sampling, which is notably expensive and time-consuming, or by manual interpretation and analysis of aerial images. The latter manual method requires an expert operator investing time to manually detect damaged trees and apply relation functions between measures and volume, which are also error-prone. In the proposed method, RGB images with 0.2 m ground sample distance are analysed using an adaptive template matching method. Ten images corresponding to ten separate study areas are tested. A 13 × 13 pixel kernel with a simplified linear-feature representation of a cylinder is applied at different rotation angles (from 0° to 170° in 10° steps). The highest value of the normalized cross-correlation (NCC) over all angles is recorded for each pixel of each image. Several features are tested: percentiles (75, 80, 85, 90, 95, 99, max) and the sum and number of pixels with NCC above 0.55. Three regression methods are tested: multiple regression (mr), support vector machines (svm) with a linear kernel, and random forests. The first two methods gave the best results. The ground-truth was acquired by ground sampling, and total volumes of damaged trees are estimated for each of the 10 areas. Damaged volumes in the ten areas range from ~1.8 × 10² m³ to ~1.2 × 10⁴ m³. Regression results show that the svm regression method on the sum feature gives an R-squared of 0.92, a mean absolute error (MAE) of 255 m³ and a relative absolute error (RAE) of 34% using leave-one-out cross-validation over the 10 observations. These initial results are encouraging and support further investigations on more finely tuned kernel template metrics to define an unsupervised image analysis process to automatically assess forest damage from windthrow.
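
    The sketch below reproduces the flavour of the matching step: a 13 × 13 zero-mean line kernel is rotated from 0° to 170° in 10° steps, and the per-pixel maximum normalized cross-correlation is kept. The image is synthetic and the kernel is a crude one-pixel line rather than the paper's tuned cylinder template.

    ```python
    import numpy as np
    from scipy.ndimage import correlate, uniform_filter

    def line_kernel(size=13, angle_deg=0.0):
        """Zero-mean, unit-norm kernel with a 1-pixel line through the centre."""
        k = np.zeros((size, size))
        c = size // 2
        t = np.deg2rad(angle_deg)
        for r in np.linspace(-c, c, 4 * size):
            i, j = int(round(c + r * np.sin(t))), int(round(c + r * np.cos(t)))
            k[i, j] = 1.0
        k -= k.mean()
        return k / np.linalg.norm(k)

    def max_ncc(image, size=13, angles=range(0, 180, 10)):
        """Per-pixel maximum normalized cross-correlation over all rotations."""
        n = size * size
        mu = uniform_filter(image, size)
        var = np.clip(uniform_filter(image**2, size) - mu**2, 1e-12, None)
        denom = np.sqrt(n * var)       # with a zero-mean unit-norm kernel,
        best = np.full(image.shape, -1.0)   # NCC = correlation / denom
        for a in angles:
            ncc = correlate(image, line_kernel(size, a)) / denom
            best = np.maximum(best, ncc)
        return best

    rng = np.random.default_rng(2)
    img = rng.random((64, 64))
    img[32, 10:54] += 2.0              # a synthetic fallen "stem"
    print("peak NCC:", max_ncc(img).max())
    ```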

  16. Knowledge Driven Image Mining with Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Oza, Nikunj

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven image mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational costs. In this paper, we present the theory of Mercer Kernels, describe its use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Imaging Spectroradiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods on the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.

  17. KNBD: A Remote Kernel Block Server for Linux

    NASA Technical Reports Server (NTRS)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void and hence a demand for such a system on Beowulf-type PC Clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system is all in user-space. Migrating their I/O services to the kernel could provide a performance boost, by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.

  18. Biodiesel from Siberian apricot (Prunus sibirica L.) seed kernel oil.

    PubMed

    Wang, Libing; Yu, Haiyan

    2012-05-01

    In this paper, Siberian apricot (Prunus sibirica L.) seed kernel oil was investigated for the first time as a promising non-conventional feedstock for the preparation of biodiesel. Siberian apricot seed kernels have a high oil content (50.18 ± 3.92%), and the oil has a low acid value (0.46 mg g⁻¹) and low water content (0.17%). The fatty acid composition of the Siberian apricot seed kernel oil includes a high percentage of oleic acid (65.23 ± 4.97%) and linoleic acid (28.92 ± 4.62%). The measured fuel properties of the Siberian apricot biodiesel, except cetane number and oxidative stability, conformed to the EN 14214-08, ASTM D6751-10 and GB/T 20828-07 standards; the cold flow properties in particular were excellent (cold filter plugging point −14 °C). The addition of 500 ppm tert-butylhydroquinone (TBHQ) resulted in a longer induction period (7.7 h), compliant with all three biodiesel standards. PMID:22440572

  1. Hyperspectral-imaging-based techniques applied to wheat kernels characterization

    NASA Astrophysics Data System (ADS)

    Serranti, Silvia; Cesare, Daniela; Bonifazi, Giuseppe

    2012-05-01

    Single kernels of durum wheat have been analyzed by hyperspectral imaging (HSI). This approach is based on an integrated hardware and software architecture able to digitally capture and handle spectra as an image sequence along a pre-defined alignment on a suitably illuminated sample surface. The study investigated the possibility of applying HSI techniques to the classification of different types of wheat kernels: vitreous, yellow berry and fusarium-damaged. Reflectance spectra of selected wheat kernels of the three typologies were acquired by a laboratory device equipped with an HSI system working in the near infrared field (1000-1700 nm). The hypercubes were analyzed applying principal component analysis (PCA) to reduce the high dimensionality of the data and to select effective wavelengths. Partial least squares discriminant analysis (PLS-DA) was applied for classification of the three wheat typologies. The study demonstrated that good classification results were obtained not only when considering the entire investigated wavelength range, but also when selecting only four optimal wavelengths (1104, 1384, 1454 and 1650 nm) out of 121. The developed procedures based on HSI can be utilized for quality control purposes or for the definition of innovative sorting logics for wheat.
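
    The classification pipeline described above (PCA for dimensionality reduction followed by PLS-DA) can be sketched as follows. The spectra are random stand-ins for the 121-band kernel spectra, and PLS-DA is implemented, as is common, as PLS regression on one-hot class indicators; all sizes and parameters are illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    # Toy stand-ins for kernel spectra: 90 samples x 121 bands, 3 classes
    # (e.g. vitreous, yellow berry, fusarium-damaged).
    X = rng.standard_normal((90, 121))
    y = np.repeat([0, 1, 2], 30)
    X += y[:, None] * np.linspace(0, 1, 121)   # class-dependent spectral slope

    scores = PCA(n_components=10).fit_transform(X)   # compress the spectra
    Y = np.eye(3)[y]                                 # one-hot targets for PLS-DA
    plsda = PLSRegression(n_components=4).fit(scores, Y)
    pred = plsda.predict(scores).argmax(axis=1)
    print("training accuracy:", (pred == y).mean())
    ```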

  2. Reduced-size kernel models for nonlinear hybrid system identification.

    PubMed

    Le, Van Luong; Bloch, Gérard; Lauer, Fabien

    2011-12-01

    This brief paper focuses on the identification of nonlinear hybrid dynamical systems, i.e., systems switching between multiple nonlinear dynamical behaviors. Thus the aim is to learn an ensemble of submodels from a single set of input-output data in a regression setting with no prior knowledge on the grouping of the data points into similar behaviors. To be able to approximate arbitrary nonlinearities, kernel submodels are considered. However, in order to maintain efficiency when applying the method to large data sets, a preprocessing step is required in order to fix the submodel sizes and limit the number of optimization variables. This brief paper proposes four approaches, respectively inspired by the fixed-size least-squares support vector machines, the feature vector selection method, the kernel principal component regression and a modification of the latter, in order to deal with this issue and build sparse kernel submodels. These are compared in numerical experiments, which show that the proposed approach achieves the simultaneous classification of data points and approximation of the nonlinear behaviors in an efficient and accurate manner.

  3. Fast metabolite identification with Input Output Kernel Regression

    PubMed Central

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-01-01

    Motivation: An important problem in metabolomics is to identify metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the molecule space. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a pre-image problem, consisting of mapping the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307628
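
    A compact sketch of the two-phase idea with Gaussian kernels on vector stand-ins for spectra and molecules: phase one is kernel ridge regression into the output-kernel feature space; phase two scores database candidates by their implied inner product (the pre-image step). All data here are synthetic, and the kernel widths and regularization are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    X = rng.standard_normal((100, 20))                    # training "spectra"
    Y = np.tanh(X @ rng.standard_normal((20, 15)))        # paired "molecules"
    candidates = np.vstack([Y, rng.standard_normal((200, 15))])  # database

    def rbf(A, B, s=5.0):
        d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * s * s))

    # Phase 1: regression from input space into the output-kernel feature
    # space; with input kernel Kx the learned map is encoded by (Kx + l*I)^-1.
    Kx, lam = rbf(X, X), 1e-2
    C = np.linalg.solve(Kx + lam * np.eye(len(X)), np.eye(len(X)))

    def predict(x_new):
        kx = rbf(x_new[None], X)      # similarities to training spectra
        w = kx @ C                    # weights over training outputs
        # Phase 2 (pre-image): pick the candidate maximizing the implied
        # inner product <h(x), psi(y)> = sum_i w_i * Ky(y_i, y).
        scores = (w @ rbf(Y, candidates)).ravel()
        return candidates[np.argmax(scores)]

    print("recovers training molecule:", np.allclose(predict(X[0]), Y[0]))
    ```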

  4. Dimensionality reduction of hyperspectral images using kernel ICA

    NASA Astrophysics Data System (ADS)

    Khan, Asif; Kim, Intaek; Kong, Seong G.

    2009-05-01

    The computational burden caused by the high dimensionality of hyperspectral images is an obstacle to their efficient analysis and processing. In this paper, we use kernel independent component analysis (KICA) for dimensionality reduction of hyperspectral images based on band selection. Commonly used ICA- and PCA-based dimensionality reduction methods do not consider nonlinear transformations and assume that the data have a non-Gaussian distribution. When the relation between the source signals (pure materials) and the observed hyperspectral images is nonlinear, these methods drop a great deal of information during the dimensionality reduction process. Recent research shows that kernel-based methods are effective for nonlinear transformations. KICA is a robust blind source separation technique that can work even on near-Gaussian data. We use KICA to select the minimum number of bands that contain the maximum information for detection in hyperspectral images. The reduction of bands is based on the evaluation of the weight matrix generated by KICA. From the selected bands, we generate a new spectral image of reduced dimension and use it for hyperspectral image analysis. We use this technique as a preprocessing step in the detection and classification of poultry skin tumors. The hyperspectral image samples of chicken tumors used contain 65 spectral bands of fluorescence in the visible region of the spectrum. Experimental results show that KICA-based band selection achieves higher accuracy than fastICA-based band selection for dimensionality reduction and analysis of hyperspectral images.

  5. Noise Level Estimation for Model Selection in Kernel PCA Denoising.

    PubMed

    Varon, Carolina; Alzate, Carlos; Suykens, Johan A K

    2015-11-01

    One of the main challenges in unsupervised learning is to find suitable values for the model parameters. In kernel principal component analysis (kPCA), for example, these are the number of components, the kernel, and its parameters. This paper presents a model selection criterion based on distance distributions (MDDs). This criterion can be used to find the number of components and the σ² parameter of radial basis function kernels by means of spectral comparison between information and noise. The noise content is estimated from the statistical moments of the distribution of distances in the original dataset. This allows for a type of randomization of the dataset, without actually having to permute the data points or generate artificial datasets. After comparing the eigenvalues computed from the estimated noise with the ones from the input dataset, information is retained and maximized by a set of model parameters. In addition to the model selection criterion, this paper proposes a modification to the fixed-size method and uses the incomplete Cholesky factorization, both of which are used to solve kPCA in large-scale applications. These two approaches, together with the model selection MDD, were tested in toy examples and real life applications, and it is shown that they outperform other known algorithms. PMID:25608316

  6. Predicting activity approach based on new atoms similarity kernel function.

    PubMed

    Abu El-Atta, Ahmed H; Moussa, M I; Hassanien, Aboul Ella

    2015-07-01

    Drug design is a high-cost and long-term process. To reduce the time and cost of drug discovery, new techniques are needed. The chemoinformatics field applies informational techniques and computer science, such as machine learning and graph theory, to discover the properties of chemical compounds, such as toxicity or biological activity, by analyzing their molecular structure (molecular graph). There is therefore an increasing need for algorithms that analyze and classify graph data to predict the activity of molecules. Kernel methods provide a powerful framework that combines machine learning with graph theory techniques, and they have led to impressive performance in several chemoinformatics problems such as biological activity prediction. This paper presents a new approach based on kernel functions to solve the activity prediction problem for chemical compounds. First we encode all atoms depending on their neighbors, then we use these codes to find relationships between the atoms. We then use the relations between different atoms to find the similarity between chemical compounds. The proposed approach was compared with many other classification methods and the results show accuracy competitive with these methods.

  7. Initial Kernel Timing Using a Simple PIM Performance Model

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.; Block, Gary L.; Springer, Paul L.; Sterling, Thomas; Brockman, Jay B.; Callahan, David

    2005-01-01

    This presentation will describe some initial results of paper-and-pencil studies of 4 or 5 application kernels applied to a processor-in-memory (PIM) system roughly similar to the Cascade Lightweight Processor (LWP). The application kernels are: * Linked list traversal * Sum of leaf nodes on a tree * Bitonic sort * Vector sum * Gaussian elimination The intent of this work is to guide and validate work on the Cascade project in the areas of compilers, simulators, and languages. We will first discuss the generic PIM structure. Then, we will explain the concepts needed to program a parallel PIM system (locality, threads, parcels). Next, we will present a simple PIM performance model that will be used in the remainder of the presentation. For each kernel, we will then present a set of codes, including codes for a single PIM node, and codes for multiple PIM nodes that move data to threads and move threads to data. These codes are written at a fairly low level, between assembly and C, but much closer to C than to assembly. For each code, we will present some hand-drafted timing forecasts, based on the simple PIM performance model. Finally, we will conclude by discussing what we have learned from this work, including what programming styles seem to work best, from the point-of-view of both expressiveness and performance.
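
    The abstract does not give the performance model itself, so the sketch below shows only the general shape such an additive cost model might take. Every parameter value and both scenarios are hypothetical, not Cascade LWP numbers.

    ```python
    # Hypothetical additive cost model: compute + local memory accesses +
    # thread-spawn and parcel (remote message) overheads. All constants invented.
    def kernel_time(ops, loads, thread_spawns, parcels,
                    t_op=1e-9, t_load=5e-9, t_thread=2e-7, t_parcel=1e-6):
        return ops * t_op + loads * t_load + thread_spawns * t_thread + parcels * t_parcel

    # Compare "move data to threads" vs "move threads to data" for a list walk:
    # shipping the thread to the data is assumed to batch remote traffic.
    n = 10_000
    t_move_data    = kernel_time(ops=2 * n, loads=n, thread_spawns=1, parcels=n)
    t_move_threads = kernel_time(ops=2 * n, loads=n, thread_spawns=1, parcels=n // 100)
    print(f"move data: {t_move_data:.4f} s, move threads: {t_move_threads:.4f} s")
    ```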

  8. Hyperspectral anomaly detection using sparse kernel-based ensemble learning

    NASA Astrophysics Data System (ADS)

    Gurram, Prudhvi; Han, Timothy; Kwon, Heesung

    2011-06-01

    In this paper, sparse kernel-based ensemble learning for hyperspectral anomaly detection is proposed. The proposed technique aims to optimize an ensemble of kernel-based one-class classifiers, such as Support Vector Data Description (SVDD) classifiers, by estimating optimal sparse weights. In this method, hyperspectral signatures are first randomly sub-sampled into a large number of spectral feature subspaces. An enclosing hypersphere that defines the support of the spectral data, corresponding to the normalcy/background data, in the Reproducing Kernel Hilbert Space (RKHS) of each respective feature subspace is then estimated using regular SVDD. The enclosing hypersphere basically represents the spectral characteristics of the background data in the respective feature subspace. The joint hypersphere is learned by optimally combining the hyperspheres from the individual RKHSs, while imposing the l1 constraint on the combining weights. The joint hypersphere, representing the most optimal compact support of the local hyperspectral data in the joint feature subspaces, is then used to test each pixel in the hyperspectral image data to determine if it belongs to the local background data or not. The outliers are considered to be targets. A performance comparison between the proposed technique and the regular SVDD is provided using the HYDICE hyperspectral images.
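
    A minimal sketch of the ensemble idea, with scikit-learn's OneClassSVM standing in for SVDD (with an RBF kernel the two are closely related) and uniform averaging standing in for the paper's l1-constrained optimal weights. The spectra, subspace sizes, and kernel parameters are all synthetic and illustrative.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(7)
    bg = rng.standard_normal((300, 30))            # background "spectra"
    test = np.vstack([rng.standard_normal((5, 30)),
                      rng.standard_normal((5, 30)) + 3.0])  # last 5 are anomalies

    # One one-class model per random spectral sub-band subspace.
    models, bands = [], []
    for _ in range(10):
        idx = rng.choice(30, size=10, replace=False)
        bands.append(idx)
        models.append(OneClassSVM(kernel="rbf", gamma=0.1, nu=0.1).fit(bg[:, idx]))

    # Uniform-weight ensemble score; negative means outside the joint support.
    score = np.mean([m.decision_function(test[:, i])
                     for m, i in zip(models, bands)], axis=0)
    print("flagged anomalies at indices:", np.where(score < 0)[0])
    ```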

  9. Kernel Averaged Predictors for Spatio-Temporal Regression Models.

    PubMed

    Heaton, Matthew J; Gelfand, Alan E

    2012-12-01

    In applications where covariates and responses are observed across space and time, a common goal is to quantify the effect of a change in the covariates on the response while adequately accounting for the spatio-temporal structure of the observations. The most common approach for building such a model is to confine the relationship between a covariate and response variable to a single spatio-temporal location. However, oftentimes the relationship between the response and predictors may extend across space and time. In other words, the response may be affected by levels of predictors in spatio-temporal proximity to the response location. Here, a flexible modeling framework is proposed to capture such spatial and temporal lagged effects between a predictor and a response. Specifically, kernel functions are used to weight a spatio-temporal covariate surface in a regression model for the response. The kernels are assumed to be parametric and non-stationary with the data informing the parameter values of the kernel. The methodology is illustrated on simulated data as well as a physical data set of ozone concentrations to be explained by temperature. PMID:24010051

  10. Open-cluster density profiles derived using a kernel estimator

    NASA Astrophysics Data System (ADS)

    Seleznev, Anton F.

    2016-03-01

    Surface and spatial radial density profiles in open clusters are derived using a kernel estimator method. Formulae are obtained for the contribution of every star into the spatial density profile. The evaluation of spatial density profiles is tested against open-cluster models from N-body experiments with N = 500. Surface density profiles are derived for seven open clusters (NGC 1502, 1960, 2287, 2516, 2682, 6819 and 6939) using Two-Micron All-Sky Survey data and for different limiting magnitudes. The selection of an optimal kernel half-width is discussed. It is shown that open-cluster radius estimates hardly depend on the kernel half-width. Hints of stellar mass segregation and structural features indicating cluster non-stationarity in the regular force field are found. A comparison with other investigations shows that the data on open-cluster sizes are often underestimated. The existence of an extended corona around the open cluster NGC 6939 was confirmed. A combined function composed of the King density profile for the cluster core and the uniform sphere for the cluster corona is shown to be a better approximation of the surface radial density profile. The King function alone does not reproduce surface density profiles of sample clusters properly. The number of stars, the cluster masses and the tidal radii in the Galactic gravitational field for the sample clusters are estimated. It is shown that NGC 6819 and 6939 are extended beyond their tidal surfaces.
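
    A minimal sketch of a kernel estimator for the surface radial density profile, using an Epanechnikov kernel on a toy Gaussian cluster. The half-width h is the tuning parameter whose selection is discussed above; the per-star kernel contributions are summed and divided by the ring circumference to give stars per unit area.

    ```python
    import numpy as np

    def epanechnikov(u, h):
        w = 1.0 - (u / h) ** 2
        return np.where(np.abs(u) < h, 0.75 * w / h, 0.0)

    def surface_density(radii, r_grid, h):
        """Kernel estimate of Sigma(r): smooth the stellar radii, then
        divide by the circumference of each ring."""
        K = epanechnikov(r_grid[:, None] - radii[None, :], h)
        return K.sum(axis=1) / (2.0 * np.pi * np.clip(r_grid, h / 10, None))

    rng = np.random.default_rng(4)
    xy = rng.standard_normal((500, 2)) * 1.5     # toy projected cluster
    radii = np.hypot(xy[:, 0], xy[:, 1])
    r = np.linspace(0.05, 6.0, 60)
    sigma = surface_density(radii, r, h=0.5)     # stars per unit area
    print("central density:", sigma[0], " outer density:", sigma[-1])
    ```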

  11. Near infrared hyperspectral imaging for the evaluation of endosperm texture in whole yellow maize (Zea mays L.) kernels.

    PubMed

    Manley, Marena; Williams, Paul; Nilsson, David; Geladi, Paul

    2009-10-14

    Near infrared hyperspectral images (HSI) were recorded for whole yellow maize kernels (commercial hybrids) defined as either hard, intermediate, or soft by experienced maize breeders. The images were acquired with a linescan (pushbroom) instrument using a HgCdTe detector. The final image size was 570 × 219 pixels in 239 wavelength bands from 1000 to 2498 nm in steps of approximately 6.5 nm. Multivariate image cleaning was used to remove background and optical errors, in which about two-thirds of all pixels were removed. The cleaned image was used to calculate a principal component analysis (PCA) model after multiplicative scatter correction (MSC) and mean-centering were applied. It was possible to find clusters representing vitreous and floury endosperm (different types of endosperm present in varying ratios in hard and soft kernels) as well as a third type of endosperm by interactively delineating polygon based clusters in the score plot of the second and fourth principal components and projecting the results on the image space. Chemical interpretation of the loading line plots shows the effect of starch density and the protein matrix. The vitreous and floury endosperm clusters were used to make a partial least-squares discriminant analysis (PLS-DA) model, using four components, with a coefficient of determination (R²) for the y data (kernel hardness category) for the training set of over 85%. This PLS-DA model could be used for prediction in a test set. We show how the prediction images can be interpreted, thus confirming the validity of the PCA classification. The technique presented here is very powerful for laboratory studies of small cereal samples in order to produce localized information. PMID:19728712

  12. Organizing for ontological change: The kernel of an AIDS research infrastructure

    PubMed Central

    Polk, Jessica Beth

    2015-01-01

    Is it possible to prepare and plan for emergent and changing objects of research? Members of the Multicenter AIDS Cohort Study have been investigating AIDS for over 30 years, and in that time, the disease has been repeatedly transformed. Over the years and across many changes, members have continued to study HIV disease while in the process regenerating an adaptable research organization. The key to sustaining this technoscientific flexibility has been what we call the kernel of a research infrastructure: ongoing efforts to maintain the availability of resources and services that may be brought to bear in the investigation of new objects. In the case of the Multicenter AIDS Cohort Study, these resources are as follows: specimens and data, calibrated instruments, heterogeneous experts, and participating cohorts of gay and bisexual men. We track three ontological transformations, examining how members prepared for and responded to changes: the discovery of a novel retroviral agent (HIV), the ability to test for that agent, and the transition of the disease from fatal to chronic through pharmaceutical intervention. Respectively, we call the work, technologies, and techniques of adapting to these changes ‘repurposing’, ‘elaborating’, and ‘extending the kernel’. PMID:26477206

  13. Classification of corn kernels contaminated with aflatoxins using fluorescence and reflectance hyperspectral images analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Fengle; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert; Bhatnagar, Deepak; Cleveland, Thomas

    2015-05-01

    Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive. This study employed fluorescence and reflectance visible near-infrared (VNIR) hyperspectral images to classify aflatoxin contaminated corn kernels rapidly and non-destructively. Corn ears were artificially inoculated in the field with toxigenic A. flavus spores at the early dough stage of kernel development. After harvest, a total of 300 kernels were collected from the inoculated ears. Fluorescence hyperspectral imagery with UV excitation and reflectance hyperspectral imagery with halogen illumination were acquired on both endosperm and germ sides of kernels. All kernels were then subjected to chemical analysis individually to determine aflatoxin concentrations. A region of interest (ROI) was created for each kernel to extract averaged spectra. Compared with healthy kernels, fluorescence spectral peaks for contaminated kernels shifted to longer wavelengths with lower intensity, and reflectance values for contaminated kernels were lower, with a different spectral shape in the 700-800 nm region. Principal component analysis was applied for data compression before classifying kernels into contaminated and healthy based on a 20 ppb threshold utilizing the K-nearest neighbors algorithm. The best overall accuracy achieved was 92.67% for the germ side in the fluorescence data analysis. The germ side generally performed better than the endosperm side. Fluorescence and reflectance image data achieved similar accuracy.

  14. General-form 3-3-3 interpolation kernel and its simplified frequency-response derivation

    NASA Astrophysics Data System (ADS)

    Deng, Tian-Bo

    2016-11-01

    An interpolation kernel is required in a wide variety of signal processing applications such as image interpolation and timing adjustment in digital communications. This article presents a general-form interpolation kernel called 3-3-3 interpolation kernel and derives its frequency response in a closed-form by using a simple derivation method. This closed-form formula is preliminary to designing various 3-3-3 interpolation kernels subject to a set of design constraints. The 3-3-3 interpolation kernel is formed through utilising the third-degree piecewise polynomials, and it is an even-symmetric function. Thus, it will suffice to consider only its right-hand side when deriving its frequency response. Since the right-hand side of the interpolation kernel contains three piecewise polynomials of the third degree, i.e. the degrees of the three piecewise polynomials are (3,3,3), we call it the 3-3-3 interpolation kernel. Once the general-form frequency-response formula is derived, we can systematically formulate the design of various 3-3-3 interpolation kernels subject to a set of design constraints, which are targeted for different interpolation applications. Therefore, the closed-form frequency-response expression is preliminary to the optimal design of various 3-3-3 interpolation kernels. We will use an example to show the optimal design of a 3-3-3 interpolation kernel based on the closed-form frequency-response expression.
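
    The closed-form derivation itself is the subject of the paper; as a purely numerical cross-check, one can exploit the same even symmetry and integrate h(t)·cos(ωt) directly. The sketch below applies this to the two-piece Keys cubic convolution kernel as a stand-in, since the 3-3-3 coefficients are not given here; the recipe is unchanged for a three-piece kernel with support [-3, 3].

    ```python
    import numpy as np

    def keys_cubic(t, a=-0.5):
        """Keys cubic convolution kernel: an even, piecewise-cubic interpolator
        (two cubic pieces per side; a 3-3-3 kernel would have three)."""
        t = np.abs(t)
        out = np.zeros_like(t)
        m1 = t <= 1
        m2 = (t > 1) & (t < 2)
        out[m1] = (a + 2) * t[m1]**3 - (a + 3) * t[m1]**2 + 1
        out[m2] = a * (t[m2]**3 - 5 * t[m2]**2 + 8 * t[m2] - 4)
        return out

    def freq_response(kernel, support, omegas, n=20001):
        """H(w) = integral of h(t) * cos(w t) dt, valid for even-symmetric h."""
        t = np.linspace(-support, support, n)
        h, dt = kernel(t), t[1] - t[0]
        return np.array([(h * np.cos(w * t)).sum() * dt for w in omegas])

    w = np.linspace(0, 2 * np.pi, 5)
    print(freq_response(keys_cubic, 2.0, w))   # H(0) = 1 for a DC-preserving kernel
    ```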

  15. Hypothesis of a daemon kernel of the Earth

    NASA Astrophysics Data System (ADS)

    Drobyshevski, E. M.

    2004-01-01

    The paper considers the fate of the electrically charged (Ze ≈ 10e) Planckian elementary black holes, namely, daemons, making up the dark matter of the Galactic disc, which, as follows from our measurements, were trapped by the Earth during 4.5 Gyears in an amount equal to approximately 10²⁴. Owing to their huge mass (about 2 × 10⁻⁸ kg), these particles settle down to the Earth's centre to form a kernel. Assuming that the excess flux of 10-20 TW over the heat flux level produced by known sources, which is quoted by many researchers, is due to the energy liberated in the outer kernel layers in daemon-stimulated proton decay of Fe nuclei, we have come to the conclusion that the Earth's kernel is at present a fraction of a metre in size. The observed mantle flux of ³He (and the limiting ³He to ⁴He ratio of about 10⁻⁴ itself) can be provided if at least one ³He (or ³T) nucleus is emitted in a daemon-stimulated decay of 10²-10³ Fe nuclei. This could actually remove the only objection to the hot origin of the Earth and to its original melting. The high energy liberation at the centre of the Earth drives two-phase two-dimensional convection in its inner core (IC), with rolls oriented along the rotation axis. This provides an explanation for the numerous features in the IC structure revealed in recent years (anisotropy in the seismic wave propagation, the existence of small irregularities, the strong damping of the P and S waves, ambiguities in the measurements of the IC rotation rate, etc.). The energy release in the kernel grows continuously as the number of daemons in it increases. Therefore the global tectonic activity, which had died out after the initial differentiation and cooling off of the Earth was reanimated 2 Gyears ago by the rearrangement and enhancement of convection in the mantle as a result of the increasing outward energy flow. It is pointed out that, as the kernel continues to grow, the tectonic activity will become intensified rather than die out, as was

  16. Functional diversity among seed dispersal kernels generated by carnivorous mammals.

    PubMed

    González-Varo, Juan P; López-Bao, José V; Guitián, José

    2013-05-01

    1. Knowledge of the spatial scale of the dispersal service provided by important seed dispersers (i.e. common and/or keystone species) is essential to our understanding of their role in plant ecology, ecosystem functioning and, ultimately, biodiversity conservation. 2. Carnivores are the main mammalian frugivores and seed dispersers in temperate climate regions. However, information on the seed dispersal distances they generate is still very limited. We focused on two common temperate carnivores differing in body size and spatial ecology - red fox (Vulpes vulpes) and European pine marten (Martes martes) - for evaluating possible functional diversity in their seed dispersal kernels. 3. We measured dispersal distances using colour-coded seed mimics embedded in experimental fruits that were offered to the carnivores in feeding stations (simulating source trees). The exclusive colour code of each simulated tree allowed us to assign the exact origin of seed mimics found later in carnivore faeces. We further designed an explicit sampling strategy aiming to detect the longest dispersal events; as far as we know, the most robust sampling scheme followed for tracking carnivore-dispersed seeds. 4. We found marked functional heterogeneity between the two species in their seed dispersal kernels according to their home range size: multimodality and long-distance dispersal in the case of the fox and unimodality and short-distance dispersal in the case of the marten (maximum distances = 2846 and 1233 m, respectively). As a consequence, emergent kernels at the guild level (overall and in two different years) were highly dependent on the relative contribution of each carnivore species. 5. Our results provide the first empirical evidence of functional diversity among seed dispersal kernels generated by carnivorous mammals. Moreover, they illustrate for the first time how seed dispersal kernels strongly depend on the relative contribution of different disperser species, thus on the

  17. Interaction between drought and chronic high temperature during kernel filling in wheat in a controlled environment.

    PubMed

    Wardlaw, Ian F

    2002-10-01

    Wheat plants (Triticum aestivum L. 'Lyallpur'), limited to a single culm, were grown at day/night temperatures of either 18/13 °C (moderate temperature) or 27/22 °C (chronic high temperature) from the time of anthesis. Plants were either non-droughted or subjected to two post-anthesis water stresses by withholding water from plants grown in different volumes of potting mix. In selected plants the demand for assimilates by the ear was reduced by removal of all but the five central spikelets. In non-droughted plants, it was confirmed that shading following anthesis (source limitation) reduced kernel dry weight at maturity, with a compensating increase in the dry weight of the remaining kernels when the total number of kernels was reduced (small sink). Reducing kernel number did not alter the effect of high temperature following anthesis on the dry weight of the remaining kernels at maturity, but reducing the number of kernels did result in a greater dry weight of the remaining kernels of droughted plants. However, the relationship between the response to drought and kernel number was confounded by a reduction in the extent of water stress associated with kernel removal. Data on the effect of water stress on kernel dry weight at maturity of plants with either the full complement or reduced numbers of kernels, and subjected to low and high temperatures following anthesis, indicate that the effect of drought on kernel dry weight may be reduced, in both absolute and relative terms, rather than enhanced, at high temperature. It is suggested that where high temperature and drought occur concurrently after anthesis there may be a degree of drought escape associated with chronic high temperature due to the reduction in the duration of kernel filling, even though the rate of water use may be enhanced by high temperature. PMID:12324270

  18. The Effects of Kernel Feeding by Halyomorpha halys (Hemiptera: Pentatomidae) on Commercial Hazelnuts.

    PubMed

    Hedstrom, C S; Shearer, P W; Miller, J C; Walton, V M

    2014-10-01

    Halyomorpha halys Stål, the brown marmorated stink bug (Hemiptera: Pentatomidae), is an invasive pest with established populations in Oregon. The generalist feeding habits of H. halys suggest it has the potential to be a pest of many specialty crops grown in Oregon, including hazelnuts, Corylus avellana L. The objectives of this study were to: 1) characterize the damage to developing hazelnut kernels resulting from feeding by H. halys adults, 2) determine how the timing of feeding during kernel development influences damage to kernels, and 3) determine if hazelnut shell thickness has an effect on feeding frequency on kernels. Adult brown marmorated stink bugs were allowed to feed on developing nuts for 1-wk periods from initial kernel development (spring) until harvest (fall). Developing nuts not exposed to feeding by H. halys served as a control treatment. The degree of damage and diagnostic symptoms corresponded with the hazelnut kernels' physiological development. Our results demonstrated that when H. halys fed on hazelnuts before kernel expansion, development of the kernels could cease, resulting in empty shells. When stink bugs fed during kernel expansion, kernels appeared malformed. When stink bugs fed on mature nuts the kernels exhibited corky, necrotic areas. Although significant differences in shell thickness were observed among the cultivars, no significant differences occurred in the proportions of damaged kernels based on field tests and laboratory choice tests. The results of these studies demonstrated that commercial hazelnuts are susceptible to damage caused by the feeding of H. halys throughout the entire period of kernel development. PMID:26309276

  19. Application of neural networks for determining optical parameters of strongly scattering media from the intensity profile of backscattered radiation

    SciTech Connect

    Kotova, S P; Maiorov, I V; Maiorova, A M

    2007-01-31

    We analyse the possibility of simultaneously measuring three optical parameters of scattering media, namely the scattering and absorption coefficients and the scattering anisotropy parameter, from the intensity profile of backscattered radiation by using the neural-network inversion method and an adaptive-network-based fuzzy inference system. The measurement errors of the absorption and scattering coefficients and the scattering anisotropy parameter are 20%, 5%, and 10%, respectively. (special issue devoted to multiple radiation scattering in random media)
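
    A toy version of the inversion idea: train a neural network to map backscattered intensity profiles to (scattering coefficient, absorption coefficient, anisotropy). The exponential forward model below is a crude stand-in for a proper radiative-transfer or Monte Carlo simulation, and the network size and noise level are illustrative; the fuzzy-inference variant is not shown.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(8)
    r = np.linspace(0.1, 5.0, 40)          # radial distance from the beam axis

    def forward(mu_s, mu_a, g):
        """Invented diffusion-like profile, NOT the paper's forward model."""
        return mu_s * np.exp(-(mu_a + (1 - g) * mu_s) * r) / r

    # Synthetic training set over plausible parameter ranges, with 1% noise.
    params = rng.uniform([5.0, 0.01, 0.6], [15.0, 0.5, 0.95], size=(2000, 3))
    profiles = np.array([forward(*p) for p in params])
    profiles += 0.01 * profiles * rng.standard_normal(profiles.shape)

    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    net.fit(profiles, params)              # inverse map: profile -> parameters
    rel_err = np.abs(net.predict(profiles) - params) / params
    print("median relative errors (mu_s, mu_a, g):", np.median(rel_err, axis=0))
    ```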

  20. Adaptive Management

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...

  1. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.

    PubMed

    Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou

    2011-06-01

    As a kernel-based method, the performance of the least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient for selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible enough to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed.
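
    The sketch below shows the LS-SVM dual system and a convex combination of two RBF kernels; the combination weight is found here by a crude grid search rather than the paper's SDP formulation, and the data, kernel widths, and regularization value are synthetic.

    ```python
    import numpy as np

    def lssvm_fit(K, y, gamma):
        """Solve the standard LS-SVM KKT linear system for (bias, alphas)."""
        n = len(y)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
        return sol[0], sol[1:]

    def rbf(X, Z, s):
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * s * s))

    rng = np.random.default_rng(5)
    X = rng.standard_normal((80, 2))
    y = np.sign(X[:, 0] * X[:, 1])         # XOR-like labels

    # Convex combination of two candidate kernels, weight chosen by grid search.
    best_acc, best_mu = -1.0, None
    for mu in np.linspace(0, 1, 11):
        K = mu * rbf(X, X, 0.5) + (1 - mu) * rbf(X, X, 2.0)
        b, a = lssvm_fit(K, y.astype(float), gamma=10.0)
        acc = (np.sign(K @ a + b) == y).mean()
        if acc > best_acc:
            best_acc, best_mu = acc, mu
    print("best training accuracy:", best_acc, "at weight:", best_mu)
    ```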

  2. Kernel-aligned multi-view canonical correlation analysis for image recognition

    NASA Astrophysics Data System (ADS)

    Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao

    2016-09-01

    Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve this problem, we transform each original feature vector into a two-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have good discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets demonstrate the effectiveness of the proposed method.

  3. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters, so suitable parameter selection is an important problem for many kernel-based techniques. This article presents a novel technique for learning the kernel parameters in a kernel Fukunaga-Koontz Transform based (KFKT) classifier. The proposed approach determines appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we utilize the differential evolution algorithm (DEA). The new technique overcomes some disadvantages, such as the high time consumption of the traditional cross-validation method, and can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
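
    A minimal sketch of kernel-parameter learning with differential evolution, using SciPy's implementation. An RBF-kernel SVM with a cross-validation objective stands in for the KFKT discrimination criterion, which is specific to the paper; data, bounds, and budget are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(6)
    X = rng.standard_normal((120, 5))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.5).astype(int)

    def objective(params):
        # Search in log10 space; minimize the negative CV accuracy.
        log_gamma, log_C = params
        clf = SVC(kernel="rbf", gamma=10.0 ** log_gamma, C=10.0 ** log_C)
        return -cross_val_score(clf, X, y, cv=3).mean()

    result = differential_evolution(objective, bounds=[(-3, 2), (-2, 3)],
                                    maxiter=20, seed=0, tol=1e-3)
    print("best log10(gamma), log10(C):", result.x, " cv accuracy:", -result.fun)
    ```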

  4. Scattering from binary optics

    NASA Technical Reports Server (NTRS)

    Ricks, Douglas W.

    1993-01-01

    There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations.

  5. Approximations for photoelectron scattering

    NASA Astrophysics Data System (ADS)

    Fritzsche, V.

    1989-04-01

    The errors of several approximations in the theoretical approach of photoelectron scattering are systematically studied, in tungsten, for electron energies ranging from 10 to 1000 eV. The large inaccuracies of the plane-wave approximation (PWA) are substantially reduced by means of effective scattering amplitudes in the modified small-scattering-centre approximation (MSSCA). The reduced angular momentum expansion (RAME) is so accurate that it allows reliable calculations of multiple-scattering contributions for all the energies considered.

  6. Standardized total tract digestibility of phosphorus in copra meal, palm kernel expellers, palm kernel meal, and soybean meal fed to growing pigs.

    PubMed

    Almaguer, B L; Sulabo, R C; Liu, Y; Stein, H H

    2014-06-01

    Sixty-six barrows (initial BW: 27.4 ± 2.8 kg) were used to determine the standardized total tract digestibility (STTD) of P in copra meal (CM), palm kernel expellers from Indonesia (PKE-IN), palm kernel expellers from Costa Rica (PKE-CR), palm kernel meal from Costa Rica (PKM), and soybean meal (SBM) without or with exogenous phytase. Pigs were housed individually in metabolism cages and allotted to 11 diets with 6 replicate pigs per diet in a generalized randomized block design. Five diets were formulated by mixing cornstarch and sugar with CM, PKE-IN, PKE-CR, PKM, or SBM. Five additional diets, which were identical to the initial 5 diets but supplemented with 800 units of phytase, were also formulated. A P-free diet was used to measure basal endogenous losses of P by the pigs. Feces were collected for 5 d using the marker-to-marker approach after a 5-d adaptation period. Analyzed total P in CM, PKE-IN, PKE-CR, PKM, and SBM was 0.52, 0.51, 0.53, 0.54, and 0.67%, respectively. Phytate P was 0.22, 0.35, 0.38, 0.32, and 0.44% in CM, PKE-IN, PKE-CR, PKM, and SBM, respectively. Addition of phytase increased (P < 0.05) the apparent total tract digestibility (ATTD) of P from 60.6 to 80.8, 27.3 to 56.5, 32.6 to 59.9, 48.9 to 64.1, and 41.1 to 72.2% in CM, PKE-IN, PKE-CR, PKM, and SBM, respectively. The ATTD of P in CM was greater (P < 0.05) than in any of the other ingredients. The ATTD of P in SBM and PKM was greater (P < 0.05) than in PKE-IN, with PKE-CR being intermediate. The STTD of P increased (P < 0.05) from 70.6 to 90.3, 37.6 to 66.4, 43.2 to 69.9, 57.9 to 73.5, and 49.6 to 81.1% in CM, PKE-IN, PKE-CR, PKM, and SBM, respectively, when microbial phytase was added to the diets. When expressed as a percentage of total P, phytate P concentration in the ingredient negatively affected (P < 0.05) the ATTD of P (107.09 - 1.0564 × % phytate P; R² = 87.1) and the STTD of P (116.3 - 1.0487 × % phytate P; R² = 89.4). In conclusion, microbial phytase increased the digestibility of P in all five ingredients.
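
    The two regression equations reported above can be applied directly; a small sketch (inputs and outputs in percentage units, with phytate P expressed as a percentage of total P, as in the abstract):

        def predict_p_digestibility(phytate_p_pct):
            # Prediction equations reported in the abstract
            attd = 107.09 - 1.0564 * phytate_p_pct
            sttd = 116.3 - 1.0487 * phytate_p_pct
            return attd, sttd

        # Example: an ingredient whose phytate P is 50% of total P
        print(predict_p_digestibility(50.0))   # approx. (54.3, 63.9) percent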

  8. Fireplace adapters

    SciTech Connect

    Hunt, R.L.

    1983-12-27

    An adapter is disclosed for use with a fireplace. The stove pipe of a stove standing in a room to be heated may be connected to the flue of the chimney so that products of combustion from the stove may be safely exhausted through the flue and outwardly of the chimney. The adapter may be easily installed within the fireplace by removing the damper plate and fitting the adapter to the damper frame. Each of a pair of bolts has a portion which hooks over a portion of the damper frame and a threaded end depending from the hook portion and extending through a hole in the adapter. Nuts are threaded on the bolts and are adapted to force the adapter into a tight fit with the adapter frame.

  9. Antioxidant and antimicrobial activities of bitter and sweet apricot (Prunus armeniaca L.) kernels.

    PubMed

    Yiğit, D; Yiğit, N; Mavi, A

    2009-04-01

    The present study describes the in vitro antimicrobial and antioxidant activity of methanol and water extracts of sweet and bitter apricot (Prunus armeniaca L.) kernels. The antioxidant properties of apricot kernels were evaluated by determining radical scavenging power, lipid peroxidation inhibition activity and total phenol content, measured with a DPPH test, the thiocyanate method and the Folin method, respectively. In contrast to extracts of the bitter kernels, both the water and methanol extracts of sweet kernels have antioxidant potential. The highest percent inhibition of lipid peroxidation (69%) and the highest total phenolic content (7.9 ± 0.2 µg/mL) were detected in the methanol extract of sweet kernels (Hasanbey) and in the water extract of the same cultivar, respectively. The antimicrobial activities of the above extracts were also tested against human pathogenic microorganisms using a disc-diffusion method, and the minimal inhibitory concentration (MIC) values of each active extract were determined. The most effective antibacterial activity was observed in the methanol and water extracts of bitter kernels and in the methanol extract of sweet kernels against the Gram-positive bacterium Staphylococcus aureus. Additionally, the methanol extracts of the bitter kernels were very potent against the Gram-negative bacterium Escherichia coli (0.312 mg/mL MIC value). Significant anti-Candida activity was also observed with the methanol extract of bitter apricot kernels against Candida albicans, with an inhibition zone 14 mm in diameter and a MIC value of 0.625 mg/mL.

  10. Influence of argan kernel roasting-time on virgin argan oil composition and oxidative stability.

    PubMed

    Harhar, Hicham; Gharby, Saïd; Kartah, Bader; El Monfalouti, Hanae; Guillaume, Dom; Charrouf, Zoubida

    2011-06-01

    Virgin argan oil, which is harvested from argan fruit kernels, constitutes an alimentary source of substances of nutraceutical value. The chemical composition and oxidative stability of argan oil prepared from argan kernels roasted for different times were evaluated and compared with those of beauty argan oil, which is prepared from unroasted kernels. Prolonged roasting time induced colour development and increased phosphorus content, whereas fatty acid composition and tocopherol levels did not change. Oxidative stability data indicate that kernel roasting for 15 to 30 min at 110 °C is optimum to preserve virgin argan oil's nutritive properties.

  11. Growth inhibition of a Fusarium verticillioides GUS strain in corn kernels of aflatoxin-resistant genotypes.

    PubMed

    Brown, R L; Cleveland, T E; Woloshuk, C P; Payne, G A; Bhatnagar, D

    2001-12-01

    Two corn genotypes, GT-MAS:gk and MI82, resistant to Aspergillus flavus infection/aflatoxin contamination, were tested for their ability to limit growth of Fusarium verticillioides. An F. verticillioides strain was transformed with a beta-glucuronidase (GUS) reporter gene (uidA) construct to facilitate fungal growth quantification and then inoculated onto endosperm-wounded and non-wounded kernels of the above corn lines. To serve as a control, an A. flavus strain containing the same reporter gene construct was inoculated onto non-wounded kernels of GT-MAS:gk. Results showed that, as in a previous study, non-wounded GT-MAS:gk kernels supported less growth (six- to ten-fold) of A. flavus than did kernels of a susceptible control. Also, non-wounded kernels of GT-MAS:gk and MI82 supported less growth (two- to four-fold) of F. verticillioides than did susceptible kernels. Wounding, however, increased F. verticillioides infection of MI82, but not that of GT-MAS:gk. This is in contrast to a previous study of A. flavus, where wounding increased infection of GT-MAS:gk rather than MI82 kernels. Further study is needed to explain the genotypic variation in kernel response to A. flavus and F. verticillioides infection. Also, the potential for aflatoxin-resistant corn lines to likewise inhibit growth of F. verticillioides needs to be confirmed in the field. PMID:11778882

  12. Participation of cob tissue in the transport of medium components into maize kernels cultured in vitro

    SciTech Connect

    Felker, F.C. )

    1990-05-01

    Maize (Zea mays L.) kernels cultured in vitro while still attached to cob pieces have been used as a model system to study the physiology of kernel development. In this study, the role of the cob tissue in the uptake of medium components into kernels was examined. Cob tissue was essential for in vitro kernel growth, and better growth occurred with larger cob/kernel ratios. A symplastically transported fluorescent dye readily permeated the endosperm when supplied in the medium, while an apoplastic dye did not. Slicing the cob tissue to disrupt vascular connections, but not apoplastic continuity, greatly reduced [14C]sucrose uptake into kernels. [14C]Sucrose uptake by cob and kernel tissue was reduced 31% and 68%, respectively, by 5 mM PCMBS. L-[14C]glucose was absorbed much more slowly than D-[14C]glucose. These and other results indicate that phloem loading of sugars occurs in the cob tissue. Passage of medium components through the symplast of the cob tissue may be a prerequisite for uptake into the kernel. Simple diffusion from the medium to the kernels is unlikely. Therefore, the ability of substances to be transported into cob tissue cells should be considered in formulating culture medium.

  13. Graphlet kernels for prediction of functional residues in protein structures.

    PubMed

    Vacic, Vladimir; Iakoucheva, Lilia M; Lonardi, Stefano; Radivojac, Predrag

    2010-01-01

    We introduce a novel graph-based kernel method for annotating functional residues in protein structures. A structure is first modeled as a protein contact graph, where nodes correspond to residues and edges connect spatially neighboring residues. Each vertex in the graph is then represented as a vector of counts of labeled non-isomorphic subgraphs (graphlets), centered on the vertex of interest. A similarity measure between two vertices is expressed as the inner product of their respective count vectors and is used in a supervised learning framework to classify protein residues. We evaluated our method on two function prediction problems: identification of catalytic residues in proteins, which is a well-studied problem suitable for benchmarking, and a much less explored problem of predicting phosphorylation sites in protein structures. The performance of the graphlet kernel approach was then compared against two alternative methods, a sequence-based predictor and our implementation of the FEATURE framework. On both tasks, the graphlet kernel performed favorably; however, the margin of difference was considerably higher on the problem of phosphorylation site prediction. While there is evidence that phosphorylation sites are preferentially positioned in intrinsically disordered regions, we provide evidence that for the sites that are located in structured regions, neither the surface accessibility alone nor the averaged measures calculated from the residue microenvironments utilized by FEATURE were sufficient to achieve high accuracy. The key benefit of the graphlet representation is its ability to capture neighborhood similarities in protein structures via enumerating the patterns of local connectivity in the corresponding labeled graphs.
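
    A toy sketch of the count-vector idea, restricted for brevity to the smallest patterns (the vertex itself, labeled edges, and labeled 3-node stars centered on the vertex); the paper enumerates larger non-isomorphic subgraphs in the same spirit, and all names here are illustrative.

        from collections import Counter
        from itertools import combinations

        def vertex_graphlet_counts(adj, labels, v):
            # Count small labeled patterns centered on vertex v
            c = Counter()
            c[("1", labels[v])] += 1
            for u in adj[v]:
                c[("2", tuple(sorted((labels[v], labels[u]))))] += 1
            for u, w in combinations(adj[v], 2):
                c[("3", labels[v], tuple(sorted((labels[u], labels[w]))))] += 1
            return c

        def graphlet_kernel(c1, c2):
            # Vertex similarity = inner product of sparse count vectors
            return sum(n * c2[k] for k, n in c1.items())

        adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
        labels = {0: "A", 1: "B", 2: "A", 3: "C"}
        print(graphlet_kernel(vertex_graphlet_counts(adj, labels, 0),
                              vertex_graphlet_counts(adj, labels, 2)))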

  14. Genetic dissection of the maize kernel development process via conditional QTL mapping for three developing kernel-related traits in an immortalized F2 population.

    PubMed

    Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua

    2016-02-01

    Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of the maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at DAP22 and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified under multiple kernel developmental stages and environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection.

  15. Nonlinear Knowledge in Kernel-Based Multiple Criteria Programming Classifier

    NASA Astrophysics Data System (ADS)

    Zhang, Dongling; Tian, Yingjie; Shi, Yong

    The Kernel-based Multiple Criteria Linear Programming (KMCLP) model is used as a classification method that can learn from training examples, whereas in traditional settings data sets are classified only on the basis of prior knowledge. Some works combine the two classification principles to overcome the drawbacks of each approach. In this paper, we propose a model that incorporates nonlinear knowledge into KMCLP in order to solve the problem when the input consists not only of training examples but also of nonlinear prior knowledge. On the real-world case of breast cancer diagnosis, the model shows better performance than a model based solely on training data.

  16. Anytime query-tuned kernel machine classifiers via Cholesky factorization

    NASA Technical Reports Server (NTRS)

    DeCoste, D.

    2002-01-01

    We recently demonstrated 2- to 64-fold query-time speedups of Support Vector Machine and Kernel Fisher classifiers via a new computational geometry method for anytime output bounds (DeCoste, 2002). This new paper refines our approach in two key ways. First, we introduce a simple linear algebra formulation based on Cholesky factorization, yielding simpler equations and lower computational overhead. Second, this new formulation suggests new methods for achieving additional speedups, including tuning on query samples. We demonstrate effectiveness on benchmark datasets.
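
    The Cholesky ingredient can be sketched independently of the bounding machinery: factor the regularized kernel matrix once, then reuse the triangular factor for subsequent solves. A minimal scipy sketch on toy data (the paper's query-time bound computations are not reproduced):

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        rng = np.random.default_rng(2)
        X = rng.normal(size=(50, 4))
        y = rng.normal(size=50)
        K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))

        # Factor K + lambda*I once (O(n^3)); each later solve is O(n^2)
        factor = cho_factor(K + 1e-3 * np.eye(50))
        alpha = cho_solve(factor, y)
        print("residual:", np.abs((K + 1e-3 * np.eye(50)) @ alpha - y).max())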

  17. Kernel PLS-SVC for Linear and Nonlinear Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Matthews, Bryan

    2003-01-01

    A new methodology for discrimination is proposed. It is based on kernel orthonormalized partial least squares (PLS) dimensionality reduction of the original data space followed by support vector machines for classification. The close connection of orthonormalized PLS to Fisher's approach to linear discrimination, or equivalently to canonical correlation analysis, is described, which motivates the preference for orthonormalized PLS over principal component analysis. Good behavior of the proposed method is demonstrated on 13 different benchmark data sets and on the real-world problem of classifying finger-movement periods versus non-movement periods based on electroencephalogram recordings.

  18. Kernel methods for large-scale genomic data analysis

    PubMed Central

    Xing, Eric P.; Schaid, Daniel J.

    2015-01-01

    Machine learning, particularly kernel methods, has been demonstrated as a promising new tool to tackle the challenges imposed by today's explosive data growth in genomics. Kernel methods provide a practical and principled approach to learning how a large number of genetic variants are associated with complex phenotypes, and help reveal the complexity in the relationship between genetic markers and the outcome of interest. In this review, we highlight the potential key role kernel methods will have in modern genomic data processing, especially with regard to integration with classical methods for gene prioritization, prediction and data fusion. PMID:25053743

  19. Sufficient conditions for a memory-kernel master equation

    NASA Astrophysics Data System (ADS)

    Chruściński, Dariusz; Kossakowski, Andrzej

    2016-08-01

    We derive sufficient conditions on the memory kernel governing a nonlocal master equation which guarantee a legitimate (completely positive and trace-preserving) dynamical map. It turns out that these conditions provide natural parametrizations of dynamical maps that generalize the Markovian semigroup. This parametrization is defined by a so-called legitimate pair (a monotonic quantum operation and a completely positive map), and it is shown that this class of maps covers almost all known examples, from Markovian semigroups and semi-Markov evolution up to collision models and their generalizations.
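
    For orientation, the nonlocal master equation in question is usually written in the memory-kernel form below (a sketch of the standard notation; the paper's precise conditions on K are in the text):

        \dot{\rho}(t) \;=\; \int_{0}^{t} K(t-\tau)\,\rho(\tau)\,\mathrm{d}\tau

    The derived conditions then guarantee that the induced map taking \rho(0) to \rho(t) is completely positive and trace-preserving.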

  20. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1987-01-01

    Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^(-m), m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^(-m), terms which become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.
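
    For the m = 2 case, the Hadamard (finite-part) interpretation reduces, for a smooth density f, to the derivative of a Cauchy principal-value integral; this is a standard identity, stated here in LaTeX for orientation:

        \mathrm{f.p.}\!\int_a^b \frac{f(t)}{(t-x)^2}\,dt
            \;=\; \frac{d}{dx}\,\mathrm{p.v.}\!\int_a^b \frac{f(t)}{t-x}\,dt,
            \qquad a < x < b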

  1. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1986-01-01

    Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^(-m), m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^(-m), terms which become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.

  2. Kernel based color estimation for night vision imagery

    NASA Astrophysics Data System (ADS)

    Gu, Xiaojing; Sun, Shaoyuan; Fang, Jian'an; Zhou, Peng

    2012-04-01

    Displaying night vision (NV) imagery with colors can largely improve an observer's performance in scene recognition and situational awareness compared to the conventional monochrome representation. However, estimating colors for single-band NV imagery has two challenges: deriving an appropriate color mapping model and extracting the sufficient image features required by the model. To address these, a kernel based regression model and a set of multi-scale image features are used here. The proposed method can automatically render single-band NV imagery with natural colors, even when it has an abnormal luminance distribution and lacks identifiable details.
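
    A hedged sketch of the regression step: kernel ridge regression from placeholder multi-scale features to reference RGB values. The feature set, kernel model, and all data below are illustrative stand-ins, not the paper's specific design.

        import numpy as np

        def rbf(A, B, gamma=0.1):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        rng = np.random.default_rng(3)
        feats = rng.normal(size=(200, 8))       # placeholder NV image features
        rgb = rng.uniform(size=(200, 3))        # corresponding natural colors
        W = np.linalg.solve(rbf(feats, feats) + 1e-2 * np.eye(200), rgb)

        # Color estimation for new pixels is then a kernel evaluation
        new_feats = rng.normal(size=(5, 8))
        print(rbf(new_feats, feats) @ W)        # estimated RGB, shape (5, 3)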

  3. Partial Kernelization for Rank Aggregation: Theory and Experiments

    NASA Astrophysics Data System (ADS)

    Betzler, Nadja; Bredereck, Robert; Niedermeier, Rolf

    Rank Aggregation is important in many areas ranging from web search over databases to bioinformatics. The underlying decision problem Kemeny Score is NP-complete even in case of four input rankings to be aggregated into a "median ranking". We study efficient polynomial-time data reduction rules that allow us to find optimal median rankings. On the theoretical side, we improve a result for a "partial problem kernel" from quadratic to linear size. On the practical side, we provide encouraging experimental results with data based on web search and sport competitions, e.g., computing optimal median rankings for real-world instances with more than 100 candidates within milliseconds.
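
    The underlying objective is easy to state in code. Below is a brute-force sketch of the Kemeny score (total pairwise disagreement with the inputs), which the paper's data-reduction rules make tractable for much larger instances than the toy one shown here.

        from itertools import combinations, permutations

        def kemeny_score(median, rankings):
            # Sum of pairwise disagreements (Kendall tau distance) between
            # the candidate median ranking and every input ranking
            pos = {c: i for i, c in enumerate(median)}
            score = 0
            for r in rankings:
                rpos = {c: i for i, c in enumerate(r)}
                score += sum((pos[a] < pos[b]) != (rpos[a] < rpos[b])
                             for a, b in combinations(median, 2))
            return score

        rankings = [("a", "b", "c"), ("b", "a", "c"), ("a", "c", "b")]
        best = min(permutations("abc"), key=lambda m: kemeny_score(m, rankings))
        print(best, kemeny_score(best, rankings))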

  4. Scattering in optical materials

    SciTech Connect

    Musikant, S.

    1983-01-01

    Topics discussed include internal scattering and surface scattering, environmental effects, and various applications. Papers are presented on scattering in ZnSe laser windows, the far-infrared reflectance spectra of optical black coatings, the effects of standard optical shop practices on scattering, and the damage susceptibility of ring laser gyro class optics. Attention is also given to the infrared laser stimulated desorption of pyridine from silver surfaces, to electrically conductive black optical paint, to light scattering from an interface bubble, and to the role of diagnostic testing in identifying and resolving dimensional stability problems in electroplated laser mirrors.

  5. Adaptive sharpening of photos

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho

    2008-01-01

    Sharpness is an important attribute that contributes to the overall impression of printed photo quality. Often it is impossible to estimate sharpness prior to printing, and it can be a complex task for a consumer to obtain accurate sharpening results by editing a photo on a computer. A novel method of adaptive sharpening aimed at photo printers is proposed. Our approach includes three key techniques: sharpness level estimation, local tone mapping, and boosting of local contrast. Non-reference automatic sharpness estimation is based on an analysis of the variation of edge histograms, where the edges are produced by high-pass filters with various kernel sizes; an array of integrals of the logarithm of the edge histograms characterizes photo sharpness, and machine learning is applied to choose optimal parameters for a given printing size and resolution. Local tone mapping with ordering is applied to decrease the edge transition slope length without noticeable artifacts and with some noise suppression. An unsharp mask via a bilateral filter is applied to boost local contrast; this stage does not produce the strong halo artifact that is typical of the traditional unsharp mask filter. The quality of the proposed approach is evaluated by surveying observers' opinions; according to the replies obtained, the proposed method enhances the majority of photos.
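
    The halo-avoiding local-contrast boost can be sketched as follows (brute-force bilateral filter with illustrative parameters; the paper's sharpness-estimation and tone-mapping stages are omitted):

        import numpy as np

        def bilateral(img, radius=2, sigma_s=1.5, sigma_r=0.1):
            # Brute-force bilateral filter: edge-preserving smoothing with
            # spatial and range (intensity) Gaussian weights
            h, w = img.shape
            pad = np.pad(img, radius, mode="edge")
            out = np.empty_like(img)
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
            for i in range(h):
                for j in range(w):
                    patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    wgt = spatial * np.exp(-(patch - img[i, j]) ** 2
                                           / (2 * sigma_r ** 2))
                    out[i, j] = (wgt * patch).sum() / wgt.sum()
            return out

        def sharpen(img, amount=0.8):
            # Boost local contrast against an edge-preserving base layer;
            # since the base keeps strong edges intact, the halo of a plain
            # Gaussian unsharp mask is largely avoided
            return np.clip(img + amount * (img - bilateral(img)), 0.0, 1.0)

        img = np.random.default_rng(4).uniform(size=(32, 32))
        print(sharpen(img).shape)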

  6. Scatter correction in CBCT with an offset detector through a deconvolution method using data consistency

    NASA Astrophysics Data System (ADS)

    Kim, Changhwan; Park, Miran; Lee, Hoyeon; Cho, Seungryong

    2016-03-01

    Our earlier work has demonstrated that the data consistency condition can be used as a criterion for scatter kernel optimization in deconvolution methods in full-fan mode cone-beam CT [1]. However, this scheme cannot be directly applied to a CBCT system with an offset detector (half-fan mode) because of transverse data truncation in the projections. In this study, we propose a modified scheme of the scatter kernel optimization method that can be used in half-fan mode cone-beam CT, and we demonstrate its feasibility. Using the volume image first reconstructed from the half-fan projection data, we acquired full-fan projection data by forward projection synthesis. The synthesized full-fan projections were used in part to fill the truncated regions in the half-fan data. By doing so, we were able to utilize the existing data-consistency-driven scatter kernel optimization method. The proposed method was validated by a simulation study using the XCAT numerical phantom and by an experimental study using the ACS head phantom.
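
    A minimal sketch of the scatter-kernel-superposition step itself, with a stationary Gaussian kernel standing in for the data-consistency-optimized kernels of the paper; all sizes and amplitudes are toy values.

        import numpy as np

        def sks_correct(measured, kernel, n_iter=8):
            # Iteratively estimate scatter as the current primary estimate
            # convolved with the scatter kernel, then subtract it from the
            # measured projection
            Kf = np.fft.rfft2(np.fft.ifftshift(kernel))
            primary = measured.copy()
            for _ in range(n_iter):
                scatter = np.fft.irfft2(np.fft.rfft2(primary) * Kf,
                                        s=measured.shape)
                primary = np.clip(measured - scatter, 0.0, None)
            return primary

        n = 64
        yy, xx = np.mgrid[:n, :n] - n // 2
        kernel = np.exp(-(xx ** 2 + yy ** 2) / (2 * 12.0 ** 2))
        kernel *= 0.3 / kernel.sum()        # toy scatter-to-primary scale
        Kf = np.fft.rfft2(np.fft.ifftshift(kernel))
        primary_true = np.random.default_rng(5).uniform(0.5, 1.0, (n, n))
        measured = primary_true + np.fft.irfft2(np.fft.rfft2(primary_true) * Kf,
                                                s=(n, n))
        est = sks_correct(measured, kernel)
        print("max error:", np.abs(est - primary_true).max())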

  7. Adaptive SPECT

    PubMed Central

    Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.

    2008-01-01

    Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
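
    The Hotelling-observer figure of merit referred to here is straightforward to compute from sample images; a toy numpy sketch (16-pixel images and a uniform weak signal are illustrative assumptions):

        import numpy as np

        def hotelling_snr(signal_present, signal_absent):
            # Ideal linear (Hotelling) observer: template w = S^-1 dg, where
            # dg is the mean difference image and S the average covariance;
            # the detection SNR satisfies SNR^2 = dg^T S^-1 dg
            g1 = np.asarray(signal_present, float)
            g0 = np.asarray(signal_absent, float)
            dg = g1.mean(0) - g0.mean(0)
            S = 0.5 * (np.cov(g1.T) + np.cov(g0.T))
            w = np.linalg.solve(S, dg)
            return w, np.sqrt(dg @ w)

        rng = np.random.default_rng(6)
        absent = rng.normal(size=(500, 16))          # 16-pixel toy images
        present = rng.normal(size=(500, 16)) + 0.4   # weak uniform signal
        w, snr = hotelling_snr(present, absent)
        print("Hotelling SNR:", round(snr, 2))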

  8. Emotion Recognition from Single-Trial EEG Based on Kernel Fisher's Emotion Pattern and Imbalanced Quasiconformal Kernel Support Vector Machine

    PubMed Central

    Liu, Yi-Hung; Wu, Chien-Te; Cheng, Wei-Teng; Hsiao, Yu-Tsung; Chen, Po-Ming; Teng, Jyh-Tong

    2014-01-01

    Electroencephalogram-based emotion recognition (EEG-ER) has received increasing attention in the fields of health care, affective computing, and brain-computer interface (BCI). However, satisfactory ER performance within a bi-dimensional and non-discrete emotional space using single-trial EEG data remains a challenging task. To address this issue, we propose a three-layer scheme for single-trial EEG-ER. In the first layer, a set of spectral powers of different EEG frequency bands are extracted from multi-channel single-trial EEG signals. In the second layer, the kernel Fisher's discriminant analysis method is applied to further extract features with better discrimination ability from the EEG spectral powers. The feature vector produced by layer 2 is called a kernel Fisher's emotion pattern (KFEP), and is sent into layer 3 for further classification, where the proposed imbalanced quasiconformal kernel support vector machine (IQK-SVM) serves as the emotion classifier. The outputs of the three-layer EEG-ER system include labels of emotional valence and arousal. Furthermore, to collect effective training and testing datasets for the current EEG-ER system, we use an emotion-induction paradigm in which a set of pictures selected from the International Affective Picture System (IAPS) are employed as emotion induction stimuli. The performance of the proposed three-layer solution is compared with that of other EEG spectral power-based features and emotion classifiers. Results on 10 healthy participants indicate that the proposed KFEP feature performs better than other spectral power features, and IQK-SVM outperforms traditional SVM in terms of EEG-ER accuracy. Our findings also show that the proposed EEG-ER scheme achieves the highest classification accuracies of valence (82.68%) and arousal (84.79%) among all testing methods. PMID:25061837

  9. Selection of Haploid Maize Kernels from Hybrid Kernels for Plant Breeding Using Near-Infrared Spectroscopy and SIMCA Analysis

    SciTech Connect

    Jones, Roger W.; Reinot, Tonu; Frei, Ursula K.; Tseng, Yichia; Lübberstedt, Thomas; McClelland, John F.

    2012-04-01

    Samples of haploid and hybrid seed from three different maize donor genotypes after maternal haploid induction were used to test the capability of automated near-infrared transmission spectroscopy to differentiate individual haploid seeds from hybrid seeds. A two-step chemometric analysis, in which the seeds were first classified according to genotype and then the haploid or hybrid status was determined, proved to be the most successful approach. This approach allowed 11 of 13 haploid and 25 of 25 hybrid kernels to be correctly identified from a mixture that included seeds of all the genotypes.

  10. Validity of the Rytov Approximation in the Form of Finite-Frequency Sensitivity Kernels

    NASA Astrophysics Data System (ADS)

    Xu, Wenjun; Xie, Xiao-Bi; Geng, Jianhua

    2015-06-01

    The first-order (or linear) Rytov or Born approximation is the foundation for the formulation of wave-equation tomography and waveform inversion, so the validity of the Rytov/Born approximation can substantially affect the applicability of these theories. However, discussions and research reported in the literature on this topic are insufficient or limited. In this paper we introduce five variables in scattering theory to help us discuss the conditions under which the Rytov approximation, in the form of finite-frequency sensitivity kernels (RFFSK), the basis of waveform inversion and tomography, is valid. The five variables are propagation length L, heterogeneity scale a, wavenumber k, anisotropy ratio ξ, and perturbation strength ɛ. Combining theoretical analysis and numerical experiments, we conclude that varying the conditions used to establish the Rytov approximation can lead to uninterpretable or undesired results. This conclusion has two consequences. First, one cannot rigorously apply the linear Rytov approximation to all theoretical or practical cases without discussing its validity. Second, the nonlinear Rytov approximation is essential if the linear Rytov approximation is not valid. Differing from previous literature, only phase (or travel-time) terms for the whole wavefield are discussed. The time shifts of two specific events between the background and observed wavefields, measured by cross-correlation, serve as a reference for evaluating whether the time shifts predicted by the FFSKs are reasonably acceptable. Significantly, the reference cross-correlation should be regarded as reliable only if the condition of "two specific similar signals" is satisfied; we cannot expect it to provide a reasonable result if this condition is not met. This paper reports on its reliability and experimental limitations. Using cross-correlation (CC) samples as the X axis and sensitivity kernel (SK) or ray-tracing (RT) samples as the Y axis, a cross-validation chart was constructed.

  11. Adaptive Computing.

    ERIC Educational Resources Information Center

    Harrell, William

    1999-01-01

    Provides information on various adaptive technology resources available to people with disabilities. (Contains 19 references, an annotated list of 129 websites, and 12 additional print resources.) (JOW)

  12. Contour adaptation.

    PubMed

    Anstis, Stuart

    2013-01-01

    It is known that adaptation to a disk that flickers between black and white at 3-8 Hz on a gray surround renders invisible a congruent gray test disk viewed afterwards. This is contrast adaptation. We now report that adapting simply to the flickering circular outline of the disk can have the same effect. We call this "contour adaptation." This adaptation does not transfer interocularly, and apparently applies only to luminance, not color. One can adapt selectively to only some of the contours in a display, making only these contours temporarily invisible. For instance, a plaid comprises a vertical grating superimposed on a horizontal grating. If one first adapts to appropriate flickering vertical lines, the vertical component of the plaid disappears and it looks like a horizontal grating. Also, we simulated a Cornsweet (1970) edge, and we selectively adapted out the subjective and objective contours of a Kanizsa (1976) subjective square. By temporarily removing edges, contour adaptation offers a new technique to study the role of visual edges, and it demonstrates how brightness information is concentrated in edges and propagates from them as it fills in surfaces.

  13. Association mapping for kernel phytosterol content in almond

    PubMed Central

    Font i Forcada, Carolina; Velasco, Leonardo; Socias i Company, Rafel; Fernández i Martí, Ángel

    2015-01-01

    Almond kernels are a rich source of phytosterols, which are important compounds for human nutrition. The genetic control of phytosterol content has not yet been documented in almond. Association mapping (AM), also known as linkage disequilibrium (LD) mapping, was applied to an almond germplasm collection in order to provide new insight into the genetic control of total and individual sterol contents in kernels. Population structure analysis grouped the accessions into two principal groups, the Mediterranean and the non-Mediterranean. There was a strong subpopulation structure with LD decaying with increasing genetic distance, resulting in lower levels of LD between more distant markers. A significant impact of population structure on LD in the almond cultivar groups was observed. The mean r²-value for all intra-chromosomal loci pairs was 0.040, whereas the r² for the inter-chromosomal loci pairs was 0.036. For analysis of association between the markers and phenotypic traits, five models were tested. The mixed linear model (MLM) approach using co-ancestry values from population structure and kinship estimates (K model) as covariates identified a maximum of 13 significant associations. Most of the associations found appeared to map within the interval where many candidate genes involved in the sterol biosynthesis pathway are predicted in the peach genome. These findings provide a valuable foundation for quality gene identification and molecular marker assisted breeding in almond. PMID:26217374

  14. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation.

    PubMed

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-08-16

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures, and a max pooling operation is used to enhance invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, with a Gaussian weight function as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods. PMID:27537888

  15. KERNEL-SMOOTHED CONDITIONAL QUANTILES OF CORRELATED BIVARIATE DISCRETE DATA

    PubMed Central

    De Gooijer, Jan G.; Yuan, Ao

    2012-01-01

    Socio-economic variables are often measured on a discrete scale or rounded to protect confidentiality. Nevertheless, when exploring the effect of a relevant covariate on the outcome distribution of a discrete response variable, virtually all common quantile regression methods require the distribution of the covariate to be continuous. This paper departs from this basic requirement by presenting an algorithm for nonparametric estimation of conditional quantiles when both the response variable and the covariate are discrete. Moreover, we allow the variables of interest to be pairwise correlated. For computational efficiency, we aggregate the data into smaller subsets by a binning operation, and make inference on the resulting prebinned data. Specifically, we propose two kernel-based binned conditional quantile estimators, one for untransformed discrete response data and one for rank-transformed response data. We establish asymptotic properties of both estimators. A practical procedure for jointly selecting band- and binwidth parameters is also presented. Simulation results show excellent estimation accuracy in terms of bias, mean squared error, and confidence interval coverage. Typically prebinning the data leads to considerable computational savings when large datasets are under study, as compared to direct (un)conditional quantile kernel estimation of multivariate data. With this in mind, we illustrate the proposed methodology with an application to a large dataset concerning US hospital patients with congestive heart failure. PMID:23667297
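
    A loose sketch of the idea (kernel weights over covariate bins, then a weighted quantile of the discrete response), with a Gaussian kernel and toy counts; the authors' exact estimators and joint bandwidth/binwidth selection are not reproduced.

        import numpy as np

        def binned_kernel_quantile(x_bins, y_values, counts, x0, tau, h=1.0):
            # Each (x_bin, y_value) cell carries a count; bins near x0
            # receive Gaussian kernel weights, and the tau-quantile is read
            # off the weighted CDF of the discrete response
            w = counts * np.exp(-0.5 * ((x_bins - x0) / h) ** 2)[:, None]
            order = np.argsort(y_values)
            wy = w.sum(0)[order]
            cdf = np.cumsum(wy) / wy.sum()
            return y_values[order][np.searchsorted(cdf, tau)]

        x_bins = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # 5 covariate bins
        y_values = np.array([0, 1, 2, 3])              # 4 response levels
        counts = np.array([[8, 4, 2, 1],
                           [6, 5, 3, 1],
                           [4, 6, 4, 2],
                           [2, 5, 6, 3],
                           [1, 3, 6, 5]])
        print("median of Y given X near 3:",
              binned_kernel_quantile(x_bins, y_values, counts, 3.0, 0.5))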

  16. Parsimonious kernel extreme learning machine in primal via Cholesky factorization.

    PubMed

    Zhao, Yong-Ping

    2016-08-01

    Recently, the extreme learning machine (ELM) has become a popular topic in the machine learning community. By replacing the so-called ELM feature mappings with the nonlinear mappings induced by kernel functions, two kernel ELMs, i.e., P-KELM and D-KELM, are obtained from the primal and dual perspectives, respectively. Unfortunately, both P-KELM and D-KELM possess dense solutions that grow in direct proportion to the number of training data. To this end, a constructive algorithm for P-KELM (CCP-KELM) is first proposed by virtue of Cholesky factorization, in which the training data incurring the largest reductions on the objective function are recruited as significant vectors. To reduce its training cost further, PCCP-KELM is then obtained by applying a probabilistic speedup scheme to CCP-KELM. Corresponding to CCP-KELM, a destructive P-KELM (CDP-KELM) is presented using a partial Cholesky factorization strategy, where the training data incurring the smallest reductions on the objective function after their removal are pruned from the current set of significant vectors. Finally, to verify the efficacy and feasibility of the proposed algorithms, experiments on both small and large benchmark data sets are carried out. PMID:27203553
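
    The dense kernel ELM solve that CCP/CDP-KELM sparsify can be sketched as below (toy data and parameters; the constructive and destructive selection of significant vectors is omitted):

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        def kelm_fit(X, y, C=100.0, gamma=1.0):
            # Dense kernel ELM solve via Cholesky factorization of the
            # regularized Gram matrix K + I/C
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            K = np.exp(-gamma * d2)
            return cho_solve(cho_factor(K + np.eye(len(y)) / C), y)

        rng = np.random.default_rng(7)
        X = rng.uniform(-1, 1, size=(100, 1))
        y = np.sin(3 * X[:, 0])
        beta = kelm_fit(X, y)
        print("coefficient vector shape:", beta.shape)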

  17. Seismic hazard of the Iberian Peninsula: evaluation with kernel functions

    NASA Astrophysics Data System (ADS)

    Crespo, M. J.; Martínez, F.; Martí, J.

    2014-05-01

    The seismic hazard of the Iberian Peninsula is analysed using a nonparametric methodology based on statistical kernel functions; the activity rate is derived from the catalogue data, both its spatial dependence (without a seismogenic zonation) and its magnitude dependence (without using Gutenberg-Richter's relationship). The catalogue is that of the Instituto Geográfico Nacional, supplemented with other catalogues around the periphery; the quantification of events has been homogenised and spatially or temporally interrelated events have been suppressed to assume a Poisson process. The activity rate is determined by the kernel function, the bandwidth and the effective periods. The resulting rate is compared with that produced using Gutenberg-Richter statistics and a zoned approach. Three attenuation relationships have been employed, one for deep sources and two for shallower events, depending on whether their magnitude was above or below 5. The results are presented as seismic hazard maps for different spectral frequencies and for return periods of 475 and 2475 yr, which allows constructing uniform hazard spectra.
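
    The zonation-free activity rate can be sketched directly; a toy example with a fixed Gaussian kernel bandwidth (the paper's bandwidth selection, effective periods, and magnitude dependence are omitted):

        import numpy as np

        def activity_rate(epicenters, grid, bandwidth_km, years):
            # Nonparametric activity rate: a normalized 2-D Gaussian kernel
            # is placed on each catalogue epicenter and summed on the grid,
            # then divided by the observation period
            d2 = ((grid[:, None, :] - epicenters[None, :, :]) ** 2).sum(-1)
            k = np.exp(-0.5 * d2 / bandwidth_km ** 2) \
                / (2 * np.pi * bandwidth_km ** 2)
            return k.sum(axis=1) / years    # events per km^2 per year

        rng = np.random.default_rng(8)
        epis = rng.uniform(0, 100, size=(30, 2))   # toy epicenters (km)
        gx, gy = np.meshgrid(np.linspace(0, 100, 11), np.linspace(0, 100, 11))
        grid = np.column_stack([gx.ravel(), gy.ravel()])
        rate = activity_rate(epis, grid, bandwidth_km=15.0, years=50.0)
        print("peak activity rate:", rate.max())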

  18. Seismic hazards of the Iberian Peninsula - evaluation with kernel functions

    NASA Astrophysics Data System (ADS)

    Crespo, M. J.; Martínez, F.; Martí, J.

    2013-08-01

    The seismic hazard of the Iberian Peninsula is analysed using a nonparametric methodology based on statistical kernel functions; the activity rate is derived from the catalogue data, both its spatial dependence (without a seismogenetic zonation) and its magnitude dependence (without using Gutenberg-Richter's law). The catalogue is that of the Instituto Geográfico Nacional, supplemented with other catalogues around the periphery; the quantification of events has been homogenised and spatially or temporally interrelated events have been suppressed to assume a Poisson process. The activity rate is determined by the kernel function, the bandwidth and the effective periods. The resulting rate is compared with that produced using Gutenberg-Richter statistics and a zoned approach. Three attenuation laws have been employed, one for deep sources and two for shallower events, depending on whether their magnitude was above or below 5. The results are presented as seismic hazard maps for different spectral frequencies and for return periods of 475 and 2475 yr, which allows constructing uniform hazard spectra.

  19. Gaussian Kernel Based Classification Approach for Wheat Identification

    NASA Astrophysics Data System (ADS)

    Aggarwal, R.; Kumar, A.; Raju, P. L. N.; Krishna Murthy, Y. V. N.

    2014-11-01

    Agriculture holds a pivotal role in India, which is basically an agrarian economy. Crop type identification is a key issue for monitoring agriculture and is the basis for crop acreage and yield estimation. However, it is very challenging to identify a specific crop using single-date imagery, so a multi-temporal analysis approach is highly important for specific crop identification. This research work deals with the implementation of a fuzzy classifier, Possibilistic c-Means (PCM), with and without a kernel based approach, using temporal data of Landsat 8 OLI (Operational Land Imager) for identification of wheat in Radaur City, Haryana. The multi-temporal dataset covers the complete phenological cycle of wheat crop growth, from seedling to ripening. The experimental results show that the inclusion of a Gaussian kernel with the Euclidean norm (ED norm) in Possibilistic c-Means (KPCM) makes the soft classifier more robust in identification of the wheat crop. Identification of all the wheat fields also depends upon appropriate selection of the temporal dates. The best combination of temporal data corresponds to the tillering, stem extension, heading and ripening stages of the wheat crop. Entropy at wheat testing sites has been used to validate the classified results. The entropy value at the testing sites was observed to be low, implying low uncertainty of the existence of any other class and high certainty of the existence of the wheat crop at the wheat test sites.

  20. Overcoming Unix kernel deficiencies in a portable, distributed storage system

    SciTech Connect

    Gary, M.

    1990-01-01

    The LINCS Storage System at Lawrence Livermore National Laboratory was designed to provide an efficient, portable, distributed file and directory system capable of running on a variety of hardware platforms, consistent with the IEEE Mass Storage System Reference Model. Our intent was to meet these requirements with a storage system running atop standard, unmodified versions of the Unix operating system. Most of the system components run as ordinary user processes. However, for those components that were implemented in the kernel to improve performance, Unix presented a number of hurdles. These included the lack of a lightweight tasking facility in the kernel, process-blocked I/O, inefficient data transfer, and the lack of optimized drivers for storage devices. How we overcame these difficulties is the subject of this paper. Ideally, future evolution of Unix by vendors will provide the missing facilities; until then, however, data centers adopting Unix operating systems for large-scale distributed computing will have to provide similar solutions. 11 refs., 5 figs.

  1. Very long chain fatty acid synthesis in sunflower kernels.

    PubMed

    Salas, Joaquín J; Martínez-Force, Enrique; Garcés, Rafael

    2005-04-01

    Most common seed oils contain small amounts of very long chain fatty acids (VLCFAs), the main components of oils from species such as Brassica napus or Lunaria annua. These fatty acids are synthesized from acyl-CoA precursors in the endoplasmic reticulum through the activity of a dissociated enzyme complex known as fatty acid elongase. We studied the synthesis of the arachidic, behenic, and lignoceric VLCFAs in sunflower kernels, in which they account for 1-3% of the saturated fatty acids. These VLCFAs are synthesized from 18:0-CoA by membrane-bound fatty acid elongases, and their biosynthesis is mainly dependent on NADPH equivalents. Two condensing enzymes appear to be responsible for the synthesis of VLCFAs in sunflower kernels, beta-ketoacyl-CoA synthase-I (KCS-I) and beta-ketoacyl-CoA synthase-II (KCS-II). Both of these enzymes were resolved by ion exchange chromatography and display different substrate specificities. While KCS-I displays a preference for 20:0-CoA, 18:0-CoA was more efficiently elongated by KCS-II. Both enzymes have different sensitivities to pH and Triton X-100, and their kinetic properties indicate that both are strongly inhibited by the presence of their substrates. In light of these results, the VLCFA composition of sunflower oil is considered in relation to that in other commercially exploited oils.

  4. Convex-relaxed kernel mapping for image segmentation.

    PubMed

    Ben Salah, Mohamed; Ben Ayed, Ismail; Jing Yuan; Hong Zhang

    2014-03-01

    This paper investigates a convex-relaxed kernel mapping formulation of image segmentation. We optimize, under some partition constraints, a functional containing two characteristic terms: 1) a data term, which maps the observation space to a higher (possibly infinite) dimensional feature space via a kernel function, thereby evaluating nonlinear distances between the observations and segment parameters and 2) a total-variation term, which favors smooth segment surfaces (or boundaries). The algorithm iterates two steps: 1) a convex-relaxation optimization with respect to the segments by solving an equivalent constrained problem via the augmented Lagrange multiplier method and 2) a convergent fixed-point optimization with respect to the segment parameters. The proposed algorithm can cope with a variety of image types without the need for complex and application-specific statistical modeling, while having the computational benefits of convex relaxation. Our solution is amenable to parallelized implementations on graphics processing units (GPUs) and extends easily to high dimensions. We evaluated the proposed algorithm with several sets of comprehensive experiments and comparisons, including: 1) computational evaluations over 3D medical-imaging examples and high-resolution large-size color photographs, which demonstrate that a parallelized implementation of the proposed method run on a GPU can bring a significant speed-up and 2) accuracy evaluations against five state-of-the-art methods over the Berkeley color-image database and a multimodel synthetic data set, which demonstrate competitive performance of the algorithm. PMID:24723519

  5. TORCH Computational Reference Kernels - A Testbed for Computer Science Research

    SciTech Connect

    Kaiser, Alex; Williams, Samuel Webb; Madduri, Kamesh; Ibrahim, Khaled; Bailey, David H.; Demmel, James W.; Strohmaier, Erich

    2010-12-02

    For decades, computer scientists have sought guidance on how to evolve architectures, languages, and programming models in order to improve application performance, efficiency, and productivity. Unfortunately, without overarching advice about future directions in these areas, individual guidance is inferred from the existing software/hardware ecosystem, and each discipline often conducts its research independently, assuming all other technologies remain fixed. In today's rapidly evolving world of on-chip parallelism, isolated and iterative improvements to performance may miss superior solutions in the same way gradient descent optimization techniques may get stuck in local minima. To combat this, we present TORCH: A Testbed for Optimization ResearCH. These computational reference kernels define the core problems of interest in scientific computing without mandating a specific language, algorithm, programming model, or implementation. To complement the kernel (problem) definitions, we provide a set of algorithmically-expressed verification tests that can be used to verify that a hardware/software co-designed solution produces an acceptable answer. Finally, to provide some illumination as to how researchers have implemented solutions to these problems in the past, we provide a set of reference implementations in C and MATLAB.

  6. A Testbed of Parallel Kernels for Computer Science Research

    SciTech Connect

    Bailey, David; Demmel, James; Ibrahim, Khaled; Kaiser, Alex; Koniges, Alice; Madduri, Kamesh; Shalf, John; Strohmaier, Erich; Williams, Samuel

    2010-04-30

    An initial result of the more modern study was the seven dwarfs, which were subsequently extended to 13 motifs. These motifs have already been useful in defining classes of applications for architecture-software studies. However, these broad-brush problem statements often miss the nuance seen in individual kernels. For example, the computational requirements of particle methods vary greatly between the naive (but more accurate) direct calculations and the particle-mesh and particle-tree codes. Thus we commenced our study with an enumeration of problems, but then proceeded by providing not only reference implementations for each problem, but more importantly a mathematical definition that allows one to escape iterative approaches to software/hardware optimization. To ensure long-term value, we have augmented each of our reference implementations with both a scalable problem generator and a verification scheme. In a paper we have prepared that documents our efforts, we describe in detail this process of problem definition, scalable input creation, verification, and implementation of reference codes for the scientific computing domain. Table 1 enumerates and describes the level of support we've developed for each kernel. We group these important kernels according to the Berkeley dwarfs/motifs taxonomy, marking membership with a red box in the appropriate column. As kernels become progressively complex, they build upon other, simpler computational methods; we note this dependency via orange boxes. After enumeration of the important numerical problems, we created a domain-appropriate high-level definition of each problem. To ensure future endeavors are not tainted by existing implementations, we specified the problem definition to be independent of both computer architecture and existing programming languages, models, and data types. Then, to provide context as to how such kernels productively map to existing architectures, languages and programming models, we produced reference implementations for most of the kernels.

  7. Power Prediction in Smart Grids with Evolutionary Local Kernel Regression

    NASA Astrophysics Data System (ADS)

    Kramer, Oliver; Satzger, Benjamin; Lässig, Jörg

    Electric grids are moving from a centralized single supply chain towards a decentralized bidirectional grid of suppliers and consumers in an uncertain and dynamic scenario. Soon, the growing smart meter infrastructure will allow the collection of terabytes of detailed data about the grid condition, e.g., the state of renewable electric energy producers or the power consumption of millions of private customers, in very short time steps. Reliable prediction therefore requires strong and fast regression methods that are able to cope with these challenges. In this paper we introduce a novel regression technique, evolutionary local kernel regression, a kernel regression variant based on local Nadaraya-Watson estimators with independent bandwidths distributed in data space. The model is regularized with the CMA-ES, a stochastic non-convex optimization method. We experimentally analyze the load forecast behavior on real power consumption data. The proposed method is easily parallelizable and therefore well suited to large-scale scenarios in smart grids.
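
    A hedged sketch of local kernel regression with per-point bandwidths, using a simple (1+1) evolution strategy on a validation split as a stand-in for the CMA-ES regularization described above; the data and all parameters are toy values.

        import numpy as np

        def nw_predict(x, X, y, h):
            # Nadaraya-Watson estimator with an independent bandwidth h[i]
            # attached to each training point (the "local" aspect)
            w = np.exp(-0.5 * ((x[:, None] - X[None, :]) / h[None, :]) ** 2)
            return (w * y).sum(1) / w.sum(1)

        rng = np.random.default_rng(9)
        X = np.sort(rng.uniform(0, 6, 80))
        y = np.sin(X) + 0.1 * rng.normal(size=80)
        Xtr, ytr, Xva, yva = X[::2], y[::2], X[1::2], y[1::2]

        h = np.full(len(Xtr), 0.5)
        err = ((nw_predict(Xva, Xtr, ytr, h) - yva) ** 2).mean()
        for _ in range(300):               # (1+1)-ES over the bandwidths
            cand = np.clip(h * np.exp(0.2 * rng.normal(size=len(h))),
                           1e-2, 5.0)
            cand_err = ((nw_predict(Xva, Xtr, ytr, cand) - yva) ** 2).mean()
            if cand_err < err:
                h, err = cand, cand_err
        print("validation MSE:", round(err, 4))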

  8. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation.

    PubMed

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-01-01

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discriminative information embedded in the hierarchical local features, and a Gaussian weight function is used as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods. PMID:27537888

  9. Fast Query-Optimized Kernel-Machine Classification

    NASA Technical Reports Server (NTRS)

    Mazzoni, Dominic; DeCoste, Dennis

    2004-01-01

    A recently developed algorithm performs kernel-machine classification via incremental approximate nearest support vectors. The algorithm implements support-vector machines (SVMs) at speeds 10 to 100 times those attainable by use of conventional SVM algorithms. The algorithm offers potential benefits for classification of images, recognition of speech, recognition of handwriting, and diverse other applications in which there are requirements to discern patterns in large sets of data. SVMs constitute a subset of kernel machines (KMs), which have become popular as models for machine learning and, more specifically, for automated classification of input data on the basis of labeled training data. While similar in many ways to k-nearest-neighbors (k-NN) models and artificial neural networks (ANNs), SVMs tend to be more accurate. Using representations that scale only linearly in the numbers of training examples, while exploring nonlinear (kernelized) feature spaces that are exponentially larger than the original input dimensionality, KMs elegantly and practically overcome the classic curse of dimensionality. However, the price that one must pay for the power of KMs is that query-time complexity scales linearly with the number of training examples, making KMs often orders of magnitude more computationally expensive than ANNs, decision trees, and other popular machine-learning alternatives. The present algorithm treats an SVM classifier as a special form of a k-NN. The algorithm is based partly on an empirical observation that one can often achieve the same classification as that of an exact KM by using only a small fraction of the nearest support vectors (SVs) of a query. The exact KM output is a weighted sum over the kernel values between the query and the SVs. In this algorithm, the KM output is approximated with a k-NN classifier, the output of which is a weighted sum only over the kernel values involving k selected SVs. Before query time, there are gathered
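
    The core idea, approximating the kernel-machine output with a weighted sum over only the k nearest support vectors, can be sketched in a few lines of Python; the RBF kernel and the exhaustive distance computation here are illustrative stand-ins for the algorithm's incremental approximate nearest-neighbor search.

        import numpy as np

        def rbf_kernel(X, q, gamma=0.5):
            # RBF kernel values between each row of X and the query q
            return np.exp(-gamma * np.sum((X - q) ** 2, axis=1))

        def approx_km_output(q, support_vectors, alphas, bias, k):
            # Keep only the k support vectors nearest to the query
            d = np.sum((support_vectors - q) ** 2, axis=1)
            nearest = np.argsort(d)[:k]
            return alphas[nearest] @ rbf_kernel(support_vectors[nearest], q) + bias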

  10. Cylindrical-Wave Approach for Electromagnetic Scattering by Subsurface Targets in a Lossy Medium

    NASA Astrophysics Data System (ADS)

    Frezza, F.; Pajewski, L.; Ponti, C.; Schettini, G.; Tedeschi, N.

    2012-04-01

    The Cylindrical-Wave Approach (CWA) rigorously solves, in the spectral domain, the electromagnetic forward scattering by a finite set of buried two-dimensional perfectly-conducting or dielectric objects [1]-[3]. In this technique, the field scattered by underground objects is represented in terms of a superposition of cylindrical waves. Use is made of the plane-wave spectrum [1] to take into account the interaction of such waves with the planar interface between air and soil, and between any different layers possibly present in the ground. In this work we present the progress we have recently made to improve the method. In particular, we have faced the fundamental problem of losses in the ground: this is of significant importance in remote-sensing applications, since real soils often have complex permittivity and conductivity, and sometimes also a complex permeability. First, a convergent closed-form representation of the cylindrical-wave angular spectrum in a generic lossy medium has been found [4]. To obtain this spectrum, the canonical Sommerfeld representation of the first-kind Hankel function of integer order has been used; its integration path has been modified to ensure the integral's convergence for complex values of the wavenumber. Subsequently, the solution to the scattering problem of a plane wave propagating in air, impinging on the interface with a dissipative medium, and interacting with a buried perfectly-conducting cylinder has been derived. The developed method may return the field values at each point of space, both in the near and far zones; moreover, it may be applied for any polarization, and for arbitrary values of the cylinder size and of the distance between the cylinder and the air-soil interface. The theoretical solution has been implemented in a Fortran code. The numerical evaluation of the reflected and transmitted cylindrical wave functions in the presence of lossy media was a critical point: we extended the Gaussian adaptive quadrature

  11. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    SciTech Connect

    Huang, Jessie Y.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.; Eklund, David; Childress, Nathan L.

    2013-12-15

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found
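
    To make the density-scaling simplification concrete, the following toy 1-D Python sketch deposits energy from each interaction voxel using a water kernel indexed by the radiologic (density-scaled) distance; it is purely illustrative and is not the collapsed cone implementation studied in the paper.

        import numpy as np

        def dose_1d(terma, kernel, density):
            # Superpose energy from every interaction voxel i into every
            # deposition voxel j, looking up the water kernel at the
            # density-scaled distance between them
            n = len(terma)
            dose = np.zeros(n)
            for i in range(n):
                for j in range(n):
                    lo, hi = sorted((i, j))
                    r_eff = int(density[lo:hi + 1].sum())  # radiologic path
                    if r_eff < len(kernel):
                        dose[j] += terma[i] * kernel[r_eff]
            return dose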

  12. SU-E-T-214: Intensity Modulated Proton Therapy (IMPT) Based On Passively Scattered Protons and Multi-Leaf Collimation: Prototype TPS and Dosimetry Study

    SciTech Connect

    Sanchez-Parcerisa, D; Carabe-Fernandez, A

    2014-06-01

    Purpose. Intensity-modulated proton therapy is usually implemented with multi-field optimization of pencil-beam scanning (PBS) proton fields. However, in view of the experience with photon-IMRT, proton facilities equipped with double-scattering (DS) delivery and multi-leaf collimation (MLC) could produce highly conformal dose distributions (and possibly eliminate the need for patient-specific compensators) with a clever use of their MLC field shaping, provided that an optimal inverse TPS is developed. Methods. A prototype TPS was developed in MATLAB. The dose calculation process was based on a fluence-dose algorithm on an adaptive divergent grid. A database of dose kernels was precalculated in order to allow for fast variations of the field range and modulation during optimization. The inverse planning process was based on the adaptive simulated annealing approach, with direct aperture optimization of the MLC leaves. A dosimetry study was performed on a phantom formed by three concentric semicylinders separated by 5 mm, of which the innermost and outermost were regarded as organs at risk (OARs), and the middle one as the PTV. We chose a concave target (which is not treatable with conventional DS fields) to show the potential of our technique. The optimizer was configured to minimize the mean dose to the OARs while keeping good coverage of the target. Results. The plan produced by the prototype TPS achieved a conformity index of 1.34, with the mean doses to the OARs below 78% of the prescribed dose. This result is hardly achievable with the traditional conformal DS technique with compensators, and it is comparable to what can be obtained with PBS. Conclusion. It is certainly feasible to produce IMPT fields with MLC-shaped passive scattering fields. With a fully developed treatment planning system, the produced plans can be superior to traditional DS plans in terms of plan conformity and dose to organs at risk.
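
    A generic adaptive-simulated-annealing skeleton of the kind described can be sketched in Python; the cost, neighbor, and cooling choices below are placeholders, not the prototype TPS's actual direct aperture optimization.

        import math, random

        def simulated_annealing(cost, x0, neighbor, t0=1.0, cooling=0.995, steps=5000):
            # Accept worse solutions with a temperature-dependent
            # probability so the optimizer can escape local minima
            x, c, t = x0, cost(x0), t0
            for _ in range(steps):
                y = neighbor(x)          # e.g., nudge one MLC leaf position
                cy = cost(y)
                if cy < c or random.random() < math.exp((c - cy) / t):
                    x, c = y, cy
                t *= cooling             # geometric cooling schedule
            return x, c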

  13. Climate adaptation

    NASA Astrophysics Data System (ADS)

    Kinzig, Ann P.

    2015-03-01

    This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.

  14. Feasibility of detecting Aflatoxin B1 in single maize kernels using hyperspectral imaging

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The feasibility of detecting Aflatoxin B1 (AFB1) in single maize kernels inoculated with Aspergillus flavus conidia in the field, as well as its spatial distribution in the kernels, was assessed using a near-infrared hyperspectral imaging (HSI) technique. Firstly, an image mask was applied to a pixel-b...

  15. Detecting and Segregating Black Tip-Damaged Wheat Kernels Using Visible and Near Infrared Spectroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Detection of individual wheat kernels with black tip symptom (BTS) and black tip damage (BTD) was demonstrated using near infrared reflectance spectroscopy (NIRS) and silicon light-emitting-diode (LED) based instruments. The two instruments tested, a single kernel near-infrared spectroscopy instrume...

  16. Protein Analysis Meets Visual Word Recognition: A Case for String Kernels in the Brain

    ERIC Educational Resources Information Center

    Hannagan, Thomas; Grainger, Jonathan

    2012-01-01

    It has been recently argued that some machine learning techniques known as Kernel methods could be relevant for capturing cognitive and neural mechanisms (Jakel, Scholkopf, & Wichmann, 2009). We point out that "String kernels," initially designed for protein function prediction and spam detection, are virtually identical to one contending proposal…

  17. Genetic, Genomic, and Breeding Approaches to Further Explore Kernel Composition Traits and Grain Yield in Maize

    ERIC Educational Resources Information Center

    Da Silva, Helena Sofia Pereira

    2009-01-01

    Maize ("Zea mays L.") is a model species well suited for the dissection of complex traits which are often of commercial value. The purpose of this research was to gain a deeper understanding of the genetic control of maize kernel composition traits starch, protein, and oil concentration, and also kernel weight and grain yield. Germplasm with…

  18. Citreoviridin levels in Eupenicillium ochrosalmoneum-infested maize kernels at harvest.

    PubMed Central

    Wicklow, D T; Stubblefield, R D; Horn, B W; Shotwell, O L

    1988-01-01

    Citreoviridin contents were measured in eight bulk samples of maize kernels collected from eight fields immediately following harvest in southern Georgia. Citreoviridin contamination in six of the bulk samples ranged from 19 to 2,790 micrograms/kg. In hand-picked samples the toxin was concentrated in a few kernels (pick-outs), the contents of which were stained a bright lemon yellow (range, 53,800 to 759,900 micrograms/kg). The citreoviridin-producing fungus Eupenicillium ochrosalmoneum Scott & Stolk was isolated from each of these pick-out kernels. Citreoviridin was not detected in bulk samples from two of the fields. Aflatoxins were also present in all of the bulk samples (total aflatoxin B1 and B2; range, 7 to 360 micrograms/kg), including those not containing citreoviridin. In Biotron-grown maize ears that were inoculated with E. ochrosalmoneum through a wound made with a toothpick, citreoviridin was concentrated primarily in the wounded and fungus-rotted kernels (range, 142,000 to 2,780,000 micrograms/kg). Samples of uninjured kernels immediately adjacent to the wounded kernel (first circle) had less than 4,000 micrograms of citreoviridin per kg, while the mean concentration of toxin in kernel samples representing the next row removed (second circle) and all remaining kernels from the ear was less than 45 micrograms/kg. Animal toxicosis has not been linked to citreoviridin-contaminated maize. PMID:3389806

  19. Resistant-starch Formation in High-amylose Maize Starch During Kernel Development

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this study was to understand the resistant-starch (RS) formation during the kernel development of high-amylose maize, GEMS-0067 line. RS content of the starch, determined using AOAC Method 991.43 for total dietary fiber, increased with kernel maturation and the increase in amylose/...

  20. Kernel Composition, Starch Structure, and Enzyme Digestibility of Opaque-2 Maize and Quality Protein Maize

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Objectives of this study were to understand how opaque-2 (o2) mutation and quality protein maize (QPM) affect maize kernel composition and starch structure, property, and enzyme digestibility. Kernels of o2 maize contained less protein (9.6−12.5%) than those of the wild-type (WT) counterparts (12...

  1. Lesser grain borers, Rhyzopertha dominica, select rough rice kernels with cracked hulls for infestation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Tests were conducted to determine whether differing amounts of kernels with cracked hulls (0, 5, 10, and 20%) mixed with intact kernels affected progeny production of the lesser grain borer, Rhyzopertha dominica, in two rough rice varieties, Francis and Wells. Wells had been previously classified as...

  2. DFT calculations of molecular excited states using an orbital-dependent nonadiabatic exchange kernel

    SciTech Connect

    Ipatov, A. N.

    2010-02-15

    A density functional method for computing molecular excitation spectra is presented that uses a frequency-dependent kernel and takes into account the nonlocality of the exchange interaction. Owing to its high numerical stability and the use of a nonadiabatic (frequency-dependent) exchange kernel, the proposed approach provides a qualitatively correct description of the asymptotic behavior of charge-transfer excitation energies.

  3. Reduction of Salmonella Enteritidis Population Sizes on Almond Kernels with Infrared Heat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Catalytic infrared (IR) heating was investigated to determine its effect on Salmonella enterica serovar Enteritidis population sizes on raw almond kernels. Using a double-sided catalytic infrared heating system, a radiation intensity of 5458 W/m2 caused a fast temperature increase at the kernel surf...

  4. Predicting dissolved oxygen concentration using kernel regression modeling approaches with nonlinear hydro-chemical data.

    PubMed

    Singh, Kunwar P; Gupta, Shikha; Rai, Premanjali

    2014-05-01

    Kernel function-based regression models were constructed and applied to a nonlinear hydro-chemical dataset pertaining to surface water for predicting the dissolved oxygen levels. Initial features were selected using a nonlinear approach. Nonlinearity in the data was tested using the BDS statistic, which revealed that the data have a nonlinear structure. Kernel ridge regression, kernel principal component regression, kernel partial least squares regression, and support vector regression models were developed using the Gaussian kernel function, and their generalization and predictive abilities were compared in terms of several statistical parameters. Model parameters were optimized using a cross-validation procedure. The proposed kernel regression methods successfully captured the nonlinear features of the original data by transforming it to a high-dimensional feature space using the kernel function. The performance of all the kernel-based modeling methods used here was comparable in terms of both predictive and generalization abilities. Values of the performance criteria suggested the adequacy of the constructed models to fit the nonlinear data and their good predictive capabilities. PMID:24338099
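
    For reference, a Gaussian-kernel ridge regression with cross-validated hyperparameters can be set up in a few lines with scikit-learn; the parameter grids below are assumptions for illustration, not the values used in the study.

        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import GridSearchCV

        model = GridSearchCV(
            KernelRidge(kernel="rbf"),                     # Gaussian kernel
            param_grid={"alpha": [1e-3, 1e-2, 1e-1, 1.0],  # ridge penalty
                        "gamma": [1e-2, 1e-1, 1.0]},       # kernel width
            cv=5,
        )
        # model.fit(X_train, y_train); y_pred = model.predict(X_test)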

  5. Assessing the utility of microwave kernel moisture sensing in peanut drying

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Presently, in the peanut industry, peanut pods (unshelled peanuts) have to be shelled for kernel moisture content determination with the official moisture meter. This makes kernel moisture content determination laborious and limits efficiency during peanut drying. For field testing during the 2013 a...

  6. Classification of Hazelnut Kernels by Using Impact Acoustic Time-Frequency Patterns

    NASA Astrophysics Data System (ADS)

    Kalkan, Habil; Ince, Nuri Firat; Tewfik, Ahmed H.; Yardimci, Yasemin; Pearson, Tom

    2007-12-01

    Hazelnuts with damaged or cracked shells are more prone to infection with aflatoxin-producing molds (Aspergillus flavus); the aflatoxins these molds produce are carcinogenic. In this study, we introduce a new approach that separates damaged/cracked hazelnut kernels from good ones by using time-frequency features obtained from impact acoustic signals. The proposed technique requires no prior knowledge of the relevant time and frequency locations. In an offline step, the algorithm adaptively segments impact signals from a training data set in time using local cosine packet analysis and a Kullback-Leibler criterion to assess the discrimination power of different segmentations. In each resulting time segment, the signal is further decomposed into subbands using an undecimated wavelet transform. The most discriminative subbands are selected according to the Euclidean distance between the cumulative probability distributions of the corresponding subband coefficients, and the selected features are fed into a linear discriminant analysis (LDA) classifier. In the online classification step, the algorithm simply computes the learned features from the observed signal and feeds them to the LDA classifier. The algorithm achieved a throughput rate of 45 nuts/s and a classification accuracy of 96% with the 30 most discriminative features, higher than the rates provided by prior methods.
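
    A minimal Python sketch of the final classification stage: rank subband features by a separability score and feed the top 30 to an LDA classifier. A simple Fisher ratio stands in here for the paper's Kullback-Leibler and CDF-distance criteria.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def top_k_features(X, y, k=30):
            # Fisher ratio as a stand-in separability score per feature
            mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
            v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
            score = (mu0 - mu1) ** 2 / (v0 + v1 + 1e-12)
            return np.argsort(score)[::-1][:k]

        # idx = top_k_features(X_train, y_train)
        # clf = LinearDiscriminantAnalysis().fit(X_train[:, idx], y_train)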

  7. Kernel regression image processing method for optical readout MEMS based uncooled IRFPA

    NASA Astrophysics Data System (ADS)

    Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin; Hui, Mei; Zhou, Xiaoxiao

    2009-11-01

    Almost two years after the investors in Sarcon Microsystems pulled the plug, micro-cantilever-array-based uncooled IR detector technology is again attracting more and more attention because of its low cost and high credibility. An uncooled thermal detector array with low NETD is designed and fabricated using MEMS bimaterial microcantilever structures that bend in response to thermal change. The IR images of objects obtained by these FPAs are read out by an optical method. For these IR images, the problem of fixed pattern noise (FPN) is complicated by the fact that the response of each FPA detector changes due to a variety of factors. We adapt and expand kernel regression ideas for use in image denoising, and the quality of the processed images is improved markedly. Extensive computation and analysis have been carried out by applying the discussed algorithm both to simulated data and to real data. The experimental results demonstrate that better RMSE and a higher peak signal-to-noise ratio (PSNR) can be obtained than with traditional methods. Finally, we discuss the factors that determine the ultimate performance of the FPA, and we note that one of the unique advantages of the present approach is its scalability to larger imaging arrays.

  8. Feature extraction of kernel regress reconstruction for fault diagnosis based on self-organizing manifold learning

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoguang; Liang, Lin; Xu, Guanghua; Liu, Dan

    2013-09-01

    The feature space extracted from vibration signals with various faults is often nonlinear and of high dimension. Nonlinear dimensionality-reduction methods, such as manifold learning, are available for extracting low-dimensional embeddings. However, these methods all rely on manual intervention, which brings shortcomings in stability and in suppressing disturbance noise. To extract features automatically, a manifold learning method with self-organizing mapping is introduced for the first time. Under the non-uniform sample distribution reconstructed by the phase space, the expectation-maximization (EM) iteration algorithm is used to divide the local neighborhoods adaptively, without manual intervention. After that, the local tangent space alignment (LTSA) algorithm is adopted to compress the high-dimensional phase space into a more truthful low-dimensional representation. Finally, the signal is reconstructed by kernel regression. Several typical cases, including the Lorenz system, an engine fault with a piston-pin defect, and a bearing fault with an outer-race defect, are analyzed. Compared with LTSA alone and the continuous wavelet transform, the results show that the background noise can be fully restrained and the entire periodic repetition of the impact components is well separated and identified. A new way to automatically and precisely extract the impulsive components from mechanical signals is proposed.
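
    The LTSA compression step can be reproduced with scikit-learn's manifold module, as in this sketch; the neighborhood size and target dimension are illustrative, and the EM-based adaptive neighborhood division described above is not part of the library call.

        from sklearn.manifold import LocallyLinearEmbedding

        # Compress the reconstructed phase space (n_samples x n_dims)
        # into a low-dimensional embedding via local tangent space alignment
        ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                                      method="ltsa")
        # embedding = ltsa.fit_transform(phase_space)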

  9. Adjusted sequence kernel association test for rare variants controlling for cryptic and family relatedness.

    PubMed

    Oualkacha, Karim; Dastani, Zari; Li, Rui; Cingolani, Pablo E; Spector, Timothy D; Hammond, Christopher J; Richards, J Brent; Ciampi, Antonio; Greenwood, Celia M T

    2013-05-01

    Recent progress in sequencing technologies makes it possible to identify rare and unique variants that may be associated with complex traits. However, the results of such efforts depend crucially on the use of efficient statistical methods and study designs. Although family-based designs might enrich a data set for familial rare disease variants, most existing rare variant association approaches assume independence of all individuals. We introduce here a framework for association testing of rare variants in family-based designs. This framework is an adaptation of the sequence kernel association test (SKAT) which allows us to control for family structure. Our adjusted SKAT (ASKAT) combines the SKAT approach and the factored spectrally transformed linear mixed models (FaST-LMMs) algorithm to capture family effects based on a LMM incorporating the realized proportion of the genome that is identical by descent between pairs of individuals, and using restricted maximum likelihood methods for estimation. In simulation studies, we evaluated type I error and power of this proposed method and we showed that regardless of the level of the trait heritability, our approach has good control of type I error and good power. Since our approach uses FaST-LMM to calculate variance components for the proposed mixed model, ASKAT is reasonably fast and can analyze hundreds of thousands of markers. Data from the UK twins consortium are presented to illustrate the ASKAT methodology. PMID:23529756
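
    The variance-component score statistic underlying SKAT-type tests takes one line once the null-model residuals are in hand; in ASKAT those residuals come from the FaST-LMM mixed model that absorbs family structure. A minimal Python sketch:

        import numpy as np

        def skat_q(residuals, G, weights):
            # Q = r' K r with kernel K = G diag(w) G', where G is the
            # n x m genotype matrix and w the per-variant weights
            K = G @ np.diag(weights) @ G.T
            return residuals @ K @ residuals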

  10. Missing intensity interpolation using a kernel PCA-based POCS algorithm and its applications.

    PubMed

    Ogawa, Takahiro; Haseyama, Miki

    2011-02-01

    A missing-intensity interpolation method using a kernel principal component analysis (PCA)-based projection onto convex sets (POCS) algorithm, and its applications, are presented in this paper. In order to interpolate missing intensities within a target image, the proposed method reconstructs local textures containing the missing pixels by using the POCS algorithm. In this reconstruction process, a nonlinear eigenspace is constructed from each kind of texture, and the optimal subspace for the target local texture is introduced into the constraint of the POCS algorithm. In the proposed method, the optimal subspace can be selected by monitoring the errors converged in the reconstruction process. This approach overcomes a limitation of conventional methods, which cannot effectively perform adaptive reconstruction of the target textures because of the missing intensities, and thereby enables successful interpolation of the missing intensities. Furthermore, since our method can restore any image, including images with arbitrarily shaped missing areas, its potential in two image reconstruction tasks, image enlargement and missing-area restoration, is also shown in this paper.
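
    The POCS iteration itself is compact: alternate between projecting onto the learned texture subspace and re-imposing the known pixels. In this Python sketch, `project_to_subspace` is a placeholder for the kernel-PCA projection with pre-image computation.

        import numpy as np

        def pocs_inpaint(patch, known_mask, project_to_subspace, n_iter=50):
            x = patch.copy()
            for _ in range(n_iter):
                x = project_to_subspace(x)          # set 1: texture subspace
                x[known_mask] = patch[known_mask]   # set 2: known intensities
            return x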

  11. Study on generating of thermal neutron scattering cross sections for LiH

    SciTech Connect

    Wang, L.; Jiang, X.; Zhao, Z.; Chen, L.

    2013-07-01

    LiH is designated as a promising moderator and shielding material because of its low density, high melting point, and large fraction of H atoms. However, the lack of thermal neutron cross sections for LiH makes numerical calculations deviate from experimental data to some extent. As a result, it is necessary to study the LiH thermal kernel effect. The phonon properties of LiH have been investigated by first-principles calculations using the plane-wave pseudopotential method with the CASTEP code. The scattering law and the thermal neutron scattering cross sections for Li and H have been generated using the resulting phonon distribution. The results have been compared with zirconium hydride data. The GASKET and NJOY/LEAPR codes have been used in the calculation of the scattering law, and their results have been compared with the reference; the discrepancy mainly comes from the phonon spectra and their expansion. LEAPR has the capability to compute scattering through larger energy and momentum transfers than GASKET does. By studying the LiH phonon spectrum and constructing a model of the LiH thermal kernel and scattering matrix, ACE-format LiH thermal neutron cross sections for the MCNP software can be generated and used for reactor neutronics calculations. (authors)

  12. Boltzmann Solver with Adaptive Mesh in Velocity Space

    SciTech Connect

    Kolobov, Vladimir I.; Arslanbekov, Robert R.; Frolova, Anna A.

    2011-05-20

    We describe the implementation of a direct Boltzmann solver with an Adaptive Mesh in Velocity Space (AMVS) using a quad/octree data structure. The benefits of the AMVS technique are demonstrated for charged-particle transport in weakly ionized plasmas, where the collision integral is linear. We also describe the implementation of AMVS for the nonlinear Boltzmann collision integral. Test computations demonstrate both the advantages and the deficiencies of the current method for calculations of narrow-kernel distributions.

  13. FABRICATION PROCESS AND PRODUCT QUALITY IMPROVEMENTS IN ADVANCED GAS REACTOR UCO KERNELS

    SciTech Connect

    Charles M Barnes

    2008-09-01

    A major element of the Advanced Gas Reactor (AGR) program is developing fuel fabrication processes to produce high-quality uranium-containing kernels, TRISO-coated particles and fuel compacts needed for planned irradiation tests. The goals of the AGR program also include developing the fabrication technology to mass produce this fuel at low cost. Kernels for the first AGR test (“AGR-1”) consisted of uranium oxycarbide (UCO) microspheres that were produced by an internal gelation process followed by high-temperature steps to convert the UO3 + C “green” microspheres first to UO2 + C and then to UO2 + UCx. The high-temperature steps also densified the kernels. Babcock and Wilcox (B&W) fabricated UCO kernels for the AGR-1 irradiation experiment, which went into the Advanced Test Reactor (ATR) at Idaho National Laboratory in December 2006. An evaluation of the kernel process following AGR-1 kernel production led to several recommendations to improve the fabrication process. These recommendations included testing alternative methods of dispersing carbon during broth preparation, evaluating the method of broth mixing, optimizing the broth chemistry, optimizing sintering conditions, and demonstrating fabrication of the larger-diameter UCO kernels needed for the second AGR irradiation test. Based on these recommendations and requirements, a test program was defined and performed. Certain portions of the test program were performed by Oak Ridge National Laboratory (ORNL), while tests at larger scale were performed by B&W. The tests at B&W have demonstrated improvements in both kernel properties and process operation. Changes in the form of carbon black used and the method of mixing the carbon prior to forming kernels led to improvements in the phase distribution in the sintered kernels, greater consistency in kernel properties, a reduction in forming run time, and simplifications to the forming process. Process parameter variation tests in both forming and sintering steps led

  14. Aspergillus flavus infection induces transcriptional and physical changes in developing maize kernels

    PubMed Central

    Dolezal, Andrea L.; Shu, Xiaomei; OBrian, Gregory R.; Nielsen, Dahlia M.; Woloshuk, Charles P.; Boston, Rebecca S.; Payne, Gary A.

    2014-01-01

    Maize kernels are susceptible to infection by the opportunistic pathogen Aspergillus flavus. Infection results in reduction of grain quality and contamination of kernels with the highly carcinogenic mycotoxin, aflatoxin. To understand the host response to infection by the fungus, transcription of approximately 9000 maize genes was monitored during the host-pathogen interaction with a custom-designed Affymetrix GeneChip® DNA array. More than 4000 maize genes were found differentially expressed at an FDR of 0.05. This included the up-regulation of defense-related genes and signaling pathways. Transcriptional changes also were observed in primary metabolism genes. Starch biosynthetic genes were down-regulated during infection, while genes encoding maize hydrolytic enzymes, presumably involved in the degradation of host reserves, were up-regulated. These data indicate that infection of the maize kernel by A. flavus induced metabolic changes in the kernel, including the production of a defense response, as well as a disruption in kernel development. PMID:25132833

  15. Spatial-temporal filtering method based on kernel density estimation in suppressing background clutter

    NASA Astrophysics Data System (ADS)

    Tian, Yuexin; Liu, Yinghui; Gao, Kun; Shu, Yuwen; Ni, Guoqiang

    2014-11-01

    A temporal-spatial filtering algorithm based on a kernel density estimation structure is presented for background suppression in this paper. The algorithm can be divided into spatial filtering and temporal filtering. In spatial filtering, a smoothing process is applied to the background of an infrared image sequence using the kernel density estimation algorithm. In temporal filtering, the probability density of the image gray values after spatial filtering is calculated with the kernel density estimation algorithm. The background residual and blind pixels are picked out based on their gray values and are further filtered. The algorithm is validated on a real infrared image sequence, which is also processed with the Fuller kernel filter, the uniform kernel filter, and a high-pass filter for comparison. Quantitative analysis shows that the temporal-spatial filtering algorithm based on the nonparametric method is a satisfactory way to suppress background clutter in infrared images, and the SNR is significantly improved as well.
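
    The temporal-filtering step amounts to a per-pixel Gaussian kernel density estimate over the frame stack, as in this minimal Python sketch (the bandwidth h and the array shapes are assumptions):

        import numpy as np

        def kde_pdf(samples, grid, h):
            # Gaussian KDE of one pixel's temporal gray-value distribution;
            # samples: gray values over time, grid: evaluation points
            u = (grid[None, :] - samples[:, None]) / h
            return np.exp(-0.5 * u ** 2).mean(axis=0) / (h * np.sqrt(2 * np.pi))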

  16. On the equivalence between kernel self-organising maps and self-organising mixture density networks.

    PubMed

    Yin, Hujun

    2006-01-01

    The kernel method has become a useful trick and has been widely applied to various learning models to extend their nonlinear approximation and classification capabilities. Such extensions have also recently been applied to the Self-Organising Map (SOM). In this paper, two recently proposed kernel SOMs are reviewed, together with their link to an energy function. The Self-Organising Mixture Network is an extension of the SOM for mixture density modelling. This paper shows that, with an isotropic, density-type kernel function, the kernel SOM is equivalent to a homoscedastic Self-Organising Mixture Network, an entropy-based density estimator. This revelation explains, on the one hand, that kernelising the SOM can improve classification performance by acquiring better probability models of the data; on the other hand, it shows that the SOM already naturally approximates the kernel method.

  17. Fast image search with locality-sensitive hashing and homogeneous kernels map.

    PubMed

    Li, Jun-yi; Li, Jian-hua

    2015-01-01

    Fast image search with efficient additive kernels and kernel locality-sensitive hashing has been proposed. To support kernel functions, recent work has explored methods for constructing locality-sensitive hashing (LSH) that guarantee the approach's linear query time; however, existing methods do not fully solve the problems of the LSH algorithm, and they indirectly sacrifice search-result accuracy in order to allow fast queries. To improve the search accuracy, we show how to apply explicit feature maps of homogeneous kernels, which perform the feature transformation, and combine them with kernel locality-sensitive hashing. We evaluate our method on several large datasets and show that it improves accuracy relative to commonly used methods while making object classification and content-based retrieval faster and more accurate.
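
    The two ingredients compose naturally in code: an explicit homogeneous-kernel feature map followed by a random-hyperplane (cosine) LSH. This scikit-learn/NumPy sketch uses an additive chi-squared map and a 64-bit hash as assumed stand-ins for the paper's exact configuration.

        import numpy as np
        from sklearn.kernel_approximation import AdditiveChi2Sampler

        feature_map = AdditiveChi2Sampler(sample_steps=2)  # explicit map
        # Z = feature_map.fit_transform(X)           # X: histogram features
        # planes = np.random.randn(Z.shape[1], 64)   # 64 random hyperplanes
        # codes = Z @ planes > 0                     # binary LSH codes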

  18. Kaon-nucleus scattering

    NASA Technical Reports Server (NTRS)

    Hong, Byungsik; Maung, Khin Maung; Wilson, John W.; Buck, Warren W.

    1989-01-01

    The derivations of the Lippmann-Schwinger equation and the Watson multiple-scattering series are given. A simple optical potential is found to be the first term of that series. Harmonic-well and Woods-Saxon number-density distribution models of the nucleus are used, with the two-body t-matrix taken from scattering experiments. The parameterized two-body inputs, which are kaon-nucleon total cross sections, elastic slope parameters, and the ratio of the real to the imaginary part of the forward elastic scattering amplitude, are presented. The eikonal approximation was chosen as our solution method to estimate the total and absorptive cross sections for kaon-nucleus scattering.

  19. Optical monitoring of rheumatoid arthritis: Monte Carlo generated reconstruction kernels

    NASA Astrophysics Data System (ADS)

    Minet, O.; Beuthan, J.; Hielscher, A. H.; Zabarylo, U.

    2008-06-01

    Optical imaging in biomedicine is governed by light absorption and scattering interactions with microscopic and macroscopic constituents of the medium. Therefore, the light-scattering characteristics of human tissue correlate with the stage of some diseases. In the near-infrared range, scattering, whose coefficient is approximately two orders of magnitude greater than that of absorption, plays the dominant role. When measuring the optical parameters, variations were discovered that correlate with rheumatoid arthritis of a small joint. An experimental setup for transilluminating the finger joint with a laser diode and the pattern of stray-light detection are demonstrated. The scattering caused by skin contains no useful information, and it can be removed by a deconvolution technique to enhance the diagnostic value of this non-invasive optical method. Monte Carlo simulations ensure both the construction of the corresponding point spread function and the theoretical verification of the stray-light picture in a rather complex geometry.

  20. FABRICATION OF URANIUM OXYCARBIDE KERNELS AND COMPACTS FOR HTR FUEL

    SciTech Connect

    Dr. Jeffrey A. Phillips; Eric L. Shaber; Scott G. Nagley

    2012-10-01

    As part of the program to demonstrate tristructural isotropic (TRISO)-coated fuel for the Next Generation Nuclear Plant (NGNP), Advanced Gas Reactor (AGR) fuel is being irradiation tested in the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL). This testing has led to improved kernel fabrication techniques, the formation of TRISO fuel particles, and upgrades to the overcoating, compaction, and heat treatment processes. Combined, these improvements provide a fuel manufacturing process that meets the stringent requirements associated with testing in the AGR experimentation program. Researchers at INL are working in conjunction with a team from Babcock and Wilcox (B&W) and Oak Ridge National Laboratory (ORNL) to (a) improve the quality of uranium oxycarbide (UCO) fuel kernels, (b) deposit TRISO layers to produce a fuel that meets or exceeds the standard developed by German researchers in the 1980s, and (c) develop a process to overcoat TRISO particles with the same matrix material, but applying it with water, using equipment previously and successfully employed in the pharmaceutical industry. A primary goal of this work is to simplify the process, making it more robust and repeatable while relying less on operator technique than prior overcoating efforts. A secondary goal is to improve first-pass yields to greater than 95% through the use of established technology and equipment. In the first test, called “AGR-1,” graphite compacts containing approximately 300,000 coated particles were irradiated from December 2006 to November 2009. The AGR-1 fuel was designed to closely replicate many of the properties of German TRISO-coated particles thought to be important for good fuel performance. No release of gaseous fission products, which would indicate particle coating failure, was detected in the nearly 3-year irradiation to a peak burnup of 19.6% at a time-average temperature of 1038–1121°C. Before fabricating AGR-2 fuel, each