Science.gov

Sample records for kernel smoothing methods

  1. A method of smoothed particle hydrodynamics using spheroidal kernels

    NASA Technical Reports Server (NTRS)

    Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.

    1995-01-01

    We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited to modeling systems that retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result, the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.

  2. An adaptive kernel smoothing method for classifying Austrosimulium tillyardianum (Diptera: Simuliidae) larval instars.

    PubMed

    Cen, Guanjun; Yu, Yonghao; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao

    2015-01-01

    In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. A further problem is that fixed bandwidths have tended to be chosen somewhat subjectively. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks' rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into eight and ten instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby's growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods.

  3. An Adaptive Kernel Smoothing Method for Classifying Austrosimulium tillyardianum (Diptera: Simuliidae) Larval Instars

    PubMed Central

    Cen, Guanjun; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao

    2015-01-01

    In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. A further problem is that fixed bandwidths have tended to be chosen somewhat subjectively. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks’ rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into eight and ten instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby’s growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods. PMID:26546689
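    The core idea above, kernel bandwidths that adapt to the local density of the measurements, can be sketched in a few lines of Python. This is a generic Abramson-style adaptive kernel density estimate, not the authors' Brooks'-rule bandwidth selector; the pilot bandwidth h0, the sensitivity alpha, and the simulated head-capsule widths are illustrative assumptions.

      import numpy as np

      def adaptive_kde(x, grid, alpha=0.5):
          # Pilot fixed-bandwidth Gaussian KDE (Silverman's rule of thumb).
          n = x.size
          h0 = 1.06 * x.std() * n ** (-0.2)
          pilot = (np.exp(-0.5 * ((x[:, None] - x[None, :]) / h0) ** 2).mean(axis=1)
                   / (h0 * np.sqrt(2 * np.pi)))
          # Local bandwidths: wider kernels where the pilot density is low, so
          # sparse tails are smoothed more while modes stay sharp.
          g = np.exp(np.log(pilot).mean())          # geometric mean of pilot density
          h = h0 * (pilot / g) ** (-alpha)
          # Variable-bandwidth estimate evaluated on the grid.
          K = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
          return K.mean(axis=1)

      # Simulated head-capsule widths (mm) from three instars; modes in the
      # smoothed density suggest divisions between instars.
      rng = np.random.default_rng(0)
      widths = rng.normal([0.3, 0.5, 0.8], 0.03, (200, 3)).ravel()
      density = adaptive_kde(widths, np.linspace(0.2, 0.9, 400))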

  4. A high-order fast method for computing convolution integral with smooth kernel

    NASA Astrophysics Data System (ADS)

    Qiang, Ji

    2010-02-01

    In this paper we report on a high-order fast method to numerically calculate a convolution integral with a smooth, non-periodic kernel. This method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for the discrete summation. The method can in principle attain arbitrarily high-order accuracy, depending on the number of points used in the integral approximation, at a computational cost of O(N log(N)), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h^4), where h is the size of the computational grid. Applications of the Simpson rule based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.

  5. A high-order fast method for computing convolution integral with smooth kernel

    SciTech Connect

    Qiang, Ji

    2009-09-28

    In this paper we report on a high-order fast method to numerically calculate a convolution integral with a smooth, non-periodic kernel. This method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for the discrete summation. The method can in principle attain arbitrarily high-order accuracy, depending on the number of points used in the integral approximation, at a computational cost of O(N log(N)), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h^4), where h is the size of the computational grid. Applications of the Simpson rule based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.
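    As a rough illustration of the scheme described above (quadrature weights plus FFT-based summation), here is a Python sketch. It uses trapezoid weights for brevity, which gives O(h^2) accuracy; swapping in composite Simpson weights recovers the O(h^4) behavior quoted in the abstract. The function names and grid are assumptions, not the author's code.

      import numpy as np
      from scipy.signal import fftconvolve

      def convolve_smooth(f, kernel, x):
          # Approximates g(x_i) = integral K(x_i - y) f(y) dy on a uniform grid
          # as sum_j w_j K(x_i - x_j) f(x_j) h for all i at once, using
          # FFT-based linear convolution: O(N log N) instead of O(N^2).
          N, h = x.size, x[1] - x[0]
          w = np.ones(N)
          w[0] = w[-1] = 0.5            # trapezoid weights; a composite Simpson
                                        # pattern (1,4,2,...,4,1)/3 gives O(h^4)
          lags = np.arange(-(N - 1), N) * h   # kernel sampled at every lag x_i - x_j
          return fftconvolve(kernel(lags), w * f, mode="valid") * h

      # Test case from the abstract: a one-dimensional continuous Gauss transform.
      x = np.linspace(-5.0, 5.0, 401)
      g = convolve_smooth(np.exp(-x ** 2), lambda t: np.exp(-t ** 2) / np.sqrt(np.pi), x)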

  6. CRKSPH - A Conservative Reproducing Kernel Smoothed Particle Hydrodynamics Scheme

    NASA Astrophysics Data System (ADS)

    Frontiere, Nicholas; Raskin, Cody D.; Owen, J. Michael

    2017-03-01

    We present a formulation of smoothed particle hydrodynamics (SPH) that utilizes a first-order consistent reproducing kernel, a smoothing function that exactly interpolates linear fields with particle tracers. Previous formulations using reproducing kernel (RK) interpolation have had difficulty maintaining conservation of momentum due to the fact that RK kernels are not, in general, spatially symmetric. Here, we utilize a reformulation of the fluid equations such that mass, linear momentum, and energy are all rigorously conserved without any assumption about kernel symmetries, while additionally maintaining approximate angular momentum conservation. Our approach starts from a rigorously consistent interpolation theory, from which we derive the evolution equations that enforce the appropriate conservation properties, at the sacrifice of full consistency in the momentum equation. Additionally, by exploiting the increased accuracy of the RK method's gradient, we formulate a simple limiter for the artificial viscosity that reduces the excess diffusion normally incurred by the ordinary SPH artificial viscosity. Collectively, we call our suite of modifications to the traditional SPH scheme Conservative Reproducing Kernel SPH, or CRKSPH. CRKSPH retains many benefits of traditional SPH methods (such as preserving Galilean invariance and manifest conservation of mass, momentum, and energy) while improving on many of the shortcomings of SPH, particularly the overly aggressive artificial viscosity and zeroth-order inaccuracy. We compare CRKSPH to two different modern SPH formulations (pressure-based SPH and compatibly differenced SPH), demonstrating the advantages of our new formulation when modeling fluid mixing, strong shocks, and adiabatic phenomena.

  7. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template.
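    On a discrete mesh or graph, the weighted eigenfunction expansion described above has a compact analogue. The following Python snippet is a sketch of the idea (dense eigendecomposition, illustrative diffusion time t and truncation k), not the authors' implementation.

      import numpy as np

      def heat_kernel_smooth(L, y, t, k=50):
          # Expand the signal y in the first k eigenfunctions of the Laplacian L
          # and damp each coefficient by exp(-t * lambda_j); this weighted
          # expansion is equivalent to running isotropic heat diffusion for time t.
          lam, psi = np.linalg.eigh(L)   # for large meshes use scipy.sparse.linalg.eigsh
          lam, psi = lam[:k], psi[:, :k]
          return psi @ (np.exp(-t * lam) * (psi.T @ y))

      # Noisy signal on a path graph (1-D chain Laplacian).
      n = 100
      L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      y = np.sin(np.linspace(0, 3, n)) + 0.1 * np.random.default_rng(1).standard_normal(n)
      y_smooth = heat_kernel_smooth(L, y, t=5.0)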

  8. Kernel current source density method.

    PubMed

    Potworowski, Jan; Jakuczun, Wit; Lȩski, Szymon; Wójcik, Daniel

    2012-02-01

    Local field potentials (LFP), the low-frequency part of extracellular electrical recordings, are a measure of the neural activity reflecting dendritic processing of synaptic inputs to neuronal populations. To localize synaptic dynamics, it is convenient, whenever possible, to estimate the density of transmembrane current sources (CSD) generating the LFP. In this work, we propose a new framework, the kernel current source density method (kCSD), for nonparametric estimation of CSD from LFP recorded from arbitrarily distributed electrodes using kernel methods. We test specific implementations of this framework on model data measured with one-, two-, and three-dimensional multielectrode setups. We compare these methods with the traditional approach through numerical approximation of the Laplacian and with the recently developed inverse current source density methods (iCSD). We show that iCSD is a special case of kCSD. The proposed method opens up new experimental possibilities for CSD analysis from existing or new recordings on arbitrarily distributed electrodes (not necessarily on a grid), which can be obtained in extracellular recordings of single unit activity with multiple electrodes.

  9. Kernel method for corrections to scaling.

    PubMed

    Harada, Kenji

    2015-07-01

    Scaling analysis, in which one infers scaling exponents and a scaling function in a scaling law from given data, is a powerful tool for determining universal properties of critical phenomena in many fields of science. However, in many cases there are corrections to scaling, and the inference problem then becomes ill-posed through an uncontrollable irrelevant scaling variable. We propose a new kernel method based on Gaussian process regression to address this problem in general. We test the performance of the new kernel method on several example cases. In all cases, as the precision of the example data increases, the inference results of the new kernel method converge correctly. Because, unlike the conventional method, the new kernel method places no restriction on the scaling function even in the presence of corrections to scaling, it can be widely applied to real data in critical phenomena.
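    Since the method builds on Gaussian process regression, a minimal generic example may help fix ideas: fitting a noisy, unspecified function with an RBF-plus-noise kernel in scikit-learn. The data and kernel choice are illustrative, not the paper's setup.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      # Noisy observations of an unknown function, inferred without assuming
      # any parametric form: the property the abstract exploits for scaling
      # functions with corrections to scaling.
      rng = np.random.default_rng(0)
      X = rng.uniform(-2.0, 2.0, (40, 1))
      y = np.tanh(X[:, 0]) + rng.normal(0.0, 0.05, 40)

      gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
      gp.fit(X, y)
      mean, std = gp.predict(np.array([[0.5]]), return_std=True)  # posterior mean, uncertainty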

  10. FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX*

    PubMed Central

    Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia

    2015-01-01

    We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use a local smoothing kernel to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show the different convergence rates of each component of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion had not been studied prior to this work, even in the independent data case. PMID:26283801

  11. FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX.

    PubMed

    Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia

    We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use a local smoothing kernel to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show the different convergence rates of each component of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion had not been studied prior to this work, even in the independent data case.

  12. Equalizing resolution in smoothed-particle hydrodynamics calculations using self-adaptive sinc kernels

    NASA Astrophysics Data System (ADS)

    García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin

    2014-10-01

    Context: The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids, while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with low computational overhead. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.

  13. Using Cochran's Z Statistic to Test the Kernel-Smoothed Item Response Function Differences between Focal and Reference Groups

    ERIC Educational Resources Information Center

    Zheng, Yinggan; Gierl, Mark J.; Cui, Ying

    2010-01-01

    This study combined the kernel smoothing procedure and a nonparametric differential item functioning statistic--Cochran's Z--to statistically test the difference between the kernel-smoothed item response functions for reference and focal groups. Simulation studies were conducted to investigate the Type I error and power of the proposed…

  14. Nonlinear projection trick in kernel methods: an alternative to the kernel trick.

    PubMed

    Kwak, Nojun

    2013-12-01

    In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so-called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced-dimensional kernel space. This mapping is easily obtained from the eigenvalue decomposition of the kernel matrix. The proposed method is named the nonlinear projection trick, in contrast to the kernel trick. With this technique, the applicability of kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses the L1-norm instead of the L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach.
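    The construction described above, explicit coordinates obtained from the eigenvalue decomposition of the kernel matrix, takes only a few lines of numpy. A sketch assuming a centered, positive semidefinite kernel matrix; the tolerance is an illustrative choice.

      import numpy as np

      def nonlinear_projection_trick(K, tol=1e-10):
          # Factor K = Y Y^T with Y of rank r <= n: each row of Y is an explicit
          # coordinate vector in the reduced-dimensional kernel space, so any
          # linear algorithm run on Y matches its kernelized counterpart.
          lam, U = np.linalg.eigh(K)     # K symmetric positive semidefinite
          keep = lam > tol               # effective dimensionality < n samples
          return U[:, keep] * np.sqrt(lam[keep])

      # Gaussian kernel matrix of 1-D points; Y reproduces K exactly.
      x = np.linspace(0.0, 1.0, 8)
      K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)
      Y = nonlinear_projection_trick(K)
      assert np.allclose(Y @ Y.T, K)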

  15. Automated nonlinear system modeling with multiple fuzzy neural networks and kernel smoothing.

    PubMed

    Yu, Wen; Li, Xiaoou

    2010-10-01

    This paper presents a novel identification approach using fuzzy neural networks, focusing on structure and parameter uncertainties, which have been widely explored in the literature. The main contribution of this paper is an integrated analytic framework for automated structure selection and parameter identification. A kernel smoothing technique is used to generate a model structure automatically in a fixed time interval. To cope with structural change, a hysteresis strategy is proposed to guarantee finitely many switchings and the desired performance.

  16. Kernel methods for phenotyping complex plant architecture.

    PubMed

    Kawamura, Koji; Hibrand-Saint Oyant, Laurence; Foucher, Fabrice; Thouroude, Tatiana; Loustau, Sébastien

    2014-02-07

    Quantitative Trait Loci (QTL) mapping of plant architecture is a critical step for understanding the genetic determinism of plant architecture. Previous studies adopted simple measurements, such as plant height, stem diameter and branching intensity, for QTL mapping of plant architecture. Many of these quantitative traits are generally correlated with each other, which gives rise to statistical problems in the detection of QTL. We aim to test the applicability of kernel methods to phenotyping inflorescence architecture and its QTL mapping. We first test Kernel Principal Component Analysis (KPCA) and Support Vector Machines (SVM) on an artificial dataset of simulated inflorescences with different types of flower distribution, each coded as a sequence of flower number per node along a shoot. The ability of SVM and KPCA to discriminate the different inflorescence types is illustrated. We then apply the KPCA representation to a real dataset of rose inflorescence shoots (n=1460) obtained from a mapping population of 98 F1 hybrids. We find kernel principal components with high heritability (>0.7), and the QTL analysis identifies a new QTL that was not detected by a trait-by-trait analysis of simple architectural measurements. The main tools developed in this paper could be used to tackle the general problem of QTL mapping of complex (sequences, 3D structures, graphs) phenotypic traits.

  17. Kernel Smoothed Profile Likelihood Estimation in the Accelerated Failure Time Frailty Model for Clustered Survival Data

    PubMed Central

    Liu, Bo; Lu, Wenbin; Zhang, Jiajia

    2013-01-01

    Clustered survival data frequently arise in biomedical applications, where event times of interest are clustered into groups such as families. In this article we consider an accelerated failure time frailty model for clustered survival data and develop nonparametric maximum likelihood estimation for it via a kernel-smoother-aided EM algorithm. We show that the proposed estimator for the regression coefficients is consistent, asymptotically normal and semiparametric efficient when the kernel bandwidth is properly chosen. An EM-aided numerical differentiation method is derived for estimating its variance. Simulation studies evaluate the finite sample performance of the estimator, and it is applied to the Diabetic Retinopathy data set. PMID:24443587

  18. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centering step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.

  19. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  20. Comparison of Kernel Equating and Item Response Theory Equating Methods

    ERIC Educational Resources Information Center

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  21. Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets.

    PubMed

    Wang, Shitong; Wang, Jun; Chung, Fu-lai

    2014-01-01

    Kernel methods such as standard support vector machine and support vector regression training take O(N^3) time and O(N^2) space in their naïve implementations, where N is the training set size. Applying them to large data sets is thus computationally infeasible, and a replacement of the naïve method for finding the quadratic programming (QP) solutions is highly desirable. By observing that many kernel methods can be linked to kernel density estimation (KDE), which can be efficiently implemented by approximation techniques, a new learning method called fast KDE (FastKDE) is proposed to scale up kernel methods. It is based on establishing a connection between KDE and the QP problems formulated for kernel methods using an entropy-based integrated-squared-error criterion. As a result, FastKDE approximation methods can be applied to solve these QP problems. In this paper, the latest advance in fast data reduction via KDE is exploited. With just a simple sampling strategy, the resulting FastKDE method can be used to scale up various kernel methods with a theoretical guarantee that their performance does not degrade much. It has a time complexity of O(m^3), where m is the number of data points sampled from the training set. Experiments on different benchmarking data sets demonstrate that the proposed method has comparable performance with the state-of-the-art method and is effective for a wide range of kernel methods to achieve fast learning in large data sets.
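    The "simple sampling strategy" mentioned above can be illustrated with scipy's Gaussian KDE: build the density estimate on m << N sampled points, so downstream evaluation cost depends on m rather than N. This is a sketch of the sampling idea only, not the FastKDE algorithm itself; the sizes are arbitrary.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(1)
      data = rng.normal(size=100_000)     # full training set, size N
      m = 2_000                           # sampled subset, m << N
      subset = rng.choice(data, size=m, replace=False)

      kde = gaussian_kde(subset)                 # estimate built on m points only
      density = kde(np.linspace(-4, 4, 81))      # evaluation scales with m, not N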

  22. Protoribosome by quantum kernel energy method.

    PubMed

    Huang, Lulu; Krupkin, Miri; Bashan, Anat; Yonath, Ada; Massa, Lou

    2013-09-10

    Experimental evidence suggests the existence of an RNA molecular prebiotic entity, called by us the "protoribosome," which may have evolved in the RNA world before the evolution of the genetic code and proteins. This vestige of the RNA world, which possesses all of the capabilities required for peptide bond formation, seems to be still functioning at the heart of all contemporary ribosomes. Within the modern ribosome this remnant includes the peptidyl transferase center. Its highly conserved nucleotide sequence is suggestive of its robustness under diverse environmental conditions, and hence of its prebiotic origin. Its twofold pseudosymmetry suggests that this entity could have been a dimer of self-folding RNA units that formed a pocket within which two activated amino acids might be accommodated, similar to the binding mode of modern tRNA molecules that carry amino acids or peptidyl moieties. Using quantum mechanics and crystal coordinates, this work studies the question of whether the putative protoribosome has the properties necessary to function as an evolutionary precursor to the modern ribosome. The quantum model used in the calculations is density functional theory (B3LYP/3-21G*), implemented using the kernel energy method to make the computations practical and efficient. It turns out that the necessary conditions that would characterize a practicable protoribosome, namely (i) energetic structural stability and (ii) energetically stable attachment to substrates, are both well satisfied.

  23. Cold-moderator scattering kernel methods

    SciTech Connect

    MacFarlane, R. E.

    1998-01-01

    An accurate representation of the scattering of neutrons by the materials used to build cold sources at neutron scattering facilities is important for the initial design and optimization of a cold source, and for the analysis of experimental results obtained using the cold source. In practice, this requires a good representation of the physics of scattering from the material, a method to convert this into observable quantities (such as scattering cross sections), and a method to use the results in a neutron transport code (such as the MCNP Monte Carlo code). At Los Alamos, the authors have been developing these capabilities over the last ten years. The final set of cold-moderator evaluations, together with evaluations for conventional moderator materials, was released in 1994. These materials have been processed into MCNP data files using the NJOY Nuclear Data Processing System. Over the course of this work, they were able to develop a new module for NJOY called LEAPR, based on the LEAP + ADDELT code from the UK as modified by D.J. Picton for cold-moderator calculations. Much of the physics for methane came from Picton's work. The liquid hydrogen work was originally based on a code using the Young-Koppel approach that went through a number of hands in Europe (including Rolf Neef and Guy Robert). It was generalized and extended for LEAPR, and depends strongly on work by Keinert and Sax of the University of Stuttgart. Thus, their collection of cold-moderator scattering kernels is truly an international effort, and they are glad to be able to return the enhanced evaluations and processing techniques to the international community. In this paper, they give sections on the major cold-moderator materials (namely, solid methane, liquid methane, and liquid hydrogen), using each section to introduce the relevant physics for that material and to show typical results.

  24. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    DTIC Science & Technology

    2016-01-05

    The research objective of this proposal was to develop a predictive hierarchical Bayesian kernel approach for sparse event modeling: modeling events (and subsequently, their likelihood of occurrence) based on historical evidence of the counts of previous event occurrences. (Final report, Aug 2014 to May 2015; approved for public release, distribution unlimited.)

  25. Intelligent classification methods of grain kernels using computer vision analysis

    NASA Astrophysics Data System (ADS)

    Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo

    2011-06-01

    In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.

  26. A 3D Contact Smoothing Method

    SciTech Connect

    Puso, M A; Laursen, T A

    2002-05-02

    Smoothing of contact surfaces can be used to eliminate the chatter typically seen with node-on-facet contact and give a better representation of the actual contact surface. The latter effect is well demonstrated for problems with interference fits. In this work we present two methods for the smoothing of contact surfaces for 3D finite element contact. In the first method, we employ Gregory patches to smooth the faceted surface in a node-on-facet implementation. In the second method, we employ a Bezier interpolation of the faceted surface in a mortar method implementation of contact. As is well known, node-on-facet approaches can exhibit locking due to the failure of the Babuska-Brezzi condition and in some instances fail the patch test. The mortar method implementation is stable and provides optimal convergence in the energy norm of the error. In this work we demonstrate the superiority of the smoothed versus the non-smoothed node-on-facet implementations. We also show where the node-on-facet method fails and present some results from the smoothed mortar method implementation.

  27. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…

  28. Optimizing spatial filters with kernel methods for BCI applications

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacai; Tang, Jianjun; Yao, Li

    2007-11-01

    A Brain Computer Interface (BCI) is a communication or control system in which the user's messages or commands do not depend on the brain's normal output channels. The key step in BCI technology is to find a reliable method to detect particular brain signals, such as the alpha, beta and mu components in EEG/ECoG trials, and then translate them into usable control signals. In this paper, our objective is to introduce a novel approach that is able to extract discriminative patterns from non-stationary EEG signals, based on common spatial patterns (CSP) analysis combined with kernel methods. The basic idea of our kernel CSP method is to perform a nonlinear form of CSP by the use of kernel methods that can efficiently compute the common and distinct components in high-dimensional feature spaces related to the input space by some nonlinear map. The algorithm described here is tested off-line with dataset I from the BCI Competition 2005. Our experiments show that the spatial filters employed with kernel CSP can effectively extract discriminatory information from single-trial ECoG recorded during imagined movements. The high recognition rates and the computational simplicity of the "kernel trick" make it a promising method for BCI systems.

  29. Enrichment of the finite element method with reproducing kernel particle method

    SciTech Connect

    Chen, Y.; Liu, W.K.; Uras, R.A.

    1995-07-01

    Based on the reproducing kernel particle method, an enrichment procedure is introduced to enhance the effectiveness of the finite element method. The basic concepts of the reproducing kernel particle method are briefly reviewed. By adopting the well-known completeness requirements, a generalized form of the reproducing kernel particle method is developed. Through a combination of these two methods, their unique advantages can be utilized. An alternative approach, the multiple field method, is also introduced.

  30. Multiple predictor smoothing methods for sensitivity analysis.

    SciTech Connect

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
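    A minimal example of the first technique listed, locally weighted regression (LOESS), using statsmodels; the data and smoothing span are illustrative:

      import numpy as np
      from statsmodels.nonparametric.smoothers_lowess import lowess

      rng = np.random.default_rng(0)
      x = rng.uniform(0.0, 1.0, 200)                        # model input
      y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)   # nonlinear response
      smoothed = lowess(y, x, frac=0.3)   # columns: sorted x, locally weighted fit
      # In a stepwise sensitivity analysis, the residual variance removed by such
      # a smooth can rank inputs even when the input-output relation is nonlinear.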

  31. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.

  32. A locally adaptive kernel regression method for facies delineation

    NASA Astrophysics Data System (ADS)

    Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.

    2015-12-01

    Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data, to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial locations and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run in a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method is demonstrated to improve significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough-curve performance.

  33. Method for producing smooth inner surfaces

    DOEpatents

    Cooper, Charles A.

    2016-05-17

    The invention provides a method for preparing superconducting cavities, the method comprising causing polishing media to tumble by centrifugal barrel polishing within the cavities for a time sufficient to attain a surface smoothness of less than 15 nm root-mean-square roughness over approximately a 1 mm^2 scan area. The invention also provides a method for preparing superconducting cavities comprising causing polishing media bound to a carrier to tumble within the cavities, and a method comprising causing polishing media in a slurry to tumble within the cavities.

  34. Rotorcraft Smoothing Via Linear Time Periodic Methods

    DTIC Science & Technology

    2007-07-01

    Only front-matter fragments of this report are indexed: the contents list an optimal control methodology for rotor vibration smoothing, the mathematical foundations of linear time periodic systems, the maximum likelihood estimator, and the Cramer-Rao inequality; the body notes that rotor vibration reduction methods during the 1980s began to adopt mathematical adjustments for vibration reduction.

  35. Adaptive Kernel Based Machine Learning Methods

    DTIC Science & Technology

    2012-10-15

    This report applies a multiscale collocation method with a matrix compression strategy to discretize a system of integral equations, then uses the multilevel augmentation method to solve the resulting discrete system. A priori and a posteriori parameter choice strategies are developed for these methods. The report also studies the performance of proximity algorithms for the L1/TV denoising model, leading to a new characterization of all solutions to the L1/TV model via fixed points.

  36. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, multiple kernel functions may perform equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  37. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, multiple kernel functions may perform equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  38. Hardness methods for testing maize kernels.

    PubMed

    Fox, Glen; Manley, Marena

    2009-07-08

    Maize is a highly important crop in many countries around the world, both through the sale of the crop to domestic processors for the production of maize products and as a staple food on subsistence farms in developing countries. In many countries, there have been long-term research efforts to develop a suitable hardness method that could assist the maize industry in improving processing efficiency, as well as possibly providing a quality specification for maize growers that could attract a premium. This paper focuses specifically on hardness and reviews a number of methodologies used internationally, as well as important biochemical aspects of maize that contribute to maize hardness. Numerous foods are produced from maize, and hardness has been described as having an impact on food quality. However, the basis and measurement of hardness are very general and would apply to any use of maize from any country. From the published literature, it would appear that one of the simpler methods used to measure hardness is a grinding step followed by a sieving step, using multiple sieve sizes. This allows the range in hardness within a sample, as well as average particle size and/or coarse/fine ratio, to be calculated. Any of these parameters could easily be used as reference values for the development of near-infrared (NIR) spectroscopy calibrations. The development of precise NIR calibrations will provide an excellent tool for breeders, handlers, and processors to deliver specific cultivars in the case of growers and bulk loads in the case of handlers, thereby ensuring the most efficient use of maize by domestic and international processors. This paper also considers previous research describing the biochemical aspects of maize that have been related to maize hardness. Both starch and protein affect hardness, with most research focusing on the storage proteins (zeins). Both the content and composition of the zein fractions affect hardness.

  39. Kernel Methods on Spike Train Space for Neuroscience: A Tutorial

    NASA Astrophysics Data System (ADS)

    Park, Il Memming; Seth, Sohan; Paiva, Antonio R. C.; Li, Lin; Principe, Jose C.

    2013-07-01

    Over the last decade several positive definite kernels have been proposed to treat spike trains as objects in Hilbert space. However, for the most part, such attempts still remain a mere curiosity for both computational neuroscientists and signal processing experts. This tutorial illustrates why kernel methods can, and have already started to, change the way spike trains are analyzed and processed. The presentation incorporates simple mathematical analogies and convincing practical examples in an attempt to show the yet unexplored potential of positive definite functions to quantify point processes. It also provides a detailed overview of the current state of the art and future challenges with the hope of engaging the readers in active participation.

  40. Kernel-Smoothing Estimation of Item Characteristic Functions for Continuous Personality Items: An Empirical Comparison with the Linear and the Continuous-Response Models

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2004-01-01

    This study used kernel-smoothing procedures to estimate the item characteristic functions (ICFs) of a set of continuous personality items. The nonparametric ICFs were compared with the ICFs estimated (a) by the linear model and (b) by Samejima's continuous-response model. The study was based on a conditioned approach and used an error-in-variables…

  41. An efficient method for correcting the edge artifact due to smoothing.

    PubMed

    Maisog, J M; Chmielowska, J

    1998-01-01

    Spatial smoothing is a common pre-processing step in the analysis of functional brain imaging data. It can increase sensitivity to signals of specific shapes and sizes (Rosenfeld and Kak [1982]: Digital Picture Processing, vol. 2. Orlando, Fla.: Academic; Worsley et al. [1996]: Hum Brain Mapping 4:74-90). Also, some amount of spatial smoothness is required if methods from the theory of Gaussian random fields are to be used (Holmes [1994]: Statistical Issues in Functional Brain Mapping. PhD thesis, University of Glasgow). Smoothing is most often implemented as a convolution of the imaging data with a smoothing kernel, and convolution is most efficiently performed using the Convolution Theorem and the Fast Fourier Transform (Cooley and Tukey [1965]: Math Comput 19:297-301; Priestley [1981]: Spectral Analysis and Time Series. San Diego: Academic; Press et al. [1992]: Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. Cambridge: Cambridge University Press). An undesirable side effect of smoothing is an artifact along the edges of the brain, where brain voxels become smoothed with non-brain voxels. This results in a dark rim which might be mistaken for hypoactivity. In this short methodological paper, we present a method for correcting functional brain images for the edge artifact due to smoothing, while retaining the use of the Convolution Theorem and the Fast Fourier Transform for efficient calculation of convolutions.
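    Such a correction can be sketched with two convolutions: smooth the masked data and the mask itself, then divide, so voxels near the brain edge are renormalized rather than averaged with zeros. A generic Python sketch of this renormalization idea under stated assumptions (float image, binary brain mask, Gaussian kernel), not necessarily the authors' exact implementation:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def smooth_edge_corrected(img, mask, sigma):
          # Numerator: data with non-brain voxels zeroed, then smoothed.
          num = gaussian_filter(img * mask, sigma)
          # Denominator: smoothed mask = fraction of each kernel's weight
          # that falls inside the brain.
          den = gaussian_filter(mask.astype(float), sigma)
          out = np.zeros_like(num)
          inside = mask > 0
          out[inside] = num[inside] / den[inside]   # renormalize: no dark rim
          return out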

  42. NIRS method for precise identification of Fusarium damaged wheat kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Development of scab resistant wheat varieties may be enhanced by non-destructive evaluation of kernels for Fusarium damaged kernels (FDKs) and deoxynivalenol (DON) levels. Fusarium infection generally affects kernel appearance, but insect damage and other fungi can cause similar symptoms. Also, some...

  43. Kernel weights optimization for error diffusion halftoning method

    NASA Astrophysics Data System (ADS)

    Fedoseev, Victor

    2015-02-01

    This paper describes a study to find the best error diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of values. WSNR (weighted signal-to-noise ratio) was used as the objective measure of quality. The multidimensional optimization problem was solved numerically using several well-known algorithms: Nelder-Mead, BFGS, and others. The study found a kernel function that provides a quality gain of about 5% over the best commonly used kernel, introduced by Floyd and Steinberg. Other kernels obtained allow the computational complexity of the halftoning process to be significantly reduced without loss of quality.
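    For reference, the baseline kernel mentioned above, Floyd-Steinberg error diffusion with weights 7/16, 3/16, 5/16 and 1/16, in plain Python (grayscale values in [0, 1]); this is a straightforward reference implementation, not one of the paper's optimized kernels.

      import numpy as np

      def floyd_steinberg(img):
          # Threshold each pixel to 0 or 1 and diffuse the quantization error
          # to unprocessed neighbors with the classic 7,5,3,1 (/16) weights.
          out = img.astype(float).copy()
          H, W = out.shape
          for y in range(H):
              for x in range(W):
                  old = out[y, x]
                  new = 1.0 if old >= 0.5 else 0.0
                  out[y, x] = new
                  err = old - new
                  if x + 1 < W:
                      out[y, x + 1] += err * 7 / 16
                  if y + 1 < H:
                      if x > 0:
                          out[y + 1, x - 1] += err * 3 / 16
                      out[y + 1, x] += err * 5 / 16
                      if x + 1 < W:
                          out[y + 1, x + 1] += err * 1 / 16
          return out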

  44. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods

    PubMed Central

    Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675

  45. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  46. Earthquake forecasting through a smoothing kernel and the rate-and-state friction law: Application to the Taiwan region

    NASA Astrophysics Data System (ADS)

    Chan, C.; Wu, Y.

    2011-12-01

    We applied two models for forecasting the spatio-temporal distribution of seismicity density rate, based on a smoothing kernel function (SKF) and the rate-and-state friction law (RFL), to the Taiwan region to test their feasibility. An earthquake catalog from 1973 to 2007 was used to build a time-independent forecasting model through the SKF. Coulomb stress changes imparted by the M≥4.5 earthquakes from 2008 to 2009 were calculated in order to propose a time-dependent model via the RFL. The distribution of M≥3.0 earthquakes from 2008 to 2009 was then forecast to examine our results. For the SKF and RFL models, the percentages of forecast earthquakes located within the 50% of the study area with the highest calculated seismicity rate are 82% and 72%, respectively. Both models thus show good forecasting ability. We further propose another model combining the two approaches, for which the rate reaches 84%. This will be useful for seismicity forecasting in the near future.
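    The time-independent SKF ingredient amounts to Gaussian kernel smoothing of past epicenters onto a grid. Below is a two-dimensional Python sketch with an illustrative bandwidth; the coordinates and normalization are assumptions, not the authors' calibrated model.

      import numpy as np

      def seismicity_density(epicenters, grid, bandwidth):
          # epicenters: (n, 2) past event locations (e.g. km easting/northing);
          # grid: (m, 2) evaluation points. Isotropic 2-D Gaussian smoothing.
          d2 = ((grid[:, None, :] - epicenters[None, :, :]) ** 2).sum(axis=-1)
          k = np.exp(-0.5 * d2 / bandwidth ** 2) / (2 * np.pi * bandwidth ** 2)
          return k.sum(axis=1)   # events per unit area; divide by catalog
                                 # duration to get a density rate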

  47. Estimating the Bias of Local Polynomial Approximation Methods Using the Peano Kernel

    SciTech Connect

    Blair, J.; Machorro, E.; Luttman, A.

    2013-03-01

    The determination of uncertainty of an estimate requires both the variance and the bias of the estimate. Calculating the variance of local polynomial approximation (LPA) estimates is straightforward. We present a method, using the Peano Kernel Theorem, to estimate the bias of LPA estimates and show how this can be used to optimize the LPA parameters in terms of the bias-variance tradeoff. Figures of merit are derived and values calculated for several common methods. The results in the literature are expanded by giving bias error bounds that are valid for all lengths of the smoothing interval, generalizing the currently available asymptotic results that are only valid in the limit as the length of this interval goes to zero.

  8. Sensitivity kernels for viscoelastic loading based on adjoint methods

    NASA Astrophysics Data System (ADS)

    Al-Attar, David; Tromp, Jeroen

    2014-01-01

    Observations of glacial isostatic adjustment (GIA) allow for inferences to be made about mantle viscosity, ice sheet history and other related parameters. Typically, this inverse problem can be formulated as minimizing the misfit between the given observations and a corresponding set of synthetic data. When the number of parameters is large, solution of such optimization problems can be computationally challenging. A practical, albeit non-ideal, solution is to use gradient-based optimization. Although the gradient of the misfit required in such methods could be calculated approximately using finite differences, the necessary computation time grows linearly with the number of model parameters, and so this is often infeasible. A far better approach is to apply the `adjoint method', which allows the exact gradient to be calculated from a single solution of the forward problem, along with one solution of the associated adjoint problem. As a first step towards applying the adjoint method to the GIA inverse problem, we consider its application to a simpler viscoelastic loading problem in which gravitationally self-consistent ocean loading is neglected. The earth model considered is non-rotating, self-gravitating, compressible, hydrostatically pre-stressed, laterally heterogeneous and possesses a Maxwell solid rheology. We determine adjoint equations and Fréchet kernels for this problem based on a Lagrange multiplier method. Given an objective functional J defined in terms of the surface deformation fields, we show that its first-order perturbation can be written δJ = ∫_{M_S} K_η δ(ln η) dV + ∫_{t_0}^{t_1} ∫_{∂M} K_σ̇ δσ̇ dS dt, where δ(ln η) = δη/η denotes relative viscosity variations in solid regions M_S, dV is the volume element, δσ̇ is the perturbation to the time derivative of the surface load, which is defined on the earth model's surface ∂M for times [t_0, t_1], and dS is the surface element on ∂M. The `viscosity

  9. An improved method for rebinning kernels from cylindrical to Cartesian coordinates.

    PubMed

    Rathee, S; McClean, B A; Field, C

    1993-01-01

    This paper describes the errors in rebinning photon dose point spread functions and pencil beam kernels (PBKs) from cylindrical to Cartesian coordinates. An area overlap method, which assumes that the fractional energy deposited per unit volume remains constant within cylindrical voxels, produces large deviations (up to 20%) in rebinned Cartesian voxels while conserving the total energy. A modified area overlap method is presented that allows the fractional energy deposited per unit volume within cylindrical voxels to vary according to an interpolating function. This method rebins the kernels accurately in each Cartesian voxel while conserving the total energy. The dose distributions were computed for a partially blocked beam of uniform fluence using the Cartesian coordinate kernel and the kernels rebinned by both methods. The kernel rebinned by the modified area overlap method produced errors of less than 1.7%, while the kernel rebinned by the area overlap method gave errors of up to 4.4%.
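
    The paper computes voxel overlaps analytically; purely to make the idea of an overlap weight concrete, the sketch below estimates by Monte Carlo sampling the fraction of a Cartesian pixel covered by a cylindrical shell (the 2-D cross-section of the rebinning geometry). All names here are illustrative.

        import numpy as np

        def overlap_fraction(x0, x1, y0, y1, r_in, r_out, n=100000, seed=0):
            # Fraction of the pixel [x0,x1] x [y0,y1] inside the shell
            # r_in <= r < r_out, with the cylinder axis at the origin.
            rng = np.random.default_rng(seed)
            xs = rng.uniform(x0, x1, n)
            ys = rng.uniform(y0, y1, n)
            r = np.hypot(xs, ys)
            return np.mean((r >= r_in) & (r < r_out))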

  10. Kernel energy method applied to vesicular stomatitis virus nucleoprotein

    PubMed Central

    Huang, Lulu; Massa, Lou; Karle, Jerome

    2009-01-01

    The kernel energy method (KEM) is applied to the vesicular stomatitis virus (VSV) nucleoprotein (PDB ID code 2QVJ). The calculations employ atomic coordinates from the crystal structure at 2.8-Å resolution, except for the hydrogen atoms, whose positions were modeled by using the computer program HYPERCHEM. The KEM ab initio limited-basis Hartree-Fock energy is obtained for the full 33,175-atom molecule (including hydrogen atoms). In the KEM, a full biological molecule is represented by smaller “kernels” of atoms, greatly simplifying the calculations. Collections of kernels are well suited for parallel computation. VSV consists of five similar chains, and we obtain the energy of each chain. Interchain hydrogen bonds contribute to the interaction energy between the chains. These hydrogen bond energies are calculated in Hartree-Fock (HF) and Møller-Plesset perturbation theory to second order (MP2) approximations by using 6–31G** basis orbitals. The correlation energy, included in MP2, is a significant factor in the interchain hydrogen bond energies. PMID:19188588

  11. Integral Transform Methods: A Critical Review of Various Kernels

    NASA Astrophysics Data System (ADS)

    Orlandini, Giuseppina; Turro, Francesco

    2017-03-01

    Some general remarks about integral transform approaches to response functions are made. Their advantage for calculating cross sections at energies in the continuum is stressed. In particular we discuss the class of kernels that allow calculations of the transform by matrix diagonalization. A particular set of such kernels, namely the wavelets, is tested in a model study.

  12. Simulating non-Newtonian flows with the moving particle semi-implicit method with an SPH kernel

    NASA Astrophysics Data System (ADS)

    Xiang, Hao; Chen, Bin

    2015-02-01

    The moving particle semi-implicit (MPS) method and smoothed particle hydrodynamics (SPH) are commonly used mesh-free particle methods for free surface flows. The MPS method has advantages in incompressible flow simulation and simple programming. However, when the MPS method is extended to non-Newtonian flows, its crude kernel function is not accurate enough to discretize the divergence of the shear stress tensor, owing to particle inconsistency. This paper presents an improved MPS method with an SPH kernel to simulate non-Newtonian flows. To improve the consistency of the partial derivative, the SPH cubic spline kernel and the Taylor series expansion are combined with the MPS method. This approach is suitable for all non-Newtonian fluids that can be described with τ = μ(|γ|)Δ (where τ is the shear stress tensor, μ is the viscosity, |γ| is the shear rate, and Δ is the strain tensor), e.g., the Casson and Cross fluids. Two examples are simulated: Newtonian Poiseuille flow and the container filling process of a Cross fluid. The results for Poiseuille flow are more accurate than those of the traditional MPS method, and the different filling processes obtained agree well with previous results, which verifies the validity of the new algorithm. For the Cross fluid, the jet fracture length can be correlated with We^0.28 Fr^0.78 (We is the Weber number, Fr is the Froude number).
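
    For reference, the SPH cubic spline kernel mentioned above has a standard closed form (the M4 B-spline with support radius 2h); a direct NumPy transcription follows. This is the textbook kernel, not the paper's full discretization.

        import numpy as np

        def cubic_spline_kernel(r, h, ndim=2):
            # Standard SPH cubic B-spline kernel W(r, h); support radius is 2h.
            sigma = {1: 2.0 / 3.0, 2: 10.0 / (7.0 * np.pi), 3: 1.0 / np.pi}[ndim]
            q = np.asarray(r, dtype=float) / h
            w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                         np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
            return sigma * w / h**ndim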

  13. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    PubMed Central

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape describes the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established according to this paradigm. In the feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluations with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly. PMID:25879050

  14. Kernel method based human model for enhancing interactive evolutionary optimization.

    PubMed

    Pei, Yan; Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape describes the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established according to this paradigm. In the feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluations with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly.

  15. Probabilistic seismic hazard assessment of Italy using kernel estimation methods

    NASA Astrophysics Data System (ADS)

    Zuccolo, Elisa; Corigliano, Mirko; Lai, Carlo G.

    2013-07-01

    A representation of seismic hazard is proposed for Italy based on the zone-free approach developed by Woo (BSSA 86(2):353-362, 1996a), which is based on a kernel estimation method governed by concepts of fractal geometry and self-organized seismicity and does not require the definition of seismogenic zoning. The purpose is to assess the influence of seismogenic zoning on the results obtained for the probabilistic seismic hazard analysis (PSHA) of Italy using the standard Cornell method. The hazard has been estimated for outcropping rock site conditions in terms of maps and uniform hazard spectra for a selected site, with 10% probability of exceedance in 50 years. Both spectral acceleration and spectral displacement have been considered as ground motion parameters. Differences in the results of PSHA between the two methods are compared and discussed. The analysis shows that, in areas such as Italy, characterized by a reliable earthquake catalog and in which faults are generally not easily identifiable, a zone-free approach can be considered a valuable tool to address epistemic uncertainty within a logic tree framework.
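
    For orientation, zone-free kernel formulations in the style of Woo (1996) are often written as an inverse power-law kernel with a magnitude-dependent bandwidth H(M) = c exp(dM); the sketch below uses that commonly quoted form with placeholder parameter values, not the ones calibrated for Italy in this study.

        import numpy as np

        def woo_kernel(r, magnitude, n=1.5, c=1.0, d=0.5):
            # Inverse power-law smoothing kernel; bandwidth grows with magnitude.
            H = c * np.exp(d * magnitude)
            return (n - 1.0) / (np.pi * H**2) * (1.0 + (np.asarray(r) / H) ** 2) ** (-n)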

  16. A method for anisotropic spatial smoothing of functional magnetic resonance images using distance transformation of a structural image

    NASA Astrophysics Data System (ADS)

    Nam, Haewon; Lee, Dongha; Doo Lee, Jong; Park, Hae-Jeong

    2011-08-01

    Spatial smoothing using isotropic Gaussian kernels to remove noise reduces the spatial resolution and increases the partial volume effect of functional magnetic resonance images (fMRI), thereby reducing localization power. To minimize these limitations, we propose a novel anisotropic smoothing method for fMRI data. To extract an anisotropic tensor for each voxel of the functional data, we derived an intensity gradient using the distance transformation of the segmented gray matter of the fMRI-coregistered T1-weighted image. The intensity gradient was then used to determine the anisotropic smoothing kernel at each voxel of the fMRI data. Performance evaluations on both real and simulated data showed that the proposed method had 10% higher statistical power and about 20% higher gray matter localization than isotropic smoothing, and was robust to registration errors (up to 4 mm translations and 4° rotations) between the T1 structural images and the fMRI data. The proposed method also outperformed anisotropic smoothing with diffusion gradients derived from the fMRI intensity data.
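
    A minimal sketch of the tensor construction implied above, assuming the anisotropy is derived from the gradient of the gray matter distance map: smoothing is kept wide tangent to the gray matter sheet and narrow along its normal. The sigma values and the exact tensor form are illustrative assumptions, not the authors' published formulation.

        import numpy as np
        from scipy import ndimage

        def smoothing_tensors(gm_mask, sigma_tangent=2.0, sigma_normal=0.5):
            # Distance to gray matter, its gradient (the local sheet normal),
            # and a per-voxel anisotropic Gaussian covariance built from both.
            dist = ndimage.distance_transform_edt(~gm_mask.astype(bool))
            g = np.stack(np.gradient(dist.astype(float)), axis=-1)
            n = g / np.maximum(np.linalg.norm(g, axis=-1, keepdims=True), 1e-12)
            nnT = n[..., :, None] * n[..., None, :]
            eye = np.eye(gm_mask.ndim)
            return sigma_tangent**2 * (eye - nnT) + sigma_normal**2 * nnT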

  17. A Comparison of the Kernel Equating Method with Traditional Equating Methods Using SAT[R] Data

    ERIC Educational Resources Information Center

    Liu, Jinghua; Low, Albert C.

    2008-01-01

    This study applied kernel equating (KE) in two scenarios: equating to a very similar population and equating to a very different population, referred to as a distant population, using SAT[R] data. The KE results were compared to the results obtained from analogous traditional equating methods in both scenarios. The results indicate that KE results…

  18. Autonomic function assessment in Parkinson's disease patients using the kernel method and entrainment techniques.

    PubMed

    Kamal, Ahmed K

    2007-01-01

    The experimental procedure of lowering and raising a leg while the subject is in the supine position is used to stimulate and entrain the autonomic nervous system of fifteen untreated patients with Parkinson's disease and fifteen age- and sex-matched control subjects. The assessment of autonomic function for each group is achieved using an algorithm based on Volterra kernel estimation. By applying this algorithm, with the process of lowering and raising a leg as the stimulus input and the heart rate variability (HRV) signal as the output for system identification, a mathematical model is expressed as integral equations. The integral equations are fixed for control subjects and Parkinson's disease patients, so that the identification method reduces to the determination of the values within the integrals, called kernels, resulting in integral equations whose input-output behavior is nearly identical to that of the system in both healthy subjects and Parkinson's disease patients. The model for each group contains a linear part (first-order kernel) and a quadratic part (second-order kernel). A difference equation model was employed to represent the system for both control subjects and patients with Parkinson's disease. The results show significant differences in the first-order kernel (impulse response) and the second-order kernel (mesh diagram) between the groups. Using the first-order and second-order kernels, it is possible to assess autonomic function qualitatively and quantitatively in both groups.
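
    The record does not spell out its estimation algorithm, so the sketch below shows only the generic idea of fitting discrete first- and second-order Volterra kernels by least squares from an input-output pair; the memory length and regression setup are assumptions.

        import numpy as np

        def volterra_lsq(x, y, memory=10):
            # x, y: 1-D NumPy arrays of equal length. Build a regression matrix
            # with linear lags h1(i) and quadratic lag products h2(i, j), then
            # solve for the kernel values by least squares.
            cols, index = [], []
            for i in range(memory):
                cols.append(np.roll(x, i))
                index.append((i,))
            for i in range(memory):
                for j in range(i, memory):
                    cols.append(np.roll(x, i) * np.roll(x, j))
                    index.append((i, j))
            X = np.stack(cols, axis=1)[memory:]   # drop wrapped-around samples
            theta, *_ = np.linalg.lstsq(X, y[memory:], rcond=None)
            return theta, index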

  19. Aflatoxin detection in whole corn kernels using hyperspectral methods

    NASA Astrophysics Data System (ADS)

    Casasent, David; Chen, Xue-Wen

    2004-03-01

    Hyperspectral (HS) data for the inspection of whole corn kernels for aflatoxin is considered. The high-dimensionality of HS data requires feature extraction or selection for good classifier generalization. For fast and inexpensive data collection, only several features (λ responses) can be used. These are obtained by feature selection from the full HS response. A new high dimensionality branch and bound (HDBB) feature selection algorithm is used; it is found to be optimum, fast and very efficient. Initial results indicate that HS data is very promising for aflatoxin detection in whole kernel corn.

  20. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    DTIC Science & Technology

    2006-01-01

    successfully used by the machine learning community for pattern recognition and image denoising [14]. A Gaussian kernel was used by Cremers et al. [8] for...matrix M, where φ_i ∈ R^{Nd}. Using Singular Value Decomposition (SVD), the covariance matrix (1/n) M M^T is decomposed as U Σ U^T = (1/n) M M^T, where U is a
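
    The decomposition quoted in this snippet is the usual PCA-via-SVD identity; a minimal sketch (with illustrative names) recovers U and the eigenvalues of (1/n) M M^T without forming the covariance matrix explicitly:

        import numpy as np

        def shape_pca(M):
            # SVD M = U S V^T implies (1/n) M M^T = U (S^2 / n) U^T.
            n = M.shape[1]
            U, s, _ = np.linalg.svd(M, full_matrices=False)
            return U, s**2 / n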

  1. LoCoH: nonparameteric kernel methods for constructing home ranges and utilization distributions.

    PubMed

    Getz, Wayne M; Fortmann-Roe, Scott; Cross, Paul C; Lyons, Andrew J; Ryan, Sadie J; Wilmers, Christopher C

    2007-02-14

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: "fixed sphere-of-influence," or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an "adaptive sphere-of-influence," or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original "fixed-number-of-points," or k-LoCoH (all kernels constructed from k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).
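
    A minimal sketch of the k-LoCoH construction described above, assuming SciPy's spatial utilities: it builds one local hull per root point and orders the hulls by area, but omits the union/isopleth accumulation used to form the final utilization distribution.

        import numpy as np
        from scipy.spatial import ConvexHull, cKDTree

        def k_locoh_hulls(points, k=10):
            # Each hull uses a root point plus its k-1 nearest neighbours
            # (the query returns the root itself as the nearest hit).
            tree = cKDTree(points)
            _, idx = tree.query(points, k=k)
            hulls = [ConvexHull(points[i]) for i in idx]
            hulls.sort(key=lambda h: h.volume)   # in 2-D, .volume is the area
            return hulls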

  2. LoCoH: Nonparameteric Kernel Methods for Constructing Home Ranges and Utilization Distributions

    PubMed Central

    Getz, Wayne M.; Fortmann-Roe, Scott; Wilmers, Christopher C.

    2007-01-01

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: “fixed sphere-of-influence,” or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an “adaptive sphere-of-influence,” or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original “fixed-number-of-points,” or k-LoCoH (all kernels constructed from k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu). PMID:17299587

  3. LoCoH: Non-parameteric kernel methods for constructing home ranges and utilization distributions

    USGS Publications Warehouse

    Getz, Wayne M.; Fortmann-Roe, Scott; Cross, Paul C.; Lyons, Andrew J.; Ryan, Sadie J.; Wilmers, Christopher C.

    2007-01-01

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: “fixed sphere-of-influence,” or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an “adaptive sphere-of-influence,” or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original “fixed-number-of-points,” or k-LoCoH (all kernels constructed from k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).

  4. Kernel density estimator methods for Monte Carlo radiation transport

    NASA Astrophysics Data System (ADS)

    Banerjee, Kaushik

    In this dissertation, the Kernel Density Estimator (KDE), a nonparametric probability density estimator, is studied and used to represent global Monte Carlo (MC) tallies. KDE is also employed to remove the singularities from two important Monte Carlo tallies, namely point detector and surface crossing flux tallies. Finally, KDE is applied to accelerate the Monte Carlo fission source iteration for criticality problems. In the conventional MC calculation, histograms are used to represent global tallies, dividing the phase space into multiple bins. Partitioning the phase space into bins can add significant overhead to the MC simulation, and the histogram provides only a first-order approximation to the underlying distribution. The KDE method is attractive because it can estimate MC tallies at any location within the required domain without any particular bin structure. Post-processing of the KDE tallies is sufficient to extract detailed, higher-order tally information for an arbitrary grid. The quantitative and numerical convergence properties of KDE tallies are also investigated, and they are shown to be superior to conventional histograms as well as to the functional expansion tally developed by Griesheimer. Monte Carlo point detector and surface crossing flux tallies are two widely used tallies, but they suffer from an unbounded variance. As a result, the central limit theorem cannot be used for these tallies to estimate confidence intervals. By construction, KDE tallies can be directly used to estimate flux at a point, but the variance of this point estimate does not converge as 1/N, which is not unexpected for a point quantity. However, an improved approach is to modify both point detector and surface crossing flux tallies directly by using KDE within a variance reduction approach, taking advantage of the fact that KDE estimates the underlying probability density function. This methodology is demonstrated by several numerical examples and demonstrates that
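
    As a toy illustration of a mesh-free KDE tally (not the dissertation's estimator), the sketch below evaluates a weighted Gaussian kernel density at arbitrary points from Monte Carlo event locations, with no bin structure; the kernel choice and bandwidth are assumptions.

        import numpy as np

        def kde_tally(events, weights, x, h):
            # Sum of weighted Gaussian kernels per unit length at the points x.
            u = (np.asarray(x)[:, None] - np.asarray(events)[None, :]) / h
            k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
            return (k * np.asarray(weights)[None, :]).sum(axis=1) / h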

  5. Development of a single kernel analysis method for detection of 2-acetyl-1-pyrroline in aromatic rice germplasm

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Solid-phase microextraction (SPME) in conjunction with GC/MS was used to distinguish non-aromatic rice (Oryza sativa, L.) kernels from aromatic rice kernels. In this method, single kernels along with 10 µl of 0.1 ng 2,4,6-Trimethylpyridine (TMP) were placed in sealed vials and heated to 80°C for 18...

  6. A Non-smooth Newton Method for Multibody Dynamics

    SciTech Connect

    Erleben, K.; Ortiz, R.

    2008-09-01

    In this paper we deal with the simulation of rigid bodies. Rigid body dynamics have become very important for simulating rigid body motion in interactive applications, such as computer games or virtual reality. We present a novel way of computing contact forces using a Newton method. The contact problem is reformulated as a system of non-linear and non-smooth equations, and we solve this system using a non-smooth version of Newton's method. One of the main contributions of this paper is the reformulation of the complementarity problems, used to model impacts, as a system of equations that can be solved using traditional methods.

  7. Postprocessing Fourier spectral methods: The case of smooth solutions

    SciTech Connect

    Garcia-Archilla, B.; Novo, J.; Titi, E.S.

    1998-11-01

    A postprocessing technique to improve the accuracy of Galerkin methods, when applied to dissipative partial differential equations, is examined in the particular case of smooth solutions. Pseudospectral methods are shown to perform poorly. This performance is analyzed and a refined postprocessing technique is proposed.

  8. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    PubMed

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1].
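
    To make the positive-semidefiniteness requirement concrete, here is a minimal sketch of a linear genomic kernel on a subjects-by-variants genotype matrix, with an explicit eigenvalue check; centering and scaling conventions vary across the literature, so this particular choice is an assumption.

        import numpy as np

        def linear_genomic_kernel(G):
            # Larger entries of K mean more similar subjects.
            Gc = G - G.mean(axis=0)
            K = Gc @ Gc.T / G.shape[1]
            assert np.linalg.eigvalsh(K).min() > -1e-8   # PSD up to round-off
            return K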

  9. A Fast Multiple-Kernel Method with Applications to Detect Gene-Environment Interaction

    PubMed Central

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M.; Worrall, Bradford B.; Williams, Stephen R.; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-01-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. While most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multi-kernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multi-factor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the EM algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous “fastKM” algorithm for multi-kernel analysis that is based on a low-rank approximation to the nuisance-effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. PMID:26139508
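
    The fastKM algorithm itself is not reproduced here, but the kind of low-rank kernel surrogate it relies on can be sketched as a truncated eigendecomposition; the rank is a user-chosen assumption.

        import numpy as np

        def low_rank_kernel(K, rank):
            # K is approximated by U_r diag(l_r) U_r^T using the top-rank eigenpairs.
            l, U = np.linalg.eigh(K)
            top = np.argsort(l)[::-1][:rank]
            return U[:, top], l[top]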

  10. A spectral-spatial kernel-based method for hyperspectral imagery classification

    NASA Astrophysics Data System (ADS)

    Li, Li; Ge, Hongwei; Gao, Jianqiang

    2017-02-01

    Spectral-based classification methods have gained increasing attention in hyperspectral imagery classification. Nevertheless, spectral information alone cannot fully represent the inherent spatial distribution of the imagery. In this paper, a spectral-spatial kernel-based method for hyperspectral imagery classification is proposed. Firstly, the spatial feature was extracted by using area median filtering (AMF). Secondly, the result of the AMF was used to construct spatial feature patches according to different window sizes. Finally, using the kernel technique, the spectral feature and the spatial feature were jointly used for the classification through a support vector machine (SVM) formulation. For hyperspectral imagery classification, the proposed method is therefore called the spectral-spatial kernel-based support vector machine (SSF-SVM). To evaluate the proposed method, experiments are performed on three hyperspectral images. The experimental results show that an improvement is possible with the proposed technique in most of the real-world classification problems.
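
    A hedged sketch of the general composite-kernel idea (not the paper's exact SSF-SVM pipeline): spectral and spatial RBF Gram matrices are blended with a weight mu and fed to an SVM with a precomputed kernel. Prediction on new samples would additionally require the train-test cross kernel.

        import numpy as np
        from sklearn.svm import SVC

        def rbf_kernel(A, B, gamma=1.0):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
            return np.exp(-gamma * d2)

        def composite_svm(X_spec, X_spat, y, mu=0.5, gamma=1.0):
            # A weighted sum of two valid kernels is itself a valid kernel.
            K = (mu * rbf_kernel(X_spec, X_spec, gamma)
                 + (1.0 - mu) * rbf_kernel(X_spat, X_spat, gamma))
            return SVC(kernel="precomputed").fit(K, y)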

  11. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…

  12. Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods

    NASA Astrophysics Data System (ADS)

    Liu, Qinya; Tromp, Jeroen

    2008-07-01

    We determine adjoint equations and Fréchet kernels for global seismic wave propagation based upon a Lagrange multiplier method. We start from the equations of motion for a rotating, self-gravitating earth model initially in hydrostatic equilibrium, and derive the corresponding adjoint equations that involve motions on an earth model that rotates in the opposite direction. Variations in the misfit function χ then may be expressed as δχ = ∫_V K_m δ(ln m) dV + ∫_Σ K_d δ(ln d) dΣ + ∫_{Σ_FS} K_∇d · ∇_Σ δ(ln d) dΣ, where δ(ln m) = δm/m denotes relative model perturbations in the volume V, δ(ln d) denotes relative topographic variations on solid-solid or fluid-solid boundaries Σ, and ∇_Σ δ(ln d) denotes surface gradients in relative topographic variations on fluid-solid boundaries Σ_FS. The 3-D Fréchet kernel K_m determines the sensitivity to model perturbations δ(ln m), and the 2-D kernels K_d and K_∇d determine the sensitivity to topographic variations δ(ln d). We demonstrate also how anelasticity may be incorporated within the framework of adjoint methods. Finite-frequency sensitivity kernels are calculated by simultaneously computing the adjoint wavefield forward in time and reconstructing the regular wavefield backward in time. Both the forward and adjoint simulations are based upon a spectral-element method. We apply the adjoint technique to generate finite-frequency traveltime kernels for global seismic phases (P, Pdiff, PKP, S, SKS, depth phases, surface-reflected phases, surface waves, etc.) in both 1-D and 3-D earth models. For 1-D models these adjoint-generated kernels generally agree well with results obtained from ray-based methods. However, adjoint methods do not have the same theoretical limitations as ray-based methods, and can produce sensitivity kernels for any given phase in any 3-D earth model. The Fréchet kernels presented in this paper illustrate the sensitivity of seismic observations to structural parameters and topography on internal discontinuities. These kernels form the basis of future 3-D tomographic inversions.

  13. 3D MR image denoising using rough set and kernel PCA method.

    PubMed

    Phophalia, Ashish; Mitra, Suman K

    2017-02-01

    In this paper, we present a two-stage method, using kernel principal component analysis (KPCA) and rough set theory (RST), for denoising volumetric MRI data. An RST-based clustering technique is used for voxel-based processing. The method groups similar voxels (3D cubes) using class and edge information derived from the noisy input. Each cluster thus formed is represented by a basis vector. These vectors are then projected into kernel space, and PCA is performed in the feature space. This work is motivated by the idea that, under Rician noise, MRI data may be non-linear, and that kernel mapping will help to define a linear separator between these clusters/basis vectors, which is then used for image denoising. We further investigate various kernels for Rician noise at different noise levels. The best kernel is then selected on the basis of performance in terms of PSNR and structural similarity (SSIM) measures. The work is compared with state-of-the-art methods under various measures for synthetic and real databases.

  14. Early discriminant method of infected kernel based on the erosion effects of laser ultrasonics

    NASA Astrophysics Data System (ADS)

    Fan, Chao

    2015-07-01

    To identify infected wheat kernels as early as possible, a new detection method for hidden insects, especially in their egg and larval stages, is put forward in this paper based on the erosion effect of laser ultrasonics. The grain surface is exposed to a pulsed laser; the absorbed energy excites ultrasonic waves, and infected kernels can be recognized by appropriate signal analysis. Firstly, the detection principle is given based on the classical wave equation and the experimental platform is established. Then, the detected ultrasonic signal is processed in both the time domain and the frequency domain using the FFT and DCT, and six significant features are selected as the characteristic parameters of the signal by stepwise discriminant analysis. Finally, a BP neural network is designed using these six parameters as input to distinguish infected kernels from normal ones. Numerous experiments were performed on twenty wheat varieties; the results show that infected kernels can be recognized effectively, with false negative and false positive error rates of 12% and 9%, respectively, demonstrating that the discriminant method based on the erosion effect of laser ultrasonics is feasible.

  15. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    PubMed

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis, wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both choices depend on the unknown true state of nature. Therefore, we develop the multi-kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects, as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses little power compared with the best kernel for a particular scenario but has much greater power than poor choices.

  16. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    USGS Publications Warehouse

    Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.

    2014-01-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U.S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km, as well as adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. Adaptive smoothing reduces smoothing distances, locally increasing seismicity rates, where seismicity rates are high, and increases smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood
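
    The adaptive bandwidth rule described above (distance to the nth nearest neighbor) is simple to state in code; the sketch below follows that rule with an assumed floor distance, without reproducing the full rate-model construction.

        import numpy as np
        from scipy.spatial import cKDTree

        def adaptive_bandwidths(epicenters, n=5, d_min=0.5):
            # k = n+1 because the nearest hit of each query is the point itself.
            tree = cKDTree(epicenters)
            d, _ = tree.query(epicenters, k=n + 1)
            return np.maximum(d[:, -1], d_min)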

  17. Kernel methods for HyMap imagery knowledge discovery

    NASA Astrophysics Data System (ADS)

    Camps-Valls, Gustavo; Gomez-Chova, Luis; Calpe-Maravilla, Javier; Soria-Olivas, Emilio; Martin-Guerrero, Jose D.; Moreno, Jose

    2004-02-01

    In this paper, we propose a kernel-based approach for hyperspectral knowledge discovery, which is defined as a process that involves three steps: pre-processing, modeling and analysis of the classifier. Firstly, we select the most representative bands by analyzing the surrogate and main splits of a Classification And Regression Trees (CART) approach. This yields three datasets with different reduced input dimensionality (6, 3 and 2 bands, respectively) along with the original one (128 bands). Secondly, we develop several crop cover classifiers for each of them. We use Support Vector Machines (SVM) and analyze their performance in terms of efficiency and robustness, as compared to multilayer perceptron (MLP) and radial basis function (RBF) neural networks. Suitability to real-time working conditions, whenever a preprocessing stage is not possible, is evaluated by considering models with and without the CART-based feature selection stage. Finally, we analyze the support vector distribution in the input space and through Principal Component Analysis (PCA) in order to gain knowledge about the problem. Several conclusions are drawn: (1) SVMs yield better outcomes than neural networks; (2) training neural models is infeasible when working with high-dimensional spaces; (3) SVMs perform similarly in the four classification scenarios, which indicates that noisy bands are successfully detected; and (4) relevant bands for the classification are identified.

  18. Smoothed Profile Method to Simulate Colloidal Particles in Complex Fluids

    NASA Astrophysics Data System (ADS)

    Yamamoto, Ryoichi; Nakayama, Yasuya; Kim, Kang

    A new direct numerical simulation scheme, called the "Smoothed Profile (SP) method," is presented. The SP method, as a direct numerical simulation of particulate flow, provides a way to couple continuum fluid dynamics with rigid-body dynamics through the smoothed profiles of colloidal particles. Our formulation includes extensions to colloids in multicomponent solvents, such as charged colloids in electrolyte solutions. This method enables us to compute the time evolution of colloidal particles, ions, and host fluids simultaneously by solving the Newton, advection-diffusion, and Navier-Stokes equations, so that the electro-hydrodynamic couplings can be fully taken into account. The electrophoretic mobilities of charged spherical particles are calculated in several situations. Comparisons with approximation theories show quantitative agreement for dilute dispersions without any empirical parameters.

  19. A Fourier-series-based kernel-independent fast multipole method

    SciTech Connect

    Zhang Bo; Huang Jingfang; Pitsianis, Nikos P.; Sun Xiaobai

    2011-07-01

    We present in this paper a new kernel-independent fast multipole method (FMM), named FKI-FMM, for pairwise particle interactions with translation-invariant kernel functions. FKI-FMM creates, using numerical techniques, sufficiently accurate and compressive representations of a given kernel function over multi-scale interaction regions in the form of a truncated Fourier series. It also provides economical operators for the multipole-to-multipole, multipole-to-local, and local-to-local translations that are typical and essential in FMM algorithms. The multipole-to-local translation operator, in particular, is readily diagonal and does not dominate the arithmetic operations. FKI-FMM provides an alternative and competitive option, among other kernel-independent FMM algorithms, for an efficient application of the FMM, especially for applications where the kernel function consists of multi-physics and multi-scale components such as those arising in recent studies of biological systems. We present the complexity analysis and demonstrate with experimental results the performance of FKI-FMM in accuracy and efficiency.

  20. REGIONALLY SMOOTHED META-ANALYSIS METHODS FOR GWAS DATASETS

    PubMed Central

    Begum, Ferdouse; Sharker, Monir H.; Sherman, Stephanie L.; Tseng, George C.; Feingold, Eleanor

    2015-01-01

    Genome-wide association studies (GWAS) are proven tools for finding disease genes, but it is often necessary to combine many cohorts into a meta-analysis to detect statistically significant genetic effects. Often the component studies are performed by different investigators on different populations, using different chips with minimal SNP overlap. In some cases, raw data are not available for imputation, so that only the genotyped SNP results can be used in meta-analysis. Even when SNP sets are comparable, different cohorts may have peak association signals at different SNPs within the same gene due to population differences in linkage disequilibrium or environmental interactions. We hypothesize that the power to detect statistical signals in these situations will improve by using a method that simultaneously meta-analyzes and smooths the signal over nearby markers. In this study we propose regionally smoothed meta-analysis (RSM) methods and compare their performance on real and simulated data. PMID:26707090

  1. A simple and fast method for computing the relativistic Compton Scattering Kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Kershaw, David S.; Prasad, Manoj K.; Beason, J. Douglas

    1986-01-01

    The Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution is analytically reduced to a single integral, which can then be rapidly evaluated in a variety of ways. A particularly fast method for numerically computing this single integral is presented. This is, to the authors' knowledge, the first correct computation of the Compton scattering kernel.

  2. Standard Errors of the Kernel Equating Methods under the Common-Item Design.

    ERIC Educational Resources Information Center

    Liou, Michelle; And Others

    This research derives simplified formulas for computing the standard error of the frequency estimation method for equating score distributions that are continuized using a uniform or Gaussian kernel function (P. W. Holland, B. F. King, and D. T. Thayer, 1989; Holland and Thayer, 1987). The simplified formulas are applicable to equating both the…

  3. A 3-dimensional Bergman kernel method with applications to rectangular domains

    NASA Astrophysics Data System (ADS)

    Bock, S.; Falcao, M. I.; Gurlebeck, K.; Malonek, H.

    2006-05-01

    In this paper we revisit the so-called Bergman kernel method--BKM--for solving conformal mapping problems and propose a generalized BKM-approach to extend the theory to three-dimensional mapping problems. A special software package for quaternions was developed for the numerical experiments.

  4. Modelling of the control of heart rate by breathing using a kernel method.

    PubMed

    Ahmed, A K; Fakhouri, S Y; Harness, J B; Mearns, A J

    1986-03-07

    The process from breathing (input) to heart rate (output) in man is considered for system identification through the input-output relationship, using a mathematical model expressed as integral equations. The integral equation is fixed so that the identification method reduces to the determination of the values within the integral, called kernels, resulting in an integral equation whose input-output behaviour is nearly identical to that of the system. This paper uses a kernel identification algorithm for the Volterra series which greatly reduces the computational burden and eliminates the restriction of using white Gaussian input as a test signal. A second-order model is the most appropriate for a good estimate of the system dynamics. The model contains the linear part (first-order kernel) and quadratic part (second-order kernel) in parallel, and so allows for the possibility of separating the linear and non-linear elements of the process. The response of the linear term exhibits the oscillatory input and underdamped nature of the system. Applying breathing as input to the system produces an oscillatory term, which may be attributed to the sinus node of the heart being sensitive to the modulating signal, the breathing wave. The negative on-diagonal values seem to cause the dynamic asymmetry of the total response of the system, which opposes the oscillatory nature of the first kernel and relates to the restraining force present in the respiratory heart rate system. The presence of positive off-diagonal values in the second-order kernel of the respiratory control of heart rate indicates an escape-like phenomenon in the system.

  5. Smoothed particle hydrodynamics method from a large eddy simulation perspective

    NASA Astrophysics Data System (ADS)

    Di Mascio, A.; Antuono, M.; Colagrossi, A.; Marrone, S.

    2017-03-01

    The Smoothed Particle Hydrodynamics (SPH) method, often used for the modelling of the Navier-Stokes equations by a meshless Lagrangian approach, is revisited from the point of view of Large Eddy Simulation (LES). To this aim, the LES filtering procedure is recast in a Lagrangian framework by defining a filter that moves with the positions of the fluid particles at the filtered velocity. It is shown that the SPH smoothing procedure can be reinterpreted as a sort of LES Lagrangian filtering, and that, besides the terms coming from the LES convolution, additional contributions (never accounted for in the SPH literature) appear in the equations when formulated in a filtered fashion. Appropriate closure formulas are derived for the additional terms and a preliminary numerical test is provided to show the main features of the proposed LES-SPH model.

  6. Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.

    PubMed

    Echinaka, Yuki; Ozeki, Yukiyasu

    2016-10-01

    The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Applying this method, we introduce the bootstrap method and propose a numerical discrimination of the transition type.

  7. Method for smoothing the surface of a protective coating

    DOEpatents

    Sangeeta, D.; Johnson, Curtis Alan; Nelson, Warren Arthur

    2001-01-01

    A method for smoothing the surface of a ceramic-based protective coating which exhibits roughness is disclosed. The method includes the steps of applying a ceramic-based slurry or gel coating to the protective coating surface; heating the slurry/gel coating to remove volatile material; and then further heating the slurry/gel coating to cure the coating and bond it to the underlying protective coating. The slurry/gel coating is often based on yttria-stabilized zirconia, and precursors of an oxide matrix. Related articles of manufacture are also described.

  8. The Continuized Log-Linear Method: An Alternative to the Kernel Method of Continuization in Test Equating

    ERIC Educational Resources Information Center

    Wang, Tianyou

    2008-01-01

    Von Davier, Holland, and Thayer (2004) laid out a five-step framework of test equating that can be applied to various data collection designs and equating methods. In the continuization step, they presented an adjusted Gaussian kernel method that preserves the first two moments. This article proposes an alternative continuization method that…

  9. A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.

    PubMed

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-09-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level.

  10. The complex variable reproducing kernel particle method for elasto-plasticity problems

    NASA Astrophysics Data System (ADS)

    Chen, Li; Cheng, Yumin

    2010-05-01

    On the basis of the reproducing kernel particle method (RKPM), using complex variable theory, the complex variable reproducing kernel particle method (CVRKPM) is discussed in this paper. The advantage of the CVRKPM is that the correction function of a two-dimensional problem is formed with a one-dimensional basis function when the shape function is formed. The CVRKPM is then applied to solve two-dimensional elasto-plasticity problems. The Galerkin weak form is employed to obtain the discretized system equation, and the penalty method is used to apply the essential boundary conditions. The CVRKPM for two-dimensional elasto-plasticity problems is thus formed, the corresponding formulae are obtained, and the Newton-Raphson method is used in the numerical implementation. Three numerical examples are given to show that the method is effective for elasto-plasticity analysis.

  11. A new efficient hybrid intelligent method for nonlinear dynamical systems identification: The Wavelet Kernel Fuzzy Neural Network

    NASA Astrophysics Data System (ADS)

    Loussifi, Hichem; Nouri, Khaled; Benhadj Braiek, Naceur

    2016-03-01

    In this paper, a hybrid computational intelligence approach combining kernel methods with wavelet multi-resolution analysis (MRA) is presented for fuzzy wavelet network construction and initialization. Mother wavelets are used as activation functions for the neural network structure and as kernel functions in the machine learning process. By choosing precise values of the scale parameters based on the windowed scalogram representation of the Continuous Wavelet Transform (CWT), a set of kernel parameters is taken to construct the proposed Wavelet Kernel based Fuzzy Neural Network (WK-FNN), with an efficient initialization technique based on the use of wavelet kernels in Support Vector Machine for Regression (SVMR). Simulation examples are given to test the usability and effectiveness of the proposed hybrid intelligent method in the system identification of dynamic plants and in the prediction of a chaotic time series. The proposed WK-FNN achieves higher accuracy and good performance compared to other methods.

  12. A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System.

    PubMed

    Yuan, Xianfeng; Song, Mumin; Zhou, Fengyu; Chen, Zhumin; Li, Yan

    2015-01-01

    Wheeled robots have been successfully applied in many areas, such as industrial handling vehicles and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on a Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of the preliminary results, a novel ML-kernel based PSVM classifier is proposed, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by the fusion of the BPAs. Experimental results show that the proposed framework is not only capable of detecting and identifying faults in the robot driving system, but also has better stability and diagnosis accuracy compared with traditional methods.

  13. The method of tailored sensitivity kernels for GRACE mass change estimates

    NASA Astrophysics Data System (ADS)

    Groh, Andreas; Horwath, Martin

    2016-04-01

    To infer mass changes (such as mass changes of an ice sheet) from time series of GRACE spherical harmonic solutions, two basic approaches (with many variants) exist: The regional integration approach (or direct approach) is based on surface mass changes (equivalent water height, EWH) from GRACE and integrates those with specific integration kernels. The forward modeling approach (or mascon approach, or inverse approach) prescribes a finite set of mass change patterns and adjusts the amplitudes of those patterns (in a least squares sense) to the GRACE gravity field changes. The present study reviews the theoretical framework of both approaches. We recall that forward modeling approaches ultimately estimate mass changes by linear functionals of the gravity field changes. Therefore, they implicitly apply sensitivity kernels and may be considered as special realizations of the regional integration approach. We show examples for sensitivity kernels intrinsic to forward modeling approaches. We then propose to directly tailor sensitivity kernels (or in other words: mass change estimators) by a formal optimization procedure that minimizes the sum of propagated GRACE solution errors and leakage errors. This approach involves the incorporation of information on the structure of GRACE errors and the structure of those mass change signals that are most relevant for leakage errors. We discuss the realization of this method, as applied within the ESA "Antarctic Ice Sheet CCI (Climate Change Initiative)" project. Finally, results for the Antarctic Ice Sheet in terms of time series of mass changes of individual drainage basins and time series of gridded EWH changes are presented.
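
    The observation that a least-squares forward-modeling (mascon) fit is itself a linear functional of the data, and therefore carries an implicit sensitivity kernel, can be made concrete in a few lines. The Python sketch below uses a synthetic design matrix rather than actual GRACE spherical harmonic data; each row of the hat matrix is the implicit sensitivity kernel of one pattern amplitude.

```python
import numpy as np

# Sketch: a least-squares "mascon" fit is a linear functional of the
# observations, so its sensitivity kernel can be read off explicitly.
# A maps pattern amplitudes to (synthetic) gravity-field observations.
rng = np.random.default_rng(0)
n_obs, n_patterns = 500, 12
A = rng.normal(size=(n_obs, n_patterns))   # prescribed mass-change patterns

# hat matrix of the least-squares fit: a_hat = (A^T A)^{-1} A^T g
H = np.linalg.solve(A.T @ A, A.T)          # n_patterns x n_obs
# row i of H is the sensitivity kernel implicitly applied to the
# gravity-field changes when estimating pattern i's amplitude
g = A @ rng.normal(size=n_patterns) + 0.1 * rng.normal(size=n_obs)
amplitudes = H @ g
print(H.shape, amplitudes.shape)
```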

  14. A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System

    PubMed Central

    Yuan, Xianfeng; Song, Mumin; Zhou, Fengyu; Chen, Zhumin; Li, Yan

    2015-01-01

    Wheeled robots have been successfully applied in many areas, such as industrial handling vehicles and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on a Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of the preliminary results, a novel ML-kernel based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by the fusion of the BPAs. Experimental results show that the proposed framework not only is capable of detecting and identifying faults in the robot driving system, but also has better performance in stability and diagnosis accuracy compared with traditional methods. PMID:26229526
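
    The closed form of the ML-kernel is not reproduced in the abstracts above, so the sketch below uses an assumed illustrative form: the Gaussian kernel with the exponential replaced by the one-parameter Mittag-Leffler function, which reduces exactly to the RBF kernel at a = 1. All parameter values are invented.

```python
import numpy as np
from math import gamma

def mittag_leffler(z, a, n_terms=60):
    # One-parameter Mittag-Leffler function E_a(z) = sum z^k / Gamma(a k + 1),
    # evaluated by truncated series (adequate for modest |z|).
    return sum(z**k / gamma(a * k + 1) for k in range(n_terms))

def ml_kernel(x, z, a=0.8, sigma=1.0):
    # Assumed form: a Gaussian-type kernel with exp replaced by E_a;
    # at a = 1 this reduces exactly to the RBF kernel.
    d2 = np.sum((x - z) ** 2)
    return mittag_leffler(-d2 / (2.0 * sigma**2), a)

x, z = np.array([0.0, 1.0]), np.array([0.5, 0.2])
# sanity check: a = 1 recovers the ordinary RBF kernel value
print(ml_kernel(x, z, a=1.0), np.exp(-np.sum((x - z) ** 2) / 2.0))
```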

  15. Verification and large deformation analysis using the reproducing kernel particle method

    SciTech Connect

    Beckwith, Frank

    2015-09-01

    The reproducing kernel particle method (RKPM) is a meshless method used to solve general boundary value problems using the principle of virtual work. RKPM corrects the kernel approximation by introducing reproducing conditions which force the method to be complete to arbritrary order polynomials selected by the user. Effort in recent years has led to the implementation of RKPM within the Sierra/SM physics software framework. The purpose of this report is to investigate convergence of RKPM for verification and validation purposes as well as to demonstrate the large deformation capability of RKPM in problems where the finite element method is known to experience difficulty. Results from analyses using RKPM are compared against finite element analysis. A host of issues associated with RKPM are identified and a number of potential improvements are discussed for future work.

  16. A kernel method for calculating effective radiative forcing in transient climate simulations

    NASA Astrophysics Data System (ADS)

    Larson, E. J. L.; Portmann, R. W.

    2015-12-01

    Effective radiative forcing (ERF) is calculated as the flux change at the top of the atmosphere, after allowing fast adjustments, due to a forcing agent such as greenhouse gases or volcanic events. Accurate estimates of the ERF are necessary in order to understand the drivers of climate change. ERF cannot be observed directly and is difficult to estimate from indirect observations due to the complexity of climate responses to individual forcing factors. We present a new method of calculating ERF using a kernel populated from a time series of a model variable (e.g. global mean surface temperature) in a CO2 step change experiment. The top of atmosphere (TOA) radiative imbalance has the best noise tolerance of the model variables we tested for retrieving the ERF. We compare the kernel method with the energy balance method for estimating ERF in the CMIP5 models. The energy balance method uses the regression between the TOA imbalance and temperature change in a CO2 step change experiment to estimate the climate feedback parameter. It then assumes the feedback parameter is constant to calculate the forcing time series. This method is sensitive to the number of years chosen for the regression, and the nonlinearity in the regression leads to a bias. We quantify the sensitivities and biases of these methods and compare their estimates of forcing. The kernel method is more accurate for models in which a linear fit is a poor approximation for the relationship between temperature change and TOA imbalance.
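
    The energy balance baseline described above (regressing TOA imbalance against temperature change in a step experiment) is easy to sketch with synthetic data; the numbers below are invented and only illustrate how the feedback parameter and forcing estimate fall out of the regression.

```python
import numpy as np

# Sketch of the energy-balance (regression) baseline the kernel method
# is compared against: in a CO2 step run, regress TOA imbalance N on
# temperature change dT; the slope estimates -lambda and the intercept
# estimates the forcing. All numbers here are synthetic.
rng = np.random.default_rng(0)
years = np.arange(150)
F_true = 3.7                               # W m-2, abrupt CO2 step (assumed)
lam = 1.2                                  # W m-2 K-1 feedback (assumed)
dT = (F_true / lam) * (1 - np.exp(-years / 20.0)) + 0.05 * rng.normal(size=150)
N = F_true - lam * dT + 0.1 * rng.normal(size=150)

slope, intercept = np.polyfit(dT, N, 1)
print(f"lambda = {-slope:.2f} W m-2 K-1, ERF = {intercept:.2f} W m-2")

# assuming a constant feedback parameter, a forcing series follows as
F_series = N + (-slope) * dT
```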

  17. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
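
    As a minimal illustration of LSCV bandwidth selection for a kernel estimator, the Python sketch below scores candidate bandwidths for a one-dimensional Gaussian kernel density estimate; the data and grid are synthetic stand-ins, not simulated home ranges.

```python
import numpy as np

def gauss_kde(x, data, h):
    # Gaussian kernel density estimate, evaluated at points x
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def lscv_score(h, data, grid):
    # Least-squares cross-validation: integral of fhat^2 minus twice
    # the mean of the leave-one-out density at each data point.
    fhat = gauss_kde(grid, data, h)
    int_f2 = np.trapz(fhat**2, grid)
    loo = np.array([gauss_kde(data[i:i + 1], np.delete(data, i), h)[0]
                    for i in range(len(data))])
    return int_f2 - 2.0 * loo.mean()

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 200)          # stand-in for relocation coords
grid = np.linspace(-5, 5, 512)
hs = np.linspace(0.05, 1.5, 60)
h_lscv = hs[np.argmin([lscv_score(h, data, grid) for h in hs])]
print(f"LSCV bandwidth: {h_lscv:.3f}")
```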

  18. Arima model and exponential smoothing method: A comparison

    NASA Astrophysics Data System (ADS)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data on the price of crude palm oil (RM/tonne), the exchange rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the price of SMR 20 rubber (cents/kg), with three different time series, are used in the comparison process. The forecasting accuracy of each model is then measured by examining the prediction errors produced, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for time series with a narrow range from one point to another, as in the time series for exchange rates. On the contrary, the Exponential Smoothing Method can produce better forecasts for the exchange rate series, which has a narrow range from one point to another, while it cannot produce a better prediction for a longer forecasting period.
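
    The error measures used in the comparison are straightforward to reproduce. The sketch below fits both model classes to an invented series, assuming the statsmodels package is available; the ARIMA order and smoothing choices are illustrative, not those of the study.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

def mse(a, f):  return np.mean((a - f) ** 2)               # Mean Squared Error
def mape(a, f): return np.mean(np.abs((a - f) / a)) * 100  # Mean Abs. % Error
def mad(a, f):  return np.mean(np.abs(a - f))              # Mean Abs. Deviation

# Synthetic stand-in for a commodity price series (the study's data,
# palm oil, exchange rates, and rubber, is not reproduced here).
rng = np.random.default_rng(1)
series = 2400 + np.cumsum(rng.normal(0, 15, 120))
train, test = series[:108], series[108:]

arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=len(test))
ses_fc = SimpleExpSmoothing(train).fit().forecast(len(test))

for name, fc in [("ARIMA(1,1,1)", arima_fc), ("SES", ses_fc)]:
    print(f"{name}: MSE={mse(test, fc):.1f} "
          f"MAPE={mape(test, fc):.2f}% MAD={mad(test, fc):.1f}")
```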

  19. Modeling Electrokinetic Flows by the Smoothed Profile Method

    PubMed Central

    Luo, Xian; Beskok, Ali; Karniadakis, George Em

    2010-01-01

    We propose an efficient modeling method for electrokinetic flows based on the Smoothed Profile Method (SPM) [1–4] and spectral element discretizations. The new method allows for arbitrary differences in the electrical conductivities between the charged surfaces and the surrounding electrolyte solution. The electrokinetic forces are included in the flow equations so that the Poisson-Boltzmann and electric charge continuity equations are cast into forms suitable for SPM. The method is validated by benchmark problems of electroosmotic flow in straight channels and electrophoresis of charged cylinders. We also present simulation results of electrophoresis of charged microtubules, and show that the simulated electrophoretic mobility and anisotropy agree with the experimental values. PMID:20352076

  20. Smoothing Forecasting Methods for Academic Library Circulations: An Evaluation and Recommendation.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.; Forys, John W., Jr.

    1986-01-01

    Circulation time-series data from 50 midwest academic libraries were used to test 110 variants of 8 smoothing forecasting methods. Data and methodologies and illustrations of two recommended methods--the single exponential smoothing method and Brown's one-parameter linear exponential smoothing method--are given. Eight references are cited. (EJS)
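
    Both recommended methods can be written in a few lines. The sketch below implements the standard recursions for single exponential smoothing and Brown's one-parameter linear exponential smoothing; the circulation counts are invented.

```python
import numpy as np

def ses(x, alpha):
    # single exponential smoothing; returns the smoothed series
    s = np.empty_like(x, dtype=float)
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s

def brown_forecast(x, alpha, m=1):
    # Brown's one-parameter linear (double) exponential smoothing:
    # smooth the smoothed series, then extrapolate level + m * trend.
    s1 = ses(x, alpha)
    s2 = ses(s1, alpha)
    level = 2 * s1[-1] - s2[-1]
    trend = alpha / (1 - alpha) * (s1[-1] - s2[-1])
    return level + m * trend                 # m-step-ahead forecast

# Toy monthly circulation counts (illustrative only)
circ = np.array([410, 395, 430, 442, 455, 470, 461, 480, 492, 505], float)
print(ses(circ, 0.3)[-1], brown_forecast(circ, 0.3, m=3))
```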

  1. Scalable Kernel Methods and Algorithms for General Sequence Analysis

    ERIC Educational Resources Information Center

    Kuksa, Pavel

    2011-01-01

    Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as the document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…

  2. Single corn kernel aflatoxin B1 extraction and analysis method

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aflatoxins are highly carcinogenic compounds produced by the fungus Aspergillus flavus. Aspergillus flavus is a phytopathogenic fungus that commonly infects crops such as cotton, peanuts, and maize. The goal was to design an effective sample preparation method and analysis for the extraction of afla...

  3. Noninvasive reconstruction of cardiac transmembrane potentials using a kernelized extreme learning method

    NASA Astrophysics Data System (ADS)

    Jiang, Mingfeng; Zhang, Heng; Zhu, Lingyan; Cao, Li; Wang, Yaming; Xia, Ling; Gong, Yinglan

    2015-04-01

    Non-invasively reconstructing the cardiac transmembrane potentials (TMPs) from body surface potentials can be treated as a regression problem. The support vector regression (SVR) method is often used to solve such regression problems; however, the computational complexity of the SVR training algorithm is usually high. In this paper, another learning algorithm, termed the extreme learning machine (ELM), is proposed to reconstruct the cardiac transmembrane potentials. Moreover, ELM can be extended to single-hidden-layer feedforward neural networks with a kernel matrix (kernelized ELM), which can achieve good generalization performance at a fast learning speed. Based on realistic heart-torso models, a normal and two abnormal ventricular activation cases are applied for training and testing the regression model. The experimental results show that the ELM method achieves better regression performance than the single SVR method in terms of TMP reconstruction accuracy and reconstruction speed. Moreover, compared with the ELM method, the kernelized ELM method features good approximation and generalization ability when reconstructing the TMPs.
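
    A kernelized ELM regressor has a simple closed-form training step, solving (K + I/C) beta = y, which is what makes the learning speed attractive. The sketch below is a generic implementation with an RBF kernel and toy data, not the heart-torso TMP setup; gamma and C are assumed values.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian RBF kernel matrix between row-sample matrices A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelELM:
    """Kernelized extreme learning machine for regression (sketch).

    Training solves (K + I/C) beta = y in closed form, so there is
    no iterative tuning of hidden-layer weights."""
    def __init__(self, gamma=1.0, C=100.0):
        self.gamma, self.C = gamma, C

    def fit(self, X, y):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, y)
        return self

    def predict(self, Xnew):
        return rbf_kernel(Xnew, self.X, self.gamma) @ self.beta

# Toy regression stand-in for the TMP reconstruction problem
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
model = KernelELM(gamma=0.5, C=1e3).fit(X, y)
print(model.predict(np.array([[0.5]])))
```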

  4. A kernel machine-based fMRI physiological noise removal method.

    PubMed

    Song, Xiaomu; Chen, Nan-kuei; Gaur, Pooja

    2014-02-01

    The functional magnetic resonance imaging (fMRI) technique with blood oxygenation level dependent (BOLD) contrast is a powerful tool for noninvasive mapping of brain function under task and resting states. The removal of cardiac- and respiration-induced physiological noise in fMRI data has been a significant challenge as fMRI studies seek to achieve higher spatial resolutions and characterize more subtle neuronal changes. The low temporal sampling rate of most multi-slice fMRI experiments often causes aliasing of physiological noise into the frequency range of the BOLD activation signal. In addition, changes of heartbeat and respiration patterns also generate physiological fluctuations that have similar frequencies to BOLD activation. Most existing physiological noise-removal methods either place restrictive limitations on image acquisition or utilize filtering or regression based post-processing algorithms, which cannot distinguish the frequency-overlapping BOLD activation and the physiological noise. In this work, we address the challenge of physiological noise removal via the kernel machine technique, where a nonlinear kernel machine technique, kernel principal component analysis, is used with a specifically identified kernel function to differentiate the BOLD signal from physiological noise in the same frequency range. The proposed method was evaluated in human fMRI data acquired from multiple task-related and resting state fMRI experiments. A comparison study was also performed with an existing adaptive filtering method. The results indicate that the proposed method can effectively identify and reduce the physiological noise in fMRI data. The comparison study shows that the proposed method can provide comparable or better noise removal performance than the adaptive filtering approach.

  5. A Kernel Machine-based fMRI Physiological Noise Removal Method

    PubMed Central

    Song, Xiaomu; Chen, Nan-kuei; Gaur, Pooja

    2013-01-01

    The functional magnetic resonance imaging (fMRI) technique with blood oxygenation level dependent (BOLD) contrast is a powerful tool for noninvasive mapping of brain function under task and resting states. The removal of cardiac- and respiration-induced physiological noise in fMRI data has been a significant challenge as fMRI studies seek to achieve higher spatial resolutions and characterize more subtle neuronal changes. The low temporal sampling rate of most multi-slice fMRI experiments often causes aliasing of physiological noise into the frequency range of the BOLD activation signal. In addition, changes of heartbeat and respiration patterns also generate physiological fluctuations that have similar frequencies to BOLD activation. Most existing physiological noise-removal methods either place restrictive limitations on image acquisition or utilize filtering or regression based post-processing algorithms, which cannot distinguish the frequency-overlapping BOLD activation and the physiological noise. In this work, we address the challenge of physiological noise removal via the kernel machine technique, where a nonlinear kernel machine technique, kernel principal component analysis, is used with a specifically identified kernel function to differentiate the BOLD signal from physiological noise in the same frequency range. The proposed method was evaluated in human fMRI data acquired from multiple task-related and resting state fMRI experiments. A comparison study was also performed with an existing adaptive filtering method. The results indicate that the proposed method can effectively identify and reduce the physiological noise in fMRI data. The comparison study shows that the proposed method can provide comparable or better noise removal performance than the adaptive filtering approach. PMID:24321306
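
    Kernel PCA denoising of the kind described above can be sketched with scikit-learn, assuming it is available. The paper's specifically identified kernel is not reproduced here, so a generic RBF kernel and a synthetic "BOLD plus physiological" time series stand in; projecting onto a few leading nonlinear components and mapping back discards part of the structured noise.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Hypothetical stand-in data: 100 voxel time series of length 240, a slow
# BOLD-like component plus a faster quasi-periodic physiological component.
rng = np.random.default_rng(0)
t = np.arange(240) * 2.0                    # TR = 2 s (assumed)
bold = np.outer(rng.normal(size=100), np.sin(2 * np.pi * 0.01 * t))
physio = np.outer(rng.normal(size=100), np.sin(2 * np.pi * 0.3 * t))
data = bold + 0.5 * physio + 0.1 * rng.normal(size=(100, 240))

# Keep only a few leading nonlinear components, then reconstruct an
# approximate pre-image; discarded components carry part of the noise.
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True)
scores = kpca.fit_transform(data)
denoised = kpca.inverse_transform(scores)
print(denoised.shape)                       # (100, 240)
```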

  6. A robust, high-throughput method for computing maize ear, cob, and kernel attributes automatically from images.

    PubMed

    Miller, Nathan D; Haase, Nicholas J; Lee, Jonghyun; Kaeppler, Shawn M; de Leon, Natalia; Spalding, Edgar P

    2017-01-01

    Grain yield of the maize plant depends on the sizes, shapes, and numbers of ears and the kernels they bear. An automated pipeline that can measure these components of yield from easily-obtained digital images is needed to advance our understanding of this globally important crop. Here we present three custom algorithms designed to compute such yield components automatically from digital images acquired by a low-cost platform. One algorithm determines the average space each kernel occupies along the cob axis using a sliding-window Fourier transform analysis of image intensity features. A second counts individual kernels removed from ears, including those in clusters. A third measures each kernel's major and minor axis after a Bayesian analysis of contour points identifies the kernel tip. Dimensionless ear and kernel shape traits that may interrelate yield components are measured by principal components analysis of contour point sets. Increased objectivity and speed compared to typical manual methods are achieved without loss of accuracy as evidenced by high correlations with ground truth measurements and simulated data. Millimeter-scale differences among ear, cob, and kernel traits that ranged more than 2.5-fold across a diverse group of inbred maize lines were resolved. This system for measuring maize ear, cob, and kernel attributes is being used by multiple research groups as an automated Web service running on community high-throughput computing and distributed data storage infrastructure. Users may create their own workflow using the source code that is staged for download on a public repository.

  7. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice

    PubMed Central

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel “trick” concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Several parametric and kernel methods, namely the least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression, were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available. PMID:27555865

  8. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    PubMed

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Several parametric and kernel methods, namely the least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression, were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
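
    The RKHS regression at the heart of the comparison has a compact closed form, alpha = (K + lambda I)^{-1} y. The sketch below is a generic kernel ridge regression with a Gaussian kernel on toy marker data; the rate and regularization values are assumptions, and with a linear kernel on marker matrices the same code reduces to GBLUP-style prediction.

```python
import numpy as np

def gaussian_kernel(A, B, rate):
    # K(x, z) = exp(-rate * ||x - z||^2), the Gaussian kernel used
    # in RKHS regression / kernel ridge regression
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-rate * np.clip(d2, 0.0, None))

def rkhs_fit_predict(M_train, y, M_test, rate=0.005, lam=1.0):
    """Kernel ridge regression: alpha = (K + lam*I)^{-1} y,
    yhat = K_test_train @ alpha. The rate would normally be tuned."""
    K = gaussian_kernel(M_train, M_train, rate)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return gaussian_kernel(M_test, M_train, rate) @ alpha

# Toy marker matrix (0/1/2 genotype codes) and simulated phenotype
rng = np.random.default_rng(42)
M = rng.integers(0, 3, size=(150, 100)).astype(float)
beta = rng.normal(0, 0.1, 100)
y = M @ beta + rng.normal(0, 0.5, 150)
yhat = rkhs_fit_predict(M[:120], y[:120], M[120:])
print(np.corrcoef(yhat, y[120:])[0, 1])     # predictive ability
```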

  9. A Novel Cortical Thickness Estimation Method based on Volumetric Laplace-Beltrami Operator and Heat Kernel

    PubMed Central

    Wang, Gang; Zhang, Xiaofeng; Su, Qingtang; Shi, Jie; Caselli, Richard J.; Wang, Yalin

    2015-01-01

    Cortical thickness estimation in magnetic resonance imaging (MRI) is an important technique for research on brain development and neurodegenerative diseases. This paper presents a heat kernel based cortical thickness estimation algorithm, which is driven by the graph spectrum and heat kernel theory, to capture the grey matter geometry information from in vivo brain magnetic resonance (MR) images. First, we construct a tetrahedral mesh that matches the MR images and reflects the inherent geometric characteristics. Second, the harmonic field is computed by the volumetric Laplace-Beltrami operator, and the direction of the streamline is obtained by tracing the maximum heat transfer probability based on heat kernel diffusion. Thereby we can calculate the cortical thickness between points on the pial and white matter surfaces. The new method relies on intrinsic brain geometry structure and the computation is robust and accurate. To validate our algorithm, we apply it to study the thickness differences associated with Alzheimer’s disease (AD) and mild cognitive impairment (MCI) on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. Our preliminary experimental results on 151 subjects (51 AD, 45 MCI, 55 controls) show that the new algorithm may successfully detect statistically significant differences among patients with AD, MCI and healthy control subjects. Our computational framework is efficient and very general. It has the potential to be used for thickness estimation on any biological structures with clearly defined inner and outer surfaces. PMID:25700360

  10. A novel cortical thickness estimation method based on volumetric Laplace-Beltrami operator and heat kernel.

    PubMed

    Wang, Gang; Zhang, Xiaofeng; Su, Qingtang; Shi, Jie; Caselli, Richard J; Wang, Yalin

    2015-05-01

    Cortical thickness estimation in magnetic resonance imaging (MRI) is an important technique for research on brain development and neurodegenerative diseases. This paper presents a heat kernel based cortical thickness estimation algorithm, which is driven by the graph spectrum and heat kernel theory, to capture the gray matter geometry information from in vivo brain magnetic resonance (MR) images. First, we construct a tetrahedral mesh that matches the MR images and reflects the inherent geometric characteristics. Second, the harmonic field is computed by the volumetric Laplace-Beltrami operator, and the direction of the streamline is obtained by tracing the maximum heat transfer probability based on heat kernel diffusion. Thereby we can calculate the cortical thickness between points on the pial and white matter surfaces. The new method relies on intrinsic brain geometry structure and the computation is robust and accurate. To validate our algorithm, we apply it to study the thickness differences associated with Alzheimer's disease (AD) and mild cognitive impairment (MCI) on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Our preliminary experimental results on 151 subjects (51 AD, 45 MCI, 55 controls) show that the new algorithm may successfully detect statistically significant differences among patients with AD, MCI and healthy control subjects. Our computational framework is efficient and very general. It has the potential to be used for thickness estimation on any biological structures with clearly defined inner and outer surfaces.

  11. A Kernel-Free Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 4

    NASA Technical Reports Server (NTRS)

    Park, Young-Keun; Fahrenthold, Eric P.

    2004-01-01

    An improved hybrid particle-finite element method has been developed for the simulation of hypervelocity impact problems. Unlike alternative methods, the revised formulation computes the density without reference to any kernel or interpolation functions, for either the density or the rate of dilatation. This simplifies the state space model and leads to a significant reduction in computational cost. The improved method introduces internal energy variables as generalized coordinates in a new formulation of the thermomechanical Lagrange equations. Example problems show good agreement with exact solutions in one dimension and good agreement with experimental data in a three dimensional simulation.

  12. Kernel reconstruction methods for Doppler broadening - Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    NASA Astrophysics Data System (ADS)

    Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord

    2017-04-01

    This article establishes a new family of methods to perform temperature interpolation of nuclear interactions cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T - namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus here performed by reconstructing the kernel of the operation at a given temperature T by means of linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (Tj) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin ,Tmax ]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them upon isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of 238U total cross section over the temperature range [ 300 K , 3000 K ] with only 9 reference temperatures.

  13. Kernel reconstruction methods for Doppler broadening — Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    DOE PAGES

    Ducru, Pablo; Josey, Colin; Dibert, Karia; ...

    2017-01-25

    This paper establishes a new family of methods to perform temperature interpolation of nuclear interactions cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T — namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus here performed by reconstructing the kernel of the operation at a given temperature T by means of linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (Tj) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin,Tmax]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them upon isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of 238U total cross section over the temperature range [300 K,3000 K] with only 9 reference temperatures.
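
    The L2-optimal coefficients described above are the solution of a small linear system (the normal equations of a projection). In the sketch below, a Gaussian whose width grows like sqrt(T) stands in for the true free-gas Doppler broadening kernel, so only the projection machinery, not the physics, is illustrated.

```python
import numpy as np

# Schematic stand-in kernel: a Gaussian whose width grows like sqrt(T).
# The actual Doppler broadening kernel is more involved; this sketch
# only illustrates the L2 projection that yields the coefficients.
def kernel(x, T, T0=300.0):
    w = 0.05 * np.sqrt(T / T0)
    return np.exp(-0.5 * (x / w) ** 2) / (w * np.sqrt(2.0 * np.pi))

x = np.linspace(-1.0, 1.0, 4001)
T_refs = [300.0, 900.0, 2100.0, 3000.0]   # reference temperatures (Tj)
T = 1200.0                                # target temperature

# Normal equations of the L2 projection: G a = b, with
# G_ij = <k_Ti, k_Tj> and b_i = <k_Ti, k_T>.
K = np.stack([kernel(x, Tj) for Tj in T_refs])
G = np.trapz(K[:, None, :] * K[None, :, :], x, axis=-1)
b = np.trapz(K * kernel(x, T), x, axis=-1)
a = np.linalg.solve(G, b)

recon = a @ K
err = np.sqrt(np.trapz((recon - kernel(x, T)) ** 2, x))
print("coefficients:", np.round(a, 3), " L2 error:", err)
```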

  14. Efficient recovery-based error estimation for the smoothed finite element method for smooth and singular linear elasticity

    NASA Astrophysics Data System (ADS)

    González-Estrada, Octavio A.; Natarajan, Sundararajan; Ródenas, Juan José; Nguyen-Xuan, Hung; Bordas, Stéphane P. A.

    2013-07-01

    An error control technique aimed at assessing the quality of smoothed finite element approximations is presented in this paper. Finite element techniques based on strain smoothing, which appeared in 2007, were shown to provide significant advantages compared to conventional finite element approximations. In particular, a widely cited strength of such methods is improved accuracy for the same computational cost. Yet, few attempts have been made to directly assess the quality of the results obtained during the simulation by evaluating an estimate of the discretization error. Here we propose a recovery-type error estimator based on an enhanced recovery technique. The salient features of the recovery are: enforcement of local equilibrium and, for singular problems, a "smooth + singular" decomposition of the recovered stress. We evaluate the proposed estimator on a number of test cases from linear elastic structural mechanics and obtain efficient error estimations whose effectivities, both at local and global levels, are improved compared to recovery procedures not implementing these features.

  15. Development of low-frequency kernel-function aerodynamics for comparison with time-dependent finite-difference methods

    NASA Technical Reports Server (NTRS)

    Bland, S. R.

    1982-01-01

    Finite difference methods for unsteady transonic flow frequently use simplified equations in which certain of the time-dependent terms are omitted from the governing equations. Kernel functions are derived for two-dimensional subsonic flow, and provide accurate solutions of the linearized potential equation with the same time-dependent terms omitted. These solutions make possible a direct evaluation of the finite difference codes for the linear problem. Calculations with two of these low-frequency kernel functions verify the accuracy of the LTRAN2 and HYTRAN2 finite difference codes. Comparisons of the low-frequency kernel function results with the Possio kernel function solution of the complete linear equations indicate the adequacy of the HYTRAN approximation for frequencies in the range of interest for flutter calculations.

  16. Weighted Wilcoxon-type Smoothly Clipped Absolute Deviation Method

    PubMed Central

    Wang, Lan; Li, Runze

    2009-01-01

    Shrinkage-type variable selection procedures have recently seen increasing applications in biomedical research. However, their performance can be adversely influenced by outliers in either the response or the covariate space. This paper proposes a weighted Wilcoxon-type smoothly clipped absolute deviation (WW-SCAD) method, which deals with robust variable selection and robust estimation simultaneously. The new procedure can be conveniently implemented with the statistical software R. We establish that the WW-SCAD correctly identifies the set of zero coefficients with probability approaching one and estimates the nonzero coefficients at the rate n^{-1/2}. Moreover, with appropriately chosen weights the WW-SCAD is robust with respect to outliers in both the x and y directions. The important special case with constant weights yields an oracle-type estimator with high efficiency in the presence of heavier-tailed random errors. The robustness of the WW-SCAD is partly justified by its asymptotic performance under local shrinking contamination. We propose a BIC-type tuning parameter selector for the WW-SCAD. The performance of the WW-SCAD is demonstrated via simulations and by an application to a study that investigates the effects of personal characteristics and dietary factors on plasma beta-carotene level. PMID:18647294

  17. Bioactive compounds in cashew nut (Anacardium occidentale L.) kernels: effect of different shelling methods.

    PubMed

    Trox, Jennifer; Vadivel, Vellingiri; Vetter, Walter; Stuetz, Wolfgang; Scherbaum, Veronika; Gola, Ute; Nohr, Donatus; Biesalski, Hans Konrad

    2010-05-12

    In the present study, the effects of various conventional shelling methods (oil-bath roasting, direct steam roasting, drying, and open pan roasting) as well as a novel "Flores" hand-cracking method on the levels of bioactive compounds of cashew nut kernels were investigated. The raw cashew nut kernels were found to possess appreciable levels of certain bioactive compounds such as beta-carotene (9.57 microg/100 g of DM), lutein (30.29 microg/100 g of DM), zeaxanthin (0.56 microg/100 g of DM), alpha-tocopherol (0.29 mg/100 g of DM), gamma-tocopherol (1.10 mg/100 g of DM), thiamin (1.08 mg/100 g of DM), stearic acid (4.96 g/100 g of DM), oleic acid (21.87 g/100 g of DM), and linoleic acid (5.55 g/100 g of DM). All of the conventional shelling methods including oil-bath roasting, steam roasting, drying, and open pan roasting revealed a significant reduction, whereas the Flores hand-cracking method exhibited similar levels of carotenoids, thiamin, and unsaturated fatty acids in cashew nuts when compared to raw unprocessed samples.

  18. Introducing kernel based morphology as an enhancement method for mass classification on mammography.

    PubMed

    Amirzadi, Azardokht; Azmi, Reza

    2013-04-01

    Since mammography images are low-contrast, applying enhancement techniques as a pre-processing step is widely recommended for the classification of abnormal lesions into benign or malignant. A new kind of structural enhancement is proposed by a morphological operator, which introduces an optimal Gaussian kernel primitive; the kernel parameters are optimized using a Genetic Algorithm. We also take advantage of optical density (OD) images to improve the diagnosis rate. The proposed enhancement method is applied to both the gray level (GL) images and their OD values; as a result, morphological patterns become bolder on the GL images, and local binary patterns are then extracted from these images. Applying the enhancement method to OD images accentuates the differences between the values, so a thresholding method is applied to remove some background pixels. Those pixels that are more likely to belong to a mass are retained, and some statistical texture features are extracted from their equivalent GL images. A support vector machine is used for both approaches, and the final decision is made by combining the two classifiers. The classification performance is evaluated by Az, the area under the receiver operating characteristic curve. The designed method yields Az = 0.9231, which demonstrates good results.

  19. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2016-10-14

    A piezo-resistive pressure sensor is made of silicon, the nature of which is considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if linear output is expected. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve good learning and generalization performance, a hybrid kernel function, constructed from a local Radial Basis Function (RBF) kernel and a global polynomial kernel, is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as the maximum absolute relative error, the minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.

  20. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM

    PubMed Central

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir

    2016-01-01

    A piezo-resistive pressure sensor is made of silicon, the nature of which is considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if linear output is expected. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve good learning and generalization performance, a hybrid kernel function, constructed from a local Radial Basis Function (RBF) kernel and a global polynomial kernel, is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as the maximum absolute relative error, the minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research. PMID:27754428
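
    The LSSVM with a hybrid kernel can be sketched directly from its KKT system. Below, the hybrid kernel is an assumed convex mix of an RBF and a polynomial kernel, and the training data are invented; the chaotic ions motion hyper-parameter search is not reproduced.

```python
import numpy as np

def hybrid_kernel(A, B, gamma_rbf=0.5, degree=2, w=0.7):
    # Convex mix of a local RBF kernel and a global polynomial kernel;
    # the mixing weight and hyper-parameters here are assumptions.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma_rbf * d2)
    poly = (A @ B.T + 1.0) ** degree
    return w * rbf + (1.0 - w) * poly

def lssvm_fit(X, y, gam=10.0, **kw):
    """LSSVM regression: solve the KKT system
    [[0, 1^T], [1, K + I/gam]] [b; alpha] = [0; y]."""
    n = len(y)
    K = hybrid_kernel(X, X, **kw)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gam
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                   # bias b, coefficients alpha

# Toy compensation data: output vs (pressure, temperature), invented
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (80, 2))
y = 1.0 + X[:, 0] + 0.3 * np.sin(3 * X[:, 1]) + 0.01 * rng.normal(size=80)
b, alpha = lssvm_fit(X, y)
yhat = hybrid_kernel(X, X) @ alpha + b
print("max abs rel error:", np.max(np.abs((yhat - y) / y)))
```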

  1. The complex variable reproducing kernel particle method for the analysis of Kirchhoff plates

    NASA Astrophysics Data System (ADS)

    Chen, L.; Cheng, Y. M.; Ma, H. P.

    2015-03-01

    In this paper, the complex variable reproducing kernel particle method (CVRKPM) for the bending problem of arbitrary Kirchhoff plates is presented. The advantage of the CVRKPM is that the shape function of a two-dimensional problem is obtained using a one-dimensional basis function. The CVRKPM is used to form the approximation function of the deflection of a Kirchhoff plate, the Galerkin weak form of the bending problem of Kirchhoff plates is adopted to obtain the discretized system equations, and the penalty method is employed to enforce the essential boundary conditions; the corresponding formulae of the CVRKPM for the bending problem of Kirchhoff plates are then presented in detail. Several numerical examples of Kirchhoff plates with different geometries and loads are given to demonstrate that the CVRKPM in this paper has higher computational precision and efficiency than the reproducing kernel particle method under the same node distribution. The influences of the basis function, weight function, scaling factor, node distribution and penalty factor on the computational precision of the CVRKPM are also discussed.

  2. An effective meshfree reproducing kernel method for buckling analysis of cylindrical shells with and without cutouts

    NASA Astrophysics Data System (ADS)

    Sadamoto, S.; Ozdemir, M.; Tanaka, S.; Taniguchi, K.; Yu, T. T.; Bui, T. Q.

    2017-02-01

    The paper is concerned with eigen buckling analysis of curvilinear shells with and without cutouts by an effective meshfree method. In particular, shallow shell, cylinder and perforated cylinder buckling problems are considered. A Galerkin meshfree reproducing kernel (RK) approach is then developed. The present meshfree curvilinear shell model is based on Reissner-Mindlin plate formulation, which allows the transverse shear deformation of the curved shells. There are five degrees of freedom per node (i.e., three displacements and two rotations). In this setting, the meshfree interpolation functions are derived from the RK. A singular kernel is introduced to impose the essential boundary conditions because of the RK shape functions, which do not automatically possess the Kronecker delta property. The stiffness matrix is derived using the stabilized conforming nodal integration technique. A convected coordinate system is introduced into the formulation to deal with the curvilinear surface. More importantly, the RKs taken here are used not only for the interpolation of the curved geometry, but also for the approximation of field variables. Several numerical examples with shallow shells and full cylinder models are considered, and the critical buckling loads and their buckling mode shapes are calculated by the meshfree eigenvalue analysis and examined. To show the accuracy and performance of the developed meshfree method, the computed critical buckling loads and mode shapes are compared with reference solutions based on boundary domain element, finite element and analytical methods.

  3. Multiple genetic variant association testing by collapsing and kernel methods with pedigree or population structured data.

    PubMed

    Schaid, Daniel J; McDonnell, Shannon K; Sinnwell, Jason P; Thibodeau, Stephen N

    2013-07-01

    Searching for rare genetic variants associated with complex diseases can be facilitated by enriching for diseased carriers of rare variants by sampling cases from pedigrees enriched for disease, possibly with related or unrelated controls. This strategy, however, complicates analyses because of shared genetic ancestry, as well as linkage disequilibrium among genetic markers. To overcome these problems, we developed broad classes of "burden" statistics and kernel statistics, extending commonly used methods for unrelated case-control data to allow for known pedigree relationships, for autosomes and the X chromosome. Furthermore, by replacing pedigree-based genetic correlation matrices with estimates of genetic relationships based on large-scale genomic data, our methods can be used to account for population-structured data. By simulations, we show that the type I error rates of our developed methods are near the asymptotic nominal levels, allowing rapid computation of P-values. Our simulations also show that a linear weighted kernel statistic is generally more powerful than a weighted "burden" statistic. Because the proposed statistics are rapid to compute, they can be readily used for large-scale screening of the association of genomic sequence data with disease status.
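
    For unrelated samples, the weighted linear kernel statistic the abstract favours takes a compact form, Q = r^T K r with K = G W G^T; the pedigree or genomic correlation adjustment is omitted in the sketch below, and all data are simulated under the null, so the names and weights are illustrative.

```python
import numpy as np

# Sketch of a weighted linear kernel ("burden/kernel") statistic:
# Q = r^T K r with K = G W G^T, where r are residuals from a null
# model and W holds per-variant weights (up-weighting rare variants).
rng = np.random.default_rng(7)
n, m = 300, 25                                 # subjects, rare variants
G = rng.binomial(2, 0.02, size=(n, m)).astype(float)   # genotype counts
y = rng.normal(size=n)                         # phenotype (null here)

maf = G.mean(axis=0) / 2.0
w = 1.0 / np.sqrt(np.clip(maf * (1 - maf), 1e-6, None))
r = y - y.mean()                               # null-model residuals
K = (G * w**2) @ G.T                           # weighted linear kernel
Q = r @ K @ r
print(f"kernel statistic Q = {Q:.1f}")
```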

  4. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks

    PubMed Central

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-01-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparse rate of model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparse rate of model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298

  5. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks.

    PubMed

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-07-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparse rate of model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparse rate of model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost.

  6. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
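
    The sampling-based subspace idea behind AKCL can be illustrated with a generic Nyström approximation, which replaces the full n x n kernel matrix with blocks built from a small landmark sample; this sketch shows the approximation itself, not the paper's competitive learning updates, and all sizes are invented.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Nystrom-style subsampling: approximate the full n x n kernel matrix
# from an n x m block and an m x m block (m << n), avoiding the need
# to compute or store K in full.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
idx = rng.choice(len(X), size=100, replace=False)   # landmark sample
C = rbf(X, X[idx])                                  # n x m block
W = rbf(X[idx], X[idx])                             # m x m block
# K is approximately C W^+ C^T; Phi = C W^{-1/2} gives explicit low-rank
# features with (approximately) the same inner products.
evals, evecs = np.linalg.eigh(W)
inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(np.clip(evals, 1e-10, None))) @ evecs.T
Phi = C @ inv_sqrt                                  # low-rank feature map
print(Phi.shape)                                    # (2000, 100)
```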

  7. Evaluation of Two New Smoothing Methods in Equating: The Cubic B-Spline Presmoothing Method and the Direct Presmoothing Method

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2009-01-01

    This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…

  8. Immersed Boundary Smooth Extension (IBSE): A high-order method for solving incompressible flows in arbitrary smooth domains

    NASA Astrophysics Data System (ADS)

    Stein, David B.; Guy, Robert D.; Thomases, Becca

    2017-04-01

    The Immersed Boundary method is a simple, efficient, and robust numerical scheme for solving PDE in general domains, yet for fluid problems it only achieves first-order spatial accuracy near embedded boundaries for the velocity field and fails to converge pointwise for elements of the stress tensor. In a previous work we introduced the Immersed Boundary Smooth Extension (IBSE) method, a variation of the IB method that achieves high-order accuracy for elliptic PDE by smoothly extending the unknown solution of the PDE from a given smooth domain to a larger computational domain, enabling the use of simple Cartesian-grid discretizations. In this work, we extend the IBSE method to allow for the imposition of a divergence constraint, and demonstrate high-order convergence for the Stokes and incompressible Navier-Stokes equations: up to third-order pointwise convergence for the velocity field, and second-order pointwise convergence for all elements of the stress tensor. The method is flexible to the underlying discretization: we demonstrate solutions produced using both a Fourier spectral discretization and a standard second-order finite-difference discretization.

  9. A Generalized Grid-Based Fast Multipole Method for Integrating Helmholtz Kernels.

    PubMed

    Parkkinen, Pauli; Losilla, Sergio A; Solala, Eelis; Toivanen, Elias A; Xu, Wen-Hua; Sundholm, Dage

    2017-02-14

    A grid-based fast multipole method (GB-FMM) for optimizing three-dimensional (3D) numerical molecular orbitals in the bubbles and cube double basis has been developed and implemented. The present GB-FMM method is a generalization of our recently published GB-FMM approach for numerically calculating electrostatic potentials and two-electron interaction energies. The orbital optimization is performed by integrating the Helmholtz kernel in the double basis. The steep part of the functions in the vicinity of the nuclei is represented by one-center bubbles functions, whereas the remaining cube part is expanded on an equidistant 3D grid. The integration of the bubbles part is treated by using one-center expansions of the Helmholtz kernel in spherical harmonics multiplied with modified spherical Bessel functions of the first and second kind, analogously to the numerical inward and outward integration approach for calculating two-electron interaction potentials in atomic structure calculations. The expressions and algorithms for massively parallel calculations on general purpose graphics processing units (GPGPU) are described. The accuracy and correctness of the implementation have been checked by performing Hartree-Fock self-consistent-field (HF-SCF) calculations on H2, H2O, and CO. Our calculations show that an accuracy of 10^-4 to 10^-7 Eh can be reached in HF-SCF calculations on general molecules.

  10. Multi-class Mode of Action Classification of Toxic Compounds Using Logic Based Kernel Methods.

    PubMed

    Lodhi, Huma; Muggleton, Stephen; Sternberg, Mike J E

    2010-09-17

    Toxicity prediction is essential for drug design and the development of effective therapeutics. In this paper we present an in silico strategy for identifying the mode of action of toxic compounds that is based on a novel logic-based kernel method. The technique uses support vector machines in conjunction with kernels constructed from first-order rules induced by an Inductive Logic Programming system. It constructs multi-class models by using a divide-and-conquer reduction strategy that splits multi-classes into binary groups and solves each individual problem recursively, hence generating an underlying decision list structure. In order to evaluate the effectiveness of the approach for chemoinformatics problems like predictive toxicology, we apply it to toxicity classification in aquatic systems. The method is used to identify and classify 442 compounds with respect to their mode of action. The experimental results show that the technique successfully classifies toxic compounds and can be useful in assessing environmental risks. Experimental comparison of the performance of the proposed multi-class scheme with the standard multi-class Inductive Logic Programming algorithm and a multi-class Support Vector Machine yields statistically significant results and demonstrates the potential power and benefits of the approach in identifying compounds of various toxic mechanisms.

  11. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    NASA Astrophysics Data System (ADS)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel

  12. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.

    PubMed

    Novosad, Philip; Reader, Andrew J

    2016-06-21

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel

  13. Multi-feature-based robust face detection and coarse alignment method via multiple kernel learning

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Zhang, Di; He, Jun; Yu, Lejun; Wu, Xuewen

    2015-10-01

    Face detection and alignment are two crucial tasks in face recognition, which is a topic of great interest in the field of defense and security, whether for public safety and the protection of personal property or for information and communication security. Recent approaches to these tasks generally fall into three types: template-matching-based, knowledge-based, and machine-learning-based, and they tend to be separate-step, computationally costly, or lacking in robustness. After in-depth analysis of a large number of Chinese face images without hats, we propose a novel face detection and coarse alignment method that draws on all three types of approach. It fuses multiple features with the Simple Multiple Kernel Learning (Simple-MKL) algorithm. The proposed method is compared with competitive and related algorithms and is demonstrated to achieve promising results.

  14. Impact of beam smoothing method on direct drive target performance for the NIF

    SciTech Connect

    Rothenberg, J.E.; Weber, S.V.

    1997-01-01

    The impact of smoothing method on the performance of a direct drive target is modeled and examined in terms of its l-mode spectrum. In particular, two classes of smoothing methods are compared, smoothing by spectral dispersion (SSD) and the induced spatial incoherence (ISI) method. It is found that SSD using sinusoidal phase modulation (FM) results in poor smoothing at low l-modes and therefore inferior target performance at both peak velocity and ignition. This disparity is most notable if the effective imprinting integration time of the target is small. However, using SSD with more generalized phase modulation can result in smoothing at low l-modes which is identical to that obtained with ISI. For either smoothing method, the calculations indicate that at peak velocity the surface perturbations are about 100 times larger than that which leads to nonlinear hydrodynamics. Modeling of the hydrodynamic nonlinearity shows that saturation can reduce the amplified nonuniformities to the level required to achieve ignition for either smoothing method. The low l-mode behavior at ignition is found to be strongly dependent on the induced divergence of the smoothing method. For the NIF parameters the target performance asymptotes for smoothing divergence larger than ~100 μrad.

  15. Suppression of stochastic pulsation in laser-plasma interaction by smoothing methods

    NASA Astrophysics Data System (ADS)

    Hora, Heinrich; Aydin, Meral

    1992-04-01

    The control of the very complex behavior of a plasma with laser interaction by smoothing with induced spatial incoherence or other methods was related to improving the lateral uniformity of the irradiation. While this is important, it is shown from numerical hydrodynamic studies that the very strong temporal pulsation (stuttering) will mostly be suppressed by these smoothing methods too.

  16. Suppression of stochastic pulsation in laser-plasma interaction by smoothing methods

    SciTech Connect

    Hora, H.; Aydin, M.

    1992-04-15

    The control of the very complex behavior of a plasma with laser interaction by smoothing with induced spatial incoherence or other methods was related to improving the lateral uniformity of the irradiation. While this is important, it is shown from numerical hydrodynamic studies that the very strong temporal pulsation (stuttering) will mostly be suppressed by these smoothing methods too.

  17. Lattice-Boltzmann method combined with smoothed-profile method for particulate suspensions

    NASA Astrophysics Data System (ADS)

    Jafari, Saeed; Yamamoto, Ryoichi; Rahnama, Mohamad

    2011-02-01

    We developed a simulation scheme based on the coupling of the lattice-Boltzmann method with the smoothed-profile method (SPM) to predict the dynamic behavior of colloidal dispersions. The SPM provides a coupling scheme between continuum fluid dynamics and rigid-body dynamics through a smoothed profile of the fluid-particle interface. In this approach, the flow is computed on fixed Eulerian grids which are also used for the particles. Owing to the use of the same grids for the simulation of fluid flow and particles, this method is highly efficient. Furthermore, an external boundary force is used to impose the no-slip boundary condition at the fluid-particle interface. In addition, the operations in the present method are local; it can be easily programmed for parallel machines. The methodology is validated by comparing with previously published data.
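
    For illustration, a minimal sketch of a smoothed profile: the particle is represented on the fixed Eulerian grid by a smooth indicator field that varies from 1 inside the solid to 0 in the fluid over an interface of thickness ξ. The disk geometry and parameter names are assumptions, not the authors' code.

```python
# Hedged sketch of the smoothed-profile idea: a smooth indicator phi in
# [0, 1] marks the particle on the fixed Eulerian grid, so no
# boundary-conforming mesh is needed.
import numpy as np

def smoothed_profile(grid_x, grid_y, center, radius, xi):
    """Smooth indicator for a disk-shaped particle of given radius."""
    r = np.hypot(grid_x - center[0], grid_y - center[1])
    return 0.5 * (np.tanh((radius - r) / xi) + 1.0)

# phi weights the rigid-body velocity into the fluid momentum equation,
# which is how the no-slip condition is imposed at the interface.
```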

  18. Lattice-Boltzmann method combined with smoothed-profile method for particulate suspensions.

    PubMed

    Jafari, Saeed; Yamamoto, Ryoichi; Rahnama, Mohamad

    2011-02-01

    We developed a simulation scheme based on the coupling of the lattice-Boltzmann method with the smoothed-profile method (SPM) to predict the dynamic behavior of colloidal dispersions. The SPM provides a coupling scheme between continuum fluid dynamics and rigid-body dynamics through a smoothed profile of the fluid-particle interface. In this approach, the flow is computed on fixed Eulerian grids which are also used for the particles. Owing to the use of the same grids for simulation of fluid flow and particles, this method is highly efficient. Furthermore, an external boundary is used to impose the no-slip boundary condition at the fluid-particle interface. In addition, the operations in the present method are local; it can be easily programmed for parallel machines. The methodology is validated by comparing with previously published data.

  19. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2016-02-25

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information in very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
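
    The entropy-based sorting at the core of KECA can be sketched in a few lines. The snippet below ranks kernel eigenpairs by their contribution to the Rényi entropy estimate instead of by eigenvalue; it is a minimal sketch of standard KECA, not of the OKECA rotation itself.

```python
# Hedged sketch of KECA's core step: rank kernel eigenpairs by their
# contribution to the Renyi entropy estimate rather than by eigenvalue.
import numpy as np

def keca_components(K, n_components):
    """K: (n, n) PSD kernel matrix. Returns entropy-ranked projections."""
    lam, E = np.linalg.eigh(K)                  # ascending eigenvalues
    lam = np.clip(lam, 0.0, None)
    ones = np.ones(K.shape[0])
    # entropy contribution of eigenpair i: (sqrt(lam_i) * e_i . 1)^2
    contrib = lam * (E.T @ ones) ** 2
    order = np.argsort(contrib)[::-1][:n_components]
    return E[:, order] * np.sqrt(lam[order])    # projections onto kept axes
```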

  20. Parametric kernel-driven active contours for image segmentation

    NASA Astrophysics Data System (ADS)

    Wu, Qiongzhi; Fang, Jiangxiong

    2012-10-01

    We investigated a parametric kernel-driven active contour (PKAC) model, which implicitly applies kernel mapping and a piecewise-constant model to the image data via a kernel function. The proposed model consists of a curve evolution functional with three terms: global and local kernel-driven terms, which evaluate the deviation of the mapped image data within each region from the piecewise-constant model, and a regularization term expressed as the length of the evolution curves. Through the local kernel-driven term, the proposed model can effectively segment images with intensity inhomogeneity by incorporating local image information. By balancing the weight between the global and local kernel-driven terms, the proposed model can segment images with either intensity homogeneity or intensity inhomogeneity. To ensure the smoothness of the level set function and reduce the computational cost, a distance regularizing term is applied to penalize the deviation of the level set function and eliminate the requirement of re-initialization. Compared with the local image fitting model and the local binary fitting model, experimental results show the advantages of the proposed method in terms of computational efficiency and accuracy.

  1. Discriminating between HuR and TTP binding sites using the k-spectrum kernel method

    PubMed Central

    Goldberg, Debra S.; Dowell, Robin

    2017-01-01

    Background The RNA binding proteins (RBPs) human antigen R (HuR) and Tristetraprolin (TTP) are known to exhibit competitive binding but have opposing effects on the bound messenger RNA (mRNA). How cells discriminate between the two proteins is an interesting problem. Machine learning approaches, such as support vector machines (SVMs), may be useful in the identification of discriminative features. However, this method has yet to be applied to studies of RNA binding protein motifs. Results Applying the k-spectrum kernel to a support vector machine (SVM), we first verified the published binding sites of both HuR and TTP. Additional feature engineering highlighted the U-rich binding preference of HuR and the AU-rich binding preference of TTP. Domain adaptation along with multi-task learning was used to predict the common binding sites. Conclusion The distinction between HuR and TTP binding appears to rest on subtle content features: HuR prefers strongly U-rich sequences, whereas TTP prefers AU-rich sequences, and with increasing A content the sequences are more likely to be bound only by TTP. Our model is consistent with competitive binding of the two proteins, particularly at intermediate AU-balanced sequences. This suggests that fine changes in the A/U balance within an untranslated region (UTR) can alter the binding and subsequent stability of the message. Both feature engineering and domain adaptation emphasized the extent to which these proteins recognize similar general sequence features. This work suggests that the k-spectrum kernel method could be useful when studying RNA binding proteins and that domain adaptation techniques such as feature augmentation could be employed, particularly when examining RBPs with similar binding preferences. PMID:28333956
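
    The k-spectrum kernel itself is simple to state: two sequences are compared through the inner product of their k-mer count vectors. A minimal sketch follows; k and the example sequences are illustrative, not taken from the study.

```python
# Hedged sketch of the k-spectrum kernel: the dot product of the two
# sequences' k-mer count vectors, computed without building the full
# 4**k-dimensional feature space.
from collections import Counter

def kmer_counts(seq, k):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(seq_a, seq_b, k=6):
    """K(a, b) = sum over shared k-mers of count_a * count_b."""
    ca, cb = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    return sum(n * cb[kmer] for kmer, n in ca.items())

# Illustrative comparison of an AU-rich and a U-rich UTR fragment:
print(spectrum_kernel("AUUUAUUUAUUUA", "UUUUUUUUUUUUU", k=3))
```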

  2. An Evaluation of Kernel Equating: Parallel Equating with Classical Methods in the SAT Subject Tests[TM] Program. Research Report. ETS RR-09-06

    ERIC Educational Resources Information Center

    Grant, Mary C.; Zhang, Lilly; Damiano, Michele

    2009-01-01

    This study investigated kernel equating methods by comparing these methods to operational equatings for two tests in the SAT Subject Tests[TM] program. GENASYS (ETS, 2007) was used for all equating methods and scaled score kernel equating results were compared to Tucker, Levine observed score, chained linear, and chained equipercentile equating…

  3. Electro-optical deflectors as a method of beam smoothing for Inertial Confinement Fusion

    SciTech Connect

    Rothenberg, J.E.

    1997-01-01

    The electro-optic deflector is analyzed and compared to smoothing by spectral dispersion for efficacy as a beam smoothing method for ICF. It is found that the electro-optic deflector is inherently somewhat less efficient when compared either on the basis of equal peak phase modulation or equal generated bandwidth.

  4. Prediction of posttranslational modification sites from amino acid sequences with kernel methods.

    PubMed

    Xu, Yan; Wang, Xiaobo; Wang, Yongcui; Tian, Yingjie; Shao, Xiaojian; Wu, Ling-Yun; Deng, Naiyang

    2014-03-07

    Post-translational modification (PTM) is the chemical modification of a protein after its translation and is one of the later steps in protein biosynthesis for many proteins. It plays an important role in modifying the end product of gene expression and contributes to biological processes and disease conditions. However, the experimental methods for identifying PTM sites are both costly and time-consuming, so computational methods are highly desirable. In this work, a novel encoding method, PSPM (position-specific propensity matrices), is developed. A support vector machine (SVM) with the kernel matrix computed from the PSPM encoding is then applied to predict PTM sites. The experimental results indicate that the performance of the new method is better than or comparable with that of existing methods. Therefore, the new method is a useful computational resource for the identification of PTM sites. A unified standalone software package, PTMPred, has been developed. It can be used to predict all types of PTM sites if the user provides training datasets. The software can be freely downloaded from http://www.aporc.org/doc/wiki/PTMPred.

  5. On radiative transfer using synthetic kernel and simplified spherical harmonics methods in linearly anisotropically scattering media

    NASA Astrophysics Data System (ADS)

    Altaç, Zekeriya

    2014-11-01

    The Synthetic Kernel (SKN) method is employed for a 3D absorbing, emitting and linearly anisotropically scattering inhomogeneous medium. The standard SKN approximation is applied only to the diffusive components of the radiative transfer equations. An alternative SKN (SKN*) method is also derived in full 3-D generality by extending the approximation to the direct wall contributions. Complete sets of boundary conditions for both SKN approaches are rigorously obtained. The simplified spherical harmonics (P2N-1 or SP2N-1) and simplified double spherical harmonics (DPN-1 or SDPN-1) equations for a linearly anisotropically scattering homogeneous medium are also derived. The resulting full P2N-1 and DPN-1 (or SP2N-1 and SDPN-1) equations are cast as diagonalized second-order coupled diffusion-like equations. By this analysis, it is shown that the SKN method is a high-order approximation, and simply by the selection of full- or half-range Gauss-Legendre quadratures, the SKN* equations become identical to the P2N-1 or DPN-1 (or SP2N-1 or SDPN-1) equations. Numerical verification of all methods presented is carried out using a 1D participating isotropic slab medium. The SKN method proves to be more accurate than the SKN* approximation, but it is analytically more involved. It is shown that the SKN* method with the proposed BCs converges with increasing order of approximation, and the BCs are applicable to the SPN or SDPN methods.

  6. An adaptive segment method for smoothing lidar signal based on noise estimation

    NASA Astrophysics Data System (ADS)

    Wang, Yuzhao; Luo, Pingping

    2014-10-01

    An adaptive segmentation smoothing method (ASSM) is introduced in the paper to smooth the signal and suppress the noise. In the ASSM, the noise level is defined as 3σ of the background signal. An integer N is defined for finding the changing positions in the signal curve: if the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All the end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed; the ASSM instead finds different end points in different signals, so the smoothing windows can be set adaptively. The windows are always set to half of the segment length, and an average smoothing method is then applied within each segment. An iterative process is required to reduce the end-point aberration effect of the average smoothing method; two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, which means that frequency-domain disturbances are avoided. A lidar echo was simulated in the experimental work. The echo was assumed to be produced by a space-borne lidar (e.g., CALIOP), and white Gaussian noise was added to the echo to represent the random noise arising from the environment and the detector. The novel method, ASSM, was applied to the noisy echo to filter the noise. In the test, N was set to 3 and the iteration count to two. The results show that the signal can be smoothed adaptively by the ASSM, although N and the iteration count may need to be optimized when the ASSM is applied to a different lidar.
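
    A minimal sketch of the segmentation-and-smoothing loop described above, assuming the background noise level σ has already been estimated; the moving-average window of half the segment length and the iteration count follow the abstract.

```python
# Hedged sketch of the adaptive segmentation idea: break the signal where
# adjacent samples jump by more than 3*N*sigma, then smooth each segment
# separately with an iterated moving average.
import numpy as np

def assm(signal, sigma, N=3, iterations=2):
    sig = np.asarray(signal, dtype=float)
    jumps = np.where(np.abs(np.diff(sig)) > 3 * N * sigma)[0] + 1
    edges = np.concatenate(([0], jumps, [len(sig)]))
    out = sig.copy()
    for a, b in zip(edges[:-1], edges[1:]):
        seg = out[a:b]
        win = max(1, len(seg) // 2)        # window = half the segment length
        for _ in range(iterations):        # iterate to reduce end effects
            seg = np.convolve(seg, np.ones(win) / win, mode="same")
        out[a:b] = seg
    return out
```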

  7. Nonlinear hyperspectral unmixing based on constrained multiple kernel NMF

    NASA Astrophysics Data System (ADS)

    Cui, Jiantao; Li, Xiaorun; Zhao, Liaoying

    2014-05-01

    Nonlinear spectral unmixing constitutes an important field of research for hyperspectral imagery. An unsupervised nonlinear spectral unmixing algorithm, namely multiple kernel constrained nonnegative matrix factorization (MKCNMF) is proposed by coupling multiple-kernel selection with kernel NMF. Additionally, a minimum endmemberwise distance constraint and an abundance smoothness constraint are introduced to alleviate the uniqueness problem of NMF in the algorithm. In the MKCNMF, two problems of optimizing matrices and selecting the proper kernel are jointly solved. The performance of the proposed unmixing algorithm is evaluated via experiments based on synthetic and real hyperspectral data sets. The experimental results demonstrate that the proposed method outperforms some existing unmixing algorithms in terms of spectral angle distance (SAD) and abundance fractions.

  8. Application of dose kernel calculation using a simplified Monte Carlo method to treatment plan for scanned proton beams.

    PubMed

    Mizutani, Shohei; Takada, Yoshihisa; Kohno, Ryosuke; Hotta, Kenji; Tansho, Ryohei; Akimoto, Tetsuo

    2016-03-01

    Full Monte Carlo (FMC) calculation of dose distribution has been recognized to have superior accuracy compared with the pencil beam algorithm (PBA). However, since FMC methods require long calculation times, it is difficult to apply them to routine treatment planning at present. In order to improve the situation, a simplified Monte Carlo (SMC) method has been introduced for the dose kernel calculation applicable to the dose optimization procedure for proton pencil beam scanning. We have evaluated the accuracy of the SMC calculation by comparing the results of dose kernel calculations using the SMC method with those using the FMC method in an inhomogeneous phantom. The dose distribution obtained by the SMC method was in good agreement with that obtained by the FMC method. To assess the usefulness of SMC calculation in clinical situations, we compared the results of dose calculations using the SMC method with those using the PBA method for three clinical cases of tumor treatment. The dose distributions calculated with the PBA dose kernels appear to be homogeneous in the planning target volumes (PTVs). In reality, the dose distributions calculated with the SMC dose kernels using the spot weights optimized with the PBA method are largely inhomogeneous in the PTVs, while those with the spot weights optimized with the SMC method are moderately homogeneous in the PTVs. Calculation using the SMC method is faster than that using GEANT4 by three orders of magnitude. In addition, the graphics processing unit (GPU) boosts the calculation speed by 13 times for treatment planning using the SMC method. Hence, the SMC method will be applicable to routine clinical treatment planning, reproducing complex dose distributions more accurately than the PBA method in a reasonably short time by use of the GPU-based calculation engine. PACS number(s): 87.55.Gh.

  9. Application of dose kernel calculation using a simplified Monte Carlo method to treatment plan for scanned proton beams.

    PubMed

    Mizutani, Shohei; Takada, Yoshihisa; Kohno, Ryosuke; Hotta, Kenji; Tansho, Ryohei; Akimoto, Tetsuo

    2016-03-08

    Full Monte Carlo (FMC) calculation of dose distribution has been recognized to have superior accuracy compared with the pencil beam algorithm (PBA). However, since FMC methods require long calculation times, it is difficult to apply them to routine treatment planning at present. In order to improve the situation, a simplified Monte Carlo (SMC) method has been introduced for the dose kernel calculation applicable to the dose optimization procedure for proton pencil beam scanning. We have evaluated the accuracy of the SMC calculation by comparing the results of dose kernel calculations using the SMC method with those using the FMC method in an inhomogeneous phantom. The dose distribution obtained by the SMC method was in good agreement with that obtained by the FMC method. To assess the usefulness of SMC calculation in clinical situations, we compared the results of dose calculations using the SMC method with those using the PBA method for three clinical cases of tumor treatment. The dose distributions calculated with the PBA dose kernels appear to be homogeneous in the planning target volumes (PTVs). In reality, the dose distributions calculated with the SMC dose kernels using the spot weights optimized with the PBA method are largely inhomogeneous in the PTVs, while those with the spot weights optimized with the SMC method are moderately homogeneous in the PTVs. Calculation using the SMC method is faster than that using GEANT4 by three orders of magnitude. In addition, the graphics processing unit (GPU) boosts the calculation speed by 13 times for treatment planning using the SMC method. Hence, the SMC method will be applicable to routine clinical treatment planning, reproducing complex dose distributions more accurately than the PBA method in a reasonably short time by use of the GPU-based calculation engine.

  10. A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems

    PubMed Central

    Ying, Wenjun; Henriquez, Craig S.

    2013-01-01

    This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with standard finite difference method (FDM) or finite element method (FEM), where the right-hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600

  11. Adaptive reproducing kernel particle method for extraction of the cortical surface.

    PubMed

    Xu, Meihe; Thompson, Paul M; Toga, Arthur W

    2006-06-01

    We propose a novel adaptive approach based on the Reproducing Kernel Particle Method (RKPM) to extract the cortical surfaces of the brain from three-dimensional (3-D) magnetic resonance images (MRIs). To formulate the discrete equations of the deformable model, a flexible particle shape function is employed in the Galerkin approximation of the weak form of the equilibrium equations. The proposed support generation method ensures that the supports of all particles cover the entire computational domain. The deformable model is adaptively adjusted by dilating the shape function and by inserting or merging particles in regions of high curvature or regions stopped by the target boundary. The shape function of the particle with a dilation parameter is adaptively constructed in response to particle insertion or merging. The proposed method offers flexibility in representing highly convolved structures and in refining the deformable models. Self-intersection of the surface during evolution is prevented by tracing backward along the gradient descent direction from the crest interface of the distance field, which is computed by fast marching. These operations involve a significant computational cost. The initial model for the deformable surface is simple and requires no prior knowledge of the segmented structure. No specific template is required, e.g., an average cortical surface obtained from many subjects. The extracted cortical surface efficiently localizes the depths of the cerebral sulci, unlike some other active surface approaches that penalize regions of high curvature. Comparisons with manually segmented landmark data are provided to demonstrate the high accuracy of the proposed method. We also compare the proposed method to the finite element method and to a commonly used cortical surface extraction approach, the CRUISE method. We also show that the independence of the shape functions of the RKPM from the underlying mesh enhances the convergence speed of the deformable

  12. Haplotype Kernel Association Test as a Powerful Method to Identify Chromosomal Regions Harboring Uncommon Causal Variants

    PubMed Central

    Lin, Wan-Yu; Yi, Nengjun; Lou, Xiang-Yang; Zhi, Degui; Zhang, Kui; Gao, Guimin; Tiwari, Hemant K.; Liu, Nianjun

    2014-01-01

    For most complex diseases, the fraction of heritability that can be explained by the variants discovered from genome-wide association studies is minor. Although the so-called ‘rare variants’ (minor allele frequency [MAF] < 1%) have attracted increasing attention, they are unlikely to account for much of the ‘missing heritability’ because very few people may carry these rare variants. The genetic variants that are likely to fill in the ‘missing heritability’ include uncommon causal variants (MAF < 5%), which are generally untyped in association studies using tagging single-nucleotide polymorphisms (SNPs) or commercial SNP arrays. Developing powerful statistical methods can help to identify chromosomal regions harboring uncommon causal variants, while bypassing the genome-wide or exome-wide next-generation sequencing. In this work, we propose a haplotype kernel association test (HKAT) that is equivalent to testing the variance component of random effects for distinct haplotypes. With an appropriate weighting scheme given to haplotypes, we can further enhance the ability of HKAT to detect uncommon causal variants. With scenarios simulated according to the population genetics theory, HKAT is shown to be a powerful method for detecting chromosomal regions harboring uncommon causal variants. PMID:23740760

  13. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    ERIC Educational Resources Information Center

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…

  14. Numerical Convergence In Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zhu, Qirong; Hernquist, Lars; Li, Yuexing

    2015-02-01

    We study the convergence properties of smoothed particle hydrodynamics (SPH) using numerical tests and simple analytic considerations. Our analysis shows that formal numerical convergence is possible in SPH only in the joint limit N → ∞, h → 0, and Nnb → ∞, where N is the total number of particles, h is the smoothing length, and Nnb is the number of neighbor particles within the smoothing volume used to compute smoothed estimates. Previous work has generally assumed that the conditions N → ∞ and h → 0 are sufficient to achieve convergence, while holding Nnb fixed. We demonstrate that if Nnb is held fixed as the resolution is increased, there will be a residual source of error that does not vanish as N → ∞ and h → 0. Formal numerical convergence in SPH is possible only if Nnb is increased systematically as the resolution is improved. Using analytic arguments, we derive an optimal compromise scaling for Nnb by requiring that this source of error balance that present in the smoothing procedure. For typical choices of the smoothing kernel, we find Nnb ∝ N^0.5. This means that if SPH is to be used as a numerically convergent method, the required computational cost does not scale with particle number as O(N), but rather as O(N^(1+δ)), where δ ≈ 0.5, with a weak dependence on the form of the smoothing kernel.
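
    The scaling argument can be made concrete with a small 1D experiment: a Shepard-normalized SPH estimate using the standard cubic-spline kernel, with the neighbor number grown as Nnb ∝ N^0.5. The test field, uniform particle placement, and normalization are illustrative choices, not the paper's test suite.

```python
# Hedged 1D illustration: smoothed SPH estimate of a field with a
# cubic-spline kernel, choosing h so that ~Nnb particles fall within the
# kernel support of radius 2h, and growing Nnb as N**0.5.
import numpy as np

def cubic_spline_w(q):
    """Standard 1D cubic-spline SPH kernel shape (support radius 2)."""
    w = np.where(q < 1, 1 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2, 0.25 * (2 - q)**3, 0.0))
    return (2.0 / 3.0) * w

def sph_estimate(x_eval, x_part, f_part, h):
    q = np.abs(x_eval[:, None] - x_part[None, :]) / h
    W = cubic_spline_w(q) / h
    return (f_part[None, :] * W).sum(axis=1) / W.sum(axis=1)  # Shepard-normalized

N = 10_000
x = np.sort(np.random.rand(N))
f = np.sin(2 * np.pi * x)
Nnb = int(N ** 0.5)            # Nnb grows as N^0.5, per the derived scaling
h = Nnb / (4.0 * N)            # support width 4h contains ~Nnb particles
print(sph_estimate(np.array([0.25, 0.5]), x, f, h))  # ~[1.0, 0.0]
```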

  15. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.

  16. Smooth connection method of segment test data in road surface profile measurement

    NASA Astrophysics Data System (ADS)

    Duan, Hu-Ming; Ma, Ying; Shi, Feng; Zhang, Kai-Bin; Xie, Fei

    2011-12-01

    The measurement system for road surface profiles and the calculation method for segment road test data are reviewed. Sudden vertical steps at the connection points of segment data influence the application of road surface data in automotive engineering, so a new smooth connection method for segment test data is proposed, which corrects the sudden vertical steps using the Signal Local Baseline Adjustment (SLBA) method. An actual example illustrates the detailed process of smoothly connecting segment test data by the SLBA method and the adjustment results at the connection points. The application and calculation results show that the SLBA method is simple and achieves an obvious effect in the smooth connection of segment road test data. The SLBA method can be widely applied to segment road surface data processing or to long-period vibration signal processing.

  17. Smooth connection method of segment test data in road surface profile measurement

    NASA Astrophysics Data System (ADS)

    Duan, Hu-Ming; Ma, Ying; Shi, Feng; Zhang, Kai-Bin; Xie, Fei

    2012-01-01

    The measurement system for road surface profiles and the calculation method for segment road test data are reviewed. Sudden vertical steps at the connection points of segment data influence the application of road surface data in automotive engineering, so a new smooth connection method for segment test data is proposed, which corrects the sudden vertical steps using the Signal Local Baseline Adjustment (SLBA) method. An actual example illustrates the detailed process of smoothly connecting segment test data by the SLBA method and the adjustment results at the connection points. The application and calculation results show that the SLBA method is simple and achieves an obvious effect in the smooth connection of segment road test data. The SLBA method can be widely applied to segment road surface data processing or to long-period vibration signal processing.
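
    A minimal sketch of the baseline-adjustment idea as it might be applied to join profile segments: each new segment is shifted so that its starting level continues the previous segment, removing the vertical step at the joint. The edge-window length is an assumed parameter; the published SLBA procedure may differ in detail.

```python
# Hedged sketch: concatenate road-profile segments, offsetting each one so
# its local baseline continues the previous segment's baseline.
import numpy as np

def join_segments(segments, n_edge=10):
    """n_edge: number of samples used to estimate the local baseline level."""
    joined = np.asarray(segments[0], dtype=float)
    for seg in segments[1:]:
        seg = np.asarray(seg, dtype=float)
        offset = joined[-n_edge:].mean() - seg[:n_edge].mean()
        joined = np.concatenate([joined, seg + offset])
    return joined
```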

  18. Tomography, Adjoint Methods, Time-Reversal, and Banana-Doughnut Kernels

    NASA Astrophysics Data System (ADS)

    Tape, C.; Tromp, J.; Liu, Q.

    2004-12-01

    We demonstrate that Fréchet derivatives for tomographic inversions may be obtained based upon just two calculations for each earthquake: one calculation for the current model and a second, `adjoint', calculation that uses time-reversed signals at the receivers as simultaneous, fictitious sources. For a given model m, we consider objective functions χ(m) that minimize differences between waveforms, traveltimes, or amplitudes. We show that the Fréchet derivatives of such objective functions may be written in the generic form δχ = ∫_V K_m(x) δln m(x) d³x, where δln m = δm/m denotes the relative model perturbation. The volumetric kernel K_m is defined throughout the model volume V and is determined by time-integrated products between spatial and temporal derivatives of the regular displacement field s and the adjoint displacement field s† obtained by using time-reversed signals at the receivers as simultaneous sources. In waveform tomography the time-reversed signal consists of differences between the data and the synthetics, in traveltime tomography it is determined by synthetic velocities, and in amplitude tomography it is controlled by synthetic displacements. For each event, the construction of the kernel K_m requires one forward calculation for the regular field s and one adjoint calculation involving the fields s and s†. For multiple events the kernels are simply summed. The final summed kernel is controlled by the distribution of events and stations and thus determines image resolution. In the case of traveltime tomography, the kernels K_m are weighted combinations of banana-doughnut kernels. We demonstrate also how amplitude anomalies may be inverted for lateral variations in elastic and anelastic structure. The theory is illustrated based upon 2D spectral-element simulations.

  19. Study on preparation method of Zanthoxylum bungeanum seeds kernel oil with zero trans-fatty acids.

    PubMed

    Liu, Tong; Yao, Shi-Yong; Yin, Zhong-Yi; Zheng, Xu-Xu; Shen, Yu

    2016-04-01

    The seed of Zanthoxylum bungeanum (Z. bungeanum) is a by-product of pepper production and is rich in unsaturated fatty acids, cellulose, and protein. Seed oil obtained by the traditional production process of squeezing or solvent extraction is of poor quality and cannot be used as edible oil. In this paper, a new preparation method for Z. bungeanum seed kernel oil (ZSKO) was developed by comparing the advantages and disadvantages of alkali saponification-cold squeezing, alkali saponification-solvent extraction, and alkali saponification-supercritical fluid extraction with carbon dioxide (SFE-CO2). The results showed that alkali saponification-cold squeezing was the optimal preparation method for ZSKO, comprising the following steps: the Z. bungeanum seed was pretreated by alkali saponification (adding 10% NaOH (w/w), solution temperature 80 °C, saponification reaction time 45 min); the pretreated seed was separated by filtering, water washing, and overnight drying at 50 °C; repeated squeezing was then applied at 60 °C and 15% moisture content until no more oil was produced; and the ZSKO was finally obtained by centrifugation. The produced ZSKO contained more than 90% unsaturated fatty acids and no trans-fatty acids, and was verified to be a good edible oil with low acid and peroxide values. It was demonstrated that the alkali saponification-cold squeezing process could be scaled up and applied to the industrialized production of ZSKO.

  20. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  1. Comparison of Exponential Smoothing Methods in Forecasting Palm Oil Real Production

    NASA Astrophysics Data System (ADS)

    Siregar, B.; Butar-Butar, I. A.; Rahmat, RF; Andayani, U.; Fahmi, F.

    2017-01-01

    Palm oil plays an important role in the plantation subsector, and plantation companies need forecasts of real palm oil production over given periods to support their strategic management. This study compared several methods based on the exponential smoothing (ES) technique, namely single ES, double ES (Holt), and triple ES with additive and multiplicative seasonality, to predict palm oil production. We examined the forecasting accuracy of the models on production data and analyzed the characteristics of the models. The R programming language was used, with the constants for double ES (α and β) and triple ES (α, β, and γ) selected by minimizing the root mean squared prediction error (RMSE). Our results showed that additive triple ES had the lowest error rate among the models, with an RMSE of 0.10 for the parameter combination α = 0.6, β = 0.02, and γ = 0.02.
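
    For reference, a minimal sketch of additive triple exponential smoothing (Holt-Winters) with the reported constants α = 0.6, β = 0.02, γ = 0.02; the season length m is an assumption, since the abstract does not state it (monthly data would use m = 12).

```python
# Hedged sketch of additive Holt-Winters smoothing; assumes the series
# contains at least two full seasons for the initialization step.
import numpy as np

def holt_winters_additive(y, m, alpha=0.6, beta=0.02, gamma=0.02):
    y = np.asarray(y, dtype=float)
    level = y[:m].mean()
    trend = (y[m:2 * m].mean() - y[:m].mean()) / m
    season = list(y[:m] - y[:m].mean())
    fitted = []
    for t, obs in enumerate(y):
        s = season[t % m]
        fitted.append(level + trend + s)                       # one-step forecast
        new_level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season[t % m] = gamma * (obs - new_level) + (1 - gamma) * s
        level = new_level
    rmse = float(np.sqrt(np.mean((y - np.array(fitted)) ** 2)))
    return fitted, rmse
```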

  2. A high-order Immersed Boundary method for solving fluid problems on arbitrary smooth domains

    NASA Astrophysics Data System (ADS)

    Stein, David; Guy, Robert; Thomases, Becca

    2015-11-01

    We present a robust, flexible, and high-order Immersed Boundary method for solving the equations of fluid motion on domains with smooth boundaries using FFT-based spectral methods. The solution to the PDE is coupled with an equation for a smooth extension of the unknown solution; high-order accuracy is a natural consequence of this additional global regularity. The method retains much of the simplicity of the original Immersed Boundary method and enables simple implicit and implicit/explicit timestepping schemes to be used to solve a wide range of problems. We show results for the Stokes, Navier-Stokes, and Oldroyd-B equations.

  3. Multiple Kernel Point Set Registration.

    PubMed

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2015-12-22

    The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavily tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels: after parameter learning, the kernel saliencies of the irrelevant kernels go to zero. This makes the choice of kernels less crucial and makes it easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.
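
    The multiple-kernel idea can be sketched as a convex combination of candidate kernels whose weights are learned and then pruned; the Gaussian kernel family, tolerance, and renormalization below are illustrative assumptions rather than the authors' estimation procedure.

```python
# Hedged sketch: the effective kernel is a weighted sum of candidate
# kernels; weights driven (near) zero during learning prune the
# ineffective ones, making the initial kernel choice less critical.
import numpy as np

def combined_kernel(X, Y, weights, bandwidths):
    """Weighted sum of Gaussian kernels with different bandwidths."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return sum(w * np.exp(-d2 / (2 * s ** 2))
               for w, s in zip(weights, bandwidths))

def prune(weights, tol=1e-3):
    w = np.asarray(weights, dtype=float)
    w[w < tol] = 0.0                     # drop kernels with tiny saliency
    return w / w.sum()                   # renormalize the survivors
```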

  4. Multiple Kernel Point Set Registration.

    PubMed

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2016-06-01

    The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavily tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels: after parameter learning, the kernel saliencies of the irrelevant kernels go to zero. This makes the choice of kernels less crucial and makes it easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.

  5. Full Waveform Inversion Using Waveform Sensitivity Kernels

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    2013-04-01

    We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory; for unit material perturbations these are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995), in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned); some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be carried out first, without inverting any data. In the context of active seismic experiments, this property may be used to investigate the optimal acquisition geometry and the expected resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties compared with strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver

  6. Kernel Optimization in Discriminant Analysis

    PubMed Central

    You, Di; Hamsici, Onur C.; Martinez, Aleix M.

    2011-01-01

    Kernel mapping is one of the most used approaches to intrinsically derive nonlinear classifiers. The idea is to use a kernel function which maps the original nonlinearly separable problem to a space of intrinsically larger dimensionality where the classes are linearly separable. A major problem in the design of kernel methods is to find the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation where the Bayes classifier becomes linear. We illustrate how this result can be successfully applied in several kernel discriminant analysis algorithms. Experimental results using a large number of databases and classifiers demonstrate the utility of the proposed approach. The paper also shows (theoretically and experimentally) that a kernel version of Subclass Discriminant Analysis yields the highest recognition rates. PMID:20820072

  7. An accurate method for evaluating the kernel of the integral equation relating lift to downwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and the exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations attaining any desired trade-off between accuracy and computing cost.
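
    The fitting procedure can be sketched directly: for a trial exponent multiplier b, the coefficients of an exponential series with geometrically spaced decay rates form a linear least-squares problem, and b itself is chosen by scanning. The target function below merely stands in for the algebraic part of the kernel and is illustrative.

```python
# Hedged sketch: approximate f(u) by sum_n a_n * exp(-b * r**n * u), with
# geometric exponent spacing r**n, linear least squares for the a_n, and a
# scan over the exponent multiplier b.
import numpy as np

def fit_exp_series(u, f, n_terms=8, ratio=2.0,
                   multipliers=np.linspace(0.05, 1.0, 40)):
    rates0 = ratio ** np.arange(n_terms)        # geometric exponent spacing
    best = None
    for b in multipliers:                       # scan the exponent multiplier
        A = np.exp(-np.outer(u, b * rates0))    # one column per exponential term
        coef, *_ = np.linalg.lstsq(A, f, rcond=None)
        err = np.max(np.abs(A @ coef - f))
        if best is None or err < best[0]:
            best = (err, b, coef)
    return best                                  # (max abs error, b, coefficients)

u = np.linspace(0.0, 10.0, 400)
err, b, coef = fit_exp_series(u, 1.0 / np.sqrt(1.0 + u ** 2))
print(f"max abs error {err:.2e} with multiplier {b:.3f}")
```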

  8. Abundance estimation of solid and liquid mixtures in hyperspectral imagery with albedo-based and kernel-based methods

    NASA Astrophysics Data System (ADS)

    Rand, Robert S.; Resmini, Ronald G.; Allen, David W.

    2016-09-01

    This study investigates methods for characterizing materials that are mixtures of granular solids, or mixtures of liquids, which may be linear or non-linear. Linear mixtures of materials in a scene are often the result of areal mixing, where the pixel size of a sensor is relatively large so they contain patches of different materials within them. Non-linear mixtures are likely to occur with microscopic mixtures of solids, such as mixtures of powders, or mixtures of liquids, or wherever complex scattering of light occurs. This study considers two approaches for use as generalized methods for un-mixing pixels in a scene that may be linear or non-linear. One method is based on earlier studies that indicate non-linear mixtures in reflectance space are approximately linear in albedo space. This method converts reflectance to single-scattering albedo (SSA) according to Hapke theory assuming bidirectional scattering at nadir look angles and uses a constrained linear model on the computed albedo values. The other method is motivated by the same idea, but uses a kernel that seeks to capture the linear behavior of albedo in non-linear mixtures of materials. The behavior of the kernel method can be highly dependent on the value of a parameter, gamma, which provides flexibility for the kernel method to respond to both linear and non-linear phenomena. Our study pays particular attention to this parameter for responding to linear and non-linear mixtures. Laboratory experiments on both granular solids and liquid solutions are performed with scenes of hyperspectral data.

  9. A Meshfree Cell-based Smoothed Point Interpolation Method for Solid Mechanics Problems

    SciTech Connect

    Zhang Guiyong; Liu Guirong

    2010-05-21

    In the framework of a weakened weak (W²) formulation using a generalized gradient smoothing operation, this paper introduces a novel meshfree cell-based smoothed point interpolation method (CS-PIM) for solid mechanics problems. The W² formulation seeks solutions from a normed G space which includes both continuous and discontinuous functions and allows the use of many more types of methods to create shape functions for numerical methods. When PIM shape functions are used, the functions constructed are in general not continuous over the entire problem domain and hence are not compatible. Such an interpolation is not in a traditional H¹ space, but in a G¹ space. By introducing the generalized gradient smoothing operation properly, the requirement on the function is further weakened beyond the already weakened requirement for functions in an H¹ space, and the G¹ space can be viewed as a space of functions with a weakened weak (W²) requirement on continuity. The cell-based smoothed point interpolation method (CS-PIM) is formulated based on the W² formulation, in which the displacement field is approximated using the PIM shape functions, which possess the Kronecker delta property facilitating the enforcement of essential boundary conditions [3]. The gradient (strain) field is constructed by the generalized gradient smoothing operation within the cell-based smoothing domains, which are exactly the triangular background cells. A W² formulation of the generalized smoothed Galerkin (GS-Galerkin) weak form is used to derive the discretized system equations. It was found that the CS-PIM possesses the following attractive properties: (1) it is very easy to implement and works well with the simplest linear triangular mesh without introducing additional degrees of freedom; (2) it is at least linearly conforming; (3) this method is temporally stable and works well for dynamic analysis; (4) it possesses a close-to-exact stiffness, which is much

  10. Nondestructive In Situ Measurement Method for Kernel Moisture Content in Corn Ear.

    PubMed

    Zhang, Han-Lin; Ma, Qin; Fan, Li-Feng; Zhao, Peng-Fei; Wang, Jian-Xu; Zhang, Xiao-Dong; Zhu, De-Hai; Huang, Lan; Zhao, Dong-Jie; Wang, Zhong-Yi

    2016-12-20

    Moisture content is an important factor in corn breeding and cultivation. A corn breed with low moisture at harvest is beneficial for mechanical operations, reduces drying and storage costs after harvesting and, thus, reduces energy consumption. Nondestructive measurement of kernel moisture in an intact corn ear allows us to select corn varieties with seeds that have high dehydration speeds in the mature period. We designed a sensor using a ring electrode pair for nondestructive measurement of the kernel moisture in a corn ear based on a high-frequency detection circuit. Through experiments using the effective scope of the electrodes' electric field, we confirmed that the moisture in the corn cob has little effect on corn kernel moisture measurement. Before the sensor was applied in practice, we investigated temperature and conductivity effects on the output impedance. Results showed that the temperature was linearly related to the output impedance (both real and imaginary parts) of the measurement electrodes and the detection circuit's output voltage. However, the conductivity has a non-monotonic dependence on the output impedance (both real and imaginary parts) of the measurement electrodes and the output voltage of the high-frequency detection circuit. Therefore, we reduced the effect of conductivity on the measurement results through measurement frequency selection. Corn moisture measurement results showed a quadratic regression between corn ear moisture and the imaginary part of the output impedance, and also a quadratic regression between corn kernel moisture and the high-frequency detection circuit output voltage at 100 MHz. In this study, two corn breeds were measured using our sensor and gave R² values for the quadratic regression equation of 0.7853 and 0.8496.

  11. Nondestructive In Situ Measurement Method for Kernel Moisture Content in Corn Ear

    PubMed Central

    Zhang, Han-Lin; Ma, Qin; Fan, Li-Feng; Zhao, Peng-Fei; Wang, Jian-Xu; Zhang, Xiao-Dong; Zhu, De-Hai; Huang, Lan; Zhao, Dong-Jie; Wang, Zhong-Yi

    2016-01-01

    Moisture content is an important factor in corn breeding and cultivation. A corn breed with low moisture at harvest is beneficial for mechanical operations, reduces drying and storage costs after harvesting and, thus, reduces energy consumption. Nondestructive measurement of kernel moisture in an intact corn ear allows us to select corn varieties with seeds that have high dehydration speeds in the mature period. We designed a sensor using a ring electrode pair for nondestructive measurement of the kernel moisture in a corn ear based on a high-frequency detection circuit. Through experiments using the effective scope of the electrodes’ electric field, we confirmed that the moisture in the corn cob has little effect on corn kernel moisture measurement. Before the sensor was applied in practice, we investigated temperature and conductivity effects on the output impedance. Results showed that the temperature was linearly related to the output impedance (both real and imaginary parts) of the measurement electrodes and the detection circuit’s output voltage. However, the conductivity has a non-monotonic dependence on the output impedance (both real and imaginary parts) of the measurement electrodes and the output voltage of the high-frequency detection circuit. Therefore, we reduced the effect of conductivity on the measurement results through measurement frequency selection. Corn moisture measurement results showed a quadratic regression between corn ear moisture and the imaginary part of the output impedance, and also a quadratic regression between corn kernel moisture and the high-frequency detection circuit output voltage at 100 MHz. In this study, two corn breeds were measured using our sensor and gave R² values for the quadratic regression equation of 0.7853 and 0.8496. PMID:27999404

  12. A simple method for computing the relativistic Compton scattering kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Prasad, M. K.; Kershaw, D. S.; Beason, J. D.

    1986-01-01

    Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for σ(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.

  13. Bayesian Kernel Mixtures for Counts.

    PubMed

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
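
    The rounding construction is easy to state concretely: a latent continuous draw is mapped to a count by thresholding. A minimal Python sketch of the rounded Gaussian kernel, assuming the threshold convention a_0 = -∞ and a_j = j for j ≥ 1 (the paper's exact thresholds may differ):

        import numpy as np
        from scipy.stats import norm

        def rounded_gaussian_pmf(j, mu, sigma):
            """P(y = j) = P(a_j <= y* < a_(j+1)) for a latent y* ~ N(mu, sigma^2)."""
            upper = norm.cdf(j + 1, loc=mu, scale=sigma)
            lower = 0.0 if j == 0 else norm.cdf(j, loc=mu, scale=sigma)
            return upper - lower

        # A nonparametric mixture then averages such kernels over (mu_k, sigma_k) atoms:
        # P(y = j) = sum_k w_k * rounded_gaussian_pmf(j, mu_k, sigma_k)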

  14. A new method for evaluation of the resistance to rice kernel cracking based on moisture absorption in brown rice under controlled conditions

    PubMed Central

    Hayashi, Takeshi; Kobayashi, Asako; Tomita, Katsura; Shimizu, Toyohiro

    2015-01-01

    We developed and evaluated the effectiveness of a new method to detect differences among rice cultivars in their resistance to kernel cracking. The method induces kernel cracking under laboratory-controlled conditions through moisture absorption into brown rice. The optimal moisture absorption conditions were determined using two japonica cultivars, ‘Nipponbare’ as a cracking-resistant cultivar and ‘Yamahikari’ as a cracking-susceptible cultivar: 12% initial moisture content of the brown rice, a temperature of 25°C, a duration of 5 h, and only a single absorption treatment. We then evaluated the effectiveness of these conditions using 12 japonica cultivars. The proportion of cracked kernels was significantly correlated with the mean 10-day maximum temperature after heading. In addition, the correlation between the proportions of cracked kernels in the 2 years of the study was higher than that for values obtained using the traditional late-harvest method. The new moisture absorption method can stably evaluate the resistance to kernel cracking, and will help breeders to develop future cultivars with less cracking of the kernels. PMID:26719740

  15. On the accuracy of analytical methods for turbulent flows near smooth walls

    NASA Astrophysics Data System (ADS)

    Absi, Rafik; Di Nucci, Carmine

    2012-09-01

    This Note presents two methods for mean streamwise velocity profiles of fully-developed turbulent pipe and channel flows near smooth walls. The first is the classical approach where the mean streamwise velocity is obtained by solving the momentum equation with an eddy viscosity formulation [R. Absi, A simple eddy viscosity formulation for turbulent boundary layers near smooth walls, C. R. Mecanique 337 (2009) 158-165]. The second approach presents a formulation of the velocity profile based on an analogy with an electric field distribution [C. Di Nucci, E. Fiorucci, Mean velocity profiles of fully-developed turbulent flows near smooth walls, C. R. Mecanique 339 (2011) 388-395] and a formulation for the turbulent shear stress. However, this formulation for the turbulent shear stress shows a weakness. A corrected formulation is presented. Comparisons with DNS data show that the classical approach with the eddy viscosity formulation provides more accurate profiles for both turbulent shear stress and velocity gradient.

  16. Power Series Approximation for the Correlation Kernel Leading to Kohn-Sham Methods Combining Accuracy, Computational Efficiency, and General Applicability

    NASA Astrophysics Data System (ADS)

    Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas

    2016-09-01

    A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.

  17. A Novel Method for Modeling Neumann and Robin Boundary Conditions in Smoothed Particle Hydrodynamics

    SciTech Connect

    Ryan, Emily M.; Tartakovsky, Alexandre M.; Amon, Cristina

    2010-08-26

    In this paper we present an improved method for handling Neumann or Robin boundary conditions in smoothed particle hydrodynamics. The Neumann and Robin boundary conditions are common to many physical problems (such as heat/mass transfer), and can prove challenging to model in volumetric modeling techniques such as smoothed particle hydrodynamics (SPH). A new SPH method for diffusion type equations subject to Neumann or Robin boundary conditions is proposed. The new method is based on the continuum surface force model [1] and allows an efficient implementation of the Neumann and Robin boundary conditions in the SPH method for geometrically complex boundaries. The paper discusses the details of the method and the criteria needed to apply the model. The model is used to simulate diffusion and surface reactions and its accuracy is demonstrated through test cases for boundary conditions describing different surface reactions.

  18. A method for smoothing segmented lung boundary in chest CT images

    NASA Astrophysics Data System (ADS)

    Yim, Yeny; Hong, Helen

    2007-03-01

    To segment low-density lung regions in chest CT images, most methods use the difference in gray-level values of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using scan line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice. We propose a scan line search to track the points on the lung contour and find rapidly changing curvature efficiently. Finally, to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. Our method has been evaluated with respect to visual inspection, accuracy, and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.

  19. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  20. Melnikov Method for a Three-Zonal Planar Hybrid Piecewise-Smooth System and Application

    NASA Astrophysics Data System (ADS)

    Li, Shuangbao; Ma, Wensai; Zhang, Wei; Hao, Yuxin

    In this paper, we extend the well-known Melnikov method for smooth systems to a class of planar hybrid piecewise-smooth systems, defined in three domains separated by two switching manifolds x = a and x = b. The dynamics in each domain is governed by a smooth system. When an orbit reaches one of the separation lines, a reset map describing an impacting rule applies instantaneously before the orbit enters another domain. We assume that the unperturbed system has a continuum of periodic orbits transversally crossing the separation lines, and we study the persistence of these periodic orbits under an autonomous perturbation and the reset map. To achieve this objective, we first choose four appropriate switching sections and build a Poincaré map; we then present a displacement function and carry out its Taylor expansion to first order in the perturbation parameter ε near ε = 0. We call the first coefficient in the expansion the first-order Melnikov function, whose zeros determine the persistence of periodic orbits under perturbation. Finally, we study periodic orbits of a concrete planar hybrid piecewise-smooth system using the obtained Melnikov function.
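
    In the notation just described, the key expansion has the standard Melnikov form (sketched here, not quoted from the paper): the displacement function satisfies

        d(t_0, \varepsilon) = \varepsilon\, M(t_0) + \mathcal{O}(\varepsilon^2),

    so a simple zero of the first-order Melnikov function, M(t_0) = 0 with M'(t_0) ≠ 0, signals a periodic orbit persisting near t_0 for small ε ≠ 0.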

  1. Kernel principal component analysis residual diagnosis (KPCARD): An automated method for cosmic ray artifact removal in Raman spectra.

    PubMed

    Li, Boyan; Calvet, Amandine; Casamayou-Boucau, Yannick; Ryder, Alan G

    2016-03-24

    A new, fully automated, rapid method, referred to as kernel principal component analysis residual diagnosis (KPCARD), is proposed for removing cosmic ray artifacts (CRAs) in Raman spectra, and in particular in large Raman imaging datasets. KPCARD identifies CRAs via a statistical analysis of the residuals obtained at each wavenumber in the spectra. The method utilizes the stochastic nature of CRAs: the most significant components in principal component analysis (PCA) of large numbers of Raman spectra should not contain any CRAs. The process worked by first implementing kernel PCA (kPCA) on all the Raman mapping data and then accurately estimating the inter- and intra-spectrum noise to generate two threshold values. CRA identification was then achieved by using the threshold values to evaluate the residuals for each spectrum and assess whether a CRA was present. CRA correction was achieved by spectral replacement, where the nearest neighbor (NN) spectrum (the one most spectroscopically similar to the CRA-contaminated spectrum) and the principal components (PCs) obtained by kPCA were both used to generate a robust best fit to the CRA-contaminated spectrum. This best-fit spectrum then replaced the CRA-contaminated spectrum in the dataset. KPCARD efficacy was demonstrated using simulated data and real Raman spectra collected from solid-state materials. The results showed that KPCARD was fast (<1 min per 8400 spectra), accurate, precise, and suitable for the automated correction of very large (>1 million) Raman datasets.
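
    The detection step can be paraphrased in a few lines of Python. The sketch below substitutes plain linear PCA for the paper's kernel PCA and a single robust per-wavenumber threshold for its two-threshold scheme, so it only illustrates the residual-diagnosis idea:

        import numpy as np

        def detect_cra(spectra, n_components=10, k=5.0):
            """Flag cosmic-ray artifacts as large positive residuals of a
            low-rank PCA reconstruction; spectra: (n_spectra, n_wavenumbers)."""
            X = np.asarray(spectra, dtype=float)
            Xc = X - X.mean(axis=0)
            # CRAs are sparse and stochastic, so they stay out of the leading PCs.
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            basis = Vt[:n_components]
            resid = Xc - (Xc @ basis.T) @ basis
            # Robust per-wavenumber noise scale via the median absolute deviation.
            sigma = 1.4826 * np.median(np.abs(resid - np.median(resid, axis=0)), axis=0)
            return resid > k * np.maximum(sigma, 1e-12)   # boolean CRA mask

    Flagged spectra would then be replaced by the best-fit combination of their nearest neighbor and the retained components, as the abstract describes.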

  2. Analysis of the incomplete Galerkin method for modelling of smoothly-irregular transition between planar waveguides

    NASA Astrophysics Data System (ADS)

    Divakov, D.; Sevastianov, L.; Nikolaev, N.

    2017-01-01

    The paper deals with the numerical solution of the problem of waveguide propagation of polarized light in a smoothly-irregular transition between closed regular waveguides using the incomplete Galerkin method. This method consists in reducing the Helmholtz equation to a system of ordinary differential equations by a Kantorovich-type replacement of variables and in formulating boundary conditions for the resulting system. The boundary problem for the ODE system is formulated in the computer algebra system Maple and solved using Maple's libraries of numerical methods.

  3. A Nonlinear Framework of Delayed Particle Smoothing Method for Vehicle Localization under Non-Gaussian Environment

    PubMed Central

    Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong

    2016-01-01

    In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy, taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student’s t-distribution is adopted to compute the probability density function (PDF) of the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to cope with the mean and covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on real-world data collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS significantly improves vehicle state accuracy and outperforms existing filtering and smoothing methods. PMID:27187405

  4. The implementation of binned Kernel density estimation to determine open clusters' proper motions: validation of the method

    NASA Astrophysics Data System (ADS)

    Priyatikanto, R.; Arifyanto, M. I.

    2015-01-01

    Stellar membership determination of an open cluster is an important process to perform before further analysis. Basically, there are two classes of membership determination methods: parametric and non-parametric. In this study, an alternative non-parametric method based on binned kernel density estimation that accounts for measurement errors (called BKDE-e) is proposed. This method is applied to proper motion data to determine cluster membership kinematically and to estimate the average proper motion of the cluster. Monte Carlo simulations show that the average proper motion determined using the proposed method is statistically more accurate than with an ordinary kernel density estimator (KDE). By including measurement errors in the calculation, the mode location of the resulting density estimate is less sensitive to non-physical or stochastic fluctuations than ordinary KDE, which excludes measurement errors. For a typical mean measurement error of 7 mas/yr, BKDE-e suppresses the potential for miscalculation by a factor of two compared to KDE. With a median accuracy of about 93%, the BKDE-e method has accuracy comparable to the parametric method (modified Sanders algorithm). Application to real data from The Fourth USNO CCD Astrograph Catalog (UCAC4), in particular to NGC 2682, is also performed. The mode of the member star distribution on the vector point diagram is located at μ_α cos δ = -9.94 ± 0.85 mas/yr and μ_δ = -4.92 ± 0.88 mas/yr. Although the BKDE-e performance does not overtake the parametric approach, it offers a new way of doing membership analysis, extendable to astrometric and photometric data or even to binary cluster searches.
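
    The core idea, a kernel density estimate whose per-point bandwidth is inflated by that point's measurement error, can be sketched briefly in Python. This version omits the binning the paper uses for speed, and the quadrature-sum bandwidth rule h_i² = h² + e_i² is our assumption:

        import numpy as np

        def kde_with_errors(x, data, errors, h):
            """Gaussian KDE evaluated at points x, with each datum's bandwidth
            widened by its own measurement error e_i."""
            x = np.atleast_1d(x)[:, None]
            hi = np.sqrt(h**2 + np.asarray(errors)**2)[None, :]   # per-point widths
            z = (x - np.asarray(data)[None, :]) / hi
            return (np.exp(-0.5 * z**2) / (np.sqrt(2 * np.pi) * hi)).mean(axis=1)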

  5. A strategy to couple the material point method (MPM) and smoothed particle hydrodynamics (SPH) computational techniques

    NASA Astrophysics Data System (ADS)

    Raymond, Samuel J.; Jones, Bruce; Williams, John R.

    2016-12-01

    A strategy is introduced to allow coupling of the material point method (MPM) and smoothed particle hydrodynamics (SPH) for numerical simulations. This new strategy partitions the domain into SPH and MPM regions; particles carry all state variables, and as such no special treatment is required for the transition between regions. The aim of this work is to derive and validate the coupling methodology between MPM and SPH. Such coupling allows for general boundary conditions to be used in an SPH simulation without further augmentation. Additionally, since SPH is a purely particle method while MPM combines particles and a mesh, this coupling also permits a smooth transition from particle methods to mesh methods, where further coupling to mesh methods could in the future provide an effective far-field boundary treatment for the SPH method. The coupling technique is introduced and described alongside a number of simulations in 1D and 2D to validate and contextualize the potential of using these two methods in a single simulation. The strategy shown here is capable of fully coupling the two methods without any complicated algorithms to transform information from one method to another.

  6. Modeling particle-laden turbulent flows with two-way coupling using a high-order kernel density function method

    NASA Astrophysics Data System (ADS)

    Smith, Timothy; Lu, Xiaoyi; Ranjan, Reetesh; Pantano, Carlos

    2016-11-01

    We describe a two-way coupled turbulent dispersed flow computational model using a high-order kernel density function (KDF) method. The carrier-phase solution is obtained using a high-order spatial and temporal incompressible Navier-Stokes solver, while the KDF dispersed-phase solver uses the high-order Legendre WENO method. The computational approach is used to model carrier-phase turbulence modulation by the dispersed phase, and particle dispersion by turbulence, as a function of momentum coupling strength (particle loading) and the number of KDF basis functions. The use of several KDFs allows the model to capture statistical effects of particle trajectory crossing to a high degree. Details of the numerical implementation and the coupling between the incompressible flow and dispersed-phase solvers will be discussed, and results at a range of Reynolds numbers will be presented. This work was supported by the National Science Foundation under Grant DMS-1318161.

  7. Robust signal reconstruction for condition monitoring of industrial components via a modified Auto Associative Kernel Regression method

    NASA Astrophysics Data System (ADS)

    Baraldi, Piero; Di Maio, Francesco; Turati, Pietro; Zio, Enrico

    2015-08-01

    In this work, we propose a modification of the traditional Auto Associative Kernel Regression (AAKR) method which enhances the signal reconstruction robustness, i.e., the capability of reconstructing abnormal signals to the values expected in normal conditions. The modification is based on the definition of a new procedure for computing the similarity between the present measurements and the historical patterns used to perform the signal reconstruction. The underlying conjecture is that malfunctions causing variations of a small number of signals are more frequent than those causing variations of a large number of signals. The proposed method has been applied to real normal-condition data collected in an industrial plant for energy production. Its performance has been verified considering synthetic and real malfunctions. The obtained results show an improvement in the early detection of abnormal conditions and the correct identification of the signals responsible for triggering the detection.
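
    For orientation, classical AAKR reconstructs the expected normal-condition signal vector as a kernel-weighted average of historical patterns; the paper's modification concerns how the distances entering the weights are computed. A minimal sketch of the classical baseline, with plain Euclidean distances standing in for the modified similarity measure:

        import numpy as np

        def aakr_reconstruct(x_obs, X_hist, h):
            """x_obs: observed signal vector (d,); X_hist: historical normal
            patterns (N, d); h: kernel bandwidth."""
            d2 = np.sum((X_hist - x_obs)**2, axis=1)   # squared distances
            w = np.exp(-d2 / (2.0 * h**2))             # Gaussian kernel weights
            return (w / w.sum()) @ X_hist              # weighted-average pattern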

  8. SU-E-T-329: Dosimetric Impact of Implementing Metal Artifact Reduction Methods and Metal Energy Deposition Kernels for Photon Dose Calculations

    SciTech Connect

    Huang, J; Followill, D; Howell, R; Liu, X; Mirkovic, D; Stingo, F; Kry, S

    2015-06-15

    Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and metal artifact reduction methods: Philips O-MAR, GE’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus

  9. An Evaluation of the Kernel Equating Method: A Special Study with Pseudotests Constructed from Real Test Data. Research Report. ETS RR-06-02

    ERIC Educational Resources Information Center

    von Davier, Alina A.; Holland, Paul W.; Livingston, Samuel A.; Casabianca, Jodi; Grant, Mary C.; Martin, Kathleen

    2006-01-01

    This study examines how closely the kernel equating (KE) method (von Davier, Holland, & Thayer, 2004a) approximates the results of other observed-score equating methods--equipercentile and linear equatings. The study used pseudotests constructed of item responses from a real test to simulate three equating designs: an equivalent groups (EG)…

  10. NUMERICAL CONVERGENCE IN SMOOTHED PARTICLE HYDRODYNAMICS

    SciTech Connect

    Zhu, Qirong; Li, Yuexing; Hernquist, Lars

    2015-02-10

    We study the convergence properties of smoothed particle hydrodynamics (SPH) using numerical tests and simple analytic considerations. Our analysis shows that formal numerical convergence is possible in SPH only in the joint limit N → ∞, h → 0, and N_nb → ∞, where N is the total number of particles, h is the smoothing length, and N_nb is the number of neighbor particles within the smoothing volume used to compute smoothed estimates. Previous work has generally assumed that the conditions N → ∞ and h → 0 are sufficient to achieve convergence, while holding N_nb fixed. We demonstrate that if N_nb is held fixed as the resolution is increased, there will be a residual source of error that does not vanish as N → ∞ and h → 0. Formal numerical convergence in SPH is possible only if N_nb is increased systematically as the resolution is improved. Using analytic arguments, we derive an optimal compromise scaling for N_nb by requiring that this source of error balance that present in the smoothing procedure. For typical choices of the smoothing kernel, we find N_nb ∝ N^0.5. This means that if SPH is to be used as a numerically convergent method, the required computational cost does not scale with particle number as O(N), but rather as O(N^(1+δ)), where δ ≈ 0.5, with a weak dependence on the form of the smoothing kernel.

  11. An immersed boundary method for smoothed particle hydrodynamics of self-propelled swimmers

    NASA Astrophysics Data System (ADS)

    Hieber, S. E.; Koumoutsakos, P.

    2008-10-01

    We present a novel particle method, combining remeshed Smoothed Particle Hydrodynamics with Immersed Boundary and Level Set techniques for the simulation of flows past complex deforming geometries. The present method retains the Lagrangian adaptivity of particle methods and relies on the remeshing of particle locations in order to ensure the accuracy of the method. In fact, this remeshing step enables the introduction of immersed boundary techniques used in grid-based methods. The method is applied to simulations of flows of isothermal and compressible fluids past steady and unsteady solid boundaries that are described using a particle level-set formulation. The method is validated with two- and three-dimensional benchmark problems of flows past cylinders and spheres, and it is shown to be well suited to large-scale simulations using tens of millions of particles on flow-structure interaction problems as they pertain to self-propelled anguilliform swimmers.

  12. Incomplete iterations in multistep backward difference methods for parabolic problems with smooth and nonsmooth data

    SciTech Connect

    Bramble, J. H.; Pasciak, J. E.; Sammon, P. H.; Thomee, V.

    1989-04-01

    Backward difference methods for the discretization of parabolic boundary value problems are considered in this paper. In particular, we analyze the case when the backward difference equations are only solved 'approximately' by a preconditioned iteration. We provide an analysis which shows that these methods remain stable and accurate if a suitable number of iterations (often independent of the spatial discretization and time step size) are used. Results are provided for the smooth as well as nonsmooth initial data cases. Finally, the results of numerical experiments illustrating the algorithms' performance on model problems are given.

  13. Image reconstruction for 3D light microscopy with a regularized linear method incorporating a smoothness prior

    NASA Astrophysics Data System (ADS)

    Preza, Chrysanthe; Miller, Michael I.; Conchello, Jose-Angel

    1993-07-01

    We have shown that the linear least-squares (LLS) estimate of the intensities of a 3-D object obtained from a set of optical sections is unstable due to the inversion of small and zero-valued eigenvalues of the point-spread function (PSF) operator. The LLS solution was regularized by constraining it to lie in a subspace spanned by the eigenvectors corresponding to a selected number of the largest eigenvalues. In this paper we extend the regularized LLS solution to a maximum a posteriori (MAP) solution induced by a prior formed from a 'Good's like' smoothness penalty. This approach also yields a regularized linear estimator which reduces noise as well as edge artifacts in the reconstruction. The advantage of the linear MAP (LMAP) estimate over the current regularized LLS (RLLS) is its ability to regularize the inverse problem by smoothly penalizing components in the image associated with small eigenvalues. Computer simulations were performed using a theoretical PSF and a simple phantom to compare the two regularization techniques. It is shown that the reconstructions using the smoothness prior give superior variance and bias results compared to the RLLS reconstructions. Encouraging reconstructions obtained with the LMAP method from real microscope images of a 10 μm fluorescent bead and a four-cell Volvox embryo are shown.
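
    One standard way to write such a smoothness-regularized linear (MAP) estimator, in our notation rather than the paper's: with PSF operator H, data y, and a quadratic roughness penalty matrix L,

        \hat{\mathbf{x}}
          = \arg\min_{\mathbf{x}} \|\mathbf{y} - H\mathbf{x}\|^2
            + \lambda\, \mathbf{x}^{\mathsf{T}} L\, \mathbf{x}
          = \bigl(H^{\mathsf{T}} H + \lambda L\bigr)^{-1} H^{\mathsf{T}} \mathbf{y}.

    The small eigenvalues of H^T H that destabilize the LLS solution are damped by λL rather than truncated outright, which is the smooth penalization the abstract contrasts with subspace truncation.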

  14. A Fast Variational Method for the Construction of Resolution Adaptive C²-Smooth Molecular Surfaces.

    PubMed

    Bajaj, Chandrajit L; Xu, Guoliang; Zhang, Qin

    2009-05-01

    We present a variational approach to smooth molecular (protein, nucleic acid) surface constructions, starting from atomic coordinates as available from the protein and nucleic-acid data banks. Molecular dynamics (MD) simulations, traditionally used in understanding protein and nucleic-acid folding processes, are based on molecular force fields and require smooth models of these molecular surfaces. To accelerate MD simulations, a popular methodology is to employ coarse-grained molecular models, which represent clusters of atoms with similar physical properties by pseudo-atoms, resulting in coarser-resolution molecular surfaces. We consider generation of these mixed-resolution or adaptive molecular surfaces. Our approach starts by deriving a general second-order geometric partial differential equation in the level-set formulation, by minimizing a first-order energy functional which additionally includes a regularization term to minimize the occurrence of chemically infeasible molecular surface pockets or tunnel-like artifacts. To achieve even higher computational efficiency, a fast cubic B-spline C² interpolation algorithm is also utilized. A narrow-band, tri-cubic B-spline level-set method is then used to provide C²-smooth and resolution-adaptive molecular surfaces.

  15. Source Region Identification Using Kernel Smoothing

    EPA Science Inventory

    As described in this paper, Nonparametric Wind Regression is a source-to-receptor source apportionment model that can be used to identify and quantify the impact of possible source regions of pollutants as defined by wind direction sectors. It is described in detail with an exam...

  16. Kernel phase and kernel amplitude in Fizeau imaging

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin J. S.

    2016-12-01

    Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.
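
    The construction underlying kernel phases can be sketched directly: if a matrix A linearly maps pupil-plane phases to measured Fourier phases, the kernel operator K is chosen so that K A = 0, making K Φ insensitive to pupil phase errors to first order. A minimal Python sketch (the random A is a stand-in; a real A comes from a discretized pupil model):

        import numpy as np

        def kernel_operator(A, tol=1e-10):
            """Rows of K span the left null space of A, so K @ A = 0."""
            U, s, _ = np.linalg.svd(A)          # full SVD: U is (m, m)
            rank = int(np.sum(s > tol * s.max()))
            return U[:, rank:].T                # null-space directions

        rng = np.random.default_rng(0)
        A = rng.standard_normal((8, 5))         # toy phase-transfer matrix
        K = kernel_operator(A)
        assert np.allclose(K @ A, 0.0, atol=1e-8)

    The kernel amplitudes proposed in the paper play the analogous role for amplitude (throughput and scintillation) errors.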

  17. The multiscale restriction smoothed basis method for fractured porous media (F-MsRSB)

    NASA Astrophysics Data System (ADS)

    Shah, Swej; Møyner, Olav; Tene, Matei; Lie, Knut-Andreas; Hajibeygi, Hadi

    2016-08-01

    A novel multiscale method for multiphase flow in heterogeneous fractured porous media is devised. The discrete fine-scale system is described using an embedded fracture modeling approach, in which the heterogeneous rock (matrix) and highly-conductive fractures are represented on independent grids. Given this fine-scale discrete system, the method first partitions the fine-scale volumetric grid representing the matrix and the lower-dimensional grids representing fractures into independent coarse grids. Then, basis functions for matrix and fractures are constructed by restricted smoothing, which gives a flexible and robust treatment of complex geometrical features and heterogeneous coefficients. From the basis functions one constructs a prolongation operator that maps between the coarse- and fine-scale systems. The resulting method allows for general coupling of matrix and fracture basis functions, giving efficient treatment of a large variety of fracture conductivities. In addition, basis functions can be adaptively updated using efficient global smoothing strategies to account for multiphase flow effects. The method is conservative and because it is described and implemented in algebraic form, it is straightforward to employ it to both rectilinear and unstructured grids. Through a series of challenging test cases for single and multiphase flow, in which synthetic and realistic fracture maps are combined with heterogeneous petrophysical matrix properties, we validate the method and conclude that it is an efficient and accurate approach for simulating flow in complex, large-scale, fractured media.

  18. The CACAO Method for Smoothing, Gap Filling, and Characterizing Seasonal Anomalies in Satellite Time Series

    NASA Technical Reports Server (NTRS)

    Verger, Aleixandre; Baret, F.; Weiss, M.; Kandasamy, S.; Vermote, E.

    2013-01-01

    Consistent, continuous, and long time series of global biophysical variables derived from satellite data are required for global change research. A novel climatology fitting approach called CACAO (Consistent Adjustment of the Climatology to Actual Observations) is proposed to reduce noise and fill gaps in time series by scaling and shifting the seasonal climatological patterns to the actual observations. The shift and scale CACAO parameters adjusted for each season allow quantifying shifts in the timing of seasonal phenology and inter-annual variations in magnitude as compared to the average climatology. CACAO was first assessed over simulated daily Leaf Area Index (LAI) time series with varying fractions of missing data and noise. Then, performance was analyzed over actual satellite LAI products derived from the AVHRR Long-Term Data Record for the 1981-2000 period over the BELMANIP2 globally representative sample of sites. Comparison with two widely used temporal filtering methods, the asymmetric Gaussian (AG) model and the Savitzky-Golay (SG) filter as implemented in TIMESAT, revealed that CACAO achieved better performance for smoothing AVHRR time series characterized by high levels of noise and frequent missing observations. The resulting smoothed time series capture the vegetation dynamics well and show no gaps, in contrast to the 50-60% of data still missing after AG or SG reconstructions. Results of simulation experiments, as well as confrontation with actual AVHRR time series, indicate that the proposed CACAO method is more robust to noise and missing data than the AG and SG methods for phenology extraction.
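
    The fitting step can be illustrated compactly: for each season, find the time shift and magnitude scaling that best adjust the climatology to the valid observations. A simplified Python sketch (grid search over shifts with a closed-form least-squares scale; the real method additionally handles noise weighting, gap patterns, and seasonal segmentation):

        import numpy as np

        def cacao_fit(t, y, clim, shifts):
            """t: sample times; y: observations with NaNs for gaps;
            clim: climatology sampled at t; shifts: candidate time shifts."""
            valid = ~np.isnan(y)
            best = (np.inf, 0.0, 1.0)
            for dt in shifts:
                c = np.interp(t[valid] - dt, t, clim)    # shifted climatology
                scale = (c @ y[valid]) / (c @ c)         # least-squares scale
                sse = np.sum((y[valid] - scale * c)**2)
                if sse < best[0]:
                    best = (sse, dt, scale)
            return best[1], best[2]                      # (shift, scale)

    The fitted (shift, scale) pair per season is exactly the phenology-timing and magnitude-anomaly diagnostic the abstract describes, and the scaled, shifted climatology provides the gap-free reconstruction.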

  19. Adaptive smoothing of valleys in DEMs using TIN interpolation from ridgeline elevations: An application to morphotectonic aspect analysis

    NASA Astrophysics Data System (ADS)

    Jordan, Gyozo

    2007-05-01

    This paper presents a smoothing method that eliminates valleys of various Strahler-order drainage lines from a digital elevation model (DEM), thus enabling the recovery of local and regional trends in a terrain. A novel method for automated extraction of high-density channel network is developed to identify ridgelines defined as the watershed boundaries of channel segments. A DEM using TIN interpolation is calculated based on elevations of digitally extracted ridgelines. This removes first-order watersheds from the DEM. Higher levels of DEM smoothing can be achieved by the application of the method to ridgelines of higher-order channels. The advantage of the proposed smoothing method over traditional smoothing methods of moving kernel, trend and spectral methods is that it does not require pre-definition of smoothing parameters, such as kernel or trend parameters, and thus it follows topography in an adaptive way. Another advantage is that smoothing is controlled by the physical-hydrological properties of the terrain, as opposed to mathematical filters. Level of smoothing depends on ridgeline geometry and density, and the applied user-defined channel order. The method requires digital extraction of a high-density channel and ridgeline network. The advantage of the smoothing method over traditional methods is demonstrated through a case study of the Kali Basin test site in Hungary. The smoothing method is used in this study for aspect generalisation for morphotectonic investigations in a small watershed.

  20. Generalized hidden-mapping ridge regression, knowledge-leveraged inductive transfer learning for neural networks, fuzzy systems and kernel methods.

    PubMed

    Deng, Zhaohong; Choi, Kup-Sze; Jiang, Yizhang; Wang, Shitong

    2014-12-01

    Inductive transfer learning has attracted increasing attention for training an effective model in the target domain by leveraging information from the source domain. However, most transfer learning methods are developed for a specific model, such as the commonly used support vector machine, which makes the methods applicable only to the adopted models. In this regard, the generalized hidden-mapping ridge regression (GHRR) method is introduced in order to train various types of classical intelligence models, including neural networks, fuzzy logical systems, and kernel methods. Furthermore, the knowledge-leverage based transfer learning mechanism is integrated with GHRR to realize the inductive transfer learning method called transfer GHRR (TGHRR). Since the information from the induced knowledge is much clearer and more concise than that from the data in the source domain, it is more convenient to control and balance the similarity and difference of data distributions between the source and target domains. The proposed GHRR and TGHRR algorithms have been evaluated experimentally by performing regression and classification on synthetic and real-world datasets. The results demonstrate that the performance of TGHRR is competitive with or even superior to existing state-of-the-art inductive transfer learning algorithms.
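
    The generic construction behind hidden-mapping methods is a fixed nonlinear feature map followed by a ridge-regression solve for the output weights. A sketch under that reading, with a random-weight tanh layer standing in for the neural/fuzzy/kernel mappings that GHRR unifies (the paper's exact mapping and transfer terms are not reproduced):

        import numpy as np

        def hidden_mapping_ridge(X, y, n_hidden=50, lam=1e-2, seed=0):
            rng = np.random.default_rng(seed)
            W = rng.standard_normal((X.shape[1], n_hidden))   # fixed hidden mapping
            b = rng.standard_normal(n_hidden)
            Phi = np.tanh(X @ W + b)                          # hidden features
            # Closed-form ridge solution for the output weights:
            beta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_hidden), Phi.T @ y)
            return W, b, beta

        def hm_predict(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta

    The knowledge-leveraged variant would add a penalty tying beta to the parameters induced from the source domain, which is how TGHRR balances source and target information.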

  1. Method of adiabatic modes in studying problems of smoothly irregular open waveguide structures

    SciTech Connect

    Sevastianov, L. A.; Egorov, A. A.; Sevastyanov, A. L.

    2013-02-15

    Basic steps in developing an original method of adiabatic modes that makes it possible to solve the direct and inverse problems of simulating and designing three-dimensional multilayered smoothly irregular open waveguide structures are described. A new element in the method is that an approximate solution of Maxwell's equations is made to obey 'inclined' boundary conditions at the interfaces between the media being considered. These boundary conditions take into account the obliqueness of planes tangent to nonplanar boundaries between the media and lead to new equations for coupled vector quasiwaveguide hybrid adiabatic modes. Solutions of these equations describe the phenomenon of 'entanglement' of two linear polarizations of an irregular multilayered waveguide, the appearance of a new mode in an entangled state, and the effect of rotation of the polarization plane of quasiwaveguide modes. The efficiency of the method is demonstrated by considering the example of numerically simulating a thin-film generalized waveguide Lueneburg lens.

  2. A method for the accurate and smooth approximation of standard thermodynamic functions

    NASA Astrophysics Data System (ADS)

    Coufal, O.

    2013-01-01

    A method is proposed for the calculation of approximations of standard thermodynamic functions. The method is consistent with the physical properties of standard thermodynamic functions. This means that the approximation functions are, in contrast to the hitherto used approximations, continuous and smooth in every temperature interval in which no phase transformations take place. The calculation algorithm was implemented by the SmoothSTF program in the C++ language, which is part of this paper. Program summary: Program title: SmoothSTF. Catalogue identifier: AENH_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENH_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 3807. No. of bytes in distributed program, including test data, etc.: 131965. Distribution format: tar.gz. Programming language: C++. Computer: any computer with the gcc version 4.3.2 compiler. Operating system: Debian GNU/Linux 6.0; the program can be run in any operating system in which the gcc compiler can be installed, see http://gcc.gnu.org/install/specific.html. RAM: 256 MB is sufficient for a table of standard thermodynamic functions with 500 lines. Classification: 4.9. Nature of problem: standard thermodynamic functions (STF) of individual substances are given by thermal capacity at constant pressure, entropy and enthalpy. STF are continuous and smooth in every temperature interval in which no phase transformations take place. The temperature dependence of STF, as expressed by a table of values, is approximated for further applications by temperature functions. In the paper, a method is proposed for calculating approximation functions which, in contrast to the hitherto used approximations, are continuous and smooth in every temperature interval. Solution method: The approximation functions are

  3. Immersed smoothed finite element method for fluid-structure interaction simulation of aortic valves

    NASA Astrophysics Data System (ADS)

    Yao, Jianyao; Liu, G. R.; Narmoneva, Daria A.; Hinton, Robert B.; Zhang, Zhi-Qian

    2012-12-01

    This paper presents a novel numerical method for simulating fluid-structure interaction (FSI) problems when blood flows over aortic valves. The method uses the immersed boundary/element method and the smoothed finite element method, and hence it is termed IS-FEM. The IS-FEM is a partitioned approach and does not need a body-fitted mesh for FSI simulations. It consists of three main modules: the fluid solver, the solid solver and the FSI force solver. In this work, the blood is modeled as incompressible viscous flow and solved using the characteristic-based-split scheme with FEM for spatial discretization. The leaflets of the aortic valve are modeled as Mooney-Rivlin hyperelastic materials and solved using the smoothed finite element method (S-FEM). The FSI force is calculated on the Lagrangian fictitious fluid mesh that is identical to the moving solid mesh. The octree search and neighbor-to-neighbor schemes are used to detect efficiently the FSI pairs of fluid and solid cells. As an example, a 3D idealized model of an aortic valve is modeled, and the opening process of the valve is simulated using the proposed IS-FEM. Numerical results indicate that the IS-FEM can serve as an efficient tool in the study of aortic valve dynamics to reveal the details of stresses in the aortic valves, the flow velocities in the blood, and the shear forces on the interfaces. This tool can also be applied to animal models studying disease processes and may ultimately translate to new adaptive methods working with magnetic resonance images, leading to improvements in diagnostic and prognostic paradigms, as well as surgical planning, in the care of patients.

  4. Weighted Bergman kernels and virtual Bergman kernels

    NASA Astrophysics Data System (ADS)

    Roos, Guy

    2005-12-01

    We introduce the notion of "virtual Bergman kernel" and apply it to the computation of the Bergman kernel of "domains inflated by Hermitian balls", in particular when the base domain is a bounded symmetric domain.

  5. Numerical study of a multigrid method with four smoothing methods for the incompressible Navier-Stokes equations in general coordinates

    NASA Technical Reports Server (NTRS)

    Zeng, S.; Wesseling, P.

    1993-01-01

    The performance of a linear multigrid method using four smoothing methods, called SCGS (Symmetrical Coupled Gauss-Seidel), CLGS (Collective Line Gauss-Seidel), SILU (Scalar ILU), and CILU (Collective ILU), is investigated for the incompressible Navier-Stokes equations in general coordinates, in association with Galerkin coarse grid approximation. Robustness and efficiency are measured and compared by application to test problems. The numerical results show that CILU is the most robust, SILU the least, with CLGS and SCGS in between. CLGS is the best in efficiency, SCGS and CILU follow, and SILU is the worst.
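
    As context for the role such smoothers play, here is the prototypical smoothing step of a multigrid cycle, pointwise Gauss-Seidel for the 2D Poisson model problem, sketched in Python. The paper's SCGS/CLGS/SILU/CILU smoothers are coupled, line, and ILU variants for the Navier-Stokes system; this scalar version only illustrates the mechanism:

        import numpy as np

        def gauss_seidel_smooth(u, f, h, sweeps=2):
            """A few Gauss-Seidel sweeps for -Laplace(u) = f on a uniform grid;
            u holds boundary values on its border."""
            n, m = u.shape
            for _ in range(sweeps):
                for i in range(1, n - 1):
                    for j in range(1, m - 1):
                        u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j]
                                          + u[i, j-1] + u[i, j+1] + h * h * f[i, j])
            return u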

  6. Total phenolics, antioxidant activity, and functional properties of 'Tommy Atkins' mango peel and kernel as affected by drying methods.

    PubMed

    Sogi, Dalbir Singh; Siddiq, Muhammad; Greiby, Ibrahim; Dolan, Kirk D

    2013-12-01

    Mango processing produces a significant amount of waste (peels and kernels) that can be utilized for the production of value-added ingredients for various food applications. Mango peel and kernel were dried using different techniques, such as freeze drying, hot air, vacuum and infrared drying. Freeze-dried mango waste had higher antioxidant properties than waste dried by the other techniques. The ORAC values of peel and kernel varied from 418-776 and 1547-1819 μmol TE/g db, respectively. The solubility of freeze-dried peel and kernel powders was the highest. The water and oil absorption indices of mango waste powders ranged between 1.83-6.05 and 1.66-3.10, respectively. Freeze-dried powders had the lowest bulk density values among the different techniques tried. The cabinet-dried waste powders can potentially be used in food products to enhance their nutritional and antioxidant properties.

  7. Perturbation theory for anisotropic dielectric interfaces, and application to subpixel smoothing of discretized numerical methods.

    PubMed

    Kottke, Chris; Farjadpour, Ardavan; Johnson, Steven G

    2008-03-01

    We derive a correct first-order perturbation theory in electromagnetism for cases where an interface between two anisotropic dielectric materials is slightly shifted. Most previous perturbative methods give incorrect results for this case, even to lowest order, because of the complicated discontinuous boundary conditions on the electric field at such an interface. Our final expression is simply a surface integral, over the material interface, of the continuous field components from the unperturbed structure. The derivation is based on a "localized" coordinate-transformation technique, which avoids both the problem of field discontinuities and the challenge of constructing an explicit coordinate transformation by taking the limit in which the coordinate perturbation is infinitesimally localized around the boundary. Not only is our result potentially useful in evaluating boundary perturbations, e.g., from fabrication imperfections, in highly anisotropic media such as many metamaterials, but it also has a direct application in numerical electromagnetism. In particular, we show how it leads to a subpixel smoothing scheme to ameliorate staircasing effects in discretized simulations of anisotropic media, in such a way as to greatly reduce the numerical errors compared to other proposed smoothing schemes.

  8. Invariant measures of smooth dynamical systems, generalized functions and summation methods

    NASA Astrophysics Data System (ADS)

    Kozlov, V. V.

    2016-04-01

    We discuss conditions for the existence of invariant measures of smooth dynamical systems on compact manifolds. If there is an invariant measure with continuously differentiable density, then the divergence of the vector field along every solution tends to zero in the Cesàro sense as time increases unboundedly. Here the Cesàro convergence may be replaced, for example, by any Riesz summation method, which can be arbitrarily close to ordinary convergence (but does not coincide with it). We give an example of a system whose divergence tends to zero in the ordinary sense but none of its invariant measures is absolutely continuous with respect to the `standard' Lebesgue measure (generated by some Riemannian metric) on the phase space. We give examples of analytic systems of differential equations on analytic phase spaces admitting invariant measures of any prescribed smoothness (including a measure with integrable density), but having no invariant measures with positive continuous densities. We give a new proof of the classical Bogolyubov-Krylov theorem using generalized functions and the Hahn-Banach theorem. The properties of signed invariant measures are also discussed.

  9. Calculation of smooth potential energy surfaces using local electron correlation methods

    NASA Astrophysics Data System (ADS)

    Mata, Ricardo A.; Werner, Hans-Joachim

    2006-11-01

    The geometry dependence of excitation domains in local correlation methods can lead to noncontinuous potential energy surfaces. We propose a simple domain merging procedure which eliminates this problem in many situations. The method is applied to heterolytic bond dissociations of ketene and propadienone, to SN2 reactions of Cl- with alkylchlorides, and in a quantum mechanical/molecular mechanical study of the chorismate mutase enzyme. It is demonstrated that smooth potentials are obtained in all cases. Furthermore, basis set superposition error effects are reduced in local calculations, and it is found that this leads to better basis set convergence when computing barrier heights or weak interactions. When the electronic structure strongly changes between reactants or products and the transition state, the domain merging procedure leads to a balanced description of all structures and accurate barrier heights.

  10. Calculation of smooth potential energy surfaces using local electron correlation methods

    SciTech Connect

    Mata, Ricardo A.; Werner, Hans-Joachim

    2006-11-14

    The geometry dependence of excitation domains in local correlation methods can lead to noncontinuous potential energy surfaces. We propose a simple domain merging procedure which eliminates this problem in many situations. The method is applied to heterolytic bond dissociations of ketene and propadienone, to SN2 reactions of Cl- with alkylchlorides, and in a quantum mechanical/molecular mechanical study of the chorismate mutase enzyme. It is demonstrated that smooth potentials are obtained in all cases. Furthermore, basis set superposition error effects are reduced in local calculations, and it is found that this leads to better basis set convergence when computing barrier heights or weak interactions. When the electronic structure strongly changes between reactants or products and the transition state, the domain merging procedure leads to a balanced description of all structures and accurate barrier heights.

  11. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
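
    For reference, the conventional smoothing-error covariance that the paper questions is usually written (in Rodgers-style notation, assumed here) in terms of the averaging kernel matrix A and the a priori covariance S_a:

        \mathbf{S}_{\mathrm{s}} = (\mathbf{A} - \mathbf{I})\, \mathbf{S}_{\mathrm{a}}\, (\mathbf{A} - \mathbf{I})^{\mathsf{T}} .

    The paper's argument is that this expression characterizes the deviation from the gridded, i.e. already smoothed, reference state rather than from the true continuous atmosphere.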

  12. Application of Holt exponential smoothing and ARIMA method for data population in West Java

    NASA Astrophysics Data System (ADS)

    Supriatna, A.; Susanti, D.; Hertini, E.

    2017-01-01

    One time-series method often used to forecast data containing a trend is the Holt method, which applies separate smoothing parameters to the original data in order to smooth the trend component. In addition to Holt, the ARIMA method can be used on a wide variety of data, including data containing a trend pattern. The actual population data for 1998-2015 contain a trend, so the Holt and ARIMA methods can be applied to obtain predictions for several future periods. The best method is selected on the basis of the smallest MAPE and MAE errors. The Holt method predicts a population of 47,205,749 in 2016, 47,535,324 in 2017, and 48,041,672 in 2018, with a MAPE of 0.469744 and an MAE of 189,731. The ARIMA method predicts 46,964,682 in 2016, 47,342,189 in 2017, and 47,899,696 in 2018, with a MAPE of 0.4380 and an MAE of 176,626.
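
    For concreteness, Holt's linear-trend method consists of two coupled smoothing recursions followed by a straight-line forecast. A minimal Python sketch (parameter values are illustrative; the paper does not state its fitted alpha and beta):

        def holt_forecast(y, alpha, beta, horizon):
            """Holt's linear-trend exponential smoothing; alpha, beta in (0, 1)."""
            level, trend = y[0], y[1] - y[0]      # initialize from first two points
            for x in y[1:]:
                prev_level = level
                level = alpha * x + (1 - alpha) * (level + trend)
                trend = beta * (level - prev_level) + (1 - beta) * trend
            return [level + (h + 1) * trend for h in range(horizon)]

        # e.g. holt_forecast(annual_population, alpha=0.8, beta=0.2, horizon=3)
        # (annual_population is a hypothetical list of yearly totals) would
        # produce three-year-ahead predictions of the kind reported above.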

  13. Workshop on advances in smooth particle hydrodynamics

    SciTech Connect

    Wingate, C.A.; Miller, W.A.

    1993-12-31

    These proceedings contain viewgraphs presented at the 1993 workshop held at Los Alamos National Laboratory. Discussed topics include: negative stress, reactive flow calculations, interface problems, boundaries and interfaces, energy conservation in viscous flows, linked penetration calculations, stability and consistency of the SPH method, instabilities, wall heating and conservative smoothing, tensors, tidal disruption of stars, breaking the 10,000,000-particle limit, modelling relativistic collapse, SPH without H, relativistic KSPH avoidance of velocity-based kernels, tidal compression and disruption of stars near a supermassive rotating black hole, and finally relativistic SPH viscosity and energy.

  14. A DAFT DL_POLY distributed memory adaptation of the Smoothed Particle Mesh Ewald method

    NASA Astrophysics Data System (ADS)

    Bush, I. J.; Todorov, I. T.; Smith, W.

    2006-09-01

    The Smoothed Particle Mesh Ewald method [U. Essmann, L. Perera, M.L. Berkowtz, T. Darden, H. Lee, L.G. Pedersen, J. Chem. Phys. 103 (1995) 8577] for calculating long ranged forces in molecular simulation has been adapted for the parallel molecular dynamics code DL_POLY_3 [I.T. Todorov, W. Smith, Philos. Trans. Roy. Soc. London 362 (2004) 1835], making use of a novel 3D Fast Fourier Transform (DAFT) [I.J. Bush, The Daresbury Advanced Fourier transform, Daresbury Laboratory, 1999] that perfectly matches the Domain Decomposition (DD) parallelisation strategy [W. Smith, Comput. Phys. Comm. 62 (1991) 229; M.R.S. Pinches, D. Tildesley, W. Smith, Mol. Sim. 6 (1991) 51; D. Rapaport, Comput. Phys. Comm. 62 (1991) 217] of the DL_POLY_3 code. In this article we describe software adaptations undertaken to import this functionality and provide a review of its performance.

  15. A multiscale restriction-smoothed basis method for high contrast porous media represented on unstructured grids

    SciTech Connect

    Møyner, Olav Lie, Knut-Andreas

    2016-01-01

    A wide variety of multiscale methods have been proposed in the literature to reduce runtime and provide better scaling for the solution of Poisson-type equations modeling flow in porous media. We present a new multiscale restriction-smoothed basis (MsRSB) method that is designed to be applicable to both rectilinear grids and unstructured grids. Like many other multiscale methods, MsRSB relies on a coarse partition of the underlying fine grid and a set of local prolongation operators (multiscale basis functions) that map unknowns associated with the fine grid cells to unknowns associated with blocks in the coarse partition. These mappings are constructed by restricted smoothing: starting from a constant, a localized iterative scheme is applied directly to the fine-scale discretization to compute prolongation operators that are consistent with the local properties of the differential operators. The resulting method has three main advantages: First of all, both the coarse and the fine grid can have general polyhedral geometry and unstructured topology. This means that partitions and good prolongation operators can easily be constructed for complex models involving high media contrasts and unstructured cell connections introduced by faults, pinch-outs, erosion, local grid refinement, etc. In particular, the coarse partition can be adapted to geological or flow-field properties represented on cells or faces to improve accuracy. Secondly, the method is accurate and robust when compared to existing multiscale methods and does not need expensive recomputation of local basis functions to account for transient behavior: Dynamic mobility changes are incorporated by continuing to iterate a few extra steps on existing basis functions. This way, the cost of updating the prolongation operators becomes proportional to the amount of change in fluid mobility and one reduces the need for expensive, tolerance-based updates. Finally, since the MsRSB method is formulated on top of a cell
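
    The restricted-smoothing construction can be sketched in a few lines of Python: initialize the prolongation operator from the indicator functions of the coarse partition and relax it with damped Jacobi while keeping the basis functions a partition of unity. The sketch below omits the local support regions that give MsRSB its localization, so it is only a schematic of the smoothing idea:

        import numpy as np

        def msrsb_basis(A, partition, n_blocks, omega=2.0/3.0, iters=50):
            """A: fine-scale system matrix (n, n); partition[i]: coarse block
            of fine cell i. Returns prolongation P of shape (n, n_blocks)."""
            n = A.shape[0]
            P = np.zeros((n, n_blocks))
            P[np.arange(n), partition] = 1.0       # constant initial basis
            Dinv = 1.0 / np.diag(A)
            for _ in range(iters):
                P = P - omega * (Dinv[:, None] * (A @ P))   # damped Jacobi step
                P = np.clip(P, 0.0, None)
                P /= np.maximum(P.sum(axis=1, keepdims=True), 1e-12)  # partition of unity
            return P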

  16. Compactness vs. Smoothness: Methods for regularizing fault slip inversions with application to subduction zone earthquakes.

    NASA Astrophysics Data System (ADS)

    Lohman, R. B.; Simons, M.

    2004-12-01

    We examine inversions of geodetic data for fault slip and discuss how inferred results are affected by choices of regularization. The final goal of any slip inversion is to enhance our understanding of the dynamics governing fault zone processes through kinematic descriptions of fault zone behavior at various temporal and spatial scales. Important kinematic observations include ascertaining whether fault slip is correlated with topographic and gravitational anomalies, whether coseismic and postseismic slip occur on complementary or overlapping regions of the fault plane, and how aftershock distributions compare with areas of coseismic and postseismic slip. Fault slip inversions are generally poorly-determined inverse problems requiring some sort of regularization. Attempts to place inversion results in the context of understanding fault zone processes should be accompanied by careful treatment of how the applied regularization affects characteristics of the inferred slip model. Most regularization techniques involve defining a metric that quantifies the solution "simplicity". A frequently employed method defines a "simple" slip distribution as one that is spatially smooth, balancing the fit to the data vs. the spatial complexity of the slip distribution. One problem related to the use of smoothing constraints is the "smearing" of fault slip into poorly-resolved areas on the fault plane. In addition, even if the data is fit well by a point source, the fact that a point source is spatially "rough" will force the inversion to choose a smoother model with slip over a broader area. Therefore, when we interpret the area of inferred slip we must ask whether the slipping area is truly constrained by the data, or whether it could be fit equally well by a more spatially compact source with larger amplitudes of slip. We introduce an alternate regularization technique for fault slip inversions, where we seek an end member model that is the smallest region of fault slip that

  17. Effects of spatial smoothing on inter-subject correlation based analysis of FMRI.

    PubMed

    Pajula, Juha; Tohka, Jussi

    2014-11-01

    This study evaluates the effects of spatial smoothing on inter-subject correlation (ISC) analysis for FMRI data using the traditional model-based analysis as a reference. So far, the effects of smoothing within ISC analysis have not been studied systematically, and linear Gaussian filters with varying kernel widths have been used without detailed knowledge of the effects of filtering. With the traditional general linear model (GLM) based analysis, in contrast, the effects of smoothing have been studied extensively. In this study, ISC and GLM analyses were computed with two experimental datasets and one simulated block-design dataset. The test statistics and the detected activation areas were compared numerically with correlation and Dice similarity measures, respectively. The study verified that (1) the choice of the filter substantially affected the activations detected by ISC analysis, (2) the detected activations according to ISC and GLM methods were highly similar regardless of the smoothing kernel, and (3) the effect of spatial smoothing was mildly smaller on ISC than on GLM analysis. Our results indicated that a good choice of the full width at half maximum (FWHM) of the Gaussian smoothing kernel for ISC was slightly larger than double the original voxel size.
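
    As a rough illustration of the pipeline evaluated above, the Python sketch below smooths synthetic volumes with a Gaussian kernel specified by its FWHM (converted to a standard deviation via sigma = FWHM / (2 * sqrt(2 ln 2))) and computes a voxelwise ISC between two subjects. The data shapes, the isotropic voxel size, and the FWHM value are assumptions for illustration only.

        # Sketch: FWHM-specified Gaussian smoothing followed by voxelwise ISC.
        # All sizes and parameter values are illustrative, not from the study.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def smooth_volume(vol, fwhm_mm, voxel_mm):
            # FWHM-to-sigma conversion for a Gaussian kernel (isotropic voxels).
            sigma = (fwhm_mm / voxel_mm) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
            return gaussian_filter(vol, sigma=sigma)

        def isc(ts_a, ts_b):
            """Voxelwise Pearson correlation of two (time, x, y, z) series."""
            a = ts_a - ts_a.mean(axis=0)
            b = ts_b - ts_b.mean(axis=0)
            num = (a * b).sum(axis=0)
            den = np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))
            return num / np.maximum(den, 1e-12)

        rng = np.random.default_rng(0)
        sub1 = rng.standard_normal((50, 16, 16, 16))
        sub2 = sub1 + rng.standard_normal((50, 16, 16, 16))  # shared signal
        smooth = lambda s: np.stack(
            [smooth_volume(v, fwhm_mm=6.0, voxel_mm=3.0) for v in s])
        print(isc(smooth(sub1), smooth(sub2)).mean())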

  18. Ab initio-driven nuclear energy density functional method. A proposal for safe/correlated/improvable parametrizations of the off-diagonal EDF kernels

    NASA Astrophysics Data System (ADS)

    Duguet, T.; Bender, M.; Ebran, J.-P.; Lesinski, T.; Somà, V.

    2015-12-01

    This programmatic paper lays out the possibility of reconciling the necessity to resum many-body correlations into the energy kernel with the fact that safe multi-reference energy density functional (EDF) calculations cannot be achieved whenever the Pauli principle is not enforced, as is for example the case when many-body correlations are parametrized in the form of empirical density dependencies. Our proposal is to exploit a newly developed ab initio many-body formalism to guide the construction of safe, explicitly correlated and systematically improvable parametrizations of the off-diagonal energy and norm kernels that lie at the heart of the nuclear EDF method. The many-body formalism of interest relies on the concepts of symmetry breaking and restoration that underlie the success of the nuclear EDF method and is, as such, amenable to this guidance. After elaborating on our proposal, we briefly outline the project we plan to execute in the years to come.

  19. GPUs, a new tool of acceleration in CFD: efficiency and reliability on smoothed particle hydrodynamics methods.

    PubMed

    Crespo, Alejandro C; Dominguez, Jose M; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D

    2011-01-01

    Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability.

  20. GPUs, a New Tool of Acceleration in CFD: Efficiency and Reliability on Smoothed Particle Hydrodynamics Methods

    PubMed Central

    Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.

    2011-01-01

    Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185

  1. Large-scale prediction of disulphide bridges using kernel methods, two-dimensional recursive neural networks, and weighted graph matching.

    PubMed

    Cheng, Jianlin; Saigo, Hiroto; Baldi, Pierre

    2006-03-15

    The formation of disulphide bridges between cysteines plays an important role in protein folding, structure, function, and evolution. Here, we develop new methods for predicting disulphide bridges in proteins. We first build a large curated data set of proteins containing disulphide bridges to extract relevant statistics. We then use kernel methods to predict whether a given protein chain contains intrachain disulphide bridges or not, and recursive neural networks to predict the bonding probabilities of each pair of cysteines in the chain. These probabilities in turn lead to an accurate estimation of the total number of disulphide bridges and to a weighted graph matching problem that can be addressed efficiently to infer the global disulphide bridge connectivity pattern. This approach can be applied both in situations where the bonded state of each cysteine is known, or in ab initio mode where the state is unknown. Furthermore, it can easily cope with chains containing an arbitrary number of disulphide bridges, overcoming one of the major limitations of previous approaches. It can classify individual cysteine residues as bonded or nonbonded with 87% specificity and 89% sensitivity. The estimate for the total number of bridges in each chain is correct 71% of the times, and within one from the true value over 94% of the times. The prediction of the overall disulphide connectivity pattern is exact in about 51% of the chains. In addition to using profiles in the input to leverage evolutionary information, including true (but not predicted) secondary structure and solvent accessibility information yields small but noticeable improvements. Finally, once the system is trained, predictions can be computed rapidly on a proteomic or protein-engineering scale. The disulphide bridge prediction server (DIpro), software, and datasets are available through www.igb.uci.edu/servers/psss.html.
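
    The final step described above, turning pairwise bonding probabilities into a global connectivity pattern, reduces to maximum-weight matching on a graph whose nodes are cysteines. The Python sketch below illustrates this with a made-up 4-cysteine probability matrix; it shows the matching step only, not the DIpro kernel or neural network components.

        # Sketch: disulphide connectivity as maximum-weight graph matching.
        # The probability matrix is invented for illustration.
        import networkx as nx
        import numpy as np

        prob = np.array([
            [0.0, 0.9, 0.1, 0.2],
            [0.9, 0.0, 0.2, 0.1],
            [0.1, 0.2, 0.0, 0.8],
            [0.2, 0.1, 0.8, 0.0],
        ])  # pairwise bonding probabilities for 4 cysteines

        G = nx.Graph()
        for i in range(prob.shape[0]):
            for j in range(i + 1, prob.shape[1]):
                G.add_edge(i, j, weight=prob[i, j])

        # Each cysteine joins at most one bridge, so the global pattern is a
        # maximum-weight matching over the weighted graph.
        bridges = nx.max_weight_matching(G, maxcardinality=True)
        print(sorted(tuple(sorted(e)) for e in bridges))  # [(0, 1), (2, 3)]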

  2. Analysis and Implementation of Particle-to-Particle (P2P) Graphics Processor Unit (GPU) Kernel for Black-Box Adaptive Fast Multipole Method

    DTIC Science & Technology

    2015-06-01

    ARL-TR-7315, June 2015, US Army Research Laboratory. Analysis and Implementation of Particle-to-Particle (P2P) Graphics Processor Unit (GPU) Kernel for Black-Box Adaptive Fast Multipole Method, by Richard H Haney and Dale Shires.

  3. Enrollment Forecasting with Double Exponential Smoothing: Two Methods for Objective Weight Factor Selection. AIR Forum 1980 Paper.

    ERIC Educational Resources Information Center

    Gardner, Don E.

    The merits of double exponential smoothing are discussed relative to other types of pattern-based enrollment forecasting methods. The difficulties associated with selecting an appropriate weight factor are discussed, and their potential effects on prediction results are illustrated. Two methods for objectively selecting the "best" weight…

  4. Biological Rhythms Modelisation of Vigilance and Sleep in Microgravity State with COSINOR and Volterra's Kernels Methods

    NASA Astrophysics Data System (ADS)

    Gaudeua de Gerlicz, C.; Golding, J. G.; Bobola, Ph.; Moutarde, C.; Naji, S.

    2008-06-01

    Spaceflight under microgravity causes biological and physiological imbalances in human beings. Many studies have already been published on this topic, especially on sleep disturbances and on circadian rhythms (vigilance-sleep alternation, body temperature...). Factors like space motion sickness, noise, or excitement can cause severe sleep disturbances. For stays of longer than four months in space, gradual increases in the planned duration of sleep were reported. [1] The average sleep in orbit was more than 1.5 hours shorter than during control periods on Earth, where sleep averaged 7.9 hours. [2] Alertness and calmness registered a clear circadian pattern of 24 h, but with a phase delay of 4 h. Calmness showed a biphasic component (12 h); mean sleep duration was 6.4 h, structured by 3-5 non-REM/REM cycles. Models of the neurophysiological mechanisms of stress, and of the interactions between various physiological and psychological rhythm variables, have already been developed with the COSINOR method. [3]

  5. Do we really need a large number of particles to simulate bimolecular reactive transport with random walk methods? A kernel density estimation approach

    NASA Astrophysics Data System (ADS)

    Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-12-01

    Random walk particle tracking methods are a computationally efficient family of methods to solve reactive transport problems. While the number of particles in most realistic applications is on the order of 10^6-10^9, the number of reactive molecules even in diluted systems might be on the order of fractions of the Avogadro number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in loss of accuracy, but may also lead to an improper reproduction of the mixing process, limited by diffusion. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that allows getting the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the location of molecules represented by that given particle, rigorously represented by a kernel density function. The probability of reaction can be obtained by combining the kernels associated with two potentially reactive particles. We demonstrate that the observed deviation in the reaction vs time curves in numerical experiments reported in the literature could be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process. Our results show that KDEs are powerful tools to improve computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in diluted systems should be modeled based on alternative mechanistic models and not on a
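
    The core idea, treating each particle as a kernel rather than a fixed-support bin when reconstructing concentrations, can be sketched in a few lines of Python. The 1D plume, particle count, and Silverman bandwidth below are illustrative choices, not those of the study.

        # Sketch: concentration reconstruction from particles, histogram vs KDE.
        # The plume and bandwidth rule are illustrative.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(1)
        particles = rng.normal(loc=0.0, scale=1.0, size=2000)  # 1D plume

        x = np.linspace(-4, 4, 200)
        hist, edges = np.histogram(particles, bins=40, range=(-4, 4),
                                   density=True)               # fixed support
        kde = gaussian_kde(particles, bw_method="silverman")   # kernel support
        c_kde = kde(x)                                         # smooth estimate
        print(float(c_kde.max()), float(hist.max()))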

  6. A novel method for modeling of complex wall geometries in smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Eitzlmayr, Andreas; Koscher, Gerold; Khinast, Johannes

    2014-10-01

    Smoothed particle hydrodynamics (SPH) has become increasingly important during recent decades. Its meshless nature, inherent representation of convective transport and ability to simulate free surface flows make SPH particularly promising with regard to simulations of industrial mixing devices for high-viscous fluids, which often have complex rotating geometries and partially filled regions (e.g., twin-screw extruders). However, incorporating the required geometries remains a challenge in SPH since the most obvious and most common ways to model solid walls are based on particles (i.e., boundary particles and ghost particles), which leads to complications with arbitrarily-curved wall surfaces. To overcome this problem, we developed a systematic method for determining an adequate interaction between SPH particles and a continuous wall surface based on the underlying SPH equations. We tested our new approach by using the open-source particle simulator "LIGGGHTS" and comparing the velocity profiles to analytical solutions and SPH simulations with boundary particles. Finally, we followed the evolution of a tracer in a twin-cam mixer during rotation, which was studied experimentally and numerically by several other authors, and found good agreement with our results. This supports the validity of our newly-developed wall interaction method, which constitutes a step forward in SPH simulations of complex geometries.

  7. Estimation of mass ratio of the total kernels within a sample of in-shell peanuts using RF Impedance Method

    Technology Transfer Automated Retrieval System (TEKTRAN)

    It would be useful to know the total kernel mass within a given mass of peanuts (mass ratio) while the peanuts are bought or being processed. In this work, the possibility of finding this mass ratio while the peanuts were in their shells was investigated. Capacitance, phase angle and dissipation fa...

  8. New Equating Methods and Their Relationships with Levine Observed Score Linear Equating under the Kernel Equating Framework

    ERIC Educational Resources Information Center

    Chen, Haiwen; Holland, Paul

    2010-01-01

    In this paper, we develop a new curvilinear equating for the nonequivalent groups with anchor test (NEAT) design under the assumption of the classical test theory model, that we name curvilinear Levine observed score equating. In fact, by applying both the kernel equating framework and the mean preserving linear transformation of…

  9. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised, and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gains compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database, and the Banana dataset validate the effectiveness of the proposed method.
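
    One property such layered combinations must preserve is positive semi-definiteness. The Python sketch below builds a two-layer kernel from fixed weights (the paper learns them), relying on the facts that nonnegative combinations of PSD kernels are PSD and that the elementwise exponential of a PSD matrix is PSD (Schur product theorem). The kernels, weights, and scaling are illustrative.

        # Sketch: a two-layer "deep kernel" that stays positive semi-definite.
        # Weights are fixed here; the method described above learns them.
        import numpy as np

        def rbf(X, gamma=0.5):
            d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def poly(X, degree=2, c=1.0):
            return (X @ X.T + c) ** degree

        X = np.random.default_rng(8).standard_normal((20, 4))
        K1 = 0.7 * rbf(X) + 0.3 * poly(X)      # layer 1: convex combination
        K_deep = np.exp(K1 / K1.max())         # layer 2: elementwise nonlinearity
        print(np.linalg.eigvalsh(K_deep).min() >= -1e-8)  # PSD check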

  10. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  11. Development of a Smooth Trajectory Maneuver Method to Accommodate the Ares I Flight Control Constraints

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Schmitt, Terri L.; Hanson, John M.

    2008-01-01

    Six degree-of-freedom (DOF) launch vehicle trajectories are designed to follow an optimized 3-DOF reference trajectory. A vehicle has a finite amount of control power that it can allocate to performing maneuvers. Therefore, the 3-DOF trajectory must be designed to refrain from using 100% of the allowable control capability to perform maneuvers, saving control power for handling off-nominal conditions, wind gusts, and other perturbations. During the Ares I trajectory analysis, two maneuvers were found to be hard for the control system to implement: a roll maneuver prior to the gravity turn and an angle-of-attack maneuver immediately after the J-2X engine start-up. It was decided to develop an approach for creating smooth maneuvers in the optimized reference trajectories that accounts for the thrust available from the engines. A feature of this method is that no additional angular velocity in the direction of the maneuver has been added to the vehicle after the maneuver completion. This paper discusses the equations behind these new maneuvers and their implementation into the Ares I trajectory design cycle. Also discussed is a possible extension to adjusting closed-loop guidance.

  12. Using two soft computing methods to predict wall and bed shear stress in smooth rectangular channels

    NASA Astrophysics Data System (ADS)

    Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein

    2017-03-01

    Two soft computing methods were extended in order to predict the mean wall and bed shear stress in open channels. Genetic programming (GP) and a Genetic Algorithm Artificial Neural Network (GAA) were investigated to determine the accuracy of these models in estimating wall and bed shear stress. The GP and GAA model results were compared on a testing dataset in order to find the best model. In modeling both bed and wall shear stress, the GP model performed better, with RMSE values of 0.0264 and 0.0185, respectively. Both proposed models were then compared with equations for rectangular open channels, trapezoidal channels, and ducts. According to the results, the proposed models performed the best in predicting wall and bed shear stress in smooth rectangular channels. The obtained equation for rectangular channels could estimate values close to the experimental data, but the equations for ducts gave poor, inaccurate results in predicting wall and bed shear stress. The equation presented for trapezoidal channels did not have acceptable accuracy in predicting wall and bed shear stress either.

  13. Improving the Bandwidth Selection in Kernel Equating

    ERIC Educational Resources Information Center

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…

  14. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other areas. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256

  15. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other areas. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
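
    Of the two approximations named above, random Fourier features is the easier to sketch: a random cosine feature map whose inner products approximate a Gaussian kernel, so a linear ranker on the features stands in for the kernelized one. The dimensions and bandwidth below are illustrative.

        # Sketch: random Fourier features approximating a Gaussian kernel,
        # k(x, y) = exp(-gamma * ||x - y||^2). Sizes are illustrative.
        import numpy as np

        def rff_map(X, n_features=500, gamma=0.5, seed=0):
            rng = np.random.default_rng(seed)
            W = rng.normal(scale=np.sqrt(2.0 * gamma),
                           size=(X.shape[1], n_features))
            b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
            return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

        X = np.random.default_rng(2).standard_normal((5, 3))
        Z = rff_map(X)
        approx = Z @ Z.T
        exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
        print(np.abs(approx - exact).max())   # shrinks as n_features grows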

  16. Optimizing seeding and culture methods to engineer smooth muscle tissue on biodegradable polymer matrices.

    PubMed

    Kim, B S; Putnam, A J; Kulik, T J; Mooney, D J

    1998-01-05

    The engineering of functional smooth muscle (SM) tissue is critical if one hopes to successfully replace the large number of tissues containing an SM component with engineered equivalents. This study reports on the effects of SM cell (SMC) seeding and culture conditions on the cellularity and composition of SM tissues engineered using biodegradable matrices (5 × 5 mm, 2 mm thick) of polyglycolic acid (PGA) fibers. Cells were seeded by injecting a cell suspension into polymer matrices in tissue culture dishes (static seeding), by stirring polymer matrices and a cell suspension in spinner flasks (stirred seeding), or by agitating polymer matrices and a cell suspension in tubes with an orbital shaker (agitated seeding). The density of SMCs adherent to these matrices was a function of cell concentration in the seeding solution, but under all conditions a larger number (approximately 1 order of magnitude) and more uniform distribution of SMCs adherent to the matrices were obtained with dynamic versus static seeding methods. The dynamic seeding methods, as compared to the static method, also ultimately resulted in new tissues that had a higher cellularity, more uniform cell distribution, and greater elastin deposition. The effects of culture conditions were next studied by culturing cell-polymer constructs in a stirred bioreactor versus static culture conditions. The stirred culture of SMC-seeded polymer matrices resulted in tissues with a cell density of 6.4 ± 0.8 × 10^8 cells/cm^3 after 5 weeks, compared to 2.0 ± 1.1 × 10^8 cells/cm^3 with static culture. The elastin and collagen synthesis rates and deposition within the engineered tissues were also increased by culture in the bioreactors. The elastin content after 5-week culture in the stirred bioreactor was 24 ± 3%, and both the elastin content and the cellularity of these tissues are comparable to those of native SM tissue. New tissues were also created in vivo when dynamically seeded polymer matrices were

  17. Semisupervised kernel matrix learning by kernel propagation.

    PubMed

    Hu, Enliang; Chen, Songcan; Zhang, Daoqiang; Yin, Xuesong

    2010-11-01

    The goal of semisupervised kernel matrix learning (SS-KML) is to learn a kernel matrix on all the given samples on which just a little supervised information, such as class labels or pairwise constraints, is provided. Despite extensive research, the performance of SS-KML still leaves some space for improvement in terms of effectiveness and efficiency. For example, a recent pairwise constraints propagation (PCP) algorithm has formulated SS-KML into a semidefinite programming (SDP) problem, but its computation is very expensive, which undoubtedly restricts PCP's scalability in practice. In this paper, a novel algorithm, called kernel propagation (KP), is proposed to improve the comprehensive performance in SS-KML. The main idea of KP is first to learn a small-sized sub-kernel matrix (named the seed-kernel matrix) and then propagate it into a larger-sized full-kernel matrix. Specifically, the implementation of KP consists of three stages: 1) separate the supervised sample (sub)set X_l from the full sample set X; 2) learn a seed-kernel matrix on X_l through solving a small-scale SDP problem; and 3) propagate the learnt seed-kernel matrix into a full-kernel matrix on X. Furthermore, following the idea in KP, we naturally develop two conveniently realizable out-of-sample extensions for KML: one is a batch-style extension, and the other is an online-style extension. The experiments demonstrate that KP is encouraging in both effectiveness and efficiency compared with three state-of-the-art algorithms, and its related out-of-sample extensions are promising too.

  18. The context-tree kernel for strings.

    PubMed

    Cuturi, Marco; Vert, Jean-Philippe

    2005-10-01

    We propose a new kernel for strings which borrows ideas and techniques from information theory and data compression. This kernel can be used in combination with any kernel method, in particular Support Vector Machines for string classification, with notable applications in proteomics. By using a Bayesian averaging framework with conjugate priors on a class of Markovian models known as probabilistic suffix trees or context-trees, we compute the value of this kernel in linear time and space while only using the information contained in the spectrum of the considered strings. This is ensured through an adaptation of a compression method known as the context-tree weighting algorithm. Encouraging classification results are reported on a standard protein homology detection experiment, showing that the context-tree kernel performs well with respect to other state-of-the-art methods while using no biological prior knowledge.

  19. Coronary Stent Artifact Reduction with an Edge-Enhancing Reconstruction Kernel – A Prospective Cross-Sectional Study with 256-Slice CT

    PubMed Central

    Tan, Stéphanie; Soulez, Gilles; Diez Martinez, Patricia; Larrivée, Sandra; Stevens, Louis-Mathieu; Goussard, Yves; Mansour, Samer; Chartrand-Lefebvre, Carl

    2016-01-01

    Purpose Metallic artifacts can result in an artificial thickening of the coronary stent wall which can significantly impair computed tomography (CT) imaging in patients with coronary stents. The objective of this study is to assess in vivo visualization of coronary stent wall and lumen with an edge-enhancing CT reconstruction kernel, as compared to a standard kernel. Methods This is a prospective cross-sectional study involving the assessment of 71 coronary stents (24 patients), with blinded observers. After 256-slice CT angiography, image reconstruction was done with medium-smooth and edge-enhancing kernels. Stent wall thickness was measured with both orthogonal and circumference methods, averaging thickness from diameter and circumference measurements, respectively. Image quality was assessed quantitatively using objective parameters (noise, signal to noise (SNR) and contrast to noise (CNR) ratios), as well as visually using a 5-point Likert scale. Results Stent wall thickness was decreased with the edge-enhancing kernel in comparison to the standard kernel, either with the orthogonal (0.97 ± 0.02 versus 1.09 ± 0.03 mm, respectively; p<0.001) or the circumference method (1.13 ± 0.02 versus 1.21 ± 0.02 mm, respectively; p = 0.001). The edge-enhancing kernel generated less overestimation from nominal thickness compared to the standard kernel, both with the orthogonal (0.89 ± 0.19 versus 1.00 ± 0.26 mm, respectively; p<0.001) and the circumference (1.06 ± 0.26 versus 1.13 ± 0.31 mm, respectively; p = 0.005) methods. The edge-enhancing kernel was associated with lower SNR and CNR, as well as higher background noise (all p < 0.001), in comparison to the medium-smooth kernel. Stent visual scores were higher with the edge-enhancing kernel (p<0.001). Conclusion In vivo 256-slice CT assessment of coronary stents shows that the edge-enhancing CT reconstruction kernel generates thinner stent walls, less overestimation from nominal thickness, and better image quality

  20. High-order Eulerian incompressible smoothed particle hydrodynamics with transition to Lagrangian free-surface motion

    NASA Astrophysics Data System (ADS)

    Lind, S. J.; Stansby, P. K.

    2016-12-01

    The incompressible Smoothed Particle Hydrodynamics (ISPH) method is derived in Eulerian form with high-order smoothing kernels to provide increased accuracy for a range of steady and transient internal flows. Periodic transient flows, in particular, demonstrate high-order convergence and accuracies approaching, for example, spectral mesh-based methods. The improved accuracies are achieved through new high-order Gaussian kernels applied over regular particle distributions with time stepping formally up to 2nd order for transient flows. The Eulerian approach can be easily extended to model free surface flows by merging from Eulerian to Lagrangian regions in an Arbitrary-Lagrangian-Eulerian (ALE) fashion, and a demonstration with periodic wave propagation is presented. In the long term, it is envisaged that the method will greatly increase the accuracy and efficiency of SPH methods, while retaining the flexibility of SPH in modelling free surface and multiphase flows.

  1. Kernel regression for fMRI pattern prediction

    PubMed Central

    Chu, Carlton; Ni, Yizhao; Tan, Geoffrey; Saunders, Craig J.; Ashburner, John

    2011-01-01

    This paper introduces two kernel-based regression schemes to decode or predict brain states from functional brain scans as part of the Pittsburgh Brain Activity Interpretation Competition (PBAIC) 2007, in which our team was awarded first place. Our procedure involved image realignment, spatial smoothing, detrending of low-frequency drifts, and application of multivariate linear and non-linear kernel regression methods: namely kernel ridge regression (KRR) and relevance vector regression (RVR). RVR is based on a Bayesian framework, which automatically determines a sparse solution through maximization of marginal likelihood. KRR is the dual-form formulation of ridge regression, which solves regression problems with high dimensional data in a computationally efficient way. Feature selection based on prior knowledge about human brain function was also used. Post-processing by constrained deconvolution and re-convolution was used to furnish the prediction. This paper also contains a detailed description of how prior knowledge was used to fine tune predictions of specific “feature ratings,” which we believe is one of the key factors in our prediction accuracy. The impact of pre-processing was also evaluated, demonstrating that different pre-processing may lead to significantly different accuracies. Although the original work was aimed at the PBAIC, many techniques described in this paper can be generally applied to any fMRI decoding works to increase the prediction accuracy. PMID:20348000
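
    KRR itself is compact enough to sketch: the dual coefficients solve (K + lambda I) alpha = y, and predictions are kernel evaluations against the training set. The Python snippet below fits synthetic 1D data; it illustrates the estimator only, not the competition pipeline or its pre-processing.

        # Sketch: kernel ridge regression with an RBF kernel on synthetic data.
        import numpy as np

        def rbf_kernel(A, B, gamma=1.0):
            return np.exp(-gamma * ((A[:, None] - B[None]) ** 2).sum(-1))

        rng = np.random.default_rng(3)
        X = rng.uniform(-3, 3, size=(100, 1))
        y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)

        lam = 1e-2
        K = rbf_kernel(X, X)
        alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # dual coefficients

        X_test = np.linspace(-3, 3, 50)[:, None]
        y_pred = rbf_kernel(X_test, X) @ alpha
        print(np.abs(y_pred - np.sin(X_test[:, 0])).mean())   # small error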

  2. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

  3. Examination of tear film smoothness on corneae after refractive surgeries using a noninvasive interferometric method

    NASA Astrophysics Data System (ADS)

    Szczesna, Dorota H.; Kulas, Zbigniew; Kasprzak, Henryk T.; Stenevi, Ulf

    2009-11-01

    A lateral shearing interferometer was used to examine the smoothness of the tear film. The information about the distribution and stability of the precorneal tear film is carried by the wavefront reflected from the surface of tears and coded in interference fringes. Smooth and regular fringes indicate a smooth tear film surface. On corneae after laser in situ keratomileusis (LASIK) or radial keratotomy (RK) surgery, the interference fringes are seldom regular. The fringes are bent on bright lines, which are interpreted as tear film breakups. The high-intensity pattern seems to appear in a similar location on the corneal surface after refractive surgery. Our purpose was to extract information about the pattern existing under the interference fringes and calculate its shape reproducibility over time and following eye blinks. A low-pass filter was applied and a correlation coefficient was calculated to compare a selected fragment of the template image to each of the following frames in the recorded sequence. High values of the correlation coefficient suggest that irregularities of the corneal epithelium might influence tear film instability and that tear film breakup may be associated with local irregularities of the corneal topography created after the LASIK and RK surgeries.

  4. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    NASA Astrophysics Data System (ADS)

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle

    2016-08-01

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N^2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded widespread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  5. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    DOE PAGES

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; ...

    2016-08-10

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N^2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded widespread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. Lastly, the results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  6. Adaptive wiener image restoration kernel

    SciTech Connect

    Yuan, Ding

    2007-06-05

    A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system Optical Transfer Function and the Fourier transformations of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image using a Wiener restoration kernel.
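
    As a rough illustration of the Fourier-domain steps named in this record, the Python sketch below applies the classical Wiener kernel conj(H) / (|H|^2 + N/S) to a synthetically blurred and noise-corrupted image. The OTF, noise level, and power spectra are simple stand-ins, not the patented formulation.

        # Sketch: Wiener restoration in the Fourier domain.
        # The OTF and noise/signal powers are synthetic stand-ins.
        import numpy as np

        def wiener_restore(image, otf, noise_power, signal_power):
            G = np.fft.fft2(image)
            W = np.conj(otf) / (np.abs(otf) ** 2 + noise_power / signal_power)
            return np.real(np.fft.ifft2(W * G))

        n = 64
        x = np.zeros((n, n)); x[24:40, 24:40] = 1.0       # synthetic object
        psf = np.zeros((n, n)); psf[:3, :3] = 1.0 / 9.0   # 3x3 box blur
        H = np.fft.fft2(psf)                              # system OTF
        blurred = np.real(np.fft.ifft2(H * np.fft.fft2(x)))
        noisy = blurred + 0.01 * np.random.default_rng(4).standard_normal((n, n))

        restored = wiener_restore(noisy, H, noise_power=1e-4, signal_power=1.0)
        print(np.abs(restored - x).mean())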

  7. Analysis of maize ( Zea mays ) kernel density and volume using microcomputed tomography and single-kernel near-infrared spectroscopy.

    PubMed

    Gustin, Jeffery L; Jackson, Sean; Williams, Chekeria; Patel, Anokhee; Armstrong, Paul; Peter, Gary F; Settles, A Mark

    2013-11-20

    Maize kernel density affects milling quality of the grain. Kernel density of bulk samples can be predicted by near-infrared reflectance (NIR) spectroscopy, but no accurate method to measure individual kernel density has been reported. This study demonstrates that individual kernel density and volume are accurately measured using X-ray microcomputed tomography (μCT). Kernel density was significantly correlated with kernel volume, air space within the kernel, and protein content. Embryo density and volume did not influence overall kernel density. Partial least-squares (PLS) regression of μCT traits with single-kernel NIR spectra gave stable predictive models for kernel density (R^2 = 0.78, SEP = 0.034 g/cm^3) and volume (R^2 = 0.86, SEP = 2.88 cm^3). Density and volume predictions were accurate for data collected over 10 months based on kernel weights calculated from predicted density and volume (R^2 = 0.83, SEP = 24.78 mg). Kernel density was significantly correlated with bulk test weight (r = 0.80), suggesting that selection of dense kernels can translate to improved agronomic performance.
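
    The PLS step can be sketched directly with a standard library. The spectra and density values below are synthetic stand-ins for the single-kernel NIR spectra and μCT-derived traits used in the study; the number of components is an arbitrary choice.

        # Sketch: PLS regression of a per-kernel trait on spectra.
        # Data are synthetic; only the modeling step is illustrated.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(5)
        spectra = rng.standard_normal((200, 120))    # 200 kernels x 120 bands
        density = (1.25 + 0.05 * spectra[:, 10] - 0.03 * spectra[:, 55]
                   + 0.01 * rng.standard_normal(200))  # synthetic g/cm^3

        X_tr, X_te, y_tr, y_te = train_test_split(spectra, density,
                                                  random_state=0)
        pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
        print(f"R^2 = {pls.score(X_te, y_te):.2f}")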

  8. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint.

    PubMed

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2016-04-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in a RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance for certain situations, and have comparable performance in other cases compared to that of the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint.

  9. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint

    PubMed Central

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2015-01-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in a RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance for certain situations, and have comparable performance in other cases compared to that of the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
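
    The combination described above, a quantile (pinball) loss over a kernel expansion with thresholding on the coefficients, can be sketched with a proximal subgradient loop. The optimizer, its step sizes, and the penalty weight below are illustrative choices, not the authors' algorithm or theory.

        # Sketch: kernel quantile regression with soft-thresholded coefficients.
        # Optimizer and parameters are illustrative.
        import numpy as np

        def rbf(A, B, gamma=1.0):
            return np.exp(-gamma * ((A[:, None] - B[None]) ** 2).sum(-1))

        rng = np.random.default_rng(9)
        X = rng.uniform(-2, 2, (150, 1))
        y = np.sin(2 * X[:, 0]) + 0.3 * rng.standard_normal(150)

        tau, lam, lr = 0.5, 1e-2, 1e-2    # quantile level, penalty, step size
        K = rbf(X, X)
        alpha = np.zeros(len(X))
        for _ in range(2000):
            r = y - K @ alpha
            # Pinball-loss subgradient: tau above the fit, tau - 1 below it.
            g = -K @ np.where(r > 0, tau, tau - 1.0) / len(y)
            alpha -= lr * g
            # Soft thresholding imposes sparsity on the kernel coefficients.
            alpha = np.sign(alpha) * np.maximum(np.abs(alpha) - lr * lam, 0.0)

        print(int((np.abs(alpha) > 1e-8).sum()), "of", len(alpha),
              "kernel coefficients retained")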

  10. Self-consistent Kohn-Sham method based on the adiabatic-connection fluctuation-dissipation theorem and the exact-exchange kernel

    SciTech Connect

    Bleiziffer, Patrick Krug, Marcel; Görling, Andreas

    2015-06-28

    A self-consistent Kohn-Sham method based on the adiabatic-connection fluctuation-dissipation (ACFD) theorem, employing the frequency-dependent exact exchange kernel f_x, is presented. The resulting SC-exact-exchange-only (EXX)-ACFD method leads to even more accurate correlation potentials than those obtained within the direct random phase approximation (dRPA). In contrast to dRPA methods, not only the Coulomb kernel but also the exact exchange kernel f_x is taken into account in the EXX-ACFD correlation, which results in a method that, unlike dRPA methods, is free of self-correlations, i.e., a method that treats exactly all one-electron systems, like, e.g., the hydrogen atom. The self-consistent evaluation of EXX-ACFD total energies improves the accuracy compared to EXX-ACFD total energies evaluated non-self-consistently with EXX or dRPA orbitals and eigenvalues. Reaction energies of a set of small molecules, for which highly accurate experimental reference data are available, are calculated and compared to quantum chemistry methods like Møller-Plesset perturbation theory of second order (MP2) or coupled cluster methods [CCSD, coupled cluster singles, doubles, and perturbative triples (CCSD(T))]. Moreover, we compare our methods to other ACFD variants like dRPA combined with perturbative corrections such as the second order screened exchange corrections or a renormalized singles correction. Similarly, the performance of our EXX-ACFD methods is investigated for the non-covalently bonded dimers of the S22 reference set and for potential energy curves of noble gas, water, and benzene dimers. The computational effort of the SC-EXX-ACFD method exhibits the same scaling of N^5 with respect to the system size N as the non-self-consistent evaluation of only the EXX-ACFD correlation energy; however, the prefactor increases significantly. Reaction energies from the SC-EXX-ACFD method deviate quite little from EXX-ACFD energies obtained non

  11. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  12. A smooth dissipative particle dynamics method for domains with arbitrary-geometry solid boundaries

    NASA Astrophysics Data System (ADS)

    Gatsonis, Nikolaos A.; Potami, Raffaele; Yang, Jun

    2014-01-01

    A smooth dissipative particle dynamics method with dynamic virtual particle allocation (SDPD-DV) for modeling and simulation of mesoscopic fluids in wall-bounded domains is presented. The physical domain in SDPD-DV may contain external and internal solid boundaries of arbitrary geometries, periodic inlets and outlets, and the fluid region. The SDPD-DV method is realized with fluid particles, boundary particles, and dynamically allocated virtual particles. The internal or external solid boundaries of the domain can be of arbitrary geometry and are discretized with a surface grid. These boundaries are represented by boundary particles with assigned properties. The fluid domain is discretized with fluid particles of constant mass and variable volume. Conservative and dissipative force models due to virtual particles exerted on a fluid particle in the proximity of a solid boundary supplement the original SDPD formulation. The dynamic virtual particle allocation approach provides the density and the forces due to virtual particles. The integration of the SDPD equations is accomplished with a velocity-Verlet algorithm for the momentum and a Runge-Kutta for the entropy equation. The velocity integrator is supplemented by a bounce-forward algorithm in cases where the virtual particle force model is not able to prevent particle penetration. For the incompressible isothermal systems considered in this work, the pressure of a fluid particle is obtained by an artificial compressibility formulation for liquids and the ideal gas law for gases. The self-diffusion coefficient is obtained by an implementation of the generalized Einstein and the Green-Kubo relations. Field properties are obtained by sampling SDPD-DV outputs on a post-processing grid that allows harnessing the particle information on desired spatiotemporal scales. The SDPD-DV method is verified and validated with simulations in bounded and periodic domains that cover the hydrodynamic and mesoscopic regimes for

  13. A nonparametric mean-variance smoothing method to assess Arabidopsis cold stress transcriptional regulator CBF2 overexpression microarray data.

    PubMed

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request.
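
    The idea admits a short sketch: fit a nonparametric curve to the gene-wise (mean, variance) cloud and shrink each gene's variance toward the fitted curve before testing. The lowess smoother and the fixed shrinkage weight below are assumptions for illustration, not the authors' exact estimator.

        # Sketch: nonparametric mean-variance smoothing with shrinkage.
        # The smoother and shrinkage weight are illustrative.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n_genes, n_reps = 2000, 4
        means = rng.uniform(2, 12, n_genes)
        data = (means[:, None]
                + rng.standard_normal((n_genes, n_reps)) * (0.1 * means[:, None]))

        m = data.mean(axis=1)
        v = data.var(axis=1, ddof=1)

        # Nonparametric fit of variance as a smooth function of the mean.
        fit = sm.nonparametric.lowess(v, m, frac=0.3, return_sorted=False)

        w = 0.5                              # shrinkage weight (illustrative)
        v_shrunk = w * fit + (1.0 - w) * v
        print(np.corrcoef(v_shrunk, (0.1 * means) ** 2)[0, 1])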

  14. A steady and oscillatory kernel function method for interfering surfaces in subsonic, transonic and supersonic flow. [prediction analysis techniques for airfoils

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1976-01-01

    The theory, results and user instructions for an aerodynamic computer program are presented. The theory is based on linear lifting surface theory, and the method is the kernel function. The program is applicable to multiple interfering surfaces which may be coplanar or noncoplanar. Local linearization was used to treat nonuniform flow problems without shocks. For cases with imbedded shocks, the appropriate boundary conditions were added to account for the flow discontinuities. The data describing nonuniform flow fields must be input from some other source such as an experiment or a finite difference solution. The results are in the form of small linear perturbations about nonlinear flow fields. The method was applied to a wide variety of problems for which it is demonstrated to be significantly superior to the uniform flow method. Program user instructions are given for easy access.

  15. A New Kernel-Based Fuzzy Level Set Method for Automated Segmentation of Medical Images in the Presence of Intensity Inhomogeneity

    PubMed Central

    Shanbehzadeh, Jamshid

    2014-01-01

    Researchers have recently applied an integrative approach to automate medical image segmentation, benefiting from available methods while eliminating their disadvantages. Intensity inhomogeneity is a challenging and open problem in this area which has received less attention from this approach, and it has considerable effects on segmentation accuracy. This paper proposes a new kernel-based fuzzy level set algorithm built on an integrative approach to deal with this problem. It can evolve directly from the initial level set obtained by Gaussian Kernel-Based Fuzzy C-Means (GKFCM). The controlling parameters of the level set evolution are also estimated from the results of GKFCM. Moreover, the proposed algorithm is enhanced with locally regularized evolution based on an image model that describes the composition of real-world images, in which intensity inhomogeneity is assumed to be a component of an image. Such improvements make level set manipulation easier and lead to more robust segmentation under intensity inhomogeneity. The proposed algorithm has valuable benefits including automation, robustness to intensity inhomogeneity, and high accuracy. Performance evaluation of the proposed algorithm was carried out on medical images from different modalities. The results confirm its effectiveness for medical image segmentation. PMID:24624225

  16. Difference image analysis: automatic kernel design using information criteria

    NASA Astrophysics Data System (ADS)

    Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.

    2016-03-01

    We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.
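
    With delta basis functions and a spatially invariant kernel, the kernel solution is a linear least-squares problem, and an information criterion can then compare candidate kernel sizes. The Python sketch below illustrates that pairing under simplifying assumptions (wraparound edges, uniform noise); the simulation details are invented.

        # Sketch: delta-basis kernel fit by least squares, scored with AIC.
        # Edge handling and the simulation are simplified for illustration.
        import numpy as np
        from scipy.signal import convolve2d

        rng = np.random.default_rng(7)
        ref = rng.uniform(100, 200, size=(32, 32))       # reference image
        true_kernel = np.array([[0.0, 0.1, 0.0],
                                [0.1, 0.6, 0.1],
                                [0.0, 0.1, 0.0]])
        target = (convolve2d(ref, true_kernel, mode="same")
                  + rng.normal(0.0, 1.0, ref.shape))

        def fit_delta_kernel(ref, target, half):
            """Least-squares solve for a (2*half+1)^2 delta-basis kernel."""
            cols = [np.roll(ref, (dy, dx), axis=(0, 1)).ravel()
                    for dy in range(-half, half + 1)
                    for dx in range(-half, half + 1)]
            A = np.stack(cols, axis=1)
            coef, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
            resid = target.ravel() - A @ coef
            n, k = A.shape
            aic = n * np.log((resid ** 2).mean()) + 2 * k  # Gaussian AIC
            return coef.reshape(2 * half + 1, 2 * half + 1), aic

        # The smallest AIC flags the simplest adequate kernel size.
        for half in (1, 2, 3):
            _, aic = fit_delta_kernel(ref, target, half)
            print(half, round(aic, 1))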

  17. An analysis of smoothed particle hydrodynamics

    SciTech Connect

    Swegle, J.W.; Attaway, S.W.; Heinstein, M.W.; Mello, F.J.; Hicks, D.L.

    1994-03-01

    SPH (Smoothed Particle Hydrodynamics) is a gridless Lagrangian technique which is appealing as a possible alternative to numerical techniques currently used to analyze high deformation impulsive loading events. In the present study, the SPH algorithm has been subjected to detailed testing and analysis to determine its applicability in the field of solid dynamics. An important result of the work is a rigorous von Neumann stability analysis which provides a simple criterion for the stability or instability of the method in terms of the stress state and the second derivative of the kernel function. Instability, which typically occurs only for solids in tension, results not from the numerical time integration algorithm, but because the SPH algorithm creates an effective stress with a negative modulus. The analysis provides insight into possible methods for removing the instability. Also, SPH has been coupled into the transient dynamics finite element code PRONTO, and a weighted residual derivation of the SPH equations has been obtained.
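
    The stability criterion reported above is simple enough to check numerically. The sketch below is an illustration only, with a cubic-spline kernel and unit smoothing length as assumptions; it flags the tensile instability, which occurs when the product of the stress and the kernel's second derivative is positive.

        import numpy as np

        def cubic_spline_w(q):
            """Standard 1-D cubic-spline SPH kernel (unnormalized shape)."""
            return np.where(q < 1, 1 - 1.5 * q**2 + 0.75 * q**3,
                   np.where(q < 2, 0.25 * (2 - q) ** 3, 0.0))

        def unstable(stress, r, h=1.0, eps=1e-4):
            """Swegle-type criterion: instability when stress * W''(r) > 0."""
            q = np.asarray(r, dtype=float) / h
            w2 = (cubic_spline_w(q + eps) - 2 * cubic_spline_w(q)
                  + cubic_spline_w(q - eps)) / eps**2      # numerical W''
            return stress * w2 > 0

        # At the typical neighbour distance r ~ h, W'' > 0 for this kernel,
        # so tension (stress > 0) is flagged unstable, matching the abstract.
        print(unstable(stress=+1.0, r=1.0))    # True: tensile instability
        print(unstable(stress=-1.0, r=1.0))    # False: compression is stable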

  18. Putting Priors in Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic, knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels that learns the kernel function directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments was generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library containing templates for learning and knowledge discovery algorithms, such as different versions of EM, and numeric optimization methods, such as conjugate gradient methods. Template instantiation is supported by symbolic-algebraic computation, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernels described here outperform tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
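
    The kernel construction itself is compact. The sketch below, which assumes scikit-learn's GaussianMixture (it is not the AUTOBAYES-generated code), fits an ensemble of mixture models and defines the kernel between two points as the averaged probability that the mixtures assign them to the same component; each summand is an inner product of posterior vectors, so the result is a valid Mercer kernel.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def mixture_density_kernel(X, n_models=5, n_components=4, seed=0):
            """K(x, y) = average over models of sum_c P(c|x) * P(c|y)."""
            K = np.zeros((len(X), len(X)))
            for m in range(n_models):
                gmm = GaussianMixture(n_components=n_components,
                                      random_state=seed + m).fit(X)
                P = gmm.predict_proba(X)      # n_samples x n_components
                K += P @ P.T                  # inner products => PSD kernel
            return K / n_models

        # X = np.random.default_rng(0).normal(size=(200, 3))
        # K = mixture_density_kernel(X)       # usable in any kernel machine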

  19. Weighted Bergman Kernels and Quantization

    NASA Astrophysics Data System (ADS)

    Engliš, Miroslav

    Let Ω be a bounded pseudoconvex domain in C^N, let φ, ψ be two positive functions on Ω such that -log ψ and -log φ are plurisubharmonic, and let z ∈ Ω be a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion for x, y near z in terms of φ(x,y), an almost-analytic extension of φ(x) = φ(x,x), and similarly for ψ. If in addition Ω is of finite type, φ, ψ behave reasonably at the boundary, and -log φ, -log ψ are strictly plurisubharmonic on Ω, we also obtain an analogous asymptotic expansion for the Berezin transform and give applications to Berezin quantization. Finally, for Ω smoothly bounded and strictly pseudoconvex and φ a smooth strictly plurisubharmonic defining function for Ω, we also obtain results on Berezin-Toeplitz quantization.

  20. Efficient χ² Kernel Linearization via Random Feature Maps.

    PubMed

    Yuan, Xiao-Tong; Wang, Zhenzhen; Deng, Jiankang; Liu, Qingshan

    2016-11-01

    Explicit feature mapping is an appealing way to linearize additive kernels, such as the χ² kernel, for training large-scale support vector machines (SVMs). Although accurate in approximation, feature mapping can pose computational challenges in high-dimensional settings as it expands the original features to a higher-dimensional space. To handle this issue in the context of χ² kernel SVM learning, we introduce a simple yet efficient method to approximately linearize the χ² kernel through random feature maps. The main idea is to use sparse random projection to reduce the dimensionality of the feature maps while preserving their approximation capability to the original kernel. We provide an approximation error bound for the proposed method. Furthermore, we extend our method to χ² multiple kernel SVM learning. Extensive experiments on large-scale image classification tasks confirm that the proposed approach is able to significantly speed up the training process of χ² kernel SVMs at almost no cost in testing accuracy.
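
    The idea is easy to prototype with standard tools. The sketch below is a hedged approximation of the pipeline rather than the authors' construction: it expands non-negative features with an explicit additive-χ² map and then shrinks the expansion with a sparse random projection before a linear SVM; data, dimensions and parameters are illustrative.

        import numpy as np
        from sklearn.kernel_approximation import AdditiveChi2Sampler
        from sklearn.random_projection import SparseRandomProjection
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline

        # Histogram-like non-negative features, typical for chi-squared kernels.
        rng = np.random.default_rng(0)
        X = rng.random((500, 300))
        y = (X[:, :10].sum(axis=1) > 5).astype(int)

        clf = make_pipeline(
            AdditiveChi2Sampler(sample_steps=2),   # explicit chi2 feature map
            SparseRandomProjection(n_components=256, random_state=0),
            LinearSVC(),                           # linear SVM on the sketch
        )
        clf.fit(X, y)
        print(clf.score(X, y))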

  1. Locally-Based Kernel PLS Smoothing for Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting, based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. Our aim is to apply the methodology to smoothing experimental data in which some knowledge of the approximate shape, local inhomogeneities, or points where the desired function changes curvature is available a priori or can be derived from the observed noisy data. We propose locally-based kernel PLS regression, which extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.

  2. Iterative software kernels

    SciTech Connect

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user-level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.

  3. Learning with Box Kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-04-12

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, since the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given which dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.

  4. Learning with box kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-11-01

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, because the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given that dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.

  5. Accurate and efficient method for smoothly space-variant Gaussian blurring.

    PubMed

    Popkin, Timothy; Cavallaro, Andrea; Hands, David

    2010-05-01

    This paper presents a computationally efficient algorithm for smoothly space-variant Gaussian blurring of images. The proposed algorithm uses a specialized filter bank with optimal filters computed through principal component analysis. This filter bank approximates perfect space-variant Gaussian blurring to arbitrarily high accuracy and at greatly reduced computational cost compared to the brute force approach of employing a separate low-pass filter at each image location. This is particularly important for spatially variant image processing such as foveated coding. Experimental results show that the proposed algorithm provides typically 10 to 15 dB better approximation of perfect Gaussian blurring than the blended Gaussian pyramid blurring approach when using a bank of just eight filters.
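
    The filter-bank idea can be sketched in a few lines. The Python fragment below is an illustration under simplifying assumptions (a 1-D horizontal blur only, PCA via SVD of a bank of exact Gaussian kernels, and linear interpolation of the per-sigma coefficients); it conveys the mechanism of convolving once per basis filter and blending per pixel.

        import numpy as np
        from scipy.ndimage import convolve

        def pca_blur(img, sigma_map, n_filters=8, taps=21):
            """Approximate space-variant Gaussian blur via a PCA filter bank."""
            t = np.arange(taps) - taps // 2
            sigmas = np.linspace(sigma_map.min(), sigma_map.max(), 64)
            G = np.stack([np.exp(-t**2 / (2 * s**2)) for s in sigmas])
            G /= G.sum(axis=1, keepdims=True)      # bank of exact kernels
            mean = G.mean(axis=0)
            _, _, Vt = np.linalg.svd(G - mean, full_matrices=False)
            basis = Vt[:n_filters]                 # principal filter shapes
            coef = (G - mean) @ basis.T            # coordinates of each sigma

            out = convolve(img, mean[None, :])     # response to the mean filter
            for k in range(n_filters):
                resp = convolve(img, basis[k][None, :])
                c = np.interp(sigma_map, sigmas, coef[:, k])  # per-pixel weight
                out += c * resp
            return out

        # img = np.random.default_rng(0).random((64, 64))
        # sig = np.tile(np.linspace(0.5, 3.0, 64), (64, 1))   # foveated-style map
        # blurred = pca_blur(img, sig)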

  6. Kernel PLS Estimation of Single-trial Event-related Potentials

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.

    2004-01-01

    Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which prior knowledge about the approximate latency of individual ERP components exists. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in the visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative for the estimation of single-trial ERPs and the improvement of ERP averages.

  7. Kernel Affine Projection Algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Weifeng; Príncipe, José C.

    2008-12-01

    The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, collectively named KAPA here. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computational complexity and performance. Several simulations illustrate its wide applicability.

  8. Perturbed kernel approximation on homogeneous manifolds

    NASA Astrophysics Data System (ADS)

    Levesley, J.; Sun, X.

    2007-02-01

    Current methods for interpolation and approximation within a native space rely heavily on the strict positive-definiteness of the underlying kernels. If the domains of approximation are the unit spheres in Euclidean spaces, then zonal kernels (kernels that are invariant under the orthogonal group action) are strongly favored. In the implementation of these methods to handle real-world problems, however, some or all of the symmetries and positive-definiteness may be lost in digitization due to small random errors that occur unpredictably during various stages of the execution. Perturbation analysis is therefore needed to address the stability problem encountered. In this paper we study two kinds of perturbations of positive-definite kernels: small random perturbations and perturbations by Dunkl's intertwining operators [C. Dunkl, Y. Xu, Orthogonal Polynomials of Several Variables, Encyclopedia of Mathematics and Its Applications, vol. 81, Cambridge University Press, Cambridge, 2001]. We show that, under some reasonable assumptions, a small random perturbation of a strictly positive-definite kernel can still provide vehicles for interpolation and enjoy the same error estimates. We examine the actions of the Dunkl intertwining operators on zonal (strictly) positive-definite kernels on spheres, and show that the resulting kernels are (strictly) positive-definite on spheres of lower dimensions.

  9. A computational method for three-dimensional reconstruction of the microarchitecture of myometrial smooth muscle from histological sections

    PubMed Central

    Lutton, E. Josiah; Lammers, Wim J. E. P.; James, Sean

    2017-01-01

    Background: The fibrous structure of the myometrium has previously been characterised at high resolutions in small tissue samples (<100 mm³) and at low resolutions (∼500 μm per voxel edge) in whole-organ reconstructions. However, no high-resolution visualisation of the myometrium at the organ level has previously been attained. Methods and results: We have developed a technique to reconstruct the whole myometrium from serial histological slides, at a resolution of approximately 50 μm per voxel edge. Reconstructions of samples taken from human and rat uteri are presented here, along with histological verification of the reconstructions and detailed investigation of the fibrous structure of these uteri, using a range of tools specifically developed for this analysis. These reconstruction techniques enable the high-resolution rendering of global structure previously observed at lower resolution. Moreover, structures observed previously in small portions of the myometrium can be observed in the context of the whole organ. The reconstructions are in direct correspondence with the original histological slides, which allows the inspection of the anatomical context of any features identified in the three-dimensional reconstructions. Conclusions and significance: The methods presented here have been used to generate a faithful representation of myometrial smooth muscle at a resolution of ∼50 μm per voxel edge. Characterisation of the smooth muscle structure of the myometrium by means of this technique revealed a detailed view of previously identified global structures in addition to a global view of the microarchitecture. A suite of visualisation tools allows researchers to interrogate the histological microarchitecture. These methods will be applicable to other smooth muscle tissues to analyse fibrous microarchitecture. PMID:28301486

  10. [Methods to smooth mortality indicators: application to the analysis of inequalities in mortality in Spanish cities (the MEDEA Project)]

    PubMed

    Barceló, M Antònia; Saez, Marc; Cano-Serral, Gemma; Martínez-Beneito, Miguel Angel; Martínez, José Miguel; Borrell, Carme; Ocaña-Riola, Ricardo; Montoya, Imanol; Calvo, Montse; López-Abente, Gonzalo; Rodríguez-Sanz, Maica; Toro, Silvia; Alcalá, José Tomás; Saurina, Carme; Sánchez-Villegas, Pablo; Figueiras, Adolfo

    2008-01-01

    Although there is some experience in the study of mortality inequalities in Spanish cities, there are large urban centers that have not yet been investigated using the census tract as the unit of territorial analysis. The coordinated project was designed to fill this gap, with the participation of 10 groups of researchers in Andalusia, Aragon, Catalonia, Galicia, Madrid, Valencia, and the Basque Country. The MEDEA project has four distinguishing features: a) the census tract is used as the basic geographical area; b) statistical methods that include the geographical structure of the region under study are employed for risk estimation; c) data are drawn from three complementary sources (information on air pollution, information on industrial pollution, and mortality registries); and d) a coordinated, large-scale analysis, favored by the establishment of coordinated research networks, is carried out. The main objective of the present study was to explain the methods for smoothing mortality indicators in the context of the MEDEA project, focusing on the methodology and results of the Besag, York and Mollié (BYM) model in disease mapping. In the MEDEA project, standardized mortality ratios (SMRs) for 17 large groups of causes of death and 28 specific causes were smoothed by means of the BYM model; in the present study, this methodology was applied to mortality due to cancer of the trachea, bronchi and lung in men and women in the city of Barcelona from 1996 to 2003. Smoothing revealed different geographical patterns of the SMR in the two sexes: in men, SMRs above unity were found in highly deprived areas, whereas in women this pattern was observed in more affluent areas.
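
    For readers unfamiliar with it, the BYM smoothing model has a compact standard form. The LaTeX statement below uses conventional disease-mapping notation and is a generic formulation, not a transcription from this paper:

        O_i \mid \theta_i \sim \mathrm{Poisson}(E_i\,\theta_i), \qquad
        \log \theta_i = \alpha + u_i + v_i,

        u_i \mid u_{j\neq i} \sim \mathcal{N}\big(\bar{u}_{\delta_i},\ \sigma_u^2 / n_{\delta_i}\big), \qquad
        v_i \sim \mathcal{N}(0, \sigma_v^2),

    where O_i and E_i are the observed and expected deaths in census tract i (so the raw SMR is O_i / E_i), u_i is a spatially structured (CAR) effect that borrows strength from the neighbours in delta_i, and v_i captures unstructured heterogeneity; the smoothed SMR is the posterior estimate of theta_i.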

  11. Segmentation of 3D microPET images of the rat brain via the hybrid gaussian mixture method with kernel density estimation.

    PubMed

    Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing

    2012-01-01

    Segmentation of positron emission tomography (PET) images is typically achieved using the K-means method or other approaches. In preclinical and clinical applications, the K-means method requires prior estimation of parameters such as the number of clusters and appropriate initial values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial for registering disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnoses and for estimating standardized uptake values (SUVs) of regions of interest (ROIs) in PET images. Simulation studies using spherical targets were therefore conducted to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-means method. The PET images of a rat brain are used to compare the segmented shape and area of the cerebral cortex obtained by the K-means method and the proposed method using volume rendering. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than the K-means method.

  12. RKF-PCA: robust kernel fuzzy PCA.

    PubMed

    Heo, Gyeongyong; Gader, Paul; Frigui, Hichem

    2009-01-01

    Principal component analysis (PCA) is a mathematical method that reduces the dimensionality of data while retaining most of the variation. Although PCA has been applied successfully in many areas, it suffers from sensitivity to noise and is limited to linear principal components. The noise-sensitivity problem comes from the least-squares measure used in PCA, and the limitation to linear components originates from the fact that PCA uses an affine transform defined by the eigenvectors of the covariance matrix and the mean of the data. In this paper, a robust kernel PCA method that extends kernel PCA and uses fuzzy memberships is introduced to tackle both problems simultaneously. We first introduce an iterative method to find robust principal components, called robust fuzzy PCA (RF-PCA), which has a connection with robust statistics and entropy regularization. The RF-PCA method is then extended to a nonlinear one, robust kernel fuzzy PCA (RKF-PCA), using kernels. The modified kernel used in RKF-PCA satisfies Mercer's condition, which means that the derivation of kernel PCA remains valid for RKF-PCA. Formal analyses and experimental results suggest that RKF-PCA is an efficient nonlinear dimension reduction method and is more noise-robust than the original kernel PCA.

  13. Multiple collaborative kernel tracking.

    PubMed

    Fan, Zhimin; Yang, Ming; Wu, Ying

    2007-07-01

    Those motion parameters that cannot be recovered from image measurements are unobservable in the visual dynamic system. This paper studies this important issue of singularity in the context of kernel-based tracking and presents a novel approach that is based on a motion field representation which employs redundant but sparsely correlated local motion parameters instead of compact but uncorrelated global ones. This approach makes it easy to design fully observable kernel-based motion estimators. This paper shows that these high-dimensional motion fields can be estimated efficiently by the collaboration among a set of simpler local kernel-based motion estimators, which makes the new approach very practical.

  14. A comparison of methods for smoothing and gap filling time series of remote sensing observations - application to MODIS LAI products

    NASA Astrophysics Data System (ADS)

    Kandasamy, S.; Baret, F.; Verger, A.; Neveux, P.; Weiss, M.

    2013-06-01

    Moderate-resolution satellite sensors, including MODIS (Moderate Resolution Imaging Spectroradiometer), already provide more than 10 yr of observations well suited to describing and understanding the dynamics of the earth's surface. However, these time series carry significant uncertainties and are incomplete because of cloud cover. This study compares eight methods designed to improve continuity by filling gaps and consistency by smoothing the time course. It includes methods exploiting the time series as a whole (iterative caterpillar singular spectrum analysis (ICSSA), empirical mode decomposition (EMD), low-pass filtering (LPF) and the Whittaker smoother (Whit)) as well as methods working on limited temporal windows of a few weeks to a few months (the adaptive Savitzky-Golay filter (SGF), temporal smoothing and gap filling (TSGF), and the asymmetric Gaussian function (AGF)), in addition to the simple climatological LAI yearly profile (Clim). The methods were applied to the MODIS leaf area index product for the period 2000-2008 over 25 sites showing a large range of seasonal patterns. Performances were discussed with emphasis on the balance achieved by each method between accuracy and roughness, depending on the fraction of missing observations and the length of the gaps. Results demonstrate that the EMD, LPF and AGF methods failed when the fraction of gaps was significant (more than 20%), while ICSSA, Whit and SGF always provided estimates for dates with missing data. TSGF (Clim) was able to fill more than 50% of the gaps for sites with more than a 60% (80%) fraction of gaps. However, investigation of the accuracy of the reconstructed values shows that it degrades rapidly for sites with more than 20% missing data, particularly for ICSSA, Whit and SGF. In these conditions, TSGF provides the best performances, significantly better than the simple Clim for gaps shorter than about 100 days. The roughness of the reconstructed temporal profiles shows large differences between the several methods.
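
    Of the whole-series methods compared, the Whittaker smoother is the easiest to reproduce. The sketch below follows the standard penalized least-squares formulation with zero weights at missing dates; the penalty order and lambda are illustrative assumptions, not values from the paper.

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import spsolve

        def whittaker(y, lam=100.0, order=2):
            """Smooth and gap-fill the series y (NaN = missing) by solving
            (W + lam * D'D) z = W y, where W is zero at missing samples."""
            n = y.size
            w = np.isfinite(y).astype(float)     # 0 where observation missing
            D = sparse.eye(n, format="csr")
            for _ in range(order):               # order-th difference matrix
                D = D[1:] - D[:-1]
            W = sparse.diags(w)
            A = (W + lam * (D.T @ D)).tocsc()
            return spsolve(A, w * np.nan_to_num(y))

        # t = np.linspace(0, 2 * np.pi, 200)
        # y = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
        # y[60:90] = np.nan                      # a 30-sample gap
        # z = whittaker(y, lam=50.0)             # smoothed, gap-filled profile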

  15. A comparison of methods for smoothing and gap filling time series of remote sensing observations: application to MODIS LAI products

    NASA Astrophysics Data System (ADS)

    Kandasamy, S.; Baret, F.; Verger, A.; Neveux, P.; Weiss, M.

    2012-12-01

    Moderate-resolution satellite sensors, including MODIS, already provide more than 10 yr of observations well suited to describing and understanding the dynamics of the Earth's surface. However, these time series are incomplete because of cloud cover and are associated with significant uncertainties. This study compares eight methods designed to improve continuity by filling gaps and consistency by smoothing the time course. It includes methods exploiting the time series as a whole (iterative caterpillar singular spectrum analysis (ICSSA), empirical mode decomposition (EMD), low-pass filtering (LPF) and the Whittaker smoother (Whit)) as well as methods working on limited temporal windows of a few weeks to a few months (the adaptive Savitzky-Golay filter (SGF), temporal smoothing and gap filling (TSGF) and the asymmetric Gaussian function (AGF)), in addition to the simple climatological LAI yearly profile (Clim). The methods were applied to the MODIS leaf area index product for the period 2000-2008 over 25 sites showing a large range of seasonal patterns. Performances were discussed with emphasis on the balance achieved by each method between accuracy and roughness, depending on the fraction of missing observations and the length of the gaps. Results demonstrate that EMD, LPF and AGF failed in the case of a significant fraction of gaps (%Gap > 20%), while ICSSA, Whit and SGF always provided estimates for dates with missing data. TSGF (respectively Clim) was able to fill more than 50% of the gaps for sites with more than a 60% (resp. 80%) fraction of gaps. However, investigation of the accuracy of the reconstructed values shows that it degrades rapidly for sites with more than 20% missing data, particularly for ICSSA, Whit and SGF. In these conditions, TSGF provides the best performances, significantly better than the simple Clim for gaps shorter than about 100 days. The roughness of the reconstructed temporal profiles shows large differences between the several methods.

  16. Smooth particle hydrodynamics method for modeling cavitation-induced fracture of a fluid under shock-wave loading

    NASA Astrophysics Data System (ADS)

    Davydov, M. N.; Kedrinskii, V. K.

    2013-11-01

    It is demonstrated that the method of smoothed particle hydrodynamics can be used to study the flow structure in a cavitating medium with a high concentration of the gas phase and to describe the inversion of the two-phase state of this medium: the transition from a cavitating fluid to a system consisting of a gas and particles. A numerical analysis of the dynamics of a hemispherical droplet under shock-wave loading shows that focusing of the shock wave reflected from the free surface of the droplet leads to the formation of a dense but rapidly expanding cavitation cluster at the droplet center. By the time t = 500 µs, the bubbles at the cluster center not only coalesce and form a foam-type structure, but also transform into a gas-particle system, thus forming an almost free, rapidly expanding zone. The mechanism of this process, defined previously as an internal "cavitation explosion" of the droplet, is validated by means of mathematical modeling of the problem by the smoothed particle hydrodynamics method. The deformation of the cavitating droplet culminates in its decomposition into individual fragments and particles.

  17. Automatic estimation of sleep level for nap based on conditional probability of sleep stages and an exponential smoothing method.

    PubMed

    Wang, Bei; Wang, Xingyu; Zhang, Tao; Nakamura, Masatoshi

    2013-01-01

    An automatic sleep level estimation method was developed for monitoring and regulating daytime nap sleep. The recorded nap data are separated into continuous 5-second segments, and features are extracted from the EEG, EOG and EMG. A sleep level parameter is defined and estimated from the conditional probability of sleep stages, and an exponential smoothing method is applied to the estimated sleep level. Twelve healthy subjects, with an average age of 22 years, participated in the experimental work. Compared with sleep stage determination, the presented sleep level estimation method showed better performance for nap sleep interpretation. Real-time monitoring and regulation of naps is realizable with the developed technique.
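
    The smoothing step is the classical first-order recursion; a minimal sketch follows, with an arbitrary smoothing factor assumed for illustration.

        def exp_smooth(levels, alpha=0.3):
            """First-order exponential smoothing of the estimated sleep level:
            s_t = alpha * x_t + (1 - alpha) * s_{t-1}, one value per segment."""
            smoothed, s = [], levels[0]
            for x in levels:
                s = alpha * x + (1 - alpha) * s
                smoothed.append(s)
            return smoothed

        # exp_smooth([0, 0, 1, 2, 2, 3, 3])   # smoothed sleep-level trajectory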

  18. A novel smooth impact drive mechanism actuation method with dual-slider for a compact zoom lens system.

    PubMed

    Lee, Jonghyun; Kwon, Won Sik; Kim, Kyung-Soo; Kim, Soohyun

    2011-08-01

    In this paper, a novel actuation method for a smooth impact drive mechanism that positions a dual-slider with a single piezo-element is introduced and applied to a compact zoom lens system. A mode chart that determines the state of each slider during the expansion or shrinkage periods of the piezo-element is presented, and a design guide for the driving input profile is proposed. The motion of the lens-holding dual-slider is analyzed in each mode, and proper modes are selected for positioning the two lenses in zoom functions. Because the proposed actuation method allows independent movement of two lenses by a single piezo-element, the zoom lens system can be made compact. For a feasibility test, a lens system composed of an afocal zoom system and a focusing lens was developed, and a passive auto-focus method was implemented.

  19. Critical Parameters of the In Vitro Method of Vascular Smooth Muscle Cell Calcification

    PubMed Central

    Hortells, Luis; Sosa, Cecilia; Millán, Ángel; Sorribas, Víctor

    2015-01-01

    Background: Vascular calcification (VC) is primarily studied using cultures of vascular smooth muscle cells. However, the use of very different protocols and extreme conditions can produce findings unrelated to VC. In this work we aimed to determine the critical experimental parameters that affect calcification in vitro and to determine their relevance to calcification in vivo. Experimental procedures and results: Rat VSMC calcification in vitro was studied using different concentrations of fetal calf serum, calcium, and phosphate, in different types of culture media, and using various volumes and rates of change. The bicarbonate content of the media critically affected pH and resulted in supersaturation, depending on the concentrations of Ca²⁺ and Pi. Such supersaturation is a consequence of the high dependence of bicarbonate buffers on CO₂ vapor pressure and bicarbonate concentration at pH above 7.40. Such buffer systems undergo considerable pH variations as a result of minor experimental changes. The variations are more critical for DMEM and are negligible when the bicarbonate concentration is reduced to one quarter. Particle nucleation and growth were observed by dynamic light scattering and electron microscopy. Using 2 mM Pi, particles of ~200 nm were observed at 24 hours in MEM and at 1 hour in DMEM. These nuclei grew over time, were deposited on the cells, and caused osteogene expression or cell death, depending on the precipitation rate. TEM observations showed that the initial precipitate was amorphous calcium phosphate (ACP), which converts into hydroxyapatite over time. In blood, the scenario is different, because supersaturation is avoided by a tightly controlled pH of 7.4, which prevents the formation of PO₄³⁻-containing ACP. Conclusions: The precipitation of ACP in vitro is unrelated to VC in vivo. The model needs to be refined through controlled pH and the use of additional procalcifying agents other than Pi in order to reproduce calcium phosphate deposition in vivo.

  20. Robotic Intelligence Kernel: Communications

    SciTech Connect

    Walton, Mike C.

    2009-09-16

    The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.

  1. Robotic Intelligence Kernel: Driver

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.

  2. Quantum State Smoothing

    NASA Astrophysics Data System (ADS)

    Guevara, Ivonne; Wiseman, Howard

    2015-10-01

    Smoothing is an estimation method whereby a classical state (probability distribution for classical variables) at a given time is conditioned on all-time (both earlier and later) observations. Here we define a smoothed quantum state for a partially monitored open quantum system, conditioned on an all-time monitoring-derived record. We calculate the smoothed distribution for a hypothetical unobserved record which, when added to the real record, would complete the monitoring, yielding a pure-state "quantum trajectory." Averaging the pure state over this smoothed distribution yields the (mixed) smoothed quantum state. We study how the choice of actual unraveling affects the purity increase over that of the conventional (filtered) state conditioned only on the past record.

  3. Quantum State Smoothing.

    PubMed

    Guevara, Ivonne; Wiseman, Howard

    2015-10-30

    Smoothing is an estimation method whereby a classical state (probability distribution for classical variables) at a given time is conditioned on all-time (both earlier and later) observations. Here we define a smoothed quantum state for a partially monitored open quantum system, conditioned on an all-time monitoring-derived record. We calculate the smoothed distribution for a hypothetical unobserved record which, when added to the real record, would complete the monitoring, yielding a pure-state "quantum trajectory." Averaging the pure state over this smoothed distribution yields the (mixed) smoothed quantum state. We study how the choice of actual unraveling affects the purity increase over that of the conventional (filtered) state conditioned only on the past record.

  4. A Parallel Implementation of a Smoothed Particle Hydrodynamics Method on Graphics Hardware Using the Compute Unified Device Architecture

    SciTech Connect

    Wong Unhong; Wong Honcheng; Tang Zesheng

    2010-05-21

    The smoothed particle hydrodynamics (SPH) method, a class of meshfree particle methods (MPMs), has a wide range of applications from the micro-scale to the macro-scale and from discrete to continuum systems. Graphics hardware, originally designed for computer graphics, now provides unprecedented computational power for scientific computation, and particle systems require a huge amount of computation in physical simulation. In this paper, an efficient parallel implementation of an SPH method on graphics hardware using the Compute Unified Device Architecture is developed for fluid simulation. Compared to the corresponding CPU implementation, our experimental results show that the new approach allows significant speedups of fluid simulation by handling the huge amount of computation in parallel on graphics hardware.

  5. On the constrained minimization of smooth Kurdyka-Łojasiewicz functions with the scaled gradient projection method

    NASA Astrophysics Data System (ADS)

    Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone

    2016-10-01

    The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
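
    For orientation, the SGP iteration has the standard two-step form below (LaTeX; the notation follows the general SGP literature and is assumed rather than quoted from this paper):

        y^{(k)} = P_{\Omega}\!\big(x^{(k)} - \alpha_k D_k \nabla f(x^{(k)})\big),
        \qquad
        x^{(k+1)} = x^{(k)} + \lambda_k \big(y^{(k)} - x^{(k)}\big),

    where P_Omega is the projection onto the feasible set, alpha_k is the steplength, D_k the scaling matrix, and lambda_k in (0, 1] a relaxation parameter.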

  6. Quantum kernel applications in medicinal chemistry.

    PubMed

    Huang, Lulu; Massa, Lou

    2012-07-01

    Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.
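
    In its commonly reported double-kernel form (stated here as a sketch from the KEM literature, not quoted from this abstract), the total energy of a molecule partitioned into n kernels is

        E_{\mathrm{total}}
        = \sum_{a=1}^{n-1} \sum_{b=a+1}^{n} E_{ab}
        \;-\; (n - 2) \sum_{a=1}^{n} E_{a},

    where E_a is the energy of single kernel a and E_{ab} that of the joined double kernel ab; each term is small enough to be computed quantum mechanically, and the terms are independent, which is what makes the method parallelizable.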

  7. A Novel Framework for Learning Geometry-Aware Kernels.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Xu, Chen; Chen, Bo

    2016-05-01

    Data from the real world usually have nonlinear geometric structure and are often assumed to lie on or close to a low-dimensional manifold in a high-dimensional space. Detecting this nonlinear geometric structure is important for learning algorithms. Recently, there has been a surge of interest in utilizing kernels to exploit the manifold structure of the data. Such kernels are called geometry-aware kernels and are widely used in machine learning algorithms, whose performance critically relies on the choice of kernel. Intuitively, a good geometry-aware kernel should utilize additional information beyond the geometric information. In many applications, it is required to compute out-of-sample data directly; however, most geometry-aware kernel methods are restricted to the data given beforehand, with no straightforward extension to out-of-sample data. In this paper, we propose a framework for more general geometry-aware kernel learning. The proposed framework integrates multiple sources of information and enables us to develop flexible and effective kernel matrices. We then show theoretically how the learned kernel matrices extend to the corresponding kernel functions, with which out-of-sample data can be computed directly. Under our framework, a novel family of geometry-aware kernels is developed; in particular, some existing geometry-aware kernels can be viewed as instances of the framework. The performance of the kernels is evaluated on dimensionality reduction, classification, and clustering tasks. The empirical results show that our kernels significantly improve performance.

  8. New variable selection method using interval segmentation purity with application to blockwise kernel transform support vector machine classification of high-dimensional microarray data.

    PubMed

    Tang, Li-Juan; Du, Wen; Fu, Hai-Yan; Jiang, Jian-Hui; Wu, Hai-Long; Shen, Guo-Li; Yu, Ru-Qin

    2009-08-01

    One problem with discriminant analysis of microarray data is the representation of each sample by a large number of genes that are possibly irrelevant, insignificant, or redundant. Methods of variable selection are therefore of great significance in microarray data analysis. A new method for key gene selection is proposed on the basis of interval segmentation purity, defined as the purity of samples belonging to a certain class in intervals segmented by a mode search algorithm. This method identifies the key variables most discriminative for each class, which offers the possibility of unraveling the biological implications of the selected genes. A salient advantage of the new strategy over existing methods is its capability of selecting genes that, though possibly exhibiting a multimodal distribution, are the most discriminative for the classes of interest, considering that the expression levels of some genes may reflect systematic differences in within-class samples derived from different pathogenic mechanisms. On the basis of the key genes selected for individual classes, a support vector machine with blockwise kernel transform is developed for classification. The combination of the proposed gene mining approach with the support vector machine is demonstrated in cancer classification using two public data sets. The results reveal that significant genes were identified for each class, and the classification model shows satisfactory performance in training and prediction for both data sets.

  9. Poles tracking of weakly nonlinear structures using a Bayesian smoothing method

    NASA Astrophysics Data System (ADS)

    Stephan, Cyrille; Festjens, Hugo; Renaud, Franck; Dion, Jean-Luc

    2017-02-01

    This paper describes a method for the identification and tracking of the poles of a weakly nonlinear structure from its free responses. The method is based on a model of multichannel damped sines whose parameters evolve over time; their variations are approximated in discrete time by a nonlinear state-space model. The states are estimated by an iterative process that couples a two-pass Bayesian smoother with an expectation-maximization (EM) algorithm. The method is applied to numerical and experimental cases. As a result, accurate frequency and damping estimates are obtained as functions of amplitude.
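
    In the linear-Gaussian special case, the two-pass Bayesian smoother referred to is the classical Rauch-Tung-Striebel recursion. The sketch below shows only that special case (the paper's nonlinear state-space model would need an extended or sigma-point variant), with all model matrices assumed given.

        import numpy as np

        def rts_smoother(y, F, H, Q, R, x0, P0):
            """Kalman forward pass followed by an RTS backward pass."""
            n, d = len(y), x0.size
            xf = np.zeros((n, d)); Pf = np.zeros((n, d, d))   # filtered
            xp = np.zeros((n, d)); Pp = np.zeros((n, d, d))   # predicted
            x, P = x0, P0
            for t in range(n):                                # forward pass
                x, P = F @ x, F @ P @ F.T + Q
                xp[t], Pp[t] = x, P
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
                x = x + K @ (y[t] - H @ x)
                P = P - K @ H @ P
                xf[t], Pf[t] = x, P
            xs, Ps = xf.copy(), Pf.copy()
            for t in range(n - 2, -1, -1):                    # backward pass
                G = Pf[t] @ F.T @ np.linalg.inv(Pp[t + 1])
                xs[t] = xf[t] + G @ (xs[t + 1] - xp[t + 1])
                Ps[t] = Pf[t] + G @ (Ps[t + 1] - Pp[t + 1]) @ G.T
            return xs, Ps

        # Constant-velocity example for a scalar measurement stream y (n x 1):
        # F = np.array([[1.0, 1.0], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
        # Q = 1e-4 * np.eye(2); R = np.array([[0.1]])
        # xs, _ = rts_smoother(y, F, H, Q, R, np.zeros(2), np.eye(2))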

  10. Including State Excitation in the Fixed-Interval Smoothing Algorithm and Implementation of the Maneuver Detection Method Using Error Residuals

    DTIC Science & Technology

    1990-12-01

    [Scanned-document front matter; the abstract is largely unrecoverable.] Naval Postgraduate School thesis, Monterey, California. Keywords: Kalman filter, smoothing, noise process, maneuver detection. Recoverable abstract fragment: "The effects of the state ..."

  11. A comparative study of energy minimization methods for Markov random fields with smoothness-based priors.

    PubMed

    Szeliski, Richard; Zabih, Ramin; Scharstein, Daniel; Veksler, Olga; Kolmogorov, Vladimir; Agarwala, Aseem; Tappen, Marshall; Rother, Carsten

    2008-06-01

    Among the most exciting advances in early vision has been the development of efficient energy minimization algorithms for pixel-labeling tasks such as depth or texture computation. It has been known for decades that such problems can be elegantly expressed as Markov random fields, yet the resulting energy minimization problems have been widely viewed as intractable. Recently, algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: for example, such methods form the basis for almost all the top-performing stereo methods. However, the tradeoffs among different energy minimization algorithms are still not well understood. In this paper we describe a set of energy minimization benchmarks and use them to compare the solution quality and running time of several common energy minimization algorithms. We investigate three promising recent methods (graph cuts, LBP, and tree-reweighted message passing) in addition to the well-known older iterated conditional modes (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching, interactive segmentation, and denoising. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods. Benchmarks, code, images, and results are available at http://vision.middlebury.edu/MRF/.

  12. Smoothed Particle Hydrodynamics Continuous Boundary Force method for Navier-Stokes equations subject to Robin boundary condition

    SciTech Connect

    Pan, Wenxiao; Bao, Jie; Tartakovsky, Alexandre M.

    2014-02-15

    The Robin boundary condition for the Navier-Stokes equations is used to model slip conditions at fluid-solid boundaries. A novel continuous boundary force (CBF) method is proposed for solving the Navier-Stokes equations subject to the Robin boundary condition. In the CBF method, the Robin boundary condition is replaced by a homogeneous Neumann boundary condition, and a volumetric force term is added to the momentum conservation equation. The smoothed particle hydrodynamics (SPH) method is used to solve the resulting Navier-Stokes equations. We present solutions for two-dimensional and three-dimensional flows in domains bounded by flat and curved boundaries subject to various forms of the Robin boundary condition. The numerical accuracy and convergence are examined through comparison of the SPH-CBF results with finite difference or finite element solutions. Taking the no-slip boundary condition as a special case of the slip condition, we demonstrate that the SPH-CBF method accurately describes both no-slip and slip conditions.

  13. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    An understanding of UNICOS kernel internals is valuable; however, having the knowledge is only half the value. The other half comes from knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that use kernel information. In addition, algorithms, logic, and code for accessing kernel information are discussed. Code segments are provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information are also discussed.

  14. Regularization techniques for PSF-matching kernels - I. Choice of kernel basis

    NASA Astrophysics Data System (ADS)

    Becker, A. C.; Homrighausen, D.; Connolly, A. J.; Genovese, C. R.; Owen, R.; Bickerton, S. J.; Lupton, R. H.

    2012-09-01

    We review current methods for building point spread function (PSF)-matching kernels for the purposes of image subtraction or co-addition. Such methods use a linear decomposition of the kernel on a series of basis functions. The correct choice of these basis functions is fundamental to the efficiency and effectiveness of the matching: the chosen bases should represent the underlying signal using a reasonably small number of shapes, and/or have a minimum number of user-adjustable tuning parameters. We examine methods whose bases comprise multiple Gauss-Hermite polynomials, as well as a form-free basis composed of delta-functions. Kernels derived from delta-functions are unsurprisingly shown to be more expressive; they are able to take more general shapes and perform better in situations where sum-of-Gaussian methods are known to fail. However, due to its many degrees of freedom (the maximum number allowed by the kernel size), this basis tends to overfit the problem and yields noisy kernels having large variance. We introduce a new technique to regularize these delta-function kernel solutions, which bridges the gap between the generality of delta-function kernels and the compactness of sum-of-Gaussian kernels. Through this regularization we are able to create general kernel solutions that represent the intrinsic shape of the PSF-matching kernel with only one degree of freedom, the strength of the regularization λ. The role of λ is effectively to exchange variance in the resulting difference image for variance in the kernel itself. We examine considerations in choosing the value of λ, including statistical risk estimators and the ability of the solution to predict solutions for adjacent areas. Both of these suggest moderate strengths of λ, between 0.1 and 1.0, although this optimization is likely data-set dependent. This model allows for flexible representations of the convolution kernel that have significant predictive ability and will prove useful in implementing

  15. Smooth Sailing.

    ERIC Educational Resources Information Center

    Price, Beverley; Pincott, Maxine; Rebman, Ashley; Northcutt, Jen; Barsanti, Amy; Silkunas, Betty; Brighton, Susan K.; Reitz, David; Winkler, Maureen

    1999-01-01

    Presents discipline tips from several teachers to keep classrooms running smoothly all year. Some of the suggestions include the following: a bear-cave warning system, peer mediation, a motivational mystery, problem students acting as the teacher's assistant, a positive-behavior-reward chain, a hallway scavenger hunt (to ensure quiet passage…

  16. Reversed-phase vortex-assisted liquid-liquid microextraction: a new sample preparation method for the determination of amygdalin in oil and kernel samples.

    PubMed

    Hosseini, Mohammad; Heydari, Rouhollah; Alimoradi, Mohammad

    2015-02-01

    A novel, simple, and rapid reversed-phase vortex-assisted liquid-liquid microextraction coupled with high-performance liquid chromatography has been introduced for the extraction, clean-up, and preconcentration of amygdalin in oil and kernel samples. In this technique, deionized water was used as the extracting solvent. Unlike reversed-phase dispersive liquid-liquid microextraction, the dispersive solvent was eliminated in the proposed method. Various parameters that affect the extraction efficiency, such as the extracting solvent volume and its pH, and the vortex and centrifuging times, were evaluated and optimized. The calibration curve shows good linearity (r² = 0.9955) and precision (RSD < 5.2%) in the range of 0.07-20 μg/mL. The limit of detection and limit of quantitation were 0.02 and 0.07 μg/mL, respectively. The recoveries were in the range of 96.0-102.0%, with relative standard deviation values ranging from 4.0 to 5.1%. Unlike conventional extraction methods for plant extracts, no evaporation and re-solubilization steps were needed in the proposed technique.

  17. Kernel mucking in top

    SciTech Connect

    LeFebvre, W.

    1994-08-01

    For many years, the popular program top has aided system administrators in examining process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also with what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.

  18. Kernel-Based Reconstruction of Graph Signals

    NASA Astrophysics Data System (ADS)

    Romero, Daniel; Ma, Meng; Giannakis, Georgios B.

    2017-02-01

    A number of applications in engineering, social sciences, physics, and biology involve inference over networks. In this context, graph signals are widely encountered as descriptors of vertex attributes or features in graph-structured data. Estimating such signals at all vertices, given noisy observations of their values on a subset of vertices, has been extensively analyzed in the literature of signal processing on graphs (SPoG). This paper advocates kernel regression as a framework that generalizes popular SPoG modeling and reconstruction and expands their capabilities. Formulating signal reconstruction as a regression task on reproducing kernel Hilbert spaces of graph signals brings in benefits from statistical learning, offers fresh insights, and allows estimators to leverage richer forms of prior information than existing alternatives. A number of SPoG notions, such as bandlimitedness, graph filters, and the graph Fourier transform, are naturally accommodated in the kernel framework. Additionally, this paper capitalizes on the so-called representer theorem to devise simpler versions of existing Tikhonov-regularized estimators, and offers a novel probabilistic interpretation of kernel methods on graphs based on graphical models. Motivated by the challenges of selecting the bandwidth parameter in SPoG estimators or the kernel map in kernel-based methods, the paper further proposes two multi-kernel approaches with complementary strengths. Whereas the first enables estimation of the unknown bandwidth of bandlimited signals, the second allows for efficient graph filter selection. Numerical tests with synthetic as well as real data demonstrate the merits of the proposed methods relative to state-of-the-art alternatives.
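
    A minimal concrete instance of this framework is kernel ridge regression with a diffusion kernel built from the graph Laplacian; the sketch below reconstructs a signal on all vertices from samples on a subset. The kernel choice and the parameters beta and mu are assumptions for illustration, not the paper's estimators.

        import numpy as np
        from scipy.linalg import expm

        def graph_kernel_reconstruct(A, S, y_S, beta=1.0, mu=1e-3):
            """Kernel ridge reconstruction of a graph signal.

            A : symmetric adjacency matrix (n x n)
            S : list of observed vertex indices; y_S their noisy values
            """
            L = np.diag(A.sum(axis=1)) - A       # combinatorial Laplacian
            K = expm(-beta * L)                  # diffusion (heat) kernel
            alpha = np.linalg.solve(K[np.ix_(S, S)] + mu * np.eye(len(S)), y_S)
            return K[:, S] @ alpha               # estimate at every vertex

        # Tiny example: a 5-cycle observed on three vertices.
        # A = np.roll(np.eye(5), 1, 0) + np.roll(np.eye(5), -1, 0)
        # f_hat = graph_kernel_reconstruct(A, [0, 2, 4], np.array([1.0, -0.5, 0.3]))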

  19. An equatorially enhanced grid with smooth resolution distribution generated by a spring dynamics method

    NASA Astrophysics Data System (ADS)

    Iga, Shin-ichi

    2017-02-01

    An equatorially enhanced grid is applicable to atmospheric general circulation simulations, with better representation of the cumulus convection active in the tropics. This study improves the topology of previously proposed equatorially enhanced grids (Iga, 2015) [1], which had extremely large grid intervals around the poles. The grids proposed in this study use a triangular mesh and are generated by a spring dynamics method with stretching around singular points, which are connected to five or seven neighboring grid points. The latitudinal distribution of resolution is nearly proportional to the combination of the map factors of the Mercator, Lambert conformal conic, and polar stereographic projections. The resolution contrast between the equator and the pole is 2.3 to 4.5 for the sampled cases, much smaller than that of the LML grids. This improvement requires only a small amount of additional grid resources, less than 11% of the total. The proposed grids are also examined with shallow-water tests and are found to perform better than the previous LML grids.

  20. Smoothed Biasing Forces Yield Unbiased Free Energies with the Extended-System Adaptive Biasing Force Method.

    PubMed

    Lesage, Adrien; Lelièvre, Tony; Stoltz, Gabriel; Hénin, Jérôme

    2016-12-27

    We report a theoretical description and numerical tests of the extended-system adaptive biasing force method (eABF), together with an unbiased estimator of the free energy surface from eABF dynamics. Whereas the original ABF approach uses its running estimate of the free energy gradient as the adaptive biasing force, eABF is built on the idea that the exact free energy gradient is not necessary for efficient exploration and that the exact free energy can still be recovered separately with an appropriate estimator. eABF does not directly bias the collective coordinates of interest, but rather fictitious variables that are harmonically coupled to them; therefore it does not require second derivative estimates, making it easily applicable to a wider range of problems than ABF. Furthermore, the extended variables present a smoother, coarse-grain-like sampling problem on a mollified free energy surface, leading to faster exploration and convergence. We also introduce CZAR, a simple, unbiased free energy estimator for eABF trajectories. eABF/CZAR converges to the physical free energy surface faster than standard ABF for a wide range of parameters.
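
    As reported in the eABF literature (reproduced here from memory, so treat it as a sketch rather than a quotation from this paper), the CZAR estimate of the free energy gradient along the collective variable z is

        A'(z) = -\frac{1}{\beta}\,\frac{d}{dz}\ln\tilde{\rho}(z)
                + k\big(\langle \lambda \rangle_z - z\big),

    where rho-tilde(z) is the observed (biased) distribution of z, lambda is the harmonically coupled fictitious variable with force constant k, and <lambda>_z is its mean conditioned on z.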

  1. Kernel bandwidth optimization in spike rate estimation.

    PubMed

    Shimazaki, Hideaki; Shinomoto, Shigeru

    2010-08-01

    Kernel smoothers and time-histograms are classical tools for estimating an instantaneous rate of spike occurrences. We recently established a method for selecting the bin width of the time-histogram, based on the principle of minimizing the mean integrated square error (MISE) between the estimated rate and the unknown underlying rate. Here we apply the same optimization principle to kernel density estimation in selecting the width, or "bandwidth," of the kernel, and further extend the algorithm to allow a variable bandwidth, in conformity with the data. The variable kernel has the potential to accurately grasp non-stationary phenomena, such as abrupt changes in the firing rate, which we often encounter in neuroscience. In order to avoid possible overfitting that may take place due to excessive freedom, we introduce a stiffness constant for bandwidth variability. Our method automatically adjusts the stiffness constant, thereby adapting to the entire set of spike data. It is revealed that the classical kernel smoother may exhibit goodness-of-fit comparable to, or even better than, that of modern sophisticated rate estimation methods, provided that the bandwidth is selected properly for a given set of spike data, according to the optimization methods presented here.
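
    The MISE-minimization principle described above can be illustrated with standard least-squares cross-validation for a fixed Gaussian bandwidth, which minimizes an unbiased estimate of the MISE up to a constant. This is a generic sketch of the principle, not the authors' exact fixed- or variable-bandwidth estimator.

      import numpy as np

      def gauss(u, h):
          return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2.0 * np.pi))

      def lscv_bandwidth(t, candidates):
          n = len(t)
          d = t[:, None] - t[None, :]          # pairwise spike-time differences
          best_h, best_score = None, np.inf
          for h in candidates:
              # Closed-form integral of the squared Gaussian KDE.
              int_f2 = gauss(d, np.sqrt(2.0) * h).sum() / n**2
              # Leave-one-out average of the estimate at the data points.
              loo = (gauss(d, h).sum() - n * gauss(0.0, h)) / (n * (n - 1.0))
              score = int_f2 - 2.0 * loo       # unbiased MISE estimate + const.
              if score < best_score:
                  best_h, best_score = h, score
          return best_h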

  2. Analog forecasting with dynamics-adapted kernels

    NASA Astrophysics Data System (ADS)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
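
    A bare-bones version of the kernel-weighted analog ensemble for a scalar time series might look as follows; the embedding dimension q and kernel scale eps are illustrative assumptions, and the directional vector-field terms and Nystrom/Laplacian-pyramid extensions discussed in the paper are omitted.

      import numpy as np

      def delay_embed(x, q):
          # Takens delay map: row j holds (x[q-1+j], x[q-2+j], ..., x[j]).
          return np.stack([x[i:len(x) - q + 1 + i] for i in range(q)][::-1],
                          axis=1)

      def kernel_analog_forecast(history, current, lead, q=8, eps=0.5):
          # current: length-q delay vector of the initial data, newest first.
          X = delay_embed(history, q)
          X = X[:len(X) - lead]                # keep analogs with a known future
          d2 = ((X - current) ** 2).sum(axis=1)
          w = np.exp(-d2 / eps)                # Gaussian similarity kernel
          w /= w.sum()
          future = history[q - 1 + lead:q - 1 + lead + len(X)]
          return w @ future                    # weighted ensemble of analog futures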

  3. A Kernel-based Account of Bibliometric Measures

    NASA Astrophysics Data System (ADS)

    Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji

    The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of 'relativity,' or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.

  4. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) represent a group of secondary metabolic products in plants. AQS occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by an enzymatic browning reaction in Chinese chestnut kernels. To find out whether a non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained a high amount of AQS. Thus, we confirmed that AQS can be produced during both enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.

  5. A smoothed finite element method for analysis of anisotropic large deformation of passive rabbit ventricles in diastole.

    PubMed

    Jiang, Chen; Liu, Gui-Rong; Han, Xu; Zhang, Zhi-Qian; Zeng, Wei

    2015-01-01

    The smoothed FEM (S-FEM) is extended for the first time to explore the behavior of 3D anisotropic large deformation of rabbit ventricles during the passive filling process in diastole. Because of the incompressibility of myocardium, a special method called selective face-based/node-based S-FEM using four-node tetrahedral elements (FS/NS-FEM-TET4) is adopted in order to avoid volumetric locking. To validate the proposed FS/NS-FEM-TET4 algorithms, the 3D Lame problem is implemented. The performance contest results show that our FS/NS-FEM-TET4 is accurate, free of volumetric locking, and less sensitive to mesh distortion than standard linear FEM, owing to the absence of isoparametric mapping. Indeed, the efficiency of FS/NS-FEM-TET4 is comparable with that of higher-order FEM, such as 10-node tetrahedral elements. The proposed method for the Holzapfel myocardium hyperelastic strain energy is also validated by simple shear tests, through comparison with outcomes reported in available references. Finally, FS/NS-FEM-TET4 is applied to the passive filling of MRI-based rabbit ventricles with fiber architecture derived from a rule-based algorithm to demonstrate its efficiency. Hence, we conclude that FS/NS-FEM-TET4 is a promising alternative to FEM in passive cardiac mechanics.

  6. Investigation of calcium antagonist-L-type calcium channel interactions by a vascular smooth muscle cell membrane chromatography method.

    PubMed

    Du, Hui; He, Jianyu; Wang, Sicen; He, Langchong

    2010-07-01

    The dissociation equilibrium constant (K_D) is an important affinity parameter for studying drug-receptor interactions. A vascular smooth muscle (VSM) cell membrane chromatography (CMC) method was developed for determination of the K_D values for calcium antagonist-L-type calcium channel (L-CC) interactions. VSM cells, obtained by primary culture from rat thoracic aortas, were used for preparation of the cell membrane stationary phase in the VSM/CMC model. All measurements were performed with spectrophotometric detection (237 nm) at 37 °C. The K_D values obtained using frontal analysis were 3.36 × 10⁻⁶ M for nifedipine, 1.34 × 10⁻⁶ M for nimodipine, 6.83 × 10⁻⁷ M for nitrendipine, 1.23 × 10⁻⁷ M for nicardipine, 1.09 × 10⁻⁷ M for amlodipine, and 8.51 × 10⁻⁸ M for verapamil. This affinity rank order obtained from the VSM/CMC method had a strong positive correlation with that obtained from a radioligand binding assay. The location of the binding region was examined by displacement experiments using nitrendipine as a mobile-phase additive. It was found that verapamil occupied a class of binding sites on L-CCs different from those occupied by nitrendipine. In addition, nicardipine, amlodipine, and nitrendipine competed directly at a single common binding site. The studies showed that CMC can be applied to the investigation of drug-receptor interactions.

  7. A Real-Time Orbit Determination Method for Smooth Transition from Optical Tracking to Laser Ranging of Debris

    PubMed Central

    Li, Bin; Sang, Jizhang; Zhang, Zhongping

    2016-01-01

    A critical requirement for achieving high efficiency in debris laser tracking is to have sufficiently accurate orbit predictions (OP) in both the pointing direction (better than 20 arc seconds) and the distance from the tracking station to the debris objects, with the former more important than the latter because of the narrow laser beam. When the two-line element (TLE) is used to provide the orbit predictions, the resultant pointing errors are usually on the order of tens to hundreds of arc seconds. In practice, therefore, angular observations of debris objects are first collected using an optical tracking sensor, and then used to guide the laser beam pointing to the objects. The manual guidance may cause interruptions to the laser tracking, and consequently loss of valuable laser tracking data. This paper presents a real-time orbit determination (OD) and prediction method to realize smooth and efficient debris laser tracking. The method uses TLE-computed positions and angles over a short arc of less than 2 min as observations in an OD process in which simplified force models are considered. After the OD converges, the OP is performed from the last observation epoch to the end of the tracking pass. Simulation and real tracking data processing results show that the pointing prediction errors are usually less than 10″ and the distance errors less than 100 m; the prediction accuracy is therefore sufficient for blind laser tracking. PMID:27347958

  8. Robotic Intelligence Kernel: Architecture

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  9. Robotic Intelligence Kernel: Visualization

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.

  10. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information beyond summary statistics. Calculating entropy involves estimating probability density functions, which can be accomplished effectively using kernel density methods. Kernel density estimation has been widely studied, and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Rényi entropy, which is useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy, which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
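
    For a Gaussian kernel, the quadratic (Rényi) entropy mentioned above has a closed-form plug-in estimate, since the integral of the squared KDE reduces to an average of Gaussians over all pairwise differences; a one-dimensional sketch (the bandwidth h is assumed given):

      import numpy as np

      def quadratic_renyi_entropy(x, h):
          # H2 = -log( integral p(x)^2 dx ), with p replaced by a Gaussian KDE.
          n = len(x)
          d = x[:, None] - x[None, :]
          s = np.sqrt(2.0) * h          # bandwidth of the pairwise Gaussian
          ip = np.exp(-0.5 * (d / s) ** 2).sum() / (n * n * s * np.sqrt(2.0 * np.pi))
          return -np.log(ip)            # ip is the "information potential"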

  11. Discrimination of Maize Haploid Seeds from Hybrid Seeds Using Vis Spectroscopy and Support Vector Machine Method.

    PubMed

    Liu, Jin; Guo, Ting-ting; Li, Hao-chuan; Jia, Shi-qiang; Yan, Yan-lu; An, Dong; Zhang, Yao; Chen, Shao-jiang

    2015-11-01

    Doubled haploid (DH) lines are routinely applied in the hybrid maize breeding programs of many institutes and companies for their advantages of complete homozygosity and short breeding cycle length. A key issue in this approach is an efficient screening system to identify haploid kernels among the hybrid kernels crossed with the inducer. At present, haploid kernel selection is carried out manually using the "red-crown" kernel trait (the haploid kernel has a non-pigmented embryo and pigmented endosperm) controlled by the R1-nj gene. Manual selection is time-consuming and unreliable. Furthermore, the color of the kernel embryo is concealed by the pericarp. Here, we establish a novel approach for identifying maize haploid kernels based on visible (Vis) spectroscopy and support vector machine (SVM) pattern recognition technology. The diffuse transmittance spectra of individual kernels (141 haploid kernels and 141 hybrid kernels from 9 genotypes) were collected using a portable UV-Vis spectrometer and integrating sphere. The raw spectral data were preprocessed using smoothing and vector normalization methods. The desired feature wavelengths were selected based on the results of the Kolmogorov-Smirnov test. Wavelengths with p values above 0.05 were eliminated because the distributions of absorbance data at these wavelengths showed no significant difference between haploid and hybrid kernels. Principal component analysis was then performed to reduce the number of variables. The SVM model was evaluated by 9-fold cross-validation. In each round, samples of one genotype were used as the testing set, while those of the other genotypes were used as the training set. The mean rate of correct discrimination was 92.06%. This result demonstrates the feasibility of using Vis spectroscopy to identify haploid maize kernels. The method would help develop a rapid and accurate automated screening system for haploid kernels.
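
    Under the stated workflow (smoothing, vector normalization, dimension reduction, SVM, leave-one-genotype-out validation), a scikit-learn pipeline could be assembled roughly as below; the filter settings, component count, and the variables X, y, and genotype_ids are assumptions, not values from the paper.

      import numpy as np
      from scipy.signal import savgol_filter
      from sklearn.decomposition import PCA
      from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import FunctionTransformer
      from sklearn.svm import SVC

      def preprocess(spectra):
          # Smooth each spectrum, then vector-normalize it to unit length.
          s = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
          return s / np.linalg.norm(s, axis=1, keepdims=True)

      model = make_pipeline(FunctionTransformer(preprocess),
                            PCA(n_components=10),
                            SVC(kernel="rbf"))
      # Leave-one-genotype-out validation, mirroring the abstract's 9 rounds:
      # cross_val_score(model, X, y, groups=genotype_ids, cv=LeaveOneGroupOut())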

  12. Geometric tree kernels: classification of COPD from airway tree geometry.

    PubMed

    Feragen, Aasa; Petersen, Jens; Grimm, Dominik; Dirksen, Asger; Pedersen, Jesper Holst; Borgwardt, Karsten; de Bruijne, Marleen

    2013-01-01

    Methodological contributions: This paper introduces a family of kernels for analyzing (anatomical) trees endowed with vector-valued measurements made along the tree. While state-of-the-art graph and tree kernels use combinatorial tree/graph structure with discrete node and edge labels, the kernels presented in this paper can include geometric information such as branch shape, branch radius, or other vector-valued properties. In addition to being flexible in their ability to model different types of attributes, the presented kernels are computationally efficient, and some of them can easily be computed for large datasets (N ≈ 10,000) of trees with 30-600 branches. Combining the kernels with standard machine learning tools enables us to analyze the relation between disease and anatomical tree structure and geometry. Experimental results: The kernels are used to compare airway trees segmented from low-dose CT, endowed with branch shape descriptors and airway wall area percentage measurements made along the tree. Using kernelized hypothesis testing we show that the geometric airway trees are significantly differently distributed in patients with Chronic Obstructive Pulmonary Disease (COPD) than in healthy individuals. The geometric tree kernels also give a significant increase in the classification accuracy of COPD from geometric tree structure endowed with airway wall thickness measurements, in comparison with state-of-the-art methods, giving further insight into the relationship between airway wall thickness and COPD. Software: Software for computing kernels and statistical tests is available at http://image.diku.dk/aasa/software.php.

  13. Smoothed particle hydrodynamics method applied to pulsatile flow inside a rigid two-dimensional model of left heart cavity.

    PubMed

    Shahriari, S; Kadem, L; Rogers, B D; Hassan, I

    2012-11-01

    This paper aims to extend the application of smoothed particle hydrodynamics (SPH), a meshfree particle method, to simulate flow inside a model of the heart's left ventricle (LV). This work is considered the first attempt to simulate flow inside a heart cavity using a meshfree particle method. Simulating this kind of flow, characterized by high pulsatility and moderate Reynolds number, using SPH is challenging. As a consequence, validation of the computational code using benchmark cases is required prior to simulating the flow inside a model of the LV. In this work, this is accomplished by simulating an unsteady oscillating flow (pressure amplitude A = 2500 N/m³ and Womersley number W_o = 16) and the steady lid-driven cavity flow (Re = 3200, 5000). The results are compared against analytical solutions and reference data to assess convergence. Then, both benchmark cases are combined and a pulsatile jet in a cavity is simulated, and the results are compared with the finite volume method. Here, an approach to deal with inflow and outflow boundary conditions is introduced. Finally, pulsatile inlet flow in a rigid model of the LV is simulated. The results demonstrate the ability of SPH to model complex cardiovascular flows and to track the history of fluid properties. Some interesting features of SPH are also demonstrated in this study, including the relation between particle resolution and sound speed for controlling compressibility effects, and the order of convergence of SPH simulations, which is consistently demonstrated to be between first order and second order at the moderate Reynolds numbers investigated.

  14. High speed sorting of Fusarium-damaged wheat kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Recent studies have found that resistance to Fusarium fungal infection can be inherited in wheat from one generation to another. However, there is not yet a cost-effective method available for separating Fusarium-damaged wheat kernels from undamaged kernels, so that wheat breeders can take advantage of...

  15. A method for three-dimensional quantification of vascular smooth muscle orientation: application in viable murine carotid arteries.

    PubMed

    Spronck, Bart; Megens, Remco T A; Reesink, Koen D; Delhaas, Tammo

    2016-04-01

    When studying in vivo arterial mechanical behaviour using constitutive models, smooth muscle cells (SMCs) should be considered, since they play an important role in regulating arterial vessel tone. Current constitutive models assume a strictly circumferential SMC orientation, without any dispersion. We hypothesised that SMC orientation would show considerable dispersion in three dimensions and that helical dispersion would be greater than transversal dispersion. To test these hypotheses, we developed a method to quantify the 3D orientation of arterial SMCs. Fluorescently labelled SMC nuclei of left and right carotid arteries of ten mice were imaged using two-photon laser scanning microscopy. Arteries were imaged at a range of luminal pressures. 3D image processing was used to identify individual nuclei and their orientations. SMCs were found to be arranged in two distinct layers. Orientations were quantified by fitting a Bingham distribution to the observed orientations. As hypothesised, orientation dispersion was much larger helically than transversally. With increasing luminal pressure, transversal dispersion decreased significantly, whereas helical dispersion remained unaltered. Additionally, SMC orientations showed a statistically significant (p < 0.05) mean right-handed helix angle in both left and right arteries and in both layers, which is a relevant finding from a developmental biology perspective. In conclusion, vascular SMC orientation (1) can be quantified in 3D; (2) shows considerable dispersion, predominantly in the helical direction; and (3) has a distinct right-handed helical component in both left and right carotid arteries. The obtained quantitative distribution data are instrumental for constitutive modelling of the artery wall and illustrate the merit of our method.

  16. Fractal Weyl law for Linux Kernel architecture

    NASA Astrophysics Data System (ADS)

    Ermann, L.; Chepelianskii, A. D.; Shepelyansky, D. L.

    2011-01-01

    We study the properties of the spectrum and eigenstates of the Google matrix of a directed network formed by the procedure calls in the Linux Kernel. Our results, obtained for various versions of the Linux Kernel, show that the spectrum is characterized by the fractal Weyl law established recently for systems of quantum chaotic scattering and the Perron-Frobenius operators of dynamical maps. The fractal Weyl exponent is found to be ν ≈ 0.65, which corresponds to a fractal dimension of the network d ≈ 1.3. An independent computation of the fractal dimension by the cluster growing method, generalized for directed networks, gives a close value d ≈ 1.4. The eigenmodes of the Google matrix of the Linux Kernel are localized on certain principal nodes. We argue that the fractal Weyl law should be generic for directed networks with fractal dimension d < 2.

  17. Gaussian kernel width optimization for sparse Bayesian learning.

    PubMed

    Mohsenzadeh, Yalda; Sheikhzadeh, Hamid

    2015-04-01

    Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods depend on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as those of the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze its performance. The experimental results on commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters.

  18. Choosing parameters of kernel subspace LDA for recognition of face images under pose and illumination variations.

    PubMed

    Huang, Jian; Yuen, Pong C; Chen, Wen-Sheng; Lai, Jian Huang

    2007-08-01

    This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distributions by mapping the input space to a high-dimensional feature space. Several recognition algorithms, such as kernel principal component analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA, have been developed in the last five years. The experimental results show that the kernel-based method is a good and feasible approach to tackle pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which highly affects the generalization capability and stability of kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which builds on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin maximization criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation of the generalization performance along the pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with the existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.

  19. Computing the roots of complex orthogonal and kernel polynomials

    SciTech Connect

    Saylor, P.E.; Smolarski, D.C.

    1988-01-01

    A method is presented to compute the roots of complex orthogonal and kernel polynomials. An important application of complex kernel polynomials is the acceleration of iterative methods for the solution of nonsymmetric linear equations. In the real case, the roots of orthogonal polynomials coincide with the eigenvalues of the Jacobi matrix, a symmetric tridiagonal matrix obtained from the defining three-term recurrence relationship for the orthogonal polynomials. In the real case kernel polynomials are orthogonal. The Stieltjes procedure is an algorithm to compute the roots of orthogonal and kernel polynomials based on these facts. In the complex case, the Jacobi matrix generalizes to a Hessenberg matrix, the eigenvalues of which are roots of either orthogonal or kernel polynomials. The resulting algorithm generalizes the Stieltjes procedure. It may not be defined in the case of kernel polynomials, a consequence of the fact that they are orthogonal with respect to a nonpositive bilinear form. (Another consequence is that kernel polynomials need not be of exact degree.) A second algorithm that is always defined is presented for kernel polynomials. Numerical examples are described.
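
    For the real case summarized above, where the roots coincide with the eigenvalues of the symmetric tridiagonal Jacobi matrix, a short sketch (using monic Legendre polynomials as a check, so the computed roots are the Gauss-Legendre nodes) is:

      import numpy as np

      def orthopoly_roots(alpha, beta):
          # Three-term recurrence p_{k+1} = (x - alpha_k) p_k - beta_k p_{k-1};
          # the Jacobi matrix has alpha on the diagonal and sqrt(beta_k) off it.
          n = len(alpha)
          J = (np.diag(alpha)
               + np.diag(np.sqrt(beta[1:n]), 1)
               + np.diag(np.sqrt(beta[1:n]), -1))
          return np.linalg.eigvalsh(J)

      # Legendre on [-1, 1]: alpha_k = 0, beta_k = k^2 / (4k^2 - 1).
      k = np.arange(1, 6)
      alpha = np.zeros(6)
      beta = np.concatenate([[2.0], k**2 / (4.0 * k**2 - 1.0)])
      print(orthopoly_roots(alpha, beta))   # 6-point Gauss-Legendre nodes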

  20. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense.

  1. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased the overall kernel error to <10⁻¹¹ and the invasion time error to <5%.
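
    The two discretizations compared above differ only in whether the Gaussian density is integrated over each cell (via the normal CDF) or sampled at the cell center; a sketch for a circular 2-D Gaussian kernel on a unit grid (the truncation radius is an assumption):

      import numpy as np
      from scipy.stats import norm

      def dispersal_kernel(radius, sigma, cell_integration=True):
          if cell_integration:
              # Exact probability mass per cell from the normal CDF.
              edges = np.arange(2 * radius + 2) - radius - 0.5
              m = np.diff(norm.cdf(edges, scale=sigma))
          else:
              # Density sampled at cell centers; poor for small sigma.
              m = norm.pdf(np.arange(-radius, radius + 1), scale=sigma)
          k = np.outer(m, m)          # separable 2-D circular Gaussian
          return k / k.sum()          # renormalize mass lost to truncation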

  2. KERNEL PHASE IN FIZEAU INTERFEROMETRY

    SciTech Connect

    Martinache, Frantz

    2010-11-20

    The detection of high-contrast companions at small angular separation appears feasible in conventional direct images using the self-calibration properties of interferometric observable quantities. The friendly notion of closure phase, which is key to the recent observational successes of non-redundant aperture masking interferometry used with adaptive optics, appears to be one example of a wide family of observable quantities that are not contaminated by phase noise. In the high-Strehl regime, soon to be available thanks to the coming generation of extreme adaptive optics systems on ground-based telescopes, and already available from space, closure-phase-like information can be extracted from any direct image, even one taken with a redundant aperture. These new phase-noise-immune observable quantities, called kernel phases, are determined a priori from knowledge of the geometry of the pupil only. Re-analysis of archive data acquired with the Hubble Space Telescope NICMOS instrument using this new kernel-phase algorithm demonstrates the power of the method as it clearly detects and locates, with milliarcsecond precision, a known companion to a star at an angular separation less than the diffraction limit.

  3. Kernel machine SNP-set testing under multiple candidate kernels.

    PubMed

    Wu, Michael C; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M; Harmon, Quaker E; Lin, Xinyi; Engel, Stephanie M; Molldrem, Jeffrey J; Armistead, Paul M

    2013-04-01

    Joint testing for the cumulative effect of multiple single-nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large-scale genetic association studies. The kernel machine (KM)-testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori because this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest P-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power vs. using the best candidate kernel.

  4. Estimation of Smoothing Error in SBUV Profile and Total Ozone Retrieval

    NASA Technical Reports Server (NTRS)

    Kramarova, N. A.; Bhartia, P. K.; Frith, S. M.; Fisher, B. L.; McPeters, R. D.; Taylor, S.; Labow, G. J.

    2011-01-01

    Data from the Nimbus-4 and Nimbus-7 Solar Backscatter Ultraviolet (SBUV) instruments and seven of the NOAA series of SBUV/2 instruments, spanning 41 years, are being reprocessed using the V8.6 algorithm. The data are scheduled to be released by the end of August 2011. An important focus of the new algorithm is to estimate various sources of errors in the SBUV profile and total ozone retrievals. We discuss here the smoothing errors that describe the components of the profile variability that the SBUV observing system cannot measure. The SBUV(/2) instruments have a vertical resolution of 5 km in the middle stratosphere, decreasing to 8 to 10 km below the ozone peak and above 0.5 hPa. To estimate the smoothing effect of the SBUV algorithm, the actual statistics of the fine vertical structure of ozone profiles must be known. The covariance matrix of an ensemble of ozone profiles measured with high vertical resolution would be a formal representation of the actual ozone variability. We merged the MLS (version 3) and sonde ozone profiles to calculate the covariance matrix, which in the general case, for single-profile retrieval, may be a function of latitude and month. Using the averaging kernels of the SBUV(/2) measurements and the calculated total covariance matrix, one can estimate the smoothing errors for the SBUV ozone profiles. A method to estimate the smoothing effect of the SBUV algorithm is described, and the covariance matrices and averaging kernels are provided along with the SBUV(/2) ozone profiles. The magnitude of the smoothing error varies with altitude, latitude, season and solar zenith angle. The analysis of the smoothing errors, based on the SBUV(/2) monthly zonal mean time series, shows that the largest smoothing errors were detected in the troposphere, where they may be as large as 15-20%, and rapidly decrease with altitude. In the stratosphere above 40 hPa the smoothing errors are less than 5%, and between 10 and 1 hPa they are on the order of 1%.
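
    The smoothing error discussed here is conventionally expressed through the averaging-kernel matrix A and the covariance S_a of the true profile variability as S_s = (A - I) S_a (A - I)^T; a one-function sketch of that formal expression (A, Sa, and x_mean are placeholders for the SBUV quantities):

      import numpy as np

      def smoothing_error_covariance(A, Sa):
          # S_s = (A - I) Sa (A - I)^T; its diagonal is the per-layer
          # smoothing-error variance.
          I = np.eye(A.shape[0])
          return (A - I) @ Sa @ (A - I).T

      # Per-layer smoothing error in percent of the mean profile:
      # 100 * np.sqrt(np.diag(smoothing_error_covariance(A, Sa))) / x_mean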

  5. Estimation of smoothing error in SBUV profile and total ozone retrieval

    NASA Astrophysics Data System (ADS)

    Kramarova, N. A.; Bhartia, P. K.; Frith, S. M.; Fisher, B. L.; McPeters, R. D.; Taylor, S.; Labow, G. J.

    2011-12-01

    Data from the Nimbus-4 and Nimbus-7 Solar Backscatter Ultraviolet (SBUV) instruments and seven of the NOAA series of SBUV/2 instruments, spanning 41 years, are being reprocessed using the V8.6 algorithm. The data are scheduled to be released by the end of August 2011. An important focus of the new algorithm is to estimate various sources of errors in the SBUV profiles and total ozone retrievals. We discuss here the smoothing errors that describe the components of the profile variability that the SBUV observing system cannot measure. The SBUV(/2) instruments have a vertical resolution of 5 km in the middle stratosphere, decreasing to 8 to 10 km below the ozone peak and above 0.5 hPa. To estimate the smoothing effect of the SBUV algorithm, the actual statistics of the fine vertical structure of ozone profiles must be known. The covariance matrix of an ensemble of ozone profiles measured with high vertical resolution would be a formal representation of the actual ozone variability. We merged the MLS (version 3) and sonde ozone profiles to calculate the covariance matrix, which in the general case, for single-profile retrieval, may be a function of latitude and month. Using the averaging kernels of the SBUV(/2) measurements and the calculated total covariance matrix, one can estimate the smoothing errors for the SBUV ozone profiles. A method to estimate the smoothing effect of the SBUV algorithm is described, and the covariance matrices and averaging kernels are provided along with the SBUV(/2) ozone profiles. The magnitude of the smoothing error varies with altitude, latitude, season and solar zenith angle. The analysis of the smoothing errors, based on the SBUV(/2) monthly zonal mean time series, shows that the largest smoothing errors were detected in the troposphere, where they may be as large as 15-20%, and rapidly decrease with altitude. In the stratosphere above 40 hPa the smoothing errors are less than 5%, and between 10 and 1 hPa they are on the order of 1%.

  6. Kernel density estimation applied to bond length, bond angle, and torsion angle distributions.

    PubMed

    McCabe, Patrick; Korb, Oliver; Cole, Jason

    2014-05-27

    We describe the method of kernel density estimation (KDE) and apply it to molecular structure data. KDE is a quite general nonparametric statistical method suitable even for multimodal data. The method generates smooth probability density function (PDF) representations and finds application in diverse fields such as signal processing and econometrics. KDE appears to have been under-utilized as a method in molecular geometry analysis, chemo-informatics, and molecular structure optimization. The resulting probability densities have advantages over histograms and, importantly, are also suitable for gradient-based optimization. To illustrate KDE, we describe its application to chemical bond length, bond valence angle, and torsion angle distributions and show the ability of the method to model arbitrary torsion angle distributions.
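
    For torsion angles, one standard way to respect the 2*pi periodicity (not necessarily the authors' exact choice) is to replace the Gaussian kernel with a von Mises kernel, whose concentration kappa acts as an inverse squared bandwidth:

      import numpy as np
      from scipy.special import i0

      def von_mises_kde(theta, samples, kappa=50.0):
          # Each kernel integrates to 1 over [-pi, pi), so the estimate
          # is a smooth, periodic probability density.
          d = theta[:, None] - samples[None, :]
          k = np.exp(kappa * np.cos(d)) / (2.0 * np.pi * i0(kappa))
          return k.mean(axis=1)

      # density = von_mises_kde(np.linspace(-np.pi, np.pi, 361), torsions_rad)
      # (torsions_rad: observed torsion angles in radians, a placeholder)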

  7. Training Lp norm multiple kernel learning in the primal.

    PubMed

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved by utilizing an alternating optimization method in which one alternately solves SVMs in the dual and updates the kernel weights. Since dual and primal optimization can achieve the same aim, it is valuable to explore how to perform Lp-norm MKL in the primal. In this paper, we propose an Lp-norm multiple kernel learning algorithm in the primal, where we resort to alternating optimization: one cycle solves SVMs in the primal using the preconditioned conjugate gradient method, and the other cycle learns the kernel weights. It is interesting to note that the kernel weights in our method admit analytical solutions. Most importantly, the proposed method is well suited to the manifold regularization framework in the primal, since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we also carry out a theoretical analysis of multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity yields a particular type of kernel weights. Experiments on several datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method.
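
    The analytical weight update mentioned above is commonly written, for an lp-norm constraint on the weights theta, as theta_m proportional to ||w_m||^(2/(p+1)) with an lp normalization; a sketch of that single update step (the surrounding SVM solve is omitted, and this widely used closed form is an assumption, not necessarily the paper's exact expression):

      import numpy as np

      def update_kernel_weights(block_norms, p):
          # theta_m = ||w_m||^(2/(p+1)) / ( sum_k ||w_k||^(2p/(p+1)) )^(1/p),
          # the minimizer of the regularized objective s.t. ||theta||_p <= 1.
          norms = np.asarray(block_norms, dtype=float)
          num = norms ** (2.0 / (p + 1.0))
          den = (norms ** (2.0 * p / (p + 1.0))).sum() ** (1.0 / p)
          return num / den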

  8. Numerical solution of the nonlinear Schrödinger equation using smoothed-particle hydrodynamics.

    PubMed

    Mocz, Philip; Succi, Sauro

    2015-05-01

    We formulate a smoothed-particle hydrodynamics numerical method, traditionally used for the Euler equations for fluid dynamics in the context of astrophysical simulations, to solve the nonlinear Schrödinger equation in the Madelung formulation. The probability density of the wave function is discretized into moving particles, whose properties are smoothed by a kernel function. The traditional fluid pressure is replaced by a quantum pressure tensor, for which a robust discretization is found. We demonstrate our numerical method on a variety of numerical test problems involving the simple harmonic oscillator, soliton-soliton collision, Bose-Einstein condensates, collapsing singularities, and dark matter halos governed by the Gross-Pitaevskii-Poisson equation. Our method is conservative, applicable to unbounded domains, and is automatically adaptive in its resolution, making it well suited to study problems with collapsing solutions.

  9. Numerical solution of the nonlinear Schrödinger equation using smoothed-particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Mocz, Philip; Succi, Sauro

    2015-05-01

    We formulate a smoothed-particle hydrodynamics numerical method, traditionally used for the Euler equations for fluid dynamics in the context of astrophysical simulations, to solve the nonlinear Schrödinger equation in the Madelung formulation. The probability density of the wave function is discretized into moving particles, whose properties are smoothed by a kernel function. The traditional fluid pressure is replaced by a quantum pressure tensor, for which a robust discretization is found. We demonstrate our numerical method on a variety of numerical test problems involving the simple harmonic oscillator, soliton-soliton collision, Bose-Einstein condensates, collapsing singularities, and dark matter halos governed by the Gross-Pitaevskii-Poisson equation. Our method is conservative, applicable to unbounded domains, and is automatically adaptive in its resolution, making it well suited to study problems with collapsing solutions.

  10. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or...

  11. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored...

  12. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel,...

  13. A TWO-DIMENSIONAL METHOD OF MANUFACTURED SOLUTIONS BENCHMARK SUITE BASED ON VARIATIONS OF LARSEN'S BENCHMARK WITH ESCALATING ORDER OF SMOOTHNESS OF THE EXACT SOLUTION

    SciTech Connect

    Sebastian Schunert; Yousry Y. Azmy

    2011-05-01

    The quantification of the discretization error associated with the spatial discretization of the Discrete Ordinates (DO) equations in multidimensional Cartesian geometries is the central problem in error estimation of spatial discretization schemes for transport theory, as well as in computer code verification. Traditionally, fine-mesh solutions are employed as references, because analytical solutions only exist in the absence of scattering. This approach, however, is inadequate when the discretization error associated with the reference solution is not small compared to the discretization error associated with the mesh under scrutiny. Typically this situation occurs if the mesh of interest is only a couple of refinement levels away from the reference solution, or if the order of accuracy of the numerical method (and hence of the reference as well) is lower than expected. In this work we present a Method of Manufactured Solutions (MMS) benchmark suite with variable order of smoothness of the underlying exact solution for two-dimensional Cartesian geometries, which provides analytical solutions averaged over arbitrary orthogonal meshes for scattering and non-scattering media. It should be emphasized that the developed MMS benchmark suite, first, eliminates the aforementioned limitation of fine-mesh reference solutions, since it secures knowledge of the underlying true solution, and, second, allows for an arbitrary order of smoothness of the underlying exact solution. The latter is important because, even for smooth parameters and boundary conditions, the DO equations can feature exact solutions with limited smoothness. Moreover, the degree of smoothness is crucial for both the order of accuracy and the magnitude of the discretization error for any spatial discretization scheme.

  14. StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.

    PubMed

    Li, Chenhui; Baciu, George; Yu, Han

    2017-02-13

    Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heatmap. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.

  15. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review of existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of kernel selection in the analysis of high-dimensional medical images.

  16. Study of the Impact of Tissue Density Heterogeneities on 3-Dimensional Abdominal Dosimetry: Comparison Between Dose Kernel Convolution and Direct Monte Carlo Methods

    PubMed Central

    Dieudonné, Arnaud; Hobbs, Robert F.; Lebtahi, Rachida; Maurel, Fabien; Baechler, Sébastien; Wahl, Richard L.; Boubaker, Ariane; Le Guludec, Dominique; Sgouros, Georges; Gardin, Isabelle

    2014-01-01

    Dose kernel convolution (DK) methods have been proposed to speed up absorbed dose calculations in molecular radionuclide therapy. Our aim was to evaluate the impact of tissue density heterogeneities (TDH) on dosimetry when using a DK method and to propose a simple density-correction method. Methods: This study was conducted on 3 clinical cases: case 1, non-Hodgkin lymphoma treated with ¹³¹I-tositumomab; case 2, a neuroendocrine tumor treatment simulated with ¹⁷⁷Lu-peptides; and case 3, hepatocellular carcinoma treated with ⁹⁰Y-microspheres. Absorbed dose calculations were performed using a direct Monte Carlo approach accounting for TDH (3D-RD) and a DK approach (VoxelDose, or VD). For each individual voxel, the VD absorbed dose, D_VD, calculated assuming uniform density, was corrected for density, giving D_VDd. The average 3D-RD absorbed dose values, D_3DRD, were compared with D_VD and D_VDd using the relative difference Δ_VD/3DRD. At the voxel level, density-binned Δ_VD/3DRD and Δ_VDd/3DRD were plotted against density ρ and fitted with a linear regression. Results: The D_VD calculations showed good agreement with D_3DRD. Δ_VD/3DRD was less than 3.5%, except for the tumor of case 1 (5.9%) and the renal cortex of case 2 (5.6%). At the voxel level, the Δ_VD/3DRD range was 0%-14% for cases 1 and 2, and -3% to 7% for case 3. All 3 cases showed a linear relationship between voxel bin-averaged Δ_VD/3DRD and density ρ: case 1 (Δ = -0.56ρ + 0.62, R² = 0.93), case 2 (Δ = -0.91ρ + 0.96, R² = 0.99), and case 3 (Δ = -0.69ρ + 0.72, R² = 0.91). The density correction improved the agreement of the DK method with the Monte Carlo approach (Δ_VDd/3DRD < 1.1%), but to a lesser extent for the tumor of case 1 (3.1%). At the voxel level, the Δ_VDd/3DRD range decreased for the 3 clinical cases (case 1, -1% to 4%; case 2, -0.5% to 1.5%; and case 3, -1.5% to 2%). No linear relationship with density remained for cases 2 and 3, in contrast to case 1 (Δ = 0.41ρ - 0.38, R² = 0.88).

  17. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or for comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains, up to two orders of magnitude, are obtained with respect to kernels computed assuming Gaussian or circularised PSFs. A software package to compute these kernels is available at https://github.com/aboucaud/pypher
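
    In spirit, such a kernel is computed in Fourier space by a regularized (Wiener-type) division of the target PSF by the source PSF; the sketch below uses a simple flat penalty in place of the paper's tunable regularisation and assumes both PSFs are centered on the same pixel grid.

      import numpy as np

      def psf_matching_kernel(psf_source, psf_target, reg=1e-4):
          # Move the PSF centers to the array origin before the FFT.
          S = np.fft.fft2(np.fft.ifftshift(psf_source))
          T = np.fft.fft2(np.fft.ifftshift(psf_target))
          # Damp frequencies where the source PSF has little power,
          # instead of amplifying noise by plain division.
          K = T * np.conj(S) / (np.abs(S) ** 2 + reg * np.abs(S).max() ** 2)
          k = np.fft.fftshift(np.fft.ifft2(K).real)
          return k / k.sum()   # kernel such that psf_source * k ~ psf_target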

  18. Estimating the Bias of Local Polynomial Approximations Using the Peano Kernel

    SciTech Connect

    Blair, J., and Machorro, E.

    2012-03-22

    These presentation visuals define local polynomial approximations, give formulas for bias and random components of the error, and express bias error in terms of the Peano kernel. They further derive constants that give figures of merit, and show the figures of merit for 3 common weighting functions. The Peano kernel theorem yields estimates for the bias error for local-polynomial-approximation smoothing that are superior in several ways to the error estimates in the current literature.

  19. Spectrum-based kernel length estimation for Gaussian process classification.

    PubMed

    Wang, Liang; Li, Chuan

    2014-06-01

    Recent studies have shown that Gaussian process (GP) classification, a discriminative supervised learning approach, has achieved competitive performance in real applications compared with most state-of-the-art supervised learning methods. However, the problem of automatic model selection in GP classification, involving the kernel function form and the corresponding parameter values (which are unknown in advance), remains a challenge. To make GP classification a more practical tool, this paper presents a novel spectrum-analysis-based approach for model selection by refining the GP kernel function to match the given input data. Specifically, we target the problem of GP kernel length-scale estimation. Spectra are first calculated analytically from the kernel function itself using the autocorrelation theorem, as well as estimated numerically from the training data themselves. Then, the kernel length scale is automatically estimated by equating the two spectrum values, i.e., the kernel function spectrum is set equal to the estimated training data spectrum. Compared with the classical Bayesian method for kernel length-scale estimation via maximizing the marginal likelihood (which is time-consuming and can suffer from multiple local optima), extensive experimental results on various data sets show that our proposed method is both efficient and accurate.

  20. Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.

    PubMed

    Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan

    2016-11-01

    In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat (Triticum aestivum L.) and maize (Zea mays L.) data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% superiority over the corresponding single-environment analyses for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, complex marker main effects and marker-specific interaction effects.
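
    The nonlinear kernel at the heart of these models is a Gaussian kernel built from marker distances, typically scaled by their median so that the bandwidth h is dimensionless; a sketch (the scaling convention is an assumption, and the Bayesian machinery of RKHS KA/EB is omitted):

      import numpy as np

      def gaussian_genomic_kernel(M, h=1.0):
          # Squared Euclidean distances between lines (rows of the marker
          # matrix M), computed via inner products.
          sq = (M ** 2).sum(axis=1)
          d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * M @ M.T, 0.0)
          # Median off-diagonal distance sets the scale of the kernel.
          med = np.median(d2[np.triu_indices_from(d2, k=1)])
          return np.exp(-h * d2 / med)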

  1. MULTIVARIATE KERNEL PARTITION PROCESS MIXTURES

    PubMed Central

    Dunson, David B.

    2013-01-01

    Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate random effects distributions, this article proposes a kernel partition process (KPP) in which the cluster allocation varies for different parameters. The KPP is shown to be the driving measure for a multivariate ordered Chinese restaurant process that induces a highly-flexible dependence structure in local clustering. This structure allows the relative locations of the random effects to inform the clustering process, with spatially-proximal random effects likely to be assigned the same cluster index. An exact block Gibbs sampler is developed for posterior computation, avoiding truncation of the infinite measure. The methods are applied to hormone curve data, and a dependent KPP is proposed for classification from functional predictors. PMID:24478563

  2. Prediction of kernel density of corn using single-kernel near infrared spectroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Corn hardness is an important property for dry- and wet-millers, food processors, and corn breeders developing hybrids for specific markets. Of the several methods used to measure hardness, kernel density measurements are one of the more repeatable methods to quantify hardness. Near infrared spec...

  3. The Adaptive Kernel Neural Network

    DTIC Science & Technology

    1989-10-01

    A neural network architecture for clustering and classification is described. The Adaptive Kernel Neural Network (AKNN) is a density estimation...classification layer. The AKNN retains the inherent parallelism common in neural network models. Its relationship to the kernel estimator allows the network to

  4. Diffusion tensor smoothing through weighted Karcher means

    PubMed Central

    Carmichael, Owen; Chen, Jun; Paul, Debashis; Peng, Jie

    2014-01-01

    Diffusion tensor magnetic resonance imaging (MRI) quantifies the spatial distribution of water diffusion at each voxel on a regular grid of locations in a biological specimen by diffusion tensors, 3 × 3 positive definite matrices. Removal of noise from DTI is an important problem due to the high scientific relevance of DTI and the relatively low signal-to-noise ratio it provides. Leading approaches to this problem amount to estimation of weighted Karcher means of diffusion tensors within spatial neighborhoods, under various metrics imposed on the space of tensors. However, it is unclear how the behavior of these estimators varies with the magnitude of DTI sensor noise (the noise resulting from the thermal effects of MRI scanning) as well as the geometric structure of the underlying diffusion tensor neighborhoods. In this paper, we combine theoretical analysis, empirical analysis of simulated DTI data, and empirical analysis of real DTI scans to compare the noise removal performance of three kernel-based DTI smoothers that are based on Euclidean, log-Euclidean, and affine-invariant metrics. The results suggest, contrary to conventional wisdom, that imposing a simplistic Euclidean metric may in fact provide comparable or superior noise removal, especially in relatively unstructured regions and/or in the presence of moderate to high levels of sensor noise. In contrast, log-Euclidean and affine-invariant metrics may lead to better noise removal in highly structured anatomical regions, especially when the sensor noise is of low magnitude. These findings emphasize the importance of considering the interplay of sensor noise magnitude and tensor field geometric structure when assessing diffusion tensor smoothing options. They also point to the necessity for continued development of smoothing methods that perform well across a large range of scenarios. PMID:25419264
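
    For concreteness, a minimal sketch of one of the three smoothers compared here: a weighted Karcher mean under the log-Euclidean metric, which is simply the matrix exponential of the weighted mean of matrix logarithms. The eigendecomposition helpers and the suggested Gaussian spatial weights are illustrative.

        import numpy as np

        def spd_log(T):
            vals, vecs = np.linalg.eigh(T)        # T symmetric positive definite
            return (vecs * np.log(vals)) @ vecs.T

        def spd_exp(S):
            vals, vecs = np.linalg.eigh(S)
            return (vecs * np.exp(vals)) @ vecs.T

        def log_euclidean_mean(tensors, weights):
            # Weighted Karcher mean under the log-Euclidean metric.
            w = np.asarray(weights, dtype=float)
            w = w / w.sum()
            return spd_exp(sum(wi * spd_log(T) for wi, T in zip(w, tensors)))

        # weights would typically come from a spatial kernel, e.g.
        # exp(-d_i**2 / s**2) for each neighbor's distance d_i to the center voxel.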

  5. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    PubMed

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machine (SVM)-based localized multiple kernel learning (LMKL) using alternating optimization between the standard SVM solvers, with the local combination of base kernels, and the sample-specific kernel weights. The advantage of alternating optimization, developed from state-of-the-art MKL, is the SVM-tied overall complexity and the simultaneous optimization of both the kernel weights and the classifier. Unfortunately, in LMKL the sample-specific character makes updating the kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be obtained independently by solving their exclusive sample-wise objectives, either by linear programming (for the l1-norm) or with closed-form solutions (for the lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generalization to the test set, we introduce neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  6. Determining the Parameters of the Hereditary Kernels of Nonlinear Viscoelastic Isotropic Materials in Torsion

    NASA Astrophysics Data System (ADS)

    Golub, V. P.; Ragulina, V. S.; Fernati, P. V.

    2015-03-01

    A method for determining the parameters of the hereditary kernels for nonlinear viscoelastic materials is tested in conditions of pure torsion. A Rabotnov-type model is chosen. The parameters of the hereditary kernels are determined by fitting discrete values of the kernels found using a similarity condition. The discrete values of the kernels in the zone of singularity occurring in short-term tests are found using weight functions. The Abel kernel, a combination of power and exponential functions, and a fractional-exponential function are considered.

  7. Seismic hazard assessment in Central Asia using smoothed seismicity approaches

    NASA Astrophysics Data System (ADS)

    Ullah, Shahid; Bindi, Dino; Zuccolo, Elisa; Mikhailova, Natalia; Danciu, Laurentiu; Parolai, Stefano

    2014-05-01

    Central Asia has a long history of frequent large-to-moderate seismicity and is therefore considered one of the world's most seismically active regions, with a high hazard level. In the hazard map produced at global scale by the GSHAP project in 1999 (Giardini, 1999), Central Asia is characterized by peak ground accelerations with a 475-year return period as high as 4.8 m/s². Central Asia was therefore selected as a target area for the EMCA project (Earthquake Model Central Asia), a regional project of GEM (Global Earthquake Model). In the framework of EMCA, a new generation of seismic hazard maps is foreseen in terms of macro-seismic intensity, in turn to be used to obtain seismic risk maps for the region. To this end, an Intensity Prediction Equation (IPE) has been developed for the region based on the distribution of intensity data for different earthquakes that occurred in Central Asia since the end of the 19th century (Bindi et al. 2011). The same observed intensity distribution has been used to assess the seismic hazard following the site approach (Bindi et al. 2012). In this study, we present the probabilistic seismic hazard assessment of Central Asia in terms of MSK-64 intensity based on two kernel estimation methods. We consider the smoothed seismicity approaches of Frankel (1995), modified to use the adaptive kernel proposed by Stock and Smith (2002), and of Woo (1996), modified to consider a grid of sites and to estimate a separate bandwidth for each site. Activity rate maps from the Frankel approach are shown, illustrating the effects of fixed and adaptive kernels. The hazard is estimated for rock site conditions based on a 10% probability of exceedance in 50 years. A maximum intensity of about 9 is observed in the Hindukush region.
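
    A minimal sketch of the fixed-kernel (Frankel-style) smoothing step, assuming earthquake counts already binned on a regular grid; the correlation distance c is in grid units, and the adaptive variant would replace the single c with a per-cell bandwidth.

        import numpy as np

        def smoothed_activity_rate(counts, c=2.0):
            # Frankel (1995): n_i' = sum_j n_j exp(-d_ij^2/c^2) / sum_j exp(-d_ij^2/c^2)
            # (dense O(N^2) weight matrix; fine for small grids)
            ny, nx = counts.shape
            yy, xx = np.mgrid[0:ny, 0:nx]
            cells = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
            d2 = ((cells[:, None, :] - cells[None, :, :]) ** 2).sum(axis=-1)
            W = np.exp(-d2 / c ** 2)              # fixed Gaussian kernel
            rates = (W @ counts.ravel()) / W.sum(axis=1)
            return rates.reshape(ny, nx)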

  8. Robotic intelligence kernel

    DOEpatents

    Bruemmer, David J.

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between operator intervention and robot initiative, and may include multiple levels, with at least a teleoperation mode configured to maximize operator intervention and minimize robot initiative, and an autonomous mode configured to minimize operator intervention and maximize robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.

  9. Flexible Kernel Memory

    PubMed Central

    Nowicki, Dimitri; Siegelmann, Hava

    2010-01-01

    This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, is in feature space analogous to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation, is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on a series of morphed faces. PMID:20552013

  10. A new parameter identification method to obtain change in smooth muscle contraction state due to mechanical skin irritation

    NASA Astrophysics Data System (ADS)

    Bauer, Daniela

    2005-03-01

    A light scratch with a needle induces histamine and neuropeptide release on the line of stroke and in the surrounding tissue. Histamine and neuropeptides are vasodilators. They create vasodilation by changing the contraction state of the vascular smooth muscles and hence vessel compliance. Smooth muscle contraction state is very difficult to measure. We propose an identification procedure that determines the change in compliance. The procedure is based on numerical and experimental results. Blood flow is measured by Laser Doppler Velocimetry. Numerical data are obtained from a continuous model of hierarchically arranged porous media representing the vascular network [1]. We show that compliance increases after the stroke in the entire tissue. Then, compliance decreases in the surrounding tissue, while it keeps increasing on the line of stroke. Hence, blood is transported from the surrounding tissue to the line of stroke, and a higher blood volume on the line of stroke is obtained. [1] Bauer, D., Grebe, R., Ehrlacher, A., 2004. A three layer continuous model of porous media to describe the first phase of skin irritation. J. Theoret. Bio., in press.

  11. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    SciTech Connect

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle

    2016-08-10

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N²) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded widespread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. Lastly, the results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  12. Progress in smooth particle hydrodynamics

    SciTech Connect

    Wingate, C.A.; Dilts, G.A.; Mandell, D.A.; Crotzer, L.A.; Knapp, C.E.

    1998-07-01

    Smooth Particle Hydrodynamics (SPH) is a meshless, Lagrangian numerical method for hydrodynamics calculations where calculational elements are fuzzy particles which move according to the hydrodynamic equations of motion. Each particle carries local values of density, temperature, pressure and other hydrodynamic parameters. A major advantage of SPH is that it is meshless, thus large deformation calculations can be easily done with no connectivity complications. Interface positions are known and there are no problems with advecting quantities through a mesh that typical Eulerian codes have. These underlying SPH features make fracture physics easy and natural, and in fact much of the applications work revolves around simulating fracture. Debris particles from impacts can be easily transported across large voids with SPH. While SPH has considerable promise, there are some problems inherent in the technique that have so far limited its usefulness. The most serious problem is the well known instability in tension leading to particle clumping and numerical fracture. Another problem is that the SPH interpolation is only correct when particles are uniformly spaced a half particle apart, leading to incorrect strain rates, accelerations and other quantities for general particle distributions. SPH calculations are also sensitive to particle locations. The standard artificial viscosity treatment in SPH leads to spurious viscosity in shear flows. This paper will demonstrate solutions for these problems that the authors and others have been developing. The most promising is to replace the SPH interpolant with the moving least squares (MLS) interpolant invented by Lancaster and Salkauskas in 1981. SPH and MLS are closely related, with MLS being essentially SPH with corrected particle volumes. When formulated correctly, MLS is conservative, stable in both compression and tension, does not have the SPH boundary problems and is not sensitive to particle placement. The other approach to
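
    For reference, a minimal sketch of the standard (spherical) SPH machinery the entry discusses, using the common M4 cubic-spline kernel in 3-D and the density summation; the particle arrays and the single smoothing length h are illustrative.

        import numpy as np

        def cubic_spline_W(r, h):
            # Standard M4 cubic-spline SPH kernel in 3-D (normalization 1/pi).
            q = np.asarray(r, dtype=float) / h
            w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
            return w / (np.pi * h ** 3)

        def sph_density(x, m, h):
            # rho_i = sum_j m_j W(|x_i - x_j|, h) over all particles j
            r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
            return (m[None, :] * cubic_spline_W(r, h)).sum(axis=1)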

  13. An Ensemble Approach to Building Mercer Kernels with Prior Information

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
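
    The posterior-product construction behind such kernels can be sketched in a few lines: fit an ensemble of Gaussian mixture models and take K(x, y) as the averaged inner product of cluster responsibilities, which is symmetric positive semi-definite and hence a valid Mercer kernel. The ensemble size, component count, and use of scikit-learn are illustrative; the paper's Bayesian formulation and AUTOBAYES-generated code are not reproduced here.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def mixture_density_kernel(X, n_models=5, n_components=3, seed=0):
            # K(x, y) = mean over models of sum_c P(c|x) P(c|y)
            K = np.zeros((len(X), len(X)))
            for m in range(n_models):
                gmm = GaussianMixture(n_components=n_components,
                                      random_state=seed + m).fit(X)
                R = gmm.predict_proba(X)          # responsibilities P(c | x)
                K += R @ R.T
            return K / n_models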

  14. Improving smoothing efficiency of rigid conformal polishing tool using time-dependent smoothing evaluation model

    NASA Astrophysics Data System (ADS)

    Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng

    2017-01-01

    A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, RC-lap smoothing efficiency is poorer than expected, and existing smoothing models cannot explicitly specify the methods to improve this efficiency. We present an explicit time-dependent smoothing evaluation model that contains specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we propose a strategy to improve RC-lap smoothing efficiency, which incorporates the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were obtained.

  15. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off....

  16. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...

  17. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle...

  18. Effectiveness of Analytic Smoothing in Equipercentile Equating.

    ERIC Educational Resources Information Center

    Kolen, Michael J.

    1984-01-01

    An analytic procedure for smoothing in equipercentile equating using cubic smoothing splines is described and illustrated. The effectiveness of the procedure is judged by comparing the results from smoothed equipercentile equating with those from other equating methods using multiple cross-validations for a variety of sample sizes. (Author/JKS)

  19. Effective Alternating Direction Optimization Methods for Sparsity-Constrained Blind Image Deblurring.

    PubMed

    Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi

    2017-01-18

    Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the total generalized variation (TGV) regularizer of second-order. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
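
    The ADMM machinery underlying both subproblems rests on proximal splitting; as a hedged, self-contained illustration, the sketch below solves the prototype problem min_x 0.5||Ax - b||^2 + lam*||x||_1 with the soft-thresholding operator, rather than the paper's full kernel-estimation model.

        import numpy as np

        def soft_threshold(v, t):
            # Proximal operator of the L1 norm (elementwise shrinkage).
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def admm_l1_least_squares(A, b, lam, rho=1.0, n_iter=200):
            n = A.shape[1]
            x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
            AtA = A.T @ A + rho * np.eye(n)
            Atb = A.T @ b
            for _ in range(n_iter):
                x = np.linalg.solve(AtA, Atb + rho * (z - u))  # LS x-update
                z = soft_threshold(x + u, lam / rho)           # prox z-update
                u = u + x - z                                  # dual ascent
            return z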

  20. Effective Alternating Direction Optimization Methods for Sparsity-Constrained Blind Image Deblurring

    PubMed Central

    Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi

    2017-01-01

    Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the total generalized variation (TGV) regularizer of second-order. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations. PMID:28106764

  1. A Gabor-block-based kernel discriminative common vector approach using cosine kernels for human face recognition.

    PubMed

    Kar, Arindam; Bhattacharjee, Debotosh; Basu, Dipak Kumar; Nasipuri, Mita; Kundu, Mahantapas

    2012-01-01

    In this paper, a nonlinear Gabor Wavelet Transform (GWT) discriminant feature extraction approach for enhanced face recognition is proposed. Firstly, the low-energized blocks from Gabor wavelet transformed images are extracted. Secondly, the nonlinear discriminating features are analyzed and extracted from the selected low-energized blocks by the generalized Kernel Discriminative Common Vector (KDCV) method. The KDCV method is extended to include a cosine kernel function in the discriminating method. The KDCV with the cosine kernel is then applied on the extracted low-energized discriminating feature vectors to obtain the real component of a complex quantity for face recognition. In order to derive positive kernel discriminative vectors, we apply only those kernel discriminative eigenvectors that are associated with nonzero eigenvalues. The feasibility of the low-energized Gabor-block-based generalized KDCV method with cosine kernel function models has been successfully tested for classification using the L1 and L2 distance measures and the cosine similarity measure on both frontal and pose-angled face recognition. Experimental results on the FRAV2D and FERET databases demonstrate the effectiveness of this new approach.

  2. Nonlinear stochastic system identification of skin using volterra kernels.

    PubMed

    Chen, Yi; Hunter, Ian W

    2013-04-01

    Volterra kernel stochastic system identification is a technique that can be used to capture and model nonlinear dynamics in biological systems, including the nonlinear properties of skin during indentation. A high bandwidth and high stroke Lorentz force linear actuator system was developed and used to test the mechanical properties of bulk skin and underlying tissue in vivo using a non-white input force and measuring an output position. These short tests (5 s) were conducted in an indentation configuration normal to the skin surface and in an extension configuration tangent to the skin surface. Volterra kernel solution methods were used including a fast least squares procedure and an orthogonalization solution method. The practical modifications, such as frequency domain filtering, necessary for working with low-pass filtered inputs are also described. A simple linear stochastic system identification technique had a variance accounted for (VAF) of less than 75%. Representations using the first and second Volterra kernels had a much higher VAF (90-97%) as well as a lower Akaike information criteria (AICc) indicating that the Volterra kernel models were more efficient. The experimental second Volterra kernel matches well with results from a dynamic-parameter nonlinearity model with fixed mass as a function of depth as well as stiffness and damping that increase with depth into the skin. A study with 16 subjects showed that the kernel peak values have mean coefficients of variation (CV) that ranged from 3 to 8% and showed that the kernel principal components were correlated with location on the body, subject mass, body mass index (BMI), and gender. These fast and robust methods for Volterra kernel stochastic system identification can be applied to the characterization of biological tissues, diagnosis of skin diseases, and determination of consumer product efficacy.
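
    A hedged sketch of the identification step: with memory length M, stack lagged inputs and their pairwise products into a regressor matrix and solve for the first- and second-order kernels by least squares. The paper's faster orthogonalization solvers and frequency-domain filtering are omitted, and the memory length is illustrative.

        import numpy as np

        def volterra2_fit(u, y, M=8):
            rows = []
            for n in range(M - 1, len(u)):
                past = u[n - M + 1:n + 1][::-1]    # u(n), u(n-1), ..., u(n-M+1)
                quad = np.outer(past, past)[np.triu_indices(M)]
                rows.append(np.concatenate([past, quad]))
            Phi = np.asarray(rows)
            theta, *_ = np.linalg.lstsq(Phi, y[M - 1:], rcond=None)
            k1 = theta[:M]                         # first-order kernel
            k2 = np.zeros((M, M))
            k2[np.triu_indices(M)] = theta[M:]
            k2 = (k2 + k2.T) / 2.0                 # symmetric second-order kernel
            return k1, k2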

  3. Kernel-aligned multi-view canonical correlation analysis for image recognition

    NASA Astrophysics Data System (ADS)

    Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao

    2016-09-01

    Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve this problem, we transform each original feature vector into a two-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have good discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets have manifested the effectiveness of our proposed method.

  4. Balancing continuous covariates based on Kernel densities.

    PubMed

    Ma, Zhenjun; Hu, Feifang

    2013-03-01

    The balance of important baseline covariates is essential for convincing treatment comparisons. Stratified permuted block design and minimization are the two most commonly used balancing strategies, both of which require the covariates to be discrete. Continuous covariates are typically discretized in order to be included in the randomization scheme. But breakdown of continuous covariates into subcategories often changes the nature of the covariates and makes distributional balance unattainable. In this article, we propose to balance continuous covariates based on kernel density estimation, which keeps the continuity of the covariates. Simulation studies show that the proposed Kernel-Minimization can achieve distributional balance of both continuous and categorical covariates, while also keeping the group size well balanced. It is also shown that Kernel-Minimization is less predictable than stratified permuted block design and minimization. Finally, we apply the proposed method to redesign the NINDS trial, which has been a source of controversy due to imbalance of continuous baseline covariates. Simulation shows that imbalances such as those observed in the NINDS trial can generally be avoided through the implementation of the new method.
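
    A minimal sketch of the density-based balance idea: estimate each arm's covariate density with a Gaussian kernel and score imbalance as the integrated absolute difference between the two densities. The actual Kernel-Minimization assigns each new patient to whichever arm most reduces such a measure; that sequential allocation step is not reproduced here.

        import numpy as np
        from scipy.stats import gaussian_kde

        def kde_imbalance(x_a, x_b, grid_size=200):
            # Integrated |f_A - f_B| between the two arms' estimated densities.
            lo = min(x_a.min(), x_b.min())
            hi = max(x_a.max(), x_b.max())
            grid = np.linspace(lo, hi, grid_size)
            f_a = gaussian_kde(x_a)(grid)
            f_b = gaussian_kde(x_b)(grid)
            return np.trapz(np.abs(f_a - f_b), grid)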

  5. A Comparison of Methods for Estimating Conditional Item Score Differences in Differential Item Functioning (DIF) Assessments. Research Report. ETS RR-10-15

    ERIC Educational Resources Information Center

    Moses, Tim; Miao, Jing; Dorans, Neil

    2010-01-01

    This study compared the accuracies of four differential item functioning (DIF) estimation methods, where each method makes use of only one of the following: raw data, logistic regression, loglinear models, or kernel smoothing. The major focus was on the estimation strategies' potential for estimating score-level, conditional DIF. A secondary focus…

  6. Smoothing and Equating Methods Applied to Different Types of Test Score Distributions and Evaluated with Respect to Multiple Equating Criteria. Research Report. ETS RR-11-20

    ERIC Educational Resources Information Center

    Moses, Tim; Liu, Jinghua

    2011-01-01

    In equating research and practice, equating functions that are smooth are typically assumed to be more accurate than equating functions with irregularities. This assumption presumes that population test score distributions are relatively smooth. In this study, two examples were used to reconsider common beliefs about smoothing and equating. The…

  7. Kernel component analysis using an epsilon-insensitive robust loss function.

    PubMed

    Alzate, Carlos; Suykens, Johan A K

    2008-09-01

    Kernel principal component analysis (PCA) is a technique to perform feature extraction in a high-dimensional feature space, which is nonlinearly related to the original input space. The kernel PCA formulation corresponds to an eigendecomposition of the kernel matrix: eigenvectors with large eigenvalues correspond to the principal components in the feature space. Starting from the least squares support vector machine (LS-SVM) formulation of kernel PCA, we extend it to a generalized form of kernel component analysis (KCA) in which the underlying loss function is made explicit. For classical kernel PCA, the underlying loss function is the L2 loss. In this generalized form, one can also plug in other loss functions. In the context of robust statistics, it is known that the L2 loss function is not robust because its influence function is not bounded; therefore, outliers can skew the solution from the desired one. Another issue with kernel PCA is the lack of sparseness: the principal components are dense expansions in terms of kernel functions. In this paper, we introduce robustness and sparseness into kernel component analysis by using an epsilon-insensitive robust loss function. We propose two different algorithms. The first method solves a set of nonlinear equations with kernel PCA as starting points. The second method uses a simplified iterative weighting procedure that leads to solving a sequence of generalized eigenvalue problems. Simulations with toy and real-life data show improvements in terms of robustness together with a sparse representation.
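
    For context, classical kernel PCA (the non-robust, non-sparse baseline this paper generalizes) reduces to an eigendecomposition of the doubly centered Gram matrix. A minimal sketch, with K assumed precomputed from any kernel function:

        import numpy as np

        def kernel_pca(K, n_components=2):
            n = K.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n
            Kc = J @ K @ J                         # double centering
            vals, vecs = np.linalg.eigh(Kc)
            order = np.argsort(vals)[::-1][:n_components]
            vals, vecs = vals[order], vecs[:, order]
            # Feature-space projections of the training points.
            return vecs * np.sqrt(np.maximum(vals, 0.0))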

  8. Determining the optimal smoothing length scale for actuator line models of wind turbine blades

    NASA Astrophysics Data System (ADS)

    Martinez, Luis; Meneveau, Charles

    2015-11-01

    The actuator line model (ALM) is a widely used tool for simulating wind turbines when performing Large-Eddy Simulations. The ALM uses a smearing kernel ηɛ(r) = 1/(ɛ³ π^(3/2)) exp(-r²/ɛ²), where r is the distance to an actuator point and ɛ is the smoothing length scale that sets the kernel width, to project the lift and drag forces onto the grid. In this work, we develop formulations to establish the optimum value of the smoothing length scale ɛ based on physical arguments instead of purely numerical constraints. This parameter has a very important role in the ALM, providing a length scale which may, for example, be related to the chord of the airfoil being studied. In the proposed approach, we compare features (such as the vertical pressure gradient) of a potential flow solution for flow over a lifting surface with features of the solution of the Euler equations with a body force term. The potential flow solution over a lifting surface is used as a general representation of an airfoil. The method presented aims to minimize the difference between these features of the flow fields as a function of the smearing length scale ɛ, in order to obtain the optimum value. This work is supported by NSF (IGERT and IIA-1243482) and computations use XSEDE resources.

  9. Adaptive diffusion kernel learning from biological networks for protein function prediction

    PubMed Central

    Sun, Liang; Ji, Shuiwang; Ye, Jieping

    2008-01-01

    Background Machine-learning tools have gained considerable attention during the last few years for analyzing biological networks for protein function prediction. Kernel methods are suitable for learning from graph-based data such as biological networks, as they only require the abstraction of the similarities between objects into the kernel matrix. One key issue in kernel methods is the selection of a good kernel function. Diffusion kernels, the discretization of the familiar Gaussian kernel of Euclidean space, are commonly used for graph-based data. Results In this paper, we address the issue of learning an optimal diffusion kernel, in the form of a convex combination of a set of pre-specified kernels constructed from biological networks, for protein function prediction. Most prior work on this kernel learning task focuses on variants of the loss function based on Support Vector Machines (SVM). Their extensions to other loss functions such as the one based on Kullback-Leibler (KL) divergence, which is more suitable for mining biological networks, lead to expensive optimization problems. By exploiting the special structure of the diffusion kernel, we show that this KL divergence based kernel learning problem can be formulated as a simple optimization problem, which can then be solved efficiently. It is further extended to the multi-task case where we predict multiple functions of a protein simultaneously. We evaluate the efficiency and effectiveness of the proposed algorithms using two benchmark data sets. Conclusion Results show that the performance of the linearly combined diffusion kernel is better than every single candidate diffusion kernel. When the number of tasks is large, the algorithms based on multiple tasks are favored due to their competitive recognition performance and small computational costs. PMID:18366736
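
    The object being optimized can be sketched directly: a diffusion kernel is the matrix exponential of the negated graph Laplacian, and the paper learns the weights of a convex combination of such kernels. The adjacency-matrix input and uniform-weight default below are illustrative.

        import numpy as np
        from scipy.linalg import expm

        def diffusion_kernel(A, beta=1.0):
            # K = expm(-beta * L), with L = D - A the graph Laplacian.
            L = np.diag(A.sum(axis=1)) - A
            return expm(-beta * L)

        def combined_kernel(A, betas, weights=None):
            # Convex combination of diffusion kernels over several beta values.
            w = np.ones(len(betas)) if weights is None else np.asarray(weights, float)
            w = w / w.sum()
            return sum(wi * diffusion_kernel(A, b) for wi, b in zip(w, betas))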

  10. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  11. Resummed memory kernels in generalized system-bath master equations.

    PubMed

    Mavros, Michael G; Van Voorhis, Troy

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  12. Resummed memory kernels in generalized system-bath master equations

    NASA Astrophysics Data System (ADS)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  13. Resummed memory kernels in generalized system-bath master equations

    SciTech Connect

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  14. Multiple Kernel Learning for Visual Object Recognition: A Review.

    PubMed

    Bucak, Serhat S; Rong Jin; Jain, Anil K

    2014-07-01

    Multiple kernel learning (MKL) is a principled approach for selecting and combining kernels for a given recognition task. A number of studies have shown that MKL is a useful tool for object recognition, where each image is represented by multiple sets of features and MKL is applied to combine different feature sets. We review the state-of-the-art for MKL, including different formulations and algorithms for solving the related optimization problems, with the focus on their applications to object recognition. One dilemma faced by practitioners interested in using MKL for object recognition is that different studies often provide conflicting results about the effectiveness and efficiency of MKL. To resolve this, we conduct extensive experiments on standard datasets to evaluate various approaches to MKL for object recognition. We argue that the seemingly contradictory conclusions offered by studies are due to different experimental setups. The conclusions of our study are: (i) given a sufficient number of training examples and feature/kernel types, MKL is more effective for object recognition than simple kernel combination (e.g., choosing the best performing kernel or average of kernels); and (ii) among the various approaches proposed for MKL, the sequential minimal optimization, semi-infinite programming, and level method based ones are computationally most efficient.

  15. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters. Thus, suitable parameter selection is an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters in the kernel Fukunaga-Koontz Transform (KFKT) based classifier. The proposed approach determines the appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we have utilized the differential evolution algorithm (DEA). The new technique overcomes some disadvantages, such as the high time consumption of the traditional cross-validation method, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
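
    A hedged sketch of the general recipe (evolve a kernel parameter against a discrimination objective): here SciPy's differential evolution tunes an RBF width for an SVM via cross-validated accuracy, with the SVM standing in for the paper's KFKT-based objective; the bounds and iteration budget are illustrative.

        import numpy as np
        from scipy.optimize import differential_evolution
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        def tune_rbf_gamma(X, y):
            def objective(params):
                gamma = 10.0 ** params[0]          # search over the exponent
                clf = SVC(kernel="rbf", gamma=gamma)
                return -cross_val_score(clf, X, y, cv=3).mean()
            result = differential_evolution(objective, bounds=[(-4.0, 2.0)],
                                            maxiter=20, seed=0)
            return 10.0 ** result.x[0]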

  16. Coupled kernel embedding for low resolution face image recognition.

    PubMed

    Ren, Chuan-Xian; Dai, Dao-Qing; Yan, Hong

    2012-08-01

    Practical video scene and face recognition systems are sometimes confronted with low-resolution (LR) images. The faces may be very small even if the video is clear, so it is difficult to directly measure the similarity between the faces and the high-resolution (HR) training samples. Traditional face recognition methods based on super-resolution (SR) usually have limited performance because the target of SR may not be consistent with that of classification, and time-consuming SR algorithms are not suitable for real-time applications. In this paper, a new feature extraction method called Coupled Kernel Embedding (CKE) is proposed for LR face recognition without any SR preprocessing. In this method, the final kernel matrix is constructed by concatenating two individual kernel matrices in the diagonal direction, and the (semi-)positive definite properties are preserved for optimization. CKE addresses the problem of comparing multi-modal data that are difficult for conventional methods in practice due to the lack of an efficient similarity measure. Particularly, different kernel types (e.g., linear, Gaussian, polynomial) can be integrated into a unified optimization objective, which cannot be achieved by simple linear methods. CKE solves this problem by minimizing the dissimilarities captured by their kernel Gram matrices in the low- and high-resolution spaces. In the implementation, the nonlinear objective function is minimized by a generalized eigenvalue decomposition. Experiments on benchmark and real databases show that our CKE method indeed improves the recognition performance.

  17. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    SciTech Connect

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-19

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression then permits the instantaneous rate to be obtained directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics was fitted with six published exponential thin-layer drying models, using the coefficient of determination (R²) and the root mean square error (RMSE) as goodness-of-fit measures. The results showed that the Two Term model best describes the drying behavior. Moreover, the drying rate smoothed using CS proves to be an effective estimator for the moisture-time curves, as well as for the missing moisture content data of seaweed Kappaphycus Striatum Variety Durian in the solar dryer under the conditions tested.

  18. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    NASA Astrophysics Data System (ADS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-01

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression then permits the instantaneous rate to be obtained directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics was fitted with six published exponential thin-layer drying models, using the coefficient of determination (R²) and the root mean square error (RMSE) as goodness-of-fit measures. The results showed that the Two Term model best describes the drying behavior. Moreover, the drying rate smoothed using CS proves to be an effective estimator for the moisture-time curves, as well as for the missing moisture content data of seaweed Kappaphycus Striatum Variety Durian in the solar dryer under the conditions tested.
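
    The smoothing-and-differentiation step can be sketched with SciPy's smoothing spline; the time grid, moisture values (interpolating the reported 93.4% to 8.2% drop over 4 days), and smoothing factor below are illustrative, not the experimental data.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        # t: drying time (h); m: moisture content (%) -- illustrative values
        t = np.array([0, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96], float)
        m = np.array([93.4, 80, 62, 48, 39, 31, 25, 20, 16, 13, 11, 9.5, 8.2])

        spline = UnivariateSpline(t, m, k=3, s=2.0)  # cubic smoothing spline
        m_smooth = spline(t)                         # smoothed moisture curve
        drying_rate = -spline.derivative()(t)        # instantaneous rate -dm/dt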

  19. Classification of corn kernels contaminated with aflatoxins using fluorescence and reflectance hyperspectral images analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Fengle; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert; Bhatnagar, Deepak; Cleveland, Thomas

    2015-05-01

    Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive. This study employed fluorescence and reflectance visible near-infrared (VNIR) hyperspectral images to classify aflatoxin-contaminated corn kernels rapidly and non-destructively. Corn ears were artificially inoculated in the field with toxigenic A. flavus spores at the early dough stage of kernel development. After harvest, a total of 300 kernels were collected from the inoculated ears. Fluorescence hyperspectral imagery with UV excitation and reflectance hyperspectral imagery with halogen illumination were acquired on both the endosperm and germ sides of the kernels. All kernels were then subjected to chemical analysis individually to determine aflatoxin concentrations. A region of interest (ROI) was created for each kernel to extract averaged spectra. Compared with healthy kernels, fluorescence spectral peaks for contaminated kernels shifted to longer wavelengths with lower intensity, and reflectance values for contaminated kernels were lower, with a different spectral shape in the 700-800 nm region. Principal component analysis was applied for data compression before classifying kernels as contaminated or healthy based on a 20 ppb threshold using the K-nearest neighbors algorithm. The best overall accuracy achieved was 92.67% for the germ side in the fluorescence data analysis. The germ side generally performed better than the endosperm side. Fluorescence and reflectance image data achieved similar accuracy.
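
    The classification chain described here (PCA compression followed by K-nearest neighbors on a 20 ppb threshold) can be sketched with scikit-learn; the component count, neighbor count, and fold count are illustrative, and X/y stand for the averaged ROI spectra and threshold labels.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        def kernel_classification_accuracy(X, y, n_components=10, k=5):
            # X: n_kernels x n_bands averaged ROI spectra; y: aflatoxin > 20 ppb
            model = make_pipeline(PCA(n_components=n_components),
                                  KNeighborsClassifier(n_neighbors=k))
            return cross_val_score(model, X, y, cv=5).mean()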

  20. General-form 3-3-3 interpolation kernel and its simplified frequency-response derivation

    NASA Astrophysics Data System (ADS)

    Deng, Tian-Bo

    2016-11-01

    An interpolation kernel is required in a wide variety of signal processing applications such as image interpolation and timing adjustment in digital communications. This article presents a general-form interpolation kernel called the 3-3-3 interpolation kernel and derives its frequency response in closed form using a simple derivation method. The 3-3-3 interpolation kernel is formed from third-degree piecewise polynomials and is an even-symmetric function; thus, it suffices to consider only its right-hand side when deriving the frequency response. Since the right-hand side of the kernel contains three piecewise polynomials of the third degree, i.e., the degrees of the three piecewise polynomials are (3,3,3), we call it the 3-3-3 interpolation kernel. Once the general-form frequency-response formula is derived, the design of various 3-3-3 interpolation kernels subject to sets of design constraints, targeted at different interpolation applications, can be formulated systematically. The closed-form frequency-response expression is thus preliminary to the optimal design of various 3-3-3 interpolation kernels, and we use an example to show the optimal design of a 3-3-3 interpolation kernel based on this expression.

  1. Determining the parameters of the fractional exponential heredity kernels of linear viscoelastic materials

    NASA Astrophysics Data System (ADS)

    Golub, V. P.; Fernati, P. V.; Lyashenko, Ya. G.

    2008-09-01

    The parameters of the fractional exponential creep and relaxation kernels of linear viscoelastic materials are determined. Methods that approximate the kernel by using the Mittag-Leffler function, the Laplace-Carson transform, and direct approximation of the creep function by the original equation are analyzed. The parameters of fractional exponential kernels are determined for aramid fibers, parapolyamide fibers, glass-reinforced plastic, and polymer concrete. It is shown that the kernel parameters calculated through the direct approximation of the creep function provide the best agreement between theory and experiment. The methods are experimentally validated for constant-stress and variable-stress loading in the modes of additional loading and complete unloading.

  2. Minimum classification error-based weighted support vector machine kernels for speaker verification.

    PubMed

    Suh, Youngjoo; Kim, Hoirin

    2013-04-01

    Support vector machines (SVMs) have proved to be an effective approach to speaker verification. An appropriate selection of the kernel function is a key issue in SVM-based classification. In this letter, a new SVM-based speaker verification method utilizing weighted kernels in the Gaussian mixture model supervector space is proposed. The weighted kernels are derived using a discriminative training approach that minimizes speaker verification errors. Experiments performed on the NIST 2008 speaker recognition evaluation task showed that the proposed approach provides substantially improved performance over the baseline kernel-based method.

  3. RTOS kernel in portable electrocardiograph

    NASA Astrophysics Data System (ADS)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All of the medical device's digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which the uC/OS-II RTOS can be embedded. The decision to use this kernel rests on its benefits: a license for educational use and built-in time control and peripheral management. The feasibility of its use on the electrocardiograph is evaluated against the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and for evaluating the resources used by each process. After this feasibility analysis, the cyclic code was migrated to a structure of separate, event-synchronizing tasks, resulting in an electrocardiograph running on a single RTOS-based Central Processing Unit (CPU).

  4. Online learning control using adaptive critic designs with sparse kernel machines.

    PubMed

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent variants, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on approximate linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Owing to the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results on two nonlinear control problems, a continuous-action inverted pendulum problem and a ball-and-plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
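
    The approximate-linear-dependence (ALD) test mentioned above admits a compact sketch: a new state is added to the kernel dictionary only if its feature-space image lies far enough from the span of the current dictionary. The threshold nu and the Gaussian kernel are illustrative assumptions, and the batch re-solve below is written for clarity, not the incremental updates a real online critic would use.

      import numpy as np

      def rbf(x, y, gamma=1.0):
          return np.exp(-gamma * np.sum((x - y) ** 2))

      def ald_dictionary(samples, nu=1e-2):
          dictionary = [samples[0]]
          for x in samples[1:]:
              K = np.array([[rbf(a, b) for b in dictionary] for a in dictionary])
              k = np.array([rbf(a, x) for a in dictionary])
              c = np.linalg.solve(K + 1e-8 * np.eye(len(dictionary)), k)
              delta = rbf(x, x) - k @ c   # squared distance to span(dictionary)
              if delta > nu:              # approximately independent: keep it
                  dictionary.append(x)
          return np.array(dictionary)

      states = np.random.default_rng(1).uniform(-1, 1, size=(500, 2))
      D = ald_dictionary(states)
      print(f"{len(D)} dictionary points retained out of {len(states)}")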

  5. Antioxidant and antimicrobial activities of bitter and sweet apricot (Prunus armeniaca L.) kernels.

    PubMed

    Yiğit, D; Yiğit, N; Mavi, A

    2009-04-01

    The present study describes the in vitro antimicrobial and antioxidant activity of methanol and water extracts of sweet and bitter apricot (Prunus armeniaca L.) kernels. The antioxidant properties of apricot kernels were evaluated by determining radical scavenging power, lipid peroxidation inhibition activity, and total phenol content, measured with a DPPH test, the thiocyanate method, and the Folin method, respectively. In contrast to extracts of the bitter kernels, both the water and methanol extracts of sweet kernels have antioxidant potential. The highest percent inhibition of lipid peroxidation (69%) and total phenolic content (7.9 +/- 0.2 microg/mL) were detected in the methanol extract of sweet kernels (Hasanbey) and in the water extract of the same cultivar, respectively. The antimicrobial activities of the above extracts were also tested against human pathogenic microorganisms using a disc-diffusion method, and the minimal inhibitory concentration (MIC) values of each active extract were determined. The most effective antibacterial activity was observed in the methanol and water extracts of bitter kernels and in the methanol extract of sweet kernels against the Gram-positive bacterium Staphylococcus aureus. Additionally, the methanol extracts of the bitter kernels were very potent against the Gram-negative bacterium Escherichia coli (0.312 mg/mL MIC value). Significant anti-Candida activity was also observed with the methanol extract of bitter apricot kernels against Candida albicans, with an inhibition zone of 14 mm in diameter and an MIC value of 0.625 mg/mL.

  6. Smooth halos in the cosmic web

    NASA Astrophysics Data System (ADS)

    Gaite, José

    2015-04-01

    Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of inequality previously used in economics, with the advantage of being connected with standard methods of multifractal analysis already used for characterizing the cosmic web structure in cold dark matter N-body simulations. These entropic measures provide a quantitative description of the transition from the small scales, portrayed as a distribution of halos, to the larger scales, portrayed as a cosmic web, and therefore allow us to assign definite sizes to halos. However, these "smoothness sizes" have no direct relation to the virial radii. Finally, we discuss the influence of N-body discreteness parameters on smoothness.
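
    A stylized version of such an entropic measure, assuming a simple Shannon entropy of cell occupancies normalized by its maximum (not Gaite's exact estimator): the score is 1 for a perfectly smooth, uniform point set and drops as the distribution becomes clumpier.

      import numpy as np

      def smoothness(points, n_cells=16):
          # Bin points into a regular grid over the unit square / cube.
          counts, _ = np.histogramdd(points, bins=(n_cells,) * points.shape[1],
                                     range=[(0, 1)] * points.shape[1])
          p = counts.ravel() / counts.sum()
          p = p[p > 0]
          return -(p * np.log(p)).sum() / np.log(counts.size)

      rng = np.random.default_rng(2)
      uniform = rng.uniform(size=(20000, 2))                    # smooth "halo"
      clustered = 0.5 + 0.02 * rng.standard_normal((20000, 2))  # one tight clump
      print(smoothness(uniform), smoothness(clustered))         # ~1.0 vs. much less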

  7. Smooth halos in the cosmic web

    SciTech Connect

    Gaite, José

    2015-04-01

    Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of inequality previously used in economics, with the advantage of being connected with standard methods of multifractal analysis already used for characterizing the cosmic web structure in cold dark matter N-body simulations. These entropic measures provide a quantitative description of the transition from the small scales, portrayed as a distribution of halos, to the larger scales, portrayed as a cosmic web, and therefore allow us to assign definite sizes to halos. However, these "smoothness sizes" have no direct relation to the virial radii. Finally, we discuss the influence of N-body discreteness parameters on smoothness.

  8. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    PubMed

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields, such as engineering, physics, and the biomedical sciences. Parameter estimation for differential equation models is a challenging problem because of the high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider discretization methods of three different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in the approximation of the derivative, which produces a more accurate estimate, but at a higher computational cost. To balance computational cost and estimation accuracy, we demonstrate via simulation studies that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate their usefulness.
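
    The core of the procedure can be sketched on a toy model dx/dt = -θx: smooth the noisy observations (a smoothing spline stands in here for the paper's penalized spline), plug the smoothed states into the trapezoidal discretization, and solve the resulting linear regression for θ. All tuning values are illustrative.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      theta_true = 0.7
      t = np.linspace(0, 5, 60)
      rng = np.random.default_rng(3)
      x_obs = np.exp(-theta_true * t) + 0.02 * rng.standard_normal(t.size)

      xs = UnivariateSpline(t, x_obs, s=0.02)(t)   # smoothed state estimates

      # Trapezoidal rule: x_{i+1} - x_i = -theta * dt * (x_i + x_{i+1}) / 2,
      # which is linear in theta, so least squares gives the estimate directly.
      dt = t[1] - t[0]
      lhs = xs[1:] - xs[:-1]
      rhs = -dt * (xs[1:] + xs[:-1]) / 2
      theta_hat = (rhs @ lhs) / (rhs @ rhs)
      print(f"theta_hat = {theta_hat:.3f} (true value {theta_true})")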

  9. Numerical Discretization-Based Estimation Methods for Ordinary Differential Equation Models via Penalized Spline Smoothing with Applications in Biomedical Research

    PubMed Central

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-01-01

    Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields, such as engineering, physics, and the biomedical sciences. Parameter estimation for differential equation models is a challenging problem because of the high computational cost and high-dimensional parameter space. In this paper, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider discretization methods of three different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in the approximation of the derivative, which produces a more accurate estimate, but at a higher computational cost. To balance computational cost and estimation accuracy, we demonstrate via simulation studies that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators (DBE) are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. PMID:22376200

  10. Kernels, Degrees of Freedom, and Power Properties of Quadratic Distance Goodness-of-Fit Tests

    PubMed Central

    Lindsay, Bruce G.; Markatou, Marianthi; Ray, Surajit

    2014-01-01

    In this article, we study the power properties of quadratic-distance-based goodness-of-fit tests. First, we introduce the concept of a root kernel and discuss the considerations that enter the selection of this kernel. We derive an easy-to-use normal approximation to the power of quadratic distance goodness-of-fit tests and base the construction of a noncentrality index, an analogue of the traditional noncentrality parameter, on it. This leads to a method akin to the Neyman-Pearson lemma for constructing optimal kernels for specific alternatives. We then introduce a midpower analysis as a device for choosing optimal degrees of freedom for a family of alternatives of interest. Finally, we introduce a new diffusion kernel, called the Pearson-normal kernel, and study the extent to which the normal approximation to the power of tests based on this kernel is valid. Supplementary materials for this article are available online. PMID:24764609

  11. Model-based online learning with kernels.

    PubMed

    Li, Guoqi; Wen, Changyun; Li, Zheng Guo; Zhang, Aimin; Yang, Feng; Mao, Kezhi

    2013-03-01

    New optimization models and algorithms for online learning with kernels (OLK) in classification, regression, and novelty detection are proposed in a reproducing kernel Hilbert space. Unlike the stochastic gradient descent algorithm, called the naive online regularized risk minimization algorithm (NORMA), OLK algorithms are obtained by solving a constrained optimization problem based on the proposed models. By exploiting the techniques of the Lagrange dual problem, as in Vapnik's support vector machine (SVM), the solution of the optimization problem can be obtained iteratively, and the iteration process is similar to that of NORMA. This further strengthens the foundation of OLK and enriches the research area of SVM. We also apply the obtained OLK algorithms to problems in classification, regression, and novelty detection, including real-time background subtraction, to show their effectiveness. The experimental results for both classification and regression illustrate that the accuracy of OLK algorithms is comparable with traditional SVM-based algorithms, such as SVM and least squares SVM (LS-SVM), and with state-of-the-art algorithms, such as the kernel recursive least squares (KRLS) method and the projectron method, while it is slightly higher than that of NORMA. On the other hand, the computational cost of the OLK algorithm is comparable with or slightly lower than existing online methods, such as the above-mentioned NORMA, KRLS, and projectron methods, but much lower than that of SVM-based algorithms. In addition, unlike SVM and LS-SVM, OLK algorithms can be applied to non-stationary problems. The applicability of OLK to novelty detection is also illustrated by simulation results.
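
    For reference, the NORMA baseline the authors compare against fits in a few lines: stochastic gradient descent in an RKHS, which for the squared loss shrinks all existing expansion coefficients and appends one new term per sample. Step size, regularization, and kernel width below are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      X = rng.uniform(-3, 3, size=(400, 1))
      y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(400)

      gamma, eta, lam = 0.5, 0.5, 1e-3
      centers, alphas = [], []

      def predict(x):
          if not centers:
              return 0.0
          C = np.asarray(centers)
          k = np.exp(-gamma * np.sum((C - x) ** 2, axis=1))
          return float(np.asarray(alphas) @ k)

      for x_t, y_t in zip(X, y):
          err = predict(x_t) - y_t
          alphas = [(1 - eta * lam) * a for a in alphas]   # shrink old terms
          centers.append(x_t)
          alphas.append(-eta * err)                        # new expansion term

      grid = np.linspace(-3, 3, 7)[:, None]
      print(np.round([predict(x) for x in grid], 2))       # ~ sin(grid)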

  12. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    PubMed

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and has shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with highly nonlinear distributions. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC and can remedy this drawback, but KSRC requires a predetermined kernel function, and selecting the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC considers only the within-class reconstruction residual, ignoring the between-class relationship, when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and we use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that, in the low-dimensional subspace, the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss when performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. Solutions for the proposed method can be found efficiently using the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm over state-of-the-art methods.

  13. Weighted Feature Gaussian Kernel SVM for Emotion Recognition

    PubMed Central

    Jia, Qingxuan

    2016-01-01

    Emotion recognition based on facial expressions with weighted features is a challenging research topic that has attracted great attention in the past few years. This paper presents a novel method that utilizes subregion recognition rates to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rates and weights. Then, we construct a weighted-feature Gaussian kernel function and build a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel function achieves a good recognition rate in emotion recognition. Experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods. PMID:27807443
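
    The idea of weighting features inside a Gaussian kernel is easy to reproduce with scikit-learn's precomputed-kernel interface. In this sketch the per-feature weights are set by hand on synthetic data; in the paper they would come from subregion recognition rates.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      def weighted_rbf(A, B, w, gamma=1.0):
          # k(x, z) = exp(-gamma * sum_d w_d * (x_d - z_d)^2)
          d2 = (((A[:, None, :] - B[None, :, :]) ** 2) * w).sum(axis=2)
          return np.exp(-gamma * d2)

      X, y = make_classification(n_samples=300, n_features=8, random_state=0)
      w = np.array([3.0, 3.0, 1.0, 1.0, 1.0, 1.0, 0.2, 0.2])   # illustrative weights
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      clf = SVC(kernel="precomputed").fit(weighted_rbf(X_tr, X_tr, w), y_tr)
      print(f"accuracy: {clf.score(weighted_rbf(X_te, X_tr, w), y_te):.2%}")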

  14. Weighted Feature Gaussian Kernel SVM for Emotion Recognition.

    PubMed

    Wei, Wei; Jia, Qingxuan

    2016-01-01

    Emotion recognition based on facial expressions with weighted features is a challenging research topic that has attracted great attention in the past few years. This paper presents a novel method that utilizes subregion recognition rates to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rates and weights. Then, we construct a weighted-feature Gaussian kernel function and build a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel function achieves a good recognition rate in emotion recognition. Experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods.

  15. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  16. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  17. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  18. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  19. Travel-Time and Amplitude Sensitivity Kernels

    DTIC Science & Technology

    2011-09-01

    amplitude sensitivity kernels shown in the lower panels concentrate about the corresponding eigenrays. Each 3D kernel exhibits a broad negative... in 2 and 3 dimensions have similar shapes to the corresponding travel-time sensitivity kernels (TSKs), centered about the respective eigenrays

  20. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. The virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. To further improve robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training sample can be used in our method. We use noised face images to obtain virtual face samples; the noise can be viewed approximately as a reflection of the varieties of illuminations, facial expressions, and postures. Imposing Gaussian noise (or other types of noise) on the original training samples is a simple and feasible way to obtain virtual face samples that capture possible variations of the originals. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
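
    A linear, miniature version of the idea, combining virtual samples from Gaussian noise with collaborative representation classification, can be sketched as follows (the paper additionally works in kernel space). The toy "face" vectors, noise level, and ridge parameter are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(5)

      def crc_predict(X, labels, probe, lam=0.01):
          # Code the probe over the whole training set with a ridge penalty,
          # then assign the class whose columns explain it with least residual.
          alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ probe)
          residuals = {c: np.linalg.norm(probe - X[:, labels == c] @ alpha[labels == c])
                       for c in np.unique(labels)}
          return min(residuals, key=residuals.get)

      # Two classes of 50-dimensional "faces", three originals per class.
      X = np.concatenate([rng.normal(m, 1.0, size=(50, 3)) for m in (0.0, 1.5)], axis=1)
      labels = np.repeat([0, 1], 3)

      # Virtual samples: noisy copies of each original training column.
      X_virt = np.concatenate([X, X + 0.3 * rng.standard_normal(X.shape)], axis=1)
      labels_virt = np.concatenate([labels, labels])

      probe = rng.normal(1.5, 1.0, size=50)   # drawn from class 1
      print(crc_predict(X_virt, labels_virt, probe))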

  1. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  2. Verification of Chare-kernel programs

    SciTech Connect

    Bhansali, S.; Kale, L. V.

    1989-01-01

    Experience with concurrent programming has shown that concurrent programs can conceal bugs even after extensive testing. Thus, there is a need for practical techniques that can establish the correctness of parallel programs. This paper proposes a method for proving the partial correctness of programs written in the Chare-kernel language, which is designed to support the parallel execution of computations with irregular structures. The proof is based on the lattice proof technique and is divided into two parts. The first part is concerned with the program behavior within a single chare instance, whereas the second part captures the inter-chare interaction.

  3. A one-class kernel fisher criterion for outlier detection.

    PubMed

    Dufrenois, Franck

    2015-05-01

    Recently, Dufrenois and Noyer proposed a one-class Fisher linear discriminant to isolate normal data from outliers. In this paper, a kernelized version of their criterion is presented. Their criterion was originally optimized by an iterative process alternating between subspace selection and clustering; here I show that it admits an upper bound that makes these two problems independent. In particular, the estimation of the label vector is formulated as an unconstrained binary linear problem (UBLP), which can be solved using an iterative perturbation method. Once the label vector is estimated, an optimal projection subspace is obtained by solving a generalized eigenvalue problem. Like many other kernel methods, the performance of the proposed approach depends on the choice of the kernel. Constructed with a Gaussian kernel, the proposed contrast measure is shown to be an efficient indicator for selecting an optimal kernel width. This property simplifies the model selection problem, which is typically solved by costly (generalized) cross-validation procedures. Initialization, convergence analysis, and computational complexity are also discussed. Lastly, the proposed algorithm is compared with recent novelty detectors on synthetic and real data sets.

  4. Face detection based on multiple kernel learning algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun

    2016-09-01

    Face detection is important for face localization in face or facial expression recognition. The basic idea is to determine whether there is a face in an image and, if so, its location and size. This can be seen as a binary classification problem, which can be well solved by a support vector machine (SVM). Though SVM has strong model generalization ability, it has some limitations, which are analyzed in depth in the paper. To address them, we study the principles and characteristics of Multiple Kernel Learning (MKL) and propose an MKL-based face detection algorithm. We describe the proposed algorithm from the interdisciplinary perspective of machine learning and image processing. After analyzing the limitations of describing a face with a single feature, we apply several features. To fuse them well, we try different kernel functions on the different features; the MKL method then determines the weight of each kernel. Thus, we obtain the face detection model, which is the core of the proposed method. Experiments on a public data set and real-life face images are performed. We compare the performance of the proposed algorithm with a single kernel-single feature based algorithm and a multiple kernels-single feature based algorithm, and the effectiveness of the proposed algorithm is illustrated. Keywords: face detection, feature fusion, SVM, MKL

  5. Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Theodoridis, Sergios

    2008-12-01

    Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.

  6. Spine labeling in axial magnetic resonance imaging via integral kernels.

    PubMed

    Miles, Brandon; Ben Ayed, Ismail; Hojjat, Seyed-Parsa; Wang, Michael H; Li, Shuo; Fenster, Aaron; Garvin, Gregory J

    2016-12-01

    This study investigates a fast integral-kernel algorithm for classifying (labeling) the vertebra and disc structures in axial magnetic resonance images (MRI). The method is based on a hierarchy of feature levels, where pixel classifications via non-linear probability product kernels (PPKs) are followed by classifications of 2D slices, individual 3D structures, and groups of 3D structures. The algorithm further embeds geometric priors based on anatomical measurements of the spine. Our classifier requires evaluations of computationally expensive integrals at each pixel, and direct evaluations of such integrals would be prohibitively time-consuming. We propose an efficient computation of kernel density estimates and PPK evaluations for large images and arbitrary local window sizes via integral kernels. Our method requires a single user click for a whole 3D MRI volume, runs nearly in real time, and does not require intensive external training. Comprehensive evaluations on T1-weighted axial lumbar spine data sets from 32 patients demonstrate a competitive structure classification accuracy of 99%, along with a 2D slice classification accuracy of 88%. To the best of our knowledge, such structure classification accuracy has not been reached by existing spine labeling algorithms. Furthermore, we believe our work is the first to use integral kernels in the context of medical images.

  7. ibr: Iterative bias reduction multivariate smoothing

    SciTech Connect

    Hengartner, Nicholas W; Cornillon, Pierre-andre; Matzner - Lober, Eric

    2009-01-01

    Regression is a fundamental data analysis tool for relating a univariate response variable Y to a multivariate predictor X ∈ R^d from the observations (X_i, Y_i), i = 1,...,n. Traditional nonparametric regression uses the assumption that the regression function varies smoothly in the independent variable x to locally estimate the conditional expectation m(x) = E[Y | X = x]. The resulting vector of predicted values Ŷ_i at the observed covariates X_i is called a regression smoother, or simply a smoother, because the predicted values Ŷ_i are less variable than the original observations Y_i. Linear smoothers are linear in the response variable Y and are operationally written as m̂ = S_λ Y, where S_λ is an n × n smoothing matrix. The smoothing matrix S_λ typically depends on a tuning parameter, denoted λ, that governs the tradeoff between the smoothness of the estimate and the goodness-of-fit of the smoother to the data by controlling the effective size of the local neighborhood over which the responses are averaged. We parameterize the smoothing matrix such that large values of λ are associated with smoothers that average over larger neighborhoods and produce very smooth curves, while small values of λ are associated with smoothers that average over smaller neighborhoods and produce wigglier curves that tend to interpolate the data. The parameter λ is the bandwidth for a kernel smoother, the span size for a running-mean or bin smoother, and the penalty factor for a spline smoother.
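
    The smoother matrix is easy to make concrete. The sketch below builds S_λ for a Nadaraya-Watson (Gaussian kernel) smoother, where λ is the bandwidth, and reports trace(S_λ), the effective degrees of freedom, which shrinks as λ grows; the bandwidth values are illustrative.

      import numpy as np

      rng = np.random.default_rng(6)
      x = np.sort(rng.uniform(0, 1, 80))
      y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(80)

      def smoother_matrix(x, bandwidth):
          W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
          return W / W.sum(axis=1, keepdims=True)   # rows sum to one

      for lam in (0.02, 0.1, 0.5):
          S = smoother_matrix(x, lam)
          m_hat = S @ y                             # the smoother: m_hat = S_lambda y
          print(f"bandwidth {lam}: effective df = {np.trace(S):.1f}")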

  8. Delimiting Areas of Endemism through Kernel Interpolation

    PubMed Central

    Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes may be responsible for the origin and maintenance of these biogeographic units. PMID:25611971

  9. Kernel-based variance component estimation and whole-genome prediction of pre-corrected phenotypes and progeny tests for dairy cow health traits

    PubMed Central

    Morota, Gota; Boddhireddy, Prashanth; Vukasinovic, Natascha; Gianola, Daniel; DeNise, Sue

    2014-01-01

    Prediction of complex trait phenotypes in the presence of unknown gene action is an ongoing challenge in animals, plants, and humans. Development of flexible predictive models that perform well irrespective of genetic and environmental architectures is desirable. Methods that can address non-additive variation in a non-explicit manner are gaining attention for this purpose and, in particular, semi-parametric kernel-based methods have been applied to diverse datasets, mostly providing encouraging results. On the other hand, the gains obtained from these methods have been smaller when smoothed values such as estimated breeding value (EBV) have been used as response variables. However, less emphasis has been placed on the choice of phenotypes to be used in kernel-based whole-genome prediction. This study aimed to evaluate differences between semi-parametric and parametric approaches using two types of response variables and molecular markers as inputs. Pre-corrected phenotypes (PCP) and EBV obtained for dairy cow health traits were used for this comparison. We observed that non-additive genetic variances were major contributors to total genetic variances in PCP, whereas additivity was the largest contributor to variability of EBV, as expected. Within the kernels evaluated, non-parametric methods yielded slightly better predictive performance across traits relative to their additive counterparts regardless of the type of response variable used. This reinforces the view that non-parametric kernels aiming to capture non-linear relationships between a panel of SNPs and phenotypes are appealing for complex trait prediction. However, like past studies, the gain in predictive correlation was not large for either PCP or EBV. We conclude that capturing non-additive genetic variation, especially epistatic variation, in a cross-validation framework remains a significant challenge even when it is important, as seems to be the case for health traits in dairy cows. PMID:24715901

  10. A Comparison of Methods for Nonparametric Estimation of Item Characteristic Curves for Binary Items

    ERIC Educational Resources Information Center

    Lee, Young-Sun

    2007-01-01

    This study compares the performance of three nonparametric item characteristic curve (ICC) estimation procedures: isotonic regression, smoothed isotonic regression, and kernel smoothing. Smoothed isotonic regression, employed along with an appropriate kernel function, provides better estimates and also satisfies the assumption of strict…

  11. Nonparametric Model of Smooth Muscle Force Production During Electrical Stimulation.

    PubMed

    Cole, Marc; Eikenberry, Steffen; Kato, Takahide; Sandler, Roman A; Yamashiro, Stanley M; Marmarelis, Vasilis Z

    2017-03-01

    A nonparametric model of the smooth muscle tension response to electrical stimulation was estimated using the Laguerre expansion technique of nonlinear system kernel estimation. The experimental data consisted of force responses of smooth muscle to energy-matched alternating single-pulse and burst current stimuli. The burst stimuli led to at least a 10-fold increase in peak force in smooth muscle from Mytilus edulis, despite the constant-energy constraint. A linear model did not fit the data; however, a second-order model fit the data accurately, so higher-order models were not required. The results show that the smooth muscle force response is not linearly related to the stimulation power.

  12. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But this assumption may be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which negatively influences the performance of multiple kernel learning. In addition, some models may be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly uses a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method in supervised, unsupervised, and semisupervised scenarios.

  13. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing

    PubMed Central

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But this assumption may be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which negatively influences the performance of multiple kernel learning. In addition, some models may be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly uses a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method in supervised, unsupervised, and semisupervised scenarios. PMID:27247562

  14. Fourier smoothing of digital photographic spectra

    NASA Astrophysics Data System (ADS)

    Anupama, G. C.

    1990-03-01

    Fourier methods for smoothing one-dimensional data are discussed, with particular reference to digital photographic spectra. Data smoothed using lowpass filters with different cut-off frequencies are intercompared. A method of scaling densities to remove the dependence of grain noise on density is described. An optimal filtering technique that models the signal and noise in the Fourier domain is also explained.
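
    The lowpass variant of this smoothing reduces to a few lines: transform, zero the coefficients above a cut-off, and invert. The cut-off below is an illustrative choice, not one from the paper.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 512
      t = np.arange(n)
      clean = np.sin(2 * np.pi * 5 * t / n)          # stand-in "spectrum"
      noisy = clean + 0.5 * rng.standard_normal(n)   # grain noise

      F = np.fft.rfft(noisy)
      F[20:] = 0.0                                   # ideal lowpass: keep 20 bins
      smoothed = np.fft.irfft(F, n)
      print(np.std(noisy - clean), np.std(smoothed - clean))   # noise reduced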

  15. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, so it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels, which are seen as different descriptions of the data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions, which hinders its applications. This paper proposes a multiple kernel learning framework for dimensionality reduction based on a regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a lower-dimensional space and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. Solutions for the proposed framework can be found via trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark datasets, including text, image, and sound datasets, in supervised, unsupervised, and semi-supervised settings.

  16. Smooth eigenvalue correction

    NASA Astrophysics Data System (ADS)

    Hendrikse, Anne; Veldhuis, Raymond; Spreeuwers, Luuk

    2013-12-01

    Second-order statistics play an important role in data modeling. Nowadays, there is a tendency toward measuring more signals with higher resolution (e.g., high-resolution video), causing a rapid increase in the dimensionality of the measured samples, while the number of samples remains more or less the same. As a result, the eigenvalue estimates are significantly biased, as described by the Marčenko-Pastur equation in the limit of both the number of samples and their dimensionality going to infinity. By introducing a smoothness factor, we show that the Marčenko-Pastur equation can be used in practical situations where both the number of samples and their dimensionality remain finite. Based on this result we derive two methods, one already known and one to our knowledge new, to estimate the sample eigenvalues when the population eigenvalues are known. Usually, however, the sample eigenvalues are known and the population eigenvalues are required. We therefore applied one of these methods in a feedback loop, resulting in an eigenvalue bias correction method. We compare this correction method with state-of-the-art methods and show that it outperforms them, particularly in real-life situations often encountered in biometrics: underdetermined configurations, high-dimensional configurations, and configurations where the eigenvalues are exponentially distributed.
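
    The bias in question is simple to reproduce numerically: for white data, every population eigenvalue equals 1, yet the sample eigenvalues spread across the Marčenko-Pastur bulk as the dimension-to-sample ratio grows. A quick demonstration (sizes are arbitrary):

      import numpy as np

      rng = np.random.default_rng(8)
      for n, d in [(10000, 100), (400, 100), (120, 100)]:
          X = rng.standard_normal((n, d))
          ev = np.linalg.eigvalsh(X.T @ X / n)   # sample covariance spectrum
          print(f"n={n:5d}, d={d}: eigenvalues in [{ev.min():.2f}, {ev.max():.2f}]")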

  17. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    ERIC Educational Resources Information Center

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…

  18. Straight-chain halocarbon forming fluids for TRISO fuel kernel production - Tests with yttria-stabilized zirconia microspheres

    NASA Astrophysics Data System (ADS)

    Baker, M. P.; King, J. C.; Gorman, B. P.; Braley, J. C.

    2015-03-01

    Current methods of TRISO fuel kernel production in the United States use a sol-gel process with trichloroethylene (TCE) as the forming fluid. After contact with radioactive materials, the spent TCE becomes a mixed hazardous waste, and high costs are associated with its recycling or disposal. Reducing or eliminating this mixed waste stream would not only benefit the environment but would also enhance the economics of kernel production. Previous research yielded three candidates for testing as alternatives to TCE: 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane. This study considers the production of yttria-stabilized zirconia (YSZ) kernels in silicone oil and the three chosen alternative forming fluids, with subsequent characterization of the produced kernels and the used forming fluid. Kernels formed in silicone oil and bromotetradecane were comparable to those produced by previous kernel production efforts, while those produced in chlorooctadecane and iodododecane experienced gelation issues leading to poor kernel formation and geometry.

  19. Smooth, seamless, and structured grid generation with flexibility in resolution distribution on a sphere based on conformal mapping and the spring dynamics method

    NASA Astrophysics Data System (ADS)

    Iga, Shin-ichi

    2015-09-01

    A generation method for smooth, seamless, and structured triangular grids on a sphere with flexibility in resolution distribution is proposed. This method is applicable to many fields that deal with a sphere on which the required resolution is not uniform. The grids were generated using the spring dynamics method, and adjustments were made using analytical functions. The mesh topology determined its resolution distribution, derived from a combination of conformal mapping factors: polar stereographic projection (PSP), Lambert conformal conic projection (LCCP), and Mercator projection (MP). Their combination generated, for example, a tropically fine grid that had a nearly constant high-resolution belt around the equator, with a gradual decrease in resolution distribution outside of the belt. This grid can be applied to boundary-less simulations of tropical meteorology. The other example involves a regionally fine grid with a nearly constant high-resolution circular region and a gradually decreasing resolution distribution outside of the region. This is applicable to regional atmospheric simulations without grid nesting. The proposed grids are compatible with computer architecture because they possess a structured form. Each triangle of the proposed grids was highly regular, implying a high local isotropy in resolution. Finally, the proposed grids were examined by advection and shallow water simulations.

  20. Prioritizing individual genetic variants after kernel machine testing using variable selection.

    PubMed

    He, Qianchuan; Cai, Tianxi; Liu, Yang; Zhao, Ni; Harmon, Quaker E; Almli, Lynn M; Binder, Elisabeth B; Engel, Stephanie M; Ressler, Kerry J; Conneely, Karen N; Lin, Xihong; Wu, Michael C

    2016-12-01

    Kernel machine learning methods, such as the SNP-set kernel association test (SKAT), have been widely used to test associations between traits and genetic polymorphisms. In contrast to traditional single-SNP analysis methods, these methods are designed to examine the joint effect of a set of related SNPs (such as a group of SNPs within a gene or a pathway) and are able to identify sets of SNPs that are associated with the trait of interest. However, as with many multi-SNP testing approaches, kernel machine testing can draw conclusions only at the SNP-set level and does not directly inform which SNP(s) in an identified set actually drive the association. A recently proposed procedure, KerNel Iterative Feature Extraction (KNIFE), provides a general framework for incorporating variable selection into kernel machine methods. In this article, we focus on quantitative traits and relatively common SNPs, adapt the KNIFE procedure to genetic association studies, and propose an approach to identify driver SNPs after the application of SKAT to gene set analysis. Our approach accommodates several kernels that are widely used in SNP analysis, such as the linear kernel and the Identity by State (IBS) kernel. The proposed approach provides practically useful utilities to prioritize SNPs and fills the gap between SNP-set analysis and biological functional studies. Both simulation studies and a real data application are used to demonstrate the proposed approach.

  1. Protein Analysis Meets Visual Word Recognition: A Case for String Kernels in the Brain

    ERIC Educational Resources Information Center

    Hannagan, Thomas; Grainger, Jonathan

    2012-01-01

    It has been recently argued that some machine learning techniques known as Kernel methods could be relevant for capturing cognitive and neural mechanisms (Jakel, Scholkopf, & Wichmann, 2009). We point out that "String kernels," initially designed for protein function prediction and spam detection, are virtually identical to one contending proposal…

  2. Nondiagonal Values of the Heat Kernel for Scalars in a Constant Electromagnetic Field

    NASA Astrophysics Data System (ADS)

    Kalinichenko, I. S.; Kazinski, P. O.

    2017-03-01

    An original method for finding the nondiagonal values of the heat kernel associated with the wave operator Fourier-transformed in time is proposed for the case of a constant external electromagnetic field. The connection of the trace of such a heat kernel to the one-loop correction to the grand thermodynamic potential is indicated. The structure of its singularities is analyzed.

  3. Resistant-starch Formation in High-amylose Maize Starch During Kernel Development

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this study was to understand the resistant-starch (RS) formation during the kernel development of high-amylose maize, GEMS-0067 line. RS content of the starch, determined using AOAC Method 991.43 for total dietary fiber, increased with kernel maturation and the increase in amylose/...

  4. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    PubMed

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function.

  5. Travel-time sensitivity kernels in long-range propagation.

    PubMed

    Skarsoulis, E K; Cornuelle, B D; Dzieciuch, M A

    2009-11-01

    Wave-theoretic travel-time sensitivity kernels (TSKs) are calculated in two-dimensional (2D) and three-dimensional (3D) environments and their behavior with increasing propagation range is studied and compared to that of ray-theoretic TSKs and corresponding Fresnel-volumes. The differences between the 2D and 3D TSKs average out when horizontal or cross-range marginals are considered, which indicates that they are not important in the case of range-independent sound-speed perturbations or perturbations of large scale compared to the lateral TSK extent. With increasing range, the wave-theoretic TSKs expand in the horizontal cross-range direction, their cross-range extent being comparable to that of the corresponding free-space Fresnel zone, whereas they remain bounded in the vertical. Vertical travel-time sensitivity kernels (VTSKs)-one-dimensional kernels describing the effect of horizontally uniform sound-speed changes on travel-times-are calculated analytically using a perturbation approach, and also numerically, as horizontal marginals of the corresponding TSKs. Good agreement between analytical and numerical VTSKs, as well as between 2D and 3D VTSKs, is found. As an alternative method to obtain wave-theoretic sensitivity kernels, the parabolic approximation is used; the resulting TSKs and VTSKs are in good agreement with normal-mode results. With increasing range, the wave-theoretic VTSKs approach the corresponding ray-theoretic sensitivity kernels.

  6. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    PubMed

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, current kernel learning approaches are based on local optimization techniques and struggle to achieve good time performance, especially for large datasets, so existing algorithms cannot easily be extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method that solves a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter, and the objective programming problem can then be converted to an SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which does not need to repeat the search procedure with different starting points to locate the best local minimum. The proposed method can also be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time efficiency and good classification performance.
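
    The kernel target alignment criterion referred to above is the Frobenius cosine between the Gram matrix K and the ideal kernel yyᵀ, and it can be computed directly; the data and the candidate kernel widths in this sketch are illustrative.

      import numpy as np

      def alignment(K, y):
          # A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F)
          Y = np.outer(y, y)
          return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

      rng = np.random.default_rng(9)
      X = np.concatenate([rng.normal(-1, 0.5, (40, 2)), rng.normal(1, 0.5, (40, 2))])
      y = np.repeat([-1.0, 1.0], 40)

      d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
      for gamma in (0.01, 0.5, 10.0):
          K = np.exp(-gamma * d2)
          print(f"gamma={gamma}: alignment = {alignment(K, y):.3f}")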

  7. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-04-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by the so-called smoothing error. In this paper it is shown that the concept of the smoothing error is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state. The idea of a sufficiently fine sampling of this reference atmospheric state is untenable because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully talk about temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the involved a priori covariance matrix has been evaluated on the comparison grid rather than resulting from interpolation. This is because the undefined component of the smoothing error, which is the effect of smoothing implied by the finite grid on which the measurements are compared, cancels out when the difference is calculated.

  8. Predicting dissolved oxygen concentration using kernel regression modeling approaches with nonlinear hydro-chemical data.

    PubMed

    Singh, Kunwar P; Gupta, Shikha; Rai, Premanjali

    2014-05-01

    Kernel function-based regression models were constructed and applied to a nonlinear hydro-chemical dataset pertaining to surface water for predicting dissolved oxygen levels. Initial features were selected using a nonlinear approach. Nonlinearity in the data was tested using the BDS statistic, which revealed a nonlinear structure. Kernel ridge regression, kernel principal component regression, kernel partial least squares regression, and support vector regression models were developed using the Gaussian kernel function, and their generalization and predictive abilities were compared in terms of several statistical parameters. Model parameters were optimized using a cross-validation procedure. The proposed kernel regression methods successfully captured the nonlinear features of the original data by transforming it to a high-dimensional feature space using the kernel function. Performance of all the kernel-based modeling methods used here was comparable in terms of both predictive and generalization abilities. Values of the performance criteria suggested that the constructed models fit the nonlinear data adequately and had good predictive capabilities.
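
    Kernel ridge regression, one of the four models compared, can be sketched in miniature with scikit-learn; the stand-in "hydro-chemical" inputs and the hyperparameters below are illustrative assumptions, whereas the paper tunes its models by cross-validation.

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(10)
      X = rng.uniform(0, 3, size=(300, 4))      # stand-in water-quality inputs
      y = np.sin(X[:, 0]) * X[:, 1] + 0.1 * rng.standard_normal(300)  # "dissolved oxygen"

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      model = KernelRidge(kernel="rbf", gamma=0.5, alpha=0.1).fit(X_tr, y_tr)
      print(f"held-out R^2: {model.score(X_te, y_te):.3f}")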

  9. Diamond Smoothing Tools

    NASA Technical Reports Server (NTRS)

    Voronov, Oleg

    2007-01-01

    Diamond smoothing tools have been proposed for use in conjunction with diamond cutting tools that are used in many finish-machining operations. Diamond machining (including finishing) is often used, for example, in fabrication of precise metal mirrors. A diamond smoothing tool according to the proposal would have a smooth spherical surface. For a given finish machining operation, the smoothing tool would be mounted next to the cutting tool. The smoothing tool would slide on the machined surface left behind by the cutting tool, plastically deforming the surface material and thereby reducing the roughness of the surface, closing microcracks and otherwise generally reducing or eliminating microscopic surface and subsurface defects, and increasing the microhardness of the surface layer. It has been estimated that if smoothing tools of this type were used in conjunction with cutting tools on sufficiently precise lathes, it would be possible to reduce the roughness of machined surfaces to as little as 3 nm. A tool according to the proposal would consist of a smoothing insert in a metal holder. The smoothing insert would be made from a diamond/metal functionally graded composite rod preform, which, in turn, would be made by sintering together a bulk single-crystal or polycrystalline diamond, a diamond powder, and a metallic alloy at high pressure. To form the spherical smoothing tip, the diamond end of the preform would be subjected to flat grinding, conical grinding, spherical grinding using diamond wheels, and finally spherical polishing and/or buffing using diamond powders. If the diamond were a single crystal, then it would be crystallographically oriented, relative to the machining motion, to minimize its wear and maximize its hardness. Spherically polished diamonds could also be useful for purposes other than smoothing in finish machining: They would likely also be suitable for use as heat-resistant, wear-resistant, unlubricated sliding-fit bearing inserts.

  10. An Equipercentile Version of the Levine Linear Observed-Score Equating Function Using the Methods of Kernel Equating. Research Report. ETS RR-07-14

    ERIC Educational Resources Information Center

    von Davier, Alina A.; Fournier-Zajac, Stephanie; Holland, Paul W.

    2007-01-01

    In the nonequivalent groups with anchor test (NEAT) design, there are several ways to use the information provided by the anchor in the equating process. One of the NEAT-design equating methods is the linear observed-score Levine method (Kolen & Brennan, 2004). It is based on a classical test theory model of the true scores on the test forms…

  11. Epidemiological analysis of hemorrhagic fever with renal syndrome in China with the seasonal-trend decomposition method and the exponential smoothing model

    PubMed Central

    Ke, Guibao; Hu, Yao; Huang, Xin; Peng, Xuan; Lei, Min; Huang, Chaoli; Gu, Li; Xian, Ping; Yang, Dehua

    2016-01-01

    Hemorrhagic fever with renal syndrome (HFRS) is one of the most common infectious diseases globally. Although China reports the most cases in the world, the epidemic characteristics of HFRS there remain unclear. This paper utilized the seasonal-trend decomposition (STL) method to analyze the periodicity and seasonality of the HFRS data, and used the exponential smoothing (ETS) model to predict incidence cases from July to December 2016 based on the data from January 2006 to June 2016. Analytic results demonstrated a favorable trend of HFRS in China, with obvious periodicity and seasonality: the annual winter peak of reported cases was concentrated between November and January of the following year, and cases reported in May and June constituted another, summer peak. Eventually, the ETS(M,N,A) model was adopted for fitting and forecasting, and the fitting results indicated high accuracy (mean absolute percentage error (MAPE) = 13.12%). The forecasting results also demonstrated a gradually decreasing trend from July to December 2016, suggesting that control measures for hemorrhagic fever were effective in China. The STL method performed well in the seasonal analysis of HFRS in China, and ETS could be effectively used in the time series analysis of HFRS in China. PMID:27976704
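
    The STL-plus-ETS workflow can be sketched with statsmodels as below; note that statsmodels' Holt-Winters class is an additive-error stand-in for the ETS(M,N,A) model of the paper, and the monthly series here is synthetic, not the Chinese HFRS data.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.seasonal import STL
      from statsmodels.tsa.holtwinters import ExponentialSmoothing

      # Synthetic monthly counts standing in for January 2006 - June 2016.
      idx = pd.date_range("2006-01", periods=126, freq="MS")
      rng = np.random.default_rng(2)
      cases = (1000 + 300 * np.sin(2 * np.pi * np.arange(126) / 12)
               - 2 * np.arange(126) + rng.normal(0, 50, 126))
      series = pd.Series(cases, index=idx)

      # Seasonal-trend decomposition to inspect periodicity and seasonality.
      stl = STL(series, period=12).fit()
      print("seasonal amplitude:", stl.seasonal.max() - stl.seasonal.min())

      # Exponential smoothing fit, then a 6-step forecast (July-December 2016)
      # and the MAPE of the fitted values.
      fit = ExponentialSmoothing(series, trend="add", seasonal="add",
                                 seasonal_periods=12).fit()
      forecast = fit.forecast(6)
      mape = np.mean(np.abs((series - fit.fittedvalues) / series)) * 100
      print("MAPE = %.2f%%" % mape)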

  12. A Simple Method for the Growth of Very Smooth and Ultra-Thin GaSb Films on GaAs (111) Substrate by MOCVD

    NASA Astrophysics Data System (ADS)

    Ni, Pei-Nan; Tong, Jin-Chao; Tobing, Landobasa Y. M.; Qiu, Shu-Peng; Xu, Zheng-Ji; Tang, Xiao-Hong; Zhang, Dao-Hua

    2017-02-01

    We present a simple thermal treatment with the antimony source for the metal-organic chemical vapor deposition of thin GaSb films on GaAs (111) substrates for the first time. The properties of the as-grown GaSb films are systematically analyzed by scanning electron microscopy, atomic force microscopy, x-ray diffraction, photo-luminescence (PL) and Hall measurement. It is found that the as-grown GaSb films by the proposed method can be as thin as 35 nm and have a very smooth surface with the root mean square roughness as small as 0.777 nm. Meanwhile, the grown GaSb films also have high crystalline quality, of which the full width at half maximum of the rocking-curve is as small as 218 arcsec. Moreover, the good optical quality of the GaSb films has been demonstrated by the low-temperature PL. This work provides a simple and feasible buffer-free strategy for the growth of high-quality GaSb films directly on GaAs substrates and the strategy may also be applicable to the growth on other substrates and the hetero-growth of other materials.

  13. Popping the Kernel Modeling the States of Matter

    ERIC Educational Resources Information Center

    Hitt, Austin; White, Orvil; Hanson, Debbie

    2005-01-01

    This article discusses how to use popcorn to engage students in model building and to teach them about the nature of matter. Popping kernels is a simple and effective method to connect the concepts of heat, motion, and volume with the different phases of matter. Before proceeding with the activity the class should discuss the nature of scientific…

  14. Physics Integration KErnels (PIKE)

    SciTech Connect

    Pawlowski, Roger

    2014-07-31

    Pike is a software library for coupling and solving multiphysics applications. It provides basic interfaces and utilities for performing code-to-code coupling, including simple “black-box” Picard iteration methods for solving the coupled system of equations with Jacobi and Gauss-Seidel solvers. Pike was developed originally to couple neutronics and thermal fluids codes to simulate a light water nuclear reactor for the Consortium for Simulation of Light-water Reactors (CASL) DOE Energy Innovation Hub. The Pike library contains no physics; it just provides interfaces and utilities for coupling codes. It will be released open source under a BSD license as part of the Trilinos solver framework (trilinos.org), which is also BSD. This code provides capabilities similar to other open source multiphysics coupling libraries such as LIME, AMP, and MOOSE.
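
    A minimal sketch of the kind of black-box coupling Pike orchestrates is given below: a Gauss-Seidel Picard fixed-point loop between two hypothetical single-physics solvers. This shows the iteration pattern only, in Python, not the Pike C++ API.

      import numpy as np

      # Hypothetical stand-ins for two coupled codes: a neutronics solve that
      # returns power given temperature, and a thermal solve that returns
      # temperature given power.
      def solve_neutronics(T):
          return 1.0 + 0.1 * np.tanh(T - 1.0)

      def solve_thermal(P):
          return 0.8 * P + 0.2

      def picard_gauss_seidel(tol=1e-10, max_iter=100):
          # Gauss-Seidel Picard iteration: each code immediately consumes the
          # other's freshest output until the coupled fixed point converges.
          T = 1.0
          for k in range(max_iter):
              P = solve_neutronics(T)      # uses the latest temperature
              T_new = solve_thermal(P)     # uses the power just computed
              if abs(T_new - T) < tol:
                  return T_new, P, k + 1
              T = T_new
          raise RuntimeError("Picard iteration did not converge")

      T, P, iters = picard_gauss_seidel()
      print(f"converged in {iters} iterations: T={T:.6f}, P={P:.6f}")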

  15. Single aflatoxin contaminated corn kernel analysis with fluorescence hyperspectral image

    NASA Astrophysics Data System (ADS)

    Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Ononye, Ambrose; Brown, Robert L.; Cleveland, Thomas E.

    2010-04-01

    Aflatoxins are toxic secondary metabolites of the fungi Aspergillus flavus and Aspergillus parasiticus, among others. Aflatoxin contaminated corn is toxic to domestic animals when ingested in feed and is a known carcinogen associated with liver and lung cancer in humans. Consequently, aflatoxin levels in food and feed are regulated by the Food and Drug Administration (FDA) in the US, allowing 20 ppb (parts per billion) limits in food and 100 ppb in feed for interstate commerce. Currently, aflatoxin detection and quantification methods are based on analytical tests including thin-layer chromatography (TLC) and high performance liquid chromatography (HPLC). These analytical tests require the destruction of samples, and are costly and time consuming. Thus, the ability to detect aflatoxin in a rapid, nondestructive way is crucial to the grain industry, particularly to the corn industry. Hyperspectral imaging technology offers a non-invasive approach toward screening for food safety inspection and quality control based on spectral signatures. The focus of this paper is to classify aflatoxin contaminated single corn kernels using fluorescence hyperspectral imagery. Field inoculated corn kernels were used in the study. Contaminated and control kernels under long wavelength ultraviolet excitation were imaged using a visible near-infrared (VNIR) hyperspectral camera. The imaged kernels were chemically analyzed to provide reference information for image analysis. This paper describes a procedure to process corn kernels located in different images for statistical training and classification. Two classification algorithms, Maximum Likelihood and Binary Encoding, were used to classify each corn kernel into "control" or "contaminated" through pixel classification. The Binary Encoding approach had slightly better performance, with accuracy equal to 87% or 88% when 20 ppb or 100 ppb was used as the classification threshold, respectively.

  16. Smoothed-particle-hydrodynamics modeling of dissipation mechanisms in gravity waves.

    PubMed

    Colagrossi, Andrea; Souto-Iglesias, Antonio; Antuono, Matteo; Marrone, Salvatore

    2013-02-01

    The smoothed-particle-hydrodynamics (SPH) method has been used to study the evolution of free-surface Newtonian viscous flows specifically focusing on dissipation mechanisms in gravity waves. The numerical results have been compared with an analytical solution of the linearized Navier-Stokes equations for Reynolds numbers in the range 50-5000. We found that a correct choice of the number of neighboring particles is of fundamental importance in order to obtain convergence towards the analytical solution. This number has to increase with higher Reynolds numbers in order to prevent the onset of spurious vorticity inside the bulk of the fluid, leading to an unphysical overdamping of the wave amplitude. This generation of spurious vorticity strongly depends on the specific kernel function used in the SPH model.

  17. FABRICATION PROCESS AND PRODUCT QUALITY IMPROVEMENTS IN ADVANCED GAS REACTOR UCO KERNELS

    SciTech Connect

    Charles M Barnes

    2008-09-01

    A major element of the Advanced Gas Reactor (AGR) program is developing fuel fabrication processes to produce high quality uranium-containing kernels, TRISO-coated particles and fuel compacts needed for planned irradiation tests. The goals of the AGR program also include developing the fabrication technology to mass produce this fuel at low cost. Kernels for the first AGR test (AGR-1) consisted of uranium oxycarbide (UCO) microspheres that were produced by an internal gelation process followed by high temperature steps to convert the UO3 + C “green” microspheres to first UO2 + C and then UO2 + UCx. The high temperature steps also densified the kernels. Babcock and Wilcox (B&W) fabricated UCO kernels for the AGR-1 irradiation experiment, which went into the Advanced Test Reactor (ATR) at Idaho National Laboratory in December 2006. An evaluation of the kernel process following AGR-1 kernel production led to several recommendations to improve the fabrication process. These recommendations included testing alternative methods of dispersing carbon during broth preparation, evaluating the method of broth mixing, optimizing the broth chemistry, optimizing sintering conditions, and demonstrating fabrication of larger diameter UCO kernels needed for the second AGR irradiation test. Based on these recommendations and requirements, a test program was defined and performed. Certain portions of the test program were performed by Oak Ridge National Laboratory (ORNL), while tests at larger scale were performed by B&W. The tests at B&W have demonstrated improvements in both kernel properties and process operation. Changes in the form of carbon black used and the method of mixing the carbon prior to forming kernels led to improvements in the phase distribution in the sintered kernels, greater consistency in kernel properties, a reduction in forming run time, and simplifications to the forming process. Process parameter variation tests in both forming and sintering steps led

  18. Predicting disease trait with genomic data: a composite kernel approach.

    PubMed

    Yang, Haitao; Li, Shaoyu; Cao, Hongyan; Zhang, Chichen; Cui, Yuehua

    2016-06-02

    With the advancement of biotechniques, an ever-growing amount of genomic data is being generated. Predicting a disease trait based on these data offers a cost-effective and time-efficient way for early disease screening. Here we proposed a composite kernel partial least squares (CKPLS) regression model for quantitative disease trait prediction focusing on genomic data. It can efficiently capture nonlinear relationships among features compared with linear learning algorithms such as the Least Absolute Shrinkage and Selection Operator or ridge regression. We proposed to optimize the kernel parameters and kernel weights with a genetic algorithm (GA). In addition to improved performance for parameter optimization, the proposed GA-CKPLS approach also has better learning capacity and generalization ability compared with the single-kernel-based KPLS method as well as other nonlinear prediction models such as support vector regression. Extensive simulation studies demonstrated that GA-CKPLS had better prediction performance than its counterparts under different scenarios. The utility of the method was further demonstrated through two case studies. Our method provides an efficient quantitative platform for disease trait prediction based on the increasing volume of omics data.
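
    The composite-kernel construction itself is simple to write down. Below is a minimal sketch of a convex combination of Gaussian kernels; in GA-CKPLS both the kernel parameters and the weights would be tuned by the genetic algorithm rather than fixed as they are here.

      import numpy as np

      def rbf(X1, X2, gamma):
          d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
                - 2.0 * X1 @ X2.T)
          return np.exp(-gamma * d2)

      def composite_kernel(X1, X2, gammas, weights):
          # Convex combination of Gaussian kernels (weights normalized to 1).
          w = np.asarray(weights, float) / np.sum(weights)
          return sum(wi * rbf(X1, X2, g) for wi, g in zip(w, gammas))

      rng = np.random.default_rng(3)
      X = rng.normal(size=(50, 10))
      K = composite_kernel(X, X, gammas=[0.01, 0.1, 1.0], weights=[1, 2, 1])
      print(K.shape)  # (50, 50) composite kernel matrix, ready for KPLS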

  19. Manifold Kernel Sparse Representation of Symmetric Positive-Definite Matrices and Its Applications.

    PubMed

    Wu, Yuwei; Jia, Yunde; Li, Peihua; Zhang, Jian; Yuan, Junsong

    2015-11-01

    The symmetric positive-definite (SPD) matrix, as a connected Riemannian manifold, has become increasingly popular for encoding image information. Most existing sparse models are still primarily developed in the Euclidean space. They do not consider the non-linear geometrical structure of the data space, and thus are not directly applicable to the Riemannian manifold. In this paper, we propose a novel sparse representation method of SPD matrices in the data-dependent manifold kernel space. The graph Laplacian is incorporated into the kernel space to better reflect the underlying geometry of SPD matrices. Under the proposed framework, we design two different positive definite kernel functions that can be readily transformed to the corresponding manifold kernels. The sparse representation obtained has more discriminating power. Extensive experimental results demonstrate good performance of manifold kernel sparse codes in image classification, face recognition, and visual tracking.

  20. Diffusion Map Kernel Analysis for Target Classification

    DTIC Science & Technology

    2010-06-01

    Gaussian and Polynomial kernels are most familiar from support vector machines. The Laplacian and Rayleigh kernels were introduced previously in [7]. ... Datasets include Clev. Heart (Heart Disease Data Set, Cleveland), Wisc. BC (Wisconsin Breast Cancer Original), and Sonar2 (Shallow Water Acoustic Toolset [9]). ... The Rayleigh kernel captures the embedding with an average PC of 77.3% and a slightly higher PFA than the Gaussian kernel. ...

  1. Astrophysical smooth particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Rosswog, Stephan

    2009-04-01

    The paper presents a detailed review of the smooth particle hydrodynamics (SPH) method with particular focus on its astrophysical applications. We start by introducing the basic ideas and concepts and thereby outline all ingredients that are necessary for a practical implementation of the method in a working SPH code. Much of SPH's success relies on its excellent conservation properties and therefore the numerical conservation of physical invariants receives much attention throughout this review. The self-consistent derivation of the SPH equations from the Lagrangian of an ideal fluid is the common theme of the remainder of the text. We derive a modern, Newtonian SPH formulation from the Lagrangian of an ideal fluid. It accounts for changes of the local resolution lengths which result in corrective, so-called "grad-h-terms". We extend this strategy to special relativity for which we derive the corresponding grad-h equation set. The variational approach is further applied to a general-relativistic fluid evolving in a fixed, curved background space-time. Particular care is taken to explicitly derive all relevant equations in a coherent way.

  2. A new method for direct detection of the sites of actin polymerization in intact cells and its application to differentiated vascular smooth muscle.

    PubMed

    Kim, Hak Rim; Leavis, Paul C; Graceffa, Philip; Gallant, Cynthia; Morgan, Kathleen G

    2010-11-01

    Here we report and validate a new method, suitable broadly, for use in differentiated cells and tissues, for the direct visualization of actin polymerization under physiological conditions. We have designed and tested different versions of fluorescently labeled actin, reversibly attached to the protein transduction tag TAT, and have introduced this novel reagent into intact differentiated vascular smooth muscle cells (dVSMCs). A thiol-reactive version of the TAT peptide was synthesized by adding the amino acids glycine and cysteine to its NH(2)-terminus and forming a thionitrobenzoate adduct: viz. TAT-Cys-S-STNB. This peptide reacts readily with G-actin, and the complex is rapidly taken up by freshly enzymatically isolated dVSMC, as indicated by the fluorescence of a FITC tag on the TAT peptide. By comparing different versions of the construct, we determined that the optimal construct for biological applications is a nonfluorescently labeled TAT peptide conjugated to rhodamine-labeled actin. When TAT-Cys-S-STNB-tagged rhodamine actin (TSSAR) was added to live, freshly enzymatically isolated cells, we observed punctae of incorporated actin at the cortex of the cell. The punctae are indistinguishable from those we have previously reported to occur in the same cell type when rhodamine G-actin is added to permeabilized cells. Thus this new method allows the delivery of labeled G-actin into intact cells without disrupting the native state and will allow its further use to study the effect of physiological intracellular Ca(2+) concentration transients and signal transduction on actin dynamics in intact cells.

  3. Molecular method for sex identification of half-smooth tongue sole (Cynoglossus semilaevis) using a novel sex-linked microsatellite marker.

    PubMed

    Liao, Xiaolin; Xu, Genbo; Chen, Song-Lin

    2014-07-22

    Half-smooth tongue sole (Cynoglossus semilaevis) is one of the most important flatfish species for aquaculture in China. To produce a monosex population, we attempted to develop a marker-assisted sex control technique in this sexually size dimorphic fish. In this study, we identified a co-dominant sex-linked marker (i.e., CyseSLM) by screening genomic microsatellites and further developed a novel molecular method for sex identification in the tongue sole. CyseSLM has a sequence similarity of 73%-75% with stickleback, medaka, Fugu and Tetraodon. At this locus, two alleles (i.e., A244 and A234) were amplified from 119 tongue sole individuals with primer pairs CyseSLM-F1 and CyseSLM-R. Allele A244 was present in all individuals, while allele A234 (female-associated allele, FAA) was mostly present in females with exceptions in four male individuals. Compared with the sequence of A244, A234 has a 10-bp deletion and 28 SNPs. A specific primer (CyseSLM-F2) was then designed based on the A234 sequence, which amplified a 204 bp fragment in all females and four males with primer CyseSLM-R. A time-efficient multiplex PCR program was developed using primers CyseSLM-F2, CyseSLM-R and the newly designed primer CyseSLM-F3. The multiplex PCR products with co-dominant pattern could be detected by agarose gel electrophoresis, which accurately identified the genetic sex of the tongue sole. Therefore, we have developed a rapid and reliable method for sex identification in tongue sole with a newly identified sex-linked microsatellite marker.

  4. Privacy preserving RBF kernel support vector machine.

    PubMed

    Li, Haoran; Xiong, Li; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2014-01-01

    Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed, but they did not consider the characteristics of biomedical data or make full use of the available information, which often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid privacy-preserving differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs that are built from private data. Our method demonstrated performance metrics very close to those of nonprivate SVMs trained on the private data.

  5. A Novel Method for Differentiation of Human Mesenchymal Stem Cells into Smooth Muscle-Like Cells on Clinically Deliverable Thermally Induced Phase Separation Microspheres

    PubMed Central

    Parmar, Nina; Ahmadi, Raheleh

    2015-01-01

    Muscle degeneration is a prevalent disease, particularly in aging societies where it has a huge impact on quality of life and incurs colossal health costs. Suitable donor sources of smooth muscle cells are limited and minimally invasive therapeutic approaches are sought that will augment muscle volume by delivering cells to damaged or degenerated areas of muscle. For the first time, we report the use of highly porous microcarriers produced using thermally induced phase separation (TIPS) to expand and differentiate adipose-derived mesenchymal stem cells (AdMSCs) into smooth muscle-like cells in a format that requires minimal manipulation before clinical delivery. AdMSCs readily attached to the surface of TIPS microcarriers and proliferated while maintained in suspension culture for 12 days. Switching the incubation medium to a differentiation medium containing 2 ng/mL transforming growth factor beta-1 resulted in a significant increase in both the mRNA and protein expression of cell contractile apparatus components caldesmon, calponin, and myosin heavy chains, indicative of a smooth muscle cell-like phenotype. Growth of smooth muscle cells on the surface of the microcarriers caused no change to the integrity of the polymer microspheres making them suitable for a cell-delivery vehicle. Our results indicate that TIPS microspheres provide an ideal substrate for the expansion and differentiation of AdMSCs into smooth muscle-like cells as well as a microcarrier delivery vehicle for the attached cells ready for therapeutic applications. PMID:25205072

  6. A novel kernel extreme learning machine algorithm based on self-adaptive artificial bee colony optimisation strategy

    NASA Astrophysics Data System (ADS)

    Ma, Chao; Ouyang, Jihong; Chen, Hui-Ling; Ji, Jin-Chao

    2016-04-01

    In this paper, we propose a novel learning algorithm, named SABC-MKELM, based on a kernel extreme learning machine (KELM) method for single-hidden-layer feedforward networks. In SABC-MKELM, a combination of Gaussian kernels is used as the activation function of KELM instead of simple fixed kernel learning, and the related kernel parameters and kernel weights can be optimised simultaneously by a novel self-adaptive artificial bee colony (SABC) approach. SABC-MKELM generally outperforms six other state-of-the-art approaches, as the SABC strategy can effectively determine solution-updating strategies and suitable parameters to produce a flexible kernel function. Simulations have demonstrated that the proposed algorithm not only self-adaptively determines suitable parameters and solution-updating strategies by learning from previous experience, but also achieves better generalisation performance than several related methods, with good stability.
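
    A minimal sketch of the underlying kernel ELM is given below, assuming the standard closed-form output weights beta = (I/C + K)^{-1} T, with a fixed two-kernel Gaussian combination standing in for the SABC-optimised one; all parameter values are illustrative.

      import numpy as np

      def multi_kernel(X1, X2, gammas, weights):
          # Weighted combination of Gaussian kernels (SABC would tune these).
          d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
                - 2.0 * X1 @ X2.T)
          return sum(w * np.exp(-g * d2) for w, g in zip(weights, gammas))

      class KELM:
          # Kernel extreme learning machine: no hidden weights are trained;
          # the output weights have the closed form beta = (I/C + K)^{-1} T.
          def __init__(self, C=10.0, gammas=(0.1, 1.0), weights=(0.5, 0.5)):
              self.C, self.gammas, self.weights = C, gammas, weights

          def fit(self, X, T):
              self.X = X
              K = multi_kernel(X, X, self.gammas, self.weights)
              self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
              return self

          def predict(self, Xnew):
              return multi_kernel(Xnew, self.X, self.gammas,
                                  self.weights) @ self.beta

      rng = np.random.default_rng(4)
      X = rng.normal(size=(100, 3))
      y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)
      print(KELM().fit(X[:80], y[:80]).predict(X[80:])[:3])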

  7. Computer-aided identification of the water diffusion coefficient for maize kernels dried in a thin layer

    NASA Astrophysics Data System (ADS)

    Kujawa, Sebastian; Weres, Jerzy; Olek, Wiesław

    2016-07-01

    Uncertainties in mathematical modelling of water transport in cereal grain kernels during drying and storage are mainly due to implementing unreliable values of the water diffusion coefficient and simplifying the geometry of kernels. In the present study an attempt was made to reduce the uncertainties by developing a method for computer-aided identification of the water diffusion coefficient and more accurate 3D geometry modelling for individual kernels using original inverse finite element algorithms. The approach was exemplified by identifying the water diffusion coefficient for maize kernels subjected to drying. On the basis of the developed method, values of the water diffusion coefficient were estimated, 3D geometry of a maize kernel was represented by isoparametric finite elements, and the moisture content inside maize kernels dried in a thin layer was predicted. Validation of the results against experimental data showed significantly lower error values than in the case of results obtained for the water diffusion coefficient values available in the literature.

  8. Molecular Hydrodynamics from Memory Kernels

    NASA Astrophysics Data System (ADS)

    Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin

    2016-04-01

    The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^{-3/2}. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius.

  9. Online multiple kernel similarity learning for visual search.

    PubMed

    Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

    2014-03-01

    Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in content-based image retrieval (CBIR). Despite their successes, most existing methods on distance metric learning are limited in two aspects. First, they usually assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multimodal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel similarity learning framework for learning kernel-based proximity functions which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel online multiple kernel similarity (OMKS) learning method which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets in which encouraging results show that OMKS outperforms the state-of-the-art techniques significantly.

  10. Online Multiple Kernel Similarity Learning for Visual Search.

    PubMed

    Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

    2013-08-13

    Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in Content-Based Image Retrieval (CBIR). Despite their popularity and success, most existing methods on distance metric learning are limited in two aspects. First, they typically assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multi-modal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel ranking framework for learning kernel-based proximity functions, which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel Online Multiple Kernel Ranking (OMKR) method, which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets, in which encouraging results show that OMKR outperforms the state-of-the-art techniques significantly.

  11. Nonequilibrium Flows with Smooth Particle Applied Mechanics.

    NASA Astrophysics Data System (ADS)

    Kum, Oyeon

    Smooth particle methods are relatively new methods for simulating solid and fluid flows, though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be separately controlled. The gradient algorithm, based on differentiating the smooth particle expressions for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector quantities which require second derivatives in space when Fourier's heat-flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel linking them to molecular dynamics. For the inviscid Euler equation, with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for three weighting functions: the B-spline, Lucy, and Cusp functions. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Bénard problems, all in the laminar regime, to corresponding highly-accurate grid-based numerical solutions of continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number where grid-based methods fail. Considerably fewer smooth particles are required than atoms in a corresponding molecular dynamics

  12. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions

    NASA Astrophysics Data System (ADS)

    Song, Bongyong; Park, Justin C.; Song, William Y.

    2014-11-01

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires ‘at most one function evaluation’ in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a ‘smoothed TV’ or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3DCBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence property compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs the standard, 364-projection FDK reconstruction quality image using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces visibly equivalent quality CBCT

  13. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions.

    PubMed

    Song, Bongyong; Park, Justin C; Song, William Y

    2014-11-07

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires 'at most one function evaluation' in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a 'smoothed TV' or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3DCBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence property compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs the standard, 364-projection FDK reconstruction quality image using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces visibly equivalent quality CBCT image for
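
    The Barzilai-Borwein gradient projection core of such methods can be sketched on a toy problem. The following applies the BB1 step with projection onto the nonnegative orthant for a least-squares objective; the smoothed-TV objective and the selective-function-evaluation safeguard of GPBB-SFE are deliberately omitted, so this shows the step-size and projection machinery only.

      import numpy as np

      def gpbb_nnls(A, b, iters=50):
          # Gradient projection with Barzilai-Borwein 2-point step sizes for
          # min ||Ax - b||^2 subject to x >= 0 (toy stand-in for the CBCT
          # reconstruction objective).
          x = np.zeros(A.shape[1])
          g = A.T @ (A @ x - b)
          alpha = 1.0 / np.linalg.norm(g)
          for _ in range(iters):
              x_new = np.maximum(x - alpha * g, 0.0)  # projection onto x >= 0
              g_new = A.T @ (A @ x_new - b)
              s, yv = x_new - x, g_new - g
              sy = s @ yv
              alpha = (s @ s) / sy if sy > 0 else 1.0  # BB1 step size
              x, g = x_new, g_new
          return x

      rng = np.random.default_rng(5)
      A = rng.normal(size=(30, 10))
      x_true = np.maximum(rng.normal(size=10), 0.0)
      x = gpbb_nnls(A, A @ x_true)
      print("recovery error:", np.linalg.norm(x - x_true))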

  14. Statistical Analysis of Photopyroelectric Signals using Histogram and Kernel Density Estimation for differentiation of Maize Seeds

    NASA Astrophysics Data System (ADS)

    Rojas-Lima, J. E.; Domínguez-Pacheco, A.; Hernández-Aguilar, C.; Cruz-Orea, A.

    2016-09-01

    Considering the necessity of photothermal alternative approaches for characterizing nonhomogeneous materials like maize seeds, the objective of this research work was to statistically analyze the amplitude variations of photopyroelectric signals, by means of nonparametric techniques such as the histogram and the kernel density estimator, and to obtain the probability density function of the amplitude variations for two genotypes of maize seeds with different pigmentations and structural components: crystalline and floury. To determine whether the probability density function had a known parametric form, the histogram was examined; it did not present a known parametric form, so the kernel density estimator with a Gaussian kernel, with an efficiency of 95% in density estimation, was used to obtain the probability density function. The results obtained indicated that maize seeds could be differentiated in terms of the statistical values for floury and crystalline seeds, such as the mean (93.11, 159.21), variance (1.64×10³, 1.48×10³), and standard deviation (40.54, 38.47) obtained from the amplitude variations of photopyroelectric signals in the case of the histogram approach. For the kernel density estimator, seeds can be differentiated in terms of the kernel bandwidth or smoothing constant h of 9.85 and 6.09 for floury and crystalline seeds, respectively.
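
    The kernel density estimator with an explicit smoothing constant h is easy to state. Below is a minimal sketch using the reported floury-seed mean, standard deviation, and bandwidth as toy inputs; the paper's efficiency analysis and bandwidth selection are not reproduced.

      import numpy as np

      def gaussian_kde(samples, grid, h):
          # Kernel density estimate with a Gaussian kernel and bandwidth h:
          # f_hat(x) = (1 / (n h)) * sum_i phi((x - x_i) / h).
          u = (grid[:, None] - samples[None, :]) / h
          return (np.exp(-0.5 * u**2).sum(axis=1)
                  / (len(samples) * h * np.sqrt(2.0 * np.pi)))

      rng = np.random.default_rng(6)
      amplitudes = rng.normal(93.11, 40.54, 500)  # floury-like toy amplitudes
      grid = np.linspace(amplitudes.min(), amplitudes.max(), 200)
      density = gaussian_kde(amplitudes, grid, h=9.85)  # h reported for floury
      print(density.sum() * (grid[1] - grid[0]))  # integrates to ~1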

  15. Heat kernel asymptotic expansions for the Heisenberg sub-Laplacian and the Grushin operator

    PubMed Central

    Chang, Der-Chen; Li, Yutian

    2015-01-01

    The sub-Laplacian on the Heisenberg group and the Grushin operator are typical examples of sub-elliptic operators. Their heat kernels are both given in the form of Laplace-type integrals. By using Laplace's method, the method of stationary phase and the method of steepest descent, we derive the small-time asymptotic expansions for these heat kernels, which are related to the geodesic structure of the induced geometries. PMID:25792966

  16. Excitons in solids with time-dependent density-functional theory: the bootstrap kernel and beyond

    NASA Astrophysics Data System (ADS)

    Byun, Young-Moo; Yang, Zeng-Hui; Ullrich, Carsten

    Time-dependent density-functional theory (TDDFT) is an efficient method to describe the optical properties of solids. Lately, a series of bootstrap-type exchange-correlation (xc) kernels have been reported to produce accurate excitons in solids, but different bootstrap-type kernels exist in the literature, with mixed results. In this presentation, we reveal the origin of the confusion and show a new empirical TDDFT xc kernel to compute excitonic properties of semiconductors and insulators efficiently and accurately. Our method can be used for high-throughput screening calculations and large unit cell calculations. Work supported by NSF Grant DMR-1408904.

  17. Spline-Based Smoothing of Airfoil Curvatures

    NASA Technical Reports Server (NTRS)

    Li, W.; Krist, S.

    2008-01-01

    Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS, as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting error tolerance. CFACS has been
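
    The deviation-from-smoothness measure described above, the sum of squared jumps in the third derivative of a cubic-spline interpolant, can be sketched with SciPy; the constrained fitting itself is not shown, and the airfoil-like ordinates below are toy data.

      import numpy as np
      from scipy.interpolate import CubicSpline

      def third_derivative_jump_measure(x, y):
          # Sum of squared jumps in the third derivative of the interpolating
          # cubic spline at the interior knots (the CFACS smoothness measure).
          cs = CubicSpline(x, y)
          d3 = 6.0 * cs.c[0]  # piecewise-constant third derivative per segment
          return float(np.sum(np.diff(d3) ** 2))

      # Toy airfoil-like upper-surface ordinates; the noisy surface scores
      # far worse under the measure.
      x = np.linspace(0.0, 1.0, 30)
      smooth = 0.12 * np.sqrt(x) * (1.0 - x)
      noisy = smooth + 1e-4 * np.random.default_rng(7).normal(size=x.size)
      print(third_derivative_jump_measure(x, smooth),
            third_derivative_jump_measure(x, noisy))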

  18. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  19. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...

  20. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...

  1. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  2. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  3. Kernel estimation for robust motion deblurring of noisy and blurry images

    NASA Astrophysics Data System (ADS)

    Sun, Shijie; Zhao, Huaici; Li, Bo; Hao, Mingguo; Lv, Jinfeng

    2016-05-01

    Most state-of-the-art single image blind deblurring techniques are still sensitive to image noise, leading to serious performance degradation in their blur kernel estimation when the input image noise increases. We found that reliable kernel estimation could not be given by directly using denoising and existing deblurring algorithms in many cases. We focus on how to estimate a good blur kernel from a noisy blurred image via using the image structure. First, we applied denoising as a preprocess to remove the input image noise and then computed salient image structure of the denoised result based on the total variation model. We also applied a gradient selection method to remove those salient edges that have a possible adverse effect on blur kernel estimation. Next, we adopted a two-phase estimation strategy to obtain higher quality blur kernel estimation by jointly applying kernel estimation from salient image structure and iterative support detection (ISD) kernel refinement. Finally, we used the nonblind deconvolution method based on sparse prior knowledge to restore the latent image. Extensive experiments testify to the superiority of the proposed method over state-of-the-art algorithms, both qualitatively and quantitatively.

  4. Analytic elements of smooth shapes

    NASA Astrophysics Data System (ADS)

    Strack, Otto D. L.; Nevison, Patrick R.

    2015-10-01

    We present a method for producing analytic elements of a smooth shape, obtained using conformal mapping. Applications are presented for a case of impermeable analytic elements as well as for head-specified ones. The mathematical operations necessary to use the elements in practical problems can be carried out before modeling of flow problems begins. A catalog of shapes, along with pre-determined coefficients, could be established on the basis of the approach presented here, making applications in the field straightforward.

  5. Bergman Kernel from Path Integral

    NASA Astrophysics Data System (ADS)

    Douglas, Michael R.; Klevtsov, Semyon

    2010-01-01

    We rederive the expansion of the Bergman kernel on Kähler manifolds developed by Tian, Yau, Zelditch, Lu and Catlin, using path integral and perturbation theory, and generalize it to supersymmetric quantum mechanics. One physics interpretation of this result is as an expansion of the projector of wave functions on the lowest Landau level, in the special case that the magnetic field is proportional to the Kähler form. This is relevant for the quantum Hall effect in curved space, and for its higher dimensional generalizations. Other applications include the theory of coherent states, the study of balanced metrics, noncommutative field theory, and a conjecture on metrics in black hole backgrounds discussed in [24]. We give a short overview of these various topics. From a conceptual point of view, this expansion is noteworthy as it is a geometric expansion, somewhat similar to the DeWitt-Seeley-Gilkey et al short time expansion for the heat kernel, but in this case describing the long time limit, without depending on supersymmetry.

  6. Smoothing analysis of HLSII storage ring magnets

    NASA Astrophysics Data System (ADS)

    Wang, Wei; He, Xiao-Ye; Tang, Zheng; Yao, Qiu-Yang

    2016-12-01

    Hefei Light Source (HLS) has been upgraded to improve the quality and stability of the synchrotron light, and the new facility is named HLSII. However, a final accurate adjustment is required to smooth the beam orbit after the initial installation and alignment of the magnets. We implement a reliable smoothing method for the beam orbit of the HLSII storage ring. In addition to greatly smoothing and stabilizing the beam orbit, this method also doubles the work efficiency and significantly reduces the number of magnets adjusted and the range of the adjustments. Supported by National Natural Science Foundation of China (11275192) and the Upgrade Project of Hefei Light Source

  7. Learning molecular energies using localized graph kernels

    NASA Astrophysics Data System (ADS)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
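
    A minimal sketch of a geometric random-walk kernel between two adjacency matrices is given below; GRAPE's atomic-environment encoding and its particular kernel variant are not reproduced, and the decay parameter lam is an illustrative choice that must keep the series convergent.

      import numpy as np

      def random_walk_kernel(A1, A2, lam=0.05):
          # Geometric random-walk kernel on the direct product graph:
          # k = 1^T (I - lam * A1 (x) A2)^{-1} 1, with (x) the Kronecker
          # product; lam must satisfy lam < 1 / (rho(A1) * rho(A2)).
          Ax = np.kron(A1, A2)
          ones = np.ones(Ax.shape[0])
          return ones @ np.linalg.solve(np.eye(Ax.shape[0]) - lam * Ax, ones)

      # Two toy local environments encoded as small adjacency matrices.
      A_triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
      A_path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
      print(random_walk_kernel(A_triangle, A_triangle),
            random_walk_kernel(A_triangle, A_path))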

  8. An Adaptive Genetic Association Test Using Double Kernel Machines.

    PubMed

    Zhan, Xiang; Epstein, Michael P; Ghosh, Debashis

    2015-10-01

    Recently, gene set-based approaches have become very popular in gene expression profiling studies for assessing how genetic variants are related to disease outcomes. Since most genes are not differentially expressed, existing pathway tests considering all genes within a pathway suffer from considerable noise and power loss. Moreover, for a differentially expressed pathway, it is of interest to select important genes that drive the effect of the pathway. In this article, we propose an adaptive association test using double kernel machines (DKM), which can both select important genes within the pathway as well as test for the overall genetic pathway effect. This DKM procedure first uses the garrote kernel machines (GKM) test for the purposes of subset selection and then the least squares kernel machine (LSKM) test for testing the effect of the subset of genes. An appealing feature of the kernel machine framework is that it can provide a flexible and unified method for multi-dimensional modeling of the genetic pathway effect allowing for both parametric and nonparametric components. This DKM approach is illustrated with application to simulated data as well as to data from a neuroimaging genetics study.

  9. Classification of Microarray Data Using Kernel Fuzzy Inference System.

    PubMed

    Kumar, Mukesh; Kumar Rath, Santanu

    2014-01-01

    The DNA microarray classification technique has gained popularity in both research and practice. In real data analysis, such as microarray data, the dataset contains a huge number of insignificant and irrelevant features that tend to obscure the useful information. The selected features are generally those with high significance and high relevance to the classes, since they determine the classification of samples into their respective classes. In this paper, a kernel fuzzy inference system (K-FIS) algorithm is applied to classify microarray data (leukemia) using the t-test as the feature selection method. Kernel functions are used to map the original data points into a higher-dimensional (possibly infinite-dimensional) feature space defined by a (usually nonlinear) function ϕ through a mathematical process called the kernel trick. This paper also presents a comparative study of classification using K-FIS along with a support vector machine (SVM) for different sets of features (genes). Performance parameters available in the literature, such as precision, recall, specificity, F-measure, ROC curve, and accuracy, are considered to analyze the efficiency of the classification model. The results show that the K-FIS model obtains results similar to the SVM model, an indication that the performance of the proposed approach relies on the kernel function.
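
    The t-test feature-selection step can be sketched with SciPy as below; the matrix dimensions mimic the leukemia data, but the expression values are synthetic and the cutoff k is illustrative.

      import numpy as np
      from scipy.stats import ttest_ind

      def t_test_select(X, y, k=50):
          # Rank genes by the two-sample t statistic between the classes and
          # keep the k most significant ones.
          t, _ = ttest_ind(X[y == 0], X[y == 1], axis=0)
          top = np.argsort(-np.abs(t))[:k]
          return X[:, top], top

      rng = np.random.default_rng(8)
      X = rng.normal(size=(72, 7129))       # leukemia-sized toy expression data
      y = (np.arange(72) < 47).astype(int)  # ALL/AML-like class split
      X[:, :20] += 2.0 * y[:, None]         # plant 20 informative genes
      X_sel, idx = t_test_select(X, y, k=50)
      print(sorted(idx[:20]))  # planted genes should dominate the top ranks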

  10. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
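
    The stationary (non-adaptive) superposition step can be sketched as a single convolution carried out in Fourier space, as in the fASKS linearization; the thickness-adaptive kernels of ASKS are not modeled, and the kernel width and amplitude below are illustrative assumptions.

      import numpy as np

      def scatter_estimate_fft(projection, kernel):
          # Model the scatter field as the projection convolved with a scatter
          # point-spread kernel, with the convolution done via the FFT.
          P = np.fft.rfft2(projection)
          K = np.fft.rfft2(np.fft.ifftshift(kernel))  # kernel centered at origin
          return np.fft.irfft2(P * K, s=projection.shape)

      # Toy projection and a broad Gaussian scatter kernel; the 0.4 amplitude
      # is an assumed scatter-to-primary scaling.
      ny, nx = 128, 128
      yy, xx = np.mgrid[:ny, :nx]
      kernel = np.exp(-((yy - ny // 2)**2 + (xx - nx // 2)**2) / (2 * 20.0**2))
      kernel *= 0.4 / kernel.sum()
      projection = np.ones((ny, nx))
      projection[32:96, 32:96] = 5.0
      corrected = projection - scatter_estimate_fft(projection, kernel)
      print(projection[64, 64], corrected[64, 64])  # scatter removed at center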

  11. On computing the trace of the kernel of the homogeneous Fredholm's equation

    SciTech Connect

    Velazquez-Arcos, J. M.; Vargas, C. A.; Fernandez-Chapou, J. L.; Salas-Brito, A. L.

    2008-10-15

    A method for computing the trace of the kernel of the homogeneous Fredholm's equation for resonant states arising from nonlocal potentials is proposed. We show that this integral formulation is convergent.

  12. Estimation of Chlorophyll-a Concentration in Turbid Lake Using Spectral Smoothing and Derivative Analysis

    PubMed Central

    Cheng, Chunmei; Wei, Yuchun; Sun, Xiaopeng; Zhou, Yu

    2013-01-01

    As a major indicator of lake eutrophication that is harmful to human health, the chlorophyll-a concentration (Chl-a) is often estimated using remote sensing, and one method often used is the spectral derivative algorithm. Direct derivative processing may magnify the noise, thus making spectral smoothing necessary. This study aims to use spectral smoothing as a pretreatment and to test the applicability of the spectral derivative algorithm for Chl-a estimation in Taihu Lake, China, based on the in situ hyperspectral reflectance. Data from July–August of 2004 were used to build the model, and data from July–August of 2005 and March of 2011 were used to validate the model, with Chl-a ranges of 5.0–156.0 mg/m3, 4.0–98.0 mg/m3 and 11.4–35.8 mg/m3, respectively. The derivative model was first used and then compared with the band ratio, three-band and four-band models. The results show that the first-order derivative model at 699 nm had satisfactory accuracy (R2 = 0.75) after kernel regression smoothing and had smaller validation root mean square errors of 15.21 mg/m3 in 2005 and 5.85 mg/m3 in 2011. The distribution map of Chl-a in Taihu Lake based on the HJ1/HSI image showed the actual distribution trend, indicating that the first-order derivative model after spectral smoothing can be used for Chl-a estimation in turbid lakes. PMID:23880727

  13. Estimation of chlorophyll-a concentration in Turbid Lake using spectral smoothing and derivative analysis.

    PubMed

    Cheng, Chunmei; Wei, Yuchun; Sun, Xiaopeng; Zhou, Yu

    2013-07-16

    As a major indicator of lake eutrophication that is harmful to human health, the chlorophyll-a concentration (Chl-a) is often estimated using remote sensing, and one method often used is the spectral derivative algorithm. Direct derivative processing may magnify the noise, thus making spectral smoothing necessary. This study aims to use spectral smoothing as a pretreatment and to test the applicability of the spectral derivative algorithm for Chl-a estimation in Taihu Lake, China, based on the in situ hyperspectral reflectance. Data from July-August of 2004 were used to build the model, and data from July-August of 2005 and March of 2011 were used to validate the model, with Chl-a ranges of 5.0-156.0 mg/m3, 4.0-98.0 mg/m3 and 11.4-35.8 mg/m3, respectively. The derivative model was first used and then compared with the band ratio, three-band and four-band models. The results show that the first-order derivative model at 699 nm had satisfactory accuracy (R2 = 0.75) after kernel regression smoothing and had smaller validation root mean square errors of 15.21 mg/m3 in 2005 and 5.85 mg/m3 in 2011. The distribution map of Chl-a in Taihu Lake based on the HJ1/HSI image showed the actual distribution trend, indicating that the first-order derivative model after spectral smoothing can be used for Chl-a estimation in turbid lakes.
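
    The smoothing-then-differentiation pipeline can be sketched with a Nadaraya-Watson Gaussian kernel smoother followed by a finite-difference first derivative read off at 699 nm; the spectrum and bandwidth below are synthetic stand-ins, not the Taihu Lake reflectance data.

      import numpy as np

      def nadaraya_watson(wavelengths, reflectance, h):
          # Gaussian kernel regression smoothing of a reflectance spectrum.
          u = (wavelengths[:, None] - wavelengths[None, :]) / h
          W = np.exp(-0.5 * u**2)
          return (W @ reflectance) / W.sum(axis=1)

      rng = np.random.default_rng(9)
      wl = np.arange(400.0, 900.0, 1.0)  # wavelength grid in nm
      spectrum = (np.exp(-((wl - 700.0) / 60.0)**2)
                  + 0.01 * rng.normal(size=wl.size))

      smoothed = nadaraya_watson(wl, spectrum, h=5.0)
      first_deriv = np.gradient(smoothed, wl)  # spectral first derivative

      # Band value used by the abstract's model: first derivative at 699 nm.
      print(first_deriv[np.argmin(np.abs(wl - 699.0))])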

  14. The Optimal Degree of Smoothing in Equipercentile Equating with Postsmoothing.

    ERIC Educational Resources Information Center

    Zeng, Lingjia

    1995-01-01

    The effects of different degrees of smoothing on the results of equipercentile equating in the random groups design, using a postsmoothing method based on cubic splines, were investigated, and a computer-based procedure was introduced for selecting a desirable degree of smoothing. Results suggest that no particular degree of smoothing was always optimal.…
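
    In the spirit of that study, the sketch below applies cubic-spline smoothing at several degrees of smoothing; the score scale, the noise, and the scaling of the smoothing factor are illustrative assumptions, not the study's procedure or data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical equipercentile equivalents at each raw-score point, with noise.
scores = np.arange(0, 41, dtype=float)
rng = np.random.default_rng(0)
equivalents = scores + rng.normal(0.0, 0.8, scores.size)

# Cubic-spline postsmoothing; larger s means a heavier degree of smoothing.
for s in (0.1, 1.0, 10.0):
    spline = UnivariateSpline(scores, equivalents, k=3, s=s * scores.size)
    smoothed = spline(scores)   # compare candidates to pick a desirable degree
```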

  15. Fast metabolite identification with Input Output Kernel Regression

    PubMed Central

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-01-01

    Motivation: An important problem in metabolomics is the identification of metabolites from tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the space of molecules. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, consisting of mapping the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307628
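
    A compact sketch of the two phases under simple assumptions (precomputed Gram matrices and candidate sets as arrays; not the authors' implementation): phase 1 is a single regularized matrix inverse, and phase 2 scores each candidate molecule through the output kernel, so the argmax over a structure database plays the role of the preimage.

```python
import numpy as np

def iokr_fit(Kx_train, lam=1.0):
    """Phase 1: kernel ridge regression from the input (spectra) kernel to
    the output feature space reduces to C = (Kx + lam * I)^-1."""
    return np.linalg.inv(Kx_train + lam * np.eye(Kx_train.shape[0]))

def iokr_predict(C, kx_test, Ky_cand_train):
    """Phase 2 (preimage): score candidates by output-kernel similarity to
    the predicted output feature vector. Ky_cand_train[c, i] = k_y(cand_c, y_i)
    and kx_test[i] = k_x(x_test, x_i). Returns the best candidate's index."""
    scores = Ky_cand_train @ (C @ kx_test)
    return int(np.argmax(scores)), scores
```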

  16. Smoothly deformed light

    NASA Technical Reports Server (NTRS)

    Stenholm, Stig

    1993-01-01

    A single mode cavity is deformed smoothly to change its electromagnetic eigenfrequency. The system is modeled as a simple harmonic oscillator with a varying period. The Wigner function of the problem is obtained exactly by starting with a squeezed initial state. The result is evaluated for a linear change of the cavity length. The approach to the adiabatic limit is investigated. The maximum squeezing is found to occur for a smooth change lasting only a fraction of the oscillation period. However, only a factor of two improvement over the adiabatic result proves to be possible. The sudden limit cannot be investigated meaningfully within the model.

  17. Multiscale Support Vector Learning With Projection Operator Wavelet Kernel for Nonlinear Dynamical System Identification.

    PubMed

    Lu, Zhao; Sun, Jing; Butts, Kenneth

    2016-02-03

    A giant leap has been made in the past couple of decades with the introduction of kernel-based learning as a mainstay for designing effective nonlinear computational learning algorithms. In view of the geometric interpretation of conditional expectation and the ubiquity of multiscale characteristics in highly complex nonlinear dynamic systems [1]-[3], this paper presents a new orthogonal projection operator wavelet kernel, aiming at developing an efficient computational learning approach for nonlinear dynamical system identification. In the framework of multiresolution analysis, the proposed projection operator wavelet kernel can perform multiscale, multidimensional learning to estimate complex dependencies. The special advantage of the projection operator wavelet kernel developed in this paper lies in the fact that it has a closed-form expression, which greatly facilitates its application in kernel learning. To the best of our knowledge, it is the first closed-form orthogonal projection wavelet kernel reported in the literature. It provides a link between grid-based wavelets and mesh-free kernel-based methods. Simulation studies for identifying the parallel models of two benchmark nonlinear dynamical systems confirm its superiority in model accuracy and sparsity.
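
    The paper's closed-form projection operator wavelet kernel is not reproduced here; as a generic illustration of the wavelet-kernel idea it builds on, the sketch below implements the classic translation-invariant wavelet kernel with mother wavelet h(u) = cos(1.75u) exp(-u^2/2), where the dilation a supplies the multiscale knob.

```python
import numpy as np

def wavelet_kernel(x, z, a=1.0):
    """Generic translation-invariant wavelet kernel
    k(x, z) = prod_i h((x_i - z_i) / a),  h(u) = cos(1.75 u) exp(-u**2 / 2).
    Illustrative only; NOT the projection operator wavelet kernel of the paper."""
    u = (np.asarray(x, float) - np.asarray(z, float)) / a
    return float(np.prod(np.cos(1.75 * u) * np.exp(-u**2 / 2.0)))

# Usage: a Gram matrix over a toy dataset for kernel-based identification.
X = np.random.default_rng(1).normal(size=(10, 3))
G = np.array([[wavelet_kernel(xi, xj, a=2.0) for xj in X] for xi in X])
```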

  18. Handwritten Chinese character recognition based on SVM with hybrid kernel function

    NASA Astrophysics Data System (ADS)

    Sun, Limin

    2005-10-01

    Offline handwritten Chinese character recognition (HCCR) is one means of rapid text input, and it is in great demand in file recognition, form processing, machine translation, and office automation. However, practical HCCR remains a difficult task because of large stroke variation, irregular writing, the absence of stroke-order information, and other factors. An efficient classifier is essential for raising the offline HCCR recognition rate. Support vector machines offer a theoretically well-founded approach to the automated learning of pattern classifiers from labeled data sets. As is well known, however, the performance of SVMs depends largely on the kernel function: different kernel functions produce different SVMs, which may differ in performance, and there is no theory for choosing a good kernel function for a given practical problem. In this paper we make use of the basic properties of Mercer kernels to construct a hybrid kernel from existing common kernels and find the unknown parameters of the hybrid kernel in a data-dependent way by minimizing the upper bound of the VC dimension of the set of functions. Our experimental results show that the proposed method is efficient compared with other classifiers for handwritten Chinese character recognition.
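
    Mercer kernels are closed under nonnegative weighted sums and products, which is what allows a hybrid kernel to be assembled from common kernels; the concrete combination and parameter values below are illustrative assumptions, with the weight playing the role of one of the data-dependent parameters the authors tune.

```python
import numpy as np

def hybrid_kernel(x, z, lam=0.5, gamma=0.1, degree=3, coef0=1.0):
    """Convex combination of an RBF and a polynomial kernel; any such
    nonnegative combination of Mercer kernels is again a Mercer kernel.
    lam, gamma, degree would be chosen data-dependently (the paper minimizes
    an upper bound on the VC dimension)."""
    x, z = np.asarray(x, float), np.asarray(z, float)
    rbf = np.exp(-gamma * np.sum((x - z) ** 2))
    poly = (x @ z + coef0) ** degree
    return lam * rbf + (1.0 - lam) * poly
```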

  19. SU-E-T-154: Calculation of Tissue Dose Point Kernels Using GATE Monte Carlo Simulation Toolkit to Compare with Water Dose Point Kernel

    SciTech Connect

    Khazaee, M; Asl, A Kamali; Geramifar, P

    2015-06-15

    Purpose: The objective of this study was to assess the use of the water dose point kernel (DPK) instead of tissue dose point kernels in convolution algorithms. To the best of our knowledge, when producing a 3D distribution of absorbed dose from a 3D distribution of activity, the human body is considered equivalent to water; as a result, tissue variations are not considered in patient-specific dosimetry. Methods: In this study GATE v7.0 was used to calculate tissue dose point kernels. The beta-emitting radionuclides considered in this simulation were Y-90, Lu-177, and P-32, which are commonly used in nuclear medicine. The comparison was performed for the dose point kernels of adipose, bone, breast, heart, intestine, kidney, liver, lung, and spleen versus the water dose point kernel. Results: To validate the simulation, the results for the 90Y DPK in water were compared with the published results of Papadimitroulas et al (Med. Phys., 2012). The results showed that the mean differences between the water DPK and the other soft-tissue DPKs range between 0.6% and 1.96% for 90Y, except for lung and bone, where the observed discrepancies are 6.3% and 12.19%, respectively. The range of DPK differences for 32P is between 1.74% for breast and 18.85% for bone. For 177Lu, the largest difference, 16.91%, belongs to bone; among the other soft tissues, the smallest discrepancy, 1.68%, is observed in kidney. Conclusion: In all tissues except lung and bone, the GATE results for the dose point kernel were comparable to the water dose point kernel, which demonstrates the appropriateness of applying the water dose point kernel instead of soft-tissue kernels in the field of nuclear medicine.

  20. Smoothed Particle Hydrodynamic Simulator

    SciTech Connect

    2016-10-05

    This code is a highly modular framework for developing smoothed particle hydrodynamic (SPH) simulations running on parallel platforms. The compartmentalization of the code allows for rapid development of new SPH applications and modifications of existing algorithms. The compartmentalization also allows changes in one part of the code used by many applications to instantly be made available to all applications.

  1. Approximation of Bivariate Functions via Smooth Extensions

    PubMed Central

    Zhang, Zhihua

    2014-01-01

    For a smooth bivariate function defined on a general domain with arbitrary shape, it is difficult to do Fourier approximation or wavelet approximation. In order to solve these problems, in this paper, we give an extension of the bivariate function on a general domain with arbitrary shape to a smooth, periodic function in the whole space or to a smooth, compactly supported function in the whole space. These smooth extensions have simple and clear representations which are determined by this bivariate function and some polynomials. After that, we expand the smooth, periodic function into a Fourier series or a periodic wavelet series, or we expand the smooth, compactly supported function into a wavelet series. Since our extensions are smooth, the obtained Fourier coefficients or wavelet coefficients decay very fast. Since our extension tools are polynomials, the moment theorem shows that many wavelet coefficients vanish. From this, with the help of well-known approximation theorems, our extension methods yield Fourier and wavelet approximations of the bivariate function on the general domain with small error. PMID:24683316
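
    The fact the construction leans on, that smoother periodic extensions have faster-decaying Fourier coefficients, is easy to check numerically. The one-dimensional comparison below illustrates only that principle, not the paper's bivariate extension.

```python
import numpy as np

n = 1024
t = np.arange(n) / n

# Periodizing f(t) = t leaves a jump at the period boundary; the second
# function is C-infinity when repeated periodically.
rough = t
smooth = np.sin(2 * np.pi * t) * np.exp(np.cos(2 * np.pi * t))

for name, f in (("rough", rough), ("smooth", smooth)):
    c = np.abs(np.fft.rfft(f)) / n
    print(name, c[1], c[10], c[100])   # rough decays ~1/k; smooth decays fast
```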

  2. Quasi-kernel polynomials and convergence results for quasi-minimal residual iterations

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1992-01-01

    Recently, Freund and Nachtigal have proposed a novel polynomial-based iteration, the quasi-minimal residual algorithm (QMR), for solving general nonsingular non-Hermitian linear systems. Motivated by the QMR method, we have introduced the general concept of quasi-kernel polynomials, and we have shown that the QMR algorithm is based on a particular instance of quasi-kernel polynomials. In this paper, we continue our study of quasi-kernel polynomials. In particular, we derive bounds for the norms of quasi-kernel polynomials. These results are then applied to obtain convergence theorems both for the QMR method and for a transpose-free variant of QMR, the TFQMR algorithm.
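
    QMR is available in standard sparse solvers, so a minimal usage sketch on a nonsymmetric system is enough to connect the theory to practice (the test matrix here is arbitrary):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import qmr

# A nonsymmetric tridiagonal system, the non-Hermitian setting QMR targets.
n = 200
A = diags([-1.0, 2.0, -0.5], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = qmr(A, b)                      # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))   # residual norm of the returned iterate
```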

  3. Phenolic compounds and antioxidant activity of kernels and shells of Mexican pecan (Carya illinoinensis).

    PubMed

    de la Rosa, Laura A; Alvarez-Parrilla, Emilio; Shahidi, Fereidoon

    2011-01-12

    The phenolic composition and antioxidant activity of pecan kernels and shells cultivated in three regions of the state of Chihuahua, Mexico, were analyzed. High concentrations of total extractable phenolics, flavonoids, and proanthocyanidins were found in kernels, and 5-20-fold higher concentrations were found in shells. Their concentrations were significantly affected by the growing region. Antioxidant activity was evaluated by ORAC, DPPH•, HO•, and ABTS•+ scavenging (TAC) methods. Antioxidant activity was strongly correlated with the concentrations of phenolic compounds. A strong correlation existed among the results obtained using these four methods. Five individual phenolic compounds were positively identified and quantified in kernels: ellagic, gallic, protocatechuic, and p-hydroxybenzoic acids and catechin. Only ellagic and gallic acids could be identified in shells. Seven phenolic compounds were tentatively identified in kernels by means of MS and UV spectral comparison, namely, protocatechuic aldehyde, (epi)gallocatechin, one gallic acid-glucose conjugate, three ellagic acid derivatives, and valoneic acid dilactone.

  4. Reduction of Aflatoxins in Apricot Kernels by Electronic and Manual Color Sorting

    PubMed Central

    Zivoli, Rosanna; Gambacorta, Lucia; Piemontese, Luca; Solfrizzo, Michele

    2016-01-01

    The efficacy of color sorting on reducing aflatoxin levels in shelled apricot kernels was assessed. Naturally-contaminated kernels were submitted to an electronic optical sorter or blanched, peeled, and manually sorted to visually identify and sort discolored kernels (dark and spotted) from healthy ones. The samples obtained from the two sorting approaches were ground, homogenized, and analysed by HPLC-FLD for their aflatoxin content. A mass balance approach was used to measure the distribution of aflatoxins in the collected fractions. Aflatoxins B1 and B2 were identified and quantitated in all collected fractions at levels ranging from 1.7 to 22,451.5 µg/kg of AFB1 + AFB2, whereas AFG1 and AFG2 were not detected. Excellent results were obtained by manual sorting of peeled kernels since the removal of discolored kernels (2.6%–19.9% of total peeled kernels) removed 97.3%–99.5% of total aflatoxins. The combination of peeling and visual/manual separation of discolored kernels is a feasible strategy to remove 97%–99% of aflatoxins accumulated in naturally-contaminated samples. The electronic optical sorter gave highly variable results since the amount of AFB1 + AFB2 measured in rejected fractions (15%–18% of total kernels) ranged from 13% to 59% of total aflatoxins. An improved immunoaffinity-based HPLC-FLD method having low limits of detection for the four aflatoxins (0.01–0.05 µg/kg) was developed and used to monitor the occurrence of aflatoxins in 47 commercial products containing apricot kernels and/or almonds commercialized in Italy. Low aflatoxin levels were found in 38% of the tested samples and ranged from 0.06 to 1.50 μg/kg for AFB1 and from 0.06 to 1.79 μg/kg for total aflatoxins. PMID:26797635

  5. Reduction of Aflatoxins in Apricot Kernels by Electronic and Manual Color Sorting.

    PubMed

    Zivoli, Rosanna; Gambacorta, Lucia; Piemontese, Luca; Solfrizzo, Michele

    2016-01-19

    The efficacy of color sorting on reducing aflatoxin levels in shelled apricot kernels was assessed. Naturally-contaminated kernels were submitted to an electronic optical sorter or blanched, peeled, and manually sorted to visually identify and sort discolored kernels (dark and spotted) from healthy ones. The samples obtained from the two sorting approaches were ground, homogenized, and analysed by HPLC-FLD for their aflatoxin content. A mass balance approach was used to measure the distribution of aflatoxins in the collected fractions. Aflatoxins B₁ and B₂ were identified and quantitated in all collected fractions at levels ranging from 1.7 to 22,451.5 µg/kg of AFB₁ + AFB₂, whereas AFG₁ and AFG₂ were not detected. Excellent results were obtained by manual sorting of peeled kernels since the removal of discolored kernels (2.6%-19.9% of total peeled kernels) removed 97.3%-99.5% of total aflatoxins. The combination of peeling and visual/manual separation of discolored kernels is a feasible strategy to remove 97%-99% of aflatoxins accumulated in naturally-contaminated samples. The electronic optical sorter gave highly variable results since the amount of AFB₁ + AFB₂ measured in rejected fractions (15%-18% of total kernels) ranged from 13% to 59% of total aflatoxins. An improved immunoaffinity-based HPLC-FLD method having low limits of detection for the four aflatoxins (0.01-0.05 µg/kg) was developed and used to monitor the occurrence of aflatoxins in 47 commercial products containing apricot kernels and/or almonds commercialized in Italy. Low aflatoxin levels were found in 38% of the tested samples and ranged from 0.06 to 1.50 μg/kg for AFB₁ and from 0.06 to 1.79 μg/kg for total aflatoxins.
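
    The mass-balance arithmetic behind those removal percentages is simple to make explicit; the numbers below are hypothetical, chosen only to mimic the reported magnitudes.

```python
def toxin_removed_fraction(m_rej, c_rej, m_acc, c_acc):
    """Fraction of total aflatoxin leaving with the rejected stream.
    m_* are mass fractions of the sorted streams; c_* their aflatoxin
    concentrations (ug/kg)."""
    rejected, accepted = m_rej * c_rej, m_acc * c_acc
    return rejected / (rejected + accepted)

# Discarding 10% of kernel mass at 20,000 ug/kg while the accepted 90% sits
# at 25 ug/kg removes ~98.9% of the toxin (illustrative numbers only).
print(toxin_removed_fraction(0.10, 20000.0, 0.90, 25.0))
```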

  6. Robust visual tracking via adaptive kernelized correlation filter

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Wang, Desheng; Liao, Qingmin

    2016-10-01

    Correlation filter based trackers have proved to be very efficient and robust in object tracking, with performance competitive with state-of-the-art trackers. In this paper, we propose a novel object tracking method named Adaptive Kernelized Correlation Filter (AKCF) that incorporates the Kernelized Correlation Filter (KCF) with the Structured Output Support Vector Machine (SOSVM) learning method in a collaborative and adaptive way, which can effectively handle severe object appearance changes at low computational cost. AKCF works by dynamically adjusting the learning rate of KCF and reversely verifying the intermediate tracking result with an online SOSVM classifier. Meanwhile, we bring Color Names features into this formulation to boost performance, owing to the rich feature information they encode. Experimental results on several challenging benchmark datasets reveal that our approach outperforms numerous state-of-the-art trackers.
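
    The abstract does not spell out AKCF's exact rule for adjusting the learning rate; the sketch below is one plausible reading under stated assumptions, scaling the standard KCF interpolation rate with a peak-to-sidelobe-ratio confidence measure (the PSR proxy and its thresholds are our assumptions, not the paper's).

```python
import numpy as np

def psr(response):
    """Peak-to-sidelobe ratio of a correlation response map, a common
    confidence proxy for correlation filter trackers."""
    peak = response.max()
    side = np.delete(response.ravel(), response.argmax())
    return (peak - side.mean()) / (side.std() + 1e-12)

def adaptive_update(model, new_model, response, eta_lo=0.005, eta_hi=0.05,
                    psr_lo=4.0, psr_hi=10.0):
    """Interpolate the filter model with a learning rate that grows with
    tracking confidence; low confidence nearly freezes the model."""
    conf = np.clip((psr(response) - psr_lo) / (psr_hi - psr_lo), 0.0, 1.0)
    eta = eta_lo + conf * (eta_hi - eta_lo)
    return (1.0 - eta) * model + eta * new_model
```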

  7. Relationship between processing score and kernel-fraction particle size in whole-plant corn silage.

    PubMed

    Dias Junior, G S; Ferraretto, L F; Salvati, G G S; de Resende, L C; Hoffman, P C; Pereira, M N; Shaver, R D

    2016-04-01

    Kernel processing increases starch digestibility in whole-plant corn silage (WPCS). Corn silage processing score (CSPS), the percentage of starch passing through a 4.75-mm sieve, is widely used to assess degree of kernel breakage in WPCS. However, the geometric mean particle size (GMPS) of the kernel fraction that passes through the 4.75-mm sieve has not been well described. Therefore, the objectives of this study were (1) to evaluate particle size distribution and digestibility of kernels cut in varied particle sizes; (2) to propose a method to measure GMPS in WPCS kernels; and (3) to evaluate the relationship between CSPS and GMPS of the kernel fraction in WPCS. Composite samples of unfermented, dried kernels from 110 corn hybrids commonly used for silage production were kept whole (WH) or manually cut in 2, 4, 8, 16, 32, or 64 pieces (2P, 4P, 8P, 16P, 32P, and 64P, respectively). Dry sieving to determine GMPS, surface area, and particle size distribution using 9 sieves with nominal square apertures of 9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, and 0.59 mm and pan, as well as ruminal in situ dry matter (DM) digestibilities were performed for each kernel particle number treatment. Incubation times were 0, 3, 6, 12, and 24 h. The ruminal in situ DM disappearance of unfermented kernels increased with the reduction in particle size of corn kernels. Kernels kept whole had the lowest ruminal DM disappearance at all time points, with a maximum DM disappearance of 6.9% at 24 h, and the greatest disappearance was observed for 64P, followed by 32P and 16P. Samples of WPCS (n=80) from 3 studies representing varied theoretical length of cut settings and processor types and settings were also evaluated. Each WPCS sample was divided in two and then dried at 60 °C for 48 h. The CSPS was determined in duplicate on 1 of the split samples, whereas on the other split sample the kernel and stover fractions were separated using a hydrodynamic separation procedure. After separation, the
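
    For the GMPS computation from dry sieving, the standard log-based approach assigns each retained fraction the geometric mean of the aperture it passed and the aperture it was retained on. The sketch below follows that recipe with the study's apertures; the fraction masses and the nominal pan size are assumptions for illustration.

```python
import numpy as np

def gmps(apertures_mm, mass_g):
    """Geometric mean particle size from dry sieving (log-based method).
    apertures_mm: sieve openings listed top-down, with a nominal pan size
    appended; mass_g[i]: mass passing apertures_mm[i] and retained on
    apertures_mm[i + 1]."""
    d = np.asarray(apertures_mm, float)
    m = np.asarray(mass_g, float)
    d_mid = np.sqrt(d[:-1] * d[1:])             # geometric mid-size per fraction
    return float(np.exp(np.sum(m * np.log(d_mid)) / m.sum()))

# Apertures from the study (mm); the 0.295 mm pan size and the masses are
# hypothetical values for illustration.
apertures = [9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, 0.59, 0.295]
masses = [0.5, 2.0, 6.0, 12.0, 20.0, 25.0, 18.0, 10.0]   # g per fraction
print(gmps(apertures, masses))
```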

  8. Relationship between cyanogenic compounds in kernels, leaves, and roots of sweet and bitter kernelled almonds.

    PubMed

    Dicenta, F; Martínez-Gómez, P; Grané, N; Martín, M L; León, A; Cánovas, J A; Berenguer, V

    2002-03-27

    The relationship between the levels of cyanogenic compounds (amygdalin and prunasin) in kernels, leaves, and roots of 5 sweet-, 5 slightly bitter-, and 5 bitter-kernelled almond trees was determined. Variability was observed among the genotypes for these compounds. Prunasin was found only in the vegetative part (roots and leaves) for all genotypes tested. Amygdalin was detected only in the kernels, mainly in bitter genotypes. In general, bitter-kernelled genotypes had higher levels of prunasin in their roots than nonbitter ones, but the correlation between cyanogenic compounds in the different parts of plants was not high. While prunasin seems to be present in most almond roots (with variable concentration), only bitter-kernelled genotypes are able to transform it into amygdalin in the kernel. Breeding for prunasin-based resistance to the buprestid beetle Capnodis tenebrionis L. is discussed.

  9. Finite-frequency sensitivity kernels of seismic waves to fault zone structures

    NASA Astrophysics Data System (ADS)

    Allam, A. A.; Tape, C.; Ben-Zion, Y.

    2015-12-01

    We analyse the volumetric sensitivity of fault zone seismic head and trapped waves by constructing finite-frequency sensitivity (Fréchet) kernels for these phases using a suite of idealized and tomographically derived velocity models of fault zones. We first validate numerical calculations by waveform comparisons with analytical results for two simple fault zone models: a vertical bimaterial interface separating two solids of differing elastic properties, and a `vertical sandwich' with a vertical low velocity zone surrounded on both sides by higher velocity media. Establishing numerical accuracy up to 12 Hz, we compute sensitivity kernels for various phases that arise in these and more realistic models. In contrast to direct P body waves, which have little or no sensitivity to the internal fault zone structure, the sensitivity kernels for head waves have sharp peaks with high values near the fault in the faster medium. Surface wave kernels show the broadest spatial distribution of sensitivity, while trapped wave kernels are extremely narrow with sensitivity focused entirely inside the low-velocity fault zone layer. Trapped waves are shown to exhibit sensitivity patterns similar to Love waves, with decreasing width as a function of frequency and multiple Fresnel zones of alternating polarity. In models that include smoothing of the boundaries of the low velocity zone, there is little effect on the trapped wave kernels, which are focused in the central core of the low velocity zone. When the source is located outside a shallow fault zone layer, trapped waves propagate through the surrounding medium with body wave sensitivity before becoming confined. The results provide building blocks for full waveform tomography of fault zone regions combining high-frequency head, trapped, body, and surface waves. Such an imaging approach can constrain fault zone structure across a larger range of scales than has previously been possible.

  10. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  11. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  12. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a...

  13. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a...

  14. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  15. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more...

  16. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...

  17. Kernel Extended Real-Valued Negative Selection Algorithm (KERNSA)

    DTIC Science & Technology

    2013-06-01

  18. Nonequilibrium flows with smooth particle applied mechanics

    SciTech Connect

    Kum, Oyeon

    1995-07-01

    Smooth particle methods are relatively new methods for simulating solid and fluid flows, though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be separately controlled. The gradient algorithm, based on differentiating the smooth particle expression for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector quantities, which require second derivatives in space when Fourier's heat-flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel link to molecular dynamics. For the inviscid Euler equation, with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for the three weighting functions: the B spline, Lucy, and Cusp functions. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Bénard problems, all in the laminar regime, to corresponding highly accurate grid-based numerical solutions of continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number, where grid-based methods fail.
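
    The core smooth particle expressions the thesis builds on are compact; the sketch below evaluates the 1D density sum with the Lucy weighting function, one of the three kernels compared (the particle layout and smoothing length are illustrative choices).

```python
import numpy as np

def lucy_w(r, h):
    """Lucy weighting function in 1D, normalized to unit integral on [-h, h]:
    W(q) = (5 / (4h)) (1 + 3q)(1 - q)^3 for q = |r|/h < 1, else 0."""
    q = np.abs(r) / h
    w = (5.0 / (4.0 * h)) * (1.0 + 3.0 * q) * (1.0 - q) ** 3
    return np.where(q < 1.0, w, 0.0)

def sph_density(x, m, h):
    """Smooth particle density estimate rho_i = sum_j m_j W(x_i - x_j, h)."""
    dx = x[:, None] - x[None, :]
    return (lucy_w(dx, h) * m[None, :]).sum(axis=1)

# Uniformly spaced particles of equal mass recover a near-constant density.
x = np.linspace(0.0, 1.0, 51)
m = np.full_like(x, 1.0 / x.size)
rho = sph_density(x, m, h=0.1)
```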

  19. A Semi-supervised Heat Kernel Pagerank MBO Algorithm for Data Classification

    DTIC Science & Technology

    2016-07-01

  20. Learning Discriminative Stein Kernel for SPD Matrices and Its Applications.

    PubMed

    Zhang, Jianjia; Wang, Lei; Zhou, Luping; Li, Wanqing

    2016-05-01

    Stein kernel (SK) has recently shown promising performance on classifying images represented by symmetric positive definite (SPD) matrices. It evaluates the similarity between two SPD matrices through their eigenvalues. In this paper, we argue that directly using the original eigenvalues may be problematic because: 1) eigenvalue estimation becomes biased when the number of samples is inadequate, which may lead to unreliable kernel evaluation, and 2) more importantly, eigenvalues reflect only the property of an individual SPD matrix. They are not necessarily optimal for computing SK when the goal is to discriminate different classes of SPD matrices. To address the two issues, we propose a discriminative SK (DSK), in which an extra parameter vector is defined to adjust the eigenvalues of input SPD matrices. The optimal parameter values are sought by optimizing a proxy of classification performance. To show the generality of the proposed method, three kernel learning criteria that are commonly used in the literature are employed as a proxy. A comprehensive experimental study is conducted on a variety of image classification tasks to compare the proposed DSK with the original SK and other methods for evaluating the similarity between SPD matrices. The results demonstrate that the DSK can attain greater discrimination and better align with classification tasks by altering the eigenvalues. This makes it produce higher classification performance than the original SK and other commonly used methods.
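
    For reference, the original Stein kernel has a short closed form, and the discriminative variant inserts a learned adjustment of the eigenvalues before the kernel is evaluated; the power weighting shown here is an illustrative stand-in for that adjustment, not necessarily the paper's exact parameterization.

```python
import numpy as np

def stein_kernel(X, Y, sigma=1.0):
    """Stein kernel between SPD matrices:
    k(X, Y) = exp(-sigma * S(X, Y)), with the S-divergence
    S(X, Y) = log det((X + Y) / 2) - 0.5 log det(X) - 0.5 log det(Y)."""
    s = (np.linalg.slogdet((X + Y) / 2.0)[1]
         - 0.5 * (np.linalg.slogdet(X)[1] + np.linalg.slogdet(Y)[1]))
    return float(np.exp(-sigma * s))

def adjust_eigenvalues(X, alpha):
    """Rescale the spectrum of an SPD matrix by a parameter vector alpha
    (per-eigenvalue power weighting, an illustrative choice) before the
    kernel is evaluated."""
    vals, vecs = np.linalg.eigh(X)
    return (vecs * vals**alpha) @ vecs.T

# Usage: compare the kernel before and after the discriminative adjustment.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)); X = A @ A.T + 5 * np.eye(5)
B = rng.normal(size=(5, 5)); Y = B @ B.T + 5 * np.eye(5)
alpha = np.linspace(0.8, 1.2, 5)     # hypothetical learned parameters
print(stein_kernel(X, Y),
      stein_kernel(adjust_eigenvalues(X, alpha), adjust_eigenvalues(Y, alpha)))
```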