Science.gov

Sample records for kernel smoothing methods

  1. A method of smoothed particle hydrodynamics using spheroidal kernels

    NASA Technical Reports Server (NTRS)

    Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.

    1995-01-01

    We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
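
    The spheroidal-kernel idea can be illustrated with a small sketch. The code below uses a normalized anisotropic Gaussian as a stand-in for the authors' SPH kernel, with independent smoothing lengths hx, hy, hz (so the length along the deformation axis can evolve separately from the perpendicular plane); the Gaussian form and the density-estimate usage are illustrative assumptions, not the paper's implementation.

      import numpy as np

      def spheroidal_gaussian_kernel(dx, hx, hy, hz):
          """Anisotropic (spheroidal) Gaussian smoothing kernel W(dx; hx, hy, hz).

          dx : (..., 3) array of particle separations.
          hx, hy, hz : smoothing lengths along each axis; letting hz evolve
          independently of hx = hy gives a spheroidal kernel shape.
          Normalized so that W integrates to 1 over all space.
          """
          h = np.array([hx, hy, hz])
          q2 = np.sum((dx / h) ** 2, axis=-1)
          norm = np.pi ** 1.5 * hx * hy * hz
          return np.exp(-q2) / norm

      def sph_density(positions, masses, hx, hy, hz):
          # Smoothed density estimate rho_i = sum_j m_j W(r_i - r_j)
          dx = positions[:, None, :] - positions[None, :, :]
          return (masses[None, :] * spheroidal_gaussian_kernel(dx, hx, hy, hz)).sum(axis=1)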

  2. Kernel Smoothing Methods for Non-Poissonian Seismic Hazard Analysis

    NASA Astrophysics Data System (ADS)

    Woo, Gordon

    2017-04-01

    For almost fifty years, the mainstay of probabilistic seismic hazard analysis has been the methodology developed by Cornell, which assumes that earthquake occurrence is a Poisson process, and that the spatial distribution of epicentres can be represented by a set of polygonal source zones, within which seismicity is uniform. Based on Vere-Jones' use of kernel smoothing methods for earthquake forecasting, these methods were adapted in 1994 by the author for application to probabilistic seismic hazard analysis. There is no need for ambiguous boundaries of polygonal source zones, nor for the hypothesis of time independence of earthquake sequences. In Europe, there are many regions where seismotectonic zones are not well delineated, and where there is a dynamic stress interaction between events, so that they cannot be described as independent. Following the Amatrice earthquake of 24 August 2016, the subsequent damaging earthquakes in Central Italy over the following months were not independent events. Removing foreshocks and aftershocks is not only an ill-defined task, it also has a material effect on seismic hazard computation. Because of the spatial dispersion of epicentres, and the clustering of magnitudes for the largest events in a sequence, which might all be around magnitude 6, the specific event causing the highest ground motion can vary from one site location to another. Where significant active faults have been clearly identified geologically, they should be modelled as individual seismic sources. The remaining background seismicity should be modelled as non-Poissonian using statistical kernel smoothing methods. This approach was first applied for seismic hazard analysis at a UK nuclear power plant two decades ago, and should be included within logic-trees for future probabilistic seismic hazard analyses at critical installations within Europe. In this paper, various salient European applications are given.
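
    As a rough illustration of kernel-smoothed seismicity (not Woo's specific non-Poissonian formulation), the sketch below turns a catalogue of epicentres into a smoothed activity-rate map with an isotropic Gaussian kernel; the constant bandwidth, grid, and units are assumptions made for the example.

      import numpy as np

      def smoothed_activity_rate(epicentres, grid_x, grid_y, bandwidth_km, years):
          """Kernel-smoothed seismicity rate (events per year per km^2) on a grid.

          epicentres : (N, 2) array of event coordinates in km (projected).
          bandwidth_km : Gaussian kernel bandwidth; a real application would use
          an adaptive, event-dependent bandwidth rather than a constant one.
          """
          gx, gy = np.meshgrid(grid_x, grid_y)
          rate = np.zeros_like(gx, dtype=float)
          norm = 2.0 * np.pi * bandwidth_km ** 2
          for ex, ey in epicentres:
              d2 = (gx - ex) ** 2 + (gy - ey) ** 2
              rate += np.exp(-0.5 * d2 / bandwidth_km ** 2) / norm
          return rate / years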

  3. An adaptive kernel smoothing method for classifying Austrosimulium tillyardianum (Diptera: Simuliidae) larval instars.

    PubMed

    Cen, Guanjun; Yu, Yonghao; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao

    2015-01-01

    In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks' rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby's growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods.

  4. An Adaptive Kernel Smoothing Method for Classifying Austrosimulium tillyardianum (Diptera: Simuliidae) Larval Instars

    PubMed Central

    Cen, Guanjun; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao

    2015-01-01

    In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks’ rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby’s growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods. PMID:26546689
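
    A compact sketch of a variable-bandwidth (adaptive) kernel density estimate of the kind discussed above, using the classical pilot-estimate construction with local bandwidth factors; this is a generic adaptive KDE, not the authors' instar-specific bandwidth selector or classification standard.

      import numpy as np

      def adaptive_kde(samples, grid, pilot_bandwidth, alpha=0.5):
          """Adaptive Gaussian KDE: a fixed-bandwidth pilot estimate sets a local
          bandwidth h * lam_i at each sample, with lam_i larger where the pilot
          density is low (smoother tails, better separated modes)."""
          x = np.asarray(samples, dtype=float)
          grid = np.asarray(grid, dtype=float)
          h = float(pilot_bandwidth)

          def gauss(u):
              return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

          # Fixed-bandwidth pilot estimate evaluated at the sample points
          pilot = gauss((x[:, None] - x[None, :]) / h).mean(axis=1) / h
          g = np.exp(np.mean(np.log(pilot)))           # geometric mean of the pilot
          lam = (pilot / g) ** (-alpha)                # local bandwidth factors

          # Final estimate on the grid with a per-sample bandwidth h * lam_j
          u = (grid[:, None] - x[None, :]) / (h * lam[None, :])
          return (gauss(u) / (h * lam[None, :])).mean(axis=1)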

  5. A high-order fast method for computing convolution integral with smooth kernel

    NASA Astrophysics Data System (ADS)

    Qiang, Ji

    2010-02-01

    In this paper we report on a high-order fast method to numerically calculate convolution integral with smooth non-periodic kernel. This method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for discrete summation. The method can have an arbitrarily high-order accuracy in principle depending on the number of points used in the integral approximation and a computational cost of O(N log(N)), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h^4), where h is the size of the computational grid. Applications of the Simpson rule based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.

  6. A high-order fast method for computing convolution integral with smooth kernel

    SciTech Connect

    Qiang, Ji

    2009-09-28

    In this paper we report on a high-order fast method to numerically calculate convolution integral with smooth non-periodic kernel. This method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for discrete summation. The method can have an arbitrarily high-order accuracy in principle depending on the number of points used in the integral approximation and a computational cost of O(N log(N)), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h^4), where h is the size of the computational grid. Applications of the Simpson rule based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.
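
    The scheme described can be sketched as follows: composite-Simpson quadrature weights are applied to the source samples and the sum over grid points is carried out with a zero-padded FFT, giving O(N log(N)) cost and O(h^4) accuracy for the three-point Simpson rule. The grid convention, kernel argument, and slicing below are illustrative assumptions, not the author's code.

      import numpy as np

      def fft_convolution_integral(f_vals, kernel_func, h):
          """Approximate phi(x_i) = int f(y) K(x_i - y) dy on the uniform grid
          x_i = i*h, i = 0..N-1, combining composite-Simpson weights with a
          zero-padded FFT. N must be odd for Simpson's rule."""
          f_vals = np.asarray(f_vals, dtype=float)
          n = f_vals.size
          if n % 2 == 0:
              raise ValueError("composite Simpson's rule needs an odd number of points")
          w = np.ones(n)
          w[1:-1:2], w[2:-1:2] = 4.0, 2.0               # Simpson weights 1,4,2,...,2,4,1
          g = (h / 3.0) * w * f_vals                    # quadrature-weighted source samples
          k = kernel_func(np.arange(-(n - 1), n) * h)   # kernel at every offset (i - j)*h
          m = n + k.size - 1                            # length of the full linear convolution
          c = np.fft.irfft(np.fft.rfft(g, m) * np.fft.rfft(k, m), m)
          return c[n - 1:2 * n - 1]                     # phi(x_0) ... phi(x_{N-1})

      # Example: a continuous Gauss transform of a step function (illustrative values)
      # x = np.arange(101) * 0.01
      # phi = fft_convolution_integral(x > 0.5, lambda t: np.exp(-t**2 / 0.02), 0.01)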

  7. A non-parametric method for hazard rate estimation in acute myocardial infarction patients: kernel smoothing approach.

    PubMed

    Soltanian, Ali Reza; Hossein, Mahjub

    2012-01-01

    Kernel smoothing is a non-parametric, graphical method for statistical estimation. In the present study, a kernel smoothing method was used to estimate the death hazard rate of patients with acute myocardial infarction. Curve estimation with non-parametric regression methods can involve some complexity. In this article, four kernels (Epanechnikov, biquadratic, triquadratic and rectangular) were used under local and k-nearest-neighbor bandwidths. The mean integrated squared error was employed to compare the models. The methods are illustrated on a dataset of acute myocardial infarction patients in Bushehr port, in the south of Iran. A generalized cross-validation method was used to obtain a proper bandwidth. For low bandwidth values the estimated curve is rough and hard to read; as the bandwidth increases, the estimate becomes smoother and more readable. In this study, the estimate of the death hazard rate for the patients based on the Epanechnikov kernel under a local bandwidth was 1.011 x 10^-11, which had the lowest mean squared error compared with the k-nearest-neighbors bandwidth. The death hazard rates at 10 and 30 months after the first acute myocardial infarction, obtained using the Epanechnikov kernel, were 0.0031 and 0.0012, respectively. The Epanechnikov kernel had the minimum mean integrated squared error for estimating the death hazard rate of patients with acute myocardial infarction compared with the other kernels. In addition, the mortality hazard rate of acute myocardial infarction in the study was low.
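
    For orientation, a minimal sketch of an Epanechnikov-kernel hazard-rate estimate of the general type used in such studies (a kernel-smoothed Nelson-Aalen estimator); the data layout, the fixed bandwidth, and the crude handling of ties are simplifying assumptions.

      import numpy as np

      def epanechnikov(u):
          # Epanechnikov kernel: 0.75 * (1 - u^2) on |u| <= 1, zero elsewhere
          return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

      def smoothed_hazard(times, event, grid, bandwidth):
          """Kernel-smoothed hazard rate: the increments d_i / Y(t_i) of the
          Nelson-Aalen estimator are smoothed with an Epanechnikov kernel.

          times : follow-up times; event : 1 if death observed, 0 if censored.
          """
          t = np.asarray(times, dtype=float)
          d = np.asarray(event, dtype=float)
          order = np.argsort(t)
          t, d = t[order], d[order]
          at_risk = len(t) - np.arange(len(t))          # number still at risk at t_i
          increments = d / at_risk                      # Nelson-Aalen jump sizes
          u = (np.asarray(grid, dtype=float)[:, None] - t[None, :]) / bandwidth
          return (epanechnikov(u) * increments[None, :]).sum(axis=1) / bandwidth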

  8. CRKSPH - A Conservative Reproducing Kernel Smoothed Particle Hydrodynamics Scheme

    NASA Astrophysics Data System (ADS)

    Frontiere, Nicholas; Raskin, Cody D.; Owen, J. Michael

    2017-03-01

    We present a formulation of smoothed particle hydrodynamics (SPH) that utilizes a first-order consistent reproducing kernel, a smoothing function that exactly interpolates linear fields with particle tracers. Previous formulations using reproducing kernel (RK) interpolation have had difficulties maintaining conservation of momentum due to the fact the RK kernels are not, in general, spatially symmetric. Here, we utilize a reformulation of the fluid equations such that mass, linear momentum, and energy are all rigorously conserved without any assumption about kernel symmetries, while additionally maintaining approximate angular momentum conservation. Our approach starts from a rigorously consistent interpolation theory, where we derive the evolution equations to enforce the appropriate conservation properties, at the sacrifice of full consistency in the momentum equation. Additionally, by exploiting the increased accuracy of the RK method's gradient, we formulate a simple limiter for the artificial viscosity that reduces the excess diffusion normally incurred by the ordinary SPH artificial viscosity. Collectively, we call our suite of modifications to the traditional SPH scheme Conservative Reproducing Kernel SPH, or CRKSPH. CRKSPH retains many benefits of traditional SPH methods (such as preserving Galilean invariance and manifest conservation of mass, momentum, and energy) while improving on many of the shortcomings of SPH, particularly the overly aggressive artificial viscosity and zeroth-order inaccuracy. We compare CRKSPH to two different modern SPH formulations (pressure based SPH and compatibly differenced SPH), demonstrating the advantages of our new formulation when modeling fluid mixing, strong shock, and adiabatic phenomena.

  9. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template.
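
    The weighted eigenfunction expansion can be sketched with a generic symmetric mesh or graph Laplacian standing in for the Laplace-Beltrami operator: expand the surface data in eigenfunctions and damp each coefficient by exp(-t*lambda). The Laplacian construction, the truncation, and the bandwidth t are assumptions for illustration.

      import numpy as np

      def heat_kernel_smooth(laplacian, y, t, n_eig=None):
          """Smooth scalar surface data y with the heat kernel exp(-t * Delta):
          expand y in Laplacian eigenfunctions and damp each coefficient by
          exp(-t * lambda_k). `laplacian` must be symmetric positive semi-definite."""
          lam, V = np.linalg.eigh(laplacian)            # lam ascending, V orthonormal
          if n_eig is not None:                         # optionally truncate the expansion
              lam, V = lam[:n_eig], V[:, :n_eig]
          coeff = V.T @ y                               # eigenfunction coefficients of y
          return V @ (np.exp(-t * lam) * coeff)         # weighted eigenfunction expansion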

  10. Kernel current source density method.

    PubMed

    Potworowski, Jan; Jakuczun, Wit; Lȩski, Szymon; Wójcik, Daniel

    2012-02-01

    Local field potentials (LFP), the low-frequency part of extracellular electrical recordings, are a measure of the neural activity reflecting dendritic processing of synaptic inputs to neuronal populations. To localize synaptic dynamics, it is convenient, whenever possible, to estimate the density of transmembrane current sources (CSD) generating the LFP. In this work, we propose a new framework, the kernel current source density method (kCSD), for nonparametric estimation of CSD from LFP recorded from arbitrarily distributed electrodes using kernel methods. We test specific implementations of this framework on model data measured with one-, two-, and three-dimensional multielectrode setups. We compare these methods with the traditional approach through numerical approximation of the Laplacian and with the recently developed inverse current source density methods (iCSD). We show that iCSD is a special case of kCSD. The proposed method opens up new experimental possibilities for CSD analysis from existing or new recordings on arbitrarily distributed electrodes (not necessarily on a grid), which can be obtained in extracellular recordings of single unit activity with multiple electrodes.

  11. Quantification and classification of neuronal responses in kernel-smoothed peristimulus time histograms

    PubMed Central

    Fried, Itzhak; Koch, Christof

    2014-01-01

    Peristimulus time histograms are a widespread form of visualizing neuronal responses. Kernel convolution methods transform these histograms into a smooth, continuous probability density function. This provides an improved estimate of a neuron's actual response envelope. We here develop a classifier, called the h-coefficient, to determine whether time-locked fluctuations in the firing rate of a neuron should be classified as a response or as random noise. Unlike previous approaches, the h-coefficient takes advantage of the more precise response envelope estimation provided by the kernel convolution method. The h-coefficient quantizes the smoothed response envelope and calculates the probability of a response of a given shape to occur by chance. We tested the efficacy of the h-coefficient in a large data set of Monte Carlo simulated smoothed peristimulus time histograms with varying response amplitudes, response durations, trial numbers, and baseline firing rates. Across all these conditions, the h-coefficient significantly outperformed more classical classifiers, with a mean false alarm rate of 0.004 and a mean hit rate of 0.494. We also tested the h-coefficient's performance in a set of neuronal responses recorded in humans. The algorithm behind the h-coefficient provides various opportunities for further adaptation and the flexibility to target specific parameters in a given data set. Our findings confirm that the h-coefficient can provide a conservative and powerful tool for the analysis of peristimulus time histograms with great potential for future development. PMID:25475352
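
    The smoothing step this work builds on can be sketched as follows: spike counts in fine time bins are convolved with a unit-area Gaussian kernel to give a continuous firing-rate envelope (the h-coefficient classifier itself is not reproduced here); the bin width and kernel width are illustrative choices.

      import numpy as np

      def kernel_smoothed_psth(spike_times, trials, t_start, t_stop,
                               bin_width=0.001, sigma=0.020):
          """Gaussian-kernel-smoothed peristimulus time histogram.

          spike_times : spike times (s) pooled over `trials` repetitions and
          aligned to stimulus onset. Returns bin centres and rate in spikes/s.
          """
          edges = np.arange(t_start, t_stop + bin_width, bin_width)
          counts, _ = np.histogram(spike_times, bins=edges)
          rate = counts / (trials * bin_width)             # raw PSTH in spikes/s
          half = int(4 * sigma / bin_width)
          k_t = np.arange(-half, half + 1) * bin_width
          kernel = np.exp(-0.5 * (k_t / sigma) ** 2)
          kernel /= kernel.sum()                           # unit-area discrete kernel
          smooth = np.convolve(rate, kernel, mode="same")  # smoothed response envelope
          centres = edges[:-1] + bin_width / 2
          return centres, smooth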

  12. Filters, reproducing kernel, and adaptive meshfree method

    NASA Astrophysics Data System (ADS)

    You, Y.; Chen, J.-S.; Lu, H.

    Reproducing kernel, with its intrinsic feature of moving averaging, can be utilized as a low-pass filter with scale decomposition capability. The discrete convolution of two nth order reproducing kernels with arbitrary support size in each kernel results in a filtered reproducing kernel function that has the same reproducing order. This property is utilized to separate the numerical solution into an unfiltered lower order portion and a filtered higher order portion. As such, the corresponding high-pass filter of this reproducing kernel filter can be used to identify the locations of high gradient, and consequently serves as an operator for error indication in meshfree analysis. In conjunction with the naturally conforming property of the reproducing kernel approximation, a meshfree adaptivity method is also proposed.

  13. Kernel method for corrections to scaling.

    PubMed

    Harada, Kenji

    2015-07-01

    Scaling analysis, in which one infers scaling exponents and a scaling function in a scaling law from given data, is a powerful tool for determining universal properties of critical phenomena in many fields of science. However, there are corrections to scaling in many cases, and then the inference problem becomes ill-posed by an uncontrollable irrelevant scaling variable. We propose a new kernel method based on Gaussian process regression to fix this problem generally. We test the performance of the new kernel method for some example cases. In all cases, when the precision of the example data increases, inference results of the new kernel method correctly converge. Because there is no limitation in the new kernel method for the scaling function even with corrections to scaling, unlike in the conventional method, the new kernel method can be widely applied to real data in critical phenomena.

  14. FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX.

    PubMed

    Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia

    2015-01-01

    We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use a local smoothing kernel to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show the different convergence rates of each component of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion had not been studied prior to this work, even in the independent data case.

  15. FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX*

    PubMed Central

    Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia

    2015-01-01

    We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use a local smoothing kernel to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show the different convergence rates of each component of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion had not been studied prior to this work, even in the independent data case. PMID:26283801

  16. Equalizing resolution in smoothed-particle hydrodynamics calculations using self-adaptive sinc kernels

    NASA Astrophysics Data System (ADS)

    García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin

    2014-10-01

    Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has a second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefacted zones of fluids while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overload. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.

  17. Using Cochran's Z Statistic to Test the Kernel-Smoothed Item Response Function Differences between Focal and Reference Groups

    ERIC Educational Resources Information Center

    Zheng, Yinggan; Gierl, Mark J.; Cui, Ying

    2010-01-01

    This study combined the kernel smoothing procedure and a nonparametric differential item functioning statistic--Cochran's Z--to statistically test the difference between the kernel-smoothed item response functions for reference and focal groups. Simulation studies were conducted to investigate the Type I error and power of the proposed…

  18. Nonlinear projection trick in kernel methods: an alternative to the kernel trick.

    PubMed

    Kwak, Nojun

    2013-12-01

    In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named as the nonlinear projection trick in contrast to the kernel trick. With this technique, the applicability of the kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses L1-norm instead of L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach.
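
    A condensed sketch of the idea as described: eigendecompose the (positive semi-definite) kernel matrix to obtain explicit reduced-dimensional coordinates whose inner products reproduce the kernel, and map new samples through the same decomposition. Centering of the kernel matrix is omitted and the tolerance is an assumption; any linear algorithm run on the resulting coordinates then behaves like its kernelized counterpart.

      import numpy as np

      def nonlinear_projection_trick(K, tol=1e-10):
          """Return explicit coordinates Y (d x n) with Y.T @ Y = K, plus a map
          for new samples given their kernel values against the training set."""
          lam, U = np.linalg.eigh(K)                # K = U diag(lam) U.T, K assumed PSD
          keep = lam > tol                          # effective dimensionality d
          lam, U = lam[keep], U[:, keep]
          Y = np.sqrt(lam)[:, None] * U.T           # training coordinates in kernel space

          def embed(k_new):
              # k_new : kernel values k(x_new, x_i) against the n training samples
              return (U.T @ k_new) / np.sqrt(lam)

          return Y, embed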

  1. ASMOOTH: a simple and efficient algorithm for adaptive kernel smoothing of two-dimensional imaging data

    NASA Astrophysics Data System (ADS)

    Ebeling, H.; White, D. A.; Rangarajan, F. V. N.

    2006-05-01

    An efficient algorithm for adaptive kernel smoothing (AKS) of two-dimensional imaging data has been developed and implemented using the Interactive Data Language (IDL). The functional form of the kernel can be varied (top-hat, Gaussian, etc.) to allow different weighting of the event counts registered within the smoothing region. For each individual pixel, the algorithm increases the smoothing scale until the signal-to-noise ratio (S/N) within the kernel reaches a pre-set value. Thus, noise is suppressed very efficiently, while at the same time real structure, that is, signal that is locally significant at the selected S/N level, is preserved on all scales. In particular, extended features in noise-dominated regions are visually enhanced. The ASMOOTH algorithm differs from other AKS routines in that it allows a quantitative assessment of the goodness of the local signal estimation by producing adaptively smoothed images in which all pixel values share the same S/N above the background. We apply ASMOOTH to both real observational data (an X-ray image of clusters of galaxies obtained with the Chandra X-ray Observatory) and to a simulated data set. We find the ASMOOTHed images to be fair representations of the input data in the sense that the residuals are consistent with pure noise, that is, they possess Poissonian variance and a near-Gaussian distribution around a mean of zero, and are spatially uncorrelated.
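
    A toy version of the adaptive-smoothing idea (not the published IDL implementation of ASMOOTH): for each pixel the circular top-hat kernel grows until the accumulated counts reach a target signal-to-noise ratio, using the Poisson approximation S/N ~ sqrt(total counts); the kernel shape, S/N target, and maximum radius are assumptions.

      import numpy as np

      def adaptive_tophat_smooth(counts, target_snr=3.0, max_radius=20):
          """Adaptively smooth a 2-D counts image so every output pixel reaches
          roughly the same S/N: the circular smoothing radius grows per pixel
          until sqrt(sum of counts inside the kernel) >= target_snr."""
          counts = np.asarray(counts, dtype=float)
          ny, nx = counts.shape
          yy, xx = np.mgrid[0:ny, 0:nx]
          out = np.zeros_like(counts)
          for j in range(ny):
              for i in range(nx):
                  for r in range(1, max_radius + 1):
                      mask = (yy - j) ** 2 + (xx - i) ** 2 <= r * r
                      total = counts[mask].sum()
                      if np.sqrt(total) >= target_snr or r == max_radius:
                          out[j, i] = total / mask.sum()   # mean counts per pixel
                          break
          return out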

  2. Automated nonlinear system modeling with multiple fuzzy neural networks and kernel smoothing.

    PubMed

    Yu, Wen; Li, Xiaoou

    2010-10-01

    This paper presents a novel identification approach using fuzzy neural networks. It focuses on structure and parameter uncertainties, which have been widely explored in the literature. The main contribution of this paper is that an integrated analytic framework is proposed for automated structure selection and parameter identification. A kernel smoothing technique is used to generate a model structure automatically in a fixed time interval. To cope with structural change, a hysteresis strategy is proposed to guarantee a finite number of switchings and the desired performance.

  3. Kernel methods for phenotyping complex plant architecture.

    PubMed

    Kawamura, Koji; Hibrand-Saint Oyant, Laurence; Foucher, Fabrice; Thouroude, Tatiana; Loustau, Sébastien

    2014-02-07

    The Quantitative Trait Loci (QTL) mapping of plant architecture is a critical step for understanding the genetic determinism of plant architecture. Previous studies adopted simple measurements, such as plant height, stem diameter and branching intensity, for QTL mapping of plant architecture. Many of these quantitative traits were generally correlated to each other, which gives rise to statistical problems in the detection of QTL. We aim to test the applicability of kernel methods to phenotyping inflorescence architecture and its QTL mapping. We first test Kernel Principal Component Analysis (KPCA) and Support Vector Machines (SVM) over an artificial dataset of simulated inflorescences with different types of flower distribution, which is coded as a sequence of flower-number per node along a shoot. The ability of discriminating the different inflorescence types by SVM and KPCA is illustrated. We then apply the KPCA representation to the real dataset of rose inflorescence shoots (n=1460) obtained from a mapping population of 98 F1 hybrids. We find kernel principal components with high heritability (>0.7), and the QTL analysis identifies a new QTL, which was not detected by a trait-by-trait analysis of simple architectural measurements. The main tools developed in this paper could be used to tackle the general problem of QTL mapping of complex (sequences, 3D structure, graphs) phenotypic traits.

  4. Kernel map compression for speeding the execution of kernel-based methods.

    PubMed

    Arif, Omar; Vela, Patricio A

    2011-06-01

    The use of Mercer kernel methods in statistical learning theory provides for strong learning capabilities, as seen in kernel principal component analysis and support vector machines. Unfortunately, after learning, the computational complexity of execution through a kernel is of the order of the size of the training set, which is quite large for many applications. This paper proposes a two-step procedure for arriving at a compact and computationally efficient execution procedure. After learning in the kernel space, the proposed extension exploits the universal approximation capabilities of generalized radial basis function neural networks to efficiently approximate and replace the projections onto the empirical kernel map used during execution. Sample applications demonstrate significant compression of the kernel representation with graceful performance loss.

  5. The pre-image problem in kernel methods.

    PubMed

    Kwok, James Tin-yau; Tsang, Ivor Wai-hung

    2004-11-01

    In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as on using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.

  6. The Kernel Energy Method: Construction of 3 & 4 tuple Kernels from a List of Double Kernel Interactions

    PubMed Central

    Huang, Lulu; Massa, Lou

    2010-01-01

    The Kernel Energy Method (KEM) provides a way to calculate the ab-initio energy of very large biological molecules. The results are accurate, and the computational time is reduced. However, by use of a list of double kernel interactions, a significant additional reduction of computational effort may be achieved while still retaining ab-initio accuracy. A numerical comparison of the indices that name the known double interactions in question allows one to list higher order interactions having the property of topological continuity within the full molecule of interest. When that list of interactions is unpacked as a kernel expansion, which weights the relative importance of each kernel in an expression for the total molecular energy, high accuracy and a further significant reduction in computational effort result. A KEM molecular energy calculation based upon the HF/STO3G chemical model is applied to the protein insulin as an illustration. PMID:21243065
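
    The double-kernel reassembly that the KEM builds on can be written compactly. The sketch below evaluates the standard two-body KEM expression, total energy ~ sum of double-kernel energies minus (n-2) times the sum of single-kernel energies, from precomputed ab-initio values; the 3- and 4-tuple selection by topological continuity discussed in the abstract is not reproduced.

      def kem_total_energy(single_energies, pair_energies):
          """Two-body Kernel Energy Method estimate of a molecule's total energy.

          single_energies : list of E_i, one ab-initio energy per kernel.
          pair_energies   : dict {(i, j): E_ij} with i < j, energies of the
                            double kernels (pairs of kernels computed together).
          """
          n = len(single_energies)
          e_pairs = sum(pair_energies[(i, j)] for i in range(n) for j in range(i + 1, n))
          return e_pairs - (n - 2) * sum(single_energies)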

  7. Kernel Smoothed Profile Likelihood Estimation in the Accelerated Failure Time Frailty Model for Clustered Survival Data

    PubMed Central

    Liu, Bo; Lu, Wenbin; Zhang, Jiajia

    2013-01-01

    Clustered survival data frequently arise in biomedical applications, where event times of interest are clustered into groups such as families. In this article we consider an accelerated failure time frailty model for clustered survival data and develop nonparametric maximum likelihood estimation for it via a kernel smoother aided EM algorithm. We show that the proposed estimator for the regression coefficients is consistent, asymptotically normal and semiparametric efficient when the kernel bandwidth is properly chosen. An EM-aided numerical differentiation method is derived for estimating its variance. Simulation studies evaluate the finite sample performance of the estimator, and it is applied to the Diabetic Retinopathy data set. PMID:24443587

  8. Heat kernel methods for Lifshitz theories

    NASA Astrophysics Data System (ADS)

    Barvinsky, Andrei O.; Blas, Diego; Herrero-Valea, Mario; Nesterov, Dmitry V.; Pérez-Nadal, Guillem; Steinwachs, Christian F.

    2017-06-01

    We study the one-loop covariant effective action of Lifshitz theories using the heat kernel technique. The characteristic feature of Lifshitz theories is an anisotropic scaling between space and time. This is enforced by the existence of a preferred foliation of space-time, which breaks Lorentz invariance. In contrast to the relativistic case, covariant Lifshitz theories are only invariant under diffeomorphisms preserving the foliation structure. We develop a systematic method to reduce the calculation of the effective action for a generic Lifshitz operator to an algorithm acting on known results for relativistic operators. In addition, we present techniques that drastically simplify the calculation for operators with special properties. We demonstrate the efficiency of these methods by explicit applications.

  9. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced enabling direct computation of coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so called kernel trick. However, NPT is inherently difficult to be implemented incrementally because an ever increasing kernel matrix should be treated as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental methods to implement a kernel version of the incremental methods. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as, kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.

  10. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  11. Comparison of Kernel Equating and Item Response Theory Equating Methods

    ERIC Educational Resources Information Center

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…
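
    The Gaussian continuization at the heart of kernel equating can be sketched as below: each discrete score distribution is smoothed into a continuous CDF and the equating function is the composition G^{-1}(F(x)). This simplified version omits the mean- and variance-preserving rescaling of the full KE method, and the bandwidth and inversion grid are illustrative assumptions.

      import numpy as np
      from scipy.stats import norm

      def continuize(scores, probs, h):
          """Return F_h, a Gaussian-kernel continuization of a discrete score
          distribution: F_h(x) = sum_j p_j * Phi((x - s_j) / h)."""
          scores, probs = np.asarray(scores, float), np.asarray(probs, float)
          return lambda x: float(np.sum(probs * norm.cdf((x - scores) / h)))

      def equate(x, scores_X, probs_X, scores_Y, probs_Y, h=0.6):
          """Equate score x on form X to the form-Y scale: e(x) = G^{-1}(F(x)),
          with the inverse of G found numerically on a fine grid of Y scores."""
          F, G = continuize(scores_X, probs_X, h), continuize(scores_Y, probs_Y, h)
          grid = np.linspace(min(scores_Y) - 4 * h, max(scores_Y) + 4 * h, 4001)
          g_vals = np.array([G(v) for v in grid])
          return float(np.interp(F(x), g_vals, grid))   # invert the monotone CDF G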

  12. Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets.

    PubMed

    Wang, Shitong; Wang, Jun; Chung, Fu-lai

    2014-01-01

    Kernel methods such as the standard support vector machine and support vector regression trainings take O(N^3) time and O(N^2) space complexities in their naïve implementations, where N is the training set size. It is thus computationally infeasible to apply them to large data sets, and a replacement of the naive method for finding the quadratic programming (QP) solutions is highly desirable. By observing that many kernel methods can be linked up with the kernel density estimate (KDE), which can be efficiently implemented by some approximation techniques, a new learning method called fast KDE (FastKDE) is proposed to scale up kernel methods. It is based on establishing a connection between KDE and the QP problems formulated for kernel methods using an entropy-based integrated-squared-error criterion. As a result, FastKDE approximation methods can be applied to solve these QP problems. In this paper, the latest advance in fast data reduction via KDE is exploited. With just a simple sampling strategy, the resulting FastKDE method can be used to scale up various kernel methods with a theoretical guarantee that their performance does not degrade much. It has a time complexity of O(m^3), where m is the number of data points sampled from the training set. Experiments on different benchmarking data sets demonstrate that the proposed method has comparable performance with the state-of-the-art method and that it is effective for a wide range of kernel methods in achieving fast learning in large data sets.

  13. Protoribosome by quantum kernel energy method.

    PubMed

    Huang, Lulu; Krupkin, Miri; Bashan, Anat; Yonath, Ada; Massa, Lou

    2013-09-10

    Experimental evidence suggests the existence of an RNA molecular prebiotic entity, called by us the "protoribosome," which may have evolved in the RNA world before evolution of the genetic code and proteins. This vestige of the RNA world, which possesses all of the capabilities required for peptide bond formation, seems to be still functioning in the heart of all of the contemporary ribosome. Within the modern ribosome this remnant includes the peptidyl transferase center. Its highly conserved nucleotide sequence is suggestive of its robustness under diverse environmental conditions, and hence on its prebiotic origin. Its twofold pseudosymmetry suggests that this entity could have been a dimer of self-folding RNA units that formed a pocket within which two activated amino acids might be accommodated, similar to the binding mode of modern tRNA molecules that carry amino acids or peptidyl moieties. Using quantum mechanics and crystal coordinates, this work studies the question of whether the putative protoribosome has properties necessary to function as an evolutionary precursor to the modern ribosome. The quantum model used in the calculations is density functional theory--B3LYP/3-21G*, implemented using the kernel energy method to make the computations practical and efficient. It occurs that the necessary conditions that would characterize a practicable protoribosome--namely (i) energetic structural stability and (ii) energetically stable attachment to substrates--are both well satisfied.

  14. Cold-moderator scattering kernel methods

    SciTech Connect

    MacFarlane, R. E.

    1998-01-01

    An accurate representation of the scattering of neutrons by the materials used to build cold sources at neutron scattering facilities is important for the initial design and optimization of a cold source, and for the analysis of experimental results obtained using the cold source. In practice, this requires a good representation of the physics of scattering from the material, a method to convert this into observable quantities (such as scattering cross sections), and a method to use the results in a neutron transport code (such as the MCNP Monte Carlo code). At Los Alamos, the authors have been developing these capabilities over the last ten years. The final set of cold-moderator evaluations, together with evaluations for conventional moderator materials, was released in 1994. These materials have been processed into MCNP data files using the NJOY Nuclear Data Processing System. Over the course of this work, they were able to develop a new module for NJOY called LEAPR based on the LEAP + ADDELT code from the UK as modified by D.J. Picton for cold-moderator calculations. Much of the physics for methane came from Picton's work. The liquid hydrogen work was originally based on a code using the Young-Koppel approach that went through a number of hands in Europe (including Rolf Neef and Guy Robert). It was generalized and extended for LEAPR, and depends strongly on work by Keinert and Sax of the University of Stuttgart. Thus, their collection of cold-moderator scattering kernels is truly an international effort, and they are glad to be able to return the enhanced evaluations and processing techniques to the international community. In this paper, they give sections on the major cold moderator materials (namely, solid methane, liquid methane, and liquid hydrogen) using each section to introduce the relevant physics for that material and to show typical results.

  15. Intelligent classification methods of grain kernels using computer vision analysis

    NASA Astrophysics Data System (ADS)

    Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo

    2011-06-01

    In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.

  16. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…

  17. Optimizing spatial filters with kernel methods for BCI applications

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacai; Tang, Jianjun; Yao, Li

    2007-11-01

    Brain Computer Interface (BCI) is a communication or control system in which the user's messages or commands do not depend on the brain's normal output channels. The key step of BCI technology is to find a reliable method to detect the particular brain signals, such as the alpha, beta and mu components in EEG/ECoG trials, and then translate them into usable control signals. In this paper, our objective is to introduce a novel approach that is able to extract the discriminative pattern from the non-stationary EEG signals based on common spatial patterns (CSP) analysis combined with kernel methods. The basic idea of our kernel CSP method is to perform a nonlinear form of CSP by the use of kernel methods that can efficiently compute the common and distinct components in high-dimensional feature spaces related to the input space by some nonlinear map. The algorithm described here is tested off-line with dataset I from the BCI Competition 2005. Our experiments show that the spatial filters employed with kernel CSP can effectively extract discriminatory information from single-trial ECoG recorded during imagined movements. The high recognition rates and the computational simplicity of the "kernel trick" make it a promising method for BCI systems.

  18. A 3D Contact Smoothing Method

    SciTech Connect

    Puso, M A; Laursen, T A

    2002-05-02

    Smoothing of contact surfaces can be used to eliminate the chatter typically seen with node on facet contact and give a better representation of the actual contact surface. The latter effect is well demonstrated for problems with interference fits. In this work we present two methods for the smoothing of contact surfaces for 3D finite element contact. In the first method, we employ Gregory patches to smooth the faceted surface in a node on facet implementation. In the second method, we employ a Bezier interpolation of the faceted surface in a mortar method implementation of contact. As is well known, node on facet approaches can exhibit locking due to the failure of the Babuska-Brezzi condition and in some instances fail the patch test. The mortar method implementation is stable and provides optimal convergence in the energy of error. In this work we demonstrate the superiority of the smoothed versus the non-smoothed node on facet implementations. We also show where the node on facet method fails and some results from the smoothed mortar method implementation.

  19. Enrichment of the finite element method with reproducing kernel particle method

    SciTech Connect

    Chen, Y.; Liu, W.K.; Uras, R.A.

    1995-07-01

    Based on the reproducing kernel particle method, an enrichment procedure is introduced to enhance the effectiveness of the finite element method. The basic concepts of the reproducing kernel particle method are briefly reviewed. By adopting the well-known completeness requirements, a generalized form of the reproducing kernel particle method is developed. Through a combination of these two methods, their unique advantages can be utilized. An alternative approach, the multiple-field method, is also introduced.

  1. Kernel Methods for Mining Instance Data in Ontologies

    NASA Astrophysics Data System (ADS)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and metadata available on the Web is constantly growing. The successful application of machine learning techniques for learning of ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology which can be flexibly assembled and tuned. Initial experiments on real world Semantic Web data yield promising results and show the usefulness of our approach.

  2. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.

  3. A locally adaptive kernel regression method for facies delineation

    NASA Astrophysics Data System (ADS)

    Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.

    2015-12-01

    Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data, to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial locations and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available hard data are few and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run in a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method is demonstrated to significantly improve when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curve performance.
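
    A bare-bones kernel classifier in the spirit of the method described, but without its key ingredient, the locally adaptive anisotropic steering kernels: facies probabilities at unsampled locations are Nadaraya-Watson weighted averages of the observed facies indicators. The isotropic Gaussian kernel and fixed bandwidth are simplifying assumptions.

      import numpy as np

      def kernel_facies_probabilities(obs_xy, obs_facies, query_xy, bandwidth, n_facies):
          """Nadaraya-Watson estimate of facies probabilities at query locations.

          obs_xy : (n, 2) sample coordinates; obs_facies : (n,) integer labels;
          query_xy : (m, 2) locations to classify. Returns an (m, n_facies)
          array of probabilities; argmax along axis 1 gives the delineation.
          """
          obs_xy = np.asarray(obs_xy, dtype=float)
          query_xy = np.asarray(query_xy, dtype=float)
          obs_facies = np.asarray(obs_facies, dtype=int)
          d2 = ((query_xy[:, None, :] - obs_xy[None, :, :]) ** 2).sum(-1)
          w = np.exp(-0.5 * d2 / bandwidth ** 2)            # (m, n) kernel weights
          w /= w.sum(axis=1, keepdims=True)
          indicators = np.eye(n_facies)[obs_facies]         # (n, n_facies) one-hot labels
          return w @ indicators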

  4. Chebyshev recursion methods: Kernel polynomials and maximum entropy

    SciTech Connect

    Silver, R.N.; Roeder, H.; Voter, A.F.; Kress, J.D.

    1995-10-01

    The authors describe two Chebyshev recursion methods for calculations with very large sparse Hamiltonians, the kernel polynomial method (KPM) and the maximum entropy method (MEM). They are especially applicable to physical properties involving large numbers of eigenstates, which include densities of states, spectral functions, thermodynamics, total energies, as well as forces for molecular dynamics and Monte Carlo simulations. The authors apply Chebyshev methods to the electronic structure of Si, the thermodynamics of Heisenberg antiferromagnets, and a polaron problem.

  5. Multiple predictor smoothing methods for sensitivity analysis.

    SciTech Connect

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.

  6. Several methods of smoothing motion capture data

    NASA Astrophysics Data System (ADS)

    Qi, Jingjing; Miao, Zhenjiang; Wang, Zhifei; Zhang, Shujun

    2011-06-01

    Human motion capture and editing technologies are widely used in computer animation production. Original motion data can be acquired with a human motion capture system and then processed with a motion editing system. However, noise embedded in the original motion data may be introduced by target extraction, the three-dimensional reconstruction process, the optimization algorithm, and the devices themselves in the motion capture system. The motion data must be modified before being used to make videos; otherwise the animated figures will be jerky and their behavior will look unnatural. Therefore, motion smoothing is essential. In this paper, we compare and summarize three methods of smoothing original motion capture data.
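
    Three simple smoothing options of the kind such a comparison typically covers, sketched on a single motion channel: a moving average, a Gaussian kernel, and a Savitzky-Golay filter. The window lengths are illustrative, and none of these is claimed to be the paper's exact choice.

      import numpy as np
      from scipy.signal import savgol_filter

      def moving_average(x, window=9):
          # Box-filter smoothing of one motion-capture channel
          kernel = np.ones(window) / window
          return np.convolve(x, kernel, mode="same")

      def gaussian_smooth(x, sigma=2.0):
          # Gaussian-kernel smoothing (kernel truncated at 4 sigma)
          half = int(4 * sigma)
          t = np.arange(-half, half + 1)
          kernel = np.exp(-0.5 * (t / sigma) ** 2)
          kernel /= kernel.sum()
          return np.convolve(x, kernel, mode="same")

      def savgol_smooth(x, window=11, order=3):
          # Savitzky-Golay: local polynomial fit, preserves peaks better
          return savgol_filter(x, window_length=window, polyorder=order)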

  7. The Kernel Energy Method: application to a tRNA.

    PubMed

    Huang, Lulu; Massa, Lou; Karle, Jerome

    2006-01-31

    The Kernel Energy Method (KEM) may be used to calculate the quantum mechanical molecular energy using several model chemistries. Simplification is obtained by mathematically breaking a large molecule into smaller parts, called kernels. The full molecule is reassembled from calculations carried out on the kernels. KEM is as yet untested for RNA, and such a test is the purpose here. The basic kernel for RNA is a nucleotide that in general may differ from those of DNA. RNA is a single strand rather than the double helix of DNA. The KEM energy has been calculated for a tRNA whose crystal structure is known and which contains 2,565 atoms. The energy is calculated to be E = -108,995.1668 (a.u.) in the Hartree-Fock approximation, using a limited basis. Interaction energies are found to be consistent with the hydrogen-bonding scheme previously found. In this paper, the range of biochemical molecules amenable to quantum studies by means of the KEM has been broadened to include RNA.

  8. Method for producing smooth inner surfaces

    DOEpatents

    Cooper, Charles A.

    2016-05-17

    The invention provides a method for preparing superconducting cavities, the method comprising causing polishing media to tumble by centrifugal barrel polishing within the cavities for a time sufficient to attain a surface smoothness of less than 15 nm root mean square roughness over approximately a 1 mm² scan area. The method also provides for a method for preparing superconducting cavities, the method comprising causing polishing media bound to a carrier to tumble within the cavities. The method also provides for a method for preparing superconducting cavities, the method comprising causing polishing media in a slurry to tumble within the cavities.

  9. Single image super-resolution via an iterative reproducing kernel Hilbert space method.

    PubMed

    Deng, Liang-Jian; Guo, Weihong; Huang, Ting-Zhu

    2016-11-01

    Image super-resolution, a process to enhance image resolution, has important applications in satellite imaging, high definition television, medical imaging, etc. Many existing approaches use multiple low-resolution images to recover one high-resolution image. In this paper, we present an iterative scheme to solve single image super-resolution problems. It recovers a high quality high-resolution image from a single low-resolution image without using a training data set. We solve the problem from an image intensity function estimation perspective and assume the image contains smooth and edge components. We model the smooth components of an image using a thin-plate reproducing kernel Hilbert space (RKHS) and the edges using approximated Heaviside functions. The proposed method is applied to image patches, aiming to reduce computation and storage. Visual and quantitative comparisons with some competitive approaches show the effectiveness of the proposed method.

  10. Adaptive Kernel Based Machine Learning Methods

    DTIC Science & Technology

    2012-10-15

    Multiscale collocation methods are developed in [3] for solving a system of integral equations which is a reformulation of the Tikhonov-regularized ... Direct numerical solutions of the Tikhonov regularization equation require one to generate a matrix representation of the composition of the ... issue, rather than directly solving the Tikhonov-regularized equation, we propose to solve an equivalent coupled system of integral equations. We apply a

  11. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with a multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieves the highest classification performance.

  12. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with a multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  13. Hardness methods for testing maize kernels.

    PubMed

    Fox, Glen; Manley, Marena

    2009-07-08

    Maize is a highly important crop in many countries around the world, both through the sale of the crop to domestic processors for the production of maize products and as a staple food on subsistence farms in developing countries. In many countries, there have been long-term research efforts to develop a suitable hardness method that could assist the maize industry in improving processing efficiency as well as possibly providing a quality specification for maize growers, which could attract a premium. This paper focuses specifically on hardness and reviews a number of methodologies used internationally, as well as important biochemical aspects of maize that contribute to hardness. Numerous foods are produced from maize, and hardness has been described as having an impact on food quality. However, the basis and measurement of hardness are very general and would apply to any use of maize from any country. From the published literature, it would appear that one of the simpler methods used to measure hardness is a grinding step followed by a sieving step using multiple sieve sizes. This allows the range in hardness within a sample, as well as the average particle size and/or coarse/fine ratio, to be calculated. Any of these parameters could easily be used as reference values for the development of near-infrared (NIR) spectroscopy calibrations. The development of precise NIR calibrations will provide an excellent tool for breeders, handlers, and processors to deliver specific cultivars in the case of growers and bulk loads in the case of handlers, thereby ensuring the most efficient use of maize by domestic and international processors. This paper also considers previous research describing the biochemical aspects of maize that have been related to maize hardness. Both starch and protein affect hardness, with most research focusing on the storage proteins (zeins). Both the content and composition of the zein fractions affect…
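
    As a rough illustration of the grinding-and-sieving approach, the snippet below computes a coarse/fine ratio and an approximate mean particle size from hypothetical sieve fractions; the sieve apertures, masses, and the 500 µm coarse cutoff are assumptions for the example, not values from the paper.

```python
# masses (g) retained on each sieve after grinding a maize sample (hypothetical numbers)
sieve_microns = [710, 500, 300, 150, 0]          # 0 = collection pan (fines)
mass_retained = [12.0, 18.5, 9.0, 6.5, 4.0]

total = sum(mass_retained)
coarse = sum(m for s, m in zip(sieve_microns, mass_retained) if s >= 500)  # assumed cutoff
fine = total - coarse
mean_particle_size = sum(s * m for s, m in zip(sieve_microns, mass_retained)) / total  # crude estimate

print(f"coarse/fine ratio = {coarse / fine:.2f}, mean size ~ {mean_particle_size:.0f} um")
```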

  14. Smooth electrode and method of fabricating same

    SciTech Connect

    Weaver, Stanton Earl; Kennerly, Stacey Joy; Aimi, Marco Francesco

    2012-08-14

    A smooth electrode is provided. The smooth electrode includes at least one metal layer having thickness greater than about 1 micron; wherein an average surface roughness of the smooth electrode is less than about 10 nm.

  15. Kernel Methods on Spike Train Space for Neuroscience: A Tutorial

    NASA Astrophysics Data System (ADS)

    Park, Il Memming; Seth, Sohan; Paiva, Antonio R. C.; Li, Lin; Principe, Jose C.

    2013-07-01

    Over the last decade several positive definite kernels have been proposed to treat spike trains as objects in Hilbert space. However, for the most part, such attempts still remain a mere curiosity for both computational neuroscientists and signal processing experts. This tutorial illustrates why kernel methods can, and have already started to, change the way spike trains are analyzed and processed. The presentation incorporates simple mathematical analogies and convincing practical examples in an attempt to show the yet unexplored potential of positive definite functions to quantify point processes. It also provides a detailed overview of the current state of the art and future challenges with the hope of engaging the readers in active participation.
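
    As a concrete flavour of such kernels, the sketch below evaluates a simple positive-definite spike-train kernel that sums exponentially decaying interactions between all pairs of spike times (akin to the memoryless cross-intensity kernel); the spike times and time constant are arbitrary.

```python
import numpy as np

def exp_spike_kernel(s1, s2, tau=0.05):
    """Spike-train kernel: sum of exponential (Laplacian) interactions between
    every pair of spike times from the two trains; larger means more similar."""
    s1, s2 = np.asarray(s1, dtype=float), np.asarray(s2, dtype=float)
    return np.exp(-np.abs(s1[:, None] - s2[None, :]) / tau).sum()

train_a = [0.10, 0.32, 0.55]         # spike times in seconds
train_b = [0.12, 0.30, 0.58, 0.90]
print(exp_spike_kernel(train_a, train_b))
```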

  16. Rotorcraft Smoothing Via Linear Time Periodic Methods

    DTIC Science & Technology

    2007-07-01

    Optimal Control Methodology for Rotor Vibration Smoothing ... Mathematic Foundations of Linear Time Periodic Systems ... The Maximum Likelihood Estimator ... The Cramer-Rao Inequality ... Statistical ... adjustments for vibration reduction. Rotor vibrational reduction methods during the 1980s began to adopt a mathematical

  17. Kernel-Smoothing Estimation of Item Characteristic Functions for Continuous Personality Items: An Empirical Comparison with the Linear and the Continuous-Response Models

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2004-01-01

    This study used kernel-smoothing procedures to estimate the item characteristic functions (ICFs) of a set of continuous personality items. The nonparametric ICFs were compared with the ICFs estimated (a) by the linear model and (b) by Samejima's continuous-response model. The study was based on a conditioned approach and used an error-in-variables…

  19. NIRS method for precise identification of Fusarium damaged wheat kernels

    USDA-ARS?s Scientific Manuscript database

    Development of scab resistant wheat varieties may be enhanced by non-destructive evaluation of kernels for Fusarium damaged kernels (FDKs) and deoxynivalenol (DON) levels. Fusarium infection generally affects kernel appearance, but insect damage and other fungi can cause similar symptoms. Also, some...

  20. A graph kernel method for DNA-binding site prediction.

    PubMed

    Yan, Changhui; Wang, Yingfeng

    2014-01-01

    Protein-DNA interactions play important roles in many biological processes. Computational methods that can accurately predict DNA-binding sites on proteins will greatly expedite research on problems involving protein-DNA interactions. This paper presents a method for predicting DNA-binding sites on protein structures. The method represents protein surface patches using labeled graphs and uses a graph kernel method to calculate the similarities between graphs. A new surface patch is predicted to be an interface or non-interface patch based on its similarities to known DNA-binding patches and non-DNA-binding patches. The proposed method achieved high accuracy when tested on a representative set of 146 protein-DNA complexes using leave-one-out cross-validation. Then, the method was applied to identify DNA-binding sites on 13 unbound structures of DNA-binding proteins. In each of the unbound structures, the top patch predicted by the proposed method precisely indicated the location of the DNA-binding site. Comparisons with other methods showed that the proposed method was competitive in predicting DNA-binding sites on unbound proteins. The proposed method uses graphs to encode the distribution of features in three-dimensional (3D) space. Thus, compared with other vector-based methods, it has the advantage of taking into account the spatial distribution of features on the proteins. By using an efficient kernel method to compare graphs, the proposed method also avoids the demanding computations required for comparing 3D objects. It provides a competitive method for predicting DNA-binding sites without requiring structure alignment.

  1. Kernel weights optimization for error diffusion halftoning method

    NASA Astrophysics Data System (ADS)

    Fedoseev, Victor

    2015-02-01

    This paper describes a study to find the best error diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of values. WSNR was used as an objective measure of quality. The multidimensional optimization problem was solved numerically using several well-known algorithms: Nelder-Mead, BFGS, and others. The study found a kernel function that provides a quality gain of about 5% over the best of the commonly used kernels, the one introduced by Floyd and Steinberg. Other kernels obtained make it possible to significantly reduce the computational complexity of the halftoning process without reducing its quality.
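
    For context, the commonly used Floyd-Steinberg kernel mentioned above diffuses the quantization error of each pixel to four neighbours with weights 7/16, 3/16, 5/16, and 1/16. The sketch below is a straightforward reference implementation of that baseline, not of the optimized kernels found in the study.

```python
import numpy as np

def floyd_steinberg(img):
    """Binary halftone of a grayscale image in [0, 1] using the classic
    Floyd-Steinberg error-diffusion kernel (7/16, 3/16, 5/16, 1/16)."""
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out

halftone = floyd_steinberg(np.linspace(0, 1, 64 * 64).reshape(64, 64))  # grayscale ramp test
```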

  2. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods

    PubMed Central

    Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675

  3. An efficient method for correcting the edge artifact due to smoothing.

    PubMed

    Maisog, J M; Chmielowska, J

    1998-01-01

    Spatial smoothing is a common pre-processing step in the analysis of functional brain imaging data. It can increase sensitivity to signals of specific shapes and sizes (Rosenfeld and Kak [1982]: Digital Picture Processing, vol. 2. Orlando, Fla.: Academic; Worsley et al. [1996]: Hum Brain Mapping 4:74-90). Also, some amount of spatial smoothness is required if methods from the theory of Gaussian random fields are to be used (Holmes [1994]: Statistical Issues in Functional Brain Mapping. PhD thesis, University of Glasgow). Smoothing is most often implemented as a convolution of the imaging data with a smoothing kernel, and convolution is most efficiently performed using the Convolution Theorem and the Fast Fourier Transform (Cooley and Tukey [1965]: Math Comput 19:297-301; Priestly [1981]: Spectral Analysis and Time Series. San Diego: Academic; Press et al. [1992]: Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. Cambridge: Cambridge University Press). An undesirable side effect of smoothing is an artifact along the edges of the brain, where brain voxels become smoothed with non-brain voxels. This results in a dark rim which might be mistaken for hypoactivity. In this short methodological paper, we present a method for correcting functional brain images for the edge artifact due to smoothing, while retaining the use of the Convolution Theorem and the Fast Fourier Transform for efficient calculation of convolutions.
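
    One widely used form of this correction (a sketch under the assumption that the same smoothing is applied to a binary brain mask; not necessarily the exact scheme of the paper) divides the smoothed data by the smoothed mask inside the brain, which compensates for the dilution of edge voxels by zero-valued neighbours:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
volume = rng.normal(size=(32, 32, 32))                  # stand-in functional volume
brain_mask = np.zeros_like(volume)
brain_mask[8:24, 8:24, 8:24] = 1.0                      # toy brain mask
data = volume * brain_mask

sigma = 2.0
smoothed = gaussian_filter(data, sigma)                 # smoothing darkens the brain edge
mask_smoothed = gaussian_filter(brain_mask, sigma)      # same kernel applied to the mask

corrected = np.zeros_like(smoothed)
inside = brain_mask > 0
corrected[inside] = smoothed[inside] / mask_smoothed[inside]  # undo the dark-rim dilution
```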

  4. Sensitivity kernels for viscoelastic loading based on adjoint methods

    NASA Astrophysics Data System (ADS)

    Al-Attar, David; Tromp, Jeroen

    2014-01-01

    Observations of glacial isostatic adjustment (GIA) allow for inferences to be made about mantle viscosity, ice sheet history and other related parameters. Typically, this inverse problem can be formulated as minimizing the misfit between the given observations and a corresponding set of synthetic data. When the number of parameters is large, solution of such optimization problems can be computationally challenging. A practical, albeit non-ideal, solution is to use gradient-based optimization. Although the gradient of the misfit required in such methods could be calculated approximately using finite differences, the necessary computation time grows linearly with the number of model parameters, and so this is often infeasible. A far better approach is to apply the `adjoint method', which allows the exact gradient to be calculated from a single solution of the forward problem, along with one solution of the associated adjoint problem. As a first step towards applying the adjoint method to the GIA inverse problem, we consider its application to a simpler viscoelastic loading problem in which gravitationally self-consistent ocean loading is neglected. The earth model considered is non-rotating, self-gravitating, compressible, hydrostatically pre-stressed, laterally heterogeneous and possesses a Maxwell solid rheology. We determine adjoint equations and Fréchet kernels for this problem based on a Lagrange multiplier method. Given an objective functional J defined in terms of the surface deformation fields, we show that its first-order perturbation can be written δJ = ∫_{M_S} K_η δln η dV + ∫_{t_0}^{t_1} ∫_{∂M} K_σ̇ δσ̇ dS dt, where δln η = δη/η denotes relative viscosity variations in solid regions M_S, dV is the volume element, δσ̇ is the perturbation to the time derivative of the surface load, which is defined on the earth model's surface ∂M and for times [t_0, t_1], and dS is the surface element on ∂M. The `viscosity

  5. An improved method for rebinning kernels from cylindrical to Cartesian coordinates.

    PubMed

    Rathee, S; McClean, B A; Field, C

    1993-01-01

    This paper describes the errors in rebinning photon dose point spread functions and pencil beam kernels (PBKs) from cylindrical to Cartesian coordinates. An area overlap method, which assumes that the fractional energy deposited per unit volume remains constant within cylindrical voxels, yields large deviations (up to 20%) in rebinned Cartesian voxels while conserving the total energy. A modified area overlap method is presented that allows the fractional energy deposited per unit volume within cylindrical voxels to vary according to an interpolating function. This method rebins the kernels accurately in each Cartesian voxel while conserving the total energy. The dose distributions were computed for a partially blocked beam of uniform fluence using the Cartesian coordinate kernel and the kernels rebinned by both methods. The kernel rebinned by the modified area overlap method gave errors of less than 1.7%, while the kernel rebinned by the area overlap method gave errors up to 4.4%.

  6. Estimating the Bias of Local Polynomial Approximation Methods Using the Peano Kernel

    SciTech Connect

    Blair, J.; Machorro, E.; Luttman, A.

    2013-03-01

    The determination of uncertainty of an estimate requires both the variance and the bias of the estimate. Calculating the variance of local polynomial approximation (LPA) estimates is straightforward. We present a method, using the Peano Kernel Theorem, to estimate the bias of LPA estimates and show how this can be used to optimize the LPA parameters in terms of the bias-variance tradeoff. Figures of merit are derived and values calculated for several common methods. The results in the literature are expanded by giving bias error bounds that are valid for all lengths of the smoothing interval, generalizing the currently available asymptotic results that are only valid in the limit as the length of this interval goes to zero.

  7. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  8. Kernel energy method applied to vesicular stomatitis virus nucleoprotein

    PubMed Central

    Huang, Lulu; Massa, Lou; Karle, Jerome

    2009-01-01

    The kernel energy method (KEM) is applied to the vesicular stomatitis virus (VSV) nucleoprotein (PDB ID code 2QVJ). The calculations employ atomic coordinates from the crystal structure at 2.8-Å resolution, except for the hydrogen atoms, whose positions were modeled by using the computer program HYPERCHEM. The calculated KEM ab initio limited basis Hartree-Fock energy for the full 33,175 atom molecule (including hydrogen atoms) is obtained. In the KEM, a full biological molecule is represented by smaller “kernels” of atoms, greatly simplifying the calculations. Collections of kernels are well suited for parallel computation. VSV consists of five similar chains, and we obtain the energy of each chain. Interchain hydrogen bonds contribute to the interaction energy between the chains. These hydrogen bond energies are calculated in Hartree-Fock (HF) and Møller-Plesset perturbation theory to second order (MP2) approximations by using 6–31G** basis orbitals. The correlation energy, included in MP2, is a significant factor in the interchain hydrogen bond energies. PMID:19188588

  9. Earthquake forecasting through a smoothing Kernel and the rate-and-state friction law: Application to the Taiwan region

    NASA Astrophysics Data System (ADS)

    Chan, C.; Wu, Y.

    2011-12-01

    We applied two forecasting models for the spatio-temporal distribution of seismicity density rate, based on a smoothing kernel function (SKF) and the rate-and-state friction law (RFL), to the Taiwan region to test their feasibility. The earthquake catalog from 1973 to 2007 was used to build a time-independent forecasting model through the SKF. Coulomb stress changes imparted by the M≥4.5 earthquakes from 2008 to 2009 were calculated in order to propose a time-dependent model by the RFL. The distribution of M≥3.0 earthquakes from 2008 to 2009 is forecasted to examine our results. For the SKF and RFL models, the percentages of forecasted earthquakes located within the 50% of the study area with the highest calculated seismicity rate are 82% and 72%, respectively. The results show that both models have good earthquake forecasting ability. We further propose another model by combining these two approaches; the rate further reaches 84%. This will be useful for seismicity forecasting in the near future.
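
    A minimal sketch of the time-independent, smoothing-kernel part of such a model is shown below: past epicentres are smoothed onto a grid with a fixed-width Gaussian kernel to give a seismicity density. The toy catalogue, grid, and bandwidth are invented for illustration; the paper's actual SKF, catalogue treatment, and rate-and-state component are not reproduced here.

```python
import numpy as np

# toy epicentre catalogue: (lon, lat) of past events
events = np.array([[121.5, 23.5], [121.6, 23.4], [120.9, 23.9], [121.2, 24.1]])

lon = np.linspace(120.0, 122.5, 50)
lat = np.linspace(22.0, 25.5, 70)
LON, LAT = np.meshgrid(lon, lat)

h = 0.2  # smoothing bandwidth in degrees (choice is problem-dependent)
density = np.zeros_like(LON)
for ex, ey in events:
    density += np.exp(-((LON - ex) ** 2 + (LAT - ey) ** 2) / (2 * h ** 2))
density /= 2 * np.pi * h ** 2 * len(events)   # normalised smoothed-seismicity density
```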

  10. A density-adaptive SPH method with kernel gradient correction for modeling explosive welding

    NASA Astrophysics Data System (ADS)

    Liu, M. B.; Zhang, Z. L.; Feng, D. L.

    2017-05-01

    Explosive welding involves processes like the detonation of explosive, impact of metal structures and strong fluid-structure interaction, and the whole process of explosive welding has not been well modeled before. In this paper, a novel smoothed particle hydrodynamics (SPH) model is developed to simulate explosive welding. In the SPH model, a kernel gradient correction algorithm is used to achieve better computational accuracy. A density-adapting technique that can effectively treat large density ratios is also proposed. The developed SPH model is first validated by simulating a benchmark problem of one-dimensional TNT detonation and an impact welding problem. The SPH model is then successfully applied to simulate the whole process of explosive welding. It is demonstrated that the presented SPH method can capture typical physics in explosive welding including the explosion wave, welding surface morphology, jet flow and acceleration of the flyer plate. The welding angle obtained from the SPH simulation agrees well with that from a kinematic analysis.

  11. Integral Transform Methods: A Critical Review of Various Kernels

    NASA Astrophysics Data System (ADS)

    Orlandini, Giuseppina; Turro, Francesco

    2017-03-01

    Some general remarks about integral transform approaches to response functions are made. Their advantage for calculating cross sections at energies in the continuum is stressed. In particular we discuss the class of kernels that allow calculations of the transform by matrix diagonalization. A particular set of such kernels, namely the wavelets, is tested in a model study.

  12. Widely Linear Complex-Valued Kernel Methods for Regression

    NASA Astrophysics Data System (ADS)

    Boloix-Tortosa, Rafael; Murillo-Fuentes, Juan Jose; Santos, Irene; Perez-Cruz, Fernando

    2017-10-01

    Usually, complex-valued RKHS are presented as a straightforward application of the real-valued case. In this paper we prove that this procedure yields a limited solution for regression. We show that another kernel, here denoted as the pseudo-kernel, is needed to learn any function in complex-valued fields. Accordingly, we derive a novel RKHS to include it, the widely RKHS (WRKHS). When the pseudo-kernel cancels, WRKHS reduces to the complex-valued RKHS of previous approaches. We address the kernel and pseudo-kernel design, paying attention to the case where the kernel and the pseudo-kernel are complex-valued. In the included experiments we report remarkable improvements in simple scenarios where real and imaginary parts have different similarity relations for given inputs or where real and imaginary parts are correlated. In the context of these novel results we revisit the problem of non-linear channel equalization to show that the WRKHS helps to design more efficient solutions.

  13. Simulating non-Newtonian flows with the moving particle semi-implicit method with an SPH kernel

    NASA Astrophysics Data System (ADS)

    Xiang, Hao; Chen, Bin

    2015-02-01

    The moving particle semi-implicit (MPS) method and smoothed particle hydrodynamics (SPH) are commonly used mesh-free particle methods for free surface flows. The MPS method has superiority in incompressible flow simulation and simple programming. However, its crude kernel function is not accurate enough for the discretization of the divergence of the shear stress tensor, owing to particle inconsistency, when the MPS method is extended to non-Newtonian flows. This paper presents an improved MPS method with an SPH kernel to simulate non-Newtonian flows. To improve the consistency of the partial derivatives, the SPH cubic spline kernel and the Taylor series expansion are combined with the MPS method. This approach is suitable for all non-Newtonian fluids that can be described with τ = μ(|γ|)Δ (where τ is the shear stress tensor, μ is the viscosity, |γ| is the shear rate, and Δ is the strain tensor), e.g., the Casson and Cross fluids. Two examples are simulated, including Newtonian Poiseuille flow and the container filling process of a Cross fluid. The results for Poiseuille flow are more accurate than those of the traditional MPS method, and different filling processes are obtained in good agreement with previous results, which verifies the new algorithm. For the Cross fluid, the jet fracture length can be correlated with We^0.28 Fr^0.78 (where We is the Weber number and Fr is the Froude number).
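
    The standard SPH cubic spline (M4) kernel referred to above can be written down compactly; the sketch below is the generic textbook form with compact support of radius 2h, not code from the paper.

```python
import numpy as np

def cubic_spline_kernel(r, h, dim=2):
    """Standard SPH cubic spline (M4) kernel W(r, h) with support radius 2h."""
    sigma = {1: 2.0 / (3.0 * h),
             2: 10.0 / (7.0 * np.pi * h**2),
             3: 1.0 / (np.pi * h**3)}[dim]          # normalisation per dimension
    q = np.asarray(r, dtype=float) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
         np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

print(cubic_spline_kernel([0.0, 0.5, 1.5, 2.5], h=1.0))  # zero beyond the 2h support
```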

  14. Kernel method based human model for enhancing interactive evolutionary optimization.

    PubMed

    Pei, Yan; Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, the discrete and approximate landscape in the original search space may not provide sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established by considering this paradigm principle. In the feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation results with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly.

  15. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    PubMed Central

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, the discrete and approximate landscape in the original search space may not provide sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established by considering this paradigm principle. In the feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation results with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly. PMID:25879050

  16. Probabilistic seismic hazard assessment of Italy using kernel estimation methods

    NASA Astrophysics Data System (ADS)

    Zuccolo, Elisa; Corigliano, Mirko; Lai, Carlo G.

    2013-07-01

    A representation of seismic hazard is proposed for Italy based on the zone-free approach developed by Woo (BSSA 86(2):353-362, 1996a), which is based on a kernel estimation method governed by concepts of fractal geometry and self-organized seismicity, not requiring the definition of seismogenic zoning. The purpose is to assess the influence of seismogenic zoning on the results obtained for the probabilistic seismic hazard analysis (PSHA) of Italy using the standard Cornell's method. The hazard has been estimated for outcropping rock site conditions in terms of maps and uniform hazard spectra for a selected site, with 10 % probability of exceedance in 50 years. Both spectral acceleration and spectral displacement have been considered as ground motion parameters. Differences in the results of PSHA between the two methods are compared and discussed. The analysis shows that, in areas such as Italy, characterized by a reliable earthquake catalog and in which faults are generally not easily identifiable, a zone-free approach can be considered a valuable tool to address epistemic uncertainty within a logic tree framework.

  17. A Kernel-based Lagrangian method for imperfectly-mixed chemical reactions

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael J.; Pankavich, Stephen; Benson, David A.

    2017-05-01

    Current Lagrangian (particle-tracking) algorithms used to simulate diffusion-reaction equations must employ a certain number of particles to properly emulate the system dynamics, particularly for imperfectly-mixed systems. The number of particles is tied to the statistics of the initial concentration fields of the system at hand. Systems with shorter-range correlation and/or smaller concentration variance require more particles, potentially limiting the computational feasibility of the method. For the well-known problem of bimolecular reaction, we show that using kernel-based, rather than Dirac delta, particles can significantly reduce the required number of particles. We derive the fixed width of a Gaussian kernel for a given reduced number of particles that analytically eliminates the error between kernel and Dirac solutions at any specified time. We also show how to solve for the fixed kernel size by minimizing the squared differences between solutions over any given time interval. Numerical results show that the width of the kernel should be kept below about 12% of the domain size, and that the analytic equations used to derive kernel width suffer significantly from the neglect of higher-order moments. The simulations with a kernel width given by least squares minimization perform better than those made to match at one specific time. A heuristic time-variable kernel size, based on the previous results, performs on par with the least squares fixed kernel size.
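
    A hedged sketch of the basic idea, with invented particle positions and a kernel width of about 3% of the domain (within the ~12% guideline quoted above): each particle carries its mass on a fixed-width Gaussian rather than a Dirac delta, so a smooth concentration field can be evaluated anywhere.

```python
import numpy as np

def kernel_concentration(x_grid, particles, mass, width):
    """Concentration carried by kernel particles: each particle spreads its
    mass with a fixed-width Gaussian instead of a Dirac delta."""
    c = np.zeros_like(x_grid)
    for xp in particles:
        c += mass * np.exp(-0.5 * ((x_grid - xp) / width) ** 2) / (width * np.sqrt(2 * np.pi))
    return c

x = np.linspace(0.0, 1.0, 200)
parts = np.random.default_rng(2).uniform(0.3, 0.7, size=50)          # particle positions
conc = kernel_concentration(x, parts, mass=1.0 / parts.size, width=0.03)  # width ~3% of domain
```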

  18. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    PubMed Central

    Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.

    2013-01-01

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we

  19. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    SciTech Connect

    Huang, Jessie Y.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.; Eklund, David; Childress, Nathan L.

    2013-12-15

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found…

  20. A Comparison of the Kernel Equating Method with Traditional Equating Methods Using SAT[R] Data

    ERIC Educational Resources Information Center

    Liu, Jinghua; Low, Albert C.

    2008-01-01

    This study applied kernel equating (KE) in two scenarios: equating to a very similar population and equating to a very different population, referred to as a distant population, using SAT[R] data. The KE results were compared to the results obtained from analogous traditional equating methods in both scenarios. The results indicate that KE results…

  1. Aflatoxin detection in whole corn kernels using hyperspectral methods

    NASA Astrophysics Data System (ADS)

    Casasent, David; Chen, Xue-Wen

    2004-03-01

    Hyperspectral (HS) data for the inspection of whole corn kernels for aflatoxin is considered. The high-dimensionality of HS data requires feature extraction or selection for good classifier generalization. For fast and inexpensive data collection, only several features (λ responses) can be used. These are obtained by feature selection from the full HS response. A new high dimensionality branch and bound (HDBB) feature selection algorithm is used; it is found to be optimum, fast and very efficient. Initial results indicate that HS data is very promising for aflatoxin detection in whole kernel corn.

  2. Autonomic function assessment in Parkinson's disease patients using the kernel method and entrainment techniques.

    PubMed

    Kamal, Ahmed K

    2007-01-01

    The experimental procedure of lowering and raising a leg while the subject is in the supine position is used to stimulate and entrain the autonomic nervous system of fifteen untreated patients with Parkinson's disease and fifteen age- and sex-matched control subjects. The assessment of autonomic function for each group is achieved using an algorithm based on Volterra kernel estimation. By applying this algorithm and considering the process of lowering and raising a leg as the stimulus input and the heart rate variability (HRV) signal as the output for system identification, a mathematical model is expressed as integral equations. The integral equations are considered and fixed for control subjects and Parkinson's disease patients, so that the identification method reduces to the determination of the values within the integrals, called kernels, resulting in integral equations whose input-output behavior is nearly identical to that of the system in both healthy subjects and Parkinson's disease patients. The model for each group contains a linear part (first-order kernel) and a quadratic part (second-order kernel). A difference equation model was employed to represent the system for both control subjects and patients with Parkinson's disease. The results show significant differences in the first-order kernel (impulse response) and second-order kernel (mesh diagram) between the groups. Using the first-order and second-order kernels, it is possible to assess autonomic function qualitatively and quantitatively in both groups.

  3. A method for anisotropic spatial smoothing of functional magnetic resonance images using distance transformation of a structural image

    NASA Astrophysics Data System (ADS)

    Nam, Haewon; Lee, Dongha; Doo Lee, Jong; Park, Hae-Jeong

    2011-08-01

    Spatial smoothing using isotropic Gaussian kernels to remove noise reduces spatial resolution and increases the partial volume effect of functional magnetic resonance images (fMRI), thereby reducing localization power. To minimize these limitations, we propose a novel anisotropic smoothing method for fMRI data. To extract an anisotropic tensor for each voxel of the functional data, we derived an intensity gradient using the distance transformation of the segmented gray matter of the fMRI-coregistered T1-weighted image. The intensity gradient was then used to determine the anisotropic smoothing kernel at each voxel of the fMRI data. Performance evaluations on both real and simulated data showed that the proposed method had 10% higher statistical power and about 20% better gray matter localization compared to isotropic smoothing, as well as robustness to registration errors (up to 4 mm translations and 4° rotations) between T1 structural images and fMRI data. The proposed method also showed higher performance than anisotropic smoothing with diffusion gradients derived from the fMRI intensity data.

  4. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    DTIC Science & Technology

    2006-01-01

    successfully used by the machine learning community for pattern recognition and image denoising [14]. A Gaussian kernel was used by Cremers et al. [8] for ... matrix M, where φ_i ∈ R^(Nd). Using Singular Value Decomposition (SVD), the covariance matrix (1/n)MM^T is decomposed as UΣU^T = (1/n)MM^T (1), where U is a

  5. Linear and kernel methods for multivariate change detection

    NASA Astrophysics Data System (ADS)

    Canty, Morton J.; Nielsen, Allan A.

    2012-01-01

    The iteratively reweighted multivariate alteration detection (IR-MAD) algorithm may be used both for unsupervised change detection in multi- and hyperspectral remote sensing imagery and for automatic radiometric normalization of multitemporal image sequences. Principal components analysis (PCA), as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and that further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed to run on massively parallel CUDA-enabled graphics processors, when available, giving an order of magnitude enhancement in computational speed. The software is available from the authors' Web sites.
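
    The snippet below is only a loose illustration of applying nonlinear (kernel) PCA to a change variable, here a plain bitemporal image difference rather than IR-MAD, using scikit-learn; the synthetic "pixels", the RBF kernel, and the gamma value are all assumptions for the example.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(5)
t1 = rng.normal(size=(1000, 4))                       # 4-band pixel vectors at time 1
t2 = t1 + rng.normal(scale=0.05, size=t1.shape)       # time 2: mostly no-change noise
t2[:50] += 2.0                                        # 50 genuinely changed pixels

diff = t2 - t1                                        # simple change variable (not IR-MAD)
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5)
scores = kpca.fit_transform(diff)                     # nonlinear components of the change signal
```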

  6. LoCoH: nonparameteric kernel methods for constructing home ranges and utilization distributions.

    PubMed

    Getz, Wayne M; Fortmann-Roe, Scott; Cross, Paul C; Lyons, Andrew J; Ryan, Sadie J; Wilmers, Christopher C

    2007-02-14

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: "fixed sphere-of-influence," or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an "adaptive sphere-of-influence," or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original "fixed-number-of-points," or k-LoCoH (all kernels constructed from k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).
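
    A bare-bones sketch of the k-LoCoH construction step (local hulls from each point and its k-1 nearest neighbours, ordered by area) is shown below; it omits the subsequent union of hulls into isopleths, and the point cloud and k are arbitrary.

```python
import numpy as np
from scipy.spatial import ConvexHull, cKDTree

def k_locoh_hulls(points, k=5):
    """k-LoCoH sketch: one local convex hull per point, built from that point
    and its k-1 nearest neighbours; hulls would then be unioned by area rank."""
    tree = cKDTree(points)
    hulls = []
    for p in points:
        _, idx = tree.query(p, k=k)        # k nearest points, including p itself
        hulls.append(ConvexHull(points[idx]))
    return sorted(hulls, key=lambda h: h.volume)   # in 2-D, .volume is the hull area

pts = np.random.default_rng(3).uniform(size=(100, 2))   # toy relocation data
hulls = k_locoh_hulls(pts, k=6)
print(f"smallest local-hull area: {hulls[0].volume:.4f}")
```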

  7. LoCoH: Non-parameteric kernel methods for constructing home ranges and utilization distributions

    USGS Publications Warehouse

    Getz, Wayne M.; Fortmann-Roe, Scott; Cross, Paul C.; Lyons, Andrew J.; Ryan, Sadie J.; Wilmers, Christopher C.

    2007-01-01

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: “fixed sphere-of-influence,” or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an “adaptive sphere-of-influence,” or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original “fixed-number-of-points,” or k-LoCoH (all kernels constructed from k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).

  8. LoCoH: Nonparameteric Kernel Methods for Constructing Home Ranges and Utilization Distributions

    PubMed Central

    Getz, Wayne M.; Fortmann-Roe, Scott; Wilmers, Christopher C.

    2007-01-01

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: “fixed sphere-of-influence,” or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an “adaptive sphere-of-influence,” or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original “fixed-number-of-points,” or k-LoCoH (all kernels constructed from k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu). PMID:17299587

  9. Kernel density estimator methods for Monte Carlo radiation transport

    NASA Astrophysics Data System (ADS)

    Banerjee, Kaushik

    In this dissertation, the kernel density estimator (KDE), a nonparametric probability density estimator, is studied and used to represent global Monte Carlo (MC) tallies. KDE is also employed to remove the singularities from two important Monte Carlo tallies, namely the point detector and surface crossing flux tallies. Finally, KDE is applied to accelerate the Monte Carlo fission source iteration for criticality problems. In conventional MC calculations, histograms that divide the phase space into multiple bins are used to represent global tallies. Partitioning the phase space into bins can add significant overhead to the MC simulation, and the histogram provides only a first-order approximation to the underlying distribution. The KDE method is attractive because it can estimate MC tallies at any location within the required domain without any particular bin structure. Post-processing of the KDE tallies is sufficient to extract detailed, higher-order tally information on an arbitrary grid. The quantitative and numerical convergence properties of KDE tallies are also investigated, and they are shown to be superior to conventional histograms as well as to the functional expansion tally developed by Griesheimer. Monte Carlo point detector and surface crossing flux tallies are two widely used tallies, but they suffer from an unbounded variance. As a result, the central limit theorem cannot be used for these tallies to estimate confidence intervals. By construction, KDE tallies can be directly used to estimate flux at a point, but the variance of this point estimate does not converge as 1/N, which is not unexpected for a point quantity. However, an improved approach is to modify both the point detector and surface crossing flux tallies directly by using KDE within a variance reduction approach, taking advantage of the fact that KDE estimates the underlying probability density function. This methodology is demonstrated by several numerical examples and demonstrates that…
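
    As a small illustration of the bin-free idea, the sketch below contrasts a Gaussian KDE tally with a histogram tally for synthetic samples; it uses SciPy's generic gaussian_kde and is not the dissertation's transport-specific estimator.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
samples = rng.exponential(scale=2.0, size=5000)        # stand-in for MC-sampled events

kde = gaussian_kde(samples)                            # bin-free density estimate
x = np.linspace(0.0, 10.0, 200)
kde_tally = kde(x)                                     # evaluable anywhere, no bin structure

hist_tally, edges = np.histogram(samples, bins=20, range=(0.0, 10.0), density=True)  # binned comparison
```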

  10. Development of a single kernel analysis method for detection of 2-acetyl-1-pyrroline in aromatic rice germplasm

    USDA-ARS?s Scientific Manuscript database

    Solid-phase microextraction (SPME) in conjunction with GC/MS was used to distinguish non-aromatic rice (Oryza sativa, L.) kernels from aromatic rice kernels. In this method, single kernels along with 10 µl of 0.1 ng 2,4,6-Trimethylpyridine (TMP) were placed in sealed vials and heated to 80 °C for 18...

  11. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    NASA Astrophysics Data System (ADS)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

    Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames used in dynamic imaging. The kernel method for image reconstruction has been developed to improve the reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study using a physical phantom scan with synthetic FDG tracer kinetics has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.

  12. A Comprehensive Benchmark of Kernel Methods to Extract Protein–Protein Interactions from Literature

    PubMed Central

    Tikk, Domonkos; Thomas, Philippe; Palaga, Peter; Hakenberg, Jörg; Leser, Ulf

    2010-01-01

    The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein–protein interactions (PPIs) reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed: convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study shows that three

  13. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    PubMed

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1].
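
    To make the positive semidefiniteness requirement concrete, the short sketch below builds a simple linear (genomic-relationship-style) kernel from standardized genotypes and checks its eigenvalues; the simulated genotype matrix is an assumption for illustration, not part of the review.

        # Hedged sketch: a linear kernel from standardized genotypes is positive
        # semidefinite by construction; we verify this numerically via its eigenvalues.
        import numpy as np

        rng = np.random.default_rng(2)
        G = rng.binomial(2, 0.3, size=(100, 500)).astype(float)   # subjects x SNPs (hypothetical)
        Z = (G - G.mean(axis=0)) / (G.std(axis=0) + 1e-12)        # standardize each SNP
        K = Z @ Z.T / Z.shape[1]                                  # similarity: larger = more similar

        eigvals = np.linalg.eigvalsh(K)
        print("smallest eigenvalue:", eigvals.min())              # non-negative up to round-off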

  14. Community structure discovery method based on the Gaussian kernel similarity matrix

    NASA Astrophysics Data System (ADS)

    Guo, Chonghui; Zhao, Haipeng

    2012-03-01

    Community structure discovery in complex networks is a popular issue, and overlapping community structure discovery has become one of the hot spots in academic research. Based on the Gaussian kernel similarity matrix and spectral bisection, this paper proposes a new community structure discovery method. First, by adjusting the Gaussian kernel parameter to change the scale of similarity, we can find the corresponding non-overlapping community structure at the parameter value where the modularity is relatively largest. Second, because changes in the Gaussian kernel parameter cause unstable nodes to jump between communities, a slight modification of the non-overlapping community discovery method allows the overlapping community nodes to be found. Finally, synthetic data and the karate club and political books datasets are used to test the proposed method, comparing it with some other community discovery methods, to demonstrate its feasibility and effectiveness.
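
    A minimal sketch of the two ingredients named above, a Gaussian kernel similarity matrix and spectral bisection via the Fiedler vector, is given below; the toy point data, the kernel parameter, and the use of the unnormalized graph Laplacian are illustrative assumptions rather than the authors' exact algorithm.

        # Hedged sketch: Gaussian kernel similarity matrix + spectral bisection.
        import numpy as np

        rng = np.random.default_rng(3)
        x = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(3, 0.5, (30, 2))])  # two toy groups

        sigma = 1.0                                                  # Gaussian kernel parameter (scale)
        d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma ** 2))
        np.fill_diagonal(W, 0.0)                                     # treat W as a weighted adjacency

        L = np.diag(W.sum(1)) - W                                    # unnormalized graph Laplacian
        vals, vecs = np.linalg.eigh(L)
        labels = (vecs[:, 1] > 0).astype(int)                        # sign of Fiedler vector -> bisection

        # Weighted modularity of the bisection (Newman's Q for weighted graphs)
        m2 = W.sum()
        k = W.sum(1)
        Q = ((W - np.outer(k, k) / m2) * (labels[:, None] == labels[None, :])).sum() / m2
        print("modularity of the bisection:", round(Q, 3))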

  15. A Non-smooth Newton Method for Multibody Dynamics

    SciTech Connect

    Erleben, K.; Ortiz, R.

    2008-09-01

    In this paper we deal with the simulation of rigid bodies. Rigid body dynamics has become very important for simulating rigid body motion in interactive applications, such as computer games or virtual reality. We present a novel way of computing contact forces using a Newton method. The contact problem is reformulated as a system of non-linear and non-smooth equations, and we solve this system using a non-smooth version of Newton's method. One of the main contributions of this paper is the reformulation of the complementarity problems, used to model impacts, as a system of equations that can be solved using traditional methods.

  16. A Fast Multiple-Kernel Method with Applications to Detect Gene-Environment Interaction

    PubMed Central

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M.; Worrall, Bradford B.; Williams, Stephen R.; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-01-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. While most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multi-kernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multi-factor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the EM algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous “fastKM” algorithm for multi-kernel analysis that is based on a low-rank approximation to the nuisance-effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. PMID:26139508

  17. Density-weighted Nyström method for computing large kernel eigensystems.

    PubMed

    Zhang, Kai; Kwok, James T

    2009-01-01

    The Nyström method is a well-known sampling-based technique for approximating the eigensystem of large kernel matrices. However, the chosen samples in the Nyström method are all assumed to be of equal importance, which deviates from the integral equation that defines the kernel eigenfunctions. Motivated by this observation, we extend the Nyström method to a more general, density-weighted version. We show that by introducing the probability density function as a natural weighting scheme, the approximation of the eigensystem can be greatly improved. An efficient algorithm is proposed to enforce such weighting in practice, which has the same complexity as the original Nyström method and hence is notably cheaper than several other alternatives. Experiments on kernel principal component analysis, spectral clustering, and image segmentation demonstrate the encouraging performance of our algorithm.
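
    For context, the sketch below shows the standard (unweighted) Nyström approximation that the paper generalizes: m landmark columns of an RBF kernel matrix are used to reconstruct the full matrix as K ≈ C W⁺ Cᵀ. The data, the uniform landmark sampling, and the error metric are illustrative assumptions; the density weighting proposed in the paper is not implemented here.

        # Hedged sketch of the plain Nystrom approximation K ~= C W^+ C^T,
        # where C holds m sampled columns of K and W is the corresponding m x m block.
        import numpy as np

        rng = np.random.default_rng(4)
        X = rng.normal(size=(1000, 5))
        gamma = 0.5

        def rbf(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        K = rbf(X, X)                                    # exact kernel matrix (for comparison only)
        m = 100
        idx = rng.choice(len(X), size=m, replace=False)  # uniformly sampled landmarks
        C = rbf(X, X[idx])                               # n x m block
        W = C[idx, :]                                    # m x m block
        K_nys = C @ np.linalg.pinv(W) @ C.T

        print("relative Frobenius error:",
              np.linalg.norm(K - K_nys) / np.linalg.norm(K))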

  18. A spectral-spatial kernel-based method for hyperspectral imagery classification

    NASA Astrophysics Data System (ADS)

    Li, Li; Ge, Hongwei; Gao, Jianqiang

    2017-02-01

    Spectral-based classification methods have gained increasing attention in hyperspectral imagery classification. Nevertheless, spectral information alone cannot fully represent the inherent spatial distribution of the imagery. In this paper, a spectral-spatial kernel-based method for hyperspectral imagery classification is proposed. Firstly, the spatial feature was extracted by using area median filtering (AMF). Secondly, the result of the AMF was used to construct spatial feature patches according to different window sizes. Finally, using the kernel technique, the spectral feature and the spatial feature were jointly used for classification through a support vector machine (SVM) formulation. For hyperspectral imagery classification, the proposed method is therefore called the spectral-spatial kernel-based support vector machine (SSF-SVM). To evaluate the proposed method, experiments are performed on three hyperspectral images. The experimental results show that an improvement is possible with the proposed technique in most of the real-world classification problems.

  19. COMPARISON OF SPARSE CODING AND KERNEL METHODS FOR HISTOPATHOLOGICAL CLASSIFICATION OF GLIOBASTOMA MULTIFORME

    PubMed Central

    Han, Ju; Chang, Hang; Loss, Leandro; Zhang, Kai; Baehner, Fredrick L.; Gray, Joe W.; Spellman, Paul; Parvin, Bahram

    2012-01-01

    This paper compares the performance of redundant representation and sparse coding against classical kernel methods for classifying histological sections. Sparse coding has been proven to be an effective technique for restoration and has recently been extended to classification. The main issue with classification of histology sections is inherent heterogeneity as a result of technical and biological variations. Technical variations originate from sample preparation, fixation, and staining across multiple laboratories, whereas biological variations originate from tissue content. Image patches are represented with invariant features at local and global scales, where local refers to responses measured with Laplacian of Gaussians and global refers to measurements in the color space. Experiments are designed to learn dictionaries, through sparse coding, and to train classifiers through kernel methods on normal, necrotic, apoptotic, and tumor regions with characteristics of high cellularity. Two different kernel methods, support vector machine (SVM) and kernel discriminant analysis (KDA), are used for comparative analysis. Preliminary investigation on histological samples of Glioblastoma multiforme (GBM) indicates that kernel methods perform as well as, if not better than, sparse coding with redundant representation. PMID:23243485

  20. Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods

    NASA Astrophysics Data System (ADS)

    Liu, Qinya; Tromp, Jeroen

    2008-07-01

    We determine adjoint equations and Fréchet kernels for global seismic wave propagation based upon a Lagrange multiplier method. We start from the equations of motion for a rotating, self-gravitating earth model initially in hydrostatic equilibrium, and derive the corresponding adjoint equations that involve motions on an earth model that rotates in the opposite direction. Variations in the misfit function χ may then be expressed as δχ = ∫V Km δlnm d³x + ∫Σ Kd δlnd d²x + ∫ΣFS K∇d · ∇Σδlnd d²x, where δlnm = δm/m denotes relative model perturbations in the volume V, δlnd denotes relative topographic variations on solid-solid or fluid-solid boundaries Σ, and ∇Σδlnd denotes surface gradients in relative topographic variations on fluid-solid boundaries ΣFS. The 3-D Fréchet kernel Km determines the sensitivity to model perturbations δlnm, and the 2-D kernels Kd and K∇d determine the sensitivity to topographic variations δlnd. We also demonstrate how anelasticity may be incorporated within the framework of adjoint methods. Finite-frequency sensitivity kernels are calculated by simultaneously computing the adjoint wavefield forward in time and reconstructing the regular wavefield backward in time. Both the forward and adjoint simulations are based upon a spectral-element method. We apply the adjoint technique to generate finite-frequency traveltime kernels for global seismic phases (P, Pdiff, PKP, S, SKS, depth phases, surface-reflected phases, surface waves, etc.) in both 1-D and 3-D earth models. For 1-D models these adjoint-generated kernels generally agree well with results obtained from ray-based methods. However, adjoint methods do not have the same theoretical limitations as ray-based methods, and can produce sensitivity kernels for any given phase in any 3-D earth model. The Fréchet kernels presented in this paper illustrate the sensitivity of seismic observations to structural parameters and topography on internal discontinuities. These kernels form the basis of future 3-D tomographic inversions.

  1. Postprocessing Fourier spectral methods: The case of smooth solutions

    SciTech Connect

    Garcia-Archilla, B.; Novo, J.; Titi, E.S.

    1998-11-01

    A postprocessing technique to improve the accuracy of Galerkin methods, when applied to dissipative partial differential equations, is examined in the particular case of smooth solutions. Pseudospectral methods are shown to perform poorly. This performance is analyzed and a refined postprocessing technique is proposed.

  2. Early discriminant method of infected kernel based on the erosion effects of laser ultrasonics

    NASA Astrophysics Data System (ADS)

    Fan, Chao

    2015-07-01

    To discriminate infected wheat kernels as early as possible, a new detection method for hidden insects, especially in their egg and larval stages, is put forward in this paper based on the erosion effect of laser ultrasonics. The surface of the grain is exposed to a pulsed laser, whose energy is absorbed and excites an ultrasonic wave, and the infected kernel can be recognized by appropriate signal analysis. Firstly, the detection principle was derived from the classical wave equation and the experimental platform was established. Then, the detected ultrasonic signal was processed in both the time domain and the frequency domain using the FFT and DCT, and six significant features were selected as the characteristic parameters of the signal by the method of stepwise discriminant analysis. Finally, a BP neural network was designed using these six parameters as input to classify the infected kernels from the normal ones. Numerous experiments were performed using twenty wheat varieties; the results showed that infected kernels can be recognized effectively, with false negative and false positive errors of 12% and 9%, respectively, indicating that the discriminant method for infected kernels based on the erosion effect of laser ultrasonics is feasible.

  3. 3D MR image denoising using rough set and kernel PCA method.

    PubMed

    Phophalia, Ashish; Mitra, Suman K

    2017-02-01

    In this paper, we have presented a two-stage method, using kernel principal component analysis (KPCA) and rough set theory (RST), for denoising volumetric MRI data. An RST-based clustering technique has been used for voxel-based processing. The method groups similar voxels (3D cubes) using class and edge information derived from the noisy input. Each cluster thus formed is then represented by a basis vector. These vectors are projected into kernel space and PCA is performed in the feature space. This work is motivated by the idea that, under Rician noise, MRI data may be non-linear, and that kernel mapping helps to define a linear separator between these clusters/basis vectors, which is then used for image denoising. We have further investigated various kernels for Rician noise at different noise levels. The best kernel is then selected on the basis of performance with respect to PSNR and structural similarity (SSIM) measures. The work has been compared with state-of-the-art methods under various measures for synthetic and real databases. Copyright © 2016 Elsevier Inc. All rights reserved.
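
    As a hedged, simplified illustration of the kernel-PCA step only (not the paper's two-stage RST + KPCA pipeline), the sketch below denoises noisy 2-D samples with scikit-learn's KernelPCA and its learned pre-image (inverse transform); the toy data, the RBF kernel, and the parameter values are assumptions.

        # Hedged sketch: denoise samples by projecting onto leading kernel principal
        # components and mapping back to input space (pre-image problem).
        import numpy as np
        from sklearn.decomposition import KernelPCA

        rng = np.random.default_rng(5)
        t = rng.uniform(0, 2 * np.pi, 500)
        clean = np.c_[np.cos(t), np.sin(t)]                 # points on a circle (non-linear structure)
        noisy = clean + rng.normal(scale=0.15, size=clean.shape)

        kpca = KernelPCA(n_components=4, kernel="rbf", gamma=2.0,
                         fit_inverse_transform=True, alpha=0.1)
        denoised = kpca.inverse_transform(kpca.fit_transform(noisy))

        # distance of each point from the unit circle, before and after denoising
        print("mean deviation before:", np.abs(np.linalg.norm(noisy, axis=1) - 1).mean())
        print("mean deviation after: ", np.abs(np.linalg.norm(denoised, axis=1) - 1).mean())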

  4. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    PubMed

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses some power compared with the best kernel for a particular scenario but has much greater power than poor kernel choices.

  5. An efficient Born normal mode method to compute sensitivity kernels and synthetic seismograms in the Earth

    NASA Astrophysics Data System (ADS)

    Capdeville, Y.

    2005-11-01

    We present an alternative to the classical mode coupling scheme often used in global seismology to compute synthetic seismograms in laterally heterogeneous earth models and Fréchet derivatives for tomographic inverse problems with the normal-mode first-order Born approximation. We start from the first-order Born solution in the frequency domain and we use a numerical scheme for the volume integration, which means that we compute the effect of a finite number of scattering points and sum them with the appropriate integration weights. For each scattering point, `source to scattering point' and `scattering point to receivers' expressions are separated before applying a Fourier transform to return to the time domain. Doing so, the perturbed displacement is obtained, for each scattering point, as the convolution of a forward wavefield from the source to the scattering point with a backward wavefield from the scattering point to the receiver. For one scattering point and for a given number of time steps, the numerical cost of such a scheme grows as (number of receivers + number of sources) × (corner frequency)^2, to be compared with (number of receivers × number of sources) × (corner frequency)^4 when the classical normal mode coupling algorithm is used. Another interesting point is that, when used for Fréchet kernels, the computing cost is (almost) independent of the number of parameters used for the inversion. This algorithm is similar to the one obtained when solving the adjoint problem. Validation tests against the spectral element method solution, both for Fréchet derivatives and as a synthetic seismogram tool, show good agreement. In the latter case, we show that non-linearity can be significant even at long periods and when using existing smooth global tomographic models.

  6. Kernel methods for HyMap imagery knowledge discovery

    NASA Astrophysics Data System (ADS)

    Camps-Valls, Gustavo; Gomez-Chova, Luis; Calpe-Maravilla, Javier; Soria-Olivas, Emilio; Martin-Guerrero, Jose D.; Moreno, Jose

    2004-02-01

    In this paper, we propose a kernel-based approach for hyperspectral knowledge discovery, which is defined as a process that involves three steps: pre-processing, modeling and analysis of the classifier. Firstly, we select the most representative bands analyzing the surrogate and main splits of a Classification And Regression Trees (CART) approach. This yields three datasets with different reduced input dimensionality (6, 3 and 2 bands, respectively) along with the original one (128 bands). Secondly, we develop several crop cover classifiers for each of them. We use Support Vector Machines (SVM) and analyze its performance in terms of efficiency and robustness, as compared to multilayer perceptrons (MLP) and radial basis functions (RBF) neural networks. Suitability to real-time working conditions, whenever a preprocessing stage is not possible, is evaluated by considering models with and without the CART-based feature selection stage. Finally, we analyze the support vectors distribution in the input space and through Principal Component Analysis (PCA) in order to gain knowledge about the problem. Several conclusions are drawn: (1) SVM yield better outcomes than neural networks; (2) training neural models is unfeasible when working with high dimensional spaces; (3) SVM perform similarly in the four classification scenarios, which indicates that noisy bands are successfully detected and (4) relevant bands for the classification are identified.

  7. Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method

    NASA Astrophysics Data System (ADS)

    Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing

    2017-05-01

    Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach's feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method.

  8. A Fourier-series-based kernel-independent fast multipole method

    SciTech Connect

    Zhang Bo; Huang Jingfang; Pitsianis, Nikos P.; Sun Xiaobai

    2011-07-01

    We present in this paper a new kernel-independent fast multipole method (FMM), named FKI-FMM, for pairwise particle interactions with translation-invariant kernel functions. FKI-FMM creates, using numerical techniques, sufficiently accurate and compressive representations of a given kernel function over multi-scale interaction regions in the form of a truncated Fourier series. It also provides economical operators for the multipole-to-multipole, multipole-to-local, and local-to-local translations that are typical and essential in FMM algorithms. The multipole-to-local translation operator, in particular, is readily diagonal and does not dominate the arithmetic operations. FKI-FMM provides an alternative and competitive option, among other kernel-independent FMM algorithms, for an efficient application of the FMM, especially for applications where the kernel function consists of multi-physics and multi-scale components such as those arising in recent studies of biological systems. We present the complexity analysis and demonstrate with experimental results the accuracy and efficiency of FKI-FMM.

  9. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
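
    As a hedged, minimal illustration of the filtering-and-smoothing setting described above (a standard Kalman filter and Rauch-Tung-Striebel smoother, not Butler's adaptive likelihood method), the sketch below processes a scalar local-level model; the model, noise variances, and data are assumptions.

        # Hedged sketch: scalar local-level model x_t = x_{t-1} + w_t, y_t = x_t + v_t,
        # filtered forward with a Kalman filter, then smoothed backward (RTS smoother).
        import numpy as np

        rng = np.random.default_rng(6)
        T, Q, R = 200, 0.05, 1.0                       # length, process var, measurement var
        x = np.cumsum(rng.normal(scale=np.sqrt(Q), size=T))
        y = x + rng.normal(scale=np.sqrt(R), size=T)

        m, P = 0.0, 10.0                               # prior mean and variance
        mf, Pf, mp, Pp = np.zeros(T), np.zeros(T), np.zeros(T), np.zeros(T)
        for t in range(T):                             # forward (filtering) pass
            mp[t], Pp[t] = m, P + Q                    # predict
            K = Pp[t] / (Pp[t] + R)                    # Kalman gain
            m = mp[t] + K * (y[t] - mp[t])             # update
            P = (1 - K) * Pp[t]
            mf[t], Pf[t] = m, P

        ms = mf.copy()                                 # backward (smoothing) pass
        for t in range(T - 2, -1, -1):
            C = Pf[t] / Pp[t + 1]
            ms[t] = mf[t] + C * (ms[t + 1] - mp[t + 1])

        print("filter RMSE:  ", np.sqrt(np.mean((mf - x) ** 2)))
        print("smoother RMSE:", np.sqrt(np.mean((ms - x) ** 2)))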

  10. A 3-dimensional Bergman kernel method with applications to rectangular domains

    NASA Astrophysics Data System (ADS)

    Bock, S.; Falcao, M. I.; Gurlebeck, K.; Malonek, H.

    2006-05-01

    In this paper we revisit the so-called Bergman kernel method--BKM--for solving conformal mapping problems and propose a generalized BKM-approach to extend the theory to three-dimensional mapping problems. A special software package for quaternions was developed for the numerical experiments.

  11. Standard Errors of the Kernel Equating Methods under the Common-Item Design.

    ERIC Educational Resources Information Center

    Liou, Michelle; Cheng, Philip E.; Johnson, Eugene G.

    1997-01-01

    Derived simplified equations to compute the standard error of the frequency estimation method for equating score distributions that are continuized using a uniform or Gaussian kernel function. Results from two empirical studies indicate that these equations work reasonably well for moderate size samples. (SLD)

  12. A simple and fast method for computing the relativistic Compton Scattering Kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Kershaw, David S.; Prasad, Manoj K.; Beason, J. Douglas

    1986-01-01

    The Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution is analytically reduced to a single integral, which can then be rapidly evaluated in a variety of ways. A particularly fast method for numerically computing this single integral is presented. This is, to the authors' knowledge, the first correct computation of the Compton scattering kernel.

  13. Standard Errors of the Kernel Equating Methods under the Common-Item Design.

    ERIC Educational Resources Information Center

    Liou, Michelle; And Others

    This research derives simplified formulas for computing the standard error of the frequency estimation method for equating score distributions that are continuized using a uniform or Gaussian kernel function (P. W. Holland, B. F. King, and D. T. Thayer, 1989; Holland and Thayer, 1987). The simplified formulas are applicable to equating both the…

  14. A graphical approach to optimizing variable-kernel smoothing parameters for improved deformable registration of CT and cone beam CT images

    NASA Astrophysics Data System (ADS)

    Hart, Vern; Burrow, Damon; Li, X. Allen

    2017-08-01

    A systematic method is presented for determining optimal parameters in variable-kernel deformable image registration of cone beam CT and CT images, in order to improve accuracy and convergence for potential use in online adaptive radiotherapy. Assessed conditions included the noise constant (symmetric force demons), the kernel reduction rate, the kernel reduction percentage, and the kernel adjustment criteria. Four such parameters were tested in conjunction with reductions of 5, 10, 15, 20, 30, and 40%. Noise constants ranged from 1.0 to 1.9 for pelvic images in ten prostate cancer patients. A total of 516 tests were performed and assessed using the structural similarity index. Registration accuracy was plotted as a function of iteration number and a least-squares regression line was calculated, which implied an average improvement of 0.0236% per iteration. This baseline was used to determine whether a given set of parameters under- or over-performed. The most accurate parameters within this range were applied to contoured images. The mean Dice similarity coefficient was calculated for bladder, prostate, and rectum, with mean values of 98.26%, 97.58%, and 96.73%, respectively, corresponding to improvements of 2.3%, 9.8%, and 1.2% over previously reported values for the same organ contours. This graphical approach to registration analysis could aid in determining optimal parameters for Demons-based algorithms. It also establishes expectation values for convergence rates and could serve as an indicator of non-physical warping, which often occurred in cases deviating >0.6% from the regression line.

  15. Modelling of the control of heart rate by breathing using a kernel method.

    PubMed

    Ahmed, A K; Fakhouri, S Y; Harness, J B; Mearns, A J

    1986-03-07

    The process relating breathing (input) to heart rate (output) in man is considered for system identification through the input-output relationship, using a mathematical model expressed as integral equations. The integral equation is fixed so that the identification method reduces to the determination of the values within the integral, called kernels, resulting in an integral equation whose input-output behaviour is nearly identical to that of the system. This paper uses an algorithm for kernel identification of the Volterra series which greatly reduces the computational burden and eliminates the restriction of using white Gaussian input as a test signal. A second-order model is the most appropriate for a good estimate of the system dynamics. The model contains the linear part (first-order kernel) and the quadratic part (second-order kernel) in parallel, and so allows for the possibility of separation between the linear and non-linear elements of the process. The response of the linear term exhibits the oscillatory and underdamped nature of the system. The application of breathing as input to the system produces an oscillatory term, which may be attributed to the sinus node of the heart being sensitive to the modulating signal, the breathing wave. The negative on-diagonal component seems to cause the dynamic asymmetry of the total response of the system, which opposes the oscillatory nature of the first kernel, related to the restraining force present in the respiratory heart rate system. The presence of the positive off-diagonal component of the second-order kernel of respiratory control of heart rate is an indication of an escape-like phenomenon in the system.

  16. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    USGS Publications Warehouse

    Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.

    2014-01-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U.S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km, as well as adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing method is to reduce smoothing distances where seismicity rates are high, causing locally increased rates, and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood
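
    The sketch below illustrates the adaptive-bandwidth idea described above (each epicenter smoothed with a Gaussian whose width equals the distance to its nth nearest neighbor), evaluated on a coarse grid alongside a fixed-bandwidth alternative; the synthetic catalog, n-value, grid, and normalization are assumptions for illustration rather than the ASHM implementation.

        # Hedged sketch: fixed vs adaptive Gaussian smoothing of epicenters.
        # Adaptive bandwidth per event = distance to its n-th nearest neighbor.
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(7)
        eq = np.vstack([rng.normal(0, 5, (400, 2)),        # a dense cluster of epicenters (km)
                        rng.uniform(-100, 100, (100, 2))]) # sparse background seismicity

        n_value = 5
        dist, _ = cKDTree(eq).query(eq, k=n_value + 1)     # column 0 is the event itself
        h_adapt = dist[:, n_value]                         # adaptive bandwidths (km)
        h_fixed = np.full(len(eq), 25.0)                   # fixed 25 km bandwidth

        gx, gy = np.meshgrid(np.linspace(-100, 100, 81), np.linspace(-100, 100, 81))
        grid = np.c_[gx.ravel(), gy.ravel()]

        def smoothed_rate(h):
            d2 = ((grid[:, None, :] - eq[None, :, :]) ** 2).sum(-1)
            w = np.exp(-d2 / (2 * h[None, :] ** 2)) / (2 * np.pi * h[None, :] ** 2)
            return w.sum(axis=1)                           # events per km^2 (unnormalized rate)

        rate_fixed = smoothed_rate(h_fixed)
        rate_adapt = smoothed_rate(h_adapt)
        print("peak rate, fixed vs adaptive:", rate_fixed.max(), rate_adapt.max())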

  17. Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.

    PubMed

    Echinaka, Yuki; Ozeki, Yukiyasu

    2016-10-01

    The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Based on this approach, the bootstrap method is introduced and a numerical criterion for discriminating the transition type is proposed.

  18. Smoothed Profile Method to Simulate Colloidal Particles in Complex Fluids

    NASA Astrophysics Data System (ADS)

    Yamamoto, Ryoichi; Nakayama, Yasuya; Kim, Kang

    A new direct numerical simulation scheme, called the "Smoothed Profile (SP) method," is presented. The SP method, as a direct numerical simulation of particulate flow, provides a way to couple continuum fluid dynamics with rigid-body dynamics through smoothed profiles of the colloidal particles. Our formulation includes extensions to colloids in multicomponent solvents, such as charged colloids in electrolyte solutions. This method enables us to compute the time evolution of colloidal particles, ions, and host fluids simultaneously by solving the Newton, advection-diffusion, and Navier-Stokes equations, so that the electro-hydrodynamic couplings can be fully taken into account. The electrophoretic mobilities of charged spherical particles are calculated in several situations. Comparisons with approximation theories show quantitative agreement for dilute dispersions without any empirical parameters.

  19. The Continuized Log-Linear Method: An Alternative to the Kernel Method of Continuization in Test Equating

    ERIC Educational Resources Information Center

    Wang, Tianyou

    2008-01-01

    Von Davier, Holland, and Thayer (2004) laid out a five-step framework of test equating that can be applied to various data collection designs and equating methods. In the continuization step, they presented an adjusted Gaussian kernel method that preserves the first two moments. This article proposes an alternative continuization method that…

  20. A new axial smoothing method based on elastic mapping

    SciTech Connect

    Yang, J.; Huang, S.C.; Czernin, J.; Wolfenden, P.; Dahlbom, M.; Hoh, C.K.; Phelps, M.E.; Lin, K.P.

    1996-12-01

    New positron emission tomography (PET) scanners have higher axial and in-plane spatial resolutions, but at the expense of reduced per-plane sensitivity, which prevents the higher resolution from being fully realized. Normally, Gaussian-weighted interplane axial smoothing is used to reduce noise. In this study, the authors developed a new algorithm that first elastically maps adjacent planes and then smooths the mapped images axially to reduce the image noise level. Compared to those obtained by the conventional axial-directional smoothing method, the images produced by the new method have an improved signal-to-noise ratio. To quantify the signal-to-noise improvement, both simulated and real cardiac PET images were studied. Various Hanning reconstruction filters (cutoff frequency = 0.5, 0.7, and 1.0× the Nyquist frequency) and a ramp filter were tested on simulated images. Effective in-plane resolution was measured by the effective global Gaussian resolution (EGGR) and noise reduction was evaluated by the cross-correlation coefficient. Results showed that the new method was robust to various noise levels and provided larger noise reduction or better image feature preservation (i.e., smaller EGGR) than the conventional method.
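
    For reference, the conventional Gaussian-weighted interplane (axial) smoothing that the new elastic-mapping method is compared against can be sketched in a few lines; the synthetic image stack and the sigma value are assumptions.

        # Hedged sketch: conventional Gaussian-weighted smoothing across image planes
        # (axis 0 = axial direction), the baseline the elastic-mapping method improves on.
        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        rng = np.random.default_rng(8)
        planes = rng.poisson(lam=20.0, size=(47, 128, 128)).astype(float)  # hypothetical PET volume

        axially_smoothed = gaussian_filter1d(planes, sigma=1.0, axis=0)    # smooth only across planes
        print("voxel std before/after:", planes.std(), axially_smoothed.std())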

  1. Regionally Smoothed Meta-Analysis Methods for GWAS Datasets.

    PubMed

    Begum, Ferdouse; Sharker, Monir H; Sherman, Stephanie L; Tseng, George C; Feingold, Eleanor

    2016-02-01

    Genome-wide association studies are proven tools for finding disease genes, but it is often necessary to combine many cohorts into a meta-analysis to detect statistically significant genetic effects. Often the component studies are performed by different investigators on different populations, using different chips with minimal SNP overlap. In some cases, raw data are not available for imputation, so that only the genotyped single nucleotide polymorphism (SNP) results can be used in meta-analysis. Even when SNP sets are comparable, different cohorts may have peak association signals at different SNPs within the same gene due to population differences in linkage disequilibrium or environmental interactions. We hypothesize that the power to detect statistical signals in these situations will improve by using a method that simultaneously meta-analyzes and smooths the signal over nearby markers. In this study, we propose regionally smoothed meta-analysis methods and compare their performance on real and simulated data. © 2015 WILEY PERIODICALS, INC.
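
    As a hedged illustration of the general idea of smoothing association signal over nearby markers (not the specific RSM estimators proposed in the paper), the sketch below combines per-SNP Z-scores with Gaussian weights in base-pair distance; the positions, Z-scores, and bandwidth are simulated assumptions.

        # Hedged sketch: smooth per-SNP Z-scores over nearby markers with Gaussian
        # weights in genomic position; the weighted (Stouffer-style) score keeps unit
        # variance under the null if the input Z-scores are independent.
        import numpy as np

        rng = np.random.default_rng(9)
        pos = np.sort(rng.integers(0, 200_000, size=300))          # bp positions within a gene region
        z = rng.normal(size=300)
        z[140:160] += 2.0                                           # a region of modest true signal

        bandwidth = 5_000.0                                         # bp smoothing scale (assumed)
        w = np.exp(-0.5 * ((pos[:, None] - pos[None, :]) / bandwidth) ** 2)
        z_smooth = (w @ z) / np.sqrt((w ** 2).sum(axis=1))          # regionally smoothed Z at each SNP

        print("max |Z| raw vs smoothed:", np.abs(z).max(), np.abs(z_smooth).max())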

  2. REGIONALLY SMOOTHED META-ANALYSIS METHODS FOR GWAS DATASETS

    PubMed Central

    Begum, Ferdouse; Sharker, Monir H.; Sherman, Stephanie L.; Tseng, George C.; Feingold, Eleanor

    2015-01-01

    Genome-wide association studies (GWAS) are proven tools for finding disease genes, but it is often necessary to combine many cohorts into a meta-analysis to detect statistically significant genetic effects. Often the component studies are performed by different investigators on different populations, using different chips with minimal SNPs overlap. In some cases, raw data are not available for imputation so that only the genotyped SNP results can be used in meta-analysis. Even when SNP sets are comparable, different cohorts may have peak association signals at different SNPs within the same gene due to population differences in linkage disequilibrium or environmental interactions. We hypothesize that the power to detect statistical signals in these situations will improve by using a method that simultaneously meta-analyzes and smooths the signal over nearby markers. In this study we propose regionally smoothed meta-analysis (RSM) methods and compare their performance on real and simulated data. PMID:26707090

  3. Chemical method for producing smooth surfaces on silicon wafers

    DOEpatents

    Yu, Conrad

    2003-01-01

    An improved method for producing optically smooth surfaces in silicon wafers during wet chemical etching involves a pre-treatment rinse of the wafers before etching and a post-etching rinse. The pre-treatment with an organic solvent provides a well-wetted surface that ensures uniform mass transfer during etching, which results in optically smooth surfaces. The post-etching treatment with an acetic acid solution stops the etching instantly, preventing any uneven etching that leads to surface roughness. This method can be used to etch silicon surfaces to a depth of 200 µm or more, while the finished surfaces have a surface roughness of only 15-50 Å (RMS).

  4. A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.

    PubMed

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-09-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level.

  5. The complex variable reproducing kernel particle method for elasto-plasticity problems

    NASA Astrophysics Data System (ADS)

    Chen, Li; Cheng, Yumin

    2010-05-01

    On the basis of the reproducing kernel particle method (RKPM) and using complex variable theory, the complex variable reproducing kernel particle method (CVRKPM) is discussed in this paper. The advantage of the CVRKPM is that the correction function of a two-dimensional problem is formed with a one-dimensional basis function when the shape function is constructed. The CVRKPM is then applied to solve two-dimensional elasto-plasticity problems. The Galerkin weak form is employed to obtain the discretized system equations, and the penalty method is used to apply the essential boundary conditions. The CVRKPM formulation for two-dimensional elasto-plasticity problems is thus obtained, the corresponding formulae are derived, and the Newton-Raphson method is used in the numerical implementation. Three numerical examples are given to show that the method presented in this paper is effective for elasto-plasticity analysis.

  6. A new efficient hybrid intelligent method for nonlinear dynamical systems identification: The Wavelet Kernel Fuzzy Neural Network

    NASA Astrophysics Data System (ADS)

    Loussifi, Hichem; Nouri, Khaled; Benhadj Braiek, Naceur

    2016-03-01

    In this paper a hybrid computational intelligent approach of combining kernel methods with wavelet Multi-resolution Analysis (MRA) is presented for fuzzy wavelet network construction and initialization. Mother wavelets are used as activation functions for the neural network structure, and as kernel functions in the machine learning process. By choosing precise values of scale parameters based on the windowed scalogram representation of the Continuous Wavelet Transform (CWT), a set of kernel parameters is taken to construct the proposed Wavelet Kernel based Fuzzy Neural Network (WK-FNN) with an efficient initialization technique based on the use of wavelet kernels in Support Vector Machine for Regression (SVMR). Simulation examples are given to test usability and effectiveness of the proposed hybrid intelligent method in the system identification of dynamic plants and in the prediction of a chaotic time series. It is seen that the proposed WK-FNN achieves higher accuracy and has good performance as compared to other methods.

  7. A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System

    PubMed Central

    Yuan, Xianfeng; Song, Mumin; Zhou, Fengyu; Chen, Zhumin; Li, Yan

    2015-01-01

    Wheeled robots have been successfully applied in many areas, such as industrial handling vehicles and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on a Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of the preliminary results, a novel ML-kernel based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by the fusion of the BPAs. Experimental results show that the proposed framework not only is capable of detecting and identifying the faults in the robot driving system, but also has better performance in stability and diagnosis accuracy compared with the traditional methods. PMID:26229526

  8. A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System.

    PubMed

    Yuan, Xianfeng; Song, Mumin; Zhou, Fengyu; Chen, Zhumin; Li, Yan

    2015-01-01

    Wheeled robots have been successfully applied in many areas, such as industrial handling vehicles and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on a Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of the preliminary results, a novel ML-kernel based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by the fusion of the BPAs. Experimental results show that the proposed framework not only is capable of detecting and identifying the faults in the robot driving system, but also has better performance in stability and diagnosis accuracy compared with the traditional methods.

  9. The method of tailored sensitivity kernels for GRACE mass change estimates

    NASA Astrophysics Data System (ADS)

    Groh, Andreas; Horwath, Martin

    2016-04-01

    To infer mass changes (such as mass changes of an ice sheet) from time series of GRACE spherical harmonic solutions, two basic approaches (with many variants) exist: The regional integration approach (or direct approach) is based on surface mass changes (equivalent water height, EWH) from GRACE and integrates those with specific integration kernels. The forward modeling approach (or mascon approach, or inverse approach) prescribes a finite set of mass change patterns and adjusts the amplitudes of those patterns (in a least squares sense) to the GRACE gravity field changes. The present study reviews the theoretical framework of both approaches. We recall that forward modeling approaches ultimately estimate mass changes by linear functionals of the gravity field changes. Therefore, they implicitly apply sensitivity kernels and may be considered as special realizations of the regional integration approach. We show examples for sensitivity kernels intrinsic to forward modeling approaches. We then propose to directly tailor sensitivity kernels (or in other words: mass change estimators) by a formal optimization procedure that minimizes the sum of propagated GRACE solution errors and leakage errors. This approach involves the incorporation of information on the structure of GRACE errors and the structure of those mass change signals that are most relevant for leakage errors. We discuss the realization of this method, as applied within the ESA "Antarctic Ice Sheet CCI (Climate Change Initiative)" project. Finally, results for the Antarctic Ice Sheet in terms of time series of mass changes of individual drainage basins and time series of gridded EWH changes are presented.

  10. A Novel Weighted Kernel PCA-Based Method for Optimization and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Thimmisetty, C.; Talbot, C.; Chen, X.; Tong, C. H.

    2016-12-01

    It has been demonstrated that machine learning methods can be successfully applied to uncertainty quantification for geophysical systems through the use of the adjoint method coupled with kernel PCA-based optimization. In addition, it has been shown through weighted linear PCA how optimization with respect to both observation weights and feature space control variables can accelerate convergence of such methods. Linear machine learning methods, however, are inherently limited in their ability to represent features of non-Gaussian stochastic random fields, as they are based on only the first two statistical moments of the original data. Nonlinear spatial relationships and multipoint statistics leading to the tortuosity characteristic of channelized media, for example, are captured only to a limited extent by linear PCA. With the aim of coupling the kernel-based and weighted methods discussed, we present a novel mathematical formulation of kernel PCA, Weighted Kernel Principal Component Analysis (WKPCA), that both captures nonlinear relationships and incorporates the attribution of significance levels to different realizations of the stochastic random field of interest. We also demonstrate how new instantiations retaining defining characteristics of the random field can be generated using Bayesian methods. In particular, we present a novel WKPCA-based optimization method that minimizes a given objective function with respect to both feature space random variables and observation weights through which optimal snapshot significance levels and optimal features are learned. We showcase how WKPCA can be applied to nonlinear optimal control problems involving channelized media, and in particular demonstrate an application of the method to learning the spatial distribution of material parameter values in the context of linear elasticity, and discuss further extensions of the method to stochastic inversion.

  11. Verification and large deformation analysis using the reproducing kernel particle method

    SciTech Connect

    Beckwith, Frank

    2015-09-01

    The reproducing kernel particle method (RKPM) is a meshless method used to solve general boundary value problems using the principle of virtual work. RKPM corrects the kernel approximation by introducing reproducing conditions which force the method to be complete to arbitrary order polynomials selected by the user. Effort in recent years has led to the implementation of RKPM within the Sierra/SM physics software framework. The purpose of this report is to investigate convergence of RKPM for verification and validation purposes as well as to demonstrate the large deformation capability of RKPM in problems where the finite element method is known to experience difficulty. Results from analyses using RKPM are compared against finite element analysis. A host of issues associated with RKPM are identified and a number of potential improvements are discussed for future work.

  12. A kernel method for calculating effective radiative forcing in transient climate simulations

    NASA Astrophysics Data System (ADS)

    Larson, E. J. L.; Portmann, R. W.

    2015-12-01

    Effective radiative forcing (ERF) is calculated as the flux change at the top of the atmosphere, after allowing for fast adjustments, due to a forcing agent such as greenhouse gases or volcanic events. Accurate estimates of the ERF are necessary in order to understand the drivers of climate change. ERF cannot be observed directly and is difficult to estimate from indirect observations due to the complexity of climate responses to individual forcing factors. We present a new method of calculating ERF using a kernel populated from a time series of a model variable (e.g., global mean surface temperature) in a CO2 step change experiment. The top-of-atmosphere (TOA) radiative imbalance has the best noise tolerance of the model variables we tested for retrieving the ERF. We compare the kernel method with the energy balance method for estimating ERF in the CMIP5 models. The energy balance method uses the regression between the TOA imbalance and temperature change in a CO2 step change experiment to estimate the climate feedback parameter. It then assumes the feedback parameter is constant to calculate the forcing time series. This method is sensitive to the number of years chosen for the regression, and the nonlinearity in the regression leads to a bias. We quantify the sensitivities and biases of these methods and compare their estimates of forcing. The kernel method is more accurate for models in which a linear fit is a poor approximation of the relationship between temperature change and TOA imbalance.
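
    One way to picture a kernel-based forcing estimate of the kind described above (an illustrative reading, not the authors' exact algorithm) is to treat the step-change experiment as providing a response kernel and then recover the forcing from a transient TOA-imbalance series by deconvolution; the step response shape, forcing series, and noise below are all synthetic assumptions.

        # Hedged sketch: build a response kernel from a step experiment, then recover a
        # forcing time series from a transient response by solving the convolution system.
        import numpy as np
        from scipy.linalg import toeplitz

        rng = np.random.default_rng(10)
        T = 150
        t = np.arange(T)

        step_response = 1.0 - np.exp(-(t + 1) / 20.0)    # TOA response to a unit step forcing (assumed)
        impulse = np.diff(np.r_[0.0, step_response])     # response to a one-step unit forcing pulse

        G = np.tril(toeplitz(impulse))                   # convolution matrix: response = G @ forcing
        true_forcing = 0.03 * t + 0.5 * np.sin(t / 8.0)  # synthetic ERF time series (W m^-2)
        response = G @ true_forcing + rng.normal(scale=0.005, size=T)

        est_forcing = np.linalg.solve(G, response)       # recover the forcing by deconvolution
        print("RMSE of recovered forcing:", np.sqrt(np.mean((est_forcing - true_forcing) ** 2)))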

  13. Smoothed particle hydrodynamics method from a large eddy simulation perspective

    NASA Astrophysics Data System (ADS)

    Di Mascio, A.; Antuono, M.; Colagrossi, A.; Marrone, S.

    2017-03-01

    The Smoothed Particle Hydrodynamics (SPH) method, often used for the modelling of the Navier-Stokes equations by a meshless Lagrangian approach, is revisited from the point of view of Large Eddy Simulation (LES). To this aim, the LES filtering procedure is recast in a Lagrangian framework by defining a filter that moves with the positions of the fluid particles at the filtered velocity. It is shown that the SPH smoothing procedure can be reinterpreted as a sort of LES Lagrangian filtering, and that, besides the terms coming from the LES convolution, additional contributions (never accounted for in the SPH literature) appear in the equations when formulated in a filtered fashion. Appropriate closure formulas are derived for the additional terms and a preliminary numerical test is provided to show the main features of the proposed LES-SPH model.
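
    For readers unfamiliar with the smoothing step being reinterpreted here, the following minimal sketch shows the basic SPH kernel summation for the density on a 2D particle lattice with a cubic-spline kernel; the particle spacing, masses and smoothing length are illustrative and unrelated to the LES-SPH closure discussed in the record.

```python
import numpy as np

def cubic_spline_kernel_2d(r, h):
    # Standard 2D cubic-spline SPH kernel with smoothing length h.
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)
    w = np.where(q < 1, 1 - 1.5*q**2 + 0.75*q**3,
        np.where(q < 2, 0.25*(2 - q)**3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """Density at each particle by the SPH summation rho_i = sum_j m_j W_ij."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    W = cubic_spline_kernel_2d(r, h)
    return W @ masses

# Uniform lattice of particles with unit target density.
dx = 0.05
xs = np.arange(0.0, 1.0, dx)
X, Y = np.meshgrid(xs, xs)
positions = np.column_stack([X.ravel(), Y.ravel()])
masses = np.full(len(positions), dx * dx)   # mass = rho * dx^2 with rho = 1
rho = sph_density(positions, masses, h=1.3 * dx)
print(rho.mean(), rho.min(), rho.max())     # close to 1 away from the edges
```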

  14. a Kernel Method Based on Topic Model for Very High Spatial Resolution (vhsr) Remote Sensing Image Classification

    NASA Astrophysics Data System (ADS)

    Wu, Linmei; Shen, Li; Li, Zhipeng

    2016-06-01

    A kernel-based method for very high spatial resolution remote sensing image classification is proposed in this article. The new kernel method is based on spectral and spatial information as well as structure information, the latter acquired from a topic model, the Latent Dirichlet Allocation model. The final kernel function is defined as K = u1Kspec + u2Kspat + u3Kstru, in which Kspec, Kspat, and Kstru are radial basis function (RBF) kernels and u1 + u2 + u3 = 1. In the experiment, a comparison with three other kernel methods (the spectral-based, the spectral- and spatial-based, and the spectral- and structure-based method) is provided for a panchromatic QuickBird image of a suburban area with a size of 900 × 900 pixels and a spatial resolution of 0.6 m. The results show that the overall accuracy of the spectral- and structure-based kernel method is 80 %, higher than those of the spectral-based and the spectral- and spatial-based kernel methods, whose accuracies are 67 % and 74 %, respectively. Moreover, the proposed composite kernel method, which jointly uses the spectral, spatial, and structure information, achieves the highest accuracy among the four methods, at 83 %. The experiment also verifies the value of the structure information extracted from the remote sensing image.
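
    The composite-kernel construction K = u1Kspec + u2Kspat + u3Kstru can be sketched directly with RBF base kernels and a precomputed-kernel SVM, as below; the three feature blocks here are random placeholders rather than actual spectral, spatial or topic-model features, and the weights are arbitrary.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def composite_kernel(blocks_a, blocks_b, weights, gamma=0.5):
    """Weighted sum of RBF kernels, one per feature block; weights sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(u * rbf_kernel(A, B, gamma=gamma)
               for u, A, B in zip(weights, blocks_a, blocks_b))

# Placeholder spectral / spatial / structural feature blocks for 200 pixels.
rng = np.random.default_rng(1)
n = 200
spec, spat, stru = rng.normal(size=(n, 4)), rng.normal(size=(n, 6)), rng.normal(size=(n, 10))
labels = (spec[:, 0] + spat[:, 0] + stru[:, 0] > 0).astype(int)

train, test = np.arange(0, 150), np.arange(150, n)
blocks = [spec, spat, stru]
weights = [0.4, 0.3, 0.3]

K_train = composite_kernel([b[train] for b in blocks], [b[train] for b in blocks], weights)
K_test = composite_kernel([b[test] for b in blocks], [b[train] for b in blocks], weights)

clf = SVC(kernel="precomputed").fit(K_train, labels[train])
print("overall accuracy:", (clf.predict(K_test) == labels[test]).mean())
```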

  15. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
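
    A compact sketch of the fixed-kernel estimator with least-squares cross-validation (LSCV) bandwidth selection recommended above is given below, using a single isotropic Gaussian bandwidth and simulated relocations from a two-component normal mixture; real home-range software typically works with the full utilization distribution rather than just the bandwidth.

```python
import numpy as np

def gauss2(d2, s2):
    # Bivariate isotropic normal density evaluated from squared distance d2.
    return np.exp(-d2 / (2.0 * s2)) / (2.0 * np.pi * s2)

def lscv_score(points, h):
    """Least-squares cross-validation score for a fixed 2D Gaussian kernel."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    term1 = gauss2(d2, 2.0 * h * h).sum() / n**2            # integral of fhat^2
    off = ~np.eye(n, dtype=bool)
    loo = gauss2(d2, h * h)[off].sum() / (n * (n - 1))       # mean leave-one-out density
    return term1 - 2.0 * loo

def choose_bandwidth(points, grid):
    scores = [lscv_score(points, h) for h in grid]
    return grid[int(np.argmin(scores))]

# Simulated "home range": mixture of two bivariate normals, 50 relocations.
rng = np.random.default_rng(7)
pts = np.vstack([rng.normal([0, 0], 1.0, size=(25, 2)),
                 rng.normal([4, 1], 1.5, size=(25, 2))])
grid = np.linspace(0.2, 2.0, 40)
h = choose_bandwidth(pts, grid)
print("LSCV bandwidth:", round(float(h), 3))
```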

  16. Method for smoothing the surface of a protective coating

    DOEpatents

    Sangeeta, D.; Johnson, Curtis Alan; Nelson, Warren Arthur

    2001-01-01

    A method for smoothing the surface of a ceramic-based protective coating which exhibits roughness is disclosed. The method includes the steps of applying a ceramic-based slurry or gel coating to the protective coating surface; heating the slurry/gel coating to remove volatile material; and then further heating the slurry/gel coating to cure the coating and bond it to the underlying protective coating. The slurry/gel coating is often based on yttria-stabilized zirconia and precursors of an oxide matrix. Related articles of manufacture are also described.

  17. Single corn kernel aflatoxin B1 extraction and analysis method

    USDA-ARS?s Scientific Manuscript database

    Aflatoxins are highly carcinogenic compounds produced by the fungus Aspergillus flavus. Aspergillus flavus is a phytopathogenic fungus that commonly infects crops such as cotton, peanuts, and maize. The goal was to design an effective sample preparation method and analysis for the extraction of afla...

  18. Scalable Kernel Methods and Algorithms for General Sequence Analysis

    ERIC Educational Resources Information Center

    Kuksa, Pavel

    2011-01-01

    Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as the document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…

  20. Noninvasive reconstruction of cardiac transmembrane potentials using a kernelized extreme learning method

    NASA Astrophysics Data System (ADS)

    Jiang, Mingfeng; Zhang, Heng; Zhu, Lingyan; Cao, Li; Wang, Yaming; Xia, Ling; Gong, Yinglan

    2015-04-01

    Non-invasively reconstructing the cardiac transmembrane potentials (TMPs) from body surface potentials can be treated as a regression problem. The support vector regression (SVR) method is often used to solve such problems; however, the SVR training algorithm is usually computationally intensive. In this paper, another learning algorithm, termed the extreme learning machine (ELM), is proposed to reconstruct the cardiac transmembrane potentials. Moreover, ELM can be extended to single-hidden-layer feedforward neural networks with a kernel matrix (kernelized ELM), which can achieve good generalization performance at a fast learning speed. Based on realistic heart-torso models, a normal and two abnormal ventricular activation cases are used for training and testing the regression model. The experimental results show that the ELM method achieves better regression performance than the single SVR method in terms of TMP reconstruction accuracy and reconstruction speed. Moreover, compared with the basic ELM method, the kernelized ELM method features good approximation and generalization ability when reconstructing the TMPs.
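
    The kernelized ELM reduces to solving a single regularized linear system in the kernel matrix, which the sketch below illustrates with an RBF kernel and a synthetic many-output regression standing in for the body-surface-to-TMP mapping; the regularization constant, kernel width and toy data are assumptions, not values from the study.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelELM:
    """Kernelized extreme learning machine for regression:
    f(x) = k(x, X_train) @ (K + I/C)^{-1} @ T."""
    def __init__(self, C=100.0, gamma=0.1):
        self.C, self.gamma = C, gamma

    def fit(self, X, T):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
        return self

    def predict(self, Xnew):
        return rbf_kernel(Xnew, self.X, self.gamma) @ self.beta

# Toy surrogate: map 20 "body surface" channels to 5 "transmembrane" channels.
rng = np.random.default_rng(3)
A = rng.normal(size=(20, 5))
X_train, X_test = rng.normal(size=(300, 20)), rng.normal(size=(50, 20))
T_train, T_test = np.tanh(X_train @ A), np.tanh(X_test @ A)

model = KernelELM(C=1e3, gamma=0.05).fit(X_train, T_train)
rmse = np.sqrt(np.mean((model.predict(X_test) - T_test) ** 2))
print("test RMSE:", round(float(rmse), 4))
```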

  1. A Kernel Machine-based fMRI Physiological Noise Removal Method

    PubMed Central

    Song, Xiaomu; Chen, Nan-kuei; Gaur, Pooja

    2013-01-01

    Functional magnetic resonance imaging (fMRI) technique with blood oxygenation level dependent (BOLD) contrast is a powerful tool for noninvasive mapping of brain function under task and resting states. The removal of cardiac- and respiration-induced physiological noise in fMRI data has been a significant challenge as fMRI studies seek to achieve higher spatial resolutions and characterize more subtle neuronal changes. The low temporal sampling rate of most multi-slice fMRI experiments often causes aliasing of physiological noise into the frequency range of the BOLD activation signal. In addition, changes of heartbeat and respiration patterns also generate physiological fluctuations with frequencies similar to those of BOLD activation. Most existing physiological noise-removal methods either place restrictive limitations on image acquisition or utilize filtering or regression based post-processing algorithms, which cannot distinguish the frequency-overlapping BOLD activation and the physiological noise. In this work, we address the challenge of physiological noise removal using the kernel machine technique: a nonlinear method, kernel principal component analysis, is used with a specifically identified kernel function to differentiate the BOLD signal from physiological noise in the overlapping frequency range. The proposed method was evaluated in human fMRI data acquired from multiple task-related and resting state fMRI experiments. A comparison study was also performed with an existing adaptive filtering method. The results indicate that the proposed method can effectively identify and reduce the physiological noise in fMRI data. The comparison study shows that the proposed method can provide comparable or better noise removal performance than the adaptive filtering approach. PMID:24321306

  2. A kernel machine-based fMRI physiological noise removal method.

    PubMed

    Song, Xiaomu; Chen, Nan-kuei; Gaur, Pooja

    2014-02-01

    Functional magnetic resonance imaging (fMRI) technique with blood oxygenation level dependent (BOLD) contrast is a powerful tool for noninvasive mapping of brain function under task and resting states. The removal of cardiac- and respiration-induced physiological noise in fMRI data has been a significant challenge as fMRI studies seek to achieve higher spatial resolutions and characterize more subtle neuronal changes. The low temporal sampling rate of most multi-slice fMRI experiments often causes aliasing of physiological noise into the frequency range of the BOLD activation signal. In addition, changes of heartbeat and respiration patterns also generate physiological fluctuations with frequencies similar to those of BOLD activation. Most existing physiological noise-removal methods either place restrictive limitations on image acquisition or utilize filtering or regression based post-processing algorithms, which cannot distinguish the frequency-overlapping BOLD activation and the physiological noise. In this work, we address the challenge of physiological noise removal using the kernel machine technique: a nonlinear method, kernel principal component analysis, is used with a specifically identified kernel function to differentiate the BOLD signal from physiological noise in the overlapping frequency range. The proposed method was evaluated in human fMRI data acquired from multiple task-related and resting state fMRI experiments. A comparison study was also performed with an existing adaptive filtering method. The results indicate that the proposed method can effectively identify and reduce the physiological noise in fMRI data. The comparison study shows that the proposed method can provide comparable or better noise removal performance than the adaptive filtering approach.

  3. A robust, high-throughput method for computing maize ear, cob, and kernel attributes automatically from images.

    PubMed

    Miller, Nathan D; Haase, Nicholas J; Lee, Jonghyun; Kaeppler, Shawn M; de Leon, Natalia; Spalding, Edgar P

    2017-01-01

    Grain yield of the maize plant depends on the sizes, shapes, and numbers of ears and the kernels they bear. An automated pipeline that can measure these components of yield from easily-obtained digital images is needed to advance our understanding of this globally important crop. Here we present three custom algorithms designed to compute such yield components automatically from digital images acquired by a low-cost platform. One algorithm determines the average space each kernel occupies along the cob axis using a sliding-window Fourier transform analysis of image intensity features. A second counts individual kernels removed from ears, including those in clusters. A third measures each kernel's major and minor axis after a Bayesian analysis of contour points identifies the kernel tip. Dimensionless ear and kernel shape traits that may interrelate yield components are measured by principal components analysis of contour point sets. Increased objectivity and speed compared to typical manual methods are achieved without loss of accuracy as evidenced by high correlations with ground truth measurements and simulated data. Millimeter-scale differences among ear, cob, and kernel traits that ranged more than 2.5-fold across a diverse group of inbred maize lines were resolved. This system for measuring maize ear, cob, and kernel attributes is being used by multiple research groups as an automated Web service running on community high-throughput computing and distributed data storage infrastructure. Users may create their own workflow using the source code that is staged for download on a public repository.
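
    The sliding-window Fourier idea used for the kernel-spacing measurement can be sketched on a synthetic one-dimensional intensity profile: in each window, the dominant FFT peak gives the spatial period, i.e., the average space a kernel occupies along the cob axis. The window size, step and simulated 20-pixel spacing below are illustrative only, not parameters of the published pipeline.

```python
import numpy as np

def dominant_period(signal, px_per_unit=1.0):
    """Return the dominant spatial period of a 1D signal via its FFT peak."""
    sig = signal - signal.mean()
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / px_per_unit)
    k = 1 + np.argmax(spec[1:])            # skip the zero-frequency bin
    return 1.0 / freqs[k]

def sliding_window_periods(profile, window=256, step=64):
    """Estimate kernel spacing in overlapping windows along the cob axis."""
    return np.array([dominant_period(profile[s:s + window])
                     for s in range(0, len(profile) - window + 1, step)])

# Synthetic cob-axis intensity profile: ~20-pixel kernel spacing plus noise.
rng = np.random.default_rng(11)
x = np.arange(2000)
profile = 1.0 + 0.5 * np.cos(2 * np.pi * x / 20.0) + 0.1 * rng.normal(size=x.size)
periods = sliding_window_periods(profile)
print("median kernel spacing (pixels):", round(float(np.median(periods)), 2))
```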

  4. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice

    PubMed Central

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel “trick” concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Several parametric and kernel methods, namely the least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression, were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available. PMID:27555865
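
    Setting aside the KRMM package itself, the core RKHS regression estimator compared above is kernel ridge regression, shown in the sketch below with a Gaussian kernel on a simulated marker matrix containing an epistatic term; the bandwidth, regularization and simulated genetic architecture are assumptions for illustration.

```python
import numpy as np

def gaussian_kernel(X, Y, h=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * h**2))

def rkhs_fit_predict(X_train, y_train, X_test, lam=1.0, h=1.0):
    """RKHS (kernel ridge) regression: alpha = (K + lam*I)^{-1} y,
    prediction = K(test, train) @ alpha."""
    K = gaussian_kernel(X_train, X_train, h)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    return gaussian_kernel(X_test, X_train, h) @ alpha

# Toy marker matrix (0/1/2 genotypes) with additive + pairwise epistatic effects.
rng = np.random.default_rng(5)
n, p = 300, 200
M = rng.integers(0, 3, size=(n, p)).astype(float)
beta = rng.normal(scale=0.2, size=p)
y = M @ beta + 0.5 * M[:, 0] * M[:, 1] + rng.normal(scale=1.0, size=n)

tr, te = np.arange(0, 250), np.arange(250, n)
pred = rkhs_fit_predict(M[tr], y[tr], M[te], lam=5.0, h=np.sqrt(p))
print("predictive correlation:", round(float(np.corrcoef(pred, y[te])[0, 1]), 3))
```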

  5. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    PubMed

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Several parametric and kernel methods, namely the least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression, were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.

  6. Smoothing Forecasting Methods for Academic Library Circulations: An Evaluation and Recommendation.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.; Forys, John W., Jr.

    1986-01-01

    Circulation time-series data from 50 midwest academic libraries were used to test 110 variants of 8 smoothing forecasting methods. Data and methodologies and illustrations of two recommended methods--the single exponential smoothing method and Brown's one-parameter linear exponential smoothing method--are given. Eight references are cited. (EJS)
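
    The two recommended methods are straightforward to state in code; the sketch below implements single exponential smoothing and Brown's one-parameter linear exponential smoothing for a one-step-ahead forecast on a synthetic monthly circulation series (the smoothing constant and data are illustrative, not taken from the study).

```python
import numpy as np

def single_exponential_smoothing(y, alpha):
    """Single exponential smoothing; returns the one-step-ahead forecast."""
    s = y[0]
    for value in y[1:]:
        s = alpha * value + (1 - alpha) * s
    return s

def brown_linear_smoothing(y, alpha, horizon=1):
    """Brown's one-parameter linear (double) exponential smoothing forecast."""
    s1 = s2 = y[0]
    for value in y[1:]:
        s1 = alpha * value + (1 - alpha) * s1     # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2        # second smoothing
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return level + horizon * trend

# Toy monthly circulation counts with a mild upward trend.
rng = np.random.default_rng(2)
months = np.arange(60)
circulation = 1000 + 8 * months + rng.normal(scale=40, size=60)

print("SES forecast:  ", round(single_exponential_smoothing(circulation, alpha=0.3), 1))
print("Brown forecast:", round(brown_linear_smoothing(circulation, alpha=0.3, horizon=1), 1))
```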

  8. Modeling Electrokinetic Flows by the Smoothed Profile Method

    PubMed Central

    Luo, Xian; Beskok, Ali; Karniadakis, George Em

    2010-01-01

    We propose an efficient modeling method for electrokinetic flows based on the Smoothed Profile Method (SPM) [1–4] and spectral element discretizations. The new method allows for arbitrary differences in the electrical conductivities between the charged surfaces and the surrounding electrolyte solution. The electrokinetic forces are included into the flow equations so that the Poisson-Boltzmann and electric charge continuity equations are cast into forms suitable for SPM. The method is validated by benchmark problems of electroosmotic flow in straight channels and electrophoresis of charged cylinders. We also present simulation results of electrophoresis of charged microtubules, and show that the simulated electrophoretic mobility and anisotropy agree with the experimental values. PMID:20352076

  9. Arima model and exponential smoothing method: A comparison

    NASA Astrophysics Data System (ADS)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, three time series are used in the comparison: the price of crude palm oil (RM/tonne), the exchange rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the price of SMR 20 rubber (cents/kg). The forecasting accuracy of each model is then measured by examining the prediction errors using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model produces better long-term forecasts with limited data sources, but cannot produce better predictions for time series with a narrow range from one point to another, as in the exchange-rate series. In contrast, the Exponential Smoothing Method produces better forecasts for the exchange-rate series, which has a narrow range from one point to another, but it cannot produce better predictions over a longer forecasting period.
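
    The three accuracy measures used in the comparison are defined as follows in a short sketch; the actual and forecast values are illustrative numbers only, not the palm oil, exchange-rate or rubber series analysed above.

```python
import numpy as np

def forecast_errors(actual, forecast):
    """Return the three accuracy measures used above: MSE, MAPE (%), MAD."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    e = actual - forecast
    mse = np.mean(e ** 2)
    mape = 100.0 * np.mean(np.abs(e / actual))
    mad = np.mean(np.abs(e))
    return mse, mape, mad

# Illustrative numbers only (not the crude palm oil or exchange-rate series).
actual = [2510.0, 2546.0, 2601.0, 2588.0, 2640.0]
forecast = [2495.0, 2552.0, 2580.0, 2615.0, 2622.0]
mse, mape, mad = forecast_errors(actual, forecast)
print(f"MSE={mse:.1f}  MAPE={mape:.2f}%  MAD={mad:.1f}")
```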

  10. Margin-Maximizing Feature Elimination Methods for Linear and Nonlinear Kernel-Based Discriminant Functions

    PubMed Central

    Aksu, Yaman; Miller, David J.; Kesidis, George; Yang, Qing X.

    2012-01-01

    Feature selection for classification in high-dimensional spaces can improve generalization, reduce classifier complexity, and identify important, discriminating feature “markers.” For support vector machine (SVM) classification, a widely used technique is recursive feature elimination (RFE). We demonstrate that RFE is not consistent with margin maximization, central to the SVM learning approach. We thus propose explicit margin-based feature elimination (MFE) for SVMs and demonstrate both improved margin and improved generalization, compared with RFE. Moreover, for the case of a nonlinear kernel, we show that RFE assumes that the squared weight vector 2-norm is strictly decreasing as features are eliminated. We demonstrate this is not true for the Gaussian kernel and, consequently, RFE may give poor results in this case. MFE for nonlinear kernels gives better margin and generalization. We also present an extension which achieves further margin gains, by optimizing only two degrees of freedom—the hyperplane’s intercept and its squared 2-norm—with the weight vector orientation fixed. We finally introduce an extension that allows margin slackness. We compare against several alternatives, including RFE and a linear programming method that embeds feature selection within the classifier design. On high-dimensional gene microarray data sets, University of California at Irvine (UCI) repository data sets, and Alzheimer’s disease brain image data, MFE methods give promising results. PMID:20194055

  11. Predicting colorectal surgical complications using heterogeneous clinical data and kernel methods.

    PubMed

    Soguero-Ruiz, Cristina; Hindberg, Kristian; Mora-Jiménez, Inmaculada; Rojo-Álvarez, José Luis; Skrøvseth, Stein Olav; Godtliebsen, Fred; Mortensen, Kim; Revhaug, Arthur; Lindsetmo, Rolv-Ole; Augestad, Knut Magne; Jenssen, Robert

    2016-06-01

    In this work, we have developed a learning system capable of exploiting information conveyed by longitudinal Electronic Health Records (EHRs) for the prediction of a common postoperative complication, Anastomosis Leakage (AL), in a data-driven way and by fusing temporal population data from different and heterogeneous sources in the EHRs. We used linear and non-linear kernel methods individually for each data source, and leveraged multiple kernel learning for their effective combination. To validate the system, we used data from the EHR of the gastrointestinal department at a university hospital. We first investigated the early prediction performance from each data source separately, by computing Area Under the Curve values for processed free text (0.83), blood tests (0.74), and vital signs (0.65), respectively. When exploiting the heterogeneous data sources combined using the composite kernel framework, the prediction capabilities increased considerably (0.92). Finally, posterior probabilities were evaluated for risk assessment of patients as an aid for clinicians to raise alertness at an early stage, in order to act promptly for avoiding AL complications. A machine-learning statistical model built from EHR data can be useful for predicting surgical complications. The combination of EHR-extracted free text, blood sample values, and patient vital signs improves the model performance. These results can be used as a framework for preoperative clinical decision support.

  12. A Novel Cortical Thickness Estimation Method based on Volumetric Laplace-Beltrami Operator and Heat Kernel

    PubMed Central

    Wang, Gang; Zhang, Xiaofeng; Su, Qingtang; Shi, Jie; Caselli, Richard J.; Wang, Yalin

    2015-01-01

    Cortical thickness estimation in magnetic resonance imaging (MRI) is an important technique for research on brain development and neurodegenerative diseases. This paper presents a heat kernel based cortical thickness estimation algorithm, which is driven by the graph spectrum and the heat kernel theory, to capture the grey matter geometry information from the in vivo brain magnetic resonance (MR) images. First, we construct a tetrahedral mesh that matches the MR images and reflects the inherent geometric characteristics. Second, the harmonic field is computed by the volumetric Laplace-Beltrami operator and the direction of the streamline is obtained by tracing the maximum heat transfer probability based on the heat kernel diffusion. Thereby we can calculate the cortical thickness between corresponding points on the pial and white matter surfaces. The new method relies on intrinsic brain geometry structure and the computation is robust and accurate. To validate our algorithm, we apply it to study the thickness differences associated with Alzheimer’s disease (AD) and mild cognitive impairment (MCI) on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. Our preliminary experimental results on 151 subjects (51 AD, 45 MCI, 55 controls) show that the new algorithm may successfully detect statistically significant difference among patients of AD, MCI and healthy control subjects. Our computational framework is efficient and very general. It has the potential to be used for thickness estimation on any biological structures with clearly defined inner and outer surfaces. PMID:25700360

  13. A novel cortical thickness estimation method based on volumetric Laplace-Beltrami operator and heat kernel.

    PubMed

    Wang, Gang; Zhang, Xiaofeng; Su, Qingtang; Shi, Jie; Caselli, Richard J; Wang, Yalin

    2015-05-01

    Cortical thickness estimation in magnetic resonance imaging (MRI) is an important technique for research on brain development and neurodegenerative diseases. This paper presents a heat kernel based cortical thickness estimation algorithm, which is driven by the graph spectrum and the heat kernel theory, to capture the gray matter geometry information from the in vivo brain magnetic resonance (MR) images. First, we construct a tetrahedral mesh that matches the MR images and reflects the inherent geometric characteristics. Second, the harmonic field is computed by the volumetric Laplace-Beltrami operator and the direction of the streamline is obtained by tracing the maximum heat transfer probability based on the heat kernel diffusion. Thereby we can calculate the cortical thickness between corresponding points on the pial and white matter surfaces. The new method relies on intrinsic brain geometry structure and the computation is robust and accurate. To validate our algorithm, we apply it to study the thickness differences associated with Alzheimer's disease (AD) and mild cognitive impairment (MCI) on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Our preliminary experimental results on 151 subjects (51 AD, 45 MCI, 55 controls) show that the new algorithm may successfully detect statistically significant difference among patients of AD, MCI and healthy control subjects. Our computational framework is efficient and very general. It has the potential to be used for thickness estimation on any biological structures with clearly defined inner and outer surfaces.

  14. A Kernel-Free Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 4

    NASA Technical Reports Server (NTRS)

    Park, Young-Keun; Fahrenthold, Eric P.

    2004-01-01

    An improved hybrid particle-finite element method has been developed for the simulation of hypervelocity impact problems. Unlike alternative methods, the revised formulation computes the density without reference to any kernel or interpolation functions, for either the density or the rate of dilatation. This simplifies the state space model and leads to a significant reduction in computational cost. The improved method introduces internal energy variables as generalized coordinates in a new formulation of the thermomechanical Lagrange equations. Example problems show good agreement with exact solutions in one dimension and good agreement with experimental data in a three dimensional simulation.

  15. Kernel bandwidth estimation for nonparametric modeling.

    PubMed

    Bors, Adrian G; Nasios, Nikolaos

    2009-12-01

    Kernel density estimation is a nonparametric procedure for probability density modeling, which has found several applications in various fields. The smoothness and modeling ability of the functional approximation are controlled by the kernel bandwidth. In this paper, we describe a Bayesian estimation method for finding the bandwidth from a given data set. The proposed bandwidth estimation method is applied in three different computational-intelligence methods that rely on kernel density estimation: 1) scale space; 2) mean shift; and 3) quantum clustering. The third method is a novel approach that relies on the principles of quantum mechanics. This method is based on the analogy between data samples and quantum particles and uses the Schrödinger potential as a cost function. The proposed methodology is used for blind-source separation of modulated signals and for terrain segmentation based on topography information.

  16. Kernel reconstruction methods for Doppler broadening — Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    DOE PAGES

    Ducru, Pablo; Josey, Colin; Dibert, Karia; ...

    2017-01-25

    This paper establishes a new family of methods to perform temperature interpolation of nuclear interactions cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T — namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus here performed by reconstructing the kernel of the operation at a given temperature T by means of linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (Tj) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin,Tmax]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them upon isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of 238U total cross section over the temperature range [300 K,3000 K] with only 9 reference temperatures.
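
    The L2 step of the method, finding coefficients so that a linear combination of reference-temperature kernels best reproduces the kernel at a target temperature, amounts to a least-squares solve. The sketch below illustrates it with toy Gaussian kernels whose width grows with sqrt(T), standing in for Doppler-broadening kernels; it does not reproduce the paper's reference-temperature optimization or its 238U results.

```python
import numpy as np

def reconstruction_coefficients(reference_kernels, target_kernel):
    """L2-optimal coefficients a_j so that sum_j a_j k_{T_j} ~ k_T."""
    A = np.stack(reference_kernels, axis=1)        # columns are reference kernels
    coeffs, *_ = np.linalg.lstsq(A, target_kernel, rcond=None)
    return coeffs

# Stand-in kernels: Gaussians whose width grows with sqrt(T), sampled on a grid.
x = np.linspace(-10, 10, 2001)

def toy_kernel(T):
    w = 0.05 * np.sqrt(T)
    return np.exp(-(x / w) ** 2) / (w * np.sqrt(np.pi))

T_refs = [300.0, 600.0, 1200.0, 2400.0]
refs = [toy_kernel(T) for T in T_refs]
target_T = 900.0
a = reconstruction_coefficients(refs, toy_kernel(target_T))

recon = sum(ai * ki for ai, ki in zip(a, refs))
rel_err = np.max(np.abs(recon - toy_kernel(target_T))) / np.max(toy_kernel(target_T))
print("coefficients:", np.round(a, 3), " max relative error:", round(float(rel_err), 4))
```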

  17. Kernel reconstruction methods for Doppler broadening - Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    NASA Astrophysics Data System (ADS)

    Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord

    2017-04-01

    This article establishes a new family of methods to perform temperature interpolation of nuclear interactions cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T - namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus here performed by reconstructing the kernel of the operation at a given temperature T by means of linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (Tj) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin, Tmax]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them upon isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of 238U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.

  18. Development of low-frequency kernel-function aerodynamics for comparison with time-dependent finite-difference methods

    NASA Technical Reports Server (NTRS)

    Bland, S. R.

    1982-01-01

    Finite difference methods for unsteady transonic flow frequently use simplified equations in which certain of the time-dependent terms are omitted from the governing equations. Kernel functions are derived for two-dimensional subsonic flow, and provide accurate solutions of the linearized potential equation with the same time-dependent terms omitted. These solutions make possible a direct evaluation of the finite difference codes for the linear problem. Calculations with two of these low frequency kernel functions verify the accuracy of the LTRAN2 and HYTRAN2 finite difference codes. Comparisons of the low frequency kernel function results with the Possio kernel function solution of the complete linear equations indicate the adequacy of the HYTRAN approximation for frequencies in the range of interest for flutter calculations.

  19. A feature selection method based on multiple kernel learning with expression profiles of different types.

    PubMed

    Du, Wei; Cao, Zhongbo; Song, Tianci; Li, Ying; Liang, Yanchun

    2017-01-01

    With the development of high-throughput technology, researchers can acquire large amounts of expression data of different types from several public databases. Because most of these data sets have a small number of samples and hundreds or thousands of features, extracting informative features from expression data effectively and robustly using feature selection techniques is challenging and crucial. So far, many feature selection approaches have been proposed and applied to analyse expression data of different types. However, most of these methods are limited to measuring performance on a single type of expression data by classification accuracy or error rate. In this article, we propose a hybrid feature selection method based on Multiple Kernel Learning (MKL) and evaluate its performance on expression datasets of different types. Firstly, the relevance between features and the classification of samples is measured using the optimizing function of MKL. In this step, an iterative gradient descent process is used to perform the optimization with respect to both the parameters of the Support Vector Machine (SVM) and the kernel confidence. Then, a set of relevant features is selected by sorting the optimizing function of each feature. Furthermore, we apply an embedded scheme of forward selection to detect compact feature subsets from the relevant feature set. We not only compare the classification accuracy with other methods, but also compare the stability, similarity and consistency of different algorithms. The proposed method has a satisfactory capability of feature selection for analysing expression datasets of different types using different performance measurements.

  20. Bioactive compounds in cashew nut (Anacardium occidentale L.) kernels: effect of different shelling methods.

    PubMed

    Trox, Jennifer; Vadivel, Vellingiri; Vetter, Walter; Stuetz, Wolfgang; Scherbaum, Veronika; Gola, Ute; Nohr, Donatus; Biesalski, Hans Konrad

    2010-05-12

    In the present study, the effects of various conventional shelling methods (oil-bath roasting, direct steam roasting, drying, and open pan roasting) as well as a novel "Flores" hand-cracking method on the levels of bioactive compounds of cashew nut kernels were investigated. The raw cashew nut kernels were found to possess appreciable levels of certain bioactive compounds such as beta-carotene (9.57 microg/100 g of DM), lutein (30.29 microg/100 g of DM), zeaxanthin (0.56 microg/100 g of DM), alpha-tocopherol (0.29 mg/100 g of DM), gamma-tocopherol (1.10 mg/100 g of DM), thiamin (1.08 mg/100 g of DM), stearic acid (4.96 g/100 g of DM), oleic acid (21.87 g/100 g of DM), and linoleic acid (5.55 g/100 g of DM). All of the conventional shelling methods including oil-bath roasting, steam roasting, drying, and open pan roasting revealed a significant reduction, whereas the Flores hand-cracking method exhibited similar levels of carotenoids, thiamin, and unsaturated fatty acids in cashew nuts when compared to raw unprocessed samples.

  1. Introducing kernel based morphology as an enhancement method for mass classification on mammography.

    PubMed

    Amirzadi, Azardokht; Azmi, Reza

    2013-04-01

    Since mammography images have low contrast, applying enhancement techniques as a pre-processing step is recommended for the classification of abnormal lesions into benign or malignant. A new kind of structural enhancement based on a morphological operator is proposed, which introduces an optimal Gaussian kernel primitive; the kernel parameters are optimized using a Genetic Algorithm. We also take advantage of optical density (OD) images to improve the diagnosis rate. The proposed enhancement method is applied to both the gray level (GL) images and their OD values; as a result, morphological patterns become bolder in the GL images, and local binary patterns are then extracted from these images. Applying the enhancement method to OD images increases the differences between the values, so a thresholding method is applied to remove some background pixels. The pixels that are more likely to belong to a mass are retained, and some statistical texture features are extracted from their equivalent GL images. A support vector machine is used for both approaches, and the final decision is made by combining the two classifiers. The classification performance is evaluated by Az, the area under the receiver operating characteristic curve. The designed method yields Az = 0.9231, which demonstrates good results.

  2. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM

    PubMed Central

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir

    2016-01-01

    A piezo-resistive pressure sensor is made of silicon, the nature of which is considerably influenced by ambient temperature. The effect of temperature should be eliminated during the working period if a linear output is expected. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve good learning and generalization performance, a hybrid kernel function, constructed from a local kernel (a Radial Basis Function (RBF) kernel) and a global kernel (a polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as the maximum absolute relative error, the minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research. PMID:27754428
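
    A minimal sketch of the hybrid-kernel LSSVM regression described above is given below: a convex combination of an RBF (local) kernel and a polynomial (global) kernel is plugged into the standard LSSVM linear system, and the hyper-parameters are simply fixed rather than tuned by the chaotic ions motion algorithm; the toy sensor data and parameter values are assumptions.

```python
import numpy as np

def hybrid_kernel(X, Y, rho=0.7, gamma_rbf=0.5, degree=2, c0=1.0):
    """Convex combination of a local RBF kernel and a global polynomial kernel."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    K_rbf = np.exp(-gamma_rbf * d2)
    K_poly = (X @ Y.T + c0) ** degree
    return rho * K_rbf + (1 - rho) * K_poly

def lssvm_fit(X, y, gam=10.0, **kparams):
    """LSSVM regression: solve [[0, 1^T], [1, K + I/gam]] [b; alpha] = [0; y]."""
    n = len(X)
    K = hybrid_kernel(X, X, **kparams)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gam
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]          # bias b, coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, **kparams):
    return hybrid_kernel(X_new, X_train, **kparams) @ alpha + b

# Toy compensation task: predict output drift from temperature and raw reading.
rng = np.random.default_rng(4)
T = rng.uniform(-20, 80, 200)                       # ambient temperature (C)
V = rng.uniform(0, 1, 200)                          # raw sensor reading (a.u.)
y = 2e-4 * (T - 25) ** 2 + 0.5 * V + 0.01 * rng.normal(size=200)
X = np.column_stack([(T - T.mean()) / T.std(), (V - V.mean()) / V.std()])

b, alpha = lssvm_fit(X[:150], y[:150], gam=50.0)
pred = lssvm_predict(X[:150], b, alpha, X[150:])
print("RMSE:", round(float(np.sqrt(np.mean((pred - y[150:]) ** 2))), 4))
```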

  3. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2016-10-14

    A piezo-resistive pressure sensor is made of silicon, the nature of which is considerably influenced by ambient temperature. The effect of temperature should be eliminated during the working period if a linear output is expected. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve good learning and generalization performance, a hybrid kernel function, constructed from a local kernel (a Radial Basis Function (RBF) kernel) and a global kernel (a polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as the maximum absolute relative error, the minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.

  4. A Particle-Particle Collision Model for Smoothed Profile Method

    NASA Astrophysics Data System (ADS)

    Mohaghegh, Fazlolah; Mousel, John; Udaykumar, H. S.

    2014-11-01

    The Smoothed Profile Method (SPM) is a type of continuous forcing approach that adds particles to the fluid through a forcing term. The fluid-structure interaction occurs through a diffuse interface, which avoids a sudden transition from solid to fluid. As a monolithic approach, the SPM simulation uses an indicator function field over the whole domain, based on the distance from each particle's boundary, where possible particle-particle interactions can occur. A soft-sphere potential based on the indicator function field has been defined to add an artificial pressure to the flow pressure in the potentially overlapping regions; a repulsion force is thus obtained to avoid overlap. A study of two particles that impulsively start moving in an initially uniform flow shows that the particle in the wake of the other has less acceleration, leading to frequent collisions. Various Reynolds numbers and initial distances have been chosen to test the robustness of the method. A study of drafting-kissing-tumbling of two cylindrical particles shows a deviation from the benchmarks due to the lack of rotation modeling. The method is shown to be accurate enough for simulating particle-particle collisions and can easily be extended to particle-wall modeling and to non-spherical particles.

  5. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks.

    PubMed

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-07-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparse rate of model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparse rate of model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost.

  6. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks

    PubMed Central

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-01-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparse rate of model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparse rate of model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298

  7. An effective meshfree reproducing kernel method for buckling analysis of cylindrical shells with and without cutouts

    NASA Astrophysics Data System (ADS)

    Sadamoto, S.; Ozdemir, M.; Tanaka, S.; Taniguchi, K.; Yu, T. T.; Bui, T. Q.

    2017-02-01

    The paper is concerned with eigen buckling analysis of curvilinear shells with and without cutouts by an effective meshfree method. In particular, shallow shell, cylinder and perforated cylinder buckling problems are considered. A Galerkin meshfree reproducing kernel (RK) approach is then developed. The present meshfree curvilinear shell model is based on Reissner-Mindlin plate formulation, which allows the transverse shear deformation of the curved shells. There are five degrees of freedom per node (i.e., three displacements and two rotations). In this setting, the meshfree interpolation functions are derived from the RK. A singular kernel is introduced to impose the essential boundary conditions because of the RK shape functions, which do not automatically possess the Kronecker delta property. The stiffness matrix is derived using the stabilized conforming nodal integration technique. A convected coordinate system is introduced into the formulation to deal with the curvilinear surface. More importantly, the RKs taken here are used not only for the interpolation of the curved geometry, but also for the approximation of field variables. Several numerical examples with shallow shells and full cylinder models are considered, and the critical buckling loads and their buckling mode shapes are calculated by the meshfree eigenvalue analysis and examined. To show the accuracy and performance of the developed meshfree method, the computed critical buckling loads and mode shapes are compared with reference solutions based on boundary domain element, finite element and analytical methods.

  8. An effective meshfree reproducing kernel method for buckling analysis of cylindrical shells with and without cutouts

    NASA Astrophysics Data System (ADS)

    Sadamoto, S.; Ozdemir, M.; Tanaka, S.; Taniguchi, K.; Yu, T. T.; Bui, T. Q.

    2017-06-01

    The paper is concerned with eigen buckling analysis of curvilinear shells with and without cutouts by an effective meshfree method. In particular, shallow shell, cylinder and perforated cylinder buckling problems are considered. A Galerkin meshfree reproducing kernel (RK) approach is then developed. The present meshfree curvilinear shell model is based on Reissner-Mindlin plate formulation, which allows the transverse shear deformation of the curved shells. There are five degrees of freedom per node (i.e., three displacements and two rotations). In this setting, the meshfree interpolation functions are derived from the RK. A singular kernel is introduced to impose the essential boundary conditions because of the RK shape functions, which do not automatically possess the Kronecker delta property. The stiffness matrix is derived using the stabilized conforming nodal integration technique. A convected coordinate system is introduced into the formulation to deal with the curvilinear surface. More importantly, the RKs taken here are used not only for the interpolation of the curved geometry, but also for the approximation of field variables. Several numerical examples with shallow shells and full cylinder models are considered, and the critical buckling loads and their buckling mode shapes are calculated by the meshfree eigenvalue analysis and examined. To show the accuracy and performance of the developed meshfree method, the computed critical buckling loads and mode shapes are compared with reference solutions based on boundary domain element, finite element and analytical methods.

  9. A fuzzy pattern matching method based on graph kernel for lithography hotspot detection

    NASA Astrophysics Data System (ADS)

    Nitta, Izumi; Kanazawa, Yuzi; Ishida, Tsutomu; Banno, Koji

    2017-03-01

    In advanced technology nodes, lithography hotspot detection has become one of the most significant issues in design for manufacturability. Recently, machine learning based lithography hotspot detection has been widely investigated, but it involves a trade-off between detection accuracy and false alarms. To apply machine learning based techniques in the physical verification phase, designers need to minimize undetected hotspots to avoid yield degradation. They also need a ranking of known patterns similar to a detected hotspot in order to prioritize the layout patterns to be corrected. To achieve high detection accuracy and to prioritize detected hotspots, we propose a novel lithography hotspot detection method using Delaunay triangulation and graph kernel based machine learning. Delaunay triangulation extracts features of hotspot patterns in which polygons are located irregularly and close to one another, and the graph kernel expresses the inner structure of the graphs. Additionally, our method provides a similarity between two patterns and creates a list of training patterns similar to a detected hotspot. Experimental results on the ICCAD 2012 benchmarks show that our method achieves high accuracy with an allowable level of false alarms. We also show the ranking of the known patterns similar to a detected hotspot.

  10. Multiple genetic variant association testing by collapsing and kernel methods with pedigree or population structured data.

    PubMed

    Schaid, Daniel J; McDonnell, Shannon K; Sinnwell, Jason P; Thibodeau, Stephen N

    2013-07-01

    Searching for rare genetic variants associated with complex diseases can be facilitated by enriching for diseased carriers of rare variants by sampling cases from pedigrees enriched for disease, possibly with related or unrelated controls. This strategy, however, complicates analyses because of shared genetic ancestry, as well as linkage disequilibrium among genetic markers. To overcome these problems, we developed broad classes of "burden" statistics and kernel statistics, extending commonly used methods for unrelated case-control data to allow for known pedigree relationships, for autosomes and the X chromosome. Furthermore, by replacing pedigree-based genetic correlation matrices with estimates of genetic relationships based on large-scale genomic data, our methods can be used to account for population-structured data. By simulations, we show that the type I error rates of our developed methods are near the asymptotic nominal levels, allowing rapid computation of P-values. Our simulations also show that a linear weighted kernel statistic is generally more powerful than a weighted "burden" statistic. Because the proposed statistics are rapid to compute, they can be readily used for large-scale screening of the association of genomic sequence data with disease status.
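
    The kernel statistic in its simplest, unrelated-sample form is a quadratic form of residuals in a weighted linear (collapsing-style) kernel, as sketched below; the pedigree or kinship adjustment and the p-value computation that are central to the record above are deliberately omitted, and the simulated genotypes and weights are illustrative.

```python
import numpy as np

def weighted_linear_kernel(G, weights):
    """Weighted linear (collapsing-style) kernel over rare-variant genotypes."""
    GW = G * weights[None, :]
    return GW @ GW.T

def kernel_association_statistic(y, G, weights):
    """Quadratic-form kernel statistic Q = (y - ybar)^T K (y - ybar),
    the unrelated-sample analogue of the kernel statistics described above."""
    r = y - y.mean()                 # residuals from an intercept-only model
    K = weighted_linear_kernel(G, weights)
    return float(r @ K @ r)

# Toy data: 500 subjects, 30 rare variants, a few of them causal.
rng = np.random.default_rng(8)
n, m = 500, 30
maf = rng.uniform(0.005, 0.03, size=m)
G = rng.binomial(2, maf, size=(n, m)).astype(float)
beta = np.zeros(m); beta[:3] = 1.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(-2 + G @ beta)))).astype(float)

weights = 1.0 / np.sqrt(maf * (1 - maf))   # up-weight rarer variants
print("Q =", round(kernel_association_statistic(y, G, weights), 2))
```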

  11. The complex variable reproducing kernel particle method for the analysis of Kirchhoff plates

    NASA Astrophysics Data System (ADS)

    Chen, L.; Cheng, Y. M.; Ma, H. P.

    2015-03-01

    In this paper, the complex variable reproducing kernel particle method (CVRKPM) for the bending problem of arbitrary Kirchhoff plates is presented. The advantage of the CVRKPM is that the shape function of a two-dimensional problem is obtained from a one-dimensional basis function. The CVRKPM is used to form the approximation function of the deflection of a Kirchhoff plate, the Galerkin weak form of the bending problem of Kirchhoff plates is adopted to obtain the discretized system equations, and the penalty method is employed to enforce the essential boundary conditions; the corresponding formulae of the CVRKPM for the bending problem of Kirchhoff plates are then presented in detail. Several numerical examples of Kirchhoff plates with different geometries and loads are given to demonstrate that the CVRKPM in this paper has higher computational precision and efficiency than the reproducing kernel particle method under the same node distribution. The influences of the basis function, weight function, scaling factor, node distribution and penalty factor on the computational precision of the CVRKPM are also discussed.

  12. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which performs kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision compared with related approximate clustering approaches.

  13. Efficient recovery-based error estimation for the smoothed finite element method for smooth and singular linear elasticity

    NASA Astrophysics Data System (ADS)

    González-Estrada, Octavio A.; Natarajan, Sundararajan; Ródenas, Juan José; Nguyen-Xuan, Hung; Bordas, Stéphane P. A.

    2013-07-01

    An error control technique aimed at assessing the quality of smoothed finite element approximations is presented in this paper. Finite element techniques based on strain smoothing, which appeared in 2007, were shown to provide significant advantages compared to conventional finite element approximations. In particular, a widely cited strength of such methods is improved accuracy for the same computational cost. Yet, few attempts have been made to directly assess the quality of the results obtained during the simulation by evaluating an estimate of the discretization error. Here we propose a recovery-type error estimator based on an enhanced recovery technique. The salient features of the recovery are: enforcement of local equilibrium and, for singular problems, a "smooth + singular" decomposition of the recovered stress. We evaluate the proposed estimator on a number of test cases from linear elastic structural mechanics and obtain efficient error estimations whose effectivities, both at local and global levels, are improved compared to recovery procedures not implementing these features.

  14. Smoothed particle hydrodynamics method for evaporating multiphase flows

    NASA Astrophysics Data System (ADS)

    Yang, Xiufeng; Kong, Song-Charng

    2017-09-01

    The smoothed particle hydrodynamics (SPH) method has been increasingly used for simulating fluid flows; however, its ability to simulate evaporating flow requires significant improvements. This paper proposes an SPH method for evaporating multiphase flows. The present SPH method can simulate the heat and mass transfer across the liquid-gas interface. The conservation equations of mass, momentum, and energy were reformulated based on SPH and then used to govern the fluid flow and heat transfer in both the liquid and gas phases. The continuity equation of the vapor species was employed to simulate the vapor mass fraction in the gas phase. The vapor mass fraction at the interface was predicted by the Clausius-Clapeyron correlation. An evaporation rate was derived to predict the mass transfer from the liquid phase to the gas phase at the interface. Because of the mass transfer across the liquid-gas interface, the mass of an SPH particle was allowed to change. Alternative particle splitting and merging techniques were developed to avoid large mass differences between SPH particles of the same phase. The proposed method was tested by simulating three problems: the Stefan problem, evaporation of a static drop, and evaporation of a drop impacting a hot surface. For the Stefan problem, the SPH results for the evaporation rate at the interface agreed well with the analytical solution. For drop evaporation, the SPH result was compared with the result predicted by a level-set method from the literature. In the case of drop impact on a hot surface, the evolutions of the drop shape, temperature, and vapor mass fraction were predicted.
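
    The interface vapor mass fraction mentioned above can be illustrated with the integrated Clausius-Clapeyron relation; the property values below are generic water/air numbers chosen for the example, not parameters from the paper.

    ```python
    import numpy as np

    # Illustrative Clausius-Clapeyron estimate of the interface vapor mass fraction,
    # using generic water/air properties (assumed example values).
    R = 8.314          # J/(mol K), universal gas constant
    M_vapor = 0.018    # kg/mol, water vapor
    M_gas = 0.029      # kg/mol, air
    L = 2.26e6         # J/kg, latent heat of vaporization
    T_ref, p_ref = 373.15, 101325.0   # reference boiling point and pressure

    def vapor_mass_fraction_at_interface(T_interface, p_total=101325.0):
        # Saturation pressure from the integrated Clausius-Clapeyron relation.
        p_sat = p_ref * np.exp(-(L * M_vapor / R) * (1.0 / T_interface - 1.0 / T_ref))
        x_v = min(p_sat / p_total, 1.0)                 # vapor mole fraction
        # Convert mole fraction to mass fraction in the vapor/air mixture.
        return x_v * M_vapor / (x_v * M_vapor + (1.0 - x_v) * M_gas)

    print(vapor_mass_fraction_at_interface(350.0))
    ```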

  15. On utilizing search methods to select subspace dimensions for kernel-based nonlinear subspace classifiers.

    PubMed

    Kim, Sang-Woon; Oommen, B John

    2005-01-01

    In Kernel-based Nonlinear Subspace (KNS) methods, the subspace dimensions have a strong influence on the performance of the subspace classifier. In order to get a high classification accuracy, a large dimension is generally required. However, if the chosen subspace dimension is too large, it leads to a low performance due to the overlapping of the resultant subspaces and, if it is too small, it increases the classification error due to the poor resulting approximation. The most common approach is of an ad hoc nature, which selects the dimensions based on the so-called cumulative proportion computed from the kernel matrix for each class. In this paper, we propose a new method of systematically and efficiently selecting optimal or near-optimal subspace dimensions for KNS classifiers using a search strategy and a heuristic function termed the Overlapping criterion. The rationale for this function has been motivated in the body of the paper. The task of selecting optimal subspace dimensions is reduced to finding the best ones from a given problem-domain solution space using this criterion as a heuristic function. Thus, the search space can be pruned to very efficiently find the best solution. Our experimental results demonstrate that the proposed mechanism selects the dimensions efficiently without sacrificing the classification accuracy.

  16. A Generalized Grid-Based Fast Multipole Method for Integrating Helmholtz Kernels.

    PubMed

    Parkkinen, Pauli; Losilla, Sergio A; Solala, Eelis; Toivanen, Elias A; Xu, Wen-Hua; Sundholm, Dage

    2017-02-14

    A grid-based fast multipole method (GB-FMM) for optimizing three-dimensional (3D) numerical molecular orbitals in the bubbles and cube double basis has been developed and implemented. The present GB-FMM approach is a generalization of our recently published GB-FMM approach for numerically calculating electrostatic potentials and two-electron interaction energies. The orbital optimization is performed by integrating the Helmholtz kernel in the double basis. The steep part of the functions in the vicinity of the nuclei is represented by one-center bubbles functions, whereas the remaining cube part is expanded on an equidistant 3D grid. The integration of the bubbles part is treated by using one-center expansions of the Helmholtz kernel in spherical harmonics multiplied with modified spherical Bessel functions of the first and second kind, analogously to the numerical inward and outward integration approach for calculating two-electron interaction potentials in atomic structure calculations. The expressions and algorithms for massively parallel calculations on general purpose graphics processing units (GPGPU) are described. The accuracy and correctness of the implementation have been checked by performing Hartree-Fock self-consistent-field (HF-SCF) calculations on H2, H2O, and CO. Our calculations show that an accuracy of 10^-4 to 10^-7 E_h can be reached in HF-SCF calculations on general molecules.
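
    Assuming the bound-state (Yukawa-type) Helmholtz kernel exp(-κ|r-r'|)/|r-r'|, the one-center expansion in modified spherical Bessel functions referred to above can be checked numerically with a short script; the truncation order and test points are arbitrary.

    ```python
    import numpy as np
    from scipy.special import spherical_in, spherical_kn, eval_legendre

    def yukawa_kernel_expansion(kappa, r1, r2, cos_gamma, lmax=40):
        """One-center expansion of exp(-kappa*R)/R with R = |r1 - r2|,
        using modified spherical Bessel functions i_l and k_l:
          exp(-kappa*R)/R = (2*kappa/pi) * sum_l (2l+1) i_l(kappa*r<) k_l(kappa*r>) P_l(cos gamma)
        """
        r_lt, r_gt = min(r1, r2), max(r1, r2)
        ls = np.arange(lmax + 1)
        terms = (2 * ls + 1) * spherical_in(ls, kappa * r_lt) \
                * spherical_kn(ls, kappa * r_gt) * eval_legendre(ls, cos_gamma)
        return (2.0 * kappa / np.pi) * terms.sum()

    # Compare against the closed form for two points.
    kappa, r1, r2, cos_gamma = 1.3, 0.8, 2.1, 0.4
    R = np.sqrt(r1**2 + r2**2 - 2 * r1 * r2 * cos_gamma)
    print(yukawa_kernel_expansion(kappa, r1, r2, cos_gamma), np.exp(-kappa * R) / R)
    ```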

  17. Multi-class Mode of Action Classification of Toxic Compounds Using Logic Based Kernel Methods.

    PubMed

    Lodhi, Huma; Muggleton, Stephen; Sternberg, Mike J E

    2010-09-17

    Toxicity prediction is essential for drug design and the development of effective therapeutics. In this paper we present an in silico strategy to identify the mode of action of toxic compounds, based on the use of a novel logic-based kernel method. The technique uses support vector machines in conjunction with kernels constructed from first-order rules induced by an Inductive Logic Programming system. It constructs multi-class models by using a divide-and-conquer reduction strategy that splits multi-classes into binary groups and solves each individual problem recursively, hence generating an underlying decision-list structure. In order to evaluate the effectiveness of the approach for chemoinformatics problems such as predictive toxicology, we apply it to toxicity classification in aquatic systems. The method is used to identify and classify 442 compounds with respect to the mode of action. The experimental results show that the technique successfully classifies toxic compounds and can be useful in assessing environmental risks. Experimental comparison of the performance of the proposed multi-class scheme with the standard multi-class Inductive Logic Programming algorithm and a multi-class Support Vector Machine yields statistically significant results and demonstrates the potential power and benefits of the approach in identifying compounds of various toxic mechanisms.

  18. Nonlinear PET parametric image reconstruction with MRI information using kernel method

    NASA Astrophysics Data System (ADS)

    Gong, Kuang; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2017-03-01

    Positron emission tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neurology. It is highly sensitive but suffers from relatively poor spatial resolution compared with anatomical imaging modalities such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve PET image quality by incorporating MR information. Previously we have used kernel learning to embed MR information in static PET reconstruction and direct Patlak reconstruction. Here we extend this method to direct reconstruction of nonlinear parameters in a compartment model by using the alternating direction method of multipliers (ADMM) algorithm. Simulation studies show that the proposed method can produce superior parametric images compared with existing methods.
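
    A kernelized image model of the kind referenced here is often built from MR patch features, with the PET image represented as x = Kα; the sketch below constructs such a sparse kernel matrix with a k-nearest-neighbour Gaussian kernel using scikit-learn's NearestNeighbors. The patch size, neighbour count, bandwidth, and row normalization are illustrative choices rather than the paper's settings.

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix
    from sklearn.neighbors import NearestNeighbors

    def mr_guided_kernel_matrix(mr_image, patch_radius=1, k_neighbors=30, sigma=1.0):
        """Build a sparse kernel matrix K from MR patch features, so that the PET
        image can be modelled as x = K @ alpha (kernelized image model)."""
        nz, ny, nx = mr_image.shape
        pad = np.pad(mr_image, patch_radius, mode='edge')
        w = 2 * patch_radius + 1
        # Feature vector for each voxel: the surrounding MR patch, flattened.
        feats = np.stack([pad[i:i+nz, j:j+ny, k:k+nx].ravel()
                          for i in range(w) for j in range(w) for k in range(w)], axis=1)
        nbrs = NearestNeighbors(n_neighbors=k_neighbors).fit(feats)
        dist, idx = nbrs.kneighbors(feats)
        weights = np.exp(-dist**2 / (2.0 * sigma**2))     # Gaussian kernel on patch distances
        rows = np.repeat(np.arange(feats.shape[0]), k_neighbors)
        K = csr_matrix((weights.ravel(), (rows, idx.ravel())),
                       shape=(feats.shape[0],) * 2)
        # Row-normalize so each voxel's kernel weights sum to one (a common choice).
        return csr_matrix(K.multiply(1.0 / K.sum(axis=1)))

    K = mr_guided_kernel_matrix(np.random.default_rng(0).random((16, 16, 16)))
    print(K.shape)
    ```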

  19. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    NASA Astrophysics Data System (ADS)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel

  20. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.

    PubMed

    Novosad, Philip; Reader, Andrew J

    2016-06-21

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel

  1. Weighted Wilcoxon-type Smoothly Clipped Absolute Deviation Method

    PubMed Central

    Wang, Lan; Li, Runze

    2009-01-01

    Shrinkage-type variable selection procedures have recently seen increasing applications in biomedical research. However, their performance can be adversely influenced by outliers in either the response or the covariate space. This paper proposes a weighted Wilcoxon-type smoothly clipped absolute deviation (WW-SCAD) method, which deals with robust variable selection and robust estimation simultaneously. The new procedure can be conveniently implemented with the statistical software R. We establish that the WW-SCAD correctly identifies the set of zero coefficients with probability approaching one and estimates the nonzero coefficients at the rate n^(-1/2). Moreover, with appropriately chosen weights the WW-SCAD is robust with respect to outliers in both the x and y directions. The important special case with constant weights yields an oracle-type estimator with high efficiency in the presence of heavier-tailed random errors. The robustness of the WW-SCAD is partly justified by its asymptotic performance under local shrinking contamination. We propose a BIC-type tuning parameter selector for the WW-SCAD. The performance of the WW-SCAD is demonstrated via simulations and by an application to a study that investigates the effects of personal characteristics and dietary factors on plasma beta-carotene level. PMID:18647294
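
    The SCAD component refers to the smoothly clipped absolute deviation penalty of Fan and Li (2001); a minimal sketch of the penalty and its derivative, with the conventional choice a = 3.7, is given below.

    ```python
    import numpy as np

    def scad_penalty(theta, lam, a=3.7):
        """Smoothly clipped absolute deviation (SCAD) penalty of Fan and Li (2001)."""
        t = np.abs(theta)
        return np.where(
            t <= lam,
            lam * t,
            np.where(
                t <= a * lam,
                (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),
                (a + 1) * lam**2 / 2,
            ),
        )

    def scad_derivative(theta, lam, a=3.7):
        """p'_lambda(|theta|): constant lam near zero, tapering to zero beyond a*lam."""
        t = np.abs(theta)
        return lam * ((t <= lam) + np.maximum(a * lam - t, 0) / ((a - 1) * lam) * (t > lam))

    print(scad_penalty(np.array([0.1, 1.0, 5.0]), lam=0.5))
    ```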

  2. Multi-feature-based robust face detection and coarse alignment method via multiple kernel learning

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Zhang, Di; He, Jun; Yu, Lejun; Wu, Xuewen

    2015-10-01

    Face detection and alignment are two crucial tasks for face recognition, which is a hot topic in the field of defense and security, whether for public safety, personal property, or information and communication security. Common approaches to these tasks in recent years fall into three types: template matching-based, knowledge-based, and machine learning-based, which tend to be multi-step, computationally costly, or lacking in robustness. After a thorough analysis of a large number of Chinese face images without hats, we propose a novel face detection and coarse alignment method that is inspired by these three types of methods. It fuses multiple features using the Simple Multiple Kernel Learning (SimpleMKL) algorithm. The proposed method is compared with competitive and related algorithms and demonstrated to achieve promising results.

  3. Evaluation of Two New Smoothing Methods in Equating: The Cubic B-Spline Presmoothing Method and the Direct Presmoothing Method

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2009-01-01

    This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…

  4. Evaluation of Two New Smoothing Methods in Equating: The Cubic B-Spline Presmoothing Method and the Direct Presmoothing Method

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2009-01-01

    This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…

  5. Immersed Boundary Smooth Extension (IBSE): A high-order method for solving incompressible flows in arbitrary smooth domains

    NASA Astrophysics Data System (ADS)

    Stein, David B.; Guy, Robert D.; Thomases, Becca

    2017-04-01

    The Immersed Boundary method is a simple, efficient, and robust numerical scheme for solving PDE in general domains, yet for fluid problems it only achieves first-order spatial accuracy near embedded boundaries for the velocity field and fails to converge pointwise for elements of the stress tensor. In a previous work we introduced the Immersed Boundary Smooth Extension (IBSE) method, a variation of the IB method that achieves high-order accuracy for elliptic PDE by smoothly extending the unknown solution of the PDE from a given smooth domain to a larger computational domain, enabling the use of simple Cartesian-grid discretizations. In this work, we extend the IBSE method to allow for the imposition of a divergence constraint, and demonstrate high-order convergence for the Stokes and incompressible Navier-Stokes equations: up to third-order pointwise convergence for the velocity field, and second-order pointwise convergence for all elements of the stress tensor. The method is flexible to the underlying discretization: we demonstrate solutions produced using both a Fourier spectral discretization and a standard second-order finite-difference discretization.

  6. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2016-02-25

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the kernel eigenvectors by importance measured in terms of entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation of the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
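
    The entropy-based ordering that separates KECA from kernel PCA can be sketched as follows: the Rényi entropy estimate (1/N²)·1ᵀK1 decomposes over kernel eigenpairs, and the eigen-directions with the largest contributions are retained. The Gaussian bandwidth, component count, and toy data below are placeholders.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def keca_features(X, n_components=2, sigma=1.0):
        """Kernel entropy component analysis sketch: keep the kernel eigen-directions
        that contribute most to the Renyi entropy estimate (1/N^2) 1^T K 1."""
        K = np.exp(-cdist(X, X, 'sqeuclidean') / (2 * sigma**2))   # Gaussian kernel matrix
        lam, E = np.linalg.eigh(K)                                 # ascending eigenvalues
        lam, E = lam[::-1], E[:, ::-1]
        N = X.shape[0]
        entropy_contrib = lam * E.sum(axis=0) ** 2 / N**2          # per-eigenpair entropy terms
        order = np.argsort(entropy_contrib)[::-1][:n_components]   # entropy (not variance) ranking
        # Projections onto the selected kernel principal axes.
        return E[:, order] * np.sqrt(np.maximum(lam[order], 0.0))

    X = np.random.default_rng(1).normal(size=(100, 3))
    print(keca_features(X).shape)
    ```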

  7. Methods and electrolytes for electrodeposition of smooth films

    DOEpatents

    Zhang, Jiguang; Xu, Wu; Graff, Gordon L; Chen, Xilin; Ding, Fei; Shao, Yuyan

    2015-03-17

    Electrodeposition involving an electrolyte having a surface-smoothing additive can result in self-healing, instead of self-amplification, of initial protuberant tips that give rise to roughness and/or dendrite formation on the substrate and/or film surface. For electrodeposition of a first conductive material (C1) on a substrate from one or more reactants in an electrolyte solution, the electrolyte solution is characterized by a surface-smoothing additive containing cations of a second conductive material (C2), wherein cations of C2 have an effective electrochemical reduction potential in the solution lower than that of the reactants.

  8. Parametric kernel-driven active contours for image segmentation

    NASA Astrophysics Data System (ADS)

    Wu, Qiongzhi; Fang, Jiangxiong

    2012-10-01

    We investigate a parametric kernel-driven active contour (PKAC) model, which implicitly applies a kernel mapping and a piecewise-constant model to the image data via a kernel function. The proposed model consists of a curve evolution functional with three terms: global kernel-driven and local kernel-driven terms, which evaluate the deviation of the mapped image data within each region from the piecewise-constant model, and a regularization term expressed as the length of the evolution curves. Through the local kernel-driven term, the proposed model can effectively segment images with intensity inhomogeneity by incorporating local image information. By balancing the weight between the global kernel-driven term and the local kernel-driven term, the proposed model can segment images with either intensity homogeneity or intensity inhomogeneity. To ensure the smoothness of the level set function and reduce the computational cost, a distance regularizing term is applied to penalize the deviation of the level set function and eliminate the requirement of re-initialization. Compared with the local image fitting model and the local binary fitting model, experimental results show the advantages of the proposed method in terms of computational efficiency and accuracy.

  9. A new method of NIR face recognition using kernel projection DCV and neural networks

    NASA Astrophysics Data System (ADS)

    Qiao, Ya; Lu, Yuan; Feng, Yun-song; Li, Feng; Ling, Yongshun

    2013-09-01

    A new face recognition system is proposed that uses an active near-infrared imaging system (ANIRIS) for face image acquisition, kernel discriminative common vector (KDCV) as the feature extraction algorithm, and a neural network as the recognition method. The ANIRIS was built from 40 NIR LEDs, which serve as the active light source, and an HWB800-IR-80 near-infrared filter used together with a CCD camera as the imaging detector. Its role in reducing the influence of varying illumination on the recognition rate is discussed. The KDCV feature extraction and neural network recognition parts were implemented in Matlab. Experiments on the HITSZ Lab2 face database and a self-built face database show that the average recognition rate reached more than 95%, demonstrating the effectiveness of the proposed system.

  10. Impact of beam smoothing method on direct drive target performance for the NIF

    SciTech Connect

    Rothenberg, J.E.; Weber, S.V.

    1997-01-01

    The impact of smoothing method on the performance of a direct drive target is modeled and examined in terms of its l-mode spectrum. In particular, two classes of smoothing methods are compared, smoothing by spectral dispersion (SSD) and the induced spatial incoherence (ISI) method. It is found that SSD using sinusoidal phase modulation (FM) results in poor smoothing at low l-modes and therefore inferior target performance at both peak velocity and ignition. This disparity is most notable if the effective imprinting integration time of the target is small. However, using SSD with more generalized phase modulation can result in smoothing at low l-modes which is identical to that obtained with ISI. For either smoothing method, the calculations indicate that at peak velocity the surface perturbations are about 100 times larger than that which leads to nonlinear hydrodynamics. Modeling of the hydrodynamic nonlinearity shows that saturation can reduce the amplified nonuniformities to the level required to achieve ignition for either smoothing method. The low l-mode behavior at ignition is found to be strongly dependent on the induced divergence of the smoothing method. For the NIF parameters the target performance asymptotes for smoothing divergence larger than approximately 100 μrad.

  11. Discriminating between HuR and TTP binding sites using the k-spectrum kernel method

    PubMed Central

    Goldberg, Debra S.; Dowell, Robin

    2017-01-01

    Background: The RNA binding proteins (RBPs) human antigen R (HuR) and Tristetraprolin (TTP) are known to exhibit competitive binding but have opposing effects on the bound messenger RNA (mRNA). How cells discriminate between the two proteins is an interesting problem. Machine learning approaches, such as support vector machines (SVMs), may be useful in the identification of discriminative features. However, this method has yet to be applied to studies of RNA binding protein motifs. Results: Applying the k-spectrum kernel to a support vector machine (SVM), we first verified the published binding sites of both HuR and TTP. Additional feature engineering highlighted the U-rich binding preference of HuR and the AU-rich binding preference of TTP. Domain adaptation along with multi-task learning was used to predict the common binding sites. Conclusion: The distinction between HuR and TTP binding appears to lie in subtle content features. HuR prefers strongly U-rich sequences whereas TTP prefers AU-rich sequences; with increasing A content, the sequences are more likely to be bound only by TTP. Our model is consistent with competitive binding of the two proteins, particularly at intermediate AU-balanced sequences. This suggests that fine changes in the A/U balance within an untranslated region (UTR) can alter the binding and subsequent stability of the message. Both feature engineering and domain adaptation emphasized the extent to which these proteins recognize similar general sequence features. This work suggests that the k-spectrum kernel method could be useful when studying RNA binding proteins, and that domain adaptation techniques such as feature augmentation could be employed particularly when examining RBPs with similar binding preferences. PMID:28333956
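
    A k-spectrum kernel simply takes the inner product of k-mer count vectors; a minimal sketch is shown below, where the RNA alphabet, k = 3, and the toy sequences are illustrative, and the resulting Gram matrix would normally be passed to an SVM (e.g., a precomputed-kernel SVM).

    ```python
    import numpy as np
    from itertools import product

    def kmer_counts(seq, k=3, alphabet='ACGU'):
        """Feature map of the k-spectrum kernel: counts of every length-k substring."""
        index = {''.join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
        v = np.zeros(len(index))
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if kmer in index:
                v[index[kmer]] += 1
        return v

    def spectrum_kernel(seq_a, seq_b, k=3):
        """k-spectrum kernel value: inner product of k-mer count vectors."""
        return float(kmer_counts(seq_a, k) @ kmer_counts(seq_b, k))

    # toy AU-rich vs U-rich 3' UTR fragments
    print(spectrum_kernel('AUUUAUUUAUUUA', 'UUUUUUUUUUUU', k=3))
    ```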

  12. An Evaluation of Kernel Equating: Parallel Equating with Classical Methods in the SAT Subject Tests[TM] Program. Research Report. ETS RR-09-06

    ERIC Educational Resources Information Center

    Grant, Mary C.; Zhang, Lilly; Damiano, Michele

    2009-01-01

    This study investigated kernel equating methods by comparing these methods to operational equatings for two tests in the SAT Subject Tests[TM] program. GENASYS (ETS, 2007) was used for all equating methods and scaled score kernel equating results were compared to Tucker, Levine observed score, chained linear, and chained equipercentile equating…

  13. Lattice-Boltzmann method combined with smoothed-profile method for particulate suspensions

    NASA Astrophysics Data System (ADS)

    Jafari, Saeed; Yamamoto, Ryoichi; Rahnama, Mohamad

    2011-02-01

    We developed a simulation scheme based on the coupling of the lattice-Boltzmann method with the smoothed-profile method (SPM) to predict the dynamic behavior of colloidal dispersions. The SPM provides a coupling scheme between continuum fluid dynamics and rigid-body dynamics through a smoothed profile of the fluid-particle interface. In this approach, the flow is computed on fixed Eulerian grids which are also used for the particles. Owing to the use of the same grids for simulation of fluid flow and particles, this method is highly efficient. Furthermore, an external boundary is used to impose the no-slip boundary condition at the fluid-particle interface. In addition, the operations in the present method are local; it can be easily programmed for parallel machines. The methodology is validated by comparing with previously published data.
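
    A common choice for the smoothed profile that couples rigid particles to the fixed Eulerian grid is a tanh indicator that goes from one inside the particle to zero in the fluid over an interface of thickness ξ; the sketch below uses that form with arbitrary radius and interface-width values, and is not taken from the paper.

    ```python
    import numpy as np

    def smoothed_profile(grid_points, particle_center, radius, xi):
        """Smoothed-profile indicator: ~1 inside the particle, ~0 in the fluid,
        varying smoothly over an interface of thickness xi."""
        d = np.linalg.norm(grid_points - particle_center, axis=-1)
        return 0.5 * (np.tanh((radius - d) / xi) + 1.0)

    # evaluate on a small 2-D grid
    x = np.linspace(0, 1, 64)
    X, Y = np.meshgrid(x, x)
    pts = np.stack([X, Y], axis=-1)
    phi = smoothed_profile(pts, particle_center=np.array([0.5, 0.5]), radius=0.2, xi=0.02)
    print(phi.min(), phi.max())
    ```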

  14. Lattice-Boltzmann method combined with smoothed-profile method for particulate suspensions.

    PubMed

    Jafari, Saeed; Yamamoto, Ryoichi; Rahnama, Mohamad

    2011-02-01

    We developed a simulation scheme based on the coupling of the lattice-Boltzmann method with the smoothed-profile method (SPM) to predict the dynamic behavior of colloidal dispersions. The SPM provides a coupling scheme between continuum fluid dynamics and rigid-body dynamics through a smoothed profile of the fluid-particle interface. In this approach, the flow is computed on fixed Eulerian grids which are also used for the particles. Owing to the use of the same grids for simulation of fluid flow and particles, this method is highly efficient. Furthermore, an external boundary is used to impose the no-slip boundary condition at the fluid-particle interface. In addition, the operations in the present method are local; it can be easily programmed for parallel machines. The methodology is validated by comparing with previously published data.

  15. Suppression of stochastic pulsation in laser-plasma interaction by smoothing methods

    NASA Astrophysics Data System (ADS)

    Hora, Heinrich; Aydin, Meral

    1992-04-01

    The control of the very complex behavior of a plasma with laser interaction by smoothing with induced spatial incoherence or other methods was related to improving the lateral uniformity of the irradiation. While this is important, it is shown from numerical hydrodynamic studies that the very strong temporal pulsation (stuttering) will mostly be suppressed by these smoothing methods too.

  16. Suppression of stochastic pulsation in laser-plasma interaction by smoothing methods

    SciTech Connect

    Hora, H.; Aydin, M.

    1992-04-15

    The control of the very complex behavior of a plasma with laser interaction by smoothing with induced spatial incoherence or other methods was related to improving the lateral uniformity of the irradiation. While this is important, it is shown from numerical hydrodynamic studies that the very strong temporal pulsation (stuttering) will mostly be suppressed by these smoothing methods too.

  17. High-order neural networks and kernel methods for peptide-MHC binding prediction.

    PubMed

    Kuksa, Pavel P; Min, Martin Renqiang; Dugar, Rishabh; Gerstein, Mark

    2015-11-15

    Effective computational methods for peptide-protein binding prediction can greatly help clinical peptide vaccine search and design. However, previous computational methods fail to capture key nonlinear high-order dependencies between different amino acid positions. As a result, they often produce low-quality rankings of strong binding peptides. To solve this problem, we propose nonlinear high-order machine learning methods including high-order neural networks (HONNs) with possible deep extensions and high-order kernel support vector machines to predict major histocompatibility complex-peptide binding. The proposed high-order methods improve the quality of binding predictions over other prediction methods. With the proposed methods, a significant gain of up to 25-40% is observed on the benchmark and reference peptide datasets and tasks. In addition, for the first time, our experiments show that pre-training with high-order semi-restricted Boltzmann machines significantly improves the performance of feed-forward HONNs. Moreover, our experiments show that the proposed shallow HONN outperforms the popular pre-trained deep neural network on most tasks, which demonstrates the effectiveness of modelling high-order feature interactions for predicting major histocompatibility complex-peptide binding. There is no associated distributable software. Contact: renqiang@nec-labs.com or mark.gerstein@yale.edu. Supplementary data are available at Bioinformatics online.

  18. Prediction of posttranslational modification sites from amino acid sequences with kernel methods.

    PubMed

    Xu, Yan; Wang, Xiaobo; Wang, Yongcui; Tian, Yingjie; Shao, Xiaojian; Wu, Ling-Yun; Deng, Naiyang

    2014-03-07

    Post-translational modification (PTM) is the chemical modification of a protein after its translation and one of the later steps in protein biosynthesis for many proteins. It plays an important role in modifying the end product of gene expression and contributes to biological processes and disease conditions. However, the experimental methods for identifying PTM sites are both costly and time-consuming, so computational methods are highly desirable. In this work, a novel encoding method, PSPM (position-specific propensity matrices), is developed. A support vector machine (SVM) with the kernel matrix computed from the PSPM encoding is then applied to predict the PTM sites. The experimental results indicate that the performance of the new method is better than or comparable to that of existing methods. Therefore, the new method is a useful computational resource for the identification of PTM sites. A unified standalone software package, PTMPred, is developed. It can be used to predict all types of PTM sites if the user provides the training datasets. The software can be freely downloaded from http://www.aporc.org/doc/wiki/PTMPred.

  19. On radiative transfer using synthetic kernel and simplified spherical harmonics methods in linearly anisotropically scattering media

    NASA Astrophysics Data System (ADS)

    Altaç, Zekeriya

    2014-11-01

    The synthetic kernel (SK_N) method is employed for a 3D absorbing, emitting and linearly anisotropically scattering inhomogeneous medium. The standard SK_N approximation is applied only to the diffusive components of the radiative transfer equations. An alternative SK_N (SK_N*) method is also derived in full 3-D generality by extending the approximation to the direct wall contributions. Complete sets of boundary conditions for both SK_N approaches are rigorously obtained. The simplified spherical harmonics (P_{2N-1} or SP_{2N-1}) and simplified double spherical harmonics (DP_{N-1} or SDP_{N-1}) equations for a linearly anisotropically scattering homogeneous medium are also derived. The resulting full P_{2N-1} and DP_{N-1} (or SP_{2N-1} and SDP_{N-1}) equations are cast as diagonalized second-order coupled diffusion-like equations. By this analysis, it is shown that the SK_N method is a high-order approximation, and that simply by the selection of full- or half-range Gauss-Legendre quadratures, the SK_N* equations become identical to the P_{2N-1} or DP_{N-1} (or SP_{2N-1} or SDP_{N-1}) equations. Numerical verification of all methods presented is carried out using a 1D participating isotropic slab medium. The SK_N method proves to be more accurate than the SK_N* approximation, but it is analytically more involved. It is shown that the SK_N* method with the proposed BCs converges with increasing order of approximation, and the BCs are applicable to the SP_N or SDP_N methods.

  20. Single kernel method for detection of 2-acetyl-1-pyrroline in aromatic rice germplasm using SPME-GC/MS

    USDA-ARS?s Scientific Manuscript database

    INTRODUCTION Aromatic rice or fragrant rice, (Oryza sativa L.), has a strong popcorn-like aroma due to the presence of a five-membered N-heterocyclic ring compound known as 2-acetyl-1-pyrroline (2-AP). To date, existing methods for detecting this compound in rice require the use of several kernels. ...

  1. Nonlinear hyperspectral unmixing based on constrained multiple kernel NMF

    NASA Astrophysics Data System (ADS)

    Cui, Jiantao; Li, Xiaorun; Zhao, Liaoying

    2014-05-01

    Nonlinear spectral unmixing constitutes an important field of research for hyperspectral imagery. An unsupervised nonlinear spectral unmixing algorithm, namely multiple kernel constrained nonnegative matrix factorization (MKCNMF), is proposed by coupling multiple-kernel selection with kernel NMF. Additionally, a minimum endmemberwise distance constraint and an abundance smoothness constraint are introduced to alleviate the uniqueness problem of NMF in the algorithm. In the MKCNMF, the two problems of optimizing the matrices and selecting the proper kernel are jointly solved. The performance of the proposed unmixing algorithm is evaluated via experiments based on synthetic and real hyperspectral data sets. The experimental results demonstrate that the proposed method outperforms some existing unmixing algorithms in terms of spectral angle distance (SAD) and abundance fractions.

  2. Application of dose kernel calculation using a simplified Monte Carlo method to treatment plan for scanned proton beams.

    PubMed

    Mizutani, Shohei; Takada, Yoshihisa; Kohno, Ryosuke; Hotta, Kenji; Tansho, Ryohei; Akimoto, Tetsuo

    2016-03-01

    Full Monte Carlo (FMC) calculation of dose distribution has been recognized to have superior accuracy compared with the pencil beam algorithm (PBA). However, since FMC methods require a long calculation time, it is difficult to apply them to routine treatment planning at present. In order to improve the situation, a simplified Monte Carlo (SMC) method has been introduced to the dose kernel calculation applicable to the dose optimization procedure for proton pencil beam scanning. We have evaluated the accuracy of the SMC calculation by comparing a result of the dose kernel calculation using the SMC method with that using the FMC method in an inhomogeneous phantom. The dose distribution obtained by the SMC method was in good agreement with that obtained by the FMC method. To assess the usefulness of SMC calculation in clinical situations, we have compared results of the dose calculation using the SMC with those using the PBA method for three clinical cases of tumor treatment. The dose distributions calculated with the PBA dose kernels appear to be homogeneous in the planning target volumes (PTVs). In contrast, the dose distributions calculated with the SMC dose kernels with the spot weights optimized with the PBA method show largely inhomogeneous dose distributions in the PTVs, while those with the spot weights optimized with the SMC method have moderately homogeneous distributions in the PTVs. Calculation using the SMC method is faster than that using GEANT4 by three orders of magnitude. In addition, the graphics processing unit (GPU) boosts the calculation speed by 13 times for treatment planning using the SMC method. Hence, the SMC method will be applicable to routine clinical treatment planning for reproducing complex dose distributions more accurately than the PBA method in a reasonably short time by use of the GPU-based calculation engine. PACS number(s): 87.55.Gh.

  3. Application of dose kernel calculation using a simplified Monte Carlo method to treatment plan for scanned proton beams.

    PubMed

    Mizutani, Shohei; Takada, Yoshihisa; Kohno, Ryosuke; Hotta, Kenji; Tansho, Ryohei; Akimoto, Tetsuo

    2016-03-08

    Full Monte Carlo (FMC) calculation of dose distribution has been recognized to have superior accuracy compared with the pencil beam algorithm (PBA). However, since FMC methods require a long calculation time, it is difficult to apply them to routine treatment planning at present. In order to improve the situation, a simplified Monte Carlo (SMC) method has been introduced to the dose kernel calculation applicable to the dose optimization procedure for proton pencil beam scanning. We have evaluated the accuracy of the SMC calculation by comparing a result of the dose kernel calculation using the SMC method with that using the FMC method in an inhomogeneous phantom. The dose distribution obtained by the SMC method was in good agreement with that obtained by the FMC method. To assess the usefulness of SMC calculation in clinical situations, we have compared results of the dose calculation using the SMC with those using the PBA method for three clinical cases of tumor treatment. The dose distributions calculated with the PBA dose kernels appear to be homogeneous in the planning target volumes (PTVs). In contrast, the dose distributions calculated with the SMC dose kernels with the spot weights optimized with the PBA method show largely inhomogeneous dose distributions in the PTVs, while those with the spot weights optimized with the SMC method have moderately homogeneous distributions in the PTVs. Calculation using the SMC method is faster than that using GEANT4 by three orders of magnitude. In addition, the graphics processing unit (GPU) boosts the calculation speed by 13 times for treatment planning using the SMC method. Hence, the SMC method will be applicable to routine clinical treatment planning for reproducing complex dose distributions more accurately than the PBA method in a reasonably short time by use of the GPU-based calculation engine.

  4. Computational Experience with the Spectral Smoothing Method for Differentiating Noisy Data

    NASA Astrophysics Data System (ADS)

    Baart, M. L.

    1981-07-01

    When applied to non-exact (noisy) data, numerical methods for calculating derivatives, in particular derivatives of order higher than the first, based on model functions fitted to exact data become unsatisfactory. The spectral smoothing method of Anderssen and Bloomfield, developed to solve this problem, entails calculation of a smoothing parameter and the choice of an optimal-order Sobolev norm that is used as regularizer. This method is used to differentiate, smooth and integrate noisy data. A likelihood function is minimized to determine the smoothing parameter. We present numerical results suggesting that this function can be jointly minimized with respect to the smoothing parameter and the order of the regularizing norm, thus yielding a fully automatic numerical differentiation procedure.
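
    The spectral idea can be illustrated by damping high frequencies with a Sobolev-type filter before differentiating in the Fourier domain; in the sketch below the smoothing parameter and norm order are fixed by hand rather than chosen by the likelihood minimization described above, and periodic samples are assumed.

    ```python
    import numpy as np

    def spectral_derivative(y, dx, lam=1e-3, p=2):
        """Differentiate noisy, periodic samples y by filtering in the Fourier domain.

        The filter 1/(1 + lam*|k|^(2p)) plays the role of a Sobolev-norm regularizer
        of order p with smoothing parameter lam; both are chosen by hand here.
        """
        n = len(y)
        k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
        Y_smooth = np.fft.fft(y) / (1.0 + lam * np.abs(k) ** (2 * p))
        return np.real(np.fft.ifft(1j * k * Y_smooth))

    x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    noisy = np.sin(x) + 0.05 * np.random.default_rng(2).normal(size=x.size)
    dy = spectral_derivative(noisy, dx=x[1] - x[0])
    print(np.max(np.abs(dy - np.cos(x))))   # error vs the exact derivative cos(x)
    ```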

  5. Electro-optical deflectors as a method of beam smoothing for Inertial Confinement Fusion

    SciTech Connect

    Rothenberg, J.E.

    1997-01-01

    The electro-optic deflector is analyzed and compared to smoothing by spectral dispersion for efficacy as a beam smoothing method for ICF. It is found that the electro-optic deflector is inherently somewhat less efficient when compared either on the basis of equal peak phase modulation or equal generated bandwidth.

  6. A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems.

    PubMed

    Ying, Wenjun; Henriquez, Craig S

    2007-12-10

    This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with standard finite difference method (FDM) or finite element method (FEM), where the right hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong.

  7. A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems ⋆

    PubMed Central

    Ying, Wenjun; Henriquez, Craig S.

    2013-01-01

    This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with standard finite difference method (FDM) or finite element method (FEM), where the right hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600

  8. Adaptive reproducing kernel particle method for extraction of the cortical surface.

    PubMed

    Xu, Meihe; Thompson, Paul M; Toga, Arthur W

    2006-06-01

    We propose a novel adaptive approach based on the Reproducing Kernel Particle Method (RKPM) to extract the cortical surfaces of the brain from three-dimensional (3-D) magnetic resonance images (MRIs). To formulate the discrete equations of the deformable model, a flexible particle shape function is employed in the Galerkin approximation of the weak form of the equilibrium equations. The proposed support generation method ensures that the supports of all particles cover the entire computational domain. The deformable model is adaptively adjusted by dilating the shape function and by inserting or merging particles in high-curvature regions or regions stopped by the target boundary. The shape function of the particle with a dilation parameter is adaptively constructed in response to particle insertion or merging. The proposed method offers flexibility in representing highly convolved structures and in refining the deformable models. Self-intersection of the surface during evolution is prevented by tracing backward along the gradient descent direction from the crest interface of the distance field, which is computed by fast marching. These operations involve a significant computational cost. The initial model for the deformable surface is simple and requires no prior knowledge of the segmented structure. No specific template is required, e.g., an average cortical surface obtained from many subjects. The extracted cortical surface efficiently localizes the depths of the cerebral sulci, unlike some other active surface approaches that penalize regions of high curvature. Comparisons with manually segmented landmark data are provided to demonstrate the high accuracy of the proposed method. We also compare the proposed method to the finite element method and to a commonly used cortical surface extraction approach, the CRUISE method. We also show that the independence of the shape functions of the RKPM from the underlying mesh enhances the convergence speed of the deformable

  9. Haplotype Kernel Association Test as a Powerful Method to Identify Chromosomal Regions Harboring Uncommon Causal Variants

    PubMed Central

    Lin, Wan-Yu; Yi, Nengjun; Lou, Xiang-Yang; Zhi, Degui; Zhang, Kui; Gao, Guimin; Tiwari, Hemant K.; Liu, Nianjun

    2014-01-01

    For most complex diseases, the fraction of heritability that can be explained by the variants discovered from genome-wide association studies is minor. Although the so-called ‘rare variants’ (minor allele frequency [MAF] < 1%) have attracted increasing attention, they are unlikely to account for much of the ‘missing heritability’ because very few people may carry these rare variants. The genetic variants that are likely to fill in the ‘missing heritability’ include uncommon causal variants (MAF < 5%), which are generally untyped in association studies using tagging single-nucleotide polymorphisms (SNPs) or commercial SNP arrays. Developing powerful statistical methods can help to identify chromosomal regions harboring uncommon causal variants, while bypassing the genome-wide or exome-wide next-generation sequencing. In this work, we propose a haplotype kernel association test (HKAT) that is equivalent to testing the variance component of random effects for distinct haplotypes. With an appropriate weighting scheme given to haplotypes, we can further enhance the ability of HKAT to detect uncommon causal variants. With scenarios simulated according to the population genetics theory, HKAT is shown to be a powerful method for detecting chromosomal regions harboring uncommon causal variants. PMID:23740760

  10. An adaptive segment method for smoothing lidar signal based on noise estimation

    NASA Astrophysics Data System (ADS)

    Wang, Yuzhao; Luo, Pingping

    2014-10-01

    An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress the noise. In the ASSM, the noise is defined as the 3σ of the background signal. An integer N is defined for finding the changing positions in the signal curve. If the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All the end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows in the signals are fixed. The ASSM creates changing end points in different signals, so the smoothing windows can be set adaptively. The windows are always set as half of the segment length, and then the average smoothing method is applied within each segment. An iterative process is required to reduce the end-point aberration effect in the average smoothing method, and two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, which means frequency-domain disturbances are avoided. A lidar echo was simulated in the experimental work. The echo was assumed to be produced by a space-borne lidar (e.g., CALIOP), and white Gaussian noise was added to the echo to represent the random noise resulting from the environment and the detector. The novel method, ASSM, was applied to the noisy echo to filter the noise. In the test, N was set to 3 and two iterations were used. The results show that the signal can be smoothed adaptively by the ASSM, but N and the number of iterations might need to be optimized when the ASSM is applied to a different lidar.
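
    A minimal sketch of the segmentation-and-averaging idea described above is given below; the threshold rule (3Nσ on adjacent differences), half-segment windows, and two iterations follow the description, but the function and its toy echo are illustrative rather than the authors' code.

    ```python
    import numpy as np

    def assm_smooth(signal, sigma, N=3, iterations=2):
        """Adaptive segmentation smoothing sketch.

        Segment boundaries are placed where adjacent samples differ by more than
        3*N*sigma; each segment is then moving-average smoothed with a window of
        roughly half its own length, and the pass is repeated a couple of times.
        """
        y = np.asarray(signal, dtype=float).copy()
        # End points where the jump between adjacent samples exceeds the threshold.
        jumps = np.where(np.abs(np.diff(y)) > 3 * N * sigma)[0] + 1
        edges = np.concatenate(([0], jumps, [len(y)]))
        for _ in range(iterations):
            for a, b in zip(edges[:-1], edges[1:]):
                win = max(1, (b - a) // 2)
                kernel = np.ones(win) / win
                y[a:b] = np.convolve(y[a:b], kernel, mode='same')  # average smoothing per segment
        return y

    sigma = 0.05
    t = np.linspace(0, 1, 500)
    echo = np.exp(-5 * t) + sigma * np.random.default_rng(3).normal(size=t.size)
    print(assm_smooth(echo, sigma).shape)
    ```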

  11. Alternative methods to smooth the Earth's gravity field

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1981-01-01

    Convolutions on the sphere with corresponding convolution theorems are developed for one and two dimensional functions. Some of these results are used in a study of isotropic smoothing operators or filters. Well known filters in Fourier spectral analysis, such as the rectangular, Gaussian, and Hanning filters, are adapted for data on a sphere. The low-pass filter most often used on gravity data is the rectangular (or Pellinen) filter. However, its spectrum has relatively large sidelobes; and therefore, this filter passes a considerable part of the upper end of the gravity spectrum. The spherical adaptations of the Gaussian and Hanning filters are more efficient in suppressing the high-frequency components of the gravity field since their frequency response functions are strongly tapered at the high frequencies with no, or small, sidelobes. Formulas are given for practical implementation of these new filters.

  12. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    ERIC Educational Resources Information Center

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…

  13. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    ERIC Educational Resources Information Center

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…

  14. Tomography, Adjoint Methods, Time-Reversal, and Banana-Doughnut Kernels

    NASA Astrophysics Data System (ADS)

    Tape, C.; Tromp, J.; Liu, Q.

    2004-12-01

    We demonstrate that Fréchet derivatives for tomographic inversions may be obtained based upon just two calculations for each earthquake: one calculation for the current model and a second, `adjoint', calculation that uses time-reversed signals at the receivers as simultaneous, fictitious sources. For a given model m, we consider objective functions χ(m) that minimize differences between waveforms, traveltimes, or amplitudes. We show that the Fréchet derivatives of such objective functions may be written in the generic form δχ = ∫_V K_m(x) δ ln m(x) d³x, where δ ln m = δm/m denotes the relative model perturbation. The volumetric kernel K_m is defined throughout the model volume V and is determined by time-integrated products between spatial and temporal derivatives of the regular displacement field s and the adjoint displacement field s† obtained by using time-reversed signals at the receivers as simultaneous sources. In waveform tomography the time-reversed signal consists of differences between the data and the synthetics, in traveltime tomography it is determined by synthetic velocities, and in amplitude tomography it is controlled by synthetic displacements. For each event, the construction of the kernel K_m requires one forward calculation for the regular field s and one adjoint calculation involving the fields s and s†. For multiple events the kernels are simply summed. The final summed kernel is controlled by the distribution of events and stations and thus determines image resolution. In the case of traveltime tomography, the kernels K_m are weighted combinations of banana-doughnut kernels. We demonstrate also how amplitude anomalies may be inverted for lateral variations in elastic and anelastic structure. The theory is illustrated based upon 2D spectral-element simulations.

  15. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.

  16. A method for computing the kernel of the downwash integral equation for arbitrary complex frequencies

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.; Rowe, W. S.

    1984-01-01

    For the design of active controls to stabilize flight vehicles, which requires the use of unsteady aerodynamics that are valid for arbitrary complex frequencies, algorithms are derived for evaluating the nonelementary part of the kernel of the integral equation that relates unsteady pressure to downwash. This part of the kernel is separated into an infinite limit integral that is evaluated using Bessel and Struve functions and into a finite limit integral that is expanded in series and integrated termwise in closed form. The developed series expansions gave reliable answers for all complex reduced frequencies and executed faster than exponential approximations for many pressure stations.

  17. Study on preparation method of Zanthoxylum bungeanum seeds kernel oil with zero trans-fatty acids.

    PubMed

    Liu, Tong; Yao, Shi-Yong; Yin, Zhong-Yi; Zheng, Xu-Xu; Shen, Yu

    2016-04-01

    The seed of Zanthoxylum bungeanum (Z. bungeanum) is a by-product of pepper production and is rich in unsaturated fatty acids, cellulose, and protein. The seed oil obtained by the traditional production processes of squeezing or solvent extraction is of poor quality and cannot be used as edible oil. In this paper, a new preparation method for Z. bungeanum seed kernel oil (ZSKO) was developed by comparing the advantages and disadvantages of alkali saponification-cold squeezing, alkali saponification-solvent extraction, and alkali saponification-supercritical fluid extraction with carbon dioxide (SFE-CO2). The results showed that alkali saponification-cold squeezing was the optimal preparation method for ZSKO, which comprises the following steps: the Z. bungeanum seed is pretreated by alkali saponification with 10% NaOH (w/w) at a solution temperature of 80 °C for a saponification reaction time of 45 min; the pretreated seed is separated by filtering, water washing, and overnight drying at 50 °C; repeated squeezing is then carried out at 60 °C and 15% moisture content until no more oil is released; and the ZSKO is finally recovered by centrifugation. The produced ZSKO contained more than 90% unsaturated fatty acids and no trans-fatty acids, and was shown to be a good edible oil with low acid and peroxide values. It was demonstrated that the alkali saponification-cold squeezing process could be scaled up and applied to industrialized production of ZSKO.

  18. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  19. Smooth statistical torsion angle potential derived from a large conformational database via adaptive kernel density estimation improves the quality of NMR protein structures.

    PubMed

    Bermejo, Guillermo A; Clore, G Marius; Schwieters, Charles D

    2012-12-01

    Statistical potentials that embody torsion angle probability densities in databases of high-quality X-ray protein structures supplement the incomplete structural information of experimental nuclear magnetic resonance (NMR) datasets. By biasing the conformational search during the course of structure calculation toward highly populated regions in the database, the resulting protein structures display better validation criteria and accuracy. Here, a new statistical torsion angle potential is developed using adaptive kernel density estimation to extract probability densities from a large database of more than 10⁶ quality-filtered amino acid residues. Incorporated into the Xplor-NIH software package, the new implementation clearly outperforms an older potential, widely used in NMR structure elucidation, in that it exhibits simultaneously smoother and sharper energy surfaces, and results in protein structures with improved conformation, nonbonded atomic interactions, and accuracy. Copyright © 2012 The Protein Society.

  20. Comparing Thermal Process Validation Methods for Salmonella Inactivation on Almond Kernels.

    PubMed

    Jeong, Sanghyup; Marks, Bradley P; James, Michael K

    2017-01-01

    Ongoing regulatory changes are increasing the need for reliable process validation methods for pathogen reduction processes involving low-moisture products; however, the reliability of various validation methods has not been evaluated. Therefore, the objective was to quantify accuracy and repeatability of four validation methods (two biologically based and two based on time-temperature models) for thermal pasteurization of almonds. Almond kernels were inoculated with Salmonella Enteritidis phage type 30 or Enterococcus faecium (NRRL B-2354) at ~10^8 CFU/g, equilibrated to 0.24, 0.45, 0.58, or 0.78 water activity (aw), and then heated in a pilot-scale, moist-air impingement oven (dry bulb 121, 149, or 177°C; dew point <33.0, 69.4, 81.6, or 90.6°C; v_air = 2.7 m/s) to a target lethality of ~4 log. Almond surface temperatures were measured in two ways, and those temperatures were used to calculate Salmonella inactivation using a traditional (D, z) model and a modified model accounting for process humidity. Among the process validation methods, both methods based on time-temperature models had better repeatability, with replication errors approximately half those of the surrogate (E. faecium). Additionally, the modified model yielded the lowest root mean squared error in predicting Salmonella inactivation (1.1 to 1.5 log CFU/g); in contrast, E. faecium yielded a root mean squared error of 1.2 to 1.6 log CFU/g, and the traditional model yielded an unacceptably high error (3.4 to 4.4 log CFU/g). Importantly, the surrogate and modified model both yielded lethality predictions that were statistically equivalent (α = 0.05) to actual Salmonella lethality. The results demonstrate the importance of methodology, aw, and process humidity when validating thermal pasteurization processes for low-moisture foods, which should help processors select and interpret validation methods to ensure product safety.
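
    The record above references a traditional (D, z) lethality model; the sketch below shows, under assumed parameter values, how such a model turns a measured surface time-temperature history into a predicted log reduction. The D_ref, z, T_ref values and the heating profile are hypothetical placeholders, not values from the study, and the humidity-modified model is not reproduced.

```python
# Hedged sketch: log-reduction from a surface time-temperature history using a
# (D, z) model; D_ref, z, T_ref and the heating profile are hypothetical.
import numpy as np

def log_reduction_dz(times_s, temps_C, D_ref_s, T_ref_C, z_C):
    """Integrate dt / D(T) along the time-temperature history."""
    temps_C = np.asarray(temps_C, dtype=float)
    D = D_ref_s * 10.0 ** ((T_ref_C - temps_C) / z_C)   # D-value at each temperature
    return np.trapz(1.0 / D, np.asarray(times_s, dtype=float))

t = np.linspace(0.0, 600.0, 601)                         # 10-minute process, 1 s steps
T = 25.0 + (120.0 - 25.0) * (1.0 - np.exp(-t / 150.0))   # illustrative surface heating curve
print("predicted log reduction:",
      round(log_reduction_dz(t, T, D_ref_s=18.0, T_ref_C=80.0, z_C=15.0), 2))
```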

  1. Volcano Clustering Determination: Bivariate Gaussian vs. Fisher Kernels

    NASA Astrophysics Data System (ADS)

    Canon-Tapia, E.; Mendoza-Borunda, R.

    2012-12-01

    There are several forms in which volcano clustering can be estimated quantitatively, all of which are related to some extent with kernel estimation techniques. Although these methods differ on the definition of the kernel function, the exact form of that function seems to have a relatively minor impact on the estimated spatial probability, especially if compared with the effect of a different parameter known variously as the bandwidth, kernel width, or smoothing factor. This is the case because the kernel function used to study spatial distribution of point-like features is usually symmetric around the evaluation point and independent of the topology of the distribution. While it is reasonable to accept that the exact definition of the kernel function may not be extremely influential if the topology of the data set is not changed drastically, it is unclear to what extent different topologies of the spatial distribution can introduce significant changes in the obtained density functions. An important change in topology in this context is related to the distortion that is introduced when attempting to represent features found on the surface of a sphere that are being projected into a plane. Until now, underlying all studies of volcano distribution is the implicit assumption that the map used for the estimation of the density function does not introduce a significant distortion on the topology of the data base, and that the distribution can be studied by using kernels originally devised for distributions in plane surfaces. In this work, the density distributions obtained by using two types of kernel, one devised for planar and the other for spherical surfaces, are mutually compared. The influence of the smoothing factor in these two kernels is also explored with some detail. The results show that despite their apparent differences, the bivariate Gaussian and Fisher kernels might yield identical results if an appropriate value of the smoothing parameter is selected in

  2. Numerical Convergence In Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zhu, Qirong; Hernquist, Lars; Li, Yuexing

    2015-02-01

    We study the convergence properties of smoothed particle hydrodynamics (SPH) using numerical tests and simple analytic considerations. Our analysis shows that formal numerical convergence is possible in SPH only in the joint limit N → ∞, h → 0, and N_nb → ∞, where N is the total number of particles, h is the smoothing length, and N_nb is the number of neighbor particles within the smoothing volume used to compute smoothed estimates. Previous work has generally assumed that the conditions N → ∞ and h → 0 are sufficient to achieve convergence, while holding N_nb fixed. We demonstrate that if N_nb is held fixed as the resolution is increased, there will be a residual source of error that does not vanish as N → ∞ and h → 0. Formal numerical convergence in SPH is possible only if N_nb is increased systematically as the resolution is improved. Using analytic arguments, we derive an optimal compromise scaling for N_nb by requiring that this source of error balance that present in the smoothing procedure. For typical choices of the smoothing kernel, we find N_nb ∝ N^0.5. This means that if SPH is to be used as a numerically convergent method, the required computational cost does not scale with particle number as O(N), but rather as O(N^(1+δ)), where δ ≈ 0.5, with a weak dependence on the form of the smoothing kernel.
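
    As a rough numeric illustration of the scaling claim above (N_nb ∝ N^0.5, cost ∝ N^(1+δ) with δ ≈ 0.5), the following sketch tabulates how the neighbor number and relative cost would grow with particle number; the anchoring value N_nb = 64 at N = 10^5 is an arbitrary assumption, not taken from the paper.

```python
# Illustrative only: neighbor count scaled as N^0.5 and the resulting ~N^1.5
# growth of the per-step interaction count; the anchor N_nb = 64 at N = 1e5 is
# an arbitrary choice, not a value from the paper.
for N in (1e5, 1e6, 1e7, 1e8):
    N_nb = 64 * (N / 1e5) ** 0.5        # neighbors grown as N^0.5
    cost = N * N_nb                     # ~ number of particle-neighbor interactions
    print(f"N = {N:.0e}   N_nb ~ {N_nb:6.0f}   relative cost ~ {cost:.2e}")
```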

  3. Multiple Kernel Point Set Registration.

    PubMed

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2015-12-22

    The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavily tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels. After parameter learning, the kernel saliencies of the irrelevant kernels go to zero; thus, the choice of kernels is less crucial, and it is easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.

  4. Multiple Kernel Point Set Registration.

    PubMed

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2016-06-01

    The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavily tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels. After parameter learning, the kernel saliencies of the irrelevant kernels go to zero; thus, the choice of kernels is less crucial, and it is easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.

  5. Smooth connection method of segment test data in road surface profile measurement

    NASA Astrophysics Data System (ADS)

    Duan, Hu-Ming; Ma, Ying; Shi, Feng; Zhang, Kai-Bin; Xie, Fei

    2011-12-01

    The measurement system for road surface profiles and the calculation method for segment road test data are reviewed. Sudden vertical steps at the connection points of segment data will influence the application of road surface data in automotive engineering, so a new smooth connection method for segment test data is proposed which corrects the sudden vertical steps at the connections using the Signal Local Baseline Adjustment (SLBA) method. An actual example illustrates the detailed process of smoothly connecting segment test data by the SLBA method and the adjustment results at the connection points. The application and calculation results show that the SLBA method is simple and achieves an obvious effect in the smooth connection of segment road test data. The SLBA method can be widely applied to segment road surface data processing or to long-period vibration signal processing.

  6. Smooth connection method of segment test data in road surface profile measurement

    NASA Astrophysics Data System (ADS)

    Duan, Hu-Ming; Ma, Ying; Shi, Feng; Zhang, Kai-Bin; Xie, Fei

    2012-01-01

    The measurement system for road surface profiles and the calculation method for segment road test data are reviewed. Sudden vertical steps at the connection points of segment data will influence the application of road surface data in automotive engineering, so a new smooth connection method for segment test data is proposed which corrects the sudden vertical steps at the connections using the Signal Local Baseline Adjustment (SLBA) method. An actual example illustrates the detailed process of smoothly connecting segment test data by the SLBA method and the adjustment results at the connection points. The application and calculation results show that the SLBA method is simple and achieves an obvious effect in the smooth connection of segment road test data. The SLBA method can be widely applied to segment road surface data processing or to long-period vibration signal processing.

  7. Kernel Optimization in Discriminant Analysis

    PubMed Central

    You, Di; Hamsici, Onur C.; Martinez, Aleix M.

    2011-01-01

    Kernel mapping is one of the most used approaches to intrinsically derive nonlinear classifiers. The idea is to use a kernel function which maps the original nonlinearly separable problem to a space of intrinsically larger dimensionality where the classes are linearly separable. A major problem in the design of kernel methods is to find the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation where the Bayes classifier becomes linear. We illustrate how this result can be successfully applied in several kernel discriminant analysis algorithms. Experimental results using a large number of databases and classifiers demonstrate the utility of the proposed approach. The paper also shows (theoretically and experimentally) that a kernel version of Subclass Discriminant Analysis yields the highest recognition rates. PMID:20820072

  8. Full Waveform Inversion Using Waveform Sensitivity Kernels

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    2013-04-01

    We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory, which for unit material perturbations are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995) in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned), and some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be carried out first, without inverting any data. In the context of active seismic experiments, this property may be used to investigate optimal acquisition geometry and expectable resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties compared with strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver

  9. An accurate method for evaluating the kernel of the integral equation relating lift to downwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations that attain any desired trade-off between accuracy and computing cost.
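
    The sketch below illustrates the kind of fit the abstract describes: a sum of decaying exponentials with geometrically spaced exponents, whose coefficients are found by linear least squares. The target function, number of terms, and exponent spacing are illustrative assumptions rather than the paper's tuned approximation.

```python
# Hedged sketch: fit a sum of decaying exponentials with geometrically spaced
# exponents to a smooth algebraic function by linear least squares. The target
# function, term count, and exponent spacing are illustrative assumptions.
import numpy as np

u = np.linspace(0.0, 20.0, 2000)
g = u / np.sqrt(1.0 + u ** 2)                     # smooth algebraic function to approximate

n_terms = 8
b0, ratio = 0.05, 2.0                             # geometric exponent sequence b_k = b0 * ratio**k
exponents = b0 * ratio ** np.arange(n_terms)
A = np.exp(-np.outer(u, exponents))               # design matrix of exponentials

coeffs, *_ = np.linalg.lstsq(A, 1.0 - g, rcond=None)   # approximate 1 - g, which decays to 0
approx = 1.0 - A @ coeffs
print("max abs error of exponential approximation:", np.max(np.abs(approx - g)))
```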

  10. A high-order Immersed Boundary method for solving fluid problems on arbitrary smooth domains

    NASA Astrophysics Data System (ADS)

    Stein, David; Guy, Robert; Thomases, Becca

    2015-11-01

    We present a robust, flexible, and high-order Immersed Boundary method for solving the equations of fluid motion on domains with smooth boundaries using FFT-based spectral methods. The solution to the PDE is coupled with an equation for a smooth extension of the unknown solution; high-order accuracy is a natural consequence of this additional global regularity. The method retains much of the simplicity of the original Immersed Boundary method and enables simple implicit and implicit/explicit timestepping schemes to be used to solve a wide range of problems. We show results for the Stokes, Navier-Stokes, and Oldroyd-B equations.

  11. Comparison of Exponential Smoothing Methods in Forecasting Palm Oil Real Production

    NASA Astrophysics Data System (ADS)

    Siregar, B.; Butar-Butar, I. A.; Rahmat, RF; Andayani, U.; Fahmi, F.

    2017-01-01

    Palm oil has an important role in the plantation subsector. Forecasting of real palm oil production over a given period is needed by plantation companies to maintain their strategic management. This study compared several methods based on the exponential smoothing (ES) technique, such as single ES, Holt's double ES, and additive and multiplicative triple ES, to predict palm oil production. We examined the accuracy of the forecasting models on production data and analyzed the characteristics of the models. The R programming language was used, with the constants for double ES (α and β) and triple ES (α, β, and γ) selected by minimizing the root mean squared prediction error (RMSE). Our results showed that additive triple ES had the lowest error rate among the models, with an RMSE of 0.10 for the parameter combination α = 0.6, β = 0.02, and γ = 0.02.
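
    A minimal from-scratch sketch of additive triple exponential smoothing (Holt-Winters) with the smoothing constants reported above (α = 0.6, β = 0.02, γ = 0.02) is given below; the monthly series is synthetic, and the initialization scheme is a common textbook choice rather than the one used in the study.

```python
# Hedged sketch: additive Holt-Winters (triple ES) with alpha=0.6, beta=0.02,
# gamma=0.02 on a synthetic monthly series; the initialization is a common
# textbook choice, not necessarily the study's.
import numpy as np

def holt_winters_additive(y, m, alpha, beta, gamma):
    y = np.asarray(y, dtype=float)
    level = y[:m].mean()
    trend = (y[m:2 * m].mean() - y[:m].mean()) / m
    season = list(y[:m] - level)
    fitted = []
    for t, obs in enumerate(y):
        s = season[t % m]
        fitted.append(level + trend + s)                          # one-step-ahead forecast
        new_level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season[t % m] = gamma * (obs - new_level) + (1 - gamma) * s
        level = new_level
    return np.array(fitted)

rng = np.random.default_rng(0)
t = np.arange(60)
y = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, t.size)
f = holt_winters_additive(y, m=12, alpha=0.6, beta=0.02, gamma=0.02)
rmse = np.sqrt(np.mean((y[12:] - f[12:]) ** 2))                   # skip the initialization season
print(f"RMSE = {rmse:.3f}")
```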

  12. Abundance estimation of solid and liquid mixtures in hyperspectral imagery with albedo-based and kernel-based methods

    NASA Astrophysics Data System (ADS)

    Rand, Robert S.; Resmini, Ronald G.; Allen, David W.

    2016-09-01

    This study investigates methods for characterizing materials that are mixtures of granular solids, or mixtures of liquids, which may be linear or non-linear. Linear mixtures of materials in a scene are often the result of areal mixing, where the pixel size of a sensor is relatively large so that pixels contain patches of different materials within them. Non-linear mixtures are likely to occur with microscopic mixtures of solids, such as mixtures of powders, or mixtures of liquids, or wherever complex scattering of light occurs. This study considers two approaches for use as generalized methods for un-mixing pixels in a scene that may be linear or non-linear. One method is based on earlier studies that indicate non-linear mixtures in reflectance space are approximately linear in albedo space. This method converts reflectance to single-scattering albedo (SSA) according to Hapke theory, assuming bidirectional scattering at nadir look angles, and uses a constrained linear model on the computed albedo values. The other method is motivated by the same idea, but uses a kernel that seeks to capture the linear behavior of albedo in non-linear mixtures of materials. The behavior of the kernel method can be highly dependent on the value of a parameter, gamma, which provides flexibility for the kernel method to respond to both linear and non-linear phenomena. Our study pays particular attention to this parameter for responding to linear and non-linear mixtures. Laboratory experiments on both granular solids and liquid solutions are performed with scenes of hyperspectral data.
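
    The sketch below shows one plausible form of the constrained linear unmixing step mentioned above, applied to an albedo-space pixel: non-negative least squares with an approximate sum-to-one constraint. The reflectance-to-SSA (Hapke) conversion is assumed to have been done beforehand, and the endmember spectra are random stand-ins rather than laboratory spectra.

```python
# Hedged sketch: constrained linear unmixing in albedo space via non-negative
# least squares, with an approximate sum-to-one constraint added as a heavily
# weighted extra row. Endmembers and the mixed pixel are random stand-ins.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_bands, n_endmembers = 50, 3
E = rng.uniform(0.2, 0.9, size=(n_bands, n_endmembers))    # endmember albedo spectra
true_abund = np.array([0.5, 0.3, 0.2])
pixel = E @ true_abund + rng.normal(0, 0.005, n_bands)     # observed mixed-pixel albedo spectrum

w = 100.0                                                   # weight of the sum-to-one row
A = np.vstack([E, w * np.ones((1, n_endmembers))])
b = np.concatenate([pixel, [w]])
abundances, _ = nnls(A, b)
print("estimated abundances:", np.round(abundances, 3))
```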

  13. A new derivative with normal distribution kernel: Theory, methods and applications

    NASA Astrophysics Data System (ADS)

    Atangana, Abdon; Gómez-Aguilar, J. F.

    2017-06-01

    A new approach to the fractional derivative with a new local kernel is suggested in this paper. The kernel introduced in this work is the well-known normal distribution, a very common continuous probability distribution. This distribution is very important in statistics and is also widely used in the natural and social sciences to portray real-valued random variables whose distributions are not known. Two definitions are suggested, namely Atangana-Gómez averaging in the Liouville-Caputo and Riemann-Liouville senses. We present some relationships with existing integral transform operators. Numerical approximations of first and second order are derived in detail. Some applications of the new mathematical tools to real-world problems are presented in detail. This opens a new door for the fields of statistics and the natural and social sciences.

  14. Nondestructive In Situ Measurement Method for Kernel Moisture Content in Corn Ear

    PubMed Central

    Zhang, Han-Lin; Ma, Qin; Fan, Li-Feng; Zhao, Peng-Fei; Wang, Jian-Xu; Zhang, Xiao-Dong; Zhu, De-Hai; Huang, Lan; Zhao, Dong-Jie; Wang, Zhong-Yi

    2016-01-01

    Moisture content is an important factor in corn breeding and cultivation. A corn breed with low moisture at harvest is beneficial for mechanical operations, reduces drying and storage costs after harvesting and, thus, reduces energy consumption. Nondestructive measurement of kernel moisture in an intact corn ear allows us to select corn varieties with seeds that have high dehydration speeds in the mature period. We designed a sensor using a ring electrode pair for nondestructive measurement of the kernel moisture in a corn ear based on a high-frequency detection circuit. Through experiments using the effective scope of the electrodes’ electric field, we confirmed that the moisture in the corn cob has little effect on corn kernel moisture measurement. Before the sensor was applied in practice, we investigated temperature and conductivity effects on the output impedance. Results showed that the temperature was linearly related to the output impedance (both real and imaginary parts) of the measurement electrodes and the detection circuit’s output voltage. However, the conductivity has a non-monotonic dependence on the output impedance (both real and imaginary parts) of the measurement electrodes and the output voltage of the high-frequency detection circuit. Therefore, we reduced the effect of conductivity on the measurement results through measurement frequency selection. Corn moisture measurement results showed a quadratic regression between corn ear moisture and the imaginary part of the output impedance, and there is also a quadratic regression between corn kernel moisture and the high-frequency detection circuit output voltage at 100 MHz. In this study, two corn breeds were measured using our sensor and gave R² values for the quadratic regression equation of 0.7853 and 0.8496. PMID:27999404

  15. A simple method for computing the relativistic Compton scattering kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Prasad, M. K.; Kershaw, D. S.; Beason, J. D.

    1986-01-01

    Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.

  16. Nondestructive In Situ Measurement Method for Kernel Moisture Content in Corn Ear.

    PubMed

    Zhang, Han-Lin; Ma, Qin; Fan, Li-Feng; Zhao, Peng-Fei; Wang, Jian-Xu; Zhang, Xiao-Dong; Zhu, De-Hai; Huang, Lan; Zhao, Dong-Jie; Wang, Zhong-Yi

    2016-12-20

    Moisture content is an important factor in corn breeding and cultivation. A corn breed with low moisture at harvest is beneficial for mechanical operations, reduces drying and storage costs after harvesting and, thus, reduces energy consumption. Nondestructive measurement of kernel moisture in an intact corn ear allows us to select corn varieties with seeds that have high dehydration speeds in the mature period. We designed a sensor using a ring electrode pair for nondestructive measurement of the kernel moisture in a corn ear based on a high-frequency detection circuit. Through experiments using the effective scope of the electrodes' electric field, we confirmed that the moisture in the corn cob has little effect on corn kernel moisture measurement. Before the sensor was applied in practice, we investigated temperature and conductivity effects on the output impedance. Results showed that the temperature was linearly related to the output impedance (both real and imaginary parts) of the measurement electrodes and the detection circuit's output voltage. However, the conductivity has a non-monotonic dependence on the output impedance (both real and imaginary parts) of the measurement electrodes and the output voltage of the high-frequency detection circuit. Therefore, we reduced the effect of conductivity on the measurement results through measurement frequency selection. Corn moisture measurement results showed a quadratic regression between corn ear moisture and the imaginary part of the output impedance, and there is also a quadratic regression between corn kernel moisture and the high-frequency detection circuit output voltage at 100 MHz. In this study, two corn breeds were measured using our sensor and gave R² values for the quadratic regression equation of 0.7853 and 0.8496.
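
    A minimal sketch of the quadratic regression and R² computation referenced above is given below, fitting kernel moisture against the detection circuit's output voltage; the data points are invented for illustration and do not come from the study.

```python
# Hedged sketch: second-order polynomial fit and R²; the voltage/moisture pairs
# are invented for illustration.
import numpy as np

voltage = np.array([1.2, 1.5, 1.8, 2.1, 2.4, 2.7, 3.0, 3.3])            # circuit output (V), illustrative
moisture = np.array([38.0, 34.0, 31.0, 27.0, 24.0, 22.0, 20.0, 19.0])   # kernel moisture (%), illustrative

coeffs = np.polyfit(voltage, moisture, deg=2)                            # quadratic regression
pred = np.polyval(coeffs, voltage)
ss_res = np.sum((moisture - pred) ** 2)
ss_tot = np.sum((moisture - moisture.mean()) ** 2)
print("coefficients:", np.round(coeffs, 3), " R² =", round(1 - ss_res / ss_tot, 4))
```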

  17. A Pragmatic Smoothing Method for Improving the Quality of the Results in Atomic Spectroscopy

    NASA Astrophysics Data System (ADS)

    Bennun, Leonardo

    2017-07-01

    A new smoothing method for improving the identification and quantification of spectral functions, based on prior knowledge of the signals that are expected to be quantified, is presented. These signals are used as weighted coefficients in the smoothing algorithm. This smoothing method was conceived to be applied in atomic and nuclear spectroscopies, preferably in techniques where net counts are proportional to acquisition time, such as particle induced X-ray emission (PIXE) and other X-ray fluorescence spectroscopic methods. This algorithm, when properly applied, does not distort the form nor the intensity of the signal, so it is well suited for all kinds of spectroscopic techniques. The method is extremely effective at reducing high-frequency noise in the signal, much more efficiently than a single rectangular smooth of the same width. As with all smoothing techniques, the proposed method improves the precision of the results, but in this case we also found a systematic improvement in the accuracy of the results. We still have to evaluate the improvement in the quality of the results when this method is applied to real experimental results. We expect better characterization of the net area quantification of the peaks and smaller detection and quantification limits. We have applied this method to signals that obey Poisson statistics, but with the same ideas and criteria, it could be applied to time series. In the general case, when this algorithm is applied to experimental results, the characteristic functions required for this weighted smoothing method should be obtained from a system with strong stability. If the sought signals are not perfectly clean, this method should be carefully applied
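
    The sketch below is a loose interpretation of the weighted-smoothing idea described above: a noisy Poisson-statistics spectrum is smoothed with a moving average whose weights follow the expected peak shape instead of a flat rectangular window. The Gaussian "expected peak", the window width, and the comparison are illustrative assumptions; the paper's exact weighting scheme is not reproduced.

```python
# Hedged sketch: smooth a Poisson-noise spectrum with weights taken from the
# expected peak shape, and compare with a flat rectangular window of the same
# width. All signal shapes and widths are illustrative.
import numpy as np

rng = np.random.default_rng(2)
x = np.arange(400)
true = 50 + 300 * np.exp(-0.5 * ((x - 200) / 8.0) ** 2)      # background + expected Gaussian peak
counts = rng.poisson(true).astype(float)                      # simulated Poisson-statistics spectrum

half = 12
w = np.exp(-0.5 * (np.arange(-half, half + 1) / 8.0) ** 2)    # weights from the expected peak shape
w /= w.sum()
weighted = np.convolve(counts, w, mode="same")

rect = np.ones(2 * half + 1) / (2 * half + 1)                 # rectangular smooth of the same width
rectangular = np.convolve(counts, rect, mode="same")
print("rms error, weighted:   ", np.sqrt(np.mean((weighted - true) ** 2)).round(2))
print("rms error, rectangular:", np.sqrt(np.mean((rectangular - true) ** 2)).round(2))
```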

  18. Bayesian Kernel Mixtures for Counts.

    PubMed

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
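
    The sketch below shows a single "rounded Gaussian" count kernel of the kind the abstract describes, obtained by differencing a normal CDF at integer thresholds with the mass below zero folded into the zero count; the mixture and posterior-computation machinery of the paper is not sketched, and the threshold convention is an assumption.

```python
# Hedged sketch: probability mass function of one rounded-Gaussian count
# kernel, obtained by differencing the normal CDF at integer thresholds; the
# rounding convention and parameters are illustrative.
import numpy as np
from scipy.stats import norm

def rounded_gaussian_pmf(j, mu, sigma):
    """P(Y = j) for Y obtained by rounding X ~ N(mu, sigma^2) onto {0, 1, 2, ...}."""
    j = np.asarray(j)
    upper = norm.cdf(j + 1, loc=mu, scale=sigma)
    lower = np.where(j == 0, 0.0, norm.cdf(j, loc=mu, scale=sigma))  # fold mass below 0 into j = 0
    return upper - lower

js = np.arange(0, 15)
pmf = rounded_gaussian_pmf(js, mu=3.2, sigma=1.1)
print(np.round(pmf, 4), " total mass over this range:", pmf.sum().round(4))
```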

  19. An extensive analysis of disease-gene associations using network integration and fast kernel-based gene prioritization methods.

    PubMed

    Valentini, Giorgio; Paccanaro, Alberto; Caniza, Horacio; Romero, Alfonso E; Re, Matteo

    2014-06-01

    In the context of "network medicine", gene prioritization methods represent one of the main tools to discover candidate disease genes by exploiting the large amount of data covering different types of functional relationships between genes. Several works proposed to integrate multiple sources of data to improve disease gene prioritization, but to our knowledge no systematic studies focused on the quantitative evaluation of the impact of network integration on gene prioritization. In this paper, we aim at providing an extensive analysis of gene-disease associations not limited to genetic disorders, and a systematic comparison of different network integration methods for gene prioritization. We collected nine different functional networks representing different functional relationships between genes, and we combined them through both unweighted and weighted network integration methods. We then prioritized genes with respect to each of the considered 708 medical subject headings (MeSH) diseases by applying classical guilt-by-association, random walk and random walk with restart algorithms, and the recently proposed kernelized score functions. The results obtained with classical random walk algorithms and the best single network achieved an average area under the curve (AUC) across the 708 MeSH diseases of about 0.82, while kernelized score functions and network integration boosted the average AUC to about 0.89. Weighted integration, by exploiting the different "informativeness" embedded in different functional networks, outperforms unweighted integration at 0.01 significance level, according to the Wilcoxon signed rank sum test. For each MeSH disease we provide the top-ranked unannotated candidate genes, available for further bio-medical investigation. Network integration is necessary to boost the performances of gene prioritization methods. Moreover the methods based on kernelized score functions can further enhance disease gene ranking results, by adopting both
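
    The abstract names random walk with restart (RWR) among the compared prioritization algorithms; the sketch below implements that baseline on a toy five-gene network. The adjacency matrix, seed gene, and restart probability are illustrative, and the kernelized score functions and network-integration steps are not shown.

```python
# Hedged sketch: random walk with restart on a toy gene network; the adjacency
# matrix, seed, and restart probability are illustrative.
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)        # toy gene functional network
W = A / A.sum(axis=0, keepdims=True)                # column-normalized transition matrix

restart = 0.3
p0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])            # seed: gene 0 is the known disease gene
p = p0.copy()
for _ in range(200):                                 # iterate to (near) stationarity
    p_next = (1 - restart) * W @ p + restart * p0
    if np.abs(p_next - p).sum() < 1e-10:
        p = p_next
        break
    p = p_next
print("RWR prioritization scores:", np.round(p, 4))
```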

  20. A new method for evaluation of the resistance to rice kernel cracking based on moisture absorption in brown rice under controlled conditions

    PubMed Central

    Hayashi, Takeshi; Kobayashi, Asako; Tomita, Katsura; Shimizu, Toyohiro

    2015-01-01

    We developed and evaluated the effectiveness of a new method to detect differences among rice cultivars in their resistance to kernel cracking. The method induces kernel cracking under controlled laboratory conditions through moisture absorption by brown rice. The optimal moisture absorption conditions were determined using two japonica cultivars, ‘Nipponbare’ as a cracking-resistant cultivar and ‘Yamahikari’ as a cracking-susceptible cultivar: 12% initial moisture content of the brown rice, a temperature of 25°C, a duration of 5 h, and only a single absorption treatment. We then evaluated the effectiveness of these conditions using 12 japonica cultivars. The proportion of cracked kernels was significantly correlated with the mean 10-day maximum temperature after heading. In addition, the correlation between the proportions of cracked kernels in the 2 years of the study was higher than that for values obtained using the traditional late-harvest method. The new moisture absorption method could stably evaluate the resistance to kernel cracking, and will help breeders to develop future cultivars with less cracking of the kernels. PMID:26719740

  1. Power Series Approximation for the Correlation Kernel Leading to Kohn-Sham Methods Combining Accuracy, Computational Efficiency, and General Applicability

    NASA Astrophysics Data System (ADS)

    Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas

    2016-09-01

    A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.

  2. Diffusion Kernels on Statistical Manifolds

    DTIC Science & Technology

    2004-01-16

    International Press, 1994. Michael Spivak. Differential Geometry, volume 1. Publish or Perish, 1979. Chengxiang Zhai and John Lafferty. A study of smoothing... construction of information diffusion kernels, since these concepts are not widely used in machine learning. We refer to Spivak (1979) for details and further

  3. A Meshfree Cell-based Smoothed Point Interpolation Method for Solid Mechanics Problems

    SciTech Connect

    Zhang Guiyong; Liu Guirong

    2010-05-21

    In the framework of a weakened weak (W²) formulation using a generalized gradient smoothing operation, this paper introduces a novel meshfree cell-based smoothed point interpolation method (CS-PIM) for solid mechanics problems. The W² formulation seeks solutions from a normed G space which includes both continuous and discontinuous functions and allows the use of many more types of methods to create shape functions for numerical methods. When PIM shape functions are used, the functions constructed are in general not continuous over the entire problem domain and hence are not compatible. Such an interpolation is not in a traditional H¹ space, but in a G¹ space. By introducing the generalized gradient smoothing operation properly, the requirement on the function is further weakened beyond the already weakened requirement for functions in an H¹ space, and the G¹ space can be viewed as a space of functions with a weakened weak (W²) requirement on continuity. The cell-based smoothed point interpolation method (CS-PIM) is formulated based on the W² formulation, in which the displacement field is approximated using the PIM shape functions, which possess the Kronecker delta property facilitating the enforcement of essential boundary conditions [3]. The gradient (strain) field is constructed by the generalized gradient smoothing operation within the cell-based smoothing domains, which are exactly the triangular background cells. A W² formulation of the generalized smoothed Galerkin (GS-Galerkin) weak form is used to derive the discretized system equations. It was found that the CS-PIM possesses the following attractive properties: (1) it is very easy to implement and works well with the simplest linear triangular mesh without introducing additional degrees of freedom; (2) it is at least linearly conforming; (3) this method is temporally stable and works well for dynamic analysis; (4) it possesses a close-to-exact stiffness, which is much

  4. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  5. Kernel principal component analysis residual diagnosis (KPCARD): An automated method for cosmic ray artifact removal in Raman spectra.

    PubMed

    Li, Boyan; Calvet, Amandine; Casamayou-Boucau, Yannick; Ryder, Alan G

    2016-03-24

    A new, fully automated, rapid method, referred to as kernel principal component analysis residual diagnosis (KPCARD), is proposed for removing cosmic ray artifacts (CRAs) in Raman spectra, and in particular for large Raman imaging datasets. KPCARD identifies CRAs via a statistical analysis of the residuals obtained at each wavenumber in the spectra. The method utilizes the stochastic nature of CRAs; therefore, the most significant components in principal component analysis (PCA) of large numbers of Raman spectra should not contain any CRAs. The process worked by first implementing kernel PCA (kPCA) on all the Raman mapping data and second accurately estimating the inter- and intra-spectrum noise to generate two threshold values. CRA identification was then achieved by using the threshold values to evaluate the residuals for each spectrum and assess if a CRA was present. CRA correction was achieved by spectral replacement, where the nearest neighbor (NN) spectrum (the one most spectroscopically similar to the CRA-contaminated spectrum) and the principal components (PCs) obtained by kPCA were both used to generate a robust best-fit curve to the CRA-contaminated spectrum. This best fit spectrum then replaced the CRA-contaminated spectrum in the dataset. KPCARD efficacy was demonstrated by using simulated data and real Raman spectra collected from solid-state materials. The results showed that KPCARD was fast (<1 min per 8400 spectra), accurate, precise, and suitable for the automated correction of very large (>1 million) Raman datasets.
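
    The sketch below is a greatly simplified version of the residual-diagnosis idea behind KPCARD, using ordinary linear PCA instead of kernel PCA: spectra are reconstructed from the leading principal components and points whose residuals exceed a noise-based threshold are flagged as cosmic-ray candidates. The synthetic spectra, component count, and 5-sigma threshold are illustrative assumptions.

```python
# Hedged sketch: linear-PCA stand-in for the kernel-PCA residual diagnosis;
# spectra, component count, and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_spectra, n_wavenumbers = 200, 300
band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 150) / 20.0) ** 2)
X = np.outer(rng.uniform(0.8, 1.2, n_spectra), band) + rng.normal(0, 0.01, (n_spectra, n_wavenumbers))
X[17, 42] += 1.5                                     # inject one narrow cosmic-ray spike

mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 3                                                # keep only the leading components
recon = mu + (U[:, :k] * s[:k]) @ Vt[:k]
residual = X - recon

sigma = np.median(np.abs(residual), axis=0) / 0.6745 # robust per-wavenumber noise estimate
hits = np.argwhere(np.abs(residual) > 5 * sigma)
print("flagged (spectrum, wavenumber) pairs:", hits[:5])
```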

  6. A Novel Method for Modeling Neumann and Robin Boundary Conditions in Smoothed Particle Hydrodynamics

    SciTech Connect

    Ryan, Emily M.; Tartakovsky, Alexandre M.; Amon, Cristina

    2010-08-26

    In this paper we present an improved method for handling Neumann or Robin boundary conditions in smoothed particle hydrodynamics. The Neumann and Robin boundary conditions are common to many physical problems (such as heat/mass transfer), and can prove challenging to model in volumetric modeling techniques such as smoothed particle hydrodynamics (SPH). A new SPH method for diffusion type equations subject to Neumann or Robin boundary conditions is proposed. The new method is based on the continuum surface force model [1] and allows an efficient implementation of the Neumann and Robin boundary conditions in the SPH method for geometrically complex boundaries. The paper discusses the details of the method and the criteria needed to apply the model. The model is used to simulate diffusion and surface reactions and its accuracy is demonstrated through test cases for boundary conditions describing different surface reactions.

  7. Free vibration analysis of thin plates using Hermite reproducing kernel Galerkin meshfree method with sub-domain stabilized conforming integration

    NASA Astrophysics Data System (ADS)

    Wang, Dongdong; Lin, Zhenting

    2010-10-01

    A Hermite reproducing kernel (HRK) Galerkin meshfree formulation is presented for free vibration analysis of thin plates. In the HRK approximation the plate deflection is approximated by the deflection as well as slope nodal variables. The nth order reproducing conditions are imposed simultaneously on both the deflectional and rotational degrees of freedom. The resulting meshfree shape function turns out to have a much smaller necessary support size than its standard reproducing kernel counterpart. Obviously this reduction of the minimum support size will accelerate the computation of the meshfree shape functions. To meet the bending exactness in the static sense and to retain spatial stability, the domain integration for the stiffness as well as the mass matrix is consistently carried out by using the sub-domain stabilized conforming integration (SSCI). Subsequently the proposed formulation is applied to study the free vibration of various benchmark thin plate problems. Numerical results uniformly reveal that the present method produces favorable solutions compared to those given by the high-order Gauss integration (GI)-based Galerkin meshfree formulation. Moreover the effect of sub-domain refinement for the domain integration is also investigated.

  8. The implementation of binned Kernel density estimation to determine open clusters' proper motions: validation of the method

    NASA Astrophysics Data System (ADS)

    Priyatikanto, R.; Arifyanto, M. I.

    2015-01-01

    Stellar membership determination of an open cluster is an important process to do before further analysis. Basically, there are two classes of membership determination methods: parametric and non-parametric. In this study, an alternative non-parametric method based on Binned Kernel Density Estimation that accounts for measurement errors (simply called BKDE-e) is proposed. This method is applied to proper-motion data to determine cluster membership kinematically and to estimate the average proper motion of the cluster. Monte Carlo simulations show that the average proper motion determination using this proposed method is statistically more accurate than the ordinary Kernel Density Estimator (KDE). By including measurement errors in the calculation, the mode location of the resulting density estimate is less sensitive to non-physical or stochastic fluctuations compared to ordinary KDE, which excludes measurement errors. For a typical mean measurement error of 7 mas/yr, BKDE-e suppresses the potential for miscalculation by a factor of two compared to KDE. With a median accuracy of about 93%, the BKDE-e method has comparable accuracy with respect to the parametric method (modified Sanders algorithm). Application to real data from The Fourth USNO CCD Astrograph Catalog (UCAC4), in particular to NGC 2682, is also performed. The mode of the member-star distribution on the Vector Point Diagram is located at μα cos δ = -9.94 ± 0.85 mas/yr and μδ = -4.92 ± 0.88 mas/yr. Although the BKDE-e performance does not surpass the parametric approach, it offers a new way of doing membership analysis, expandable to astrometric and photometric data or even to binary cluster searches.
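
    The sketch below captures the core idea of an error-aware kernel density estimate on proper motions, as a simplified, unbinned stand-in for BKDE-e: each star's kernel width is the global bandwidth broadened in quadrature by that star's measurement error. The proper-motion values, errors, and bandwidth are synthetic.

```python
# Hedged sketch: unbinned, error-broadened Gaussian KDE on one proper-motion
# component; proper motions, errors, and bandwidth are synthetic.
import numpy as np

rng = np.random.default_rng(4)
pm = np.concatenate([rng.normal(-9.9, 0.6, 150),     # cluster members (mas/yr), illustrative
                     rng.uniform(-25.0, 5.0, 100)])  # field stars
err = rng.uniform(1.0, 7.0, pm.size)                 # per-star measurement errors (mas/yr)

grid = np.linspace(-30.0, 10.0, 400)
h = 1.0                                              # global smoothing bandwidth (mas/yr)
width = np.sqrt(h ** 2 + err ** 2)                   # kernel width broadened by each star's error
dens = np.mean(np.exp(-0.5 * ((grid[:, None] - pm[None, :]) / width) ** 2)
               / (np.sqrt(2.0 * np.pi) * width), axis=1)
print("density mode at", grid[np.argmax(dens)].round(2), "mas/yr")
```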

  9. A method for smoothing segmented lung boundary in chest CT images

    NASA Astrophysics Data System (ADS)

    Yim, Yeny; Hong, Helen

    2007-03-01

    To segment low-density lung regions in chest CT images, most methods use differences in the gray-level values of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using a scan-line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice. We propose a scan-line search to track the points on the lung contour and to find rapidly changing curvature efficiently. Finally, to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. The performance of our method was evaluated in terms of visual inspection, accuracy, and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.

  10. Robust signal reconstruction for condition monitoring of industrial components via a modified Auto Associative Kernel Regression method

    NASA Astrophysics Data System (ADS)

    Baraldi, Piero; Di Maio, Francesco; Turati, Pietro; Zio, Enrico

    2015-08-01

    In this work, we propose a modification of the traditional Auto Associative Kernel Regression (AAKR) method which enhances the signal reconstruction robustness, i.e., the capability of reconstructing abnormal signals to the values expected in normal conditions. The modification is based on the definition of a new procedure for the computation of the similarity between the present measurements and the historical patterns used to perform the signal reconstructions. The underlying conjecture is that malfunctions causing variations of a small number of signals are more frequent than those causing variations of a large number of signals. The proposed method has been applied to real normal-condition data collected in an industrial plant for energy production. Its performance has been verified considering synthetic and real malfunctions. The obtained results show an improvement in the early detection of abnormal conditions and in the correct identification of the signals responsible for triggering the detection.
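
    The sketch below implements baseline Auto Associative Kernel Regression (AAKR), reconstructing a test observation as a Gaussian-kernel-weighted average of historical normal-condition patterns; the robustness modification proposed in the record (a different similarity computation) is not implemented, and the signals and bandwidth are illustrative.

```python
# Hedged sketch: plain AAKR reconstruction of a test vector from a memory of
# normal-condition patterns; signals and bandwidth are illustrative, and the
# paper's modified similarity measure is not implemented.
import numpy as np

rng = np.random.default_rng(5)
X_hist = rng.normal([10.0, 50.0, 0.5], [0.2, 1.0, 0.02], size=(500, 3))  # normal-condition memory

def aakr_reconstruct(x_obs, X_hist, h=1.0):
    mean, std = X_hist.mean(axis=0), X_hist.std(axis=0)
    z, zq = (X_hist - mean) / std, (x_obs - mean) / std       # standardize memory and query
    d2 = np.sum((z - zq) ** 2, axis=1)                         # squared distances to each pattern
    w = np.exp(-d2 / (2.0 * h ** 2))                           # Gaussian kernel weights
    return (w[:, None] * X_hist).sum(axis=0) / w.sum()

x_abnormal = np.array([10.0, 55.0, 0.5])        # second signal drifted away from ~50
print("reconstructed (expected normal) values:", np.round(aakr_reconstruct(x_abnormal, X_hist), 3))
```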

  11. Modeling particle-laden turbulent flows with two-way coupling using a high-order kernel density function method

    NASA Astrophysics Data System (ADS)

    Smith, Timothy; Lu, Xiaoyi; Ranjan, Reetesh; Pantano, Carlos

    2016-11-01

    We describe a two-way coupled turbulent dispersed flow computational model using a high-order kernel density function (KDF) method. The carrier-phase solution is obtained using a high-order spatial and temporal incompressible Navier-Stokes solver while the KDF dispersed-phase solver uses the high-order Legendre WENO method. The computational approach is used to model carrier-phase turbulence modulation by the dispersed phase, and particle dispersion by turbulence as a function of momentum coupling strength (particle loading) and number of KDF basis functions. The use of several KDF's allows the model to capture statistical effects of particle trajectory crossing to high degree. Details of the numerical implementation and the coupling between the incompressible flow and dispersed-phase solvers will be discussed, and results at a range of Reynolds numbers will be presented. This work was supported by the National Science Foundation under Grant DMS-1318161.

  12. Melnikov Method for a Three-Zonal Planar Hybrid Piecewise-Smooth System and Application

    NASA Astrophysics Data System (ADS)

    Li, Shuangbao; Ma, Wensai; Zhang, Wei; Hao, Yuxin

    In this paper, we extend the well-known Melnikov method for smooth systems to a class of planar hybrid piecewise-smooth systems, defined in three domains separated by two switching manifolds x = a and x = b. The dynamics in each domain is governed by a smooth system. When an orbit reaches the separation lines, a reset map describing an impacting rule applies instantaneously before the orbit enters another domain. We assume that the unperturbed system has a continuum of periodic orbits transversally crossing the separation lines. We then study the persistence of the periodic orbits under an autonomous perturbation and the reset map. To achieve this objective, we first choose four appropriate switching sections and build a Poincaré map; after that, we present a displacement function and carry out its Taylor expansion to first order in the perturbation parameter ε near ε = 0. We denote the first coefficient in the expansion as the first-order Melnikov function, whose zeros provide the persistence of periodic orbits under perturbation. Finally, we study periodic orbits of a concrete planar hybrid piecewise-smooth system using the obtained Melnikov function.

  13. Analysis of the incomplete Galerkin method for modelling of smoothly-irregular transition between planar waveguides

    NASA Astrophysics Data System (ADS)

    Divakov, D.; Sevastianov, L.; Nikolaev, N.

    2017-01-01

    The paper deals with the numerical solution of the problem of waveguide propagation of polarized light in a smoothly-irregular transition between closed regular waveguides using the incomplete Galerkin method. This method consists in a replacement of variables that reduces the Helmholtz equation to a system of differential equations by the Kantorovich method, and in the formulation of boundary conditions for the resulting system. The formulation of the boundary problem for the ODE system is carried out in the computer algebra system Maple. The stated boundary problem is solved using Maple's libraries of numerical methods.

  14. SU-E-T-329: Dosimetric Impact of Implementing Metal Artifact Reduction Methods and Metal Energy Deposition Kernels for Photon Dose Calculations

    SciTech Connect

    Huang, J; Followill, D; Howell, R; Liu, X; Mirkovic, D; Stingo, F; Kry, S

    2015-06-15

    Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and metal artifact reduction methods: Philips O-MAR, GE’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus

  15. An Evaluation of the Kernel Equating Method: A Special Study with Pseudotests Constructed from Real Test Data. Research Report. ETS RR-06-02

    ERIC Educational Resources Information Center

    von Davier, Alina A.; Holland, Paul W.; Livingston, Samuel A.; Casabianca, Jodi; Grant, Mary C.; Martin, Kathleen

    2006-01-01

    This study examines how closely the kernel equating (KE) method (von Davier, Holland, & Thayer, 2004a) approximates the results of other observed-score equating methods--equipercentile and linear equatings. The study used pseudotests constructed of item responses from a real test to simulate three equating designs: an equivalent groups (EG)…

  16. Kernel phase and kernel amplitude in Fizeau imaging

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin J. S.

    2016-12-01

    Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.
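
    The sketch below illustrates the linear-algebra step at the heart of the kernel-phase idea described above: given a matrix A mapping pupil-plane phase errors to measured Fourier phases, the kernel-phase operator spans the left null space of A, so the projected observables are insensitive to those errors. The small random A is a stand-in for a real pupil model.

```python
# Hedged sketch: kernel-phase operator as the left null space of a (random,
# stand-in) pupil-to-Fourier phase transfer matrix A.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(6)
m, n = 12, 7                             # 12 "baselines", 7 pupil sub-apertures (toy sizes)
A = rng.normal(size=(m, n))

K = null_space(A.T).T                    # rows of K satisfy K @ A = 0
print("K A is (numerically) zero:", np.allclose(K @ A, 0.0))

phi = rng.normal(scale=0.1, size=n)               # pupil-plane phase errors
target_phase = rng.normal(scale=0.5, size=m)      # intrinsic target phase signal
measured = target_phase + A @ phi                 # corrupted measurement
print("kernel phases unaffected by pupil errors:",
      np.allclose(K @ measured, K @ target_phase))
```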

  17. A Nonlinear Framework of Delayed Particle Smoothing Method for Vehicle Localization under Non-Gaussian Environment

    PubMed Central

    Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong

    2016-01-01

    In this paper, a novel nonlinear framework of smoothing method, non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student’s t-distribution is adopted in order to compute the probability distribution function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed-delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on the real-world data, which is collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS has significant improvement on the vehicle state accuracy and outperforms the existing filtering and smoothing methods. PMID:27187405

  18. A strategy to couple the material point method (MPM) and smoothed particle hydrodynamics (SPH) computational techniques

    NASA Astrophysics Data System (ADS)

    Raymond, Samuel J.; Jones, Bruce; Williams, John R.

    2016-12-01

    A strategy is introduced to allow coupling of the material point method (MPM) and smoothed particle hydrodynamics (SPH) for numerical simulations. This new strategy partitions the domain into SPH and MPM regions; particles carry all state variables, and as such no special treatment is required for the transition between regions. The aim of this work is to derive and validate the coupling methodology between MPM and SPH. Such coupling allows for general boundary conditions to be used in an SPH simulation without further augmentation. Additionally, since SPH is a purely particle method while MPM combines particles with a mesh, this coupling also permits a smooth transition from particle methods to mesh methods, where further coupling to mesh methods could in future provide an effective far-field boundary treatment for the SPH method. The coupling technique is introduced and described alongside a number of simulations in 1D and 2D to validate and contextualize the potential of using these two methods in a single simulation. The strategy shown here is capable of fully coupling the two methods without any complicated algorithms to transform information from one method to another.

  19. Source Region Identification Using Kernel Smoothing

    EPA Science Inventory

    As described in this paper, Nonparametric Wind Regression is a source-to-receptor source apportionment model that can be used to identify and quantify the impact of possible source regions of pollutants as defined by wind direction sectors. It is described in detail with an exam...

  1. NUMERICAL CONVERGENCE IN SMOOTHED PARTICLE HYDRODYNAMICS

    SciTech Connect

    Zhu, Qirong; Li, Yuexing; Hernquist, Lars

    2015-02-10

    We study the convergence properties of smoothed particle hydrodynamics (SPH) using numerical tests and simple analytic considerations. Our analysis shows that formal numerical convergence is possible in SPH only in the joint limit N → ∞, h → 0, and N_nb → ∞, where N is the total number of particles, h is the smoothing length, and N_nb is the number of neighbor particles within the smoothing volume used to compute smoothed estimates. Previous work has generally assumed that the conditions N → ∞ and h → 0 are sufficient to achieve convergence, while holding N_nb fixed. We demonstrate that if N_nb is held fixed as the resolution is increased, there will be a residual source of error that does not vanish as N → ∞ and h → 0. Formal numerical convergence in SPH is possible only if N_nb is increased systematically as the resolution is improved. Using analytic arguments, we derive an optimal compromise scaling for N_nb by requiring that this source of error balance that present in the smoothing procedure. For typical choices of the smoothing kernel, we find N_nb ∝ N^0.5. This means that if SPH is to be used as a numerically convergent method, the required computational cost does not scale with particle number as O(N), but rather as O(N^(1+δ)), where δ ≈ 0.5, with a weak dependence on the form of the smoothing kernel.
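
    As a back-of-the-envelope illustration of the quoted scaling, the short Python snippet below evaluates the neighbour count and per-step cost implied by N_nb ∝ N^0.5, ignoring all constants; it is only arithmetic, not an SPH code.

      # If N_nb must grow as N**0.5, the per-step work ~ N * N_nb ~ N**1.5.
      for N in (10**5, 10**6, 10**7):
          N_nb = round(N ** 0.5)      # neighbours per particle, up to a constant factor
          cost = N * N_nb             # ~ number of pairwise kernel evaluations per step
          print(f"N = {N:>9d}   N_nb ~ {N_nb:>5d}   cost ~ {cost:.1e}  (O(N^1.5))")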

  2. An immersed boundary method for smoothed particle hydrodynamics of self-propelled swimmers

    NASA Astrophysics Data System (ADS)

    Hieber, S. E.; Koumoutsakos, P.

    2008-10-01

    We present a novel particle method, combining remeshed Smoothed Particle Hydrodynamics with Immersed Boundary and Level Set techniques for the simulation of flows past complex deforming geometries. The present method retains the Lagrangian adaptivity of particle methods and relies on the remeshing of particle locations in order to ensure the accuracy of the method. In fact this remeshing step enables the introduction of Immersed Boundary Techniques used in grid based methods. The method is applied to simulations of flows of isothermal and compressible fluids past steady and unsteady solid boundaries that are described using a particle Level Set formulation. The method is validated with two and three-dimensional benchmark problems of flows past cylinders and spheres and it is shown to be well suited to large-scale simulations using tens of millions of particles, on flow-structure interaction problems as they pertain to self-propelled anguilliform swimmers.

  3. Analysis and Implementation of Particle-to-Particle (P2P) Graphics Processor Unit (GPU) Kernel for Black-Box Adaptive Fast Multipole Method

    DTIC Science & Technology

    2015-06-01

    ARL-TR-7315 ● June 2015 ● US Army Research Laboratory. Analysis and Implementation of Particle-to-Particle (P2P) Graphics Processor Unit (GPU) Kernel for Black-Box Adaptive Fast Multipole Method, by Richard H Haney and Dale Shires.

  4. Incomplete iterations in multistep backward difference methods for parabolic problems with smooth and nonsmooth data

    SciTech Connect

    Bramble, J. H.; Pasciak, J. E.; Sammon, P. H.; Thomee, V.

    1989-04-01

    Backward difference methods for the discretization of parabolic boundary value problems are considered in this paper. In particular, we analyze the case when the backward difference equations are only solved 'approximately' by a preconditioned iteration. We provide an analysis which shows that these methods remain stable and accurate if a suitable number of iterations (often independent of the spatial discretization and time step size) are used. Results are provided for the smooth as well as nonsmooth initial data cases. Finally, the results of numerical experiments illustrating the algorithms' performance on model problems are given.

  5. Image reconstruction for 3D light microscopy with a regularized linear method incorporating a smoothness prior

    NASA Astrophysics Data System (ADS)

    Preza, Chrysanthe; Miller, Michael I.; Conchello, Jose-Angel

    1993-07-01

    We have shown that the linear least-squares (LLS) estimate of the intensities of a 3-D object obtained from a set of optical sections is unstable due to the inversion of small and zero-valued eigenvalues of the point-spread function (PSF) operator. The LLS solution was regularized by constraining it to lie in a subspace spanned by the eigenvectors corresponding to a selected number of the largest eigenvalues. In this paper we extend the regularized LLS solution to a maximum a posteriori (MAP) solution induced by a prior formed from a 'Good's like' smoothness penalty. This approach also yields a regularized linear estimator which reduces noise as well as edge artifacts in the reconstruction. The advantage of the linear MAP (LMAP) estimate over the current regularized LLS (RLLS) is its ability to regularize the inverse problem by smoothly penalizing components in the image associated with small eigenvalues. Computer simulations were performed using a theoretical PSF and a simple phantom to compare the two regularization techniques. It is shown that the reconstructions using the smoothness prior give superior variance and bias results compared to the RLLS reconstructions. Encouraging reconstructions obtained with the LMAP method from real microscopical images of a 10 μm fluorescent bead and a four-cell Volvox embryo are shown.
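
    The contrast between truncating small eigenvalues (RLLS) and smoothly penalizing them through a quadratic prior (LMAP) can be illustrated on a toy 1-D deconvolution. The sketch below uses a generic Laplacian-squared penalty as a stand-in for the 'Good's like' prior of the paper; the operator H, the phantom, and the regularization weight are illustrative choices, not the authors' setup.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 80
      x_true = np.zeros(n); x_true[30:50] = 1.0            # toy 1-D "object"

      # Toy blurring operator playing the role of the PSF operator.
      t = np.arange(n)
      H = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
      H /= H.sum(axis=1, keepdims=True)
      y = H @ x_true + 0.01 * rng.normal(size=n)            # blurred, noisy data

      # (a) Regularized LLS: keep only the k largest singular components.
      U, s, Vt = np.linalg.svd(H)
      k = 25
      x_rlls = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

      # (b) Linear MAP with a quadratic smoothness prior (discrete Laplacian L):
      #     x_map = argmin ||y - H x||^2 + lam ||L x||^2
      L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
      lam = 1e-2
      x_lmap = np.linalg.solve(H.T @ H + lam * (L.T @ L), H.T @ y)

      print("RLLS error:", np.linalg.norm(x_rlls - x_true))
      print("LMAP error:", np.linalg.norm(x_lmap - x_true))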

  6. A Fast Variational Method for the Construction of Resolution Adaptive C-Smooth Molecular Surfaces.

    PubMed

    Bajaj, Chandrajit L; Xu, Guoliang; Zhang, Qin

    2009-05-01

    We present a variational approach to smooth molecular (proteins, nucleic acids) surface constructions, starting from atomic coordinates, as available from the protein and nucleic-acid data banks. Molecular dynamics (MD) simulations, traditionally used in understanding protein and nucleic-acid folding processes, are based on molecular force fields and require smooth models of these molecular surfaces. To accelerate MD simulations, a popular methodology is to employ coarse grained molecular models, which represent clusters of atoms with similar physical properties by pseudo-atoms, resulting in coarser resolution molecular surfaces. We consider generation of these mixed-resolution or adaptive molecular surfaces. Our approach starts from deriving a general form second order geometric partial differential equation in the level-set formulation, by minimizing a first order energy functional which additionally includes a regularization term to minimize the occurrence of chemically infeasible molecular surface pockets or tunnel-like artifacts. To achieve even higher computational efficiency, a fast cubic B-spline C² interpolation algorithm is also utilized. A narrow band, tri-cubic B-spline level-set method is then used to provide C² smooth and resolution adaptive molecular surfaces.

  7. Generalized hidden-mapping ridge regression, knowledge-leveraged inductive transfer learning for neural networks, fuzzy systems and kernel methods.

    PubMed

    Deng, Zhaohong; Choi, Kup-Sze; Jiang, Yizhang; Wang, Shitong

    2014-12-01

    Inductive transfer learning has attracted increasing attention for the training of effective model in the target domain by leveraging the information in the source domain. However, most transfer learning methods are developed for a specific model, such as the commonly used support vector machine, which makes the methods applicable only to the adopted models. In this regard, the generalized hidden-mapping ridge regression (GHRR) method is introduced in order to train various types of classical intelligence models, including neural networks, fuzzy logical systems and kernel methods. Furthermore, the knowledge-leverage based transfer learning mechanism is integrated with GHRR to realize the inductive transfer learning method called transfer GHRR (TGHRR). Since the information from the induced knowledge is much clearer and more concise than that from the data in the source domain, it is more convenient to control and balance the similarity and difference of data distributions between the source and target domains. The proposed GHRR and TGHRR algorithms have been evaluated experimentally by performing regression and classification on synthetic and real world datasets. The results demonstrate that the performance of TGHRR is competitive with or even superior to existing state-of-the-art inductive transfer learning algorithms.
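
    At its core, ridge regression in a hidden or kernel-induced feature space reduces to solving a regularized linear system on the Gram matrix. The sketch below shows plain kernel ridge regression with an RBF kernel; it illustrates only that core idea and is not the GHRR/TGHRR formulation or its knowledge-leverage transfer mechanism.

      import numpy as np

      def rbf_kernel(X, Y, gamma=1.0):
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      rng = np.random.default_rng(2)
      X = rng.uniform(-3, 3, size=(60, 1))
      y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)

      lam = 1e-2                                              # ridge penalty
      K = rbf_kernel(X, X)
      alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)    # dual ridge solution

      X_test = np.linspace(-3, 3, 5)[:, None]
      print(rbf_kernel(X_test, X) @ alpha)    # predictions f(x) = sum_i alpha_i k(x, x_i)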

  8. The multiscale restriction smoothed basis method for fractured porous media (F-MsRSB)

    NASA Astrophysics Data System (ADS)

    Shah, Swej; Møyner, Olav; Tene, Matei; Lie, Knut-Andreas; Hajibeygi, Hadi

    2016-08-01

    A novel multiscale method for multiphase flow in heterogeneous fractured porous media is devised. The discrete fine-scale system is described using an embedded fracture modeling approach, in which the heterogeneous rock (matrix) and highly-conductive fractures are represented on independent grids. Given this fine-scale discrete system, the method first partitions the fine-scale volumetric grid representing the matrix and the lower-dimensional grids representing fractures into independent coarse grids. Then, basis functions for matrix and fractures are constructed by restricted smoothing, which gives a flexible and robust treatment of complex geometrical features and heterogeneous coefficients. From the basis functions one constructs a prolongation operator that maps between the coarse- and fine-scale systems. The resulting method allows for general coupling of matrix and fracture basis functions, giving efficient treatment of a large variety of fracture conductivities. In addition, basis functions can be adaptively updated using efficient global smoothing strategies to account for multiphase flow effects. The method is conservative and because it is described and implemented in algebraic form, it is straightforward to employ it to both rectilinear and unstructured grids. Through a series of challenging test cases for single and multiphase flow, in which synthetic and realistic fracture maps are combined with heterogeneous petrophysical matrix properties, we validate the method and conclude that it is an efficient and accurate approach for simulating flow in complex, large-scale, fractured media.

  9. A Kernel Density Estimator-Based Maximum A Posteriori Image Reconstruction Method for Dynamic Emission Tomography Imaging.

    PubMed

    Ihsani, Alvin; Farncombe, Troy H

    2016-05-01

    A novel maximum a posteriori (MAP) method for dynamic single-photon emission computed tomography image reconstruction is proposed. The prior probability is modeled as a multivariate kernel density estimator (KDE), effectively modeling the prior probability non-parametrically, with the aim of reducing the effects of artifacts arising from inconsistencies in projection measurements in low-count regimes where projections are dominated by noise. The proposed prior spatially and temporally limits the variation of time-activity functions (TAFs) and attracts similar TAFs together. The similarity between TAFs is determined by the spatial and range scaling parameters of the KDE-like prior. The resulting iterative image reconstruction method is evaluated using two simulated phantoms, namely the extended cardiac-torso (XCAT) heart phantom and a simulated Mini-Deluxe Phantom. The phantoms were chosen to observe the effects of the proposed prior on the TAFs based on the vicinity and abutments of regions with different activities. Our results show the effectiveness of the proposed iterative reconstruction method, especially in low-count regimes, which provides better uniformity within each region of activity, significant reduction of spatiotemporal variations caused by noise, and sharper separation between different regions of activity than expectation maximization and an MAP method employing a more traditional Gibbs prior.

  10. A Kernel Density Estimator-Based Maximum A Posteriori Image Reconstruction Method for Dynamic Emission Tomography Imaging.

    PubMed

    Ihsani, Alvin; Farncombe, Troy

    2016-03-25

    A novel maximum a posteriori (MAP) method for dynamic SPECT image reconstruction is proposed. The prior probability is modelled as a multivariate kernel density estimator (KDE), effectively modelling the prior probability nonparametrically, with the aim of reducing the effects of artifacts arising from inconsistencies in projection measurements in low-count regimes where projections are dominated by noise. The proposed prior spatially and temporally limits the variation of time-activity functions (TAFs) and "attracts" similar TAFs together. The similarity between TAFs is determined by the spatial and range scaling parameters of the KDE-like prior. The resulting iterative image reconstruction method is evaluated using two simulated phantoms, namely the XCAT heart phantom and a simulated Mini-Deluxe Phantom™. The phantoms were chosen to observe the effects of the proposed prior on the TAFs based on the vicinity and abutments of regions with different activities. Our results show the effectiveness of the proposed iterative reconstruction method, especially in low-count regimes, which provides better uniformity within each region of activity, significant reduction of spatio-temporal variations caused by noise, and sharper separation between different regions of activity than expectation maximization and a MAP method employing a more "traditional" Gibbs prior.

  11. A Kernel Machine Method for Detecting Effects of Interaction Between Multidimensional Variable Sets: An Imaging Genetics Application

    PubMed Central

    Ge, Tian; Nichols, Thomas E.; Ghosh, Debashis; Mormino, Elizabeth C.

    2015-01-01

    Measurements derived from neuroimaging data can serve as markers of disease and/or healthy development, are largely heritable, and have been increasingly utilized as (intermediate) phenotypes in genetic association studies. To date, imaging genetic studies have mostly focused on discovering isolated genetic effects, typically ignoring potential interactions with non-genetic variables such as disease risk factors, environmental exposures, and epigenetic markers. However, identifying significant interaction effects is critical for revealing the true relationship between genetic and phenotypic variables, and shedding light on disease mechanisms. In this paper, we present a general kernel machine based method for detecting effects of interaction between multidimensional variable sets. This method can model the joint and epistatic effect of a collection of single nucleotide polymorphisms (SNPs), accommodate multiple factors that potentially moderate genetic influences, and test for nonlinear interactions between sets of variables in a flexible framework. As a demonstration of application, we applied the method to data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) to detect the effects of the interactions between candidate Alzheimer's disease (AD) risk genes and a collection of cardiovascular disease (CVD) risk factors, on hippocampal volume measurements derived from structural brain magnetic resonance imaging (MRI) scans. Our method identified that two genes, CR1 and EPHA1, demonstrate significant interactions with CVD risk factors on hippocampal volume, suggesting that CR1 and EPHA1 may play a role in influencing AD-related neurodegeneration in the presence of CVD risks. PMID:25600633

  12. Weighted Bergman kernels and virtual Bergman kernels

    NASA Astrophysics Data System (ADS)

    Roos, Guy

    2005-12-01

    We introduce the notion of "virtual Bergman kernel" and apply it to the computation of the Bergman kernel of "domains inflated by Hermitian balls", in particular when the base domain is a bounded symmetric domain.

  13. Total phenolics, antioxidant activity, and functional properties of 'Tommy Atkins' mango peel and kernel as affected by drying methods.

    PubMed

    Sogi, Dalbir Singh; Siddiq, Muhammad; Greiby, Ibrahim; Dolan, Kirk D

    2013-12-01

    Mango processing produces significant amount of waste (peels and kernels) that can be utilized for the production of value-added ingredients for various food applications. Mango peel and kernel were dried using different techniques, such as freeze drying, hot air, vacuum and infrared. Freeze dried mango waste had higher antioxidant properties than those from other techniques. The ORAC values of peel and kernel varied from 418-776 and 1547-1819 μmol TE/g db. The solubility of freeze dried peel and kernel powder was the highest. The water and oil absorption index of mango waste powders ranged between 1.83-6.05 and 1.66-3.10, respectively. Freeze dried powders had the lowest bulk density values among different techniques tried. The cabinet dried waste powders can be potentially used in food products to enhance their nutritional and antioxidant properties.

  14. Adaptive smoothing of valleys in DEMs using TIN interpolation from ridgeline elevations: An application to morphotectonic aspect analysis

    NASA Astrophysics Data System (ADS)

    Jordan, Gyozo

    2007-05-01

    This paper presents a smoothing method that eliminates valleys of various Strahler-order drainage lines from a digital elevation model (DEM), thus enabling the recovery of local and regional trends in a terrain. A novel method for automated extraction of high-density channel network is developed to identify ridgelines defined as the watershed boundaries of channel segments. A DEM using TIN interpolation is calculated based on elevations of digitally extracted ridgelines. This removes first-order watersheds from the DEM. Higher levels of DEM smoothing can be achieved by the application of the method to ridgelines of higher-order channels. The advantage of the proposed smoothing method over traditional smoothing methods of moving kernel, trend and spectral methods is that it does not require pre-definition of smoothing parameters, such as kernel or trend parameters, and thus it follows topography in an adaptive way. Another advantage is that smoothing is controlled by the physical-hydrological properties of the terrain, as opposed to mathematical filters. Level of smoothing depends on ridgeline geometry and density, and the applied user-defined channel order. The method requires digital extraction of a high-density channel and ridgeline network. The advantage of the smoothing method over traditional methods is demonstrated through a case study of the Kali Basin test site in Hungary. The smoothing method is used in this study for aspect generalisation for morphotectonic investigations in a small watershed.

  15. The CACAO Method for Smoothing, Gap Filling, and Characterizing Seasonal Anomalies in Satellite Time Series

    NASA Technical Reports Server (NTRS)

    Verger, Aleixandre; Baret, F.; Weiss, M.; Kandasamy, S.; Vermote, E.

    2013-01-01

    Consistent, continuous, and long time series of global biophysical variables derived from satellite data are required for global change research. A novel climatology fitting approach called CACAO (Consistent Adjustment of the Climatology to Actual Observations) is proposed to reduce noise and fill gaps in time series by scaling and shifting the seasonal climatological patterns to the actual observations. The shift and scale CACAO parameters adjusted for each season allow quantifying shifts in the timing of seasonal phenology and inter-annual variations in magnitude as compared to the average climatology. CACAO was assessed first over simulated daily Leaf Area Index (LAI) time series with varying fractions of missing data and noise. Then, performances were analyzed over actual satellite LAI products derived from AVHRR Long-Term Data Record for the 1981-2000 period over the BELMANIP2 globally representative sample of sites. Comparison with two widely used temporal filtering methods, the asymmetric Gaussian (AG) model and the Savitzky-Golay (SG) filter as implemented in TIMESAT, revealed that CACAO achieved better performances for smoothing AVHRR time series characterized by high level of noise and frequent missing observations. The resulting smoothed time series captures well the vegetation dynamics and shows no gaps as compared to the 50-60% of still missing data after AG or SG reconstructions. Results of simulation experiments as well as confrontation with actual AVHRR time series indicate that the proposed CACAO method is more robust to noise and missing data than AG and SG methods for phenology extraction.
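
    The core of the CACAO idea, shifting and scaling a seasonal climatology so that it matches sparse, noisy observations, can be mimicked with a simple least-squares fit. The sketch below is a simplified illustration with a synthetic climatology, a single season, and a brute-force search over the shift; the actual CACAO algorithm and its per-season parameterization are described in the paper, not here.

      import numpy as np

      rng = np.random.default_rng(3)
      t = np.arange(365)
      clim = 1.0 + 2.0 * np.exp(-0.5 * ((t - 180) / 40.0) ** 2)    # toy LAI climatology

      # Synthetic "actual" season: shifted by +15 days, scaled by 0.8, noisy and gappy.
      obs = 0.8 * (1.0 + 2.0 * np.exp(-0.5 * ((t - 195) / 40.0) ** 2))
      obs += 0.05 * rng.normal(size=t.size)
      observed = rng.uniform(size=t.size) < 0.4                    # only ~40% of days kept

      best = None
      for shift in range(-30, 31):                                 # search the temporal shift
          clim_s = np.interp(t - shift, t, clim)
          scale = np.sum(obs[observed] * clim_s[observed]) / np.sum(clim_s[observed] ** 2)
          rss = np.sum((obs[observed] - scale * clim_s[observed]) ** 2)
          if best is None or rss < best[0]:
              best = (rss, shift, scale)

      _, shift, scale = best
      print(f"estimated shift ~ {shift} days, scale ~ {scale:.2f}")
      gap_free = scale * np.interp(t - shift, t, clim)             # smoothed, gap-free series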

  16. Evaluation of different methods for monitoring incipient carious lesions in smooth surfaces under fluoride varnish therapy.

    PubMed

    Ferreira, Jainara Maria Soares; Silva, Milton Fernando Andrade; Oliveira, Andressa Feitosa Bezerra; Sampaio, Fábio Correia

    2008-07-01

    There are only a few studies relating visual inspection methods and laser fluorescence when monitoring regression of incipient carious lesions. The purpose of this study was to monitor incipient carious lesions in smooth surfaces under varnish fluoride therapy using visual inspection methods and laser fluorescence (LF). Active white spot lesions (n = 111) in upper front teeth of 36 children were selected. The children were subjected to four or eight applications of fluoride varnish at weekly intervals. The visual systems were activity (A) and maximum dimension in millimetres (D). They were applied together with LF readings (L) in the beginning of the study (W1), in the 5th week (W5), and in the 9th week (W9). The mean (SD) of L values in W5 and W9 were 5.6 (3.8) and 4.5 (3.3), respectively; both were significantly different from the initial score of 7.4 (5.1) in W1. There was a positive correlation between D and L in W5 (r = 0.25) and W9 (r = 0.36; P < 0.05). The mean (SD) values of L were lower following the activity criteria. Our findings indicate that incipient carious lesions in smooth surfaces under fluoride therapy can be monitored by laser fluorescence and visual inspection methods.

  17. Method of adiabatic modes in studying problems of smoothly irregular open waveguide structures

    SciTech Connect

    Sevastianov, L. A.; Egorov, A. A.; Sevastyanov, A. L.

    2013-02-15

    Basic steps in developing an original method of adiabatic modes that makes it possible to solve the direct and inverse problems of simulating and designing three-dimensional multilayered smoothly irregular open waveguide structures are described. A new element in the method is that an approximate solution of Maxwell's equations is made to obey 'inclined' boundary conditions at the interfaces between the media being considered. These boundary conditions take into account the obliqueness of planes tangent to nonplanar boundaries between the media and lead to new equations for coupled vector quasiwaveguide hybrid adiabatic modes. Solutions of these equations describe the phenomenon of 'entanglement' of two linear polarizations of an irregular multilayered waveguide, the appearance of a new mode in an entangled state, and the effect of rotation of the polarization plane of quasiwaveguide modes. The efficiency of the method is demonstrated by considering the example of numerically simulating a thin-film generalized waveguide Lueneburg lens.

  18. Immersed smoothed finite element method for fluid-structure interaction simulation of aortic valves

    NASA Astrophysics Data System (ADS)

    Yao, Jianyao; Liu, G. R.; Narmoneva, Daria A.; Hinton, Robert B.; Zhang, Zhi-Qian

    2012-12-01

    This paper presents a novel numerical method for simulating the fluid-structure interaction (FSI) problems when blood flows over aortic valves. The method uses the immersed boundary/element method and the smoothed finite element method and hence it is termed as IS-FEM. The IS-FEM is a partitioned approach and does not need a body-fitted mesh for FSI simulations. It consists of three main modules: the fluid solver, the solid solver and the FSI force solver. In this work, the blood is modeled as incompressible viscous flow and solved using the characteristic-based-split scheme with FEM for spatial discretization. The leaflets of the aortic valve are modeled as Mooney-Rivlin hyperelastic materials and solved using smoothed finite element method (or S-FEM). The FSI force is calculated on the Lagrangian fictitious fluid mesh that is identical to the moving solid mesh. The octree search and neighbor-to-neighbor schemes are used to detect efficiently the FSI pairs of fluid and solid cells. As an example, a 3D idealized model of aortic valve is modeled, and the opening process of the valve is simulated using the proposed IS-FEM. Numerical results indicate that the IS-FEM can serve as an efficient tool in the study of aortic valve dynamics to reveal the details of stresses in the aortic valves, the flow velocities in the blood, and the shear forces on the interfaces. This tool can also be applied to animal models studying disease processes and may ultimately translate to new adaptive methods working with magnetic resonance images, leading to improvements on diagnostic and prognostic paradigms, as well as surgical planning, in the care of patients.

  19. Face-based smoothed finite element method for real-time simulation of soft tissue

    NASA Astrophysics Data System (ADS)

    Mendizabal, Andrea; Bessard Duparc, Rémi; Bui, Huu Phuoc; Paulus, Christoph J.; Peterlik, Igor; Cotin, Stéphane

    2017-03-01

    In soft tissue surgery, a tumor and other anatomical structures are usually located using the preoperative CT or MR images. However, due to the deformation of the concerned tissues, this information suffers from inaccuracy when employed directly during the surgery. In order to account for these deformations in the planning process, the use of a bio-mechanical model of the tissues is needed. Such models are often designed using the finite element method (FEM), which is, however, computationally expensive, in particular when a high accuracy of the simulation is required. In our work, we propose to use a smoothed finite element method (S-FEM) in the context of modeling of the soft tissue deformation. This numerical technique has been introduced recently to overcome the overly stiff behavior of the standard FEM and to improve the solution accuracy and the convergence rate in solid mechanics problems. In this paper, a face-based smoothed finite element method (FS-FEM) using 4-node tetrahedral elements is presented. We show that in some cases, the method allows for reducing the number of degrees of freedom, while preserving the accuracy of the discretization. The method is evaluated on a simulation of a cantilever beam loaded at the free end and on a simulation of a 3D cube under traction and compression forces. Further, it is applied to the simulation of the brain shift and of the kidney's deformation. The results demonstrate that the method outperforms the standard FEM in a bending scenario and that it has accuracy similar to that of the standard FEM in the simulations of the brain shift and of the kidney's deformation.

  20. A method for the accurate and smooth approximation of standard thermodynamic functions

    NASA Astrophysics Data System (ADS)

    Coufal, O.

    2013-01-01

    A method is proposed for the calculation of approximations of standard thermodynamic functions. The method is consistent with the physical properties of standard thermodynamic functions. This means that the approximation functions are, in contrast to the hitherto used approximations, continuous and smooth in every temperature interval in which no phase transformations take place. The calculation algorithm was implemented by the SmoothSTF program in the C++ language which is part of this paper.
    Program summary
    Program title: SmoothSTF
    Catalogue identifier: AENH_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENH_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 3807
    No. of bytes in distributed program, including test data, etc.: 131965
    Distribution format: tar.gz
    Programming language: C++
    Computer: Any computer with gcc version 4.3.2 compiler.
    Operating system: Debian GNU Linux 6.0. The program can be run in operating systems in which the gcc compiler can be installed, see http://gcc.gnu.org/install/specific.html.
    RAM: 256 MB are sufficient for the table of standard thermodynamic functions with 500 lines
    Classification: 4.9
    Nature of problem: Standard thermodynamic functions (STF) of individual substances are given by thermal capacity at constant pressure, entropy and enthalpy. STF are continuous and smooth in every temperature interval in which no phase transformations take place. The temperature dependence of STF as expressed by the table of its values is for further application approximated by temperature functions. In the paper, a method is proposed for calculating approximation functions which, in contrast to the hitherto used approximations, are continuous and smooth in every temperature interval.
    Solution method: The approximation functions are

  1. Numerical study of a multigrid method with four smoothing methods for the incompressible Navier-Stokes equations in general coordinates

    NASA Technical Reports Server (NTRS)

    Zeng, S.; Wesseling, P.

    1993-01-01

    The performance of a linear multigrid method using four smoothing methods, called SCGS (Symmetrical Coupled Gauss-Seidel), CLGS (Collective Line Gauss-Seidel), SILU (Scalar ILU), and CILU (Collective ILU), is investigated for the incompressible Navier-Stokes equations in general coordinates, in association with Galerkin coarse grid approximation. Robustness and efficiency are measured and compared by application to test problems. The numerical results show that CILU is the most robust, SILU the least, with CLGS and SCGS in between. CLGS is the best in efficiency, SCGS and CILU follow, and SILU is the worst.

  2. Reproducing Kernel Particle Method in Plasticity of Pressure-Sensitive Material with Reference to Powder Forming Process

    NASA Astrophysics Data System (ADS)

    Khoei, A. R.; Samimi, M.; Azami, A. R.

    2007-02-01

    In this paper, an application of the reproducing kernel particle method (RKPM) is presented in plasticity behavior of pressure-sensitive material. The RKPM technique is implemented in large deformation analysis of powder compaction process. The RKPM shape function and its derivatives are constructed by imposing the consistency conditions. The essential boundary conditions are enforced by the use of the penalty approach. The support of the RKPM shape function covers the same set of particles during powder compaction, hence no instability is encountered in the large deformation computation. A double-surface plasticity model is developed in numerical simulation of pressure-sensitive material. The plasticity model includes a failure surface and an elliptical cap, which closes the open space between the failure surface and hydrostatic axis. The moving cap expands in the stress space according to a specified hardening rule. The cap model is presented within the framework of large deformation RKPM analysis in order to predict the non-uniform relative density distribution during powder die pressing. Numerical computations are performed to demonstrate the applicability of the algorithm in modeling of powder forming processes and the results are compared to those obtained from finite element simulation to demonstrate the accuracy of the proposed model.
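
    The key construction in the abstract, shape functions built by imposing consistency (reproducing) conditions on a kernel, can be written in a few lines in 1-D. The sketch below assumes a cubic B-spline window and linear consistency; the node positions, support size, and evaluation point are arbitrary illustrative choices, and no boundary-condition enforcement or plasticity model is included.

      import numpy as np

      def cubic_spline_window(z):
          z = np.abs(z)
          return np.where(z < 0.5, 2/3 - 4*z**2 + 4*z**3,
                 np.where(z < 1.0, 4/3 - 4*z + 4*z**2 - (4/3)*z**3, 0.0))

      nodes = np.linspace(0.0, 1.0, 11)       # particle positions x_I
      a = 0.25                                # kernel support (dilation parameter)

      def rkpm_shape(x, order=1):
          """1-D reproducing kernel shape functions at x with linear consistency."""
          H = lambda s: np.array([s ** k for k in range(order + 1)])   # monomial basis
          phi = cubic_spline_window((x - nodes) / a) / a               # kernel weights
          M = sum(np.outer(H(x - xi), H(x - xi)) * p for xi, p in zip(nodes, phi))
          c = np.linalg.solve(M, H(0.0))                               # correction coefficients
          return np.array([c @ H(x - xi) * p for xi, p in zip(nodes, phi)])

      x = 0.37
      psi = rkpm_shape(x)
      print("sum of shape functions :", psi.sum())       # ~1  (0th-order consistency)
      print("reproduced coordinate  :", psi @ nodes)     # ~x  (1st-order consistency)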

  3. Calculation of smooth potential energy surfaces using local electron correlation methods

    NASA Astrophysics Data System (ADS)

    Mata, Ricardo A.; Werner, Hans-Joachim

    2006-11-01

    The geometry dependence of excitation domains in local correlation methods can lead to noncontinuous potential energy surfaces. We propose a simple domain merging procedure which eliminates this problem in many situations. The method is applied to heterolytic bond dissociations of ketene and propadienone, to SN2 reactions of Cl- with alkylchlorides, and in a quantum mechanical/molecular mechanical study of the chorismate mutase enzyme. It is demonstrated that smooth potentials are obtained in all cases. Furthermore, basis set superposition error effects are reduced in local calculations, and it is found that this leads to better basis set convergence when computing barrier heights or weak interactions. When the electronic structure strongly changes between reactants or products and the transition state, the domain merging procedure leads to a balanced description of all structures and accurate barrier heights.

  4. Calculation of smooth potential energy surfaces using local electron correlation methods

    SciTech Connect

    Mata, Ricardo A.; Werner, Hans-Joachim

    2006-11-14

    The geometry dependence of excitation domains in local correlation methods can lead to noncontinuous potential energy surfaces. We propose a simple domain merging procedure which eliminates this problem in many situations. The method is applied to heterolytic bond dissociations of ketene and propadienone, to SN2 reactions of Cl- with alkylchlorides, and in a quantum mechanical/molecular mechanical study of the chorismate mutase enzyme. It is demonstrated that smooth potentials are obtained in all cases. Furthermore, basis set superposition error effects are reduced in local calculations, and it is found that this leads to better basis set convergence when computing barrier heights or weak interactions. When the electronic structure strongly changes between reactants or products and the transition state, the domain merging procedure leads to a balanced description of all structures and accurate barrier heights.

  5. An efficient FSI coupling strategy between Smoothed Particle Hydrodynamics and Finite Element methods

    NASA Astrophysics Data System (ADS)

    Fourey, G.; Hermange, C.; Le Touzé, D.; Oger, G.

    2017-08-01

    An efficient coupling between Smoothed Particle Hydrodynamics (SPH) and Finite Element (FE) methods dedicated to violent fluid-structure interaction (FSI) modeling is proposed in this study. The use of a Lagrangian meshless method for the fluid reduces the complexity of fluid-structure interface handling, especially in the presence of complex free-surface flows. The paper details the discrete SPH equations and the FSI coupling strategy adopted. Both the convergence and the robustness of the SPH-FE coupling are assessed and discussed. In particular, the loss and gain in stability are studied for various coupling parameters, and different coupling algorithms are considered. Investigations are performed on 2D academic and experimental test cases in order of increasing complexity.

  6. Invariant measures of smooth dynamical systems, generalized functions and summation methods

    NASA Astrophysics Data System (ADS)

    Kozlov, V. V.

    2016-04-01

    We discuss conditions for the existence of invariant measures of smooth dynamical systems on compact manifolds. If there is an invariant measure with continuously differentiable density, then the divergence of the vector field along every solution tends to zero in the Cesàro sense as time increases unboundedly. Here the Cesàro convergence may be replaced, for example, by any Riesz summation method, which can be arbitrarily close to ordinary convergence (but does not coincide with it). We give an example of a system whose divergence tends to zero in the ordinary sense but none of its invariant measures is absolutely continuous with respect to the `standard' Lebesgue measure (generated by some Riemannian metric) on the phase space. We give examples of analytic systems of differential equations on analytic phase spaces admitting invariant measures of any prescribed smoothness (including a measure with integrable density), but having no invariant measures with positive continuous densities. We give a new proof of the classical Bogolyubov-Krylov theorem using generalized functions and the Hahn-Banach theorem. The properties of signed invariant measures are also discussed.

  7. Perturbation theory for anisotropic dielectric interfaces, and application to subpixel smoothing of discretized numerical methods.

    PubMed

    Kottke, Chris; Farjadpour, Ardavan; Johnson, Steven G

    2008-03-01

    We derive a correct first-order perturbation theory in electromagnetism for cases where an interface between two anisotropic dielectric materials is slightly shifted. Most previous perturbative methods give incorrect results for this case, even to lowest order, because of the complicated discontinuous boundary conditions on the electric field at such an interface. Our final expression is simply a surface integral, over the material interface, of the continuous field components from the unperturbed structure. The derivation is based on a "localized" coordinate-transformation technique, which avoids both the problem of field discontinuities and the challenge of constructing an explicit coordinate transformation by taking the limit in which the coordinate perturbation is infinitesimally localized around the boundary. Not only is our result potentially useful in evaluating boundary perturbations, e.g., from fabrication imperfections, in highly anisotropic media such as many metamaterials, but it also has a direct application in numerical electromagnetism. In particular, we show how it leads to a subpixel smoothing scheme to ameliorate staircasing effects in discretized simulations of anisotropic media, in such a way as to greatly reduce the numerical errors compared to other proposed smoothing schemes.

  8. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
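
    For readers unfamiliar with the quantity being criticized: in the conventional (Rodgers-type) formulation, the smoothing-error covariance is computed from the averaging kernel matrix A and an a priori covariance S_a as S_s = (A - I) S_a (A - I)^T, evaluated on the retrieval grid. The short Python sketch below simply evaluates that expression on toy matrices; the paper's point is precisely that this characterizes the deviation from the gridded (already smoothed) state rather than from the true atmosphere.

      import numpy as np

      n = 20
      levels = np.arange(n)

      # Toy averaging kernel matrix of a constrained retrieval: smooth, normalized rows.
      A = np.exp(-0.5 * ((levels[:, None] - levels[None, :]) / 2.0) ** 2)
      A /= A.sum(axis=1, keepdims=True)

      # A priori covariance on the same grid (exponentially decaying correlations).
      S_a = np.exp(-np.abs(levels[:, None] - levels[None, :]) / 3.0)

      # Conventional smoothing-error covariance: S_s = (A - I) S_a (A - I)^T.
      S_s = (A - np.eye(n)) @ S_a @ (A - np.eye(n)).T
      print(np.sqrt(np.diag(S_s)).round(3))     # per-level smoothing-error standard deviation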

  9. A New Finite Element Supersonic Kernel Function Method in Lifting Surface Theory. Volume I

    DTIC Science & Technology

    1976-04-01

    procedure, incorporating minor improvements. Two recent integrated potential methods are those by Appa and Jones (Ref. 20) and Giesing and Kalman (Ref...used non-uniform characteristic mesh with the vertices as collocation points, Giesing and Kalman used trapezoidal elements and set the col...approximation of the velocity potential must be employed. In the approaches of Appa and Jones, and Giesing and Kalman, a matrix equation must be

  10. Application of Holt exponential smoothing and ARIMA method for data population in West Java

    NASA Astrophysics Data System (ADS)

    Supriatna, A.; Susanti, D.; Hertini, E.

    2017-01-01

    Holt's method is a time series technique often used to forecast data that contain a trend; it applies separate smoothing parameters to the original data in order to smooth the trend component. In addition to Holt's method, the ARIMA method can be applied to a wide variety of data, including series with a trend pattern. The actual population data for 1998-2015 contain a trend, so both Holt's method and the ARIMA method can be used to obtain predictions for several future periods. The best method is selected on the basis of the smallest MAPE and MAE errors. Holt's method predicts 47,205,749 people in 2016, 47,535,324 in 2017, and 48,041,672 in 2018, with a MAPE of 0.469744 and an MAE of 189,731, while the ARIMA method predicts 46,964,682 people in 2016, 47,342,189 in 2017, and 47,899,696 in 2018, with a MAPE of 0.4380 and an MAE of 176,626.
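
    For reference, Holt's linear-trend smoothing amounts to two coupled recursions for a level and a trend term. A minimal Python sketch follows; the smoothing constants and the series are illustrative placeholders only (the study's actual West Java data and fitted parameters are not reproduced here), and the ARIMA side is omitted.

      def holt_forecast(y, alpha=0.8, beta=0.2, horizon=3):
          """Holt's linear-trend exponential smoothing with illustrative alpha/beta."""
          level, trend = y[0], y[1] - y[0]
          for t in range(1, len(y)):
              prev_level = level
              level = alpha * y[t] + (1 - alpha) * (level + trend)       # level update
              trend = beta * (level - prev_level) + (1 - beta) * trend   # trend update
          return [level + h * trend for h in range(1, horizon + 1)]      # h-step forecasts

      # Hypothetical yearly totals standing in for an annual population series.
      series = [40.1e6, 40.8e6, 41.5e6, 42.3e6, 43.0e6, 43.8e6, 44.6e6, 45.3e6, 46.1e6]
      print(holt_forecast(series))   # three-step-ahead forecasts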

  11. Comparing isotropic and anisotropic smoothing for voxel-based DTI analyses: A simulation study.

    PubMed

    Van Hecke, Wim; Leemans, Alexander; De Backer, Steve; Jeurissen, Ben; Parizel, Paul M; Sijbers, Jan

    2010-01-01

    Voxel-based analysis (VBA) methods are increasingly being used to compare diffusion tensor image (DTI) properties across different populations of subjects. Although VBA has many advantages, its results are highly dependent on several parameter settings, such as those from the coregistration technique applied to align the data, the smoothing kernel, the statistics, and the post-hoc analyses. In particular, to increase the signal-to-noise ratio and to mitigate the adverse effect of residual image misalignments, DTI data are often smoothed before VBA with an isotropic Gaussian kernel with a full width at half maximum of up to 16 × 16 × 16 mm³. However, using isotropic smoothing kernels can significantly increase partial volume or voxel averaging artifacts, adversely affecting the true diffusion properties of the underlying fiber tissue. In this work, we compared VBA results between the isotropic and an anisotropic Gaussian filtering method using a simulated framework. Our results clearly demonstrate an increased sensitivity and specificity of detecting a predefined simulated pathology when the anisotropic smoothing kernel was used. 2009 Wiley-Liss, Inc.

  12. Workshop on advances in smooth particle hydrodynamics

    SciTech Connect

    Wingate, C.A.; Miller, W.A.

    1993-12-31

    These proceedings contain viewgraphs presented at the 1993 workshop held at Los Alamos National Laboratory. Discussed topics include: negative stress, reactive flow calculations, interface problems, boundaries and interfaces, energy conservation in viscous flows, linked penetration calculations, stability and consistency of the SPH method, instabilities, wall heating and conservative smoothing, tensors, tidal disruption of stars, breaking the 10,000,000 particle limit, modelling relativistic collapse, SPH without H, relativistic KSPH avoidance of velocity based kernels, tidal compression and disruption of stars near a supermassive rotating black hole, and finally relativistic SPH viscosity and energy.

  13. Large-scale prediction of disulphide bridges using kernel methods, two-dimensional recursive neural networks, and weighted graph matching.

    PubMed

    Cheng, Jianlin; Saigo, Hiroto; Baldi, Pierre

    2006-03-15

    The formation of disulphide bridges between cysteines plays an important role in protein folding, structure, function, and evolution. Here, we develop new methods for predicting disulphide bridges in proteins. We first build a large curated data set of proteins containing disulphide bridges to extract relevant statistics. We then use kernel methods to predict whether a given protein chain contains intrachain disulphide bridges or not, and recursive neural networks to predict the bonding probabilities of each pair of cysteines in the chain. These probabilities in turn lead to an accurate estimation of the total number of disulphide bridges and to a weighted graph matching problem that can be addressed efficiently to infer the global disulphide bridge connectivity pattern. This approach can be applied both in situations where the bonded state of each cysteine is known, or in ab initio mode where the state is unknown. Furthermore, it can easily cope with chains containing an arbitrary number of disulphide bridges, overcoming one of the major limitations of previous approaches. It can classify individual cysteine residues as bonded or nonbonded with 87% specificity and 89% sensitivity. The estimate for the total number of bridges in each chain is correct 71% of the times, and within one from the true value over 94% of the times. The prediction of the overall disulphide connectivity pattern is exact in about 51% of the chains. In addition to using profiles in the input to leverage evolutionary information, including true (but not predicted) secondary structure and solvent accessibility information yields small but noticeable improvements. Finally, once the system is trained, predictions can be computed rapidly on a proteomic or protein-engineering scale. The disulphide bridge prediction server (DIpro), software, and datasets are available through www.igb.uci.edu/servers/psss.html. (c) 2005 Wiley-Liss, Inc.
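
    The last stage described above, turning pairwise bonding probabilities into a global connectivity pattern, is a weighted graph matching problem. The sketch below solves a tiny instance with a generic maximum-weight matching routine from networkx; the probabilities are made up for illustration, and the paper's own matching formulation and neural-network outputs are not reproduced.

      import networkx as nx

      # Hypothetical bonding probabilities for four cysteines (indices 0-3).
      prob = {(0, 1): 0.15, (0, 2): 0.80, (0, 3): 0.10,
              (1, 2): 0.05, (1, 3): 0.85, (2, 3): 0.20}

      G = nx.Graph()
      for (i, j), p in prob.items():
          G.add_edge(i, j, weight=p)

      # Maximum-weight matching picks the most probable global pairing, here {0-2, 1-3}.
      matching = nx.max_weight_matching(G, maxcardinality=True)
      print(sorted(tuple(sorted(edge)) for edge in matching))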

  14. A DAFT DL_POLY distributed memory adaptation of the Smoothed Particle Mesh Ewald method

    NASA Astrophysics Data System (ADS)

    Bush, I. J.; Todorov, I. T.; Smith, W.

    2006-09-01

    The Smoothed Particle Mesh Ewald method [U. Essmann, L. Perera, M.L. Berkowtz, T. Darden, H. Lee, L.G. Pedersen, J. Chem. Phys. 103 (1995) 8577] for calculating long ranged forces in molecular simulation has been adapted for the parallel molecular dynamics code DL_POLY_3 [I.T. Todorov, W. Smith, Philos. Trans. Roy. Soc. London 362 (2004) 1835], making use of a novel 3D Fast Fourier Transform (DAFT) [I.J. Bush, The Daresbury Advanced Fourier transform, Daresbury Laboratory, 1999] that perfectly matches the Domain Decomposition (DD) parallelisation strategy [W. Smith, Comput. Phys. Comm. 62 (1991) 229; M.R.S. Pinches, D. Tildesley, W. Smith, Mol. Sim. 6 (1991) 51; D. Rapaport, Comput. Phys. Comm. 62 (1991) 217] of the DL_POLY_3 code. In this article we describe software adaptations undertaken to import this functionality and provide a review of its performance.

  15. An Implementation of the Smooth Particle Mesh Ewald Method on GPU Hardware.

    PubMed

    Harvey, M J; De Fabritiis, G

    2009-09-08

    The smooth particle mesh Ewald summation method is widely used to efficiently compute long-range electrostatic force terms in molecular dynamics simulations, and there has been considerable work in developing optimized implementations for a variety of parallel computer architectures. We describe an implementation for Nvidia graphical processing units (GPUs) which are general purpose computing devices with a high degree of intrinsic parallelism and arithmetic performance. We find that, for typical biomolecular simulations (e.g., DHFR, 26K atoms), a single GPU equipped workstation is able to provide sufficient performance to permit simulation rates of ≈50 ns/day when used in conjunction with the ACEMD molecular dynamics package (1) and exhibits an accuracy comparable to that of a reference double-precision CPU implementation.

  16. A multiscale restriction-smoothed basis method for high contrast porous media represented on unstructured grids

    SciTech Connect

    Møyner, Olav Lie, Knut-Andreas

    2016-01-01

    A wide variety of multiscale methods have been proposed in the literature to reduce runtime and provide better scaling for the solution of Poisson-type equations modeling flow in porous media. We present a new multiscale restricted-smoothed basis (MsRSB) method that is designed to be applicable to both rectilinear grids and unstructured grids. Like many other multiscale methods, MsRSB relies on a coarse partition of the underlying fine grid and a set of local prolongation operators (multiscale basis functions) that map unknowns associated with the fine grid cells to unknowns associated with blocks in the coarse partition. These mappings are constructed by restricted smoothing: Starting from a constant, a localized iterative scheme is applied directly to the fine-scale discretization to compute prolongation operators that are consistent with the local properties of the differential operators. The resulting method has three main advantages: First of all, both the coarse and the fine grid can have general polyhedral geometry and unstructured topology. This means that partitions and good prolongation operators can easily be constructed for complex models involving high media contrasts and unstructured cell connections introduced by faults, pinch-outs, erosion, local grid refinement, etc. In particular, the coarse partition can be adapted to geological or flow-field properties represented on cells or faces to improve accuracy. Secondly, the method is accurate and robust when compared to existing multiscale methods and does not need expensive recomputation of local basis functions to account for transient behavior: Dynamic mobility changes are incorporated by continuing to iterate a few extra steps on existing basis functions. This way, the cost of updating the prolongation operators becomes proportional to the amount of change in fluid mobility and one reduces the need for expensive, tolerance-based updates. Finally, since the MsRSB method is formulated on top of a cell
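
    The phrase "constructed by restricted smoothing" can be made concrete with a small 1-D sketch: start from coarse-block indicator functions, relax them with damped Jacobi iterations of the fine-scale operator, and rescale so the basis functions remain a partition of unity. The code below is a stripped-down illustration only; in particular, the localization of each basis function to a support region, which gives the method its name, is omitted.

      import numpy as np

      n_fine, n_blocks = 60, 4
      block = np.repeat(np.arange(n_blocks), n_fine // n_blocks)   # coarse partition

      # Fine-scale operator: 1-D second-order flow operator with a high-contrast coefficient.
      rng = np.random.default_rng(4)
      k = 10.0 ** rng.uniform(-2, 2, size=n_fine + 1)
      A = np.zeros((n_fine, n_fine))
      for i in range(n_fine):
          A[i, i] = k[i] + k[i + 1]
          if i > 0:
              A[i, i - 1] = -k[i]
          if i < n_fine - 1:
              A[i, i + 1] = -k[i + 1]

      # Prolongation by smoothing of the block indicators.
      P = np.eye(n_blocks)[block].astype(float)        # n_fine x n_blocks indicator functions
      D_inv = 1.0 / np.diag(A)
      omega = 2.0 / 3.0
      for _ in range(50):
          P = P - omega * (D_inv[:, None] * (A @ P))   # damped Jacobi step
          P = P / P.sum(axis=1, keepdims=True)         # keep the partition of unity

      print(P.round(2))      # smooth basis functions adapted to the coefficient contrast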

  17. Biological Rhythms Modelisation of Vigilance and Sleep in Microgravity State with COSINOR and Volterra's Kernels Methods

    NASA Astrophysics Data System (ADS)

    Gaudeua de Gerlicz, C.; Golding, J. G.; Bobola, Ph.; Moutarde, C.; Naji, S.

    2008-06-01

    Spaceflight under microgravity causes biological and physiological imbalances in human beings. Many studies have already been published on this topic, especially on sleep disturbances and on circadian rhythms (the vigilance-sleep alternation, body temperature...). Factors like space motion sickness, noise, or excitement can cause severe sleep disturbances. For a stay of longer than four months in space, gradual increases in the planned duration of sleep were reported. [1] The average sleep in orbit was more than 1.5 hours shorter than during control periods on earth, where sleep averaged 7.9 hours. [2] Alertness and calmness showed a clear circadian pattern of 24 h, but with a phase delay of 4 h; calmness also showed a biphasic (12 h) component. Mean sleep duration was 6.4 h, structured by 3-5 non-REM/REM cycles. Models of the neurophysiological mechanisms of stress, and of the interactions between the various physiological and psychological rhythm variables, can be built with the COSINOR method. [3
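
    For context, a single-component COSINOR fit is an ordinary least-squares regression of the rhythm on a cosine and a sine of known period, from which the mesor, amplitude, and acrophase follow. The Python sketch below fits a synthetic 24 h vigilance rhythm; the data are invented for illustration and are unrelated to the flight data referenced above, and the Volterra-kernel part of the modelling is not shown.

      import numpy as np

      rng = np.random.default_rng(5)
      t = np.arange(0.0, 72.0, 0.5)                     # time in hours
      # Synthetic rhythm: mesor 5, amplitude 2, daily peak at 16 h, plus noise.
      y = 5 + 2 * np.cos(2 * np.pi * (t - 16) / 24) + 0.3 * rng.normal(size=t.size)

      # COSINOR model y ~ M + a*cos(w t) + b*sin(w t), linear in (M, a, b).
      w = 2 * np.pi / 24.0
      X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
      M, a, b = np.linalg.lstsq(X, y, rcond=None)[0]

      amplitude = np.hypot(a, b)
      acrophase_h = (np.arctan2(b, a) / w) % 24         # time of the daily peak
      print(f"mesor = {M:.2f}, amplitude = {amplitude:.2f}, acrophase ~ {acrophase_h:.1f} h")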

  18. Do we really need a large number of particles to simulate bimolecular reactive transport with random walk methods? A kernel density estimation approach

    NASA Astrophysics Data System (ADS)

    Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-12-01

    Random walk particle tracking methods are a computationally efficient family of methods to solve reactive transport problems. While the number of particles in most realistic applications is on the order of 10^6-10^9, the number of reactive molecules even in diluted systems might be on the order of fractions of the Avogadro number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in loss of accuracy, but also may lead to an improper reproduction of the mixing process, limited by diffusion. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that allows getting the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the location of molecules represented by that given particle, rigorously represented by a kernel density function. The probability of reaction can be obtained by combining the kernels associated to two potentially reactive particles. We demonstrate that the observed deviation in the reaction vs time curves in numerical experiments reported in the literature could be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process. Our results show that KDEs are powerful tools to improve computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in diluted systems should be modeled based on alternative mechanistic models and not on a
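
    The central step, treating each tracked particle as a sample from the density of the molecules it represents and reconstructing concentrations with a kernel density estimate rather than with fixed bins, is easy to demonstrate in 1-D. The sketch below compares a histogram and a Gaussian KDE of the same particle cloud against the exact diffusive profile; bandwidth selection, the time evolution of the kernel size, and the reaction-probability calculation of the paper are not included.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(6)
      # Particle positions after some diffusion time; each particle stands for many molecules.
      positions = rng.normal(loc=0.0, scale=1.0, size=500)

      x = np.linspace(-4, 4, 201)
      exact = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)            # exact (Gaussian) profile

      # Fixed-support reconstruction: histogram with a fixed bin width.
      hist, edges = np.histogram(positions, bins=40, range=(-4, 4), density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      exact_at_centers = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)

      # Kernel density estimation with a Gaussian kernel (Scott's rule bandwidth).
      kde = gaussian_kde(positions)
      c_kde = kde(x)

      print("max |histogram - exact| :", np.abs(hist - exact_at_centers).max())
      print("max |KDE - exact|       :", np.abs(c_kde - exact).max())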

  19. Compactness vs. Smoothness: Methods for regularizing fault slip inversions with application to subduction zone earthquakes.

    NASA Astrophysics Data System (ADS)

    Lohman, R. B.; Simons, M.

    2004-12-01

    We examine inversions of geodetic data for fault slip and discuss how inferred results are affected by choices of regularization. The final goal of any slip inversion is to enhance our understanding of the dynamics governing fault zone processes through kinematic descriptions of fault zone behavior at various temporal and spatial scales. Important kinematic observations include ascertaining whether fault slip is correlated with topographic and gravitational anomalies, whether coseismic and postseismic slip occur on complementary or overlapping regions of the fault plane, and how aftershock distributions compare with areas of coseismic and postseismic slip. Fault slip inversions are generally poorly-determined inverse problems requiring some sort of regularization. Attempts to place inversion results in the context of understanding fault zone processes should be accompanied by careful treatment of how the applied regularization affects characteristics of the inferred slip model. Most regularization techniques involve defining a metric that quantifies the solution "simplicity". A frequently employed method defines a "simple" slip distribution as one that is spatially smooth, balancing the fit to the data vs. the spatial complexity of the slip distribution. One problem related to the use of smoothing constraints is the "smearing" of fault slip into poorly-resolved areas on the fault plane. In addition, even if the data is fit well by a point source, the fact that a point source is spatially "rough" will force the inversion to choose a smoother model with slip over a broader area. Therefore, when we interpret the area of inferred slip we must ask whether the slipping area is truly constrained by the data, or whether it could be fit equally well by a more spatially compact source with larger amplitudes of slip. We introduce an alternate regularization technique for fault slip inversions, where we seek an end member model that is the smallest region of fault slip that
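
    For contrast with the compact-source regularization proposed above, here is a minimal Python sketch of the conventional smoothing-regularized inversion the abstract critiques: slip m minimizes ||Gm - d||^2 + lambda^2 ||Lm||^2 with a second-difference roughness operator L. The Green's function matrix G, data d and lambda are synthetic, assumed values.

        import numpy as np

        def smoothed_slip_inversion(G, d, lam):
            """Solve min_m ||G m - d||^2 + lam^2 ||L m||^2, with L a 1-D second-difference
            (roughness) operator, by stacking both terms into one least-squares system."""
            n = G.shape[1]
            L = np.zeros((n - 2, n))
            for i in range(n - 2):
                L[i, i:i + 3] = [1.0, -2.0, 1.0]       # discrete Laplacian along the fault
            A = np.vstack([G, lam * L])
            b = np.concatenate([d, np.zeros(n - 2)])
            m, *_ = np.linalg.lstsq(A, b, rcond=None)
            return m

        rng = np.random.default_rng(0)
        G = rng.normal(size=(20, 30))                  # 20 observations, 30 fault patches
        m_true = np.zeros(30); m_true[12:18] = 1.0     # compact "true" slip patch
        d = G @ m_true + 0.05 * rng.normal(size=20)
        m_smooth = smoothed_slip_inversion(G, d, lam=1.0)   # note the spatially smeared estimate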

  20. Estimation of mass ratio of the total kernels within a sample of in-shell peanuts using RF Impedance Method

    USDA-ARS?s Scientific Manuscript database

    It would be useful to know the total kernel mass within a given mass of peanuts (mass ratio) while the peanuts are being bought or processed. In this work, the possibility of finding this mass ratio while the peanuts were in their shells was investigated. Capacitance, phase angle and dissipation fa...

  1. New Equating Methods and Their Relationships with Levine Observed Score Linear Equating under the Kernel Equating Framework

    ERIC Educational Resources Information Center

    Chen, Haiwen; Holland, Paul

    2010-01-01

    In this paper, we develop a new curvilinear equating for the nonequivalent groups with anchor test (NEAT) design under the assumption of the classical test theory model, which we name curvilinear Levine observed score equating. In fact, by applying both the kernel equating framework and the mean-preserving linear transformation of…

  2. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each involving a combination of several elementary or intermediate kernels, which results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show a clear gain compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.

  3. Kernel earth mover's distance for EEG classification.

    PubMed

    Daliri, Mohammad Reza

    2013-07-01

    Here, we propose a new kernel approach based on the earth mover's distance (EMD) for electroencephalography (EEG) signal classification. The EEG time series are first transformed into histograms in this approach. The distance between these histograms is then computed using the EMD in a pair-wise manner. We bring the distances into a kernel form called kernel EMD. The support vector classifier can then be used for the classification of EEG signals. The experimental results on the real EEG data show that the new kernel method is very effective, and can classify the data with higher accuracy than traditional methods.
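
    A minimal Python sketch of the pipeline described (illustrative data and parameters, not the authors' implementation): EEG segments become histograms, pairwise earth mover's distances are computed with SciPy's 1-D Wasserstein distance, and the kernel exp(-gamma * EMD) is passed to a precomputed-kernel SVM.

        import numpy as np
        from scipy.stats import wasserstein_distance
        from sklearn.svm import SVC

        def emd_kernel_matrix(histograms, gamma=1.0):
            """Gram matrix K[i, j] = exp(-gamma * EMD(h_i, h_j)) for 1-D histograms."""
            n = len(histograms)
            bins = np.arange(histograms[0].size)
            K = np.zeros((n, n))
            for i in range(n):
                for j in range(i, n):
                    d = wasserstein_distance(bins, bins, histograms[i], histograms[j])
                    K[i, j] = K[j, i] = np.exp(-gamma * d)
            return K

        rng = np.random.default_rng(1)
        centres = rng.choice([0.0, 0.7], size=40)          # two toy "classes" of EEG segments
        hists = [np.histogram(rng.normal(loc=c, size=256), bins=16, range=(-4, 4))[0].astype(float)
                 for c in centres]
        labels = (centres > 0.3).astype(int)
        clf = SVC(kernel="precomputed").fit(emd_kernel_matrix(hists), labels)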

  4. Methods for simultaneously identifying coherent local clusters with smooth global patterns in gene expression profiles.

    PubMed

    Tien, Yin-Jing; Lee, Yun-Shien; Wu, Han-Ming; Chen, Chun-Houh

    2008-03-20

    The hierarchical clustering tree (HCT) with a dendrogram [1] and the singular value decomposition (SVD) with a dimension-reduced representative map [2] are popular methods for two-way sorting of the gene-by-array matrix map employed in gene expression profiling. While HCT dendrograms tend to optimize local coherent clustering patterns, SVD leading eigenvectors usually identify better global grouping and transitional structures. This study proposes a flipping mechanism for a conventional agglomerative HCT using a rank-two ellipse (R2E, an improved SVD algorithm for sorting purposes) seriation by Chen [3] as an external reference. While HCTs always produce permutations with good local behaviour, the rank-two ellipse seriation gives the best global grouping patterns and smooth transitional trends. The resulting algorithm automatically integrates the desirable properties of each method so that users have access to a clustering and visualization environment for gene expression profiles that preserves coherent local clusters and identifies global grouping trends. We demonstrate, through four examples, that the proposed method not only possesses better numerical and statistical properties, it also provides more meaningful biomedical insights than other sorting algorithms. We suggest that sorted proximity matrices for genes and arrays, in addition to the gene-by-array expression matrix, can greatly aid in the search for comprehensive understanding of gene expression structures. Software for the proposed methods can be obtained at http://gap.stat.sinica.edu.tw/Software/GAP.

  5. A particle swarm optimized kernel-based clustering method for crop mapping from multi-temporal polarimetric L-band SAR observations

    NASA Astrophysics Data System (ADS)

    Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza

    2017-06-01

    Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to their specific characteristics such as high resolution, weather and daylight independence, have become a valuable source of information for environment monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agriculture crop mapping from multi-temporal PolSAR data. Firstly, several polarimetric features are extracted from preprocessed data. These features include the linear polarization intensities and several statistical and physically based decompositions, such as the Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, kernelized versions of the hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. Unlike conventional partitioning clustering algorithms, the kernel function transforms non-spherical and non-linearly separable data structures so that they can be clustered more easily. In addition, in order to enhance the results, a Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters and cluster centers and to optimize feature selection. The efficiency of this method was evaluated by using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July in 2012. The results demonstrate more accurate crop maps using the proposed method when compared to the classical approaches (e.g., a 12% improvement in general). In addition, when the optimization technique is used, greater improvement is observed in crop classification, e.g., 5% overall. Furthermore, a strong relationship between the Freeman-Durden volume scattering component, which is related to canopy structure, and the phenological growth stages is observed.
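
    A minimal Python sketch of the kernelized hard C-means assignment used on stacked polarimetric features (the fuzzy variant and the PSO tuning step are omitted); the feature matrix, kernel width and cluster count are assumed values.

        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel

        def kernel_kmeans(X, n_clusters=4, gamma=0.5, n_iter=20, seed=0):
            """Hard kernel C-means: feature-space distances are computed from the Gram matrix only."""
            K = rbf_kernel(X, gamma=gamma)
            n = X.shape[0]
            labels = np.random.default_rng(seed).integers(n_clusters, size=n)
            for _ in range(n_iter):
                dist = np.zeros((n, n_clusters))
                for c in range(n_clusters):
                    idx = np.flatnonzero(labels == c)
                    if idx.size == 0:
                        dist[:, c] = np.inf
                        continue
                    # ||phi(x) - m_c||^2 = K(x,x) - 2*mean_j K(x,x_j) + mean_jk K(x_j,x_k)
                    dist[:, c] = np.diag(K) - 2.0 * K[:, idx].mean(axis=1) + K[np.ix_(idx, idx)].mean()
                labels = dist.argmin(axis=1)
            return labels

        X = np.random.default_rng(2).normal(size=(200, 12))   # pixels x polarimetric features (toy)
        crop_labels = kernel_kmeans(X, n_clusters=4)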

  6. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  7. GPUs, a new tool of acceleration in CFD: efficiency and reliability on smoothed particle hydrodynamics methods.

    PubMed

    Crespo, Alejandro C; Dominguez, Jose M; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D

    2011-01-01

    Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability.

  8. GPUs, a New Tool of Acceleration in CFD: Efficiency and Reliability on Smoothed Particle Hydrodynamics Methods

    PubMed Central

    Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.

    2011-01-01

    Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185

  9. Enrollment Forecasting with Double Exponential Smoothing: Two Methods for Objective Weight Factor Selection. AIR Forum 1980 Paper.

    ERIC Educational Resources Information Center

    Gardner, Don E.

    The merits of double exponential smoothing are discussed relative to other types of pattern-based enrollment forecasting methods. The difficulties associated with selecting an appropriate weight factor are discussed, and their potential effects on prediction results are illustrated. Two methods for objectively selecting the "best" weight…

  10. Improving the Bandwidth Selection in Kernel Equating

    ERIC Educational Resources Information Center

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
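
    A minimal Python sketch of Silverman's rule of thumb, the kind of data-driven bandwidth the abstract proposes as an alternative to penalty-function minimization; the score data are illustrative.

        import numpy as np

        def silverman_bandwidth(x):
            """Silverman's rule of thumb for a Gaussian kernel:
            h = 0.9 * min(std, IQR / 1.34) * n^(-1/5)."""
            x = np.asarray(x, dtype=float)
            iqr = np.subtract(*np.percentile(x, [75, 25]))
            sigma = min(x.std(ddof=1), iqr / 1.34)
            return 0.9 * sigma * x.size ** (-0.2)

        scores = np.random.default_rng(3).normal(loc=50, scale=10, size=2000).round()
        h = silverman_bandwidth(scores)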

  12. A novel method for modeling of complex wall geometries in smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Eitzlmayr, Andreas; Koscher, Gerold; Khinast, Johannes

    2014-10-01

    Smoothed particle hydrodynamics (SPH) has become increasingly important during recent decades. Its meshless nature, inherent representation of convective transport and ability to simulate free surface flows make SPH particularly promising with regard to simulations of industrial mixing devices for high-viscous fluids, which often have complex rotating geometries and partially filled regions (e.g., twin-screw extruders). However, incorporating the required geometries remains a challenge in SPH since the most obvious and most common ways to model solid walls are based on particles (i.e., boundary particles and ghost particles), which leads to complications with arbitrarily-curved wall surfaces. To overcome this problem, we developed a systematic method for determining an adequate interaction between SPH particles and a continuous wall surface based on the underlying SPH equations. We tested our new approach by using the open-source particle simulator "LIGGGHTS" and comparing the velocity profiles to analytical solutions and SPH simulations with boundary particles. Finally, we followed the evolution of a tracer in a twin-cam mixer during the rotation, which was experimentally and numerically studied by several other authors, and ascertained good agreement with our results. This supports the validity of our newly-developed wall interaction method, which constitutes a step forward in SPH simulations of complex geometries.

  13. PDE-based spatial smoothing: a practical demonstration of impacts on MRI brain extraction, tissue segmentation and registration.

    PubMed

    Xing, Xiu-Xia; Zhou, You-Long; Adelstein, Jonathan S; Zuo, Xi-Nian

    2011-06-01

    Spatial smoothing is typically used to denoise magnetic resonance imaging (MRI) data. Gaussian smoothing kernels, associated with heat equations or isotropic diffusion (ISD), are widely adopted for this purpose because of their easy implementation and efficient computation, but despite these advantages, Gaussian smoothing kernels blur the edges, curvature and texture of images. To overcome these issues, researchers have proposed anisotropic diffusion (ASD) and non-local means [i.e., diffusion (NLD)] kernels. However, these new filtering paradigms are rarely applied to MRI analyses. In the current study, using real degraded MRI data, we demonstrated the effect of denoising using ISD, ASD and NLD kernels. Furthermore, we evaluated their impact on three common preprocessing steps of MRI data analysis: brain extraction, segmentation and registration. Results suggest that NLD-based spatial smoothing is most effective at improving the quality of MRI data preprocessing and thus should become the new standard method of smoothing in MRI data processing. Copyright © 2011 Elsevier Inc. All rights reserved.
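
    A minimal Python sketch contrasting isotropic Gaussian smoothing with a basic Perona-Malik anisotropic diffusion step of the kind the ASD kernels build on (NLD is not shown); the conductance function, parameters and test image are assumptions for illustration.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def anisotropic_diffusion(img, n_iter=20, kappa=0.1, dt=0.2):
            """Perona-Malik diffusion: smooth within regions while preserving edges by
            weighting each neighbour difference with an edge-stopping conductance."""
            u = img.astype(float).copy()
            g = lambda d: np.exp(-(d / kappa) ** 2)
            for _ in range(n_iter):
                # neighbour differences (periodic boundaries via np.roll, for brevity)
                dN = np.roll(u, 1, axis=0) - u
                dS = np.roll(u, -1, axis=0) - u
                dE = np.roll(u, -1, axis=1) - u
                dW = np.roll(u, 1, axis=1) - u
                u += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
            return u

        noisy = np.random.default_rng(4).normal(size=(64, 64))
        noisy[:, 32:] += 2.0                              # a sharp "tissue boundary"
        isotropic = gaussian_filter(noisy, sigma=2.0)     # blurs the boundary
        edge_preserving = anisotropic_diffusion(noisy)    # keeps it sharper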

  14. Smoothing splines: Regression, derivatives and deconvolution

    NASA Technical Reports Server (NTRS)

    Rice, J.; Rosenblatt, M.

    1982-01-01

    The statistical properties of a cubic smoothing spline and its derivative are analyzed. It is shown that unless unnatural boundary conditions hold, the integrated squared bias is dominated by local effects near the boundary. Similar effects are shown to occur in the regularized solution of a translation-kernel intergral equation. These results are derived by developing a Fourier representation for a smoothing spline.
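
    A minimal Python sketch of the objects analysed, a cubic smoothing spline and its derivative, here fitted with SciPy rather than by the paper's Fourier analysis; the data and smoothing factor are illustrative.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(5)
        x = np.linspace(0.0, 1.0, 200)
        y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)

        spline = UnivariateSpline(x, y, k=3, s=len(x) * 0.1 ** 2)   # cubic smoothing spline
        dspline = spline.derivative()                               # its first derivative
        y_hat, dy_hat = spline(x), dspline(x)   # dy_hat approximates 2*pi*cos(2*pi*x), worse near the ends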

  15. Fusion and kernel type selection in adaptive image retrieval

    NASA Astrophysics Data System (ADS)

    Doloc-Mihu, Anca; Raghavan, Vijay V.

    2007-04-01

    In this work we investigate the relationships between features representing images, fusion schemes for these features, and kernel types used in a Web-based Adaptive Image Retrieval System. Using the Kernel Rocchio learning method, several kernels having polynomial and Gaussian forms are applied to general images represented by annotations and by color histograms in RGB and HSV color spaces. We propose different fusion schemes, which incorporate kernel selector component(s). We perform experiments to study the relationships between a concatenated vector and several kernel types. Experimental results show that an appropriate kernel could significantly improve the performance of the retrieval system.

  16. A comparison of spatial smoothing methods for small area estimation with sampling weights.

    PubMed

    Mercer, Laina; Wakefield, Jon; Chen, Cici; Lumley, Thomas

    2014-05-01

    Small area estimation (SAE) is an important endeavor in many fields and is used for resource allocation by both public health and government organizations. Often, complex surveys are carried out within areas, in which case it is common for the data to consist only of the response of interest and an associated sampling weight, reflecting the design. While it is appealing to use spatial smoothing models, and many approaches have been suggested for this endeavor, it is rare for spatial models to incorporate the weighting scheme, leaving the analysis potentially subject to bias. To examine the properties of various approaches to estimation we carry out a simulation study, looking at bias due to both non-response and non-random sampling. We also carry out SAE of smoking prevalence in Washington State, at the zip code level, using data from the 2006 Behavioral Risk Factor Surveillance System. The computation times for the methods we compare are short, and all approaches are implemented in R using currently available packages.

  17. Development of a Smooth Trajectory Maneuver Method to Accommodate the Ares I Flight Control Constraints

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Schmitt, Terri L.; Hanson, John M.

    2008-01-01

    Six degree-of-freedom (DOF) launch vehicle trajectories are designed to follow an optimized 3-DOF reference trajectory. A vehicle has a finite amount of control power that it can allocate to performing maneuvers. Therefore, the 3-DOF trajectory must be designed to refrain from using 100% of the allowable control capability to perform maneuvers, saving control power for handling off-nominal conditions, wind gusts and other perturbations. During the Ares I trajectory analysis, two maneuvers were found to be hard for the control system to implement: a roll maneuver prior to the gravity turn and an angle-of-attack maneuver immediately after the J-2X engine start-up. It was decided to develop an approach for creating smooth maneuvers in the optimized reference trajectories that accounts for the thrust available from the engines. A feature of this method is that no additional angular velocity in the direction of the maneuver has been added to the vehicle after maneuver completion. This paper discusses the equations behind these new maneuvers and their implementation into the Ares I trajectory design cycle. Also discussed is a possible extension to adjusting closed-loop guidance.
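
    A generic Python sketch (not the Ares I algorithm) of one way to obtain the stated property that no angular velocity remains in the maneuver direction at completion: blend the attitude change with a raised-cosine profile whose rate vanishes at both ends. Angles and duration are assumed values.

        import numpy as np

        def smooth_maneuver(theta0, theta1, T, t):
            """Raised-cosine attitude blend from theta0 to theta1 over duration T;
            the commanded rate is continuous and zero at t = 0 and t = T."""
            tau = np.clip(t / T, 0.0, 1.0)
            s = 0.5 * (1.0 - np.cos(np.pi * tau))   # s(0)=0, s(1)=1, s'(0)=s'(1)=0
            return theta0 + (theta1 - theta0) * s

        t = np.linspace(0.0, 12.0, 600)             # 12 s roll maneuver (assumed)
        theta = smooth_maneuver(0.0, 30.0, 12.0, t) # degrees
        rate = np.gradient(theta, t)                # peaks mid-maneuver, ~0 at both ends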

  18. Using two soft computing methods to predict wall and bed shear stress in smooth rectangular channels

    NASA Astrophysics Data System (ADS)

    Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein

    2017-03-01

    Two soft computing methods were extended in order to predict the mean wall and bed shear stress in open channels. The genetic programming (GP) and Genetic Algorithm Artificial Neural Network (GAA) were investigated to determine the accuracy of these models in estimating wall and bed shear stress. The GP and GAA model results were compared in terms of testing dataset in order to find the best model. In modeling both bed and wall shear stress, the GP model performed better with RMSE of 0.0264 and 0.0185, respectively. Then both proposed models were compared with equations for rectangular open channels, trapezoidal channels and ducts. According to the results, the proposed models performed the best in predicting wall and bed shear stress in smooth rectangular channels. The obtained equation for rectangular channels could estimate values closer to experimental data, but the equations for ducts had poor, inaccurate results in predicting wall and bed shear stress. The equation presented for trapezoidal channels did not have acceptable accuracy in predicting wall and bed shear stress either.

  19. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance over state-of-the-art ranking algorithms. PMID:28293256
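
    A minimal Python sketch of the kernel-approximation idea for pairwise ranking: map inputs with random Fourier features (scikit-learn's RBFSampler) and train a linear model on pairwise difference vectors, approximating a nonlinear RankSVM without forming the kernel matrix. The optimizer (hinge-loss SGD rather than the paper's truncated Newton solver on the squared hinge loss), data and parameters are illustrative assumptions.

        import numpy as np
        from sklearn.kernel_approximation import RBFSampler
        from sklearn.linear_model import SGDClassifier

        rng = np.random.default_rng(6)
        X = rng.normal(size=(150, 10))                    # items
        y = (X[:, 0] + X[:, 1] ** 2).round(1)             # toy graded relevance

        Z = RBFSampler(gamma=0.5, n_components=200, random_state=0).fit_transform(X)

        # pairwise difference examples z_i - z_j labelled by sign(y_i - y_j)
        i, j = np.triu_indices(len(y), k=1)
        keep = y[i] != y[j]
        P = Z[i[keep]] - Z[j[keep]]
        t = np.sign(y[i[keep]] - y[j[keep]])

        ranker = SGDClassifier(loss="hinge", fit_intercept=False).fit(P, t)
        scores = Z @ ranker.coef_.ravel()                 # higher score = higher rank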

  20. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance over state-of-the-art ranking algorithms.

  1. Semisupervised kernel matrix learning by kernel propagation.

    PubMed

    Hu, Enliang; Chen, Songcan; Zhang, Daoqiang; Yin, Xuesong

    2010-11-01

    The goal of semisupervised kernel matrix learning (SS-KML) is to learn a kernel matrix on all the given samples, on which just a little supervised information, such as class labels or pairwise constraints, is provided. Despite extensive research, the performance of SS-KML still leaves some space for improvement in terms of effectiveness and efficiency. For example, a recent pairwise constraints propagation (PCP) algorithm has formulated SS-KML into a semidefinite programming (SDP) problem, but its computation is very expensive, which undoubtedly restricts PCP's scalability in practice. In this paper, a novel algorithm, called kernel propagation (KP), is proposed to improve the comprehensive performance in SS-KML. The main idea of KP is first to learn a small-sized sub-kernel matrix (named the seed-kernel matrix) and then propagate it into a larger-sized full-kernel matrix. Specifically, the implementation of KP consists of three stages: 1) separate the supervised sample (sub)set X(l) from the full sample set X; 2) learn a seed-kernel matrix on X(l) through solving a small-scale SDP problem; and 3) propagate the learnt seed-kernel matrix into a full-kernel matrix on X. Furthermore, following the idea in KP, we naturally develop two conveniently realizable out-of-sample extensions for KML: one is a batch-style extension, and the other is an online-style extension. The experiments demonstrate that KP is encouraging in both effectiveness and efficiency compared with three state-of-the-art algorithms, and its related out-of-sample extensions are promising too.

  2. Optimized method for isolating highly purified and functional porcine aortic endothelial and smooth muscle cells.

    PubMed

    Beigi, Farideh; Patel, Mitalben; Morales-Garza, Marco A; Winebrenner, Caitlin; Gobin, Andrea S; Chau, Eric; Sampaio, Luiz C; Taylor, Doris A

    2017-11-01

    Numerous protocols exist for isolating aortic endothelial and smooth muscle cells from small animals. However, establishing a protocol for isolating pure cell populations from large animal vessels that are more elastic has been challenging. We developed a simple sequential enzymatic approach to isolate highly purified populations of porcine aortic endothelial and smooth muscle cells. The lumen of a porcine aorta was filled with 25 U/ml dispase solution and incubated at 37°C to dissociate the endothelial cells. The smooth muscle cells were isolated by mincing the tunica media of the treated aorta and incubating the pieces in 0.2% and then 0.1% collagenase type I solution. The isolated endothelial cells stained positive for von Willebrand factor, and 97.2% of them expressed CD31. Early and late passage endothelial cells had a population doubling time of 38 hr and maintained a capacity to take up DiI-Ac-LDL and form tubes in Matrigel®. The isolated smooth muscle cells stained highly positive for alpha-smooth muscle actin, and an impurities assessment showed that only 1.8% were endothelial cells. Population doubling time for the smooth muscle cells was ∼70 hr at passages 3 and 7; and the cells positively responded to endothelin-1, as shown by a 66% increase in the intracellular calcium level. This simple protocol allows for the isolation of highly pure populations of endothelial and smooth muscle cells from porcine aorta that can survive continued passage in culture without losing functionality or becoming overgrown by fibroblasts. © 2017 Wiley Periodicals, Inc.

  3. The context-tree kernel for strings.

    PubMed

    Cuturi, Marco; Vert, Jean-Philippe

    2005-10-01

    We propose a new kernel for strings which borrows ideas and techniques from information theory and data compression. This kernel can be used in combination with any kernel method, in particular Support Vector Machines for string classification, with notable applications in proteomics. By using a Bayesian averaging framework with conjugate priors on a class of Markovian models known as probabilistic suffix trees or context-trees, we compute the value of this kernel in linear time and space while only using the information contained in the spectrum of the considered strings. This is ensured through an adaptation of a compression method known as the context-tree weighting algorithm. Encouraging classification results are reported on a standard protein homology detection experiment, showing that the context-tree kernel performs well with respect to other state-of-the-art methods while using no biological prior knowledge.

  4. Coronary Stent Artifact Reduction with an Edge-Enhancing Reconstruction Kernel – A Prospective Cross-Sectional Study with 256-Slice CT

    PubMed Central

    Tan, Stéphanie; Soulez, Gilles; Diez Martinez, Patricia; Larrivée, Sandra; Stevens, Louis-Mathieu; Goussard, Yves; Mansour, Samer; Chartrand-Lefebvre, Carl

    2016-01-01

    Purpose: Metallic artifacts can result in an artificial thickening of the coronary stent wall which can significantly impair computed tomography (CT) imaging in patients with coronary stents. The objective of this study is to assess in vivo visualization of coronary stent wall and lumen with an edge-enhancing CT reconstruction kernel, as compared to a standard kernel. Methods: This is a prospective cross-sectional study involving the assessment of 71 coronary stents (24 patients), with blinded observers. After 256-slice CT angiography, image reconstruction was done with medium-smooth and edge-enhancing kernels. Stent wall thickness was measured with both orthogonal and circumference methods, averaging thickness from diameter and circumference measurements, respectively. Image quality was assessed quantitatively using objective parameters (noise, signal to noise (SNR) and contrast to noise (CNR) ratios), as well as visually using a 5-point Likert scale. Results: Stent wall thickness was decreased with the edge-enhancing kernel in comparison to the standard kernel, either with the orthogonal (0.97 ± 0.02 versus 1.09 ± 0.03 mm, respectively; p<0.001) or the circumference method (1.13 ± 0.02 versus 1.21 ± 0.02 mm, respectively; p = 0.001). The edge-enhancing kernel generated less overestimation from nominal thickness compared to the standard kernel, both with the orthogonal (0.89 ± 0.19 versus 1.00 ± 0.26 mm, respectively; p<0.001) and the circumference (1.06 ± 0.26 versus 1.13 ± 0.31 mm, respectively; p = 0.005) methods. The edge-enhancing kernel was associated with lower SNR and CNR, as well as higher background noise (all p < 0.001), in comparison to the medium-smooth kernel. Stent visual scores were higher with the edge-enhancing kernel (p<0.001). Conclusion: In vivo 256-slice CT assessment of coronary stents shows that the edge-enhancing CT reconstruction kernel generates thinner stent walls, less overestimation from nominal thickness, and better image quality

  5. Kernel regression for fMRI pattern prediction

    PubMed Central

    Chu, Carlton; Ni, Yizhao; Tan, Geoffrey; Saunders, Craig J.; Ashburner, John

    2011-01-01

    This paper introduces two kernel-based regression schemes to decode or predict brain states from functional brain scans as part of the Pittsburgh Brain Activity Interpretation Competition (PBAIC) 2007, in which our team was awarded first place. Our procedure involved image realignment, spatial smoothing, detrending of low-frequency drifts, and application of multivariate linear and non-linear kernel regression methods: namely kernel ridge regression (KRR) and relevance vector regression (RVR). RVR is based on a Bayesian framework, which automatically determines a sparse solution through maximization of marginal likelihood. KRR is the dual-form formulation of ridge regression, which solves regression problems with high dimensional data in a computationally efficient way. Feature selection based on prior knowledge about human brain function was also used. Post-processing by constrained deconvolution and re-convolution was used to furnish the prediction. This paper also contains a detailed description of how prior knowledge was used to fine tune predictions of specific “feature ratings,” which we believe is one of the key factors in our prediction accuracy. The impact of pre-processing was also evaluated, demonstrating that different pre-processing may lead to significantly different accuracies. Although the original work was aimed at the PBAIC, many techniques described in this paper can be generally applied to any fMRI decoding works to increase the prediction accuracy. PMID:20348000
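
    A minimal Python sketch of one of the two schemes described, kernel ridge regression, with scikit-learn's KernelRidge standing in for the authors' implementation (RVR is omitted); the "voxel" data and hyperparameters are synthetic assumptions.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        rng = np.random.default_rng(7)
        X_train = rng.normal(size=(120, 500))     # 120 scans x 500 pre-processed voxels (toy)
        w = rng.normal(size=500)
        y_train = np.tanh(X_train @ w / 25.0) + 0.1 * rng.normal(size=120)   # a "feature rating"

        krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=1.0 / 500).fit(X_train, y_train)
        y_pred = krr.predict(rng.normal(size=(30, 500)))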

  6. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

  7. Methods and energy storage devices utilizing electrolytes having surface-smoothing additives

    DOEpatents

    Xu, Wu; Zhang, Jiguang; Graff, Gordon L; Chen, Xilin; Ding, Fei

    2015-11-12

    Electrodeposition and energy storage devices utilizing an electrolyte having a surface-smoothing additive can result in self-healing, instead of self-amplification, of initial protuberant tips that give rise to roughness and/or dendrite formation on the substrate and anode surface. For electrodeposition of a first metal (M1) on a substrate or anode from one or more cations of M1 in an electrolyte solution, the electrolyte solution is characterized by a surface-smoothing additive containing cations of a second metal (M2), wherein cations of M2 have an effective electrochemical reduction potential in the solution lower than that of the cations of M1.

  8. Coronary Stent Artifact Reduction with an Edge-Enhancing Reconstruction Kernel - A Prospective Cross-Sectional Study with 256-Slice CT.

    PubMed

    Tan, Stéphanie; Soulez, Gilles; Diez Martinez, Patricia; Larrivée, Sandra; Stevens, Louis-Mathieu; Goussard, Yves; Mansour, Samer; Chartrand-Lefebvre, Carl

    2016-01-01

    Metallic artifacts can result in an artificial thickening of the coronary stent wall which can significantly impair computed tomography (CT) imaging in patients with coronary stents. The objective of this study is to assess in vivo visualization of coronary stent wall and lumen with an edge-enhancing CT reconstruction kernel, as compared to a standard kernel. This is a prospective cross-sectional study involving the assessment of 71 coronary stents (24 patients), with blinded observers. After 256-slice CT angiography, image reconstruction was done with medium-smooth and edge-enhancing kernels. Stent wall thickness was measured with both orthogonal and circumference methods, averaging thickness from diameter and circumference measurements, respectively. Image quality was assessed quantitatively using objective parameters (noise, signal to noise (SNR) and contrast to noise (CNR) ratios), as well as visually using a 5-point Likert scale. Stent wall thickness was decreased with the edge-enhancing kernel in comparison to the standard kernel, either with the orthogonal (0.97 ± 0.02 versus 1.09 ± 0.03 mm, respectively; p<0.001) or the circumference method (1.13 ± 0.02 versus 1.21 ± 0.02 mm, respectively; p = 0.001). The edge-enhancing kernel generated less overestimation from nominal thickness compared to the standard kernel, both with the orthogonal (0.89 ± 0.19 versus 1.00 ± 0.26 mm, respectively; p<0.001) and the circumference (1.06 ± 0.26 versus 1.13 ± 0.31 mm, respectively; p = 0.005) methods. The edge-enhancing kernel was associated with lower SNR and CNR, as well as higher background noise (all p < 0.001), in comparison to the medium-smooth kernel. Stent visual scores were higher with the edge-enhancing kernel (p<0.001). In vivo 256-slice CT assessment of coronary stents shows that the edge-enhancing CT reconstruction kernel generates thinner stent walls, less overestimation from nominal thickness, and better image quality scores than the standard kernel.

  9. Optimizing seeding and culture methods to engineer smooth muscle tissue on biodegradable polymer matrices.

    PubMed

    Kim, B S; Putnam, A J; Kulik, T J; Mooney, D J

    1998-01-05

    The engineering of functional smooth muscle (SM) tissue is critical if one hopes to successfully replace the large number of tissues containing an SM component with engineered equivalents. This study reports on the effects of SM cell (SMC) seeding and culture conditions on the cellularity and composition of SM tissues engineered using biodegradable matrices (5 × 5 mm, 2-mm thick) of polyglycolic acid (PGA) fibers. Cells were seeded by injecting a cell suspension into polymer matrices in tissue culture dishes (static seeding), by stirring polymer matrices and a cell suspension in spinner flasks (stirred seeding), or by agitating polymer matrices and a cell suspension in tubes with an orbital shaker (agitated seeding). The density of SMCs adherent to these matrices was a function of cell concentration in the seeding solution, but under all conditions a larger number (approximately 1 order of magnitude) and more uniform distribution of SMCs adherent to the matrices were obtained with dynamic versus static seeding methods. The dynamic seeding methods, as compared to the static method, also ultimately resulted in new tissues that had a higher cellularity, more uniform cell distribution, and greater elastin deposition. The effects of culture conditions were next studied by culturing cell-polymer constructs in a stirred bioreactor versus static culture conditions. The stirred culture of SMC-seeded polymer matrices resulted in tissues with a cell density of 6.4 ± 0.8 × 10^8 cells/cm^3 after 5 weeks, compared to 2.0 ± 1.1 × 10^8 cells/cm^3 with static culture. The elastin and collagen synthesis rates and deposition within the engineered tissues were also increased by culture in the bioreactors. The elastin content after 5-week culture in the stirred bioreactor was 24 ± 3%, and both the elastin content and the cellularity of these tissues are comparable to those of native SM tissue. New tissues were also created in vivo when dynamically seeded polymer matrices were

  10. Polyhedral elements using an edge-based smoothed finite element method for nonlinear elastic deformations of compressible and nearly incompressible materials

    NASA Astrophysics Data System (ADS)

    Lee, Chan; Kim, Hobeom; Kim, Jungdo; Im, Seyoung

    2017-06-01

    Polyhedral elements with an arbitrary number of nodes or non-planar faces, obtained with an edge-based smoothed finite element method, retain good geometric adaptability and accuracy in solution. This work is intended to extend the polyhedral elements to nonlinear elastic analysis with finite deformations. In order to overcome the volumetric locking problem, a smoothing domain-based selective smoothed finite element method scheme and a three-field-mixed cell-based smoothed finite element method with nodal cells were developed. Using several numerical examples, their performance and the accuracy of their solutions were examined, and their effectiveness for practical applications was demonstrated as well.

  11. High-order Eulerian incompressible smoothed particle hydrodynamics with transition to Lagrangian free-surface motion

    NASA Astrophysics Data System (ADS)

    Lind, S. J.; Stansby, P. K.

    2016-12-01

    The incompressible Smoothed Particle Hydrodynamics (ISPH) method is derived in Eulerian form with high-order smoothing kernels to provide increased accuracy for a range of steady and transient internal flows. Periodic transient flows, in particular, demonstrate high-order convergence and accuracies approaching, for example, spectral mesh-based methods. The improved accuracies are achieved through new high-order Gaussian kernels applied over regular particle distributions with time stepping formally up to 2nd order for transient flows. The Eulerian approach can be easily extended to model free surface flows by merging from Eulerian to Lagrangian regions in an Arbitrary-Lagrangian-Eulerian (ALE) fashion, and a demonstration with periodic wave propagation is presented. In the long term, it is envisaged that the method will greatly increase the accuracy and efficiency of SPH methods, while retaining the flexibility of SPH in modelling free surface and multiphase flows.
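
    A minimal Python sketch of a plain Gaussian SPH kernel, its gradient, and the basic interpolation sum; the paper's higher-order kernels and the incompressible pressure projection are not reproduced, and all values below are illustrative.

        import numpy as np

        def gaussian_kernel_2d(r, h):
            """Standard 2-D Gaussian SPH kernel, W = exp(-(r/h)^2) / (pi h^2)."""
            return np.exp(-(r / h) ** 2) / (np.pi * h ** 2)

        def gaussian_kernel_grad_2d(dx, dy, h):
            """Gradient of the 2-D Gaussian kernel with respect to particle separation."""
            w = gaussian_kernel_2d(np.sqrt(dx ** 2 + dy ** 2), h)
            factor = -2.0 * w / h ** 2
            return factor * dx, factor * dy

        def sph_interpolate(x0, positions, f, mass, rho, h):
            """SPH estimate of field f at x0: sum_j (m_j / rho_j) f_j W(|x0 - x_j|, h)."""
            r = np.linalg.norm(positions - x0, axis=1)
            return np.sum(mass / rho * f * gaussian_kernel_2d(r, h))

        rng = np.random.default_rng(13)
        pos = rng.uniform(0.0, 1.0, size=(400, 2))
        f = np.sin(2 * np.pi * pos[:, 0])
        f_centre = sph_interpolate(np.array([0.5, 0.5]), pos, f, mass=1.0 / 400, rho=1.0, h=0.08)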

  12. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    DOE PAGES

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; ...

    2016-08-10

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N^2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. Lastly, the results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  13. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    NASA Astrophysics Data System (ADS)

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle

    2016-08-01

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N^2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  14. Adaptive wiener image restoration kernel

    DOEpatents

    Yuan, Ding

    2007-06-05

    A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system's Optical Transfer Function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image using a Wiener restoration kernel.
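
    A minimal Python sketch of frequency-domain Wiener restoration along the lines the patent abstract outlines: Fourier transform the blurred image and the optical transfer function, apply the Wiener kernel conj(H) / (|H|^2 + noise-to-signal ratio), and invert. The adaptive (spatially varying) aspect of the invention is not reproduced; the PSF, noise-to-signal ratio and image are assumed.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def wiener_restore(blurred, psf, nsr=1e-2):
            """Classical (non-adaptive) Wiener deconvolution in the Fourier domain."""
            H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
            G = np.fft.fft2(blurred)
            W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener restoration kernel
            return np.real(np.fft.ifft2(W * G))

        rng = np.random.default_rng(8)
        truth = np.zeros((128, 128)); truth[48:80, 48:80] = 1.0
        psf = np.zeros((128, 128)); psf[64, 64] = 1.0
        psf = gaussian_filter(psf, sigma=2.0); psf /= psf.sum()
        blurred = gaussian_filter(truth, sigma=2.0) + 0.01 * rng.normal(size=truth.shape)
        restored = wiener_restore(blurred, psf)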

  15. Application of Smoothing Methods for Determining of the Effecting Factors on the Survival Rate of Gastric Cancer Patients

    PubMed Central

    Noorkojuri, Hoda; Hajizadeh, Ebrahim; Baghestani, Ahmadreza; Pourhoseingholi, Mohamadamin

    2013-01-01

    Background: Smoothing methods are widely used to analyze epidemiologic data, particularly in the area of environmental health, where non-linear relationships are not uncommon. This study focused on three different smoothing methods in Cox models: penalized splines, restricted cubic splines and fractional polynomials. Objectives: The aim of this study was to assess the effects of prognostic factors on the survival of patients with gastric cancer using smoothing methods in the Cox model and Cox proportional hazards. All models were also compared to each other in order to find the best one. Materials and Methods: We retrospectively studied 216 patients with gastric cancer who were registered in one referral cancer registry center in Tehran, Iran. Age at diagnosis, sex, presence of metastasis, tumor size, histology type, lymph node metastasis, and pathologic stage were entered into the analysis using the Cox proportional hazards model and smoothing methods in the Cox model. SPSS version 18.0 and R version 2.14.1 were used for data analysis. The models were compared with the Akaike information criterion. Results: The 5-year survival rate was 30%. The Cox proportional hazards, penalized spline and fractional polynomial models led to similar results, and the Akaike information criterion showed better performance for these three models compared to the restricted cubic spline. The P-value and likelihood ratio test for the restricted cubic spline were also greater than for the other models. Note that the best model is indicated by the lowest Akaike information criterion. Conclusions: The use of smoothing methods helps us to eliminate non-linear effects, but it is more appropriate to use the Cox proportional hazards model in medical data because of its ease of interpretation and capability of modeling both continuous and discrete covariates. The Cox proportional hazards model and smoothing methods analysis also identified that age at diagnosis and tumor size were independent prognostic factors for the

  16. Analysis of maize ( Zea mays ) kernel density and volume using microcomputed tomography and single-kernel near-infrared spectroscopy.

    PubMed

    Gustin, Jeffery L; Jackson, Sean; Williams, Chekeria; Patel, Anokhee; Armstrong, Paul; Peter, Gary F; Settles, A Mark

    2013-11-20

    Maize kernel density affects milling quality of the grain. Kernel density of bulk samples can be predicted by near-infrared reflectance (NIR) spectroscopy, but no accurate method to measure individual kernel density has been reported. This study demonstrates that individual kernel density and volume are accurately measured using X-ray microcomputed tomography (μCT). Kernel density was significantly correlated with kernel volume, air space within the kernel, and protein content. Embryo density and volume did not influence overall kernel density. Partial least-squares (PLS) regression of μCT traits with single-kernel NIR spectra gave stable predictive models for kernel density (R^2 = 0.78, SEP = 0.034 g/cm^3) and volume (R^2 = 0.86, SEP = 2.88 cm^3). Density and volume predictions were accurate for data collected over 10 months based on kernel weights calculated from predicted density and volume (R^2 = 0.83, SEP = 24.78 mg). Kernel density was significantly correlated with bulk test weight (r = 0.80), suggesting that selection of dense kernels can translate to improved agronomic performance.
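
    A minimal Python sketch of the modelling step described, PLS regression of a kernel trait on single-kernel NIR spectra, using scikit-learn's PLSRegression on synthetic spectra; the data, trait values and number of components are assumptions, not the study's.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(9)
        spectra = rng.normal(size=(300, 140))     # 300 kernels x 140 NIR wavelengths (toy)
        beta = rng.normal(size=140)
        density = 1.25 + 0.05 * (spectra @ beta) / np.sqrt(140) + 0.01 * rng.normal(size=300)

        X_tr, X_te, y_tr, y_te = train_test_split(spectra, density, random_state=0)
        pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
        r2 = pls.score(X_te, y_te)                # analogous to the reported R^2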

  17. Self-consistent Kohn-Sham method based on the adiabatic-connection fluctuation-dissipation theorem and the exact-exchange kernel

    SciTech Connect

    Bleiziffer, Patrick Krug, Marcel; Görling, Andreas

    2015-06-28

    A self-consistent Kohn-Sham method based on the adiabatic-connection fluctuation-dissipation (ACFD) theorem, employing the frequency-dependent exact exchange kernel f_x, is presented. The resulting SC-exact-exchange-only (EXX)-ACFD method leads to even more accurate correlation potentials than those obtained within the direct random phase approximation (dRPA). In contrast to dRPA methods, not only the Coulomb kernel but also the exact exchange kernel f_x is taken into account in the EXX-ACFD correlation which results in a method that, unlike dRPA methods, is free of self-correlations, i.e., a method that treats exactly all one-electron systems, like, e.g., the hydrogen atom. The self-consistent evaluation of EXX-ACFD total energies improves the accuracy compared to EXX-ACFD total energies evaluated non-self-consistently with EXX or dRPA orbitals and eigenvalues. Reaction energies of a set of small molecules, for which highly accurate experimental reference data are available, are calculated and compared to quantum chemistry methods like Møller-Plesset perturbation theory of second order (MP2) or coupled cluster methods [CCSD, coupled cluster singles, doubles, and perturbative triples (CCSD(T))]. Moreover, we compare our methods to other ACFD variants like dRPA combined with perturbative corrections such as the second order screened exchange corrections or a renormalized singles correction. Similarly, the performance of our EXX-ACFD methods is investigated for the non-covalently bonded dimers of the S22 reference set and for potential energy curves of noble gas, water, and benzene dimers. The computational effort of the SC-EXX-ACFD method exhibits the same scaling of N^5 with respect to the system size N as the non-self-consistent evaluation of only the EXX-ACFD correlation energy; however, the prefactor increases significantly. Reaction energies from the SC-EXX-ACFD method deviate quite little from EXX-ACFD energies obtained non

  18. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint

    PubMed Central

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2015-01-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in a RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance for certain situations, and have comparable performance in other cases compared to that of the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575

  19. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint.

    PubMed

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2016-04-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in a RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance for certain situations, and have comparable performance in other cases compared to that of the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint.
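
    A minimal Python sketch of quantile regression in an RKHS with the usual squared-norm penalty, i.e. the baseline these papers compare against (the proposed data-sparsity constraint is not implemented here): f(x) = sum_i alpha_i k(x_i, x) is fitted by minimizing the pinball loss plus lambda * alpha' K alpha with a generic optimizer. Data, kernel width and lambda are illustrative.

        import numpy as np
        from scipy.optimize import minimize
        from sklearn.metrics.pairwise import rbf_kernel

        def kernel_quantile_regression(X, y, tau=0.5, lam=1e-2, gamma=1.0):
            """Fit f(x) = sum_i alpha_i k(x_i, x) by minimising the pinball (check) loss
            at level tau plus the RKHS penalty lam * alpha^T K alpha."""
            K = rbf_kernel(X, X, gamma=gamma)

            def objective(alpha):
                r = y - K @ alpha
                pinball = np.mean(np.maximum(tau * r, (tau - 1.0) * r))
                return pinball + lam * alpha @ K @ alpha

            alpha = minimize(objective, np.zeros(len(y)), method="L-BFGS-B").x
            return lambda X_new: rbf_kernel(X_new, X, gamma=gamma) @ alpha

        rng = np.random.default_rng(10)
        X = rng.uniform(-2, 2, size=(150, 1))
        y = np.sin(2 * X[:, 0]) + 0.3 * rng.normal(size=150)
        f90 = kernel_quantile_regression(X, y, tau=0.9)   # estimated 0.9-quantile curve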

  20. Multivariate spatial nonparametric modelling via kernel processes mixing*

    PubMed Central

    Fuentes, Montserrat; Reich, Brian

    2013-01-01

    SUMMARY: In this paper we develop a nonparametric multivariate spatial model that avoids specifying a Gaussian distribution for spatial random effects. Our nonparametric model extends the stick-breaking (SB) prior of Sethuraman (1994), which is frequently used in Bayesian modelling to capture uncertainty in the parametric form of an outcome. The stick-breaking prior is extended here to the spatial setting by assigning each location a different, unknown distribution, and smoothing the distributions in space with a series of space-dependent kernel functions that have a space-varying bandwidth parameter. This results in a flexible non-stationary spatial model, as different kernel functions lead to different relationships between the distributions at nearby locations. This approach is the first to allow both the probabilities and the point mass values of the SB prior to depend on space. Thus, there is no need for replications and we obtain a continuous process in the limit. We extend the model to the multivariate setting by having for each process a different kernel function, but sharing the location of the kernel knots across the different processes. The resulting covariance for the multivariate process is in general nonstationary and nonseparable. The modelling framework proposed here is also computationally efficient because it avoids inverting large matrices and calculating determinants, which often hinders the spatial analysis of large data sets. We study the theoretical properties of the proposed multivariate spatial process. The methods are illustrated using simulated examples and an air pollution application to model components of fine particulate matter. PMID:24347994
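
    A minimal Python sketch of the core construction, space-dependent stick-breaking weights built from kernel functions with location-specific bandwidths; knots, bandwidths and beta draws are illustrative, and only the univariate, single-process weight construction is shown.

        import numpy as np

        def kernel_stick_breaking_weights(s, knots, bandwidths, V):
            """Mixture weights at location s: p_k(s) = V_k K_k(s) prod_{j<k} (1 - V_j K_j(s)),
            with Gaussian kernels K_k centred at the knots."""
            K = np.exp(-0.5 * ((s - knots) / bandwidths) ** 2)
            p = np.empty_like(K)
            remaining = 1.0
            for k in range(K.size):
                p[k] = V[k] * K[k] * remaining
                remaining *= 1.0 - V[k] * K[k]
            return p     # the leftover mass "remaining" goes to a final catch-all component

        rng = np.random.default_rng(12)
        knots = rng.uniform(0, 10, size=20)
        bw = rng.uniform(0.5, 2.0, size=20)
        V = rng.beta(1.0, 1.0, size=20)
        weights_at_s = kernel_stick_breaking_weights(3.7, knots, bw, V)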

  1. Using the Smooth Receiver Operating Curve (ROC) Method for Evaluation and Decision Making in Biometric Systems

    DTIC Science & Technology

    2014-07-01

    Peter A. Flach, Nathalie Japkowicz, Stan Matwin: Smooth Receiver Operating Characteristics (smROC) Curves. Machine Learning and Knowledge...strengthen Canada's ability to anticipate, prevent/mitigate, prepare for, respond to, and recover from natural disasters, serious accidents, crime and...in developing decision making rules for face recognition triaging. From a performance evaluation perspective, the problem can be decomposed into two

  2. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. The previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  3. Examination of tear film smoothness on corneae after refractive surgeries using a noninvasive interferometric method

    NASA Astrophysics Data System (ADS)

    Szczesna, Dorota H.; Kulas, Zbigniew; Kasprzak, Henryk T.; Stenevi, Ulf

    2009-11-01

    A lateral shearing interferometer was used to examine the smoothness of the tear film. The information about the distribution and stability of the precorneal tear film is carried by the wavefront reflected from the surface of tears and coded in interference fringes. Smooth and regular fringes indicate a smooth tear film surface. On corneae after laser in situ keratomileusis (LASIK) or radial keratotomy (RK) surgery, the interference fringes are seldom regular. The fringes are bent along bright lines, which are interpreted as tear film breakups. The high-intensity pattern seems to appear in a similar location on the corneal surface after refractive surgery. Our purpose was to extract information about the pattern existing under the interference fringes and to calculate its shape reproducibility over time and following eye blinks. A low-pass filter was applied and a correlation coefficient was calculated to compare a selected fragment of the template image to each of the following frames in the recorded sequence. High values of the correlation coefficient suggest that irregularities of the corneal epithelium might influence tear film instability and that tear film breakup may be associated with local irregularities of the corneal topography created after the LASIK and RK surgeries.
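
    The correlation analysis described above can be sketched as follows (an illustration of the general procedure, not the authors' code): each interferogram is low-pass filtered and a fixed fragment of the first frame is correlated with the same fragment of every subsequent frame. The Gaussian filter width, the fragment window and the synthetic frames are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fragment_correlation(frames, template_frame=0,
                         box=(slice(20, 60), slice(20, 60)), sigma=3.0):
    """Low-pass filter each interferogram, then correlate a fixed template
    fragment of one frame with the same fragment of every frame."""
    smoothed = [gaussian_filter(f, sigma) for f in frames]
    template = smoothed[template_frame][box].ravel()
    return np.array([np.corrcoef(template, s[box].ravel())[0, 1] for s in smoothed])

# Toy usage: a persistent bright streak (the "breakup" pattern) under drifting fringes.
rng = np.random.default_rng(14)
yy, xx = np.mgrid[0:100, 0:100]
pattern = np.exp(-((xx - 50) ** 2) / 50.0)          # fixed high-intensity line
frames = [np.sin(0.3 * yy + 0.5 * t) + pattern + 0.1 * rng.standard_normal((100, 100))
          for t in range(10)]
print(np.round(fragment_correlation(frames), 2))
```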

  4. A steady and oscillatory kernel function method for interfering surfaces in subsonic, transonic and supersonic flow. [prediction analysis techniques for airfoils

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1976-01-01

    The theory, results and user instructions for an aerodynamic computer program are presented. The theory is based on linear lifting surface theory, and the method is the kernel function. The program is applicable to multiple interfering surfaces which may be coplanar or noncoplanar. Local linearization was used to treat nonuniform flow problems without shocks. For cases with imbedded shocks, the appropriate boundary conditions were added to account for the flow discontinuities. The data describing nonuniform flow fields must be input from some other source such as an experiment or a finite difference solution. The results are in the form of small linear perturbations about nonlinear flow fields. The method was applied to a wide variety of problems for which it is demonstrated to be significantly superior to the uniform flow method. Program user instructions are given for easy access.

  5. A new kernel-based fuzzy level set method for automated segmentation of medical images in the presence of intensity inhomogeneity.

    PubMed

    Rastgarpour, Maryam; Shanbehzadeh, Jamshid

    2014-01-01

    Researchers have recently applied integrative approaches to automate medical image segmentation, combining the benefits of available methods while eliminating their disadvantages. Intensity inhomogeneity is a challenging and open problem in this area that has received relatively little attention from such approaches, yet it has considerable effects on segmentation accuracy. This paper proposes a new kernel-based fuzzy level set algorithm that follows an integrative approach to deal with this problem. It can directly evolve from the initial level set obtained by Gaussian Kernel-Based Fuzzy C-Means (GKFCM). The controlling parameters of level set evolution are also estimated from the results of GKFCM. Moreover, the proposed algorithm is enhanced with locally regularized evolution based on an image model that describes the composition of real-world images, in which intensity inhomogeneity is assumed to be a component of an image. Such improvements make level set manipulation easier and lead to more robust segmentation under intensity inhomogeneity. The proposed algorithm has valuable benefits including automation, invariance to intensity inhomogeneity, and high accuracy. Performance evaluation of the proposed algorithm was carried out on medical images from different modalities. The results confirm its effectiveness for medical image segmentation.

  6. A New Kernel-Based Fuzzy Level Set Method for Automated Segmentation of Medical Images in the Presence of Intensity Inhomogeneity

    PubMed Central

    Shanbehzadeh, Jamshid

    2014-01-01

    Researchers have recently applied integrative approaches to automate medical image segmentation, combining the benefits of available methods while eliminating their disadvantages. Intensity inhomogeneity is a challenging and open problem in this area that has received relatively little attention from such approaches, yet it has considerable effects on segmentation accuracy. This paper proposes a new kernel-based fuzzy level set algorithm that follows an integrative approach to deal with this problem. It can directly evolve from the initial level set obtained by Gaussian Kernel-Based Fuzzy C-Means (GKFCM). The controlling parameters of level set evolution are also estimated from the results of GKFCM. Moreover, the proposed algorithm is enhanced with locally regularized evolution based on an image model that describes the composition of real-world images, in which intensity inhomogeneity is assumed to be a component of an image. Such improvements make level set manipulation easier and lead to more robust segmentation under intensity inhomogeneity. The proposed algorithm has valuable benefits including automation, invariance to intensity inhomogeneity, and high accuracy. Performance evaluation of the proposed algorithm was carried out on medical images from different modalities. The results confirm its effectiveness for medical image segmentation. PMID:24624225

  7. Difference image analysis: automatic kernel design using information criteria

    NASA Astrophysics Data System (ADS)

    Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.

    2016-03-01

    We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components: a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods, including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.
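
    A minimal sketch of the underlying idea, fitting a spatially invariant delta-basis convolution kernel by linear least squares and comparing candidate kernel sizes with the Akaike information criterion, is shown below; it is not the authors' pipeline, and the kernel sizes and toy images are illustrative.

```python
import numpy as np

def fit_delta_kernel(ref, target, half_size):
    """Least-squares fit of a (2h+1)x(2h+1) delta-basis convolution kernel."""
    h = half_size
    rows = []
    # Each kernel pixel is a delta basis function: a shifted copy of the reference image.
    for dy in range(-h, h + 1):
        for dx in range(-h, h + 1):
            rows.append(np.roll(ref, (dy, dx), axis=(0, 1))[h:-h, h:-h].ravel())
    A = np.stack(rows, axis=1)
    b = target[h:-h, h:-h].ravel()
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    rss = float(((A @ coeffs - b) ** 2).sum())
    n, k = b.size, coeffs.size
    aic = 2 * k + n * np.log(rss / n)          # Akaike information criterion
    return coeffs.reshape(2 * h + 1, 2 * h + 1), aic

# Toy usage: choose the kernel half-size with the lowest AIC.
rng = np.random.default_rng(2)
ref = rng.normal(size=(64, 64))
true_kernel = np.array([[0.0, 0.1, 0.0], [0.1, 0.6, 0.1], [0.0, 0.1, 0.0]])
target = sum(true_kernel[i, j] * np.roll(ref, (i - 1, j - 1), axis=(0, 1))
             for i in range(3) for j in range(3)) + 0.01 * rng.normal(size=(64, 64))
for h in (1, 2, 3):
    _, aic = fit_delta_kernel(ref, target, h)
    print(f"half-size {h}: AIC = {aic:.1f}")
```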

  8. An efficient depth map preprocessing method based on structure-aided domain transform smoothing for 3D view generation

    PubMed Central

    Ma, Liyan; Qiu, Bo; Cui, Mingyue; Ding, Jianwei

    2017-01-01

    Depth image-based rendering (DIBR), which is used to render virtual views with a color image and the corresponding depth map, is one of the key techniques in the 2D to 3D conversion process. Due to the absence of knowledge about the 3D structure of a scene and its corresponding texture, DIBR in the 2D to 3D conversion process inevitably leads to holes in the resulting 3D image as a result of newly exposed areas. In this paper, we propose a structure-aided depth map preprocessing framework in the transformed domain, inspired by the recently proposed domain transform for its low complexity and high efficiency. First, our framework integrates hybrid constraints including scene structure, edge consistency and visual saliency information in the transformed domain to improve the performance of depth map preprocessing in an implicit way. Then, adaptive smoothing localization is incorporated into the proposed framework to further reduce over-smoothing and enhance optimization in the non-hole regions. Unlike other similar methods, the proposed method can simultaneously achieve the effects of hole filling, edge correction and local smoothing for typical depth maps in a unified framework. Thanks to these advantages, it can yield visually satisfactory results with less computational complexity for high quality 2D to 3D conversion. Numerical experimental results demonstrate the excellent performance of the proposed method. PMID:28407027

  9. An efficient depth map preprocessing method based on structure-aided domain transform smoothing for 3D view generation.

    PubMed

    Liu, Wei; Ma, Liyan; Qiu, Bo; Cui, Mingyue; Ding, Jianwei

    2017-01-01

    Depth image-based rendering (DIBR), which is used to render virtual views with a color image and the corresponding depth map, is one of the key techniques in the 2D to 3D conversion process. Due to the absence of knowledge about the 3D structure of a scene and its corresponding texture, DIBR in the 2D to 3D conversion process inevitably leads to holes in the resulting 3D image as a result of newly exposed areas. In this paper, we propose a structure-aided depth map preprocessing framework in the transformed domain, inspired by the recently proposed domain transform for its low complexity and high efficiency. First, our framework integrates hybrid constraints including scene structure, edge consistency and visual saliency information in the transformed domain to improve the performance of depth map preprocessing in an implicit way. Then, adaptive smoothing localization is incorporated into the proposed framework to further reduce over-smoothing and enhance optimization in the non-hole regions. Unlike other similar methods, the proposed method can simultaneously achieve the effects of hole filling, edge correction and local smoothing for typical depth maps in a unified framework. Thanks to these advantages, it can yield visually satisfactory results with less computational complexity for high quality 2D to 3D conversion. Numerical experimental results demonstrate the excellent performance of the proposed method.

  10. Putting Priors in Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
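
    The construction can be illustrated with a minimal sketch: a kernel built from the responsibilities of a fitted mixture model, K(x, y) = sum_c P(c|x) P(c|y), which is symmetric positive semidefinite by construction. The scikit-learn GaussianMixture below is a stand-in for the Bayesian mixture and the AUTOBAYES-generated code described in the abstract.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_density_kernel(X, Y, gmm):
    """K(x, y) = sum_c P(c|x) P(c|y): inner product of responsibility vectors.

    Being a Gram matrix of explicit feature vectors, K is symmetric
    positive semidefinite, i.e. a valid Mercer kernel.
    """
    Rx = gmm.predict_proba(X)       # responsibilities P(c|x)
    Ry = gmm.predict_proba(Y)
    return Rx @ Ry.T

# Toy usage: two well-separated blobs give a nearly block-structured kernel.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, size=(30, 2)),
               rng.normal(3, 0.3, size=(30, 2))])
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
K = mixture_density_kernel(X, X, gmm)
print(K.shape, round(K[0, 1], 2), round(K[0, -1], 2))
```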

  11. Learning bounds for kernel regression using effective data dimensionality.

    PubMed

    Zhang, Tong

    2005-09-01

    Kernel methods can embed finite-dimensional data into infinite-dimensional feature spaces. In spite of the large underlying feature dimensionality, kernel methods can achieve good generalization ability. This observation is often wrongly interpreted, and it has been used to argue that kernel learning can magically avoid the "curse-of-dimensionality" phenomenon encountered in statistical estimation problems. This letter shows that, although a kernel representation can embed data into an infinite-dimensional feature space, the effective dimensionality of this embedding, which determines the learning complexity of the underlying kernel machine, is usually small. In particular, we introduce an algebraic definition of a scale-sensitive effective dimension associated with a kernel representation. Based on this quantity, we derive upper bounds on the generalization performance of some kernel regression methods. Moreover, we show that the resulting convergence rates are optimal under various circumstances.
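
    One common algebraic definition of the effective dimension of a kernel Gram matrix K with regularization parameter lambda is d(lambda) = trace(K (K + n*lambda*I)^{-1}); the sketch below computes it for an RBF kernel on toy data. The exact scale-sensitive definition used in the letter may differ.

```python
import numpy as np

def effective_dimension(K, lam):
    """d_eff(lam) = trace( K (K + n*lam*I)^{-1} ) for a symmetric PSD Gram matrix K."""
    n = K.shape[0]
    eigvals = np.linalg.eigvalsh(K)
    return float(np.sum(eigvals / (eigvals + n * lam)))

# Toy usage: an RBF Gram matrix on 200 points in 1D; despite the
# infinite-dimensional feature space, the effective dimension stays small.
rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, size=200)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)
for lam in (1e-1, 1e-2, 1e-3):
    print(f"lambda = {lam:g}: effective dimension = {effective_dimension(K, lam):.1f}")
```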

  12. Efficient χ² Kernel Linearization via Random Feature Maps.

    PubMed

    Yuan, Xiao-Tong; Wang, Zhenzhen; Deng, Jiankang; Liu, Qingshan

    2016-11-01

    Explicit feature mapping is an appealing way to linearize additive kernels, such as the χ² kernel, for training large-scale support vector machines (SVMs). Although accurate in approximation, feature mapping could pose computational challenges in high-dimensional settings as it expands the original features to a higher dimensional space. To handle this issue in the context of χ² kernel SVM learning, we introduce a simple yet efficient method to approximately linearize the χ² kernel through random feature maps. The main idea is to use sparse random projection to reduce the dimensionality of the feature maps while preserving their approximation capability to the original kernel. We provide an approximation error bound for the proposed method. Furthermore, we extend our method to χ² multiple kernel SVM learning. Extensive experiments on large-scale image classification tasks confirm that the proposed approach is able to significantly speed up the training process of χ² kernel SVMs at almost no cost of testing accuracy.
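
    The general recipe, an explicit additive χ² feature map followed by a sparse random projection and a linear SVM, can be sketched with off-the-shelf scikit-learn components as below; this approximates the idea rather than reproducing the paper's construction, and the sample_steps, projection dimension and toy data are illustrative.

```python
import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.random_projection import SparseRandomProjection
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Toy histogram-like data (chi-square kernels expect non-negative features).
rng = np.random.default_rng(5)
X = rng.dirichlet(np.ones(50), size=400)
y = (X[:, :25].sum(axis=1) > 0.5).astype(int)

# Explicit additive chi2 feature map, then a sparse random projection to keep
# the expanded feature dimension manageable, then a linear SVM on the result.
model = make_pipeline(
    AdditiveChi2Sampler(sample_steps=2),
    SparseRandomProjection(n_components=64, random_state=0),
    LinearSVC(C=1.0),
)
model.fit(X, y)
print("training accuracy:", round(model.score(X, y), 3))
```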

  13. Weighted Bergman Kernels and Quantization

    NASA Astrophysics Data System (ADS)

    Engliš, Miroslav

    Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that -log ψ, -log φ are plurisubharmonic, and z ∈ Ω a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ admit an asymptotic expansion for x, y near z, in terms of an almost-analytic extension φ(x, y) of φ(x) = φ(x, x), and similarly for ψ. If in addition Ω is of finite type, φ, ψ behave reasonably at the boundary, and -log φ, -log ψ are strictly plurisubharmonic on Ω, we obtain also an analogous asymptotic expansion for the Berezin transform and give applications to Berezin quantization. Finally, for Ω smoothly bounded and strictly pseudoconvex and φ a smooth strictly plurisubharmonic defining function for Ω, we also obtain results on the Berezin-Toeplitz quantization.

  14. Iterative software kernels

    SciTech Connect

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: `Current status of user level sparse BLAS`; `Current status of the sparse BLAS toolkit`; and `Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit`.

  15. Learning with Box Kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-04-12

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, since the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given which dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.

  16. Learning with box kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-11-01

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, because the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given that dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.

  17. Kernel Affine Projection Algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Weifeng; Príncipe, José C.

    2008-12-01

    The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, collectively named here KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
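
    A minimal sketch of the kernel least-mean-square (KLMS) building block that KAPA generalizes is given below (KAPA itself reuses a sliding window of past inputs per update, which is omitted here); the step size eta and the Gaussian kernel width gamma are illustrative.

```python
import numpy as np

def klms(X, d, eta=0.2, gamma=1.0):
    """Kernel LMS: grow a kernel expansion online, one centre per input sample."""
    centres, alphas, preds = [], [], []
    for x, target in zip(X, d):
        if centres:
            k = np.exp(-gamma * ((np.array(centres) - x) ** 2).sum(axis=1))
            y = float(np.dot(alphas, k))
        else:
            y = 0.0
        preds.append(y)
        e = target - y                 # prediction error drives the update
        centres.append(x)
        alphas.append(eta * e)         # new coefficient proportional to the error
    return np.array(preds)

# Toy usage: online identification of a static nonlinearity.
rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=(300, 1))
d = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(300)
preds = klms(X, d)
print("MSE, first 50 samples:", round(float(np.mean((d[:50] - preds[:50]) ** 2)), 4))
print("MSE, last 50 samples :", round(float(np.mean((d[-50:] - preds[-50:]) ** 2)), 4))
```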

  18. Modeling the propagation of volcanic debris avalanches by a Smoothed Particle Hydrodynamics method

    NASA Astrophysics Data System (ADS)

    Sosio, Rosanna; Battista Crosta, Giovanni

    2010-05-01

    Hazard from collapses of volcanic edifices threatens millions of people who currently live on top of volcanic deposits or around volcanoes prone to fail. Nevertheless, not much effort has been dedicated to the evaluation of the hazard posed by volcanic debris avalanches (e.g. emergency plans, hazard zoning maps). This work focuses on evaluating the exceptional mobility of volcanic debris avalanches for hazard analysis purposes by providing a set of calibrated cases. We model the propagation of eight debris avalanches selected among the best known historical events originated from sector collapses of volcanic edifices. The events have large volumes (ranging from 0.01-0.02 km3 to 25 km3) and are well preserved, so that their main features are recognizable from satellite images. The events developed in a variety of settings and conditions, and they vary with respect to their morphological constraints, materials, and styles of failure. The modeling has been performed using a Lagrangian numerical method adapted from Smoothed Particle Hydrodynamics to solve the depth-averaged quasi-3D equations for motion (McDougall and Hungr 2004). This code has been designed and satisfactorily used to simulate rock and debris avalanches in non-volcanic settings (McDougall and Hungr, 2004). Its use is here extended to model volcanic debris avalanches, which may differ from non-volcanic ones by dimensions, water content, and by possible thermodynamic effects or degassing caused by active volcanic processes. The resolution of the topographic data is generally low for remote areas like the ones considered in this study, while the pre-event topographies are more often not available. The effect of the poor topographic resolution on the final results has been evaluated by replicating the modeling on satellite-derived topographical grids with varying cell size (from 22 m to 90 m). The event reconstructions and the back analyses are based on the observations available from the literature. We test the

  19. Perturbed kernel approximation on homogeneous manifolds

    NASA Astrophysics Data System (ADS)

    Levesley, J.; Sun, X.

    2007-02-01

    Current methods for interpolation and approximation within a native space rely heavily on the strict positive-definiteness of the underlying kernels. If the domains of approximation are the unit spheres in Euclidean spaces, then zonal kernels (kernels that are invariant under the orthogonal group action) are strongly favored. In the implementation of these methods to handle real world problems, however, some or all of the symmetries and positive-definiteness may be lost in digitalization due to small random errors that occur unpredictably during various stages of the execution. Perturbation analysis is therefore needed to address the stability problem encountered. In this paper we study two kinds of perturbations of positive-definite kernels: small random perturbations and perturbations by Dunkl's intertwining operators [C. Dunkl, Y. Xu, Orthogonal polynomials of several variables, Encyclopedia of Mathematics and Its Applications, vol. 81, Cambridge University Press, Cambridge, 2001]. We show that with some reasonable assumptions, a small random perturbation of a strictly positive-definite kernel can still provide vehicles for interpolation and enjoy the same error estimates. We examine the actions of the Dunkl intertwining operators on zonal (strictly) positive-definite kernels on spheres. We show that the resulting kernels are (strictly) positive-definite on spheres of lower dimensions.

  20. Kernel PLS Estimation of Single-trial Event-related Potentials

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.

    2004-01-01

    Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative to the estimation of single-trial ERPs and improvement of ERP averages.

  1. Kernel PLS Estimation of Single-trial Event-related Potentials

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.

    2004-01-01

    Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative to the estimation of single-trial ERPs and improvement of ERP averages.

  2. A nonparametric mean-variance smoothing method to assess Arabidopsis cold stress transcriptional regulator CBF2 overexpression microarray data.

    PubMed

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request.
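
    The mean-variance smoothing step can be illustrated with a simple Nadaraya-Watson kernel smooth of log-variance against mean expression, followed by shrinkage of each gene's variance toward the fitted trend. This is a crude stand-in for NPMVS, whose shrinkage is posterior-based rather than a fixed weight; the bandwidth and the 50/50 shrinkage weight below are illustrative.

```python
import numpy as np

def smooth_variance_trend(means, log_vars, bandwidth=0.5):
    """Nadaraya-Watson smooth of log-variance against mean expression
    (a simple stand-in for the nonlinear mean-variance curve in NPMVS)."""
    w = np.exp(-0.5 * ((means[:, None] - means[None, :]) / bandwidth) ** 2)
    return (w @ log_vars) / w.sum(axis=1)

# Toy usage: per-gene means/variances with a decreasing mean-variance trend.
rng = np.random.default_rng(11)
n_genes = 500
means = rng.uniform(2, 12, n_genes)
true_log_var = 1.5 - 0.1 * means
log_vars = true_log_var + 0.4 * rng.standard_normal(n_genes)
trend = smooth_variance_trend(means, log_vars)

# Shrink each gene's raw log-variance toward the smoothed trend.
shrunk_log_vars = 0.5 * log_vars + 0.5 * trend
print("raw error spread   :", round(float(np.std(log_vars - true_log_var)), 3))
print("shrunk error spread:", round(float(np.std(shrunk_log_vars - true_log_var)), 3))
```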

  3. A smooth dissipative particle dynamics method for domains with arbitrary-geometry solid boundaries

    NASA Astrophysics Data System (ADS)

    Gatsonis, Nikolaos A.; Potami, Raffaele; Yang, Jun

    2014-01-01

    A smooth dissipative particle dynamics method with dynamic virtual particle allocation (SDPD-DV) for modeling and simulation of mesoscopic fluids in wall-bounded domains is presented. The physical domain in SDPD-DV may contain external and internal solid boundaries of arbitrary geometries, periodic inlets and outlets, and the fluid region. The SDPD-DV method is realized with fluid particles, boundary particles, and dynamically allocated virtual particles. The internal or external solid boundaries of the domain can be of arbitrary geometry and are discretized with a surface grid. These boundaries are represented by boundary particles with assigned properties. The fluid domain is discretized with fluid particles of constant mass and variable volume. Conservative and dissipative force models due to virtual particles exerted on a fluid particle in the proximity of a solid boundary supplement the original SDPD formulation. The dynamic virtual particle allocation approach provides the density and the forces due to virtual particles. The integration of the SDPD equations is accomplished with a velocity-Verlet algorithm for the momentum and a Runge-Kutta for the entropy equation. The velocity integrator is supplemented by a bounce-forward algorithm in cases where the virtual particle force model is not able to prevent particle penetration. For the incompressible isothermal systems considered in this work, the pressure of a fluid particle is obtained by an artificial compressibility formulation for liquids and the ideal gas law for gases. The self-diffusion coefficient is obtained by an implementation of the generalized Einstein and the Green-Kubo relations. Field properties are obtained by sampling SDPD-DV outputs on a post-processing grid that allows harnessing the particle information on desired spatiotemporal scales. The SDPD-DV method is verified and validated with simulations in bounded and periodic domains that cover the hydrodynamic and mesoscopic regimes for

  4. Invariance kernel of biological regulatory networks.

    PubMed

    Ahmad, Jamil; Roux, Olivier

    2010-01-01

    The analysis of a Biological Regulatory Network (BRN) leads to computing the set of possible behaviours of the biological components. These behaviours are seen as trajectories, and we are specifically interested in cyclic trajectories since they stand for stability. The set of cycles is given by the so-called invariance kernel of a BRN. This paper presents a method for deriving symbolic formulae for the length, volume and diameter of a cylindrical invariance kernel. These formulae are expressed in terms of delay parameter expressions and establish the existence of an invariance kernel, giving a hint of the number of cyclic trajectories.

  5. An analysis of smoothed particle hydrodynamics

    SciTech Connect

    Swegle, J.W.; Attaway, S.W.; Heinstein, M.W.; Mello, F.J.; Hicks, D.L.

    1994-03-01

    SPH (Smoothed Particle Hydrodynamics) is a gridless Lagrangian technique which is appealing as a possible alternative to numerical techniques currently used to analyze high deformation impulsive loading events. In the present study, the SPH algorithm has been subjected to detailed testing and analysis to determine its applicability in the field of solid dynamics. An important result of the work is a rigorous von Neumann stability analysis which provides a simple criterion for the stability or instability of the method in terms of the stress state and the second derivative of the kernel function. Instability, which typically occurs only for solids in tension, results not from the numerical time integration algorithm, but because the SPH algorithm creates an effective stress with a negative modulus. The analysis provides insight into possible methods for removing the instability. Also, SPH has been coupled into the transient dynamics finite element code PRONTO, and a weighted residual derivation of the SPH equations has been obtained.
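
    The stability criterion mentioned above (instability when the product of the stress and the second derivative of the kernel is positive) can be sketched for the standard 1D cubic spline kernel as follows; the kernel choice and the smoothing length are illustrative assumptions.

```python
import numpy as np

def cubic_spline_w2(q, h=1.0):
    """Second derivative (with respect to r) of the 1D cubic spline SPH kernel, q = r/h."""
    sigma = 2.0 / (3.0 * h)                        # 1D normalisation constant
    d2 = np.where(q < 1.0, -3.0 + 4.5 * q,
         np.where(q < 2.0, 1.5 * (2.0 - q), 0.0))
    return sigma * d2 / h ** 2

def swegle_unstable(stress, neighbour_q, h=1.0):
    """Swegle criterion: instability when stress * W'' > 0."""
    return stress * cubic_spline_w2(neighbour_q, h) > 0

# Nearest neighbours at one smoothing length (q = 1): W'' > 0 there, so a
# tensile stress state (stress > 0) triggers the instability, compression does not.
print(swegle_unstable(stress=+1.0, neighbour_q=1.0))   # tension: unstable
print(swegle_unstable(stress=-1.0, neighbour_q=1.0))   # compression: stable
```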

  6. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
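
    A hedged sketch of the (non-local) kernel PLS smoother is given below: a dual NIPALS loop on a centred Gram matrix that extracts a few latent scores and returns the fitted values as the smoothed curve. It is not the authors' implementation, the local variant (restricting the expansion near known change points) is omitted, and the kernel width and number of components are illustrative.

```python
import numpy as np

def kernel_pls_smooth(x, y, n_components=4, gamma=50.0):
    """Fitted values of a single-output kernel PLS regression (dual NIPALS),
    used here as a nonparametric smoother on the training inputs."""
    n = len(y)
    K = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                       # centre the kernel matrix in feature space
    y_mean = y.mean()
    r = y - y_mean                       # centred response, deflated in the loop
    fitted = np.full(n, y_mean)
    for _ in range(n_components):
        t = Kc @ r
        t /= np.linalg.norm(t)           # latent score for this component
        c = float(r @ t)
        fitted += c * t
        r -= c * t                       # deflate the response
        P = np.eye(n) - np.outer(t, t)
        Kc = P @ Kc @ P                  # deflate the kernel matrix
    return fitted

# Toy usage: smooth a noisy chirp-like signal.
rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(8 * x ** 2) + 0.2 * rng.standard_normal(200)
smooth = kernel_pls_smooth(x, y, n_components=6, gamma=100.0)
print("residual std:", round(float(np.std(y - smooth)), 3))
```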

  7. Hybrid smoothed dissipative particle dynamics and immersed boundary method for simulation of red blood cells in flows.

    PubMed

    Ye, Ting; Phan-Thien, Nhan; Lim, Chwee Teck; Peng, Lina; Shi, Huixin

    2017-06-01

    In biofluid flow systems, the flow problems of fluids of complex structure, such as the flow of red blood cells (RBCs) through complex capillary vessels, often need to be considered. The smoothed dissipative particle dynamics (SDPD), a particle-based method, is one of the easy and flexible methods to model such complex-structure fluids. It couples the best features of smoothed particle hydrodynamics (SPH) and dissipative particle dynamics (DPD), with parameters having specific physical meaning (coming from the SPH discretization of the Navier-Stokes equations), combined with thermal fluctuations in a mesoscale simulation, in a similar manner to the DPD. On the other hand, the immersed boundary method (IBM), a preferred method for handling fluid-structure interaction problems, has also been widely used to handle the fluid-RBC interaction in RBC simulations. In this paper, we aim to couple SDPD and IBM together to carry out simulations of RBCs in complex flow problems. First, we develop the SDPD-IBM model in detail, including the SDPD model for the evolving fluid flow, the RBC model for calculating the RBC deformation force, the IBM for treating the fluid-RBC interaction, and the solid boundary treatment model as well. We then conduct the verification and validation of the combined SDPD-IBM method. Finally, we demonstrate the capability of the SDPD-IBM method by simulating the flows of RBCs in rectangular, cylindrical, curved, bifurcated, and constricted tubes, respectively.

  8. Hybrid smoothed dissipative particle dynamics and immersed boundary method for simulation of red blood cells in flows

    NASA Astrophysics Data System (ADS)

    Ye, Ting; Phan-Thien, Nhan; Lim, Chwee Teck; Peng, Lina; Shi, Huixin

    2017-06-01

    In biofluid flow systems, the flow problems of fluids of complex structure, such as the flow of red blood cells (RBCs) through complex capillary vessels, often need to be considered. The smoothed dissipative particle dynamics (SDPD), a particle-based method, is one of the easy and flexible methods to model such complex-structure fluids. It couples the best features of smoothed particle hydrodynamics (SPH) and dissipative particle dynamics (DPD), with parameters having specific physical meaning (coming from the SPH discretization of the Navier-Stokes equations), combined with thermal fluctuations in a mesoscale simulation, in a similar manner to the DPD. On the other hand, the immersed boundary method (IBM), a preferred method for handling fluid-structure interaction problems, has also been widely used to handle the fluid-RBC interaction in RBC simulations. In this paper, we aim to couple SDPD and IBM together to carry out simulations of RBCs in complex flow problems. First, we develop the SDPD-IBM model in detail, including the SDPD model for the evolving fluid flow, the RBC model for calculating the RBC deformation force, the IBM for treating the fluid-RBC interaction, and the solid boundary treatment model as well. We then conduct the verification and validation of the combined SDPD-IBM method. Finally, we demonstrate the capability of the SDPD-IBM method by simulating the flows of RBCs in rectangular, cylindrical, curved, bifurcated, and constricted tubes, respectively.

  9. Segmentation of 3D microPET images of the rat brain via the hybrid gaussian mixture method with kernel density estimation.

    PubMed

    Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing

    2012-01-01

    Segmentation of positron emission tomography (PET) images is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method needs a prior estimation of parameters such as the number of clusters and appropriate initialized values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial to registration of disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnosis and to estimating standardized uptake values (SUVs) of regions of interest (ROIs) in PET images. Simulation studies using spherical targets are therefore conducted to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. The PET images of a rat brain are used to compare, by volume rendering, the segmented shape and area of the cerebral cortex obtained by the K-Means method and by the proposed method. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than those obtained by the K-Means method.
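
    The flavour of such a hybrid can be sketched on a 1D toy intensity histogram: a Gaussian mixture provides an initial hard labelling, a kernel density estimate is fitted per class, and voxels are reassigned by the larger (prior-weighted) class-conditional density. This is an illustration of the general idea, not the authors' microPET pipeline; bandwidths and mixture settings are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity

# Toy 1D intensity "image": background plus a hot region, as in FDG uptake.
rng = np.random.default_rng(8)
intensities = np.concatenate([rng.normal(1.0, 0.2, 5000),    # background
                              rng.normal(3.0, 0.4, 500)])    # hot spot
x = intensities.reshape(-1, 1)

# GMM gives an initial hard labelling; a KDE per class refines the
# class-conditional densities; voxels are reassigned by the larger product.
gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
hard = gmm.predict(x)
kdes = [KernelDensity(bandwidth=0.1).fit(x[hard == c]) for c in range(2)]
log_prior = np.log(gmm.weights_)
log_post = np.column_stack([kdes[c].score_samples(x) + log_prior[c]
                            for c in range(2)])
labels = log_post.argmax(axis=1)
print("voxels per class:", np.bincount(labels))
```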

  10. A class of kernel based real-time elastography algorithms.

    PubMed

    Kibria, Md Golam; Hasan, Md Kamrul

    2015-08-01

    In this paper, a novel real-time kernel-based and gradient-based Phase Root Seeking (PRS) algorithm for ultrasound elastography is proposed. The signal-to-noise ratio of the strain image resulting from this method is improved by minimizing the cross-correlation discrepancy between the pre- and post-compression radio frequency signals with an adaptive temporal stretching method and by employing built-in smoothing through an exponentially weighted neighborhood kernel in the displacement calculation. Unlike conventional PRS algorithms, displacement due to tissue compression is estimated from the root of the weighted average of the zero-lag cross-correlation phases of the pair of corresponding analytic pre- and post-compression windows in the neighborhood kernel. In addition to the proposed one, the other time- and frequency-domain elastography algorithms (Ara et al., 2013; Hussain et al., 2012; Hasan et al., 2012) proposed by our group are also implemented in real time using Java, where the computations are executed serially or in parallel on multiple processors with efficient memory management. Simulation results using a finite element modeling simulation phantom show that the proposed method significantly improves the strain image quality in terms of elastographic signal-to-noise ratio (SNRe), elastographic contrast-to-noise ratio (CNRe) and mean structural similarity (MSSIM) for strains as high as 4% as compared to other reported techniques in the literature. Strain images obtained for the experimental phantom as well as in vivo breast data of malignant or benign masses also show the efficacy of our proposed method over the other reported techniques in the literature. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. RKF-PCA: robust kernel fuzzy PCA.

    PubMed

    Heo, Gyeongyong; Gader, Paul; Frigui, Hichem

    2009-01-01

    Principal component analysis (PCA) is a mathematical method that reduces the dimensionality of the data while retaining most of the variation in the data. Although PCA has been applied in many areas successfully, it suffers from sensitivity to noise and is limited to linear principal components. The noise sensitivity problem comes from the least-squares measure used in PCA, and the limitation to linear components originates from the fact that PCA uses an affine transform defined by eigenvectors of the covariance matrix and the mean of the data. In this paper, a robust kernel PCA method that extends kernel PCA and uses fuzzy memberships is introduced to tackle the two problems simultaneously. We first introduce an iterative method to find robust principal components, called Robust Fuzzy PCA (RF-PCA), which has a connection with robust statistics and entropy regularization. The RF-PCA method is then extended to a non-linear one, Robust Kernel Fuzzy PCA (RKF-PCA), using kernels. The modified kernel used in the RKF-PCA satisfies Mercer's condition, which means that the derivation of the K-PCA is also valid for the RKF-PCA. Formal analyses and experimental results suggest that the RKF-PCA is an efficient non-linear dimension reduction method and is more noise-robust than the original kernel PCA.
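
    For reference, the kernel PCA step that RKF-PCA robustifies is sketched below (eigendecomposition of the centred Gram matrix); the fuzzy-membership reweighting that provides robustness is omitted, and the RBF bandwidth and toy data are illustrative.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Standard kernel PCA: eigendecomposition of the centred RBF Gram matrix.
    (RKF-PCA additionally reweights samples with fuzzy memberships; that
    robustification is omitted here.)"""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                  # centre in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)
    order = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, order] / np.sqrt(eigvals[order])   # normalised dual coefficients
    return Kc @ alphas                              # projections of the training data

# Toy usage: a noisy ring around a central blob (a nonlinear structure).
rng = np.random.default_rng(9)
theta = rng.uniform(0, 2 * np.pi, 200)
ring = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.05 * rng.standard_normal((200, 2))
blob = 0.1 * rng.standard_normal((200, 2))
Z = kernel_pca(np.vstack([ring, blob]), n_components=2, gamma=5.0)
print(Z.shape)
```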

  12. Multiple collaborative kernel tracking.

    PubMed

    Fan, Zhimin; Yang, Ming; Wu, Ying

    2007-07-01

    Those motion parameters that cannot be recovered from image measurements are unobservable in the visual dynamic system. This paper studies this important issue of singularity in the context of kernel-based tracking and presents a novel approach that is based on a motion field representation which employs redundant but sparsely correlated local motion parameters instead of compact but uncorrelated global ones. This approach makes it easy to design fully observable kernel-based motion estimators. This paper shows that these high-dimensional motion fields can be estimated efficiently by the collaboration among a set of simpler local kernel-based motion estimators, which makes the new approach very practical.

  13. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power

  14. Fast image filters as an alternative to reconstruction kernels in computed tomography

    NASA Astrophysics Data System (ADS)

    Flohr, Thomas; Schaller, Stefan; Stadler, Alexander; Brandhuber, Wolfgang; Niethammer, Matthias U.; Klingenbeck-Regn, Klaus W.; Steffen, Peter

    2001-07-01

    In Computed Tomography, axial resolution is determined by the slice collimation and the spiral algorithm, while in-plane resolution is determined by the reconstruction kernel. Both choices select a tradeoff between image resolution (sharpness) and pixel noise. We investigated an alternative approach using default settings for image reconstruction which provide a narrow reconstructed slice-width and high in-plane resolution. If smoother images are desired, we filter the original (sharp) images instead of performing a new reconstruction with a smoother kernel. A suitable filter function in the frequency domain is the ratio of the smooth and the original (sharp) kernel. Efficient implementation was achieved by a Fourier transform of this ratio to the spatial domain. Separating the 2D spatial filtering into two subsequent 1D filtering stages in the x- and y-directions further reduces computational complexity. Using this approach, arbitrarily oriented multi-planar reformats (MPRs) can be treated in exactly the same way as axial images. Due to the efficient implementation, interactive modification of the filter settings becomes possible, completely replacing the variety of different reconstruction kernels. We implemented a further promising application of the method to thorax imaging, where different regions of the thorax (lungs and mediastinum) are jointly presented in the same images using different filter settings and different windowing.
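
    A 1D sketch of the idea follows: the smoothing filter is obtained as the ratio of the smooth and sharp kernels' frequency responses, transformed to the spatial domain, truncated, and applied as two separable 1D convolutions. The Gaussian frequency responses stand in for real CT reconstruction kernels, and the filter length is an illustrative assumption.

```python
import numpy as np

# Illustrative frequency responses (MTFs) of a "sharp" and a "smooth" kernel.
n = 64
freqs = np.fft.fftfreq(n)
mtf_sharp = np.exp(-(freqs / 0.45) ** 2)
mtf_smooth = np.exp(-(freqs / 0.25) ** 2)

# The filter is the ratio smooth/sharp, taken back to the spatial domain,
# centred, truncated to a short tap vector and normalised to preserve the mean.
h = np.fft.fftshift(np.real(np.fft.ifft(mtf_smooth / mtf_sharp)))
h = h[n // 2 - 8:n // 2 + 9]
h /= h.sum()

def separable_filter(image, taps):
    """Apply the 1D filter along rows, then along columns (separable 2D filtering)."""
    out = np.apply_along_axis(lambda r: np.convolve(r, taps, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, taps, mode="same"), 0, out)

rng = np.random.default_rng(10)
sharp_image = rng.normal(size=(64, 64))      # stand-in for a sharply reconstructed slice
smooth_image = separable_filter(sharp_image, h)
print("noise std before/after:", round(float(sharp_image.std()), 3),
      round(float(smooth_image.std()), 3))
```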

  15. A computational method for three-dimensional reconstruction of the microarchitecture of myometrial smooth muscle from histological sections

    PubMed Central

    Lutton, E. Josiah; Lammers, Wim J. E. P.; James, Sean

    2017-01-01

    Background: The fibrous structure of the myometrium has previously been characterised at high resolutions in small tissue samples (< 100 mm³) and at low resolutions (∼500 μm per voxel edge) in whole-organ reconstructions. However, no high-resolution visualisation of the myometrium at the organ level has previously been attained. Methods and results: We have developed a technique to reconstruct the whole myometrium from serial histological slides, at a resolution of approximately 50 μm per voxel edge. Reconstructions of samples taken from human and rat uteri are presented here, along with histological verification of the reconstructions and detailed investigation of the fibrous structure of these uteri, using a range of tools specifically developed for this analysis. These reconstruction techniques enable the high-resolution rendering of global structure previously observed at lower resolution. Moreover, structures observed previously in small portions of the myometrium can be observed in the context of the whole organ. The reconstructions are in direct correspondence with the original histological slides, which allows the inspection of the anatomical context of any features identified in the three-dimensional reconstructions. Conclusions and significance: The methods presented here have been used to generate a faithful representation of myometrial smooth muscle at a resolution of ∼50 μm per voxel edge. Characterisation of the smooth muscle structure of the myometrium by means of this technique revealed a detailed view of previously identified global structures in addition to a global view of the microarchitecture. A suite of visualisation tools allows researchers to interrogate the histological microarchitecture. These methods will be applicable to other smooth muscle tissues to analyse fibrous microarchitecture. PMID:28301486

  16. First order accuracy conservative smooth particle method for elastic-plastic flows with contacts and free surfaces

    NASA Astrophysics Data System (ADS)

    Cherepanov, Roman O.; Gerasimov, Alexander V.

    2016-11-01

    A fully conservative, first-order accurate smooth particle method is proposed for elastic-plastic flows. The paper also provides an algorithm for calculating free boundary conditions. A weak variational formulation is used to achieve energy and momentum conservation and to decrease the order of the spatial derivatives in the boundary condition calculation, and a Taylor series expansion is used to restore particle consistency and to obtain at least first-order accuracy of the spatial derivatives. The proposed approach allows us to avoid the use of "ghost" particles.

  17. Development of smoothed particle hydrodynamics method for analysis of high-speed two-phase flows in hydropower spillways

    NASA Astrophysics Data System (ADS)

    Nakayama, Akihiko; Leong, Lap Yan; Kong, Wei Song

    2017-04-01

    The basic formulation of smoothed particle hydrodynamics (SPH) has been re-examined for the analysis of gas-liquid two-phase flows with large density differences. The improved method has been verified in the calculation of dam-break flow and has been applied to an open-channel flow over a steeply sloped stepped spillway. In the calculation of the flow over the steps, not only the trapped air but also entrained air bubbles and water droplets are reproduced well. The detailed variation of the time-averaged mean quantities will have to be examined further, but the overall prediction with a relatively small number of particles is good.

  18. Community detection using Kernel Spectral Clustering with memory

    NASA Astrophysics Data System (ADS)

    Langone, Rocco; Suykens, Johan A. K.

    2013-02-01

    This work is related to the problem of community detection in dynamic scenarios, which arises, for instance, in the segmentation of moving objects, clustering of telephone traffic data, and time-series microarray data. A desirable feature of a clustering model which has to capture the evolution of communities over time is temporal smoothness between clusters in successive time-steps. In this way the model is able to track the long-term trend while smoothing out short-term variation due to noise. We use Kernel Spectral Clustering with Memory effect (MKSC), which allows prediction of cluster memberships of new nodes via out-of-sample extension and has a proper model selection scheme. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness as a valid prior knowledge. The latter, in fact, allows the model to cluster the current data well and to be consistent with the recent history. Here we propose a generalization of the MKSC model with an arbitrary memory, not only one time-step in the past. The experiments conducted on toy problems confirm our expectations: the more memory we add to the model, the smoother over time are the clustering results. We also compare with the Evolutionary Spectral Clustering (ESC) algorithm, which is a state-of-the-art method, and we obtain comparable or better results.
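
    The effect of memory can be illustrated with a much simpler scheme than MKSC: spectral clustering of the current snapshot on an affinity matrix blended with earlier snapshots using weights alpha**lag. This is a stand-in for the LS-SVM-based formulation, not the MKSC model itself; the kernel, the memory weight alpha and the toy drifting communities are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_embedding(W, k):
    """Top-k eigenvectors of the symmetrically normalised affinity matrix."""
    d = W.sum(axis=1)
    D = np.diag(1.0 / np.sqrt(d))
    vals, vecs = np.linalg.eigh(D @ W @ D)
    return vecs[:, -k:]

def cluster_with_memory(W_history, k=2, alpha=0.5):
    """Cluster the latest snapshot on a memory-weighted affinity:
    older affinity matrices (same nodes) enter with weight alpha**lag."""
    weights = np.array([alpha ** lag for lag in range(len(W_history))])
    weights /= weights.sum()
    W = sum(w * Wt for w, Wt in zip(weights, reversed(W_history)))
    emb = spectral_embedding(W, k)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(emb)

# Toy usage: two communities of the same 60 nodes drifting over three snapshots.
rng = np.random.default_rng(12)
base = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
W_history = []
for t in range(3):
    pts = base + 0.2 * t + 0.05 * rng.standard_normal(base.shape)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    W_history.append(np.exp(-d2))
labels = cluster_with_memory(W_history, k=2, alpha=0.5)
print(np.bincount(labels))   # expect a 30/30 split
```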

  19. Robotic Intelligence Kernel: Communications

    SciTech Connect

    Walton, Mike C.

    2009-09-16

    The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.

  20. Robotic Intelligence Kernel: Driver

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.

  1. [Methods to smooth mortality indicators: application to the analysis of inequalities in mortality in Spanish cities (the MEDEA Project)].

    PubMed

    Barceló, M Antònia; Saez, Marc; Cano-Serral, Gemma; Martínez-Beneito, Miguel Angel; Martínez, José Miguel; Borrell, Carme; Ocaña-Riola, Ricardo; Montoya, Imanol; Calvo, Montse; López-Abente, Gonzalo; Rodríguez-Sanz, Maica; Toro, Silvia; Alcalá, José Tomás; Saurina, Carme; Sánchez-Villegas, Pablo; Figueiras, Adolfo

    2008-01-01

    Although there is some experience in the study of mortality inequalities in Spanish cities, there are large urban centers that have not yet been investigated using the census tract as the unit of territorial analysis. The coordinated project was designed to fill this gap, with the participation of 10 groups of researchers in Andalusia, Aragon, Catalonia, Galicia, Madrid, Valencia, and the Basque Country. The MEDEA project has four distinguishing features: a) the census tract is used as the basic geographical area; b) statistical methods that include the geographical structure of the region under study are employed for risk estimation; c) data are drawn from three complementary data sources (information on air pollution, information on industrial pollution, and the records of mortality registrars); and d) a coordinated, large-scale analysis, favored by the implementation of coordinated research networks, is carried out. The main objective of the present study was to explain the methods for smoothing mortality indicators in the context of the MEDEA project. This study focuses on the methodology and the results of the Besag, York and Mollié (BYM) model in disease mapping. In the MEDEA project, standardized mortality ratios (SMRs), corresponding to 17 large groups of causes of death and 28 specific causes, were smoothed by means of the BYM model; in the present study, however, this methodology was applied to mortality due to cancer of the trachea, bronchi and lung in men and women in the city of Barcelona from 1996 to 2003. As a result of smoothing, a different geographical pattern of the SMR was observed in each gender. In men, SMRs higher than unity were found in highly deprived areas. In contrast, in women, this pattern was observed in more affluent areas.
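
    The BYM model itself involves spatially structured random effects and is usually fitted by MCMC; as a much simpler illustration of why raw SMRs for small areas are smoothed at all, the sketch below applies non-spatial Poisson-Gamma empirical-Bayes shrinkage (a stand-in, not the MEDEA methodology). The method-of-moments prior fit and the toy counts are assumptions.

```python
import numpy as np

def poisson_gamma_smooth(observed, expected):
    """Empirical-Bayes smoothing of SMRs with a Gamma(a, b) prior on the
    relative risk, fitted by the method of moments (a non-spatial stand-in
    for the spatial BYM smoothing used in MEDEA)."""
    smr = observed / expected
    m = np.average(smr, weights=expected)               # prior mean
    v = np.average((smr - m) ** 2, weights=expected)    # total variability
    v = max(v - m / expected.mean(), 1e-6)              # remove the Poisson noise part
    a, b = m ** 2 / v, m / v
    return (observed + a) / (expected + b)              # posterior mean relative risks

# Toy usage: small-area counts; extreme raw SMRs are pulled toward the overall level.
rng = np.random.default_rng(13)
expected = rng.uniform(0.5, 20, 200)
true_risk = rng.gamma(shape=10, scale=0.1, size=200)
observed = rng.poisson(expected * true_risk)
smoothed = poisson_gamma_smooth(observed, expected)
raw = observed / expected
print("raw SMR range     :", round(float(raw.min()), 2), "-", round(float(raw.max()), 2))
print("smoothed SMR range:", round(float(smoothed.min()), 2), "-", round(float(smoothed.max()), 2))
```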

  2. A computational method for three-dimensional reconstruction of the microarchitecture of myometrial smooth muscle from histological sections.

    PubMed

    Lutton, E Josiah; Lammers, Wim J E P; James, Sean; van den Berg, Hugo A; Blanks, Andrew M

    2017-01-01

    The fibrous structure of the myometrium has previously been characterised at high resolutions in small tissue samples (<100 mm³) and at low resolutions (∼500 μm per voxel edge) in whole-organ reconstructions. However, no high-resolution visualisation of the myometrium at the organ level has previously been attained. We have developed a technique to reconstruct the whole myometrium from serial histological slides, at a resolution of approximately 50 μm per voxel edge. Reconstructions of samples taken from human and rat uteri are presented here, along with histological verification of the reconstructions and detailed investigation of the fibrous structure of these uteri, using a range of tools specifically developed for this analysis. These reconstruction techniques enable the high-resolution rendering of global structure previously observed at lower resolution. Moreover, structures observed previously in small portions of the myometrium can be observed in the context of the whole organ. The reconstructions are in direct correspondence with the original histological slides, which allows the inspection of the anatomical context of any features identified in the three-dimensional reconstructions. The methods presented here have been used to generate a faithful representation of myometrial smooth muscle at a resolution of ∼50 μm per voxel edge. Characterisation of the smooth muscle structure of the myometrium by means of this technique revealed a detailed view of previously identified global structures in addition to a global view of the microarchitecture. A suite of visualisation tools allows researchers to interrogate the histological microarchitecture. These methods will be applicable to other smooth muscle tissues to analyse fibrous microarchitecture.

  3. A Robust Method to Generate Mechanically Anisotropic Vascular Smooth Muscle Cell Sheets for Vascular Tissue Engineering.

    PubMed

    Backman, Daniel E; LeSavage, Bauer L; Shah, Shivem B; Wong, Joyce Y

    2017-06-01

    In arterial tissue engineering, mimicking native structure and mechanical properties is essential because compliance mismatch can lead to graft failure and further disease. With bottom-up tissue engineering approaches, designing tissue components with proper microscale mechanical properties is crucial to achieve the necessary macroscale properties in the final implant. This study develops a thermoresponsive cell culture platform for growing aligned vascular smooth muscle cell (VSMC) sheets by photografting N-isopropylacrylamide (NIPAAm) onto micropatterned poly(dimethylsiloxane) (PDMS). The grafting process is experimentally and computationally optimized to produce PNIPAAm-PDMS substrates optimal for VSMC attachment. To allow long-term VSMC sheet culture and increase the rate of VSMC sheet formation, PNIPAAm-PDMS surfaces were further modified with 3-aminopropyltriethoxysilane yielding a robust, thermoresponsive cell culture platform for culturing VSMC sheets. VSMC cell sheets cultured on patterned thermoresponsive substrates exhibit cellular and collagen alignment in the direction of the micropattern. Mechanical characterization of patterned, single-layer VSMC sheets reveals increased stiffness in the aligned direction compared to the perpendicular direction whereas nonpatterned cell sheets exhibit no directional dependence. Structural and mechanical anisotropy of aligned, single-layer VSMC sheets makes this platform an attractive microstructural building block for engineering a vascular graft to match the in vivo mechanical properties of native arterial tissue. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Discriminant Kernel Assignment for Image Coding.

    PubMed

    Deng, Yue; Zhao, Yanyu; Ren, Zhiquan; Kong, Youyong; Bao, Feng; Dai, Qionghai

    2017-06-01

    This paper proposes discriminant kernel assignment (DKA) in the bag-of-features framework for image representation. DKA slightly modifies existing kernel assignment to learn width-variant Gaussian kernel functions to perform discriminant local feature assignment. When directly applying gradient-descent method to solve DKA, the optimization may contain multiple time-consuming reassignment implementations in iterations. Accordingly, we introduce a more practical way to locally linearize the DKA objective and the difficult task is cast as a sequence of easier ones. Since DKA only focuses on the feature assignment part, it seamlessly collaborates with other discriminative learning approaches, e.g., discriminant dictionary learning or multiple kernel learning, for even better performances. Experimental evaluations on multiple benchmark datasets verify that DKA outperforms other image assignment approaches and exhibits significant efficiency in feature coding.
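
    A hedged sketch of the kind of width-variant Gaussian soft assignment that DKA builds on; the discriminant learning of the per-codeword widths described above is omitted, and the codewords, descriptors and sigma values are illustrative assumptions.

    ```python
    import numpy as np

    def soft_kernel_assignment(descriptors, codewords, sigmas):
        """Soft-assign each descriptor to codewords using per-codeword Gaussian widths.

        descriptors : (n, d) local features
        codewords   : (k, d) dictionary atoms
        sigmas      : (k,) per-codeword Gaussian bandwidths
        returns     : (n, k) assignment weights, each row summing to 1
        """
        d2 = ((descriptors[:, None, :] - codewords[None, :, :]) ** 2).sum(-1)
        logits = -d2 / (2.0 * sigmas[None, :] ** 2)
        logits -= logits.max(axis=1, keepdims=True)          # numerical stability
        w = np.exp(logits)
        return w / w.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(1)
    A = soft_kernel_assignment(rng.normal(size=(5, 8)),      # 5 descriptors
                               rng.normal(size=(3, 8)),      # 3 codewords
                               np.array([0.5, 1.0, 2.0]))
    print(A.sum(axis=1))                                     # each row sums to 1
    ```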

  5. Quantum kernel applications in medicinal chemistry.

    PubMed

    Huang, Lulu; Massa, Lou

    2012-07-01

    Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.
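
    For context, the double-kernel form of the KEM energy is often written as below; this is stated from memory of the kernel energy method literature, so the exact bookkeeping should be checked against the original papers.

    ```latex
    E_{\mathrm{total}} \;\approx\; \sum_{a=1}^{n-1}\sum_{b=a+1}^{n} E_{ab} \;-\; (n-2)\sum_{a=1}^{n} E_{a}
    ```

    Here \(E_a\) is the energy of kernel \(a\) alone and \(E_{ab}\) that of the joined double kernel, so pairwise interactions are counted once while single-kernel energies are not over-counted.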

  6. A Novel Framework for Learning Geometry-Aware Kernels.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Xu, Chen; Chen, Bo

    2016-05-01

    Real-world data usually have a nonlinear geometric structure and are often assumed to lie on or close to a low-dimensional manifold in a high-dimensional space. Detecting this nonlinear geometric structure of the data is important for learning algorithms. Recently, there has been a surge of interest in utilizing kernels to exploit the manifold structure of the data. Such kernels are called geometry-aware kernels and are widely used in machine learning algorithms. The performance of these algorithms critically relies on the choice of the geometry-aware kernels. Intuitively, a good geometry-aware kernel should utilize additional information other than the geometric information. In many applications, out-of-sample data must be evaluated directly. However, most geometry-aware kernel methods are restricted to the data given beforehand, with no straightforward extension to out-of-sample data. In this paper, we propose a framework for more general geometry-aware kernel learning. The proposed framework integrates multiple sources of information and enables us to develop flexible and effective kernel matrices. We then show theoretically how the learned kernel matrices are extended to the corresponding kernel functions, with which out-of-sample data can be evaluated directly. Under our framework, a novel family of geometry-aware kernels is developed. In particular, some existing geometry-aware kernels can be viewed as instances of our framework. The performance of the kernels is evaluated on dimensionality reduction, classification, and clustering tasks. The empirical results show that our kernels significantly improve the performance.

  7. Improving convergence in smoothed particle hydrodynamics simulations without pairing instability

    NASA Astrophysics Data System (ADS)

    Dehnen, Walter; Aly, Hossam

    2012-09-01

    The numerical convergence of smoothed particle hydrodynamics (SPH) can be severely restricted by random force errors induced by particle disorder, especially in shear flows, which are ubiquitous in astrophysics. The increase in the number NH of neighbours when switching to more extended smoothing kernels at fixed resolution (using an appropriate definition for the SPH resolution scale) is insufficient to combat these errors. Consequently, trading resolution for better convergence is necessary, but for traditional smoothing kernels this option is limited by the pairing (or clumping) instability. Therefore, we investigate the suitability of the Wendland functions as smoothing kernels and compare them with the traditional B-splines. Linear stability analysis in three dimensions and test simulations demonstrate that the Wendland kernels avoid the pairing instability for all NH, despite having vanishing derivative at the origin (disproving traditional ideas about the origin of this instability; instead, we uncover a relation with the kernel Fourier transform and give an explanation in terms of the SPH density estimator). The Wendland kernels are computationally more convenient than the higher order B-splines, allowing large NH and hence better numerical convergence (note that computational costs rise sublinearly with NH). Our analysis also shows that at low NH the quartic spline kernel with NH ≈ 60 obtains much better convergence than the standard cubic spline.
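
    For reference, a minimal sketch of the 3-D Wendland C2 kernel in the convention where the compact support radius is H (q = r/H ≤ 1), alongside the standard cubic B-spline; the normalization constants follow the usual 3-D forms but should be checked against the paper's own conventions before reuse.

    ```python
    import numpy as np

    def wendland_c2_3d(r, H):
        """Wendland C2 kernel in 3-D, compact support H (q = r/H <= 1)."""
        q = np.asarray(r, dtype=float) / H
        w = (21.0 / (2.0 * np.pi * H**3)) * (1.0 - q)**4 * (1.0 + 4.0 * q)
        return np.where(q < 1.0, w, 0.0)

    def cubic_spline_3d(r, h):
        """Standard M4 cubic B-spline kernel in 3-D, compact support 2h (q = r/h)."""
        q = np.asarray(r, dtype=float) / h
        w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                     np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
        return w / (np.pi * h**3)

    # sanity check: both kernels integrate to ~1 over their support
    r = np.linspace(0.0, 2.0, 200001)
    dr = r[1] - r[0]
    for W, label in [(wendland_c2_3d(r, 1.0), "Wendland C2"),
                     (cubic_spline_3d(r, 1.0), "cubic spline")]:
        print(label, np.sum(4.0 * np.pi * r**2 * W) * dr)
    ```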

  8. A comparison of methods for smoothing and gap filling time series of remote sensing observations: application to MODIS LAI products

    NASA Astrophysics Data System (ADS)

    Kandasamy, S.; Baret, F.; Verger, A.; Neveux, P.; Weiss, M.

    2012-12-01

    Moderate resolution satellite sensors including MODIS already provide more than 10 yr of observations well suited to describe and understand the dynamics of the Earth's surface. However, these time series are incomplete because of cloud cover and associated with significant uncertainties. This study compares eight methods designed to improve the continuity by filling gaps and the consistency by smoothing the time course. It includes methods exploiting the time series as a whole (Iterative caterpillar singular spectrum analysis (ICSSA), empirical mode decomposition (EMD), low pass filtering (LPF) and Whittaker smoother (Whit)) as well as methods working on limited temporal windows of a few weeks to a few months (Adaptive Savitzky-Golay filter (SGF), temporal smoothing and gap filling (TSGF) and asymmetric Gaussian function (AGF)) in addition to the simple climatological LAI yearly profile (Clim). Methods were applied to MODIS leaf area index product for the period 2000-2008 over 25 sites showing a large range of seasonal patterns. Performances were discussed with emphasis on the balance achieved by each method between accuracy and roughness depending on the fraction of missing observations and the length of the gaps. Results demonstrate that EMD, LPF and AGF methods failed when the fraction of gaps was significant (%Gap > 20%), while ICSSA, Whit and SGF always provided estimates for dates with missing data. TSGF (respectively Clim) was able to fill more than 50% of the gaps for sites with more than 60% (resp. 80%) fraction of gaps. However, investigation of the accuracy of the reconstructed values shows that it degrades rapidly for sites with more than 20% missing data, particularly for ICSSA, Whit and SGF. In these conditions, TSGF provides the best performances, significantly better than the simple Clim for gaps shorter than about 100 days. The roughness of the reconstructed temporal profiles shows large differences between the several methods, with a decrease

  9. A comparison of methods for smoothing and gap filling time series of remote sensing observations - application to MODIS LAI products

    NASA Astrophysics Data System (ADS)

    Kandasamy, S.; Baret, F.; Verger, A.; Neveux, P.; Weiss, M.

    2013-06-01

    Moderate resolution satellite sensors including MODIS (Moderate Resolution Imaging Spectroradiometer) already provide more than 10 yr of observations well suited to describe and understand the dynamics of the Earth's surface. However, these time series are associated with significant uncertainties and are incomplete because of cloud cover. This study compares eight methods designed to improve the continuity by filling gaps and the consistency by smoothing the time course. It includes methods exploiting the time series as a whole (iterative caterpillar singular spectrum analysis (ICSSA), empirical mode decomposition (EMD), low pass filtering (LPF) and Whittaker smoother (Whit)) as well as methods working on limited temporal windows of a few weeks to a few months (adaptive Savitzky-Golay filter (SGF), temporal smoothing and gap filling (TSGF), and asymmetric Gaussian function (AGF)), in addition to the simple climatological LAI yearly profile (Clim). Methods were applied to the MODIS leaf area index product for the period 2000-2008 over 25 sites showing a large range of seasonal patterns. Performances were discussed with emphasis on the balance achieved by each method between accuracy and roughness depending on the fraction of missing observations and the length of the gaps. Results demonstrate that the EMD, LPF and AGF methods failed when the fraction of gaps was significant (more than 20%), while ICSSA, Whit and SGF always provided estimates for dates with missing data. TSGF (respectively Clim) was able to fill more than 50% of the gaps for sites with more than 60% (respectively 80%) fraction of gaps. However, investigation of the accuracy of the reconstructed values shows that it degrades rapidly for sites with more than 20% missing data, particularly for ICSSA, Whit and SGF. In these conditions, TSGF provides the best performance, significantly better than the simple Clim for gaps shorter than about 100 days. The roughness of the reconstructed temporal profiles shows large
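
    The adaptive SGF/TSGF implementations compared above are not reproduced here; the sketch below only illustrates the generic temporal-window recipe of linearly interpolating across gaps flagged as NaN and then applying a fixed-window Savitzky-Golay filter. The window length, polynomial order and toy LAI-like profile are arbitrary assumptions.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    def fill_and_smooth(t, y, window=9, polyorder=2):
        """Fill NaN gaps by linear interpolation, then smooth with a Savitzky-Golay filter."""
        t, y = np.asarray(t, float), np.asarray(y, float)
        good = ~np.isnan(y)
        y_filled = np.interp(t, t[good], y[good])            # simple gap filling
        return savgol_filter(y_filled, window_length=window, polyorder=polyorder)

    # toy LAI-like seasonal profile sampled every 8 days, with a 48-day gap
    t = np.arange(0, 365, 8)
    y = 1.0 + 2.5 * np.exp(-((t - 200.0) / 60.0) ** 2)
    y_noisy = y + np.random.default_rng(2).normal(0, 0.15, y.size)
    y_noisy[10:16] = np.nan
    print(np.round(fill_and_smooth(t, y_noisy)[8:18], 2))
    ```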

  10. Smooth particle hydrodynamics method for modeling cavitation-induced fracture of a fluid under shock-wave loading

    NASA Astrophysics Data System (ADS)

    Davydov, M. N.; Kedrinskii, V. K.

    2013-11-01

    It is demonstrated that the method of smoothed particle hydrodynamics can be used to study the flow structure in a cavitating medium with a high concentration of the gas phase and to describe the process of inversion of the two-phase state of this medium: the transition from a cavitating fluid to a system consisting of a gas and particles. A numerical analysis of the dynamics of the state of a hemispherical droplet under shock-wave loading shows that focusing of the shock wave reflected from the free surface of the droplet leads to the formation of a dense, but rapidly expanding, cavitation cluster at the droplet center. By the time t = 500 µs, the bubbles at the cluster center not only coalesce and form a foam-type structure, but also transform to a gas-particle system, thus forming an almost free, rapidly expanding zone. The mechanism of this process, previously defined as an internal "cavitation explosion" of the droplet, is validated by means of mathematical modeling of the problem with the smoothed particle hydrodynamics method. The deformation of the cavitating droplet is finalized by its decomposition into individual fragments and particles.

  11. Automatic estimation of sleep level for nap based on conditional probability of sleep stages and an exponential smoothing method.

    PubMed

    Wang, Bei; Wang, Xingyu; Zhang, Tao; Nakamura, Masatoshi

    2013-01-01

    An automatic sleep level estimation method was developed for the monitoring and regulation of daytime nap sleep. The recorded nap data are separated into continuous 5-second segments. Features are extracted from EEGs, EOGs and EMG. A sleep level parameter is defined and estimated based on the conditional probability of sleep stages. An exponential smoothing method is applied to the estimated sleep level. A total of 12 healthy subjects, with an average age of 22 years, participated in the experimental work. Compared with sleep stage determination, the presented sleep level estimation method showed better performance for nap sleep interpretation. Real-time monitoring and regulation of naps is realizable based on the developed technique.
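
    A minimal sketch of the exponential smoothing step applied to a noisy sleep-level sequence; the record does not give the smoothing constant, so alpha and the toy values below are assumptions.

    ```python
    def exponential_smoothing(levels, alpha=0.3):
        """First-order exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}."""
        smoothed = [levels[0]]
        for x in levels[1:]:
            smoothed.append(alpha * x + (1.0 - alpha) * smoothed[-1])
        return smoothed

    # raw sleep level estimated every 5 s (illustrative values in [0, 1])
    raw = [0.1, 0.15, 0.6, 0.2, 0.25, 0.3, 0.8, 0.35, 0.4, 0.45]
    print([round(s, 3) for s in exponential_smoothing(raw)])
    ```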

  12. A novel smooth impact drive mechanism actuation method with dual-slider for a compact zoom lens system.

    PubMed

    Lee, Jonghyun; Kwon, Won Sik; Kim, Kyung-Soo; Kim, Soohyun

    2011-08-01

    In this paper, a novel actuation method for a smooth impact drive mechanism that positions a dual slider with a single piezo-element is introduced and applied to a compact zoom lens system. A mode chart that determines the state of the sliders during the expansion or shrinkage periods of the piezo-element is presented, and a design guide for the driving input profile is proposed. The motion of the dual slider holding the lenses is analyzed for each mode, and proper modes are selected for positioning the two lenses to realize the zoom functions. Because the proposed actuation method allows independent movement of two lenses by a single piezo-element, the zoom lens system can be designed to be compact. For a feasibility test, a lens system composed of an afocal zoom system and a focusing lens was developed, and the passive auto-focus method was implemented.

  13. New variable selection method using interval segmentation purity with application to blockwise kernel transform support vector machine classification of high-dimensional microarray data.

    PubMed

    Tang, Li-Juan; Du, Wen; Fu, Hai-Yan; Jiang, Jian-Hui; Wu, Hai-Long; Shen, Guo-Li; Yu, Ru-Qin

    2009-08-01

    One problem with discriminant analysis of microarray data is the representation of each sample by a large number of genes that are possibly irrelevant, insignificant, or redundant. Methods of variable selection are, therefore, of great significance in microarray data analysis. A new method for key gene selection has been proposed on the basis of interval segmentation purity, which is defined as the purity of samples belonging to a certain class in intervals segmented by a mode search algorithm. This method identifies the key variables most discriminative for each class, which offers the possibility of unraveling the biological implications of the selected genes. A salient advantage of the new strategy over existing methods is the capability of selecting genes that, though they may exhibit a multimodal distribution, are the most discriminative for the classes of interest, considering that the expression levels of some genes may reflect systematic differences in within-class samples derived from different pathogenic mechanisms. On the basis of the key genes selected for individual classes, a support vector machine with block-wise kernel transform is developed for the classification of different classes. The combination of the proposed gene mining approach with the support vector machine is demonstrated in cancer classification using two public data sets. The results reveal that significant genes have been identified for each class, and the classification model shows satisfactory performance in training and prediction for both data sets.

  14. Critical Parameters of the In Vitro Method of Vascular Smooth Muscle Cell Calcification

    PubMed Central

    Hortells, Luis; Sosa, Cecilia; Millán, Ángel; Sorribas, Víctor

    2015-01-01

    Background Vascular calcification (VC) is primarily studied using cultures of vascular smooth muscle cells. However, the use of very different protocols and extreme conditions can provide findings unrelated to VC. In this work we aimed to determine the critical experimental parameters that affect calcification in vitro and to determine their relevance to calcification in vivo. Experimental Procedures and Results Rat VSMC calcification in vitro was studied using different concentrations of fetal calf serum, calcium, and phosphate, in different types of culture media, and using various volumes and rates of change. The bicarbonate content of the media critically affected pH and resulted in supersaturation, depending on the concentrations of Ca²⁺ and Pi. Such supersaturation is a consequence of the high dependence of bicarbonate buffers on CO₂ vapor pressure and bicarbonate concentration at pHs above 7.40. Such buffer systems cause considerable pH variations as a result of minor experimental changes. The variations are more critical for DMEM and are negligible when the bicarbonate concentration is reduced to one-quarter. Particle nucleation and growth were observed by dynamic light scattering and electron microscopy. Using 2 mM Pi, particles of ~200 nm were observed at 24 hours in MEM and at 1 hour in DMEM. These nuclei grew over time, were deposited in the cells, and caused osteogenic gene expression or cell death, depending on the precipitation rate. TEM observations showed that the initial precipitate was amorphous calcium phosphate (ACP), which converts into hydroxyapatite over time. In blood, the scenario is different, because supersaturation is avoided by a tightly controlled pH of 7.4, which prevents the formation of PO₄³⁻-containing ACP. Conclusions The precipitation of ACP in vitro is unrelated to VC in vivo. The model needs to be refined through controlled pH and the use of additional procalcifying agents other than Pi in order to reproduce calcium phosphate deposition in vivo.

  15. A smoothly decoupled particle interface: New methods for coupling explicit and implicit solvent

    PubMed Central

    Wagoner, Jason A.; Pande, Vijay S.

    2011-01-01

    A common theme of studies using molecular simulation is a necessary compromise between computational efficiency and resolution of the forcefield that is used. Significant efforts have been directed at combining multiple levels of granularity within a single simulation in order to maintain the efficiency of coarse-grained models, while using finer resolution in regions where such details are expected to play an important role. A specific example of this paradigm is the development of hybrid solvent models, which explicitly sample the solvent degrees of freedom within a specified domain while utilizing a continuum description elsewhere. Unfortunately, these models are complicated by the presence of structural artifacts at or near the explicit/implicit boundary. The presence of these artifacts significantly complicates the use of such models, both undermining the accuracy obtained and necessitating the parameterization of effective potentials to counteract the artificial interactions. In this work, we introduce a novel hybrid solvent model that employs a smoothly decoupled particle interface (SDPI), a switching region that gradually transitions from fully interacting particles to a continuum solvent. The resulting SDPI model allows for the use of an implicit solvent model based on a simple theory that needs to only reproduce the behavior of bulk solvent rather than the more complex features of local interactions. In this study, the SDPI model is tested on spherical hybrid domains using a coarse-grained representation of water that includes only Lennard-Jones interactions. The results demonstrate that this model is capable of reproducing solvent configurations absent of boundary artifacts, as if they were taken from full explicit simulations. PMID:21663340

  16. 3-D sensitivity kernels of the Rayleigh wave ellipticity

    NASA Astrophysics Data System (ADS)

    Maupin, Valérie

    2017-10-01

    The ellipticity of the Rayleigh wave at the surface depends on the seismic structure beneath and in the vicinity of the seismological station where it is measured. We derive here the expression and compute the 3-D kernels that describe this dependence with respect to S-wave velocity, P-wave velocity and density. Near-field terms as well as coupling to Love waves are included in the expressions. We show that the ellipticity kernels are the difference between the amplitude kernels of the radial and vertical components of motion. They show maximum values close to the station, but with a complex pattern, even when smoothing in a finite-frequency range is used to remove the oscillatory pattern present in mono-frequency kernels. In order to follow the usual data processing flow, we also compute and analyse the kernels of the ellipticity averaged over incoming wave backazimuth. The kernel with respect to P-wave velocity has the simplest lateral variation and is in good agreement with commonly used 1-D kernels. The kernels with respect to S-wave velocity and density are more complex, and we have not been able to find a good correlation between the 3-D and 1-D kernels. Although it is clear that the ellipticity is mostly sensitive to the structure within half a wavelength of the station, the complexity of the kernels within this zone prevents simple approximations, such as a depth dependence multiplied by a lateral variation, from being useful in the inversion of the ellipticity.

  17. A Parallel Implementation of a Smoothed Particle Hydrodynamics Method on Graphics Hardware Using the Compute Unified Device Architecture

    SciTech Connect

    Wong Unhong; Wong Honcheng; Tang Zesheng

    2010-05-21

    The smoothed particle hydrodynamics (SPH) method, which is a class of meshfree particle methods (MPMs), has a wide range of applications from micro-scale to macro-scale as well as from discrete systems to continuum systems. Graphics hardware, originally designed for computer graphics, now provides unprecedented computational power for scientific computation. Particle systems require a huge amount of computation in physical simulation. In this paper, an efficient parallel implementation of an SPH method on graphics hardware using the Compute Unified Device Architecture is developed for fluid simulation. Compared to the corresponding CPU implementation, our experimental results show that the new approach achieves significant speedups in fluid simulation by handling a huge amount of computation in parallel on graphics hardware.

  18. On the constrained minimization of smooth Kurdyka-Łojasiewicz functions with the scaled gradient projection method

    NASA Astrophysics Data System (ADS)

    Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone

    2016-10-01

    The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
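
    For orientation, the SGP iteration takes the general form below; this is written from the standard SGP literature, so the notation may differ in detail from the paper.

    ```latex
    x^{(k+1)} \;=\; x^{(k)} + \lambda_k\,\Bigl(\Pi_{\Omega}\bigl(x^{(k)} - \alpha_k D_k \nabla f(x^{(k)})\bigr) - x^{(k)}\Bigr)
    ```

    Here \(\Pi_{\Omega}\) is the projection onto the feasible set \(\Omega\) (usually taken in the norm induced by \(D_k^{-1}\)), \(D_k\) is the scaling matrix, \(\alpha_k\) the steplength and \(\lambda_k \in (0,1]\) a line-search parameter.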

  19. Quantum State Smoothing

    NASA Astrophysics Data System (ADS)

    Guevara, Ivonne; Wiseman, Howard

    2015-10-01

    Smoothing is an estimation method whereby a classical state (probability distribution for classical variables) at a given time is conditioned on all-time (both earlier and later) observations. Here we define a smoothed quantum state for a partially monitored open quantum system, conditioned on an all-time monitoring-derived record. We calculate the smoothed distribution for a hypothetical unobserved record which, when added to the real record, would complete the monitoring, yielding a pure-state "quantum trajectory." Averaging the pure state over this smoothed distribution yields the (mixed) smoothed quantum state. We study how the choice of actual unraveling affects the purity increase over that of the conventional (filtered) state conditioned only on the past record.

  20. Quantum State Smoothing.

    PubMed

    Guevara, Ivonne; Wiseman, Howard

    2015-10-30

    Smoothing is an estimation method whereby a classical state (probability distribution for classical variables) at a given time is conditioned on all-time (both earlier and later) observations. Here we define a smoothed quantum state for a partially monitored open quantum system, conditioned on an all-time monitoring-derived record. We calculate the smoothed distribution for a hypothetical unobserved record which, when added to the real record, would complete the monitoring, yielding a pure-state "quantum trajectory." Averaging the pure state over this smoothed distribution yields the (mixed) smoothed quantum state. We study how the choice of actual unraveling affects the purity increase over that of the conventional (filtered) state conditioned only on the past record.

  1. Reversed-phase vortex-assisted liquid-liquid microextraction: a new sample preparation method for the determination of amygdalin in oil and kernel samples.

    PubMed

    Hosseini, Mohammad; Heydari, Rouhollah; Alimoradi, Mohammad

    2015-02-01

    A novel, simple, and rapid reversed-phase vortex-assisted liquid-liquid microextraction coupled with high-performance liquid chromatography has been introduced for the extraction, clean-up, and preconcentration of amygdalin in oil and kernel samples. In this technique, deionized water was used as the extracting solvent. Unlike reversed-phase dispersive liquid-liquid microextraction, the dispersive solvent was eliminated in the proposed method. Various parameters affecting the extraction efficiency, such as the extracting solvent volume and its pH and the vortex and centrifuging times, were evaluated and optimized. The calibration curve shows good linearity (r² = 0.9955) and precision (RSD < 5.2%) in the range of 0.07-20 μg/mL. The limit of detection and limit of quantitation were 0.02 and 0.07 μg/mL, respectively. The recoveries were in the range of 96.0-102.0%, with relative standard deviation values ranging from 4.0 to 5.1%. Unlike conventional extraction methods for plant extracts, no evaporation and re-solubilization steps were needed in the proposed technique.

  2. HS-SPME-GC-MS/MS Method for the Rapid and Sensitive Quantitation of 2-Acetyl-1-pyrroline in Single Rice Kernels.

    PubMed

    Hopfer, Helene; Jodari, Farman; Negre-Zakharov, Florence; Wylie, Phillip L; Ebeler, Susan E

    2016-05-25

    Demand for aromatic rice varieties (e.g., Basmati) is increasing in the US. Aromatic varieties typically have elevated levels of the aroma compound 2-acetyl-1-pyrroline (2AP). Due to its very low aroma threshold, analysis of 2AP provides a useful screening tool for rice breeders. Methods for 2AP analysis in rice should quantitate 2AP at or below sensory threshold level, avoid artifactual 2AP generation, and be able to analyze single rice kernels in cases where only small sample quantities are available (e.g., breeding trials). We combined headspace solid phase microextraction with gas chromatography tandem mass spectrometry (HS-SPME-GC-MS/MS) for analysis of 2AP, using an extraction temperature of 40 °C and a stable isotopologue as internal standard. 2AP calibrations were linear between the concentrations of 53 and 5380 pg/g, with detection limits below the sensory threshold of 2AP. Forty-eight aromatic and nonaromatic, milled rice samples from three harvest years were screened with the method for their 2AP content, and overall reproducibility, observed for all samples, ranged from 5% for experimental aromatic lines to 33% for nonaromatic lines.

  3. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    An understanding of UNICOS kernel internals is valuable information. However, having the knowledge is only half the value; the other half comes from knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that utilize kernel information. In addition, algorithms, logic, and code for accessing kernel information are discussed. Code segments are provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information are also discussed.

  4. Regularization techniques for PSF-matching kernels - I. Choice of kernel basis

    NASA Astrophysics Data System (ADS)

    Becker, A. C.; Homrighausen, D.; Connolly, A. J.; Genovese, C. R.; Owen, R.; Bickerton, S. J.; Lupton, R. H.

    2012-09-01

    We review current methods for building point spread function (PSF)-matching kernels for the purposes of image subtraction or co-addition. Such methods use a linear decomposition of the kernel on a series of basis functions. The correct choice of these basis functions is fundamental to the efficiency and effectiveness of the matching - the chosen bases should represent the underlying signal using a reasonably small number of shapes, and/or have a minimum number of user-adjustable tuning parameters. We examine methods whose bases comprise multiple Gauss-Hermite polynomials, as well as a form-free basis composed of delta-functions. Kernels derived from delta-functions are unsurprisingly shown to be more expressive; they are able to take more general shapes and perform better in situations where sum-of-Gaussian methods are known to fail. However, due to its many degrees of freedom (the maximum number allowed by the kernel size) this basis tends to overfit the problem and yields noisy kernels having large variance. We introduce a new technique to regularize these delta-function kernel solutions, which bridges the gap between the generality of delta-function kernels and the compactness of sum-of-Gaussian kernels. Through this regularization we are able to create general kernel solutions that represent the intrinsic shape of the PSF-matching kernel with only one degree of freedom, the strength of the regularization λ. The role of λ is effectively to exchange variance in the resulting difference image with variance in the kernel itself. We examine considerations in choosing the value of λ, including statistical risk estimators and the ability of the solution to predict solutions for adjacent areas. Both of these suggest moderate strengths of λ between 0.1 and 1.0, although this optimization is likely data set dependent. This model allows for flexible representations of the convolution kernel that have significant predictive ability and will prove useful in implementing

  5. A simple, rapid, and efficient method for isolating detrusor for the culture of bladder smooth muscle cells.

    PubMed

    Ding, Zhi; Xie, Hua; Huang, Yichen; Lv, Yiqing; Yang, Ganggang; Chen, Yan; Sun, Huizhen; Zhou, Junmei; Chen, Fang

    2016-01-01

    To establish a simple and rapid method to remove serosa and mucosa from detrusor for the culture of bladder smooth muscle cells (SMCs). Fourteen New Zealand rabbits were randomly allocated to two groups. In the first group, pure bladder detrusor was directly obtained from the bladder wall using a novel method characterized by subserous injection of normal saline. In the second group, a full-thickness bladder wall sample was cut down, and then mucosa and serosa were trimmed off the detrusor ex vivo. Twelve detrusor samples from the two groups were manually minced and enzymatically digested, respectively, to form dissociated cells whose viability was detected by trypan blue exclusion. The proliferative ability of primary culture cells was detected by CCK-8 kit, and the purity of second-passage SMCs was detected by flow cytometric analyses. Another two detrusor samples from the two groups were used for histological examination. Subserous injection of normal saline combined with blunt dissection can remove mucosa and serosa from the detrusor layer easily and quickly. Statistical analysis revealed that the first group possessed higher cell viability, shorter primary culture cell doubling time, and higher purity of SMCs than the second group (P < 0.05). Histological examination confirmed that no serosa and mucosa existed on the surface of detrusor obtained by the novel method, while serosa or mucosa residue could be found on the surface of detrusor obtained by the traditional method. Pure detrusor can be acquired from the bladder wall conveniently using the novel method, which yielded significantly higher SMC purity and cell viability compared to the traditional method.

  6. Kernel mucking in top

    SciTech Connect

    LeFebvre, W.

    1994-08-01

    For many years, the popular program top has aided system administrators in examining process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also with what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.

  7. Poles tracking of weakly nonlinear structures using a Bayesian smoothing method

    NASA Astrophysics Data System (ADS)

    Stephan, Cyrille; Festjens, Hugo; Renaud, Franck; Dion, Jean-Luc

    2017-02-01

    This paper describes a method for the identification and the tracking of poles of a weakly nonlinear structure from its free responses. This method is based on a model of multichannel damped sines whose parameters evolve over time. Their variations are approximated in discrete time by a nonlinear state space model. States are estimated by an iterative process which couples a two-pass Bayesian smoother with an Expectation-Maximization (EM) algorithm. The method is applied on numerical and experimental cases. As a result, accurate frequency and damping estimates are obtained as a function of amplitude.
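
    The paper's smoother runs on a nonlinear state-space model inside an EM loop, which is not reproduced here. As a simplified, hedged stand-in, the sketch below implements the usual building block of such two-pass Bayesian smoothers: a linear-Gaussian Kalman filter followed by a Rauch-Tung-Striebel backward pass; the model matrices and toy signal are illustrative.

    ```python
    import numpy as np

    def rts_smoother(y, A, H, Q, R, x0, P0):
        """Kalman filter + RTS backward pass for x_t = A x_{t-1} + w_t, y_t = H x_t + v_t."""
        n, dx = len(y), len(x0)
        xf = np.zeros((n, dx)); Pf = np.zeros((n, dx, dx))     # filtered
        xp = np.zeros((n, dx)); Pp = np.zeros((n, dx, dx))     # one-step predicted
        x, P = np.asarray(x0, float), np.asarray(P0, float)
        for t in range(n):
            xp[t] = A @ x if t else x                          # keep the prior at t = 0
            Pp[t] = A @ P @ A.T + Q if t else P
            K = Pp[t] @ H.T @ np.linalg.inv(H @ Pp[t] @ H.T + R)
            xf[t] = xp[t] + K @ (y[t] - H @ xp[t])
            Pf[t] = (np.eye(dx) - K @ H) @ Pp[t]
            x, P = xf[t], Pf[t]
        xs, Ps = xf.copy(), Pf.copy()                          # backward (smoothing) pass
        for t in range(n - 2, -1, -1):
            C = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])
            xs[t] = xf[t] + C @ (xs[t + 1] - xp[t + 1])
            Ps[t] = Pf[t] + C @ (Ps[t + 1] - Pp[t + 1]) @ C.T
        return xs, Ps

    # toy: smooth a noisy, lightly damped oscillation tracked with a 2-state model
    theta = 0.2
    A = 0.995 * np.array([[np.cos(theta), np.sin(theta)],
                          [-np.sin(theta), np.cos(theta)]])
    H = np.array([[1.0, 0.0]])
    rng = np.random.default_rng(3)
    x_true, ys = np.array([1.0, 0.0]), []
    for _ in range(200):
        x_true = A @ x_true
        ys.append(H @ x_true + rng.normal(0, 0.05, 1))
    xs, _ = rts_smoother(np.array(ys), A, H, Q=1e-5 * np.eye(2), R=np.array([[0.05 ** 2]]),
                         x0=np.array([1.0, 0.0]), P0=np.eye(2))
    print(np.round(xs[:3, 0], 3))
    ```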

  8. Kernel-Based Reconstruction of Graph Signals

    NASA Astrophysics Data System (ADS)

    Romero, Daniel; Ma, Meng; Giannakis, Georgios B.

    2017-02-01

    A number of applications in engineering, social sciences, physics, and biology involve inference over networks. In this context, graph signals are widely encountered as descriptors of vertex attributes or features in graph-structured data. Estimating such signals in all vertices given noisy observations of their values on a subset of vertices has been extensively analyzed in the literature of signal processing on graphs (SPoG). This paper advocates kernel regression as a framework generalizing popular SPoG modeling and reconstruction and expanding their capabilities. Formulating signal reconstruction as a regression task on reproducing kernel Hilbert spaces of graph signals brings in benefits from statistical learning, offers fresh insights, and allows estimators to leverage richer forms of prior information than existing alternatives. A number of SPoG notions such as bandlimitedness, graph filters, and the graph Fourier transform are naturally accommodated in the kernel framework. Additionally, this paper capitalizes on the so-called representer theorem to devise simpler versions of existing Tikhonov regularized estimators, and offers a novel probabilistic interpretation of kernel methods on graphs based on graphical models. Motivated by the challenges of selecting the bandwidth parameter in SPoG estimators or the kernel map in kernel-based methods, the present paper further proposes two multi-kernel approaches with complementary strengths. Whereas the first enables estimation of the unknown bandwidth of bandlimited signals, the second allows for efficient graph filter selection. Numerical tests with synthetic as well as real data demonstrate the merits of the proposed methods relative to state-of-the-art alternatives.
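
    A minimal illustration (not the paper's multi-kernel estimators) of kernel ridge regression on a graph: a regularized-Laplacian kernel is built from the adjacency matrix, the signal is observed on a few vertices, and the representer-theorem solution reconstructs it everywhere. The graph, the kernel choice and the regularization weight mu are arbitrary assumptions.

    ```python
    import numpy as np

    def laplacian_kernel_reconstruct(W, sample_idx, y_obs, eps=0.1, mu=1e-3):
        """Reconstruct a graph signal from noisy samples using K = (L + eps*I)^{-1}."""
        L = np.diag(W.sum(axis=1)) - W                        # combinatorial Laplacian
        K = np.linalg.inv(L + eps * np.eye(len(W)))           # regularized-Laplacian kernel
        Kss = K[np.ix_(sample_idx, sample_idx)]
        alpha = np.linalg.solve(Kss + mu * np.eye(len(sample_idx)), y_obs)
        return K[:, sample_idx] @ alpha                       # estimate on every vertex

    # toy: a 6-node path graph with a smooth signal observed on 3 vertices
    W = np.zeros((6, 6))
    for i in range(5):
        W[i, i + 1] = W[i + 1, i] = 1.0
    true = np.linspace(0.0, 1.0, 6)
    obs_idx = [0, 3, 5]
    y = true[obs_idx] + np.random.default_rng(4).normal(0, 0.01, 3)
    print(np.round(laplacian_kernel_reconstruct(W, obs_idx, y), 2))
    ```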

  9. Analog forecasting with dynamics-adapted kernels

    NASA Astrophysics Data System (ADS)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
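
    A stripped-down sketch of kernel-weighted analog forecasting, without the delay-coordinate or vector-field features described above: the forecast for a new state is the similarity-kernel-weighted average of the historical successors. The bandwidth eps, the lead time tau and the toy data are assumptions.

    ```python
    import numpy as np

    def kernel_analog_forecast(history, lead, x_query, eps=0.5):
        """RBF-weighted ensemble of analogs; lead[i] is the observable tau steps after history[i]."""
        d2 = ((history - x_query) ** 2).sum(axis=1)
        w = np.exp(-d2 / eps)
        return (w @ lead) / w.sum()

    # toy: forecast a noisy periodic observable tau steps ahead
    rng = np.random.default_rng(5)
    t = np.arange(400)
    states = np.column_stack([np.sin(0.1 * t), np.cos(0.1 * t)]) + rng.normal(0, 0.02, (400, 2))
    obs = np.sin(0.1 * t)
    tau = 5
    history, lead = states[:-tau], obs[tau:]
    x_now = np.array([np.sin(0.1 * 400), np.cos(0.1 * 400)])
    print(kernel_analog_forecast(history, lead, x_now))       # compare with np.sin(0.1 * 405)
    ```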

  10. Kernel bandwidth optimization in spike rate estimation.

    PubMed

    Shimazaki, Hideaki; Shinomoto, Shigeru

    2010-08-01

    The kernel smoother and the time histogram are classical tools for estimating an instantaneous rate of spike occurrences. We recently established a method for selecting the bin width of the time histogram, based on the principle of minimizing the mean integrated square error (MISE) between the estimated rate and the unknown underlying rate. Here we apply the same optimization principle to kernel density estimation in selecting the width or "bandwidth" of the kernel, and further extend the algorithm to allow a variable bandwidth, in conformity with the data. The variable kernel has the potential to accurately grasp non-stationary phenomena, such as abrupt changes in the firing rate, which we often encounter in neuroscience. In order to avoid possible overfitting that may take place due to excessive freedom, we introduced a stiffness constant for bandwidth variability. Our method automatically adjusts the stiffness constant, thereby adapting to the entire set of spike data. It is revealed that the classical kernel smoother may exhibit goodness-of-fit comparable to, or even better than, that of modern sophisticated rate estimation methods, provided that the bandwidth is selected properly for a given set of spike data, according to the optimization methods presented here.
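
    The MISE-based bandwidth optimization described above is not reproduced here; the sketch below only illustrates the underlying idea with a fixed-bandwidth Gaussian kernel rate estimate whose bandwidth is chosen from a grid by leave-one-out log-likelihood, a common and simpler surrogate criterion. The grid and the toy spike train are assumptions.

    ```python
    import numpy as np

    def kernel_rate(spikes, times, w):
        """Fixed-bandwidth Gaussian kernel estimate of the firing rate (spikes/s)."""
        d = times[:, None] - spikes[None, :]
        return np.exp(-0.5 * (d / w) ** 2).sum(axis=1) / (w * np.sqrt(2 * np.pi))

    def loo_bandwidth(spikes, grid):
        """Pick the bandwidth maximizing the leave-one-out log-likelihood of the spike density."""
        best_w, best_ll, n = grid[0], -np.inf, len(spikes)
        for w in grid:
            d = spikes[:, None] - spikes[None, :]
            K = np.exp(-0.5 * (d / w) ** 2) / (w * np.sqrt(2 * np.pi))
            np.fill_diagonal(K, 0.0)
            dens = K.sum(axis=1) / (n - 1)
            ll = np.log(np.maximum(dens, 1e-300)).sum()
            if ll > best_ll:
                best_w, best_ll = w, ll
        return best_w

    rng = np.random.default_rng(6)
    spikes = np.sort(rng.uniform(0, 10, 80))                  # toy spike train over 10 s
    w = loo_bandwidth(spikes, np.linspace(0.05, 1.0, 20))
    print(w, np.round(kernel_rate(spikes, np.linspace(0, 10, 5), w), 2))
    ```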

  11. Including State Excitation in the Fixed-Interval Smoothing Algorithm and Implementation of the Maneuver Detection Method Using Error Residuals

    DTIC Science & Technology

    1990-12-01

    Naval Postgraduate School thesis, Monterey, California. Keywords: filter, smoothing, noise process, maneuver detection. As the title indicates, the thesis incorporates state excitation into the fixed-interval smoothing algorithm and implements a maneuver detection method based on error residuals.

  12. Monitoring county-level chlamydia incidence in Texas, 2004 – 2005: application of empirical Bayesian smoothing and Exploratory Spatial Data Analysis (ESDA) methods

    PubMed Central

    Owusu-Edusei, Kwame; Owens, Chantelle J

    2009-01-01

    Background Chlamydia continues to be the most commonly reported sexually transmitted disease in the United States. Effective spatial monitoring of chlamydia incidence is important for successful implementation of control and prevention programs. The objective of this study is to apply Bayesian smoothing and exploratory spatial data analysis (ESDA) methods to monitor Texas county-level chlamydia incidence rates by examining spatiotemporal patterns. We used county-level data on chlamydia incidence (for all ages, genders and races) from the National Electronic Telecommunications System for Surveillance (NETSS) for 2004 and 2005. Results Bayesian-smoothed chlamydia incidence rates were spatially dependent both in levels and in relative changes. Erath County had significantly (p < 0.05) higher smoothed rates (>300 cases per 100,000 residents) than its contiguous neighbors (195 or fewer) in both years. Gaines County experienced the highest relative increase in smoothed rates (173%; from 139 to 379). The relative change in smoothed chlamydia rates in Newton County was significantly (p < 0.05) higher than in its contiguous neighbors. Conclusion Bayesian smoothing and ESDA methods can assist programs in using chlamydia surveillance data to identify outliers, as well as relevant changes in chlamydia incidence, in specific geographic units. They may also indirectly help in assessing existing differences and changes in chlamydia surveillance systems over time. PMID:19245686
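
    As a rough, hedged illustration of rate smoothing of this general kind (a simple moment-based shrinkage toward the overall rate, not the specific Bayesian model behind the published rates): each county rate is pulled toward the global rate with a weight that grows with population, so sparsely populated counties are smoothed the most. The counts and populations below are made up.

    ```python
    import numpy as np

    def eb_smooth_rates(cases, pop):
        """Shrink crude rates toward the global rate; small populations get more smoothing."""
        cases, pop = np.asarray(cases, float), np.asarray(pop, float)
        rates = cases / pop
        m = cases.sum() / pop.sum()                           # global rate
        # moment-based estimate of the between-county variance of the true rates
        s2 = max(np.average((rates - m) ** 2, weights=pop) - m / pop.mean(), 0.0)
        w = s2 / (s2 + m / pop)                               # per-county shrinkage weight
        return m + w * (rates - m)

    cases = np.array([2, 15, 40, 300])
    pop = np.array([800, 5000, 20000, 150000])
    print(np.round(eb_smooth_rates(cases, pop) * 100000, 1))  # smoothed rates per 100,000
    ```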

  13. Smoothed Particle Hydrodynamics Continuous Boundary Force method for Navier-Stokes equations subject to Robin boundary condition

    SciTech Connect

    Pan, Wenxiao; Bao, Jie; Tartakovsky, Alexandre M.

    2014-02-15

    The Robin boundary condition for the Navier-Stokes equations is used to model slip conditions at fluid-solid boundaries. A novel Continuous Boundary Force (CBF) method is proposed for solving the Navier-Stokes equations subject to the Robin boundary condition. In the CBF method, the Robin boundary condition at the boundary is replaced by a homogeneous Neumann boundary condition, and a volumetric force term is added to the momentum conservation equation. The smoothed particle hydrodynamics (SPH) method is used to solve the resulting Navier-Stokes equations. We present solutions for two-dimensional and three-dimensional flows in domains bounded by flat and curved boundaries subject to various forms of the Robin boundary condition. The numerical accuracy and convergence are examined through comparison of the SPH-CBF results with finite difference or finite element solutions. Taking the no-slip boundary condition as a special case of the slip condition, we demonstrate that the SPH-CBF method accurately describes both no-slip and slip conditions.

  14. A comparative study of energy minimization methods for Markov random fields with smoothness-based priors.

    PubMed

    Szeliski, Richard; Zabih, Ramin; Scharstein, Daniel; Veksler, Olga; Kolmogorov, Vladimir; Agarwala, Aseem; Tappen, Marshall; Rother, Carsten

    2008-06-01

    Among the most exciting advances in early vision has been the development of efficient energy minimization algorithms for pixel-labeling tasks such as depth or texture computation. It has been known for decades that such problems can be elegantly expressed as Markov random fields, yet the resulting energy minimization problems have been widely viewed as intractable. Recently, algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: for example, such methods form the basis for almost all the top-performing stereo methods. However, the tradeoffs among different energy minimization algorithms are still not well understood. In this paper we describe a set of energy minimization benchmarks and use them to compare the solution quality and running time of several common energy minimization algorithms. We investigate three promising recent methods, graph cuts, LBP, and tree-reweighted message passing, in addition to the well-known older iterated conditional modes (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching, interactive segmentation, and denoising. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods. Benchmarks, code, images, and results are available at http://vision.middlebury.edu/MRF/.

  15. Convolution kernel design and efficient algorithm for sampling density correction.

    PubMed

    Johnson, Kenneth O; Pipe, James G

    2009-02-01

    Sampling density compensation is an important step in non-Cartesian image reconstruction. One of the common techniques to determine weights that compensate for differences in sampling density involves a convolution. A new convolution kernel is designed for sampling density compensation, with the aim of minimizing the error in the fully reconstructed image. The resulting weights obtained using this new kernel are compared with those of various previous methods, showing a reduction in reconstruction error. A computationally efficient algorithm is also presented that facilitates the calculation of the convolution of finite kernels. Both the kernel and the algorithm are extended to 3D. Copyright 2009 Wiley-Liss, Inc.

  16. A Kernel-based Account of Bibliometric Measures

    NASA Astrophysics Data System (ADS)

    Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji

    The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on the three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of 'relativity', or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.

  17. Isolation of bacterial endophytes from germinated maize kernels.

    PubMed

    Rijavec, Tomaz; Lapanje, Ales; Dermastia, Marina; Rupnik, Maja

    2007-06-01

    The germination of surface-sterilized maize kernels under aseptic conditions proved to be a suitable method for isolation of kernel-associated bacterial endophytes. Bacterial strains identified by partial 16S rRNA gene sequencing as Pantoea sp., Microbacterium sp., Frigoribacterium sp., Bacillus sp., Paenibacillus sp., and Sphingomonas sp. were isolated from kernels of 4 different maize cultivars. Genus Pantoea was associated with a specific maize cultivar. The kernels of this cultivar were often overgrown with the fungus Lecanicillium aphanocladii; however, those exhibiting Pantoea growth were never colonized with it. Furthermore, the isolated bacterium strain inhibited fungal growth in vitro.

  18. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) represent a group of secondary metabolic products in plants. AQS occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by an enzymatic browning reaction in Chinese chestnut kernels. To find out whether the non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High-performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained a high amount of AQS. Thus, we confirmed that AQS can be produced during both the enzymatic and the non-enzymatic browning process. Rhein and emodin were the main components of AQS in the browned kernels.

  19. A Parameterization-Based Numerical Method for Isotropic and Anisotropic Diffusion Smoothing on Non-Flat Surfaces

    PubMed Central

    Joshi, Anand A.; Shattuck, David W.; Thompson, Paul M.; Leahy, Richard M.

    2009-01-01

    Neuroimaging data, such as 3-D maps of cortical thickness or neural activation, can often be analyzed more informatively with respect to the cortical surface rather than the entire volume of the brain. Any cortical surface-based analysis should be carried out using computations in the intrinsic geometry of the surface rather than using the metric of the ambient 3-D space. We present parameterization-based numerical methods for performing isotropic and anisotropic filtering on triangulated surface geometries. In contrast to existing FEM-based methods for triangulated geometries, our approach accounts for the metric of the surface. In order to discretize and numerically compute the isotropic and anisotropic geometric operators, we first parameterize the surface using a p-harmonic mapping. We then use this parameterization as our computational domain and account for the surface metric while carrying out isotropic and anisotropic filtering. To validate our method, we compare our numerical results to the analytical expression for isotropic diffusion on a spherical surface. We apply these methods to smoothing of mean curvature maps on the cortical surface, a step commonly required for analysis of gyrification or for registering surface-based maps across subjects. PMID:19423447
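
    In the parameterized coordinates, the isotropic operator being discretized is the Laplace-Beltrami operator; its standard coordinate form, independent of this paper's particular discretization, is

    ```latex
    \Delta_{S} f \;=\; \frac{1}{\sqrt{g}}\sum_{i,j=1}^{2}
    \frac{\partial}{\partial u^{i}}\!\left(\sqrt{g}\,g^{ij}\,\frac{\partial f}{\partial u^{j}}\right)
    ```

    where \(g_{ij}\) is the surface metric induced by the parameterization, \(g^{ij}\) its inverse and \(g = \det(g_{ij})\); anisotropic filtering replaces \(g^{ij}\) with a direction-dependent conductivity tensor.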

  20. Robotic Intelligence Kernel: Visualization

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.

  1. Robotic Intelligence Kernel: Architecture

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  2. A method of smooth bivariate interpolation for data given on a generalized curvilinear grid

    NASA Technical Reports Server (NTRS)

    Zingg, David W.; Yarrow, Maurice

    1992-01-01

    A method of locally bicubic interpolation is presented for data given at the nodes of a two-dimensional generalized curvilinear grid. The physical domain is transformed to a computational domain in which the grid is uniform and rectangular by a generalized curvilinear coordinate transformation. The metrics of the transformation are obtained by finite differences in the computational domain. Metric derivatives are determined by repeated application of the chain rule for partial differentiation. Given the metrics and the metric derivatives, the partial derivatives required to determine a locally bicubic interpolant can be estimated at each data point using finite differences in the computational domain. A bilinear transformation is used to analytically transform the individual quadrilateral cells in the physical domain into unit squares, thus allowing the use of simple formulas for bicubic interpolation.

  3. Smooth Sailing.

    ERIC Educational Resources Information Center

    Price, Beverley; Pincott, Maxine; Rebman, Ashley; Northcutt, Jen; Barsanti, Amy; Silkunas, Betty; Brighton, Susan K.; Reitz, David; Winkler, Maureen

    1999-01-01

    Presents discipline tips from several teachers to keep classrooms running smoothly all year. Some of the suggestions include the following: a bear-cave warning system, peer mediation, a motivational mystery, problem students acting as the teacher's assistant, a positive-behavior-reward chain, a hallway scavenger hunt (to ensure quiet passage…

  5. Smoothed Biasing Forces Yield Unbiased Free Energies with the Extended-System Adaptive Biasing Force Method.

    PubMed

    Lesage, Adrien; Lelièvre, Tony; Stoltz, Gabriel; Hénin, Jérôme

    2016-12-27

    We report a theoretical description and numerical tests of the extended-system adaptive biasing force method (eABF), together with an unbiased estimator of the free energy surface from eABF dynamics. Whereas the original ABF approach uses its running estimate of the free energy gradient as the adaptive biasing force, eABF is built on the idea that the exact free energy gradient is not necessary for efficient exploration, and that it is still possible to recover the exact free energy separately with an appropriate estimator. eABF does not directly bias the collective coordinates of interest, but rather fictitious variables that are harmonically coupled to them; therefore it does not require second-derivative estimates, making it easily applicable to a wider range of problems than ABF. Furthermore, the extended variables present a smoother, coarse-grain-like sampling problem on a mollified free energy surface, leading to faster exploration and convergence. We also introduce CZAR, a simple, unbiased free energy estimator from eABF trajectories. eABF/CZAR converges to the physical free energy surface faster than standard ABF for a wide range of parameters.

  6. An equatorially enhanced grid with smooth resolution distribution generated by a spring dynamics method

    NASA Astrophysics Data System (ADS)

    Iga, Shin-ichi

    2017-02-01

    An equatorially enhanced grid is applicable to atmospheric general circulation simulations with better representations of the cumulus convection active in the tropics. This study improved the topology of previously proposed equatorially enhanced grids (Iga, 2015) [1], which had extremely large grid intervals around the poles. The proposed grids in this study are of a triangular mesh and are generated by a spring dynamics method with stretching around singular points, which are connected to five or seven neighboring grid points. The latitudinal distribution of resolution is nearly proportional to the combination of the map factors of the Mercator, Lambert conformal conic, and polar stereographic projections. The resolution contrast between the equator and pole is 2.3 ∼ 4.5 for the sampled cases, which is much smaller than that for the LML grids. This improvement requires only a small amount of additional grid resources, less than 11% of the total. The proposed grids are also examined with shallow water tests, and were found to perform better than the previous LML grids.

  7. Modern methods for calculating ground-wave field strength over a smooth spherical Earth

    NASA Astrophysics Data System (ADS)

    Eckert, R. P.

    1986-02-01

    The report makes available the computer program that produces the proposed new FCC ground-wave propagation prediction curves for the new band of standard broadcast frequencies between 1605 and 1705 kHz. The curves are included in recommendations to the U.S. Department of State in preparation for an International Telecommunication Union Radio Conference. The history of the FCC curves is traced from the early 1930's, when the Federal Radio Commission and later the FCC faced an intensifying need for technical information concerning interference distances. A family of curves satisfactorily meeting this need was published in 1940. The FCC reexamined the matter recently in connection with the planned expansion of the AM broadcast band, and the resulting new curves are a precise representation of the mathematical theory. Mathematical background is furnished so that the computer program can be critically evaluated. This will be particularly valuable to persons implementing the program on other computers or adapting it for special applications. Technical references are identified for each of the formulas used by the program, and the history of the development of mathematical methods is outlined.

  8. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
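
    As a rough illustration of the ideas above (not the paper's small-sample formulas), the sketch below estimates Shannon entropy by resubstitution on a Gaussian KDE and the quadratic (Renyi order-2) entropy from the closed-form information potential of a Gaussian kernel; the sample, bandwidth rule and variable names are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

# Placeholder 1-D sample standing in for an experimental time series.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.8, scale=0.1, size=500)

# Shannon entropy by resubstitution on a Gaussian KDE: H ~ -E[log f_hat(X)].
kde = gaussian_kde(x)                       # bandwidth from Scott's rule
shannon = -np.mean(np.log(kde(x)))

# Quadratic (Renyi order-2) entropy: H2 = -log integral f_hat(x)^2 dx.
# For a Gaussian kernel the integral has the closed form
# (1/n^2) * sum_ij N(x_i - x_j; 0, 2 h^2), the "information potential".
h = np.sqrt(kde.covariance[0, 0])           # effective bandwidth
pairwise_diff = x[:, None] - x[None, :]
info_potential = norm.pdf(pairwise_diff, scale=np.sqrt(2) * h).mean()
quadratic = -np.log(info_potential)

print(f"Shannon entropy estimate:   {shannon:.3f} nats")
print(f"quadratic entropy estimate: {quadratic:.3f} nats")
```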

  9. Geometric tree kernels: classification of COPD from airway tree geometry.

    PubMed

    Feragen, Aasa; Petersen, Jens; Grimm, Dominik; Dirksen, Asger; Pedersen, Jesper Holst; Borgwardt, Karsten; de Bruijne, Marleen

    2013-01-01

    Methodological contributions: This paper introduces a family of kernels for analyzing (anatomical) trees endowed with vector valued measurements made along the tree. While state-of-the-art graph and tree kernels use combinatorial tree/graph structure with discrete node and edge labels, the kernels presented in this paper can include geometric information such as branch shape, branch radius or other vector valued properties. In addition to being flexible in their ability to model different types of attributes, the presented kernels are computationally efficient and some of them can easily be computed for large datasets (N ≈ 10,000) of trees with 30-600 branches. Combining the kernels with standard machine learning tools enables us to analyze the relation between disease and anatomical tree structure and geometry. Experimental results: The kernels are used to compare airway trees segmented from low-dose CT, endowed with branch shape descriptors and airway wall area percentage measurements made along the tree. Using kernelized hypothesis testing we show that the geometric airway trees are significantly differently distributed in patients with Chronic Obstructive Pulmonary Disease (COPD) than in healthy individuals. The geometric tree kernels also give a significant increase in the classification accuracy of COPD from geometric tree structure endowed with airway wall thickness measurements in comparison with state-of-the-art methods, giving further insight into the relationship between airway wall thickness and COPD. Software: Software for computing kernels and statistical tests is available at http://image.diku.dk/aasa/software.php.

  10. Discrimination of Maize Haploid Seeds from Hybrid Seeds Using Vis Spectroscopy and Support Vector Machine Method.

    PubMed

    Liu, Jin; Guo, Ting-ting; Li, Hao-chuan; Jia, Shi-qiang; Yan, Yan-lu; An, Dong; Zhang, Yao; Chen, Shao-jiang

    2015-11-01

    Doubled haploid (DH) lines are routinely applied in the hybrid maize breeding programs of many institutes and companies for their advantages of complete homozygosity and short breeding cycle length. A key issue in this approach is an efficient screening system to identify haploid kernels from the hybrid kernels crossed with the inducer. At present, haploid kernel selection is carried out manually using the "red-crown" kernel trait (the haploid kernel has a non-pigmented embryo and pigmented endosperm) controlled by the R1-nj gene. Manual selection is time-consuming and unreliable. Furthermore, the color of the kernel embryo is concealed by the pericarp. Here, we establish a novel approach for identifying maize haploid kernels based on visible (Vis) spectroscopy and support vector machine (SVM) pattern recognition technology. The diffuse transmittance spectra of individual kernels (141 haploid kernels and 141 hybrid kernels from 9 genotypes) were collected using a portable UV-Vis spectrometer and integrating sphere. The raw spectral data were preprocessed using smoothing and vector normalization methods. The desired feature wavelengths were selected based on the results of the Kolmogorov-Smirnov test. The wavelengths with p values above 0.05 were eliminated because the distributions of absorbance data at these wavelengths show no significant difference between haploid and hybrid kernels. Principal component analysis was then performed to reduce the number of variables. The SVM model was evaluated by 9-fold cross-validation. In each round, samples of one genotype were used as the testing set, while those of other genotypes were used as the training set. The mean rate of correct discrimination was 92.06%. This result demonstrates the feasibility of using Vis spectroscopy to identify haploid maize kernels. The method would help develop a rapid and accurate automated screening system for haploid kernels.
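
    A hedged sketch of a pipeline in the same spirit as the one described above (smoothing, vector normalization, PCA, RBF-SVM, leave-one-genotype-out validation) is given below; the spectra are synthetic placeholders, the Kolmogorov-Smirnov wavelength screening step is omitted, and all parameter values are illustrative rather than the paper's.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_kernels, n_wavelengths = 282, 256                  # 141 haploid + 141 hybrid
X = rng.normal(size=(n_kernels, n_wavelengths))      # placeholder spectra
y = np.repeat([0, 1], 141)                           # 0 = haploid, 1 = hybrid
genotype = rng.integers(0, 9, size=n_kernels)        # 9 genotypes define the CV folds

def snv(spectra):
    """Standard normal variate: per-spectrum vector normalization."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

smoothing = FunctionTransformer(
    lambda s: savgol_filter(s, window_length=11, polyorder=2, axis=1))

model = make_pipeline(smoothing,
                      FunctionTransformer(snv),
                      PCA(n_components=10),
                      SVC(kernel="rbf", C=10.0, gamma="scale"))

# Leave-one-genotype-out cross-validation, mirroring the 9-round scheme above.
scores = cross_val_score(model, X, y, groups=genotype, cv=LeaveOneGroupOut())
print(f"mean leave-one-genotype-out accuracy: {scores.mean():.2%}")
```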

  11. High speed sorting of Fusarium-damaged wheat kernels

    USDA-ARS?s Scientific Manuscript database

    Recent studies have found that resistance to Fusarium fungal infection can be inherited in wheat from one generation to another. However, there is not yet a cost-effective method available to separate Fusarium-damaged wheat kernels from undamaged kernels so that wheat breeders can take advantage of...

  12. A Real-Time Orbit Determination Method for Smooth Transition from Optical Tracking to Laser Ranging of Debris

    PubMed Central

    Li, Bin; Sang, Jizhang; Zhang, Zhongping

    2016-01-01

    A critical requirement to achieve high efficiency of debris laser tracking is to have sufficiently accurate orbit predictions (OP) in both the pointing direction (better than 20 arc seconds) and the distance from the tracking station to the debris objects, with the former more important than the latter because of the narrow laser beam. When the two line element (TLE) is used to provide the orbit predictions, the resultant pointing errors are usually on the order of tens to hundreds of arc seconds. In practice, therefore, angular observations of debris objects are first collected using an optical tracking sensor, and then used to guide the laser beam pointing to the objects. The manual guidance may cause interruptions to the laser tracking and, consequently, the loss of valuable laser tracking data. This paper presents a real-time orbit determination (OD) and prediction method to realize smooth and efficient debris laser tracking. The method uses TLE-computed positions and angles over a short arc of less than 2 min as observations in an OD process where simplified force models are considered. After the OD convergence, the OP is performed from the last observation epoch to the end of the tracking pass. Simulation and real tracking data processing results show that the pointing prediction errors are usually less than 10″ and the distance errors less than 100 m; the prediction accuracy is therefore sufficient for blind laser tracking. PMID:27347958

  13. Investigation of calcium antagonist-L-type calcium channel interactions by a vascular smooth muscle cell membrane chromatography method.

    PubMed

    Du, Hui; He, Jianyu; Wang, Sicen; He, Langchong

    2010-07-01

    The dissociation equilibrium constant (K_D) is an important affinity parameter for studying drug-receptor interactions. A vascular smooth muscle (VSM) cell membrane chromatography (CMC) method was developed for determination of the K_D values for calcium antagonist-L-type calcium channel (L-CC) interactions. VSM cells, by means of primary culture with rat thoracic aortas, were used for preparation of the cell membrane stationary phase in the VSM/CMC model. All measurements were performed with spectrophotometric detection (237 nm) at 37 °C. The K_D values obtained using frontal analysis were 3.36 × 10⁻⁶ M for nifedipine, 1.34 × 10⁻⁶ M for nimodipine, 6.83 × 10⁻⁷ M for nitrendipine, 1.23 × 10⁻⁷ M for nicardipine, 1.09 × 10⁻⁷ M for amlodipine, and 8.51 × 10⁻⁸ M for verapamil. This affinity rank order obtained from the VSM/CMC method had a strong positive correlation with that obtained from radioligand binding assay. The location of the binding region was examined by displacement experiments using nitrendipine as a mobile-phase additive. It was found that verapamil occupied a class of binding sites on L-CCs different from those occupied by nitrendipine. In addition, nicardipine, amlodipine, and nitrendipine had direct competition at a single common binding site. The studies showed that CMC can be applied to the investigation of drug-receptor interactions.

  14. A smoothed finite element method for analysis of anisotropic large deformation of passive rabbit ventricles in diastole.

    PubMed

    Jiang, Chen; Liu, Gui-Rong; Han, Xu; Zhang, Zhi-Qian; Zeng, Wei

    2015-01-01

    The smoothed FEM (S-FEM) is extended for the first time to explore the behavior of 3D anisotropic large deformation of rabbit ventricles during the passive filling process in diastole. Because of the incompressibility of myocardium, a special method called selective face-based/node-based S-FEM using four-node tetrahedral elements (FS/NS-FEM-TET4) is adopted in order to avoid volumetric locking. To validate the proposed algorithms of FS/NS-FEM-TET4, the 3D Lame problem is implemented. The performance contest results show that our FS/NS-FEM-TET4 is accurate, free of volumetric locking, and less sensitive to mesh distortion than standard linear FEM because it involves no isoparametric mapping. Actually, the efficiency of FS/NS-FEM-TET4 is comparable with higher-order FEM, such as 10-node tetrahedral elements. The proposed method for the Holzapfel myocardium hyperelastic strain energy is also validated by simple shear tests, through comparison with outcomes reported in available references. Finally, the FS/NS-FEM-TET4 is applied to the example of the passive filling of MRI-based rabbit ventricles with fiber architecture derived from a rule-based algorithm to demonstrate its efficiency. Hence, we conclude that FS/NS-FEM-TET4 is a promising alternative to FEM in passive cardiac mechanics.

  15. Feasibility of near infrared spectroscopy for analyzing corn kernel damage and viability of soybean and corn kernels

    USDA-ARS?s Scientific Manuscript database

    The current US corn grading system accounts for the portion of damaged kernels, which is measured by time-consuming and inaccurate visual inspection. Near infrared spectroscopy (NIRS), a non-destructive and fast analytical method, was tested as a tool for discriminating corn kernels with heat and f...

  16. Robust C-Loss Kernel Classifiers.

    PubMed

    Xu, Guibiao; Hu, Bao-Gang; Principe, Jose C

    2016-12-29

    The correntropy-induced loss (C-loss) function has the nice property of being robust to outliers. In this paper, we study the C-loss kernel classifier with a Tikhonov regularization term, which is used to avoid overfitting. After using the half-quadratic optimization algorithm, which converges much faster than the gradient optimization algorithm, we find that the resulting C-loss kernel classifier is equivalent to an iteratively reweighted least-squares support vector machine (LS-SVM). This relationship helps explain the robustness of the iteratively reweighted LS-SVM from the correntropy and density estimation perspectives. For large-scale data sets that have low-rank Gram matrices, we suggest using incomplete Cholesky decomposition to speed up the training process. Moreover, we use the representer theorem to improve the sparseness of the resulting C-loss kernel classifier. Experimental results confirm that our methods are more robust to outliers than existing common classifiers.
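
    The equivalence described above can be illustrated with a small, hedged sketch of an iteratively reweighted (half-quadratic) kernel classifier, where Gaussian correntropy weights down-weight large residuals; the data, kernel width, regularization value and loop length are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.normal(size=200))
y[:10] *= -1                                    # inject a few label outliers

K = rbf_kernel(X, gamma=1.0)                    # Gaussian kernel Gram matrix
lam, sigma = 1e-2, 1.0                          # regularization and correntropy width
alpha = np.zeros(len(y))

for _ in range(20):                             # half-quadratic reweighting loop
    residual = y - K @ alpha
    w = np.exp(-residual**2 / (2 * sigma**2))   # correntropy-induced weights
    # Weighted LS-SVM-style update: minimize sum_i w_i e_i^2 + lam * alpha' K alpha,
    # whose stationarity condition gives (diag(w) K + lam I) alpha = diag(w) y.
    alpha = np.linalg.solve(np.diag(w) @ K + lam * np.eye(len(y)), w * y)

pred = np.sign(K @ alpha)
print("training accuracy with outliers:", (pred == y).mean())
```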

  17. Fractal Weyl law for Linux Kernel architecture

    NASA Astrophysics Data System (ADS)

    Ermann, L.; Chepelianskii, A. D.; Shepelyansky, D. L.

    2011-01-01

    We study the properties of the spectrum and eigenstates of the Google matrix of a directed network formed by the procedure calls in the Linux Kernel. Our results, obtained for various versions of the Linux Kernel, show that the spectrum is characterized by the fractal Weyl law established recently for systems of quantum chaotic scattering and the Perron-Frobenius operators of dynamical maps. The fractal Weyl exponent is found to be ν ≈ 0.65, which corresponds to the fractal dimension of the network d ≈ 1.3. An independent computation of the fractal dimension by the cluster growing method, generalized for directed networks, gives a close value d ≈ 1.4. The eigenmodes of the Google matrix of the Linux Kernel are localized on certain principal nodes. We argue that the fractal Weyl law should be generic for directed networks with fractal dimension d < 2.
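
    For readers unfamiliar with the construction, the sketch below builds the Google matrix of a small random directed graph (standing in for the Linux Kernel call network, which is not reproduced here) and inspects its spectrum; the graph size, density and damping factor are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, damping = 200, 0.85
A = (rng.random((n, n)) < 0.03).astype(float)       # A[i, j] = 1 if node i links to j
np.fill_diagonal(A, 0.0)

out_degree = A.sum(axis=1)
# Stochastic matrix S: normalized rows, with dangling nodes spread uniformly.
S = np.where(out_degree[:, None] > 0,
             A / np.maximum(out_degree, 1.0)[:, None],
             1.0 / n)
G = damping * S + (1.0 - damping) / n               # Google matrix (row-stochastic)

eigenvalues = np.linalg.eigvals(G)
print("largest |lambda|:", np.abs(eigenvalues).max())   # should be 1 up to rounding
```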

  18. Gaussian kernel based anatomically-aided diffuse optical tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Baikejiang, Reheman; Zhang, Wei; Li, Changqing

    2017-02-01

    Image reconstruction in diffuse optical tomography (DOT) is challenging because its inverse problem is nonlinear, ill-posed and ill-conditioned. Anatomical guidance from high-spatial-resolution imaging modalities can substantially improve the quality of reconstructed DOT images. In this paper, inspired by the kernel methods in machine learning, we propose a kernel method to introduce anatomical information into the DOT image reconstruction algorithm. In this kernel method, the optical absorption coefficient at each finite element node is represented as a function of a set of features obtained from anatomical images such as computed tomography (CT). The kernel-based image model is directly incorporated into the forward model of DOT, which exploits the sparseness of the image in the feature space. Compared with Laplacian approaches to including structural priors, the proposed method does not require the image segmentation of distinct regions. The proposed kernel method is validated with numerical simulations of 3D DOT reconstruction using synthetic CT data. We added 15% Gaussian noise to both the numerical DOT measurements and the simulated CT image. We have also validated the proposed method with an agar phantom experiment with anatomical guidance from a CT scan. We have studied the effects of voxel size and nearest-neighborhood size in the kernel method on the reconstructed DOT images. Our results indicate that the spatial resolution and the accuracy of the reconstructed DOT images are improved substantially after applying the anatomical guidance with the proposed kernel method.
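
    A minimal sketch of the kernel image model x = Kα described above is given below, using a k-nearest-neighbour Gaussian kernel built from anatomical feature vectors; the feature construction, neighbourhood size and kernel width are assumed for illustration and are not the paper's implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n_nodes = 2000
# Placeholder anatomical feature vectors, e.g. CT-derived descriptors per FEM node.
features = rng.normal(size=(n_nodes, 3))

k, sigma = 10, 1.0
nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
dist, idx = nn.kneighbors(features)                 # first neighbour is the node itself

# Sparse kernel matrix: Gaussian weights restricted to the k nearest neighbours.
rows = np.repeat(np.arange(n_nodes), k)
cols = idx[:, 1:].ravel()
vals = np.exp(-dist[:, 1:].ravel() ** 2 / (2 * sigma**2))
K = csr_matrix((vals, (rows, cols)), shape=(n_nodes, n_nodes))

# The optical image is represented through kernel coefficients alpha: mu_a = K @ alpha,
# so the unknowns in the reconstruction become alpha rather than mu_a itself.
alpha = rng.normal(size=n_nodes)
mu_a = K @ alpha
print(mu_a.shape)
```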

  19. Gaussian kernel width optimization for sparse Bayesian learning.

    PubMed

    Mohsenzadeh, Yalda; Sheikhzadeh, Hamid

    2015-04-01

    Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on the commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters.

  20. Comparison of neck movement smoothness between patients with mechanical neck disorder and healthy volunteers using the spectral entropy method.

    PubMed

    Yang, Chia-Chi; Su, Fong-Chin; Guo, Lan-Yuen

    2014-08-01

    Mechanical neck disorder is one of the most common health issues. No previous studies have applied spectral entropy to explore the smoothness of cervical movement. Therefore, the objectives were to ascertain whether the spectral entropy of time-series linear acceleration could be extended to estimate the smoothness of cervical movement and to compare the characteristics of cervical movement smoothness between patients with mechanical neck disorder (MND) and healthy volunteers. The smoothness of cervical movement during cervical circumduction from 36 subjects (MND: n = 18, asymptomatic: n = 18) was quantified by the spectral entropy of time-series linear acceleration and other speed-dependent parameters. Patients with MND showed significantly longer movement time, higher spectral entropy values and a wider-band response in the frequency spectrum than healthy volunteers (P = 0.01). Spectral entropy is therefore suitable for discriminating the smoothness of cervical movement between patients with MND and healthy volunteers, and the results demonstrated that patients with MND had significantly less smooth cervical movement.
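
    A minimal sketch of the smoothness measure itself (not the study's protocol) is shown below: the spectral entropy of a time-series linear acceleration signal computed from a Welch power spectral density, where a smoother movement concentrates spectral power and yields a lower entropy; the signals, sampling rate and normalization are illustrative.

```python
import numpy as np
from scipy.signal import welch

fs = 100.0                                      # assumed sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)
smooth_acc = np.sin(2 * np.pi * 0.5 * t)                               # smooth circumduction
jerky_acc = smooth_acc + 0.5 * np.random.default_rng(0).normal(size=t.size)

def spectral_entropy(signal, fs):
    """Normalized Shannon entropy of the Welch power spectral density."""
    _, pxx = welch(signal, fs=fs, nperseg=256)
    p = pxx / pxx.sum()                         # treat the PSD as a probability mass
    p = p[p > 0]
    return -np.sum(p * np.log2(p)) / np.log2(p.size)

print("smooth movement:", round(spectral_entropy(smooth_acc, fs), 3))
print("jerky movement: ", round(spectral_entropy(jerky_acc, fs), 3))
```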

  1. Learning kernels from biological networks by maximizing entropy.

    PubMed

    Tsuda, Koji; Noble, William Stafford

    2004-08-04

    The diffusion kernel is a general method for computing pairwise distances among all nodes in a graph, based on the sum of weighted paths between each pair of nodes. This technique has been used successfully, in conjunction with kernel-based learning methods, to draw inferences from several types of biological networks. We show that computing the diffusion kernel is equivalent to maximizing the von Neumann entropy, subject to a global constraint on the sum of the Euclidean distances between nodes. This global constraint allows for high variance in the pairwise distances. Accordingly, we propose an alternative, locally constrained diffusion kernel, and we demonstrate that the resulting kernel allows for more accurate support vector machine prediction of protein functional classifications from metabolic and protein-protein interaction networks. Supplementary results and data are available at noble.gs.washington.edu/proj/maxent
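
    The diffusion kernel itself is straightforward to compute for a small graph; the sketch below (toy adjacency matrix and β value assumed) forms K = expm(βH) with H the negative graph Laplacian, which is the construction referred to above.

```python
import numpy as np
from scipy.linalg import expm

# Toy undirected graph (5 nodes) standing in for a biological network.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))
H = A - D                    # negative graph Laplacian
beta = 1.0                   # diffusion parameter (illustrative)
K = expm(beta * H)           # diffusion kernel: sums weighted paths between node pairs

print(np.round(K, 3))        # symmetric, positive semidefinite similarity matrix
```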

  2. Choosing parameters of kernel subspace LDA for recognition of face images under pose and illumination variations.

    PubMed

    Huang, Jian; Yuen, Pong C; Chen, Wen-Sheng; Lai, Jian Huang

    2007-08-01

    This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distribution by mapping the input space to a high-dimensional feature space. Some recognition algorithms such as the kernel principal components analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA have been developed in the last five years. The experimental results show that the kernel-based method is a good and feasible approach to tackle the pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which highly affects the generalization capability and stability of the kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which is developed based on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin maximization criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation on the generalization performance on pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with the existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.

  3. Smoothed particle hydrodynamics method applied to pulsatile flow inside a rigid two-dimensional model of left heart cavity.

    PubMed

    Shahriari, S; Kadem, L; Rogers, B D; Hassan, I

    2012-11-01

    This paper aims to extend the application of smoothed particle hydrodynamics (SPH), a meshfree particle method, to simulate flow inside a model of the heart's left ventricle (LV). This work is considered the first attempt to simulate flow inside a heart cavity using a meshfree particle method. Simulating this kind of flow, characterized by high pulsatility and moderate Reynolds number using SPH is challenging. As a consequence, validation of the computational code using benchmark cases is required prior to simulating the flow inside a model of the LV. In this work, this is accomplished by simulating an unsteady oscillating flow (pressure amplitude: A = 2500 N/m³ and Womersley number: Wo = 16) and the steady lid-driven cavity flow (Re = 3200, 5000). The results are compared against analytical solutions and reference data to assess convergence. Then, both benchmark cases are combined and a pulsatile jet in a cavity is simulated and the results are compared with the finite volume method. Here, an approach to deal with inflow and outflow boundary conditions is introduced. Finally, pulsatile inlet flow in a rigid model of the LV is simulated. The results demonstrate the ability of SPH to model complex cardiovascular flows and to track the history of fluid properties. Some interesting features of SPH are also demonstrated in this study, including the relation between particle resolution and sound speed to control compressibility effects and also order of convergence in SPH simulations, which is consistently demonstrated to be between first-order and second-order at the moderate Reynolds numbers investigated.

  4. Computing the roots of complex orthogonal and kernel polynomials

    SciTech Connect

    Saylor, P.E.; Smolarski, D.C.

    1988-01-01

    A method is presented to compute the roots of complex orthogonal and kernel polynomials. An important application of complex kernel polynomials is the acceleration of iterative methods for the solution of nonsymmetric linear equations. In the real case, the roots of orthogonal polynomials coincide with the eigenvalues of the Jacobi matrix, a symmetric tridiagonal matrix obtained from the defining three-term recurrence relationship for the orthogonal polynomials. In the real case kernel polynomials are orthogonal. The Stieltjes procedure is an algorithm to compute the roots of orthogonal and kernel polynomials based on these facts. In the complex case, the Jacobi matrix generalizes to a Hessenberg matrix, the eigenvalues of which are roots of either orthogonal or kernel polynomials. The resulting algorithm generalizes the Stieltjes procedure. It may not be defined in the case of kernel polynomials, a consequence of the fact that they are orthogonal with respect to a nonpositive bilinear form. (Another consequence is that kernel polynomials need not be of exact degree.) A second algorithm that is always defined is presented for kernel polynomials. Numerical examples are described.
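
    The real-case fact quoted above can be checked directly; the sketch below builds the Jacobi matrix of the degree-8 physicists' Hermite polynomial from its monic three-term recurrence and compares its eigenvalues with the polynomial's roots. The choice of Hermite polynomials is only an example.

```python
import numpy as np

# Monic recurrence for physicists' Hermite polynomials: pi_{k+1} = x pi_k - (k/2) pi_{k-1},
# so the Jacobi matrix has zero diagonal and off-diagonal entries sqrt(k/2).
n = 8
off_diag = np.sqrt(np.arange(1, n) / 2.0)
J = np.diag(off_diag, 1) + np.diag(off_diag, -1)

roots_from_jacobi = np.sort(np.linalg.eigvalsh(J))
roots_direct = np.sort(np.polynomial.hermite.hermroots([0] * n + [1]))  # roots of H_8
print(np.allclose(roots_from_jacobi, roots_direct))                     # True
```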

  5. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell integration method, or σ ≤ 0.22 using the cell center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different than theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
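
    The contrast between the two discretization strategies can be reproduced in one dimension with the hedged sketch below (unit cells, an assumed σ and number of convolution steps): the kernel is discretized either by sampling the density at cell centres or by integrating it over each cell with the error function, convolved repeatedly, and compared with the analytically integrated Gaussian.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import erf

def integrated_gaussian(sigma, half_width):
    """Per-cell mass of a zero-mean Gaussian on unit cells centred on the integers."""
    edges = np.arange(-half_width, half_width + 2) - 0.5
    cdf = 0.5 * (1.0 + erf(edges / (np.sqrt(2) * sigma)))
    return np.diff(cdf)

def centre_sampled_gaussian(sigma, half_width):
    """Gaussian sampled at cell centres and renormalised to unit mass."""
    centres = np.arange(-half_width, half_width + 1)
    k = np.exp(-centres**2 / (2 * sigma**2))
    return k / k.sum()

sigma, steps = 0.4, 25                           # a kernel smaller than the cell size
for name, make in [("cell centre", centre_sampled_gaussian),
                   ("cell integrated", integrated_gaussian)]:
    kernel = make(sigma, 5)
    dist = kernel.copy()
    for _ in range(steps - 1):
        dist = fftconvolve(dist, kernel)         # one convolution per dispersal step
    half = (len(dist) - 1) // 2
    reference = integrated_gaussian(sigma * np.sqrt(steps), half)  # analytical target
    print(f"{name:>15}: total absolute error {np.abs(dist - reference).sum():.2e}")
```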

  7. KERNEL PHASE IN FIZEAU INTERFEROMETRY

    SciTech Connect

    Martinache, Frantz

    2010-11-20

    The detection of high contrast companions at small angular separation appears feasible in conventional direct images using the self-calibration properties of interferometric observable quantities. The friendly notion of closure phase, which is key to the recent observational successes of non-redundant aperture masking interferometry used with adaptive optics, appears to be one example of a wide family of observable quantities that are not contaminated by phase noise. In the high-Strehl regime, soon to be available thanks to the coming generation of extreme adaptive optics systems on ground-based telescopes, and already available from space, closure phase like information can be extracted from any direct image, even taken with a redundant aperture. These new phase-noise immune observable quantities, called kernel phases, are determined a priori from the knowledge of the geometry of the pupil only. Re-analysis of archive data acquired with the Hubble Space Telescope NICMOS instrument using this new kernel-phase algorithm demonstrates the power of the method as it clearly detects and locates with milliarcsecond precision a known companion to a star at angular separation less than the diffraction limit.

  8. Kernel machine SNP-set testing under multiple candidate kernels.

    PubMed

    Wu, Michael C; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M; Harmon, Quaker E; Lin, Xinyi; Engel, Stephanie M; Molldrem, Jeffrey J; Armistead, Paul M

    2013-04-01

    Joint testing for the cumulative effect of multiple single-nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large-scale genetic association studies. The kernel machine (KM)-testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori because this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest P-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power vs. using the best candidate kernel.

  9. Training Lp norm multiple kernel learning in the primal.

    PubMed

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved by utilizing the alternating optimization method where one alternately solves SVMs in the dual and updates kernel weights. Since the dual and primal optimization can achieve the same aim, it is valuable to explore how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal where we resort to the alternating optimization method: one cycle for solving SVMs in the primal by using the preconditioned conjugate gradient method and the other cycle for learning the kernel weights. It is interesting to note that the kernel weights in our method can obtain analytical solutions. Most importantly, the proposed method is well suited for the manifold regularization framework in the primal since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we also carry out theoretical analysis for multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity may yield a particular type of kernel weights. The experiments on some datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method.

  10. A method for three-dimensional quantification of vascular smooth muscle orientation: application in viable murine carotid arteries.

    PubMed

    Spronck, Bart; Megens, Remco T A; Reesink, Koen D; Delhaas, Tammo

    2016-04-01

    When studying in vivo arterial mechanical behaviour using constitutive models, smooth muscle cells (SMCs) should be considered, since they play an important role in regulating arterial vessel tone. Current constitutive models assume a strictly circumferential SMC orientation, without any dispersion. We hypothesised that SMC orientation would show considerable dispersion in three dimensions and that helical dispersion would be greater than transversal dispersion. To test these hypotheses, we developed a method to quantify the 3D orientation of arterial SMCs. Fluorescently labelled SMC nuclei of left and right carotid arteries of ten mice were imaged using two-photon laser scanning microscopy. Arteries were imaged at a range of luminal pressures. 3D image processing was used to identify individual nuclei and their orientations. SMCs were found to be arranged in two distinct layers. Orientations were quantified by fitting a Bingham distribution to the observed orientations. As hypothesised, orientation dispersion was much larger helically than transversally. With increasing luminal pressure, transversal dispersion decreased significantly, whereas helical dispersion remained unaltered. Additionally, SMC orientations showed a statistically significant (p < 0.05) mean right-handed helix angle in both left and right arteries and in both layers, which is a relevant finding from a developmental biology perspective. In conclusion, vascular SMC orientation (1) can be quantified in 3D; (2) shows considerable dispersion, predominantly in the helical direction; and (3) has a distinct right-handed helical component in both left and right carotid arteries. The obtained quantitative distribution data are instrumental for constitutive modelling of the artery wall and illustrate the merit of our method.

  11. Kernel density estimation applied to bond length, bond angle, and torsion angle distributions.

    PubMed

    McCabe, Patrick; Korb, Oliver; Cole, Jason

    2014-05-27

    We describe the method of kernel density estimation (KDE) and apply it to molecular structure data. KDE is a quite general nonparametric statistical method suitable even for multimodal data. The method generates smooth probability density function (PDF) representations and finds application in diverse fields such as signal processing and econometrics. KDE appears to have been under-utilized as a method in molecular geometry analysis, chemo-informatics, and molecular structure optimization. The resulting probability densities have advantages over histograms and, importantly, are also suitable for gradient-based optimization. To illustrate KDE, we describe its application to chemical bond length, bond valence angle, and torsion angle distributions and show the ability of the method to model arbitrary torsion angle distributions.

  12. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or...

  13. Simultaneous Tumor Segmentation, Image Restoration, and Blur Kernel Estimation in PET Using Multiple Regularizations

    PubMed Central

    Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan

    2016-01-01

    Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, partial volume effect (PVE) is recognized as one of the most important factors degrading imaging quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind de-convolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as anisotropic Gaussian to address the resolution difference in transverse and axial directions commonly seen in a clinic PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinic datasets with non-Hodgkin’s lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method had high performance for simultaneous image restoration, tumor segmentation and scanner blur kernel estimation. Particularly, the recovery coefficients (RC) of the restored images of the proposed method in the phantom study were close to 1, indicating an efficient recovery of the original blurred images; for segmentation the proposed method achieved average dice similarity indexes (DSIs) of 0.79 and 0.80 for two clinic datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transversal direction and 7% in the

  14. Simultaneous Tumor Segmentation, Image Restoration, and Blur Kernel Estimation in PET Using Multiple Regularizations.

    PubMed

    Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan

    2017-02-01

    Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, partial volume effect (PVE) is recognized as one of the most important factors degrading imaging quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind de-convolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as anisotropic Gaussian to address the resolution difference in transverse and axial directions commonly seen in a clinic PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinic datasets with non-Hodgkin's lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method had high performance for simultaneous image restoration, tumor segmentation and scanner blur kernel estimation. Particularly, the recovery coefficients (RC) of the restored images of the proposed method in the phantom study were close to 1, indicating an efficient recovery of the original blurred images; for segmentation the proposed method achieved average dice similarity indexes (DSIs) of 0.79 and 0.80 for two clinic datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transversal direction and 7% in the axial

  15. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images.

  16. Study of the Impact of Tissue Density Heterogeneities on 3-Dimensional Abdominal Dosimetry: Comparison Between Dose Kernel Convolution and Direct Monte Carlo Methods

    PubMed Central

    Dieudonné, Arnaud; Hobbs, Robert F.; Lebtahi, Rachida; Maurel, Fabien; Baechler, Sébastien; Wahl, Richard L.; Boubaker, Ariane; Le Guludec, Dominique; Sgouros, Georges; Gardin, Isabelle

    2014-01-01

    Dose kernel convolution (DK) methods have been proposed to speed up absorbed dose calculations in molecular radionuclide therapy. Our aim was to evaluate the impact of tissue density heterogeneities (TDH) on dosimetry when using a DK method and to propose a simple density-correction method. Methods This study has been conducted on 3 clinical cases: case 1, non-Hodgkin lymphoma treated with 131I-tositumomab; case 2, a neuroendocrine tumor treatment simulated with 177Lu-peptides; and case 3, hepatocellular carcinoma treated with 90Y-microspheres. Absorbed dose calculations were performed using a direct Monte Carlo approach accounting for TDH (3D-RD), and a DK approach (VoxelDose, or VD). For each individual voxel, the VD absorbed dose, DVD, calculated assuming uniform density, was corrected for density, giving DVDd. The average 3D-RD absorbed dose values, D3DRD, were compared with DVD and DVDd, using the relative difference ΔVD/3DRD. At the voxel level, density-binned ΔVD/3DRD and ΔVDd/3DRD were plotted against ρ and fitted with a linear regression. Results The DVD calculations showed a good agreement with D3DRD. ΔVD/3DRD was less than 3.5%, except for the tumor of case 1 (5.9%) and the renal cortex of case 2 (5.6%). At the voxel level, the ΔVD/3DRD range was 0%–14% for cases 1 and 2, and −3% to 7% for case 3. All 3 cases showed a linear relationship between voxel bin-averaged ΔVD/3DRD and density, ρ: case 1 (Δ = −0.56ρ + 0.62, R2 = 0.93), case 2 (Δ = −0.91ρ + 0.96, R2 = 0.99), and case 3 (Δ = −0.69ρ + 0.72, R2 = 0.91). The density correction improved the agreement of the DK method with the Monte Carlo approach (ΔVDd/3DRD < 1.1%), but with a lesser extent for the tumor of case 1 (3.1%). At the voxel level, the ΔVDd/3DRD range decreased for the 3 clinical cases (case 1, −1% to 4%; case 2, −0.5% to 1.5%, and −1.5% to 2%). No more linear regression existed for cases 2 and 3, contrary to case 1 (Δ = 0.41ρ − 0.38, R2 = 0.88) although

  17. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. A software package to compute these kernels is available at https://github.com/aboucaud/pypher
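
    A simplified sketch of the regularised-Wiener construction described above is given below, with synthetic Gaussian source and target PSFs and an arbitrary regularisation value standing in for the real instrument PSFs and tuning; the production tool is the pypher package linked above.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size, fwhm):
    sigma = fwhm / 2.355
    y, x = np.indices((size, size)) - (size - 1) / 2.0
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()

source = gaussian_psf(65, fwhm=4.0)       # e.g. a sharper short-wavelength PSF
target = gaussian_psf(65, fwhm=8.0)       # e.g. a broader long-wavelength PSF
mu = 1e-4                                 # tunable regularisation parameter

F_src = np.fft.fft2(np.fft.ifftshift(source))
F_tgt = np.fft.fft2(np.fft.ifftshift(target))
# Regularised Wiener deconvolution in Fourier space gives the matching kernel.
kernel = np.fft.fftshift(np.real(
    np.fft.ifft2(F_tgt * np.conj(F_src) / (np.abs(F_src)**2 + mu))))

# Sanity check: source PSF convolved with the kernel should resemble the target PSF.
matched = fftconvolve(source, kernel, mode="same")
print("max |matched - target|:", np.abs(matched - target).max())
```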

  18. Estimation of smoothing error in SBUV profile and total ozone retrieval

    NASA Astrophysics Data System (ADS)

    Kramarova, N. A.; Bhartia, P. K.; Frith, S. M.; Fisher, B. L.; McPeters, R. D.; Taylor, S.; Labow, G. J.

    2011-12-01

    Data from the Nimbus-4, Nimbus-7 Solar Backscatter Ultra Violet (SBUV) and seven of the NOAA series of SBUV/2 instruments spanning 41 years are being reprocessed using the V8.6 algorithm. The data are scheduled to be released by the end of August 2011. An important focus of the new algorithm is to estimate various sources of errors in the SBUV profiles and total ozone retrievals. We discuss here the smoothing errors that describe the components of the profile variability that the SBUV observing system cannot measure. The SBUV(/2) instruments have a vertical resolution of 5 km in the middle stratosphere, decreasing to 8 to 10 km below the ozone peak and above 0.5 hPa. To estimate the smoothing effect of the SBUV algorithm, the actual statistics of the fine vertical structure of ozone profiles must be known. The covariance matrix of the ensemble of measured ozone profiles with high vertical resolution would be a formal representation of the actual ozone variability. We merged the MLS (version 3) and sonde ozone profiles to calculate the covariance matrix, which, in the general case of single-profile retrieval, might be a function of latitude and month. Using the averaging kernels of the SBUV(/2) measurements and the calculated total covariance matrix one can estimate the smoothing errors for the SBUV ozone profiles. A method to estimate the smoothing effect of the SBUV algorithm is described, and the covariance matrices and averaging kernels are provided along with the SBUV(/2) ozone profiles. The magnitude of the smoothing error varies with altitude, latitude, season and solar zenith angle. The analysis of the smoothing errors, based on the SBUV(/2) monthly zonal mean time series, shows that the largest smoothing errors were detected in the troposphere and might be as large as 15-20% and rapidly decrease with altitude. In the stratosphere above 40 hPa the smoothing errors are less than 5% and between 10 and 1 hPa the smoothing errors are on the order of 1%. We

  19. Estimation of Smoothing Error in SBUV Profile and Total Ozone Retrieval

    NASA Technical Reports Server (NTRS)

    Kramarova, N. A.; Bhartia, P. K.; Frith, S. M.; Fisher, B. L.; McPeters, R. D.; Taylor, S.; Labow, G. J.

    2011-01-01

    Data from the Nimbus-4, Nimbus-7 Solar Backscatter Ultra Violet (SBUV) and seven of the NOAA series of SBUV/2 instruments spanning 41 years are being reprocessed using the V8.6 algorithm. The data are scheduled to be released by the end of August 2011. An important focus of the new algorithm is to estimate various sources of errors in the SBUV profiles and total ozone retrievals. We discuss here the smoothing errors that describe the components of the profile variability that the SBUV observing system cannot measure. The SBUV(/2) instruments have a vertical resolution of 5 km in the middle stratosphere, decreasing to 8 to 10 km below the ozone peak and above 0.5 hPa. To estimate the smoothing effect of the SBUV algorithm, the actual statistics of the fine vertical structure of ozone profiles must be known. The covariance matrix of the ensemble of measured ozone profiles with high vertical resolution would be a formal representation of the actual ozone variability. We merged the MLS (version 3) and sonde ozone profiles to calculate the covariance matrix, which, in the general case of single-profile retrieval, might be a function of latitude and month. Using the averaging kernels of the SBUV(/2) measurements and the calculated total covariance matrix one can estimate the smoothing errors for the SBUV ozone profiles. A method to estimate the smoothing effect of the SBUV algorithm is described, and the covariance matrices and averaging kernels are provided along with the SBUV(/2) ozone profiles. The magnitude of the smoothing error varies with altitude, latitude, season and solar zenith angle. The analysis of the smoothing errors, based on the SBUV(/2) monthly zonal mean time series, shows that the largest smoothing errors were detected in the troposphere and might be as large as 15-20% and rapidly decrease with altitude. In the stratosphere above 40 hPa the smoothing errors are less than 5% and between 10 and 1 hPa the smoothing errors are on the order of 1%. We
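
    A minimal sketch of the standard smoothing-error estimate the abstract refers to is given below: given an averaging-kernel matrix A and an ensemble covariance S_a of fine-scale profiles, the smoothing-error covariance is S_s = (A - I) S_a (A - I)^T; the averaging kernels and covariance used here are synthetic placeholders, not SBUV(/2) values.

```python
import numpy as np

n_layers = 21
layers = np.arange(n_layers)

# Broad, smooth averaging kernels (rows summing to 1) standing in for SBUV(/2) kernels.
A = np.exp(-0.5 * ((layers[:, None] - layers[None, :]) / 2.5) ** 2)
A /= A.sum(axis=1, keepdims=True)

# Synthetic ensemble covariance: 20% relative variability, 3-layer correlation length.
S_a = 0.2**2 * np.exp(-np.abs(layers[:, None] - layers[None, :]) / 3.0)

I = np.eye(n_layers)
S_s = (A - I) @ S_a @ (A - I).T                   # smoothing-error covariance
print("1-sigma smoothing error (%):", np.round(100 * np.sqrt(np.diag(S_s)), 1))
```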

  20. Evaluation of smoothing in an iterative lp-norm minimization algorithm for surface-based source localization of MEG

    NASA Astrophysics Data System (ADS)

    Han, Jooman; Sic Kim, June; Chung, Chun Kee; Park, Kwang Suk

    2007-08-01

    The imaging of neural sources of magnetoencephalographic data based on distributed source models requires additional constraints on the source distribution in order to overcome ill-posedness and obtain a plausible solution. The minimum lp norm (0 < p <= 1) constraint is known to be appropriate for reconstructing focal sources distributed in several regions. A well-known recursive method for solving the lp-norm minimization problem, for example, is the focal underdetermined system solver (FOCUSS). However, this iterative algorithm tends to give spurious sources when the noise level is high. In this study, we present an algorithm to incorporate a smoothing technique into the FOCUSS algorithm and test different smoothing kernels in a surface-based cortical source space. Simulations with cortical source patches assumed in auditory areas show that the incorporation of the smoothing procedure improves the performance of the FOCUSS algorithm, and that using the geodesic distance for constructing a smoothing kernel is a better choice than using the Euclidean one, particularly when employing a cortical source space. We also apply these methods to a real data set obtained from an auditory experiment and illustrate their applicability to realistic data by presenting the reconstructed source images localized in the superior temporal gyrus.
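
    As a rough, hedged sketch of combining lp-norm (p = 1) FOCUSS iterations with a smoothing step (the specific smoothing kernel and cortical geometry used in the study are not reproduced), the code below runs a regularized FOCUSS loop on a random toy inverse problem and smooths the weights with a simple 1-D moving average before each update.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

rng = np.random.default_rng(0)
n_sensors, n_sources = 40, 200
A = rng.normal(size=(n_sensors, n_sources))       # toy gain (lead-field) matrix

x_true = np.zeros(n_sources)
x_true[60:65] = 1.0                               # a focal source patch
y = A @ x_true + 0.05 * rng.normal(size=n_sensors)

lam, p = 1e-2, 1.0
x = np.ones(n_sources)
for _ in range(30):
    w = np.abs(x) ** (1 - p / 2)                  # FOCUSS weights for the lp norm
    w = uniform_filter1d(w, size=3)               # smoothing step (assumed simple form)
    Aw = A * w                                    # equals A @ diag(w)
    # Regularised FOCUSS update: x = W q with q = W A' (A W^2 A' + lam I)^-1 y.
    q = Aw.T @ np.linalg.solve(Aw @ Aw.T + lam * np.eye(n_sensors), y)
    x = w * q

print("recovered support:", np.flatnonzero(np.abs(x) > 0.1 * np.abs(x).max()))
```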

  1. Numerical solution of the nonlinear Schrödinger equation using smoothed-particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Mocz, Philip; Succi, Sauro

    2015-05-01

    We formulate a smoothed-particle hydrodynamics numerical method, traditionally used for the Euler equations for fluid dynamics in the context of astrophysical simulations, to solve the nonlinear Schrödinger equation in the Madelung formulation. The probability density of the wave function is discretized into moving particles, whose properties are smoothed by a kernel function. The traditional fluid pressure is replaced by a quantum pressure tensor, for which a robust discretization is found. We demonstrate our numerical method on a variety of numerical test problems involving the simple harmonic oscillator, soliton-soliton collision, Bose-Einstein condensates, collapsing singularities, and dark matter halos governed by the Gross-Pitaevskii-Poisson equation. Our method is conservative, applicable to unbounded domains, and is automatically adaptive in its resolution, making it well suited to study problems with collapsing solutions.

  2. Numerical solution of the nonlinear Schrödinger equation using smoothed-particle hydrodynamics.

    PubMed

    Mocz, Philip; Succi, Sauro

    2015-05-01

    We formulate a smoothed-particle hydrodynamics numerical method, traditionally used for the Euler equations for fluid dynamics in the context of astrophysical simulations, to solve the nonlinear Schrödinger equation in the Madelung formulation. The probability density of the wave function is discretized into moving particles, whose properties are smoothed by a kernel function. The traditional fluid pressure is replaced by a quantum pressure tensor, for which a robust discretization is found. We demonstrate our numerical method on a variety of numerical test problems involving the simple harmonic oscillator, soliton-soliton collision, Bose-Einstein condensates, collapsing singularities, and dark matter halos governed by the Gross-Pitaevskii-Poisson equation. Our method is conservative, applicable to unbounded domains, and is automatically adaptive in its resolution, making it well suited to study problems with collapsing solutions.

  3. Estimating the Bias of Local Polynomial Approximations Using the Peano Kernel

    SciTech Connect

    Blair, J., and Machorro, E.

    2012-03-22

    These presentation visuals define local polynomial approximations, give formulas for bias and random components of the error, and express bias error in terms of the Peano kernel. They further derive constants that give figures of merit, and show the figures of merit for 3 common weighting functions. The Peano kernel theorem yields estimates for the bias error for local-polynomial-approximation smoothing that are superior in several ways to the error estimates in the current literature.
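
    A minimal numerical illustration of the bias component: apply a local polynomial (degree-2) smoother to a noise-free signal, so that whatever error remains is pure bias, and compare three common weighting functions. The window size, degree and test signal are arbitrary choices, not those of the report.

        import numpy as np

        # Sketch: bias of a local polynomial (degree-2) smoother on a noise-free signal.
        # The bias is what remains after smoothing data that contain no noise at all.

        def local_poly_weights(half_width, degree, weight_fn):
            """Weights applied to a symmetric window to estimate the centre value."""
            t = np.arange(-half_width, half_width + 1, dtype=float)
            X = np.vander(t, degree + 1, increasing=True)    # 1, t, t^2, ...
            W = np.diag(weight_fn(t / half_width))
            # Weighted least squares; the centre estimate is row 0 of (X'WX)^-1 X'W.
            beta_op = np.linalg.solve(X.T @ W @ X, X.T @ W)
            return beta_op[0]                                 # smoothing weights

        x = np.linspace(0, 1, 401)
        f = np.sin(2 * np.pi * x)                             # smooth test signal

        for name, wfn in [("uniform", lambda u: np.ones_like(u)),
                          ("triangular", lambda u: 1 - np.abs(u)),
                          ("Epanechnikov", lambda u: 1 - u**2)]:
            w = local_poly_weights(half_width=20, degree=2, weight_fn=wfn)
            f_smooth = np.convolve(f, w[::-1], mode="same")
            interior = slice(20, -20)                         # ignore edge effects
            bias = np.max(np.abs(f_smooth[interior] - f[interior]))
            print(f"{name:12s} max |bias| = {bias:.2e}")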

  4. [Selection of Characteristic Wavelengths Using SPA and Qualitative Discrimination of Mildew Degree of Corn Kernels Based on SVM].

    PubMed

    Yuan, Ying; Wang, Wei; Chu, Xuan; Xi, Ming-jie

    2016-01-01

    The feasibility of Fourier transform near-infrared (FT-NIR) spectroscopy over the spectral range 833 to 2 500 nm to detect corn kernels with different levels of mildew was verified in this paper. Firstly, to reduce the influence of noise, moving-average smoothing was used for spectral data preprocessing after four common pretreatment methods were compared. Then, to improve the prediction performance of the model, SPXY (sample set partitioning based on joint x-y distance) was selected and used to partition the sample set. Furthermore, to reduce the dimensionality of the original spectral data, the successive projection algorithm (SPA) was adopted and ultimately seven characteristic wavelengths were extracted: 833, 927, 1 208, 1 337, 1 454, 1 861 and 2 280 nm. The experimental results showed that when the spectral data at the seven characteristic wavelengths were taken as the input of the SVM, with the radial basis function (RBF) used as the kernel function and kernel parameters C = 7 760 469 and γ = 0.017 003, the classification accuracies of the established SVM model were 97.78% and 93.33% for the training and testing sets, respectively. In addition, an independent validation set was selected by the same standard and used to verify the model. A classification accuracy of 91.11% was achieved for the independent validation set. The results indicate that it is feasible to identify and classify different degrees of mildew in corn kernels using SPA and SVM, and the characteristic wavelengths selected by SPA also lay a foundation for the online NIR detection of moldy corn kernels.
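
    The sketch below reproduces only the final classification stage (an RBF-kernel SVM on reflectance values at a handful of selected wavelengths), using synthetic spectra and placeholder C and gamma values; the SPA wavelength selection and SPXY partitioning steps are not implemented.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)

        # Synthetic stand-ins for reflectance at 7 SPA-selected wavelengths.
        n_per_class, n_wavelengths = 60, 7
        X_sound  = rng.normal(0.40, 0.05, (n_per_class, n_wavelengths))
        X_mildew = rng.normal(0.55, 0.05, (n_per_class, n_wavelengths))
        X = np.vstack([X_sound, X_mildew])
        y = np.array([0] * n_per_class + [1] * n_per_class)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0, stratify=y)

        # C and gamma here are placeholders; in practice they are tuned, e.g. by grid search.
        clf = SVC(kernel="rbf", C=100.0, gamma=0.02).fit(X_tr, y_tr)
        print(f"training accuracy: {clf.score(X_tr, y_tr):.3f}")
        print(f"test accuracy:     {clf.score(X_te, y_te):.3f}")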

  5. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    PubMed

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
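
    A minimal sketch of the kernel-combination and kernel-ELM steps is given below: two base kernels are mixed with fixed weights, and the KELM output weights are obtained in closed form as alpha = (K + I/C)^(-1) y. The e-nose data, kernel weights and parameters are synthetic stand-ins; the QPSO search that the paper uses to tune them is omitted.

        import numpy as np

        rng = np.random.default_rng(2)

        def gaussian_kernel(A, B, gamma=0.5):
            d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
            return np.exp(-gamma * d2)

        def polynomial_kernel(A, B, degree=2, c=1.0):
            return (A @ B.T + c)**degree

        # Toy two-class "e-nose" data: 8 sensor channels per sample (synthetic).
        X_tr = rng.standard_normal((80, 8)); y_tr = np.sign(X_tr[:, 0] + 0.3 * X_tr[:, 1])
        X_te = rng.standard_normal((40, 8)); y_te = np.sign(X_te[:, 0] + 0.3 * X_te[:, 1])

        # Weighted composite kernel: K = mu1*K_gauss + mu2*K_poly (weights assumed fixed).
        mu = np.array([0.7, 0.3])
        K_tr = mu[0] * gaussian_kernel(X_tr, X_tr) + mu[1] * polynomial_kernel(X_tr, X_tr)
        K_te = mu[0] * gaussian_kernel(X_te, X_tr) + mu[1] * polynomial_kernel(X_te, X_tr)

        # Kernel ELM closed form: alpha = (K + I/C)^-1 y, prediction f(x) = k(x) @ alpha.
        C = 10.0
        alpha = np.linalg.solve(K_tr + np.eye(len(X_tr)) / C, y_tr)
        y_pred = np.sign(K_te @ alpha)
        print("test accuracy:", np.mean(y_pred == y_te))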

  6. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach

    PubMed Central

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-01-01

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202

  7. Spectrum-based kernel length estimation for Gaussian process classification.

    PubMed

    Wang, Liang; Li, Chuan

    2014-06-01

    Recent studies have shown that Gaussian process (GP) classification, a discriminative supervised learning approach, has achieved competitive performance in real applications compared with most state-of-the-art supervised learning methods. However, the problem of automatic model selection in GP classification, involving the kernel function form and the corresponding parameter values (which are unknown in advance), remains a challenge. To make GP classification a more practical tool, this paper presents a novel spectrum analysis-based approach for model selection by refining the GP kernel function to match the given input data. Specifically, we target the problem of GP kernel length scale estimation. Spectra are first calculated analytically from the kernel function itself using the autocorrelation theorem, as well as being estimated numerically from the training data themselves. Then, the kernel length scale is automatically estimated by equating the two spectrum values, i.e., the kernel function spectrum equals the estimated training data spectrum. Compared with the classical Bayesian method for kernel length scale estimation via maximizing the marginal likelihood (which is time consuming and could suffer from multiple local optima), extensive experimental results on various data sets show that our proposed method is both efficient and accurate.

  8. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including pieces and particles, regardless of whether edible or inedible, contained in any lot of almonds...

  9. Kernel descriptors for chest x-ray analysis

    NASA Astrophysics Data System (ADS)

    Orbán, Gergely Gy.; Horváth, Gábor

    2017-03-01

    In this study, we address the problem of lesion classification in radiographic scans. We adapt image kernel functions to be applicable for high-resolution, grayscale images to improve the classification accuracy of a support vector machine. We take existing kernel functions inspired by the histogram of oriented gradients, and derive an approximation that can be evaluated in linear time of the image size instead of the original quadratic complexity, enabling high-resolution input. Moreover, we propose a new variant inspired by the matched filter, to better utilize intensity space. The new kernels are improved to be scale-invariant and combined with a Gaussian kernel built from handcrafted image features. We introduce a simple multiple kernel learning framework that is robust when one of the kernels, in the current case the image feature kernel, dominates the others. The combined kernel is input to a support vector classifier. We tested our method on lesion classification both in chest radiographs and digital tomosynthesis scans. The radiographs originated from a database including 364 patients with lung nodules and 150 healthy cases. The digital tomosynthesis scans were obtained by simulation using 91 CT scans from the LIDC-IDRI database as input. The new kernels showed good separation capability: ROC AUC was in [0.827, 0.853] for the radiograph database and 0.763 for the tomosynthesis scans. Adding the new kernels to the image-feature-based classifier significantly improved accuracy: AUC increased from 0.958 to 0.967 and from 0.788 to 0.801 for the two applications.

  10. Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.

    PubMed

    Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan

    2016-11-01

    In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat and maize data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% superiority over the corresponding single-environment models for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to the more flexible kernels, which account for small, more complex marker main effects and marker-specific interaction effects.
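
    The sketch below contrasts a linear (GBLUP-like) kernel with a Gaussian kernel inside a plain kernel ridge regression on synthetic marker data; the median-distance bandwidth rule and the single train/test split are assumptions, not the Bayesian machinery used in the paper.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic markers and phenotypes with a mildly nonlinear signal.
        n, p = 200, 500
        M = rng.integers(0, 3, size=(n, p)).astype(float)       # 0/1/2 marker codes
        M -= M.mean(axis=0)                                      # centre markers
        y = M @ rng.normal(0, 0.05, p) + np.sin(M[:, 0]) + 0.3 * rng.standard_normal(n)

        sq = (M**2).sum(axis=1)
        D2 = np.maximum(sq[:, None] + sq[None, :] - 2 * M @ M.T, 0.0)
        K_lin = M @ M.T / p                                      # linear (GBLUP-like) kernel
        K_gau = np.exp(-D2 / np.median(D2))                      # Gaussian kernel (assumed bandwidth)

        def predictive_correlation(K, y, lam=1.0, n_test=50):
            """Kernel ridge fit on the first n - n_test samples, tested on the rest."""
            tr, te = slice(None, -n_test), slice(-n_test, None)
            alpha = np.linalg.solve(K[tr, tr] + lam * np.eye(len(y) - n_test), y[tr])
            return np.corrcoef(K[te, tr] @ alpha, y[te])[0, 1]

        print("linear kernel   r =", round(predictive_correlation(K_lin, y), 3))
        print("Gaussian kernel r =", round(predictive_correlation(K_gau, y), 3))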

  11. MULTIVARIATE KERNEL PARTITION PROCESS MIXTURES

    PubMed Central

    Dunson, David B.

    2013-01-01

    Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate random effects distributions, this article proposes a kernel partition process (KPP) in which the cluster allocation varies for different parameters. The KPP is shown to be the driving measure for a multivariate ordered Chinese restaurant process that induces a highly-flexible dependence structure in local clustering. This structure allows the relative locations of the random effects to inform the clustering process, with spatially-proximal random effects likely to be assigned the same cluster index. An exact block Gibbs sampler is developed for posterior computation, avoiding truncation of the infinite measure. The methods are applied to hormone curve data, and a dependent KPP is proposed for classification from functional predictors. PMID:24478563

  12. Prediction of kernel density of corn using single-kernel near infrared spectroscopy

    USDA-ARS?s Scientific Manuscript database

    Corn hardness is an important property for dry- and wet-millers, food processors and corn breeders developing hybrids for specific markets. Of the several methods used to measure hardness, kernel density measurements are one of the more repeatable methods to quantify hardness. Near infrared spec...

  13. A TWO-DIMENSIONAL METHOD OF MANUFACTURED SOLUTIONS BENCHMARK SUITE BASED ON VARIATIONS OF LARSEN'S BENCHMARK WITH ESCALATING ORDER OF SMOOTHNESS OF THE EXACT SOLUTION

    SciTech Connect

    Sebastian Schunert; Yousry Y. Azmy

    2011-05-01

    The quantification of the discretization error associated with the spatial discretization of the Discrete Ordinate (DO) equations in multidimensional Cartesian geometries is the central problem in error estimation of spatial discretization schemes for transport theory as well as computer code verification. Traditionally fine mesh solutions are employed as reference, because analytical solutions only exist in the absence of scattering. This approach, however, is inadequate when the discretization error associated with the reference solution is not small compared to the discretization error associated with the mesh under scrutiny. Typically this situation occurs if the mesh of interest is only a couple of refinement levels away from the reference solution or if the order of accuracy of the numerical method (and hence the reference as well) is lower than expected. In this work we present a Method of Manufactured Solutions (MMS) benchmark suite with variable order of smoothness of the underlying exact solution for two-dimensional Cartesian geometries which provides analytical solutions averaged over arbitrary orthogonal meshes for scattering and non-scattering media. It should be emphasized that the developed MMS benchmark suite first eliminates the aforementioned limitation of fine mesh reference solutions since it secures knowledge of the underlying true solution and second that it allows for an arbitrary order of smoothness of the underlying exact solution. The latter is of importance because even for smooth parameters and boundary conditions the DO equations can feature exact solutions with limited smoothness. Moreover, the degree of smoothness is crucial for both the order of accuracy and the magnitude of the discretization error for any spatial discretization scheme.

  14. Arbitrary-resolution global sensitivity kernels

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Fournier, A.; Dahlen, F.

    2007-12-01

    Extracting observables out of any part of a seismogram (e.g. including diffracted phases such as Pdiff) necessitates the knowledge of 3-D time-space wavefields for the Green functions that form the backbone of Fréchet sensitivity kernels. While known for a while, this idea is still computationally intractable in 3-D, facing major simulation and storage issues when high-frequency wavefields are considered at the global scale. We recently developed a new "collapsed-dimension" spectral-element method that solves the 3-D system of elastodynamic equations in a 2-D space, based on exploring symmetry considerations of the seismic-wave radiation patterns. We will present the technical background on the computation of waveform kernels, various examples of time- and frequency-dependent sensitivity kernels and subsequently extracted time-window kernels (e.g. banana-doughnuts). Given the computationally light-weighted 2-D nature, we will explore some crucial parameters such as excitation type, source time functions, frequency, azimuth, discontinuity locations, and phase type, i.e. an a priori view into how, when, and where seismograms carry 3-D Earth signature. A once-and-for-all database of 2-D waveforms for various source depths shall then serve as a complete set of global time-space sensitivity for a given spherically symmetric background model, thereby allowing for tomographic inversions with arbitrary frequencies, observables, and phases.

  15. The Adaptive Kernel Neural Network

    DTIC Science & Technology

    1989-10-01

    A neural network architecture for clustering and classification is described. The Adaptive Kernel Neural Network (AKNN) is a density estimation...classification layer. The AKNN retains the inherent parallelism common in neural network models. Its relationship to the kernel estimator allows the network to

  16. StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.

    PubMed

    Li, Chenhui; Baciu, George; Yu, Han

    2017-02-13

    Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heatmap. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
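
    As a rough analogue of density-based aggregation of overlapping points, the sketch below computes a variable-bandwidth (Abramson-style) kernel density estimate on a 2-D grid. It is a generic adaptive KDE, not the paper's super kernel density estimation, and all data and bandwidth choices are synthetic.

        import numpy as np

        rng = np.random.default_rng(4)

        # Two synthetic point clusters standing in for a frame of streaming points.
        pts = np.vstack([rng.normal([0, 0], 0.3, (300, 2)),
                         rng.normal([2, 1], 0.6, (300, 2))])

        def gaussian_kde_2d(pts, grid, bandwidths):
            d2 = ((grid[:, None, :] - pts[None, :, :])**2).sum(-1)        # (G, N)
            h2 = bandwidths[None, :]**2
            return (np.exp(-0.5 * d2 / h2) / (2 * np.pi * h2)).sum(axis=1) / len(pts)

        gx, gy = np.meshgrid(np.linspace(-1.5, 4, 60), np.linspace(-1.5, 3, 60))
        grid = np.column_stack([gx.ravel(), gy.ravel()])

        # Pilot estimate with a fixed bandwidth, then shrink the bandwidth where the
        # pilot density is high (Abramson-style adaptive rule, assumed here).
        h0 = 0.4
        pilot = gaussian_kde_2d(pts, pts, np.full(len(pts), h0))
        h_adapt = h0 * np.sqrt(np.exp(np.mean(np.log(pilot))) / pilot)

        density = gaussian_kde_2d(pts, grid, h_adapt).reshape(gx.shape)
        print("density grid:", density.shape, "peak:", round(density.max(), 3))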

  17. Image filtering as an alternative to the application of a different reconstruction kernel in CT imaging: Feasibility study in lung cancer screening

    SciTech Connect

    Ohkubo, Masaki; Wada, Shinichi; Kayugawa, Akihiro; Matsumoto, Toru; Murao, Kohei

    2011-07-15

    Purpose: While the acquisition of projection data in a computed tomography (CT) scanner is generally carried out once, the projection data is often removed from the system, making further reconstruction with a different reconstruction filter impossible. The reconstruction kernel is one of the most important parameters. To have access to all the reconstructions, either prior reconstructions with multiple kernels must be performed or the projection data must be stored. Each of these requirements would increase the burden on data archiving. This study aimed to design an effective method to achieve similar image quality using an image filtering technique in the image space, instead of a reconstruction filter in the projection space for CT imaging. The authors evaluated the clinical feasibility of the proposed method in lung cancer screening. Methods: The proposed technique is essentially the same as common image filtering, which performs processing in the spatial-frequency domain with a filter function. However, the filter function was determined based on the quantitative analysis of the point spread functions (PSFs) measured in the system. The modulation transfer functions (MTFs) were derived from the PSFs, and the ratio of the MTFs was used as the filter function. Therefore, using an image reconstructed with a kernel, an image reconstructed with a different kernel was obtained by filtering, which used the ratio of the MTFs obtained for the two kernels. The performance of the method was evaluated by using routine clinical images obtained from CT screening for lung cancer in five subjects. Results: Filtered images for all combinations of three types of reconstruction kernels ("smooth," "standard," and "sharp" kernels) showed good agreement with original reconstructed images regarded as the gold standard. On the filtered images, abnormal shadows suspected as being lung cancers were identical to those on the reconstructed images. The standard deviations (SDs) for
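
    The core frequency-domain step can be sketched as follows: multiply the image spectrum by the ratio MTF_target / MTF_source and transform back. The Gaussian MTF shapes and cut-off frequencies below are synthetic stand-ins for the PSF-derived curves measured on the scanner.

        import numpy as np

        rng = np.random.default_rng(5)
        img = rng.normal(0, 1, (256, 256))          # placeholder "smooth-kernel" image

        fy = np.fft.fftfreq(img.shape[0])
        fx = np.fft.fftfreq(img.shape[1])
        f = np.sqrt(fy[:, None]**2 + fx[None, :]**2)     # radial spatial frequency

        def gaussian_mtf(f, f50):
            """Toy MTF model: Gaussian falling to 0.5 at frequency f50 (assumption)."""
            return np.exp(-np.log(2) * (f / f50)**2)

        mtf_smooth = gaussian_mtf(f, f50=0.12)      # source kernel ("smooth")
        mtf_sharp  = gaussian_mtf(f, f50=0.25)      # target kernel ("sharp")

        # Frequency-domain conversion: multiply the spectrum by MTF_target / MTF_source.
        ratio = mtf_sharp / np.maximum(mtf_smooth, 1e-6)   # avoid division blow-up
        img_converted = np.real(np.fft.ifft2(np.fft.fft2(img) * ratio))
        print("converted image:", img_converted.shape, img_converted.dtype)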

  18. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    PubMed

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  19. Determining the Parameters of the Hereditary Kernels of Nonlinear Viscoelastic Isotropic Materials in Torsion

    NASA Astrophysics Data System (ADS)

    Golub, V. P.; Ragulina, V. S.; Fernati, P. V.

    2015-03-01

    A method for determining the parameters of the hereditary kernels for nonlinear viscoelastic materials is tested in conditions of pure torsion. A Rabotnov-type model is chosen. The parameters of the hereditary kernels are determined by fitting discrete values of the kernels found using a similarity condition. The discrete values of the kernels in the zone of singularity occurring in short-term tests are found using weight functions. The Abel kernel, a combination of power and exponential functions, and a fractional-exponential function are considered.

  20. Image filtering as an alternative to the application of a different reconstruction kernel in CT imaging: feasibility study in lung cancer screening.

    PubMed

    Ohkubo, Masaki; Wada, Shinichi; Kayugawa, Akihiro; Matsumoto, Toru; Murao, Kohei

    2011-07-01

    While the acquisition of projection data in a computed tomography (CT) scanner is generally carried out once, the projection data is often removed from the system, making further reconstruction with a different reconstruction filter impossible. The reconstruction kernel is one of the most important parameters. To have access to all the reconstructions, either prior reconstructions with multiple kernels must be performed or the projection data must be stored. Each of these requirements would increase the burden on data archiving. This study aimed to design an effective method to achieve similar image quality using an image filtering technique in the image space, instead of a reconstruction filter in the projection space for CT imaging. The authors evaluated the clinical feasibility of the proposed method in lung cancer screening. The proposed technique is essentially the same as common image filtering, which performs processing in the spatial-frequency domain with a filter function. However, the filter function was determined based on the quantitative analysis of the point spread functions (PSFs) measured in the system. The modulation transfer functions (MTFs) were derived from the PSFs, and the ratio of the MTFs was used as the filter function. Therefore, using an image reconstructed with a kernel, an image reconstructed with a different kernel was obtained by filtering, which used the ratio of the MTFs obtained for the two kernels. The performance of the method was evaluated by using routine clinical images obtained from CT screening for lung cancer in five subjects. Filtered images for all combinations of three types of reconstruction kernels ("smooth," "standard," and "sharp" kernels) showed good agreement with original reconstructed images regarded as the gold standard. On the filtered images, abnormal shadows suspected as being lung cancers were identical to those on the reconstructed images. The standard deviations (SDs) for the difference between filtered

  1. Robotic intelligence kernel

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.

  2. Flexible Kernel Memory

    PubMed Central

    Nowicki, Dimitri; Siegelmann, Hava

    2010-01-01

    This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, is in feature space, analogous to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation, is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on series of morphed faces. PMID:20552013

  3. Gasoline2: a modern smoothed particle hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Wadsley, James W.; Keller, Benjamin W.; Quinn, Thomas R.

    2017-10-01

    The methods in the Gasoline2 smoothed particle hydrodynamics (SPH) code are described and tested. Gasoline2 is the most recent version of the Gasoline code for parallel hydrodynamics and gravity with identical hydrodynamics to the Changa code. As with other Modern SPH codes, we prevent sharp jumps in time-steps, use upgraded kernels and larger neighbour numbers and employ local viscosity limiters. Unique features in Gasoline2 include its Geometric Density Average Force expression, explicit Turbulent Diffusion terms and Gradient-Based shock detection to limit artificial viscosity. This last feature allows Gasoline2 to completely avoid artificial viscosity in non-shocking compressive flows. We present a suite of tests demonstrating the value of these features with the same code configuration and parameter choices used for production simulations.
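
    For readers unfamiliar with SPH smoothing, the sketch below evaluates the kernel-smoothed density sum rho_i = sum_j m_j W(|r_i - r_j|, h) with the standard M4 cubic-spline kernel on random particles. The fixed smoothing length and brute-force neighbour search are simplifications for illustration; they are not Gasoline2's kernel choice or tree-based machinery.

        import numpy as np

        rng = np.random.default_rng(6)

        def cubic_spline_w(r, h):
            """M4 cubic-spline kernel in 3-D (support 2h), normalised by 1/(pi h^3)."""
            q = r / h
            w = np.zeros_like(q)
            m1 = q < 1.0
            m2 = (q >= 1.0) & (q < 2.0)
            w[m1] = 1.0 - 1.5 * q[m1]**2 + 0.75 * q[m1]**3
            w[m2] = 0.25 * (2.0 - q[m2])**3
            return w / (np.pi * h**3)

        # Random equal-mass particles in a unit box (illustrative setup).
        n = 2000
        pos = rng.random((n, 3))
        mass = np.full(n, 1.0 / n)
        h = 0.1                                    # fixed smoothing length (assumption)

        # Density at each particle: rho_i = sum_j m_j W(|r_i - r_j|, h).
        # Brute-force O(N^2) pair distances; production codes use tree neighbour lists.
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        rho = (mass[None, :] * cubic_spline_w(d, h)).sum(axis=1)

        # Mean sits near the unit box density, biased slightly by the kernel
        # self-contribution and by edge effects near the box boundaries.
        print("mean SPH density:", round(rho.mean(), 3))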

  4. Motion Blur Kernel Estimation via Deep Learning.

    PubMed

    Xu, Xiangyu; Pan, Jinshan; Zhang, Yu-Jin; Yang, Ming-Hsuan

    2017-09-18

    The success of the state-of-the-art deblurring methods mainly depends on restoration of sharp edges in a coarse-to-fine kernel estimation process. In this paper, we propose to learn a deep convolutional neural network for extracting sharp edges from blurred images. Motivated by the success of the existing filtering based deblurring methods, the proposed model consists of two stages: suppressing extraneous details and enhancing sharp edges. We show that the two-stage model simplifies the learning process and effectively restores sharp edges. Facilitated by the learned sharp edges, the proposed deblurring algorithm does not require any coarse-to-fine strategy or edge selection, thereby significantly simplifying kernel estimation and reducing computation load. Extensive experimental results on challenging blurry images demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of visual quality and run-time.

  5. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    SciTech Connect

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle

    2016-08-10

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N²) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. Lastly, the results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  6. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    SciTech Connect

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle

    2016-08-10

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N²) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. Lastly, the results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  7. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    PubMed

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections and deep learning extensions as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKM are obtained by coupling the RKMs. The method is illustrated for deep RKM, consisting of three levels with a least squares support vector machine regression level and two kernel PCA levels. In its primal form also deep feedforward neural networks can be trained within this framework.

  8. Stochastic subset selection for learning with kernel machines.

    PubMed

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales super linearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.

  9. An Ensemble Approach to Building Mercer Kernels with Prior Information

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite, dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.

  10. Diffusion tensor smoothing through weighted Karcher means

    PubMed Central

    Carmichael, Owen; Chen, Jun; Paul, Debashis; Peng, Jie

    2014-01-01

    Diffusion tensor magnetic resonance imaging (MRI) quantifies the spatial distribution of water diffusion at each voxel on a regular grid of locations in a biological specimen by diffusion tensors (3 × 3 positive definite matrices). Removal of noise from DTI is an important problem due to the high scientific relevance of DTI and the relatively low signal-to-noise ratio it provides. Leading approaches to this problem amount to estimation of weighted Karcher means of diffusion tensors within spatial neighborhoods, under various metrics imposed on the space of tensors. However, it is unclear how the behavior of these estimators varies with the magnitude of DTI sensor noise (the noise resulting from the thermal effects of MRI scanning) as well as the geometric structure of the underlying diffusion tensor neighborhoods. In this paper, we combine theoretical analysis, empirical analysis of simulated DTI data, and empirical analysis of real DTI scans to compare the noise removal performance of three kernel-based DTI smoothers that are based on Euclidean, log-Euclidean, and affine-invariant metrics. The results suggest, contrary to conventional wisdom, that imposing a simplistic Euclidean metric may in fact provide comparable or superior noise removal, especially in relatively unstructured regions and/or in the presence of moderate to high levels of sensor noise. In contrast, log-Euclidean and affine-invariant metrics may lead to better noise removal in highly structured anatomical regions, especially when the sensor noise is of low magnitude. These findings emphasize the importance of considering the interplay of sensor noise magnitude and tensor field geometric structure when assessing diffusion tensor smoothing options. They also point to the necessity for continued development of smoothing methods that perform well across a large range of scenarios. PMID:25419264
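
    A minimal sketch of the weighted Karcher (Frechet) mean under the log-Euclidean metric, which has the closed form exp(sum_i w_i log D_i) for SPD matrices, is shown below next to the plain Euclidean (arithmetic) weighted mean. The tensors and kernel weights are synthetic; a real smoother would draw them from a spatial neighbourhood.

        import numpy as np

        rng = np.random.default_rng(7)

        def spd_log(D):
            vals, vecs = np.linalg.eigh(D)
            return vecs @ np.diag(np.log(vals)) @ vecs.T

        def spd_exp(S):
            vals, vecs = np.linalg.eigh(S)
            return vecs @ np.diag(np.exp(vals)) @ vecs.T

        def random_spd():
            A = rng.standard_normal((3, 3))
            return A @ A.T + 3.0 * np.eye(3)          # well-conditioned SPD tensor

        tensors = [random_spd() for _ in range(5)]
        w = np.array([0.4, 0.25, 0.15, 0.1, 0.1])     # e.g. Gaussian kernel weights
        w = w / w.sum()

        # Log-Euclidean weighted mean vs plain Euclidean weighted mean.
        D_logeuc = spd_exp(sum(wi * spd_log(D) for wi, D in zip(w, tensors)))
        D_euclid = sum(wi * D for wi, D in zip(w, tensors))

        print("log-Euclidean mean eigenvalues:", np.round(np.linalg.eigvalsh(D_logeuc), 3))
        print("Euclidean mean eigenvalues:    ", np.round(np.linalg.eigvalsh(D_euclid), 3))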

  11. Seismic hazard assessment in Central Asia using smoothed seismicity approaches

    NASA Astrophysics Data System (ADS)

    Ullah, Shahid; Bindi, Dino; Zuccolo, Elisa; Mikhailova, Natalia; Danciu, Laurentiu; Parolai, Stefano

    2014-05-01

    Central Asia has a long history of frequent moderate-to-large seismicity and is therefore considered one of the most seismically active, high-hazard regions in the world. In the global-scale hazard map produced by the GSHAP project in 1999 (Giardini, 1999), Central Asia is characterized by peak ground accelerations for a 475-year return period as high as 4.8 m/s². Central Asia was therefore selected as the target area of the EMCA project (Earthquake Model Central Asia), a regional project of GEM (Global Earthquake Model). In the framework of EMCA, a new generation of seismic hazard maps is foreseen in terms of macroseismic intensity, to be used in turn to obtain seismic risk maps for the region. An Intensity Prediction Equation (IPE) was therefore developed for the region based on the distribution of intensity data for earthquakes that occurred in Central Asia since the end of the 19th century (Bindi et al. 2011). The same observed intensity distributions were used to assess the seismic hazard following the site approach (Bindi et al. 2012). In this study, we present the probabilistic seismic hazard assessment of Central Asia in terms of MSK-64 based on two kernel estimation methods. We consider the smoothed seismicity approaches of Frankel (1995), modified to use the adaptive kernel proposed by Stock and Smith (2002), and of Woo (1996), modified to consider a grid of sites and to estimate a separate bandwidth for each site. Activity rate maps from the Frankel approach are shown, illustrating the effects of fixed and adaptive kernels. The hazard is estimated for rock site conditions at a 10% probability of exceedance in 50 years. A maximum intensity of about 9 is observed in the Hindukush region.
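
    A minimal Frankel-style fixed-kernel smoothing of gridded epicentre counts is sketched below; the catalogue, grid and correlation distance are synthetic, and the adaptive-kernel and Woo variants discussed above are not implemented.

        import numpy as np

        rng = np.random.default_rng(8)

        # Synthetic epicentres (degrees); a real application would use a declustered catalogue.
        n_events = 500
        lon = rng.normal(74.0, 1.5, n_events)
        lat = rng.normal(41.0, 1.0, n_events)

        # Grid the epicentres into 0.2-degree cells.
        lon_edges = np.arange(70.0, 78.1, 0.2)
        lat_edges = np.arange(38.0, 44.1, 0.2)
        counts, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges])

        lon_c = 0.5 * (lon_edges[:-1] + lon_edges[1:])
        lat_c = 0.5 * (lat_edges[:-1] + lat_edges[1:])
        LON, LAT = np.meshgrid(lon_c, lat_c, indexing="ij")
        cells = np.column_stack([LON.ravel(), LAT.ravel()])

        # Frankel-style smoothing: normalized Gaussian kernel of fixed correlation distance c.
        c = 0.5                                          # correlation distance in degrees
        d2 = ((cells[:, None, :] - cells[None, :, :])**2).sum(-1)
        K = np.exp(-d2 / c**2)
        smoothed = (K @ counts.ravel()) / K.sum(axis=1)

        # The total event count is approximately preserved by the normalized smoothing.
        print("smoothed total:", round(float(smoothed.sum()), 1),
              "vs gridded total:", int(counts.sum()))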

  12. Multiphase simulation of liquid jet breakup using smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Pourabdian, Majid; Omidvar, Pourya; Morad, Mohammad Reza

    This paper deals with numerical modeling of two-phase liquid jet breakup using the smoothed particle hydrodynamics (SPH) method. Simulation of multiphase flows involving fluids with a high-density ratio causes large pressure gradients at the interface and subsequently divergence of numerical solutions. A modified procedure extended by Monaghan and Rafiee is employed to stabilize the sharp interface betwe