Science.gov

Sample records for kernel smoothing methods

  1. A method of smoothed particle hydrodynamics using spheroidal kernels

    NASA Technical Reports Server (NTRS)

    Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.

    1995-01-01

    We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited to modeling systems that retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result, the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
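
    The abstract does not spell out the kernel's functional form; as a rough illustration of the idea (not the authors' actual kernel), a Gaussian with an independent smoothing length along the deformation axis has spheroidal iso-surfaces and still integrates to one:

```python
import numpy as np

def spheroidal_gaussian_kernel(dx, h_perp, h_z):
    """Normalized 3-D Gaussian SPH-style kernel with one smoothing
    length in the xy-plane (h_perp) and an independent one along the
    deformation axis z (h_z); iso-surfaces are spheroids."""
    dx = np.atleast_2d(dx)
    q2 = (dx[:, 0]**2 + dx[:, 1]**2) / h_perp**2 + dx[:, 2]**2 / h_z**2
    norm = 1.0 / (np.pi**1.5 * h_perp**2 * h_z)  # unit integral over R^3
    return norm * np.exp(-q2)

# Monte Carlo check that the kernel integrates to ~1 even when h_z != h_perp.
rng = np.random.default_rng(0)
L = 4.0
pts = rng.uniform(-L, L, size=(200_000, 3))
vals = spheroidal_gaussian_kernel(pts, h_perp=1.0, h_z=0.3)
integral = vals.mean() * (2 * L) ** 3
```

    Letting h_z shrink as the system flattens concentrates resolution along the deformation axis, which is the effect the paper exploits.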

  2. An Adaptive Kernel Smoothing Method for Classifying Austrosimulium tillyardianum (Diptera: Simuliidae) Larval Instars

    PubMed Central

    Cen, Guanjun; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao

    2015-01-01

    In insects, the frequency distribution of measurements of sclerotized body parts is generally used to classify larval instars, and it is characterized by multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. A further problem is that fixed bandwidths have in the past been chosen somewhat subjectively. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks’ rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into eight and ten instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby’s growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods. PMID:26546689
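
    The paper's bandwidth selector is derived from Brooks' rule and is not reproduced here; the sketch below shows only the generic adaptive (Abramson-style) variable-bandwidth idea it builds on — narrow kernels where data are dense, wide kernels where they are sparse — applied to two overlapping, hypothetical 'instar' modes:

```python
import numpy as np

def adaptive_kde(x, grid, h0):
    """Variable-bandwidth (Abramson-style) Gaussian KDE: a fixed-h0
    pilot density sets a per-observation bandwidth, narrow where the
    data are dense and wide where they are sparse."""
    x = np.asarray(x, float)
    diff = x[:, None] - x[None, :]
    pilot = np.exp(-0.5 * (diff / h0) ** 2).sum(1)
    pilot /= len(x) * h0 * np.sqrt(2 * np.pi)
    lam = (pilot / np.exp(np.log(pilot).mean())) ** -0.5  # local factors
    h = h0 * lam
    u = (grid[:, None] - x[None, :]) / h[None, :]
    return (np.exp(-0.5 * u ** 2) / (h * np.sqrt(2 * np.pi))).mean(1)

# Two overlapping, hypothetical 'instar' modes in a body-part measurement.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.30, 0.02, 300), rng.normal(0.42, 0.04, 300)])
grid = np.linspace(0.2, 0.6, 401)
dens = adaptive_kde(x, grid, h0=0.02)
```

    The valley between the two smoothed modes is the kind of division between instars that a fixed-bandwidth histogram tends to blur.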

  3. A high-order fast method for computing convolution integral with smooth kernel

    SciTech Connect

    Qiang, Ji

    2009-09-28

    In this paper we report on a high-order fast method to numerically calculate a convolution integral with a smooth, non-periodic kernel. The method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for the discrete summation. In principle the method can attain arbitrarily high-order accuracy, depending on the number of points used in the integral approximation, at a computational cost of O(N log N), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h^4), where h is the size of the computational grid. Applications of the Simpson rule based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.
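
    A minimal sketch of the scheme as described (composite Simpson weights for the quadrature, a zero-padded FFT for the discrete sum); the grid size and test function are illustrative:

```python
import numpy as np

def convolve_smooth_kernel(K, f, x):
    """g(x_i) ~= integral of K(x_i - y) f(y) dy on a uniform grid:
    composite Simpson weights for the quadrature, a zero-padded FFT for
    the O(N log N) discrete sum. len(x) must be odd for Simpson."""
    n = len(x)
    h = x[1] - x[0]
    w = np.ones(n)
    w[1:-1:2], w[2:-1:2] = 4.0, 2.0        # Simpson pattern 1,4,2,...,4,1
    fw = f * (h / 3.0) * w
    kv = K(np.arange(-(n - 1), n) * h)     # kernel at every offset x_i - x_j
    m = 1 << (3 * n - 3).bit_length()      # pad beyond full linear-conv length
    G = np.fft.irfft(np.fft.rfft(kv, m) * np.fft.rfft(fw, m), m)
    return G[n - 1:2 * n - 1]              # g_i sits at index i + n - 1

# Continuous Gauss transform of a standard normal: the exact result is a
# N(0, 2) density.
x = np.linspace(-8.0, 8.0, 641)
gauss = lambda t: np.exp(-0.5 * t ** 2) / np.sqrt(2 * np.pi)
g = convolve_smooth_kernel(gauss, gauss(x), x)
exact = np.exp(-x ** 2 / 4.0) / np.sqrt(4.0 * np.pi)
```

    Higher-order Newton-Cotes weight patterns slot in the same way; only the weight vector changes, not the FFT step.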

  4. Carbon dioxide at an unpolluted site analysed with the smoothing kernel method and skewed distributions.

    PubMed

    Pérez, Isidro A; Sánchez, M Luisa; García, M Ángeles; Pardo, Nuria

    2013-07-01

    CO₂ concentrations recorded for two years using a Picarro G1301 analyser at a rural site were studied by applying two procedures. First, the smoothing kernel method, which to date has been used with one linear and one circular variable, was applied to pairs of circular variables: wind direction, time of day, and time of year. This analysis proved that the daily cycle was the prevailing cyclical evolution and that the highest concentrations were attributable to the influence of one nearby city source, which was only revealed by directional analysis. Second, histograms were obtained; these revealed that most observations lay between 380 and 410 ppm and that there was a sharp contrast during the year. Finally, histograms were fitted to 14 distributions, the best-known ones using analytical procedures and the remainder using numerical procedures. RMSE was used as the goodness-of-fit indicator to compare and select distributions. Most functions provided similar RMSE values. However, the best fits were obtained using numerical procedures due to their greater flexibility, the triangular distribution being the simplest function of this kind. This distribution allowed us to identify directions and months of noticeable CO₂ input (SSE and April-May, respectively) as well as the daily cycle of the distribution symmetry. Among the functions whose parameters were calculated using an analytical expression, Erlang distributions provided satisfactory fits for the monthly analysis, and gamma distributions for the rest. By contrast, the Rayleigh and Weibull distributions gave the worst RMSE values. PMID:23602977

  5. A short-time Beltrami kernel for smoothing images and manifolds.

    PubMed

    Spira, Alon; Kimmel, Ron; Sochen, Nir

    2007-06-01

    We introduce a short-time kernel for the Beltrami image-enhancing flow. The flow is implemented by "convolving" the image with a space-dependent kernel, in a similar fashion to the solution of the heat equation by convolution with a Gaussian kernel. The kernel is appropriate for smoothing regular (flat) 2-D images, for smoothing images painted on manifolds, and for simultaneously smoothing images and the manifolds they are painted on. The kernel combines the geometry of the image and that of the manifold into one metric tensor, thus enabling a natural unified approach for the manipulation of both. Additionally, the derivation of the kernel gives a better geometrical understanding of the Beltrami flow and shows that the bilateral filter is a Euclidean approximation of it. On a practical level, the use of the kernel allows arbitrarily large time steps, as opposed to the existing explicit numerical schemes for the Beltrami flow. In addition, the kernel works with equal ease on regular 2-D images and on images painted on parametric or triangulated manifolds. We demonstrate the denoising properties of the kernel by applying it to various types of images and manifolds. PMID:17547140

  6. Estimating Mixture of Gaussian Processes by Kernel Smoothing.

    PubMed

    Huang, Mian; Li, Runze; Wang, Hansheng; Yao, Weixin

    2014-01-01

    When the functional data are not homogeneous, e.g., there exist multiple classes of functional curves in the dataset, traditional estimation methods may fail. In this paper, we propose a new estimation procedure for the Mixture of Gaussian Processes, to incorporate both functional and inhomogeneous properties of the data. Our method can be viewed as a natural extension of high-dimensional normal mixtures. However, the key difference is that smoothed structures are imposed for both the mean and covariance functions. The model is shown to be identifiable, and can be estimated efficiently by a combination of the ideas from EM algorithm, kernel regression, and functional principal component analysis. Our methodology is empirically justified by Monte Carlo simulations and illustrated by an analysis of a supermarket dataset. PMID:24976675

  7. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. PMID:25791435

  8. Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images

    PubMed Central

    Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.

    2014-01-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
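
    The core computation is easy to sketch. Assuming the Laplacian eigenpairs are available (here the Laplacian of a toy path graph stands in for the Laplace-Beltrami operator on a mesh), the analytic solution of the diffusion at time t is a per-mode exponential damping:

```python
import numpy as np

def heat_kernel_smooth(f, L, t):
    """Heat-kernel smoothing of nodal data f on a mesh/graph with
    Laplacian L: expand f in the Laplacian eigenfunctions and damp mode
    k by exp(-lambda_k * t). The diffusion is solved analytically, so
    no time-stepping is needed."""
    lam, psi = np.linalg.eigh(L)           # eigenfunctions of the Laplacian
    return psi @ (np.exp(-lam * t) * (psi.T @ f))

# Toy stand-in for a surface mesh: the Laplacian of a path graph.
n = 50
L = np.diag(np.r_[1.0, 2.0 * np.ones(n - 2), 1.0])
L -= np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
rng = np.random.default_rng(0)
f = np.sin(np.linspace(0, np.pi, n)) + 0.3 * rng.normal(size=n)
smooth = heat_kernel_smooth(f, L, t=2.0)
```

    Because the constant eigenfunction has eigenvalue zero, the mean of the signal is preserved exactly while high-frequency noise is damped, which is the advantage over iterative PDE schemes noted in the abstract.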

  9. Smoothing Methods for Estimating Test Score Distributions.

    ERIC Educational Resources Information Center

    Kolen, Michael J.

    1991-01-01

    Estimation/smoothing methods that are flexible enough to fit a wide variety of test score distributions are reviewed: kernel method, strong true-score model-based method, and method that uses polynomial log-linear models. Applications of these methods include describing/comparing test score distributions, estimating norms, and estimating…

  10. Jointly optimal bandwidth selection for the planar kernel-smoothed density-ratio.

    PubMed

    Davies, Tilman M

    2013-06-01

    The kernel-smoothed density-ratio or 'relative risk' function for planar point data is a useful tool for examining disease rates over a certain geographical region. Instrumental to the quality of the resulting risk surface estimate is the choice of bandwidth for computation of the required numerator and denominator densities. The challenge associated with finding some 'optimal' smoothing parameter for standalone implementation of the kernel estimator given observed data is compounded when we deal with the density-ratio per se. To date, only one method specifically designed for calculation of density-ratio optimal bandwidths has received any notable attention in the applied literature. However, this method exhibits significant variability in the estimated smoothing parameters. In this work, the first practical comparison of this selector with a little-known alternative technique is provided. The possibility of exploiting an asymptotic MISE formulation in an effort to control excess variability is also examined, and numerical results seem promising. PMID:23725887
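
    The bandwidth selectors compared in the paper are not reproduced here, but the object they tune is simple: the ratio of two planar kernel density estimates sharing a common bandwidth h. A minimal sketch with synthetic case/control points:

```python
import numpy as np

def kde2d(pts, query, h):
    """Fixed-bandwidth bivariate Gaussian KDE evaluated at query points."""
    d2 = ((query[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / h ** 2).sum(1) / (len(pts) * 2 * np.pi * h ** 2)

def log_relative_risk(cases, controls, query, h):
    """Log kernel density-ratio ('relative risk') with one bandwidth h
    for numerator and denominator, the quantity that jointly optimal
    selectors are tuned for; eps guards against log(0)."""
    eps = 1e-12
    return (np.log(kde2d(cases, query, h) + eps)
            - np.log(kde2d(controls, query, h) + eps))

# Synthetic data: 'cases' cluster near the origin, 'controls' are uniform.
rng = np.random.default_rng(2)
cases = rng.normal(0.0, 0.3, size=(200, 2))
controls = rng.uniform(-2.0, 2.0, size=(400, 2))
query = np.array([[0.0, 0.0], [1.8, 1.8]])
lr = log_relative_risk(cases, controls, query, h=0.3)
```

    The estimated log-risk is high where cases concentrate and low elsewhere; the choice of h controls how much that surface fluctuates, which is exactly the variability the paper seeks to control.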

  11. Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels.

    PubMed

    Jayasumana, Sadeep; Hartley, Richard; Salzmann, Mathieu; Li, Hongdong; Harandi, Mehrtash

    2015-12-01

    In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels. PMID:26539851
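
    As one concrete instance from this framework, the log-Euclidean metric on SPD matrices yields a positive definite Gaussian RBF; a sketch (gamma and the random test matrices are illustrative):

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def gaussian_rbf_spd(A, B, gamma=0.5):
    """Gaussian RBF on the SPD manifold using the log-Euclidean
    distance ||log A - log B||_F, one of the metrics for which the
    Gaussian kernel is positive definite."""
    d = spd_log(A) - spd_log(B)
    return np.exp(-gamma * np.sum(d * d))

# A small Gram matrix on random SPD matrices should be positive definite.
rng = np.random.default_rng(3)
mats = []
for _ in range(6):
    R = rng.normal(size=(3, 3))
    mats.append(R @ R.T + 0.5 * np.eye(3))
K = np.array([[gaussian_rbf_spd(A, B) for B in mats] for A in mats])
```

    With such a kernel matrix in hand, any Gram-matrix-based algorithm (SVM, kernel PCA, kernel discriminant analysis) runs unchanged on the manifold-valued data.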

  12. Adaptive Optimal Kernel Smooth-Windowed Wigner-Ville Distribution for Digital Communication Signal

    NASA Astrophysics Data System (ADS)

    Tan, Jo Lynn; Sha'ameri, Ahmad Zuribin

    2009-12-01

    Time-frequency distributions (TFDs) are powerful tools to represent the energy content of time-varying signals in both the time and frequency domains simultaneously, but they suffer from interference due to cross-terms. Various methods have been described to remove these cross-terms, and they are typically signal-dependent. Thus, there is no single TFD with a fixed window or kernel that can produce an accurate time-frequency representation (TFR) for all types of signals. In this paper, a globally adaptive optimal kernel smooth-windowed Wigner-Ville distribution (AOK-SWWVD) is designed for digital modulation signals such as ASK, FSK, and M-ary FSK, where its separable kernel is determined automatically from the input signal, without prior knowledge of the signal. This optimum kernel is capable of removing the cross-terms and maintaining an accurate time-frequency representation at SNR as low as 0 dB. It is shown that this system is comparable to a system with prior knowledge of the signal.

  13. Equalizing resolution in smoothed-particle hydrodynamics calculations using self-adaptive sinc kernels

    NASA Astrophysics Data System (ADS)

    García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin

    2014-10-01

    Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allow the interpolation quality to be increased by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids, while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement, with low computational overhead. It conserves mass, energy, and momentum, and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.

  14. PET Image Reconstruction Using Kernel Method

    PubMed Central

    Wang, Guobao; Qi, Jinyi

    2014-01-01

    Image reconstruction from low-count PET projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization (EM) algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4D dynamic PET patient dataset showed promising results. PMID:25095249

  15. PET image reconstruction using kernel method.

    PubMed

    Wang, Guobao; Qi, Jinyi

    2015-01-01

    Image reconstruction from low-count positron emission tomography (PET) projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4-D dynamic PET patient dataset showed promising results. PMID:25095249

  16. Improved kernel gradient free-smoothed particle hydrodynamics and its applications to heat transfer problems

    NASA Astrophysics Data System (ADS)

    Juan-Mian, Lei; Xue-Ying, Peng

    2016-02-01

    Kernel gradient free-smoothed particle hydrodynamics (KGF-SPH) is a modified smoothed particle hydrodynamics (SPH) method with higher precision than conventional SPH. However, the Laplacian in KGF-SPH is approximated by a two-pass model, which increases the computational cost. A new discretization scheme for the Laplacian is proposed in this paper, and a method with higher precision and better stability, called Improved KGF-SPH, is developed by modifying KGF-SPH with this new Laplacian model. One-dimensional (1D) and two-dimensional (2D) heat conduction problems are used to test the precision and stability of the Improved KGF-SPH. The numerical results demonstrate that the Improved KGF-SPH is more accurate than SPH, and more stable than KGF-SPH. Natural convection in a closed square cavity at different Rayleigh numbers is modeled by the Improved KGF-SPH with shifting particle position, and the Improved KGF-SPH results are presented in comparison with those of SPH and the finite volume method (FVM). The numerical results demonstrate that the Improved KGF-SPH is a more accurate method for studying and modeling heat transfer problems.

  17. Using Cochran's Z Statistic to Test the Kernel-Smoothed Item Response Function Differences between Focal and Reference Groups

    ERIC Educational Resources Information Center

    Zheng, Yinggan; Gierl, Mark J.; Cui, Ying

    2010-01-01

    This study combined the kernel smoothing procedure and a nonparametric differential item functioning statistic--Cochran's Z--to statistically test the difference between the kernel-smoothed item response functions for reference and focal groups. Simulation studies were conducted to investigate the Type I error and power of the proposed…
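
    The Cochran's Z statistic itself is not sketched here; the kernel-smoothed item response function it compares is a Nadaraya-Watson estimate of P(correct response | ability). A toy version with simulated responses (the 2PL item parameters are illustrative):

```python
import numpy as np

def kernel_smoothed_irf(grid, theta, u, h):
    """Nadaraya-Watson kernel estimate of an item response function:
    the probability of a correct response (u = 0/1) as a smooth
    function of ability theta."""
    w = np.exp(-0.5 * ((grid[:, None] - theta[None, :]) / h) ** 2)
    return (w * u).sum(axis=1) / w.sum(axis=1)

# Simulated responses to a 2PL item (a = 1.2, b = 0; values illustrative).
rng = np.random.default_rng(4)
theta = rng.normal(size=3000)
p_true = 1.0 / (1.0 + np.exp(-1.2 * theta))
u = (rng.uniform(size=3000) < p_true).astype(float)
grid = np.linspace(-2.0, 2.0, 81)
irf = kernel_smoothed_irf(grid, theta, u, h=0.3)
```

    Cochran's Z is then used to test, across the ability grid, the difference between two such curves estimated separately for the focal and reference groups.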

  18. Kernel method and linear recurrence system

    NASA Astrophysics Data System (ADS)

    Hou, Qing-Hu; Mansour, Toufik

    2008-06-01

    Based on the kernel method, we present systematic methods to solve equation systems on generating functions of two variables. Using these methods, we get the generating functions for the number of permutations which avoid 1234 and 12k(k-1)...3 and permutations which avoid 1243 and 12...k.

  19. Nonlinear projection trick in kernel methods: an alternative to the kernel trick.

    PubMed

    Kwak, Nojun

    2013-12-01

    In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named as the nonlinear projection trick in contrast to the kernel trick. With this technique, the applicability of the kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses L1-norm instead of L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach. PMID:24805227
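
    The trick reduces to one eigendecomposition of the kernel matrix. A sketch under the setting the brief describes (K is the training Gram matrix; variable names are ours):

```python
import numpy as np

def nonlinear_projection(K, tol=1e-10):
    """Map training data into an explicit reduced kernel space: with
    K = U diag(lam) U^T, the coordinates Y = diag(sqrt(lam)) U^T
    satisfy Y^T Y = K, so any dot-product-based algorithm can run on
    the columns of Y directly."""
    lam, U = np.linalg.eigh(K)
    keep = lam > tol                       # effective dimensionality
    return (U[:, keep] * np.sqrt(lam[keep])).T

# Gaussian RBF kernel matrix for random data; verify Y^T Y reproduces K.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)
Y = nonlinear_projection(K)
err = np.abs(Y.T @ Y - K).max()
```

    A new sample x would then be mapped to diag(lam)^(-1/2) U^T k(x), where k(x) is its kernel vector against the training set; the paper gives the exact treatment.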

  20. Multiobjective optimization for model selection in kernel methods in regression.

    PubMed

    You, Di; Benitez-Quiroz, Carlos Fabian; Martinez, Aleix M

    2014-10-01

    Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is given by the bias-versus-variance tradeoff. If the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large. If the model is fixed (i.e., low complexity), the bias will be large. The second problem is to define an approach for selecting the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which measures the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a tradeoff between the bias and the variance of the learned function. That is, the goal is to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition, and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors as compared with methods in the state of the art. PMID:25291740

  1. Multiobjective Optimization for Model Selection in Kernel Methods in Regression

    PubMed Central

    You, Di; Benitez-Quiroz, C. Fabian; Martinez, Aleix M.

    2016-01-01

    Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is given by the bias-vs-variance trade-off. If the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large. If the model is fixed (i.e., low complexity), the bias will be large. The second problem is to define an approach for selecting the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which measures the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a trade-off between the bias and the variance of the learned function. That is, the goal is to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors as compared to methods in the state of the art. PMID:25291740

  2. Modified wavelet kernel methods for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Hsu, Pai-Hui; Huang, Xiu-Man

    2015-10-01

    Hyperspectral imaging acquires images of the earth's surface in several hundred spectral bands. Such abundant spectral data should increase the ability to classify land use/cover type. However, due to the high dimensionality of hyperspectral data, traditional classification methods are not suitable for hyperspectral data classification. The common way to address this problem is dimensionality reduction, using feature extraction before classification. Kernel methods such as the support vector machine (SVM) and multiple kernel learning (MKL) have been successfully applied to hyperspectral image classification. In applications of kernel methods, the selection of the kernel function plays an important role. A wavelet kernel built from multidimensional wavelet functions can find the optimal approximation of the data in feature space for classification. The SVM with wavelet kernels (called WSVM) has also been applied to hyperspectral data and improves classification accuracy. In this study, a wavelet kernel method combining a multiple kernel learning algorithm with wavelet kernels is proposed for hyperspectral image classification. After appropriate selection of a linear combination of kernel functions, the hyperspectral data are transformed to the wavelet feature space, which should have the optimal data distribution for kernel learning and classification. Finally, the proposed methods were compared with existing methods. A real hyperspectral data set was used to analyze the performance of the wavelet kernel method. According to the results, the proposed wavelet kernel methods perform well and would be an appropriate tool for hyperspectral image classification.

  3. Application of smoothed particle hydrodynamics method in aerodynamics

    NASA Astrophysics Data System (ADS)

    Cortina, Miguel

    2014-11-01

    Smoothed Particle Hydrodynamics (SPH) is a meshless Lagrangian method in which the domain is represented by particles. Each particle is assigned properties such as mass, pressure, density, temperature, and velocity. These properties are then evaluated at the particle positions using a smoothing kernel that integrates over the values of the surrounding particles. In the present study the SPH method is first used to obtain numerical solutions for fluid flow over a cylinder; the same approach is then applied to flow over an airfoil.
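
    The kernel summation step described above can be sketched with the standard cubic spline kernel, a common SPH choice (the paper's exact kernel and parameters are not specified in the abstract):

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1-D cubic spline (M4) SPH kernel with compact support
    2h; sigma is the 1-D normalization so the kernel integrates to 1."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# Density summation rho_i = sum_j m_j W(x_i - x_j, h): for uniformly
# spaced particles of the right mass, it should recover rho = 1.
dx = 0.01
x = np.arange(0.0, 1.0, dx)
m = 1.0 * dx                               # particle mass for rho0 = 1
h = 1.2 * dx
rho = np.array([(m * cubic_spline_kernel(x - xi, h)).sum() for xi in x])
```

    The same weighted summation, with kernel gradients in place of kernel values, gives the pressure and viscous force terms used in flow solvers such as the cylinder and airfoil cases above.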

  4. The Kernel Method of Equating Score Distributions. Program Statistics Research Technical Report No. 89-84.

    ERIC Educational Resources Information Center

    Holland, Paul W.; Thayer, Dorothy T.

    A new and unified approach to test equating is described that is based on log-linear models for smoothing score distributions and on the kernel method of nonparametric density estimation. The new method contains both linear and standard equipercentile methods as special cases and can handle several important equating data collection designs. An…
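
    The kernel continuization at the heart of the method replaces each discrete score point with a Gaussian, rescaled so that the smoothed distribution keeps the original mean and variance; a sketch (the bandwidth value and binomial score distribution are illustrative):

```python
import numpy as np
from math import comb, sqrt

def continuize(scores, probs, h, x):
    """Gaussian-kernel continuization of a discrete score distribution:
    each score point becomes a Gaussian, and the shrinkage factor a is
    chosen so the smoothed density keeps the discrete mean and
    variance (a^2 = var / (var + h^2))."""
    mu = (probs * scores).sum()
    var = (probs * (scores - mu) ** 2).sum()
    a = sqrt(var / (var + h ** 2))
    centers = a * scores + (1.0 - a) * mu
    u = (x[:, None] - centers[None, :]) / (a * h)
    return (probs * np.exp(-0.5 * u ** 2)).sum(1) / (a * h * sqrt(2 * np.pi))

# A binomial 'raw score' distribution on 0..20 stands in for a smoothed
# log-linear fit.
scores = np.arange(21.0)
probs = np.array([comb(20, k) for k in range(21)]) * 0.5 ** 20
x = np.linspace(-4.0, 24.0, 1401)
dens = continuize(scores, probs, h=0.6, x=x)
```

    Equating then matches the continuized cumulative distributions of the two forms; as h grows the method approaches linear equating, which is how it contains both special cases.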

  5. Kernel map compression for speeding the execution of kernel-based methods.

    PubMed

    Arif, Omar; Vela, Patricio A

    2011-06-01

    The use of Mercer kernel methods in statistical learning theory provides for strong learning capabilities, as seen in kernel principal component analysis and support vector machines. Unfortunately, after learning, the computational complexity of execution through a kernel is of the order of the size of the training set, which is quite large for many applications. This paper proposes a two-step procedure for arriving at a compact and computationally efficient execution procedure. After learning in the kernel space, the proposed extension exploits the universal approximation capabilities of generalized radial basis function neural networks to efficiently approximate and replace the projections onto the empirical kernel map used during execution. Sample applications demonstrate significant compression of the kernel representation with graceful performance loss. PMID:21550884

  6. The Kernel Energy Method: Construction of 3 & 4 tuple Kernels from a List of Double Kernel Interactions

    PubMed Central

    Huang, Lulu; Massa, Lou

    2010-01-01

    The Kernel Energy Method (KEM) provides a way to calculate the ab initio energy of very large biological molecules. The results are accurate, and the computational time is reduced. However, by using a list of double kernel interactions, a significant additional reduction in computational effort may be achieved while still retaining ab initio accuracy. A numerical comparison of the indices that name the known double interactions in question allows one to list higher-order interactions having the property of topological continuity within the full molecule of interest. When that list of interactions is unpacked as a kernel expansion, which weights the relative importance of each kernel in an expression for the total molecular energy, high accuracy and a further significant reduction in computational effort result. As an illustration, a KEM molecular energy calculation based on the HF/STO3G chemical model is applied to the protein insulin. PMID:21243065

  7. Comparison of Kernel Equating and Item Response Theory Equating Methods

    ERIC Educational Resources Information Center

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  8. Introduction to Kernel Methods: Classification of Multivariate Data

    NASA Astrophysics Data System (ADS)

    Fauvel, M.

    2016-05-01

    In this chapter, kernel methods are presented for the classification of multivariate data. An introductory example is given to illustrate the main idea of kernel methods. Emphasis is then placed on the Support Vector Machine (SVM): structural risk minimization is presented, and linear and non-linear SVMs are described. Finally, a full example of SVM classification is given on simulated hyperspectral data.
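As a concrete illustration of the linear SVM discussed above, here is a minimal sketch (not the chapter's own code) that trains a soft-margin linear SVM by sub-gradient descent on the hinge loss; the toy data, learning rate, and regularization constant are illustrative assumptions:

```python
# Minimal linear soft-margin SVM via sub-gradient descent on the
# regularized hinge loss: lam*|w|^2 + max(0, 1 - y*(w.x + b)).
def train_linear_svm(points, labels, lam=0.01, epochs=200, lr=0.1):
    """points: list of (x1, x2); labels: +1 / -1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(points, labels):
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            if margin < 1:
                # point violates the margin: hinge-loss sub-gradient step
                w[0] += lr * (y * x[0] - 2 * lam * w[0])
                w[1] += lr * (y * x[1] - 2 * lam * w[1])
                b += lr * y
            else:
                # only the regularizer acts: shrink the weights slightly
                w[0] -= lr * 2 * lam * w[0]
                w[1] -= lr * 2 * lam * w[1]
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# Linearly separable toy data
pts = [(0, 0), (1, 0), (3, 3), (4, 4)]
labs = [-1, -1, 1, 1]
w, b = train_linear_svm(pts, labs)
```

The non-linear SVMs described in the chapter replace the inner product `w.x` with a kernel evaluation in the dual formulation.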

  9. A 3D Contact Smoothing Method

    SciTech Connect

    Puso, M A; Laursen, T A

    2002-05-02

    Smoothing of contact surfaces can be used to eliminate the chatter typically seen with node-on-facet contact and gives a better representation of the actual contact surface. The latter effect is well demonstrated for problems with interference fits. In this work we present two methods for the smoothing of contact surfaces for 3D finite element contact. In the first method, we employ Gregory patches to smooth the faceted surface in a node-on-facet implementation. In the second method, we employ a Bezier interpolation of the faceted surface in a mortar method implementation of contact. As is well known, node-on-facet approaches can exhibit locking due to the failure of the Babuska-Brezzi condition and in some instances fail the patch test. The mortar method implementation is stable and provides optimal convergence in the energy norm of the error. In this work we demonstrate the superiority of the smoothed over the non-smoothed node-on-facet implementations. We also show where the node-on-facet method fails and present some results from the smoothed mortar method implementation.

  10. Kernelization

    NASA Astrophysics Data System (ADS)

    Fomin, Fedor V.

    Preprocessing (data reduction or kernelization) as a strategy for coping with hard problems is universally used in almost every implementation. The history of preprocessing, such as applying reduction rules to simplify truth functions, can be traced back to the 1950s [6]. A natural question in this regard is how to measure the quality of preprocessing rules proposed for a specific problem. For a long time the mathematical analysis of polynomial time preprocessing algorithms was neglected. The basic reason for this anomaly was that if we start with an instance I of an NP-hard problem and can show that in polynomial time we can replace it with an equivalent instance I' with |I'| < |I|, then this would imply P = NP in classical complexity.
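As an example of the kind of polynomial-time data reduction discussed above, here is a sketch (with hypothetical helper names) of the classical Buss kernelization for k-Vertex Cover: any vertex with more incident edges than the remaining budget must belong to every small cover, and a reduced yes-instance can have at most budget² edges.

```python
# Buss kernelization for k-Vertex Cover.
# The graph is given as a set of undirected edges (frozensets of two
# vertices); isolated vertices are implicitly dropped.
def buss_kernel(edges, k):
    """Return (reduced_edges, remaining_budget, forced_cover),
    or None if the instance is a provable no-instance."""
    edges = set(edges)
    forced = set()          # vertices that must be in any size-<=k cover
    changed = True
    while changed:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k - len(forced):
                # v has more incident edges than the remaining budget
                # could cover with other vertices: v is forced
                forced.add(v)
                edges = {e for e in edges if v not in e}
                changed = True
                break
    if len(forced) > k:
        return None
    budget = k - len(forced)
    # In a yes-instance, budget vertices of degree <= budget cover
    # at most budget^2 edges
    if len(edges) > budget * budget:
        return None
    return edges, budget, forced
```

The returned instance is the "kernel": its size is bounded by a function of k alone, independent of the original graph.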

  11. Intelligent classification methods of grain kernels using computer vision analysis

    NASA Astrophysics Data System (ADS)

    Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo

    2011-06-01

    In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.

  12. Constructing Bayesian formulations of sparse kernel learning methods.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2005-01-01

    We present here a simple technique that simplifies the construction of Bayesian treatments of a variety of sparse kernel learning algorithms. An incomplete Cholesky factorisation is employed to modify the dual parameter space, such that the Gaussian prior over the dual model parameters is whitened. The regularisation term then corresponds to the usual weight-decay regulariser, allowing the Bayesian analysis to proceed via the evidence framework of MacKay. There is in addition a useful by-product of the incomplete Cholesky factorisation algorithm: it also identifies a subset of the training data forming an approximate basis for the entire dataset in the kernel-induced feature space, resulting in a sparse model. Bayesian treatments of the kernel ridge regression (KRR) algorithm, with both constant and heteroscedastic (input dependent) variance structures, and kernel logistic regression (KLR) are provided as illustrative examples of the proposed method, which we hope will be more widely applicable. PMID:16085387
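The pivoted incomplete Cholesky factorisation at the heart of this construction can be sketched as follows (pure Python, with an illustrative tolerance); the greedy pivot order is exactly what identifies the approximate basis mentioned in the abstract:

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two tuples."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def incomplete_cholesky(X, kernel, tol=1e-6):
    """Pivoted incomplete Cholesky of the Gram matrix: K ~= G G^T.
    Returns the factor rows G and the pivot indices (approximate basis)."""
    n = len(X)
    d = [kernel(x, x) for x in X]   # residual diagonal of K
    G = [[] for _ in range(n)]      # rows of the low-rank factor
    pivots = []
    while max(d) > tol:
        j = max(range(n), key=lambda i: d[i])  # greedy pivot choice
        pivots.append(j)
        pj = math.sqrt(d[j])
        col = [(kernel(X[i], X[j])
                - sum(gi * gj for gi, gj in zip(G[i], G[j]))) / pj
               for i in range(n)]
        for i in range(n):
            G[i].append(col[i])
            d[i] -= col[i] ** 2     # update residual diagonal
    return G, pivots
```

Duplicated (or nearly dependent) training points never become pivots, so the factorisation stops at the effective rank of the Gram matrix.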

  13. Local coding based matching kernel method for image classification.

    PubMed

    Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method. PMID:25119982

  14. Chebyshev moment problems: Maximum entropy and kernel polynomial methods

    SciTech Connect

    Silver, R.N.; Roeder, H.; Voter, A.F.; Kress, J.D.

    1995-12-31

    Two Chebyshev recursion methods are presented for calculations with very large sparse Hamiltonians, the kernel polynomial method (KPM) and the maximum entropy method (MEM). They are applicable to physical properties involving large numbers of eigenstates such as densities of states, spectral functions, thermodynamics, total energies for Monte Carlo simulations and forces for tight binding molecular dynamics. This paper emphasizes efficient algorithms.

  15. Multiple predictor smoothing methods for sensitivity analysis.

    SciTech Connect

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
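As a sketch of technique (1) above, here is locally weighted regression (LOESS) evaluated at a single prediction point, with tricube weights over the nearest fraction of the data; the span `frac` and the helper name `loess_point` are illustrative assumptions, not the report's code:

```python
def loess_point(x0, xs, ys, frac=0.5):
    """LOESS estimate at x0: tricube-weighted linear fit over the
    nearest frac of the sample points."""
    n = len(xs)
    k = max(2, int(round(frac * n)))
    # k nearest neighbours of x0 and their tricube weights
    order = sorted(range(n), key=lambda i: abs(xs[i] - x0))[:k]
    dmax = max(abs(xs[i] - x0) for i in order) or 1.0
    w = {i: (1 - (abs(xs[i] - x0) / dmax) ** 3) ** 3 for i in order}
    # weighted least-squares line through the neighbourhood
    sw = sum(w[i] for i in order)
    mx = sum(w[i] * xs[i] for i in order) / sw
    my = sum(w[i] * ys[i] for i in order) / sw
    sxx = sum(w[i] * (xs[i] - mx) ** 2 for i in order)
    sxy = sum(w[i] * (xs[i] - mx) * (ys[i] - my) for i in order)
    slope = sxy / sxx if sxx else 0.0
    return my + slope * (x0 - mx)
```

In a sensitivity analysis, the fitted local slopes indicate how strongly each input drives the model prediction near each sampled point.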

  16. Input space versus feature space in kernel-based methods.

    PubMed

    Schölkopf, B; Mika, S; Burges, C C; Knirsch, P; Müller, K R; Rätsch, G; Smola, A J

    1999-01-01

    This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use it to reduce the computational complexity of SV decision functions; second, we combine it with the Kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data. PMID:18252603
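The feature-space geometry discussed above can be probed without knowing the map explicitly: distances in feature space follow from the kernel alone. A minimal sketch with a Gaussian (RBF) kernel, an illustrative choice:

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel on tuples."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def feature_distance(x, y, k):
    """||phi(x) - phi(y)|| computed via the kernel trick:
    k(x,x) - 2 k(x,y) + k(y,y)."""
    return math.sqrt(k(x, x) - 2 * k(x, y) + k(y, y))

# For an RBF kernel every mapped point has unit norm in feature space,
# so pairwise distances saturate at sqrt(2) as inputs move far apart.
d_near = feature_distance((0, 0), (0.1, 0.0), rbf)
d_far = feature_distance((0, 0), (100, 100), rbf)
```

This saturation is one aspect of the "shape of the image of input space" the paper analyses; the preimage problem runs this map in the opposite direction.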

  17. Method for producing smooth inner surfaces

    DOEpatents

    Cooper, Charles A.

    2016-05-17

    The invention provides a method for preparing superconducting cavities, the method comprising causing polishing media to tumble by centrifugal barrel polishing within the cavities for a time sufficient to attain a surface smoothness of less than 15 nm root mean square roughness over approximately a 1 mm^2 scan area. The method also provides for a method for preparing superconducting cavities, the method comprising causing polishing media bound to a carrier to tumble within the cavities. The method also provides for a method for preparing superconducting cavities, the method comprising causing polishing media in a slurry to tumble within the cavities.

  18. A locally adaptive kernel regression method for facies delineation

    NASA Astrophysics Data System (ADS)

    Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.

    2015-12-01

    Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data, to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial locations and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method improves significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curve performance.
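The paper's steering kernels are locally adaptive; as a fixed-bandwidth baseline sketch (not the authors' method), the classical Nadaraya-Watson kernel regression estimator applied to property values along a one-dimensional transect:

```python
import math

def nw_estimate(x0, xs, ys, h=1.0):
    """Nadaraya-Watson estimate at x0: a Gaussian-kernel-weighted
    average of the sampled values ys at locations xs (bandwidth h)."""
    weights = [math.exp(-((x0 - x) ** 2) / (2 * h * h)) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# Two "facies" with distinct property values along a transect
xs = [0.0, 1.0, 2.0, 8.0, 9.0, 10.0]
ys = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
```

The steering version described in the abstract replaces the fixed isotropic bandwidth h with a local anisotropic matrix aligned to the direction of highest spatial correlation.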

  19. Smooth electrode and method of fabricating same

    SciTech Connect

    Weaver, Stanton Earl; Kennerly, Stacey Joy; Aimi, Marco Francesco

    2012-08-14

    A smooth electrode is provided. The smooth electrode includes at least one metal layer having thickness greater than about 1 micron; wherein an average surface roughness of the smooth electrode is less than about 10 nm.

  20. An Extended Method of SIRMs Connected Fuzzy Inference Method Using Kernel Method

    NASA Astrophysics Data System (ADS)

    Seki, Hirosato; Mizuguchi, Fuhito; Watanabe, Satoshi; Ishii, Hiroaki; Mizumoto, Masaharu

    The single input rule modules connected fuzzy inference method (SIRMs method) by Yubazaki et al. can decrease the number of fuzzy rules drastically in comparison with conventional fuzzy inference methods. Moreover, Seki et al. have proposed a functional-type SIRMs method which generalizes the consequent part of the SIRMs method to a function. However, these SIRMs methods cannot be applied to XOR (Exclusive OR). In this paper, we propose a “kernel-type SIRMs method” which applies the kernel trick to the SIRMs method, and show that this method can treat XOR. Further, a learning algorithm for the proposed SIRMs method is derived using the steepest descent method, and compared with those of the conventional SIRMs method and the kernel perceptron by application to identification of nonlinear functions, a medical diagnostic system and discriminant analysis of Iris data.
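The abstract compares against a kernel perceptron; as a sketch of why the kernel trick handles XOR, here is a kernel perceptron with a degree-2 polynomial kernel (the epoch count and +/-1 label encoding are illustrative choices):

```python
def poly_kernel(x, y, degree=2):
    """Inhomogeneous polynomial kernel (1 + x.y)^degree."""
    return (1 + sum(a * b for a, b in zip(x, y))) ** degree

def train_kernel_perceptron(X, Y, k, epochs=20):
    """Dual perceptron: alpha[i] counts mistakes on example i."""
    alpha = [0] * len(X)
    for _ in range(epochs):
        for i, (x, y) in enumerate(zip(X, Y)):
            s = sum(a * yj * k(xj, x) for a, yj, xj in zip(alpha, Y, X))
            if y * s <= 0:          # misclassified: add to dual weights
                alpha[i] += 1
    return alpha

def predict(x, X, Y, alpha, k):
    s = sum(a * yj * k(xj, x) for a, yj, xj in zip(alpha, Y, X))
    return 1 if s > 0 else -1

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [-1, 1, 1, -1]                  # XOR with +/-1 labels
alpha = train_kernel_perceptron(X, Y, poly_kernel)
```

The degree-2 kernel implicitly supplies the x1*x2 cross term, which makes XOR linearly separable in feature space; a linear perceptron on the raw inputs cannot represent it.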

  1. Hardness methods for testing maize kernels.

    PubMed

    Fox, Glen; Manley, Marena

    2009-07-01

    Maize is a highly important crop in many countries around the world, both through the sale of the maize crop to domestic processors and the subsequent production of maize products, and as a staple food on subsistence farms in undeveloped countries. In many countries, there have been long-term research efforts to develop a suitable hardness method that could assist the maize industry in improving efficiency in processing, as well as possibly providing a quality specification for maize growers, which could attract a premium. This paper focuses specifically on hardness and reviews a number of methodologies used internationally, as well as important biochemical aspects of maize that contribute to maize hardness. Numerous foods are produced from maize, and hardness has been described as having an impact on food quality. However, the basis of hardness and measurement of hardness are very general and would apply to any use of maize from any country. From the published literature, it would appear that one of the simpler methods used to measure hardness is a grinding step followed by a sieving step, using multiple sieve sizes. This would allow the range in hardness within a sample, as well as average particle size and/or coarse/fine ratio, to be calculated. Any of these parameters could easily be used as reference values for the development of near-infrared (NIR) spectroscopy calibrations. The development of precise NIR calibrations will provide an excellent tool for breeders, handlers, and processors to deliver specific cultivars in the case of growers and bulk loads in the case of handlers, thereby ensuring the most efficient use of maize by domestic and international processors. This paper also considers previous research describing the biochemical aspects of maize that have been related to maize hardness. Both starch and protein affect hardness, with most research focusing on the storage proteins (zeins). Both the content and composition of the zein fractions affect

  2. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  3. Kernel-Smoothing Estimation of Item Characteristic Functions for Continuous Personality Items: An Empirical Comparison with the Linear and the Continuous-Response Models

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2004-01-01

    This study used kernel-smoothing procedures to estimate the item characteristic functions (ICFs) of a set of continuous personality items. The nonparametric ICFs were compared with the ICFs estimated (a) by the linear model and (b) by Samejima's continuous-response model. The study was based on a conditioned approach and used an error-in-variables…

  4. Kernel methods for large-scale genomic data analysis

    PubMed Central

    Xing, Eric P.; Schaid, Daniel J.

    2015-01-01

    Machine learning, particularly kernel methods, has been demonstrated as a promising new tool to tackle the challenges imposed by today’s explosive data growth in genomics. Kernel methods provide a practical and principled approach to learning how a large number of genetic variants are associated with complex phenotypes, helping to reveal the complexity in the relationship between the genetic markers and the outcome of interest. In this review, we highlight the potential key role they will have in modern genomic data processing, especially with regard to integration with classical methods for gene prioritizing, prediction and data fusion. PMID:25053743

  5. Kernel weights optimization for error diffusion halftoning method

    NASA Astrophysics Data System (ADS)

    Fedoseev, Victor

    2015-02-01

    This paper describes a study to find the best error diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of values. As an objective measure of quality, WSNR was used. The problem of multidimensional optimization was solved numerically using several well-known algorithms: Nelder-Mead, BFGS, and others. The study found a kernel function that provides a quality gain of about 5% in comparison with the best of the commonly used kernels, the one introduced by Floyd and Steinberg. Other kernels obtained make it possible to significantly reduce the computational complexity of the halftoning process without reducing its quality.
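The Floyd-Steinberg kernel taken as the baseline above can be sketched as a minimal grayscale error-diffusion pass; the image representation as rows of floats in [0, 1] is an illustrative assumption:

```python
def error_diffuse(img):
    """Floyd-Steinberg error diffusion: threshold each pixel and push
    the quantization error onto unprocessed neighbours with weights
    7/16 (right), 3/16 (below-left), 5/16 (below), 1/16 (below-right)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    img = [row[:] for row in img]   # work on a copy
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 1 if old >= 0.5 else 0
            out[y][x] = new
            err = old - new
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return out
```

The paper's optimized kernels keep this scan structure but replace the four fixed weights (and possibly their support) with numerically optimized values.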

  6. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods

    PubMed Central

    Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675

  8. Estimating the Bias of Local Polynomial Approximation Methods Using the Peano Kernel

    SciTech Connect

    Blair, J.; Machorro, E.; Luttman, A.

    2013-03-01

    The determination of uncertainty of an estimate requires both the variance and the bias of the estimate. Calculating the variance of local polynomial approximation (LPA) estimates is straightforward. We present a method, using the Peano Kernel Theorem, to estimate the bias of LPA estimates and show how this can be used to optimize the LPA parameters in terms of the bias-variance tradeoff. Figures of merit are derived and values calculated for several common methods. The results in the literature are expanded by giving bias error bounds that are valid for all lengths of the smoothing interval, generalizing the currently available asymptotic results that are only valid in the limit as the length of this interval goes to zero.

  9. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…
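The penalty method referred to above can be sketched in simplified form: smooth the discrete score distribution with a Gaussian kernel and pick the bandwidth minimizing the squared distance between the smoothed density and the observed score probabilities (an illustrative version of the first penalty term used in kernel equating; the score distribution and bandwidth grid below are made up for illustration):

```python
import math

def gaussian_density(x, scores, probs, h):
    """Gaussian-kernel-smoothed density of a discrete score distribution."""
    return sum(p * math.exp(-((x - s) / h) ** 2 / 2)
               / (h * math.sqrt(2 * math.pi))
               for s, p in zip(scores, probs))

def penalty(scores, probs, h):
    """Squared distance between the smoothed density and the
    observed score probabilities, evaluated at the score points."""
    return sum((gaussian_density(s, scores, probs, h) - p) ** 2
               for s, p in zip(scores, probs))

def select_bandwidth(scores, probs, grid):
    return min(grid, key=lambda h: penalty(scores, probs, h))

scores = list(range(11))                        # 0..10 point test
probs = [0.01, 0.03, 0.07, 0.12, 0.17, 0.20,    # observed relative freqs
         0.17, 0.12, 0.07, 0.03, 0.01]
grid = [round(0.3 + 0.1 * i, 1) for i in range(30)]
h_best = select_bandwidth(scores, probs, grid)
```

Double smoothing, studied in the article, replaces this single-fit criterion with a comparison against a pilot-smoothed estimate.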

  10. Linear and kernel methods for multi- and hypervariate change detection

    NASA Astrophysics Data System (ADS)

    Nielsen, Allan A.; Canty, Morton J.

    2010-10-01

    The iteratively re-weighted multivariate alteration detection (IR-MAD) algorithm may be used both for unsupervised change detection in multi- and hyperspectral remote sensing imagery as well as for automatic radiometric normalization of multi- or hypervariate multitemporal image sequences. Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA), kernel MAF and kernel MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In image analysis the Gram matrix is often prohibitively large (its size is the number of pixels in the image squared). In this case we may sub-sample the image and carry out the kernel eigenvalue analysis on a set of training data samples only. To obtain a transformed version of the entire image we then project all pixels, which we call the test data, mapped nonlinearly onto the primal eigenvectors. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization and kernel PCA/MAF/MNF transformations have been written

  11. Tracking flame base movement and interaction with ignition kernels using topological methods

    NASA Astrophysics Data System (ADS)

    Mascarenhas, A.; Grout, R. W.; Yoo, C. S.; Chen, J. H.

    2009-07-01

    We segment the stabilization region in a simulation of a lifted jet flame based on its topology induced by the OH mass fraction (Y_OH) field. Our segmentation method yields regions that correspond to the flame base and to potential auto-ignition kernels. We apply a region-overlap based tracking method to follow the flame base and the kernels over time, to study the evolution of kernels, and to detect when the kernels merge with the flame. The combination of our segmentation and tracking methods allows us to observe flame stabilization via merging between the flame base and kernels; we also obtain CH2O mass fraction (Y_CH2O) histories inside the kernels and detect a distinct decrease in radical concentration during transition to a developed flame.

  12. Decoding intracranial EEG data with multiple kernel learning method

    PubMed Central

    Schrouff, Jessica; Mourão-Miranda, Janaina; Phillips, Christophe; Parvizi, Josef

    2016-01-01

    Background: Machine learning models have been successfully applied to neuroimaging data to make predictions about behavioral and cognitive states of interest. While these multivariate methods have greatly advanced the field of neuroimaging, their application to electrophysiological data has been less common, especially in the analysis of human intracranial electroencephalography (iEEG, also known as electrocorticography or ECoG) data, which contains a rich spectrum of signals recorded from a relatively high number of recording sites. New method: In the present work, we introduce a novel approach to determine the contribution of different bandwidths of EEG signal in different recording sites across different experimental conditions using the Multiple Kernel Learning (MKL) method. Comparison with existing method: To validate and compare the usefulness of our approach, we applied this method to an ECoG dataset that was previously analysed and published with univariate methods. Results: Our findings proved the usefulness of the MKL method in detecting changes in the power of various frequency bands during a given task and selecting automatically the most contributory signal in the most contributory site(s) of recording. Conclusions: With a single computation, the contribution of each frequency band in each recording site in the estimated multivariate model can be highlighted, which then allows formulation of hypotheses that can be tested a posteriori with univariate methods if needed. PMID:26692030

  13. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    PubMed Central

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply enough accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established by considering this paradigm's principle. In feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. From experimental evaluation results with a pseudo-IEC user, our proposed model and method can enhance IEC search significantly. PMID:25879050

  14. Simulating non-Newtonian flows with the moving particle semi-implicit method with an SPH kernel

    NASA Astrophysics Data System (ADS)

    Xiang, Hao; Chen, Bin

    2015-02-01

    The moving particle semi-implicit (MPS) method and smoothed particle hydrodynamics (SPH) are commonly used mesh-free particle methods for free surface flows. The MPS method has advantages in incompressible flow simulation and simple programming. However, its crude kernel function is not accurate enough for discretizing the divergence of the shear stress tensor, owing to particle inconsistency, when the MPS method is extended to non-Newtonian flows. This paper presents an improved MPS method with an SPH kernel to simulate non-Newtonian flows. To improve the consistency of the partial derivatives, the SPH cubic spline kernel and the Taylor series expansion are combined with the MPS method. This approach is suitable for all non-Newtonian fluids that can be described by τ = μ(|γ̇|)Δ (where τ is the shear stress tensor, μ is the viscosity, |γ̇| is the shear rate, and Δ is the strain tensor), e.g., the Casson and Cross fluids. Two examples are simulated: Newtonian Poiseuille flow and the container filling process of a Cross fluid. The Poiseuille flow results are more accurate than those of the traditional MPS method, and the different filling processes agree well with previous results, which verifies the validity of the new algorithm. For the Cross fluid, the jet fracture length can be correlated with We^0.28 Fr^0.78 (where We is the Weber number and Fr is the Froude number).
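
    For reference, the SPH cubic spline kernel that the paper substitutes for the crude MPS kernel has a standard form; a minimal 1-D version (normalization constant 2/(3h), support 2h) can be written and checked for unit integral as follows. The discretization itself is not reproduced here.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1-D SPH cubic spline kernel W(r, h) with compact support 2h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)                 # 1-D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# A valid SPH kernel must integrate to one over its support [-2h, 2h].
h = 0.1
x = np.linspace(-2.0 * h, 2.0 * h, 20001)
integral = np.sum(cubic_spline_kernel(x, h)) * (x[1] - x[0])
print(integral)  # ~1.0
```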

  15. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    SciTech Connect

    Huang, Jessie Y.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.; Eklund, David; Childress, Nathan L.

    2013-12-15

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found
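
    The core C/S idea — dose as the convolution of energy released in the medium (TERMA) with an energy-deposition kernel — can be sketched in one dimension. The single spatially invariant kernel below is exactly the simplification the study interrogates, and all numbers are illustrative, not clinical.

```python
import numpy as np

dx = 0.1                                    # depth grid spacing (cm)
depth = np.arange(0.0, 30.0, dx)
terma = np.exp(-0.05 * depth)               # toy attenuated TERMA curve

# Toy forward-peaked deposition kernel on an offset grid (-5 .. 15 cm),
# normalized to unit sum so the convolution conserves released energy.
offset = np.arange(-5.0, 15.0, dx)
kernel = np.exp(-np.where(offset >= 0.0, 1.0, 4.0) * np.abs(offset))
kernel /= kernel.sum()

i0 = int(round(5.0 / dx))                   # index of zero offset in `kernel`
full = np.convolve(terma, kernel, mode="full")
dose = full[i0 : i0 + depth.size]           # dose on the original depth grid

# Forward-peaked transport produces the familiar buildup region:
# the dose maximum sits below the surface even though TERMA peaks at it.
print("peak dose depth (cm):", depth[np.argmax(dose)])
```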

  16. Improvements to the kernel function method of steady, subsonic lifting surface theory

    NASA Technical Reports Server (NTRS)

    Medan, R. T.

    1974-01-01

    The application of a kernel function lifting surface method to three-dimensional thin wing theory is discussed. A technique for determining the influence functions is presented. The technique is shown to require fewer quadrature points, while still calculating the influence functions accurately enough to guarantee convergence with an increasing number of spanwise quadrature points. The method also treats control points on the wing leading and trailing edges. The report introduces and employs an aspect of the kernel function method which apparently has never been used before and which significantly enhances the efficiency of the kernel function approach.

  17. On the collocation methods for singular integral equations with Hilbert kernel

    NASA Astrophysics Data System (ADS)

    Du, Jinyuan

    2009-06-01

    In the present paper, we introduce some singular integral operators, singular quadrature operators and discretization matrices of singular integral equations with Hilbert kernel. These results both improve the classical theory of singular integral equations and develop the theory of singular quadrature with Hilbert kernel. Using these results, a unified framework for various collocation methods for the numerical solution of singular integral equations with Hilbert kernel is given. Within this framework, the coincidence theorem of collocation methods is simple and obvious to obtain; the existence and convergence of the approximate solutions are then established on the basis of the coincidence theorem.

  18. LoCoH: Non-parametric kernel methods for constructing home ranges and utilization distributions

    USGS Publications Warehouse

    Getz, Wayne M.; Fortmann-Roe, Scott; Cross, Paul C.; Lyons, Andrew J.; Ryan, Sadie J.; Wilmers, Christopher C.

    2007-01-01

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: "fixed sphere-of-influence," or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an "adaptive sphere-of-influence," or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original "fixed-number-of-points," or k-LoCoH (all kernels constructed from k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).
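
    The k-LoCoH construction — one local convex hull per root point, built from that point and its k-1 nearest neighbors, then ordered by area so that small (dense) hulls form the inner isopleths — can be sketched as follows on synthetic points. The union/isopleth step of the full method is omitted, and all data are hypothetical.

```python
import numpy as np

def cross(o, a, b):
    # z-component of (a - o) x (b - o); positive for a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in order."""
    pts = sorted(set(map(tuple, pts)))
    if len(pts) <= 2:
        return pts
    def build(points):
        h = []
        for p in points:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    return build(pts)[:-1] + build(pts[::-1])[:-1]

def hull_area(h):
    # Shoelace formula for a simple polygon given as an ordered vertex list.
    x, y = np.array(h).T
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def k_locoh_hulls(points, k):
    """One local hull per root point, from the root and its k-1 neighbors."""
    hulls = []
    for p in points:
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]    # root itself + k-1 nearest
        hulls.append(convex_hull(nbrs))
    # LoCoH orders hulls by area (small = dense) before forming isopleths.
    return sorted(hulls, key=hull_area)

rng = np.random.default_rng(1)
pts = rng.normal(size=(80, 2))              # hypothetical relocation fixes
hulls = k_locoh_hulls(pts, k=6)
print("smallest / largest hull area:",
      hull_area(hulls[0]), hull_area(hulls[-1]))
```

    Because each hull is bounded by observed points, the estimate cannot spill across hard boundaries the way a parametric kernel does — the property the abstract emphasizes.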

  19. LoCoH: Nonparametric Kernel Methods for Constructing Home Ranges and Utilization Distributions

    PubMed Central

    Getz, Wayne M.; Fortmann-Roe, Scott; Wilmers, Christopher C.

    2007-01-01

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: “fixed sphere-of-influence,” or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an “adaptive sphere-of-influence,” or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original “fixed-number-of-points,” or k-LoCoH (all kernels constructed from k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu). PMID:17299587

  20. Development of a single kernel analysis method for detection of 2-acetyl-1-pyrroline in aromatic rice germplasm

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Solid-phase microextraction (SPME) in conjunction with GC/MS was used to distinguish non-aromatic rice (Oryza sativa L.) kernels from aromatic rice kernels. In this method, single kernels along with 10 µl of 0.1 ng 2,4,6-trimethylpyridine (TMP) were placed in sealed vials and heated to 80 °C for 18...

  1. A Non-smooth Newton Method for Multibody Dynamics

    SciTech Connect

    Erleben, K.; Ortiz, R.

    2008-09-01

    In this paper we deal with the simulation of rigid bodies. Rigid body dynamics has become very important for simulating rigid body motion in interactive applications, such as computer games or virtual reality. We present a novel way of computing contact forces using a Newton method. The contact problem is reformulated as a system of non-linear and non-smooth equations, and we solve this system using a non-smooth version of Newton's method. One of the main contributions of this paper is the reformulation of the complementarity problems, used to model impacts, as a system of equations that can be solved using traditional methods.
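
    The reformulation idea — turning a complementarity condition 0 ≤ x ⟂ f(x) ≥ 0 into a root-finding problem solvable by (semismooth) Newton iteration — can be illustrated on a scalar problem with the Fischer-Burmeister function, one standard choice for such reformulations. This is a generic sketch, not the paper's multibody solver.

```python
import math

def fischer_burmeister(a, b):
    # phi(a, b) = 0  <=>  a >= 0, b >= 0, and a * b = 0.
    return math.hypot(a, b) - a - b

def solve_complementarity(f, fprime, x0, tol=1e-12, max_iter=50):
    """Semismooth Newton on phi(x, f(x)) for the scalar problem
    0 <= x  perp  f(x) >= 0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        g = fischer_burmeister(x, fx)
        if abs(g) < tol:
            return x
        r = math.hypot(x, fx)
        if r == 0.0:                  # nonsmooth point: pick a subgradient
            da, db = 0.5, 0.5
        else:
            da, db = x / r, fx / r
        dg = (da - 1.0) + (db - 1.0) * fprime(x)
        x -= g / dg                   # Newton step on the reformulation
    return x

# Contact-style toy: f(x) = x - 1 has the complementarity solution x = 1
# (the "force" x is positive exactly where the "gap" f vanishes).
x_star = solve_complementarity(lambda x: x - 1.0, lambda x: 1.0, x0=5.0)
print(x_star)
```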

  2. Postprocessing Fourier spectral methods: The case of smooth solutions

    SciTech Connect

    Garcia-Archilla, B.; Novo, J.; Titi, E.S.

    1998-11-01

    A postprocessing technique to improve the accuracy of Galerkin methods, when applied to dissipative partial differential equations, is examined in the particular case of smooth solutions. Pseudospectral methods are shown to perform poorly. This performance is analyzed and a refined postprocessing technique is proposed.

  3. A Comprehensive Benchmark of Kernel Methods to Extract Protein–Protein Interactions from Literature

    PubMed Central

    Tikk, Domonkos; Thomas, Philippe; Palaga, Peter; Hakenberg, Jörg; Leser, Ulf

    2010-01-01

    The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein–protein interactions (PPIs) reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed: convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study shows that three

  4. Hyperbolic Divergence Cleaning Method for Godunov Smoothed Particle Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Iwasaki, K.; Inutsuka, S.-I.

    2013-04-01

    In this paper, we implement a divergence cleaning method into Godunov smoothed particle magnetohydrodynamics (GSPM). In the GSPM, to describe MHD shocks accurately, a Riemann solver is applied to the SPH method instead of artificial viscosity and resistivity that have been used in previous works. We confirmed that the divergence cleaning method reduces divergence errors significantly. The performance of the method is demonstrated in the numerical simulations of a strongly magnetized gas and bipolar outflow from the first core.

  5. A Simple Method for Solving the SVM Regularization Path for Semidefinite Kernels.

    PubMed

    Sentelle, Christopher G; Anagnostopoulos, Georgios C; Georgiopoulos, Michael

    2016-04-01

    The support vector machine (SVM) remains a popular classifier for its excellent generalization performance and applicability of kernel methods; however, it still requires tuning of a regularization parameter, C, to achieve optimal performance. Regularization path-following algorithms efficiently compute the solution at all possible values of the regularization parameter, relying on the fact that the SVM solution is piecewise linear in C. The SVMPath originally introduced by Hastie et al., while representing a significant theoretical contribution, does not work with semidefinite kernels. Ong et al. introduced an improved SVMPath (ISVMP) algorithm that addresses semidefinite kernels; however, it requires singular value decomposition or QR factorizations, and a linear programming solver is needed to find the next C value at each iteration. We introduce a simple implementation of the path-following algorithm that automatically handles semidefinite kernels without requiring singular-matrix detection, specialized factorizations, or an external solver. We provide theoretical results showing how this method resolves issues associated with the semidefinite kernel as well as discuss, in detail, the potential sources of degeneracy and cycling and how cycling is resolved. Moreover, we introduce an initialization method for unequal class sizes based upon artificial variables that works within the context of the existing path-following algorithm and does not require an external solver. Experiments compare performance with the ISVMP algorithm introduced by Ong et al. and show that the proposed method is competitive in terms of training time while also maintaining high accuracy. PMID:26011894

  6. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…

  7. Smoothness Evaluation of Cotton Nonwovens Using Quality Energy Method

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Nonwovens are finding enhanced use in next-to-skin applications such as wipes. The global wipe industry is estimated at somewhere between $6-8 billion. One important attribute of a wipe is its smoothness, as it determines its end-use applications. Although there are a number of methods and techniques ...

  8. A detailed error analysis of 13 kernel methods for protein–protein interaction extraction

    PubMed Central

    2013-01-01

    Background Kernel-based classification is the current state-of-the-art for extracting pairs of interacting proteins (PPIs) from free text. Various proposals have been put forward, which diverge especially in the specific kernel function, the type of input representation, and the feature sets. These proposals are regularly compared to each other regarding their overall performance on different gold standard corpora, but little is known about their respective performance on the instance level. Results We report on a detailed analysis of the shared characteristics and the differences between 13 current methods using five PPI corpora. We identified a large number of rather difficult (misclassified by most methods) and easy (correctly classified by most methods) PPIs. We show that kernels using the same input representation perform similarly on these pairs and that building ensembles using dissimilar kernels leads to significant performance gain. However, our analysis also reveals that characteristics shared between difficult pairs are few, which lowers the hope that new methods, if built along the same line as current ones, will deliver breakthroughs in extraction performance. Conclusions Our experiments show that current methods do not seem to do very well in capturing the shared characteristics of positive PPI pairs, which must also be attributed to the heterogeneity of the (still very few) available corpora. Our analysis suggests that performance improvements shall be sought after rather in novel feature sets than in novel kernel functions. PMID:23323857

  9. Immersed boundary smooth extension: A high-order method for solving PDE on arbitrary smooth domains using Fourier spectral methods

    NASA Astrophysics Data System (ADS)

    Stein, David B.; Guy, Robert D.; Thomases, Becca

    2016-01-01

    The Immersed Boundary method is a simple, efficient, and robust numerical scheme for solving PDE in general domains, yet it only achieves first-order spatial accuracy near embedded boundaries. In this paper, we introduce a new high-order numerical method which we call the Immersed Boundary Smooth Extension (IBSE) method. The IBSE method achieves high-order accuracy by smoothly extending the unknown solution of the PDE from a given smooth domain to a larger computational domain, enabling the use of simple Cartesian-grid discretizations (e.g. Fourier spectral methods). The method preserves much of the flexibility and robustness of the original IB method. In particular, it requires minimal geometric information to describe the boundary and relies only on convolution with regularized delta-functions to communicate information between the computational grid and the boundary. We present a fast algorithm for solving elliptic equations, which forms the basis for simple, high-order implicit-time methods for parabolic PDE and implicit-explicit methods for related nonlinear PDE. We apply the IBSE method to solve the Poisson, heat, Burgers', and Fitzhugh-Nagumo equations, and demonstrate fourth-order pointwise convergence for Dirichlet problems and third-order pointwise convergence for Neumann problems.

  10. Early discriminant method of infected kernel based on the erosion effects of laser ultrasonics

    NASA Astrophysics Data System (ADS)

    Fan, Chao

    2015-07-01

    To discriminate infected wheat kernels as early as possible, a new detection method for hidden insects, especially in their egg and larval stages, is put forward in this paper based on the erosion effect of laser ultrasonics. The surface of the grain is exposed to a pulsed laser, whose energy is absorbed and excites ultrasound, and the infected kernel can be recognized by appropriate signal analysis. First, the detection principle is given based on the classical wave equation and the experimental platform is established. Then, the detected ultrasonic signal is processed in both the time domain and the frequency domain using the FFT and DCT, and six significant features are selected as the characteristic parameters of the signal by stepwise discriminant analysis. Finally, a BP neural network is designed using these six parameters as input to classify the infected kernels from the normal ones. Numerous experiments were performed using twenty wheat varieties; the results show that infected kernels can be recognized effectively, with false negative and false positive errors of 12% and 9%, respectively. The discriminant method for infected kernels based on the erosion effect of laser ultrasonics is thus feasible.

  11. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    USGS Publications Warehouse

    Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.

    2014-01-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood
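
    A toy version of the Helmstetter-style adaptive smoothing described above — each epicenter's bandwidth set to the distance to its nth nearest neighbor, then a 2-D Gaussian of that width placed on each event — might look like the following. The catalog is synthetic and the n-value is illustrative, not the optimized value the study seeks.

```python
import numpy as np

def adaptive_bandwidths(epicenters, n):
    """Smoothing distance per event = distance to its n-th nearest neighbor."""
    d = np.linalg.norm(epicenters[:, None, :] - epicenters[None, :, :], axis=2)
    d.sort(axis=1)                          # column 0 is the zero self-distance
    return d[:, n]

def smoothed_rate(epicenters, grid, n=3):
    """Smoothed seismicity rate: one unit-mass 2-D Gaussian per epicenter,
    each with its own adaptive bandwidth."""
    h = adaptive_bandwidths(epicenters, n)
    r2 = np.sum((grid[:, None, :] - epicenters[None, :, :]) ** 2, axis=2)
    return np.sum(np.exp(-r2 / (2.0 * h**2)) / (2.0 * np.pi * h**2), axis=1)

rng = np.random.default_rng(2)
cluster = rng.normal(0.0, 0.05, size=(40, 2))    # dense cluster of epicenters
scatter = rng.uniform(-3.0, 3.0, size=(10, 2))   # sparse background seismicity
catalog = np.vstack([cluster, scatter])

h = adaptive_bandwidths(catalog, n=3)
rate = smoothed_rate(catalog, np.array([[0.0, 0.0], [2.5, 2.5]]), n=3)
# Dense seismicity gets short smoothing distances; sparse gets long ones,
# which is exactly the consequence the abstract describes.
print(h[:40].mean(), h[40:].mean())
```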

  12. Incorporation of measurement models in the IHCP: validation of methods for computing correction kernels

    NASA Astrophysics Data System (ADS)

    Woolley, J. W.; Wilson, H. B.; Woodbury, K. A.

    2008-11-01

    Thermocouples or other measuring devices are often embedded into a solid to provide data for an inverse calculation. It is well-documented that such installations will result in erroneous (biased) sensor readings, unless the thermal properties of the measurement wires and surrounding insulation can be carefully matched to those of the parent domain. Since this rarely can be done, or doing so is prohibitively expensive, an alternative is to include a sensor model in the solution of the inverse problem. In this paper we consider a technique in which a thermocouple model is used to generate a correction kernel for use in the inverse solver. The technique yields a kernel function with terms in the Laplace domain. The challenge of determining the values of the correction kernel function is the focus of this paper. An adaptation of the sequential function specification method [1] as well as numerical Laplace transform inversion techniques are considered for determination of the kernel function values. Each inversion method is evaluated with analytical test functions which provide simulated "measurements". Reconstruction of the undisturbed temperature from the "measured" temperature and the correction kernel is demonstrated.

  13. Chemical method for producing smooth surfaces on silicon wafers

    SciTech Connect

    Yu, Conrad

    2003-01-01

    An improved method for producing optically smooth surfaces in silicon wafers during wet chemical etching involves a pre-treatment rinse of the wafers before etching and a post-etching rinse. The pre-treatment with an organic solvent provides a well-wetted surface that ensures uniform mass transfer during etching, which results in optically smooth surfaces. The post-etching treatment with an acetic acid solution stops the etching instantly, preventing any uneven etching that leads to surface roughness. This method can be used to etch silicon surfaces to a depth of 200 µm or more, while the finished surfaces have a surface roughness of only 15-50 Å (RMS).

  14. A Fourier-series-based kernel-independent fast multipole method

    SciTech Connect

    Zhang Bo; Huang Jingfang; Pitsianis, Nikos P.; Sun Xiaobai

    2011-07-01

    We present in this paper a new kernel-independent fast multipole method (FMM), named FKI-FMM, for pairwise particle interactions with translation-invariant kernel functions. FKI-FMM creates, using numerical techniques, sufficiently accurate and compressive representations of a given kernel function over multi-scale interaction regions in the form of a truncated Fourier series. It also provides economical operators for the multipole-to-multipole, multipole-to-local, and local-to-local translations that are typical and essential in FMM algorithms. The multipole-to-local translation operator, in particular, is readily diagonal and does not dominate in arithmetic operations. FKI-FMM provides an alternative and competitive option, among other kernel-independent FMM algorithms, for an efficient application of the FMM, especially for applications where the kernel function consists of multi-physics and multi-scale components as those arising in recent studies of biological systems. We present the complexity analysis and demonstrate with experimental results the FKI-FMM performance in accuracy and efficiency.

  15. Kernel sparse coding method for automatic target recognition in infrared imagery using covariance descriptor

    NASA Astrophysics Data System (ADS)

    Yang, Chunwei; Yao, Junping; Sun, Dawei; Wang, Shicheng; Liu, Huaping

    2016-05-01

    Automatic target recognition in infrared imagery is a challenging problem. In this paper, a kernel sparse coding method for infrared target recognition using a covariance descriptor is proposed. First, a covariance descriptor combining gray intensity and gradient information of the infrared target is extracted as a feature representation. Then, because the covariance descriptor lies on a non-Euclidean manifold, kernel sparse coding theory is used to solve this problem. We verify the efficacy of the proposed algorithm in terms of the confusion matrices on real images consisting of seven categories of infrared vehicle targets.
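
    A region covariance descriptor of the kind referenced above — the covariance of per-pixel feature vectors over an image patch — might be computed as follows. The feature choice mirrors the abstract's "gray intensity and gradient information," but the exact feature set is an assumption.

```python
import numpy as np

def covariance_descriptor(patch):
    """Covariance of per-pixel features [x, y, I, |Ix|, |Iy|] over a patch.
    Returns a small symmetric positive semidefinite matrix that summarizes
    the patch regardless of its size."""
    gy, gx = np.gradient(patch.astype(float))     # image gradients
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]                   # pixel coordinates
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(feats)                          # 5 x 5 descriptor

rng = np.random.default_rng(3)
patch = rng.random((32, 32))                      # stand-in for an IR chip
C = covariance_descriptor(patch)
print(C.shape)
```

    Because such matrices live on the manifold of symmetric positive (semi)definite matrices rather than in a Euclidean space, ordinary sparse coding does not apply directly — which is the motivation the abstract gives for moving to kernel sparse coding.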

  16. Standard Errors of the Kernel Equating Methods under the Common-Item Design.

    ERIC Educational Resources Information Center

    Liou, Michelle; And Others

    This research derives simplified formulas for computing the standard error of the frequency estimation method for equating score distributions that are continuized using a uniform or Gaussian kernel function (P. W. Holland, B. F. King, and D. T. Thayer, 1989; Holland and Thayer, 1987). The simplified formulas are applicable to equating both the…

  17. Method for smoothing the surface of a protective coating

    DOEpatents

    Sangeeta, D.; Johnson, Curtis Alan; Nelson, Warren Arthur

    2001-01-01

    A method for smoothing the surface of a ceramic-based protective coating which exhibits roughness is disclosed. The method includes the steps of applying a ceramic-based slurry or gel coating to the protective coating surface; heating the slurry/gel coating to remove volatile material; and then further heating the slurry/gel coating to cure the coating and bond it to the underlying protective coating. The slurry/gel coating is often based on yttria-stabilized zirconia, and precursors of an oxide matrix. Related articles of manufacture are also described.

  18. The Continuized Log-Linear Method: An Alternative to the Kernel Method of Continuization in Test Equating

    ERIC Educational Resources Information Center

    Wang, Tianyou

    2008-01-01

    Von Davier, Holland, and Thayer (2004) laid out a five-step framework of test equating that can be applied to various data collection designs and equating methods. In the continuization step, they presented an adjusted Gaussian kernel method that preserves the first two moments. This article proposes an alternative continuization method that…
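
    Stripped of the moment-preserving adjustment that von Davier et al. apply, Gaussian-kernel continuization amounts to smoothing a discrete score distribution into a continuous CDF and equating by percentile matching. The following simplified sketch (bandwidth h chosen ad hoc, toy uniform score distributions) illustrates only that core step, not the full five-step framework.

```python
from math import erf, sqrt

def continuized_cdf(scores, probs, h):
    """Gaussian-kernel continuization of a discrete score pmf
    (simplified: omits the moment-preserving adjustment)."""
    def F(x):
        return sum(p * 0.5 * (1.0 + erf((x - s) / (h * sqrt(2.0))))
                   for s, p in zip(scores, probs))
    return F

def invert(G, target, lo, hi, iters=80):
    # Bisection: G is strictly increasing, so the inverse is well defined.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if G(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy forms X and Y: Y is X shifted up by 2 points, so equating through
# e(x) = G^{-1}(F(x)) should recover roughly e(x) = x + 2.
xs = list(range(0, 11))
px = [1.0 / 11.0] * 11
ys = [s + 2 for s in xs]
F = continuized_cdf(xs, px, h=0.6)
G = continuized_cdf(ys, px, h=0.6)

equated_5 = invert(G, F(5.0), lo=-10.0, hi=25.0)
print(equated_5)   # close to 7
```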

  19. A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.

    PubMed

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-09-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. PMID:26139508
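
    The low-rank idea behind fastKM — replacing a large nuisance kernel matrix with a truncated eigendecomposition so downstream estimation scales with the rank rather than the sample size — can be sketched generically. This is not the authors' algorithm; the kernel, rank, and data below are illustrative.

```python
import numpy as np

def low_rank_approx(K, r):
    """Best rank-r approximation (Eckart-Young) of a symmetric PSD kernel."""
    vals, vecs = np.linalg.eigh(K)              # eigenvalues in ascending order
    top = slice(K.shape[0] - r, None)
    return (vecs[:, top] * vals[top]) @ vecs[:, top].T

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))                   # 200 subjects, 3 covariates
sq = np.sum(X**2, axis=1)
K = np.exp(-0.5 * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))  # RBF kernel

K50 = low_rank_approx(K, 50)
rel_err = np.linalg.norm(K - K50) / np.linalg.norm(K)
print(rel_err)  # smooth-kernel spectra decay quickly, so this is small
```

    Working with the rank-r factors instead of the full matrix is what lets variance-component estimation avoid the cost of regularized or EM-based fitting on the full n x n kernel.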

  20. Modeling Electrokinetic Flows by the Smoothed Profile Method

    PubMed Central

    Luo, Xian; Beskok, Ali; Karniadakis, George Em

    2010-01-01

    We propose an efficient modeling method for electrokinetic flows based on the Smoothed Profile Method (SPM) [1–4] and spectral element discretizations. The new method allows for arbitrary differences in the electrical conductivities between the charged surfaces and the surrounding electrolyte solution. The electrokinetic forces are included in the flow equations so that the Poisson-Boltzmann and electric charge continuity equations are cast into forms suitable for SPM. The method is validated by benchmark problems of electroosmotic flow in straight channels and electrophoresis of charged cylinders. We also present simulation results of electrophoresis of charged microtubules, and show that the simulated electrophoretic mobility and anisotropy agree with the experimental values. PMID:20352076

  1. Arima model and exponential smoothing method: A comparison

    NASA Astrophysics Data System (ADS)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to produce forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, three time series are used in the comparison process: the price of crude palm oil (RM/tonne), the exchange rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the price of SMR 20 rubber (cents/kg). The forecasting accuracy of each model is then measured by examining the prediction errors using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model produces better predictions for long-term forecasting with limited data sources, but not for time series with a narrow range from one point to the next, as in the exchange rate series. Conversely, the Exponential Smoothing Method produces better forecasts for the exchange rate series, whose values span a narrow range, but not for longer forecasting periods.
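
    The three accuracy measures used above have simple closed forms. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def forecast_errors(actual, predicted):
    # Returns (MSE, MAPE in percent, MAD) for paired series
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    e = actual - predicted
    mse = np.mean(e ** 2)
    mape = np.mean(np.abs(e / actual)) * 100.0
    mad = np.mean(np.abs(e))
    return mse, mape, mad

mse, mape, mad = forecast_errors([100.0, 200.0], [110.0, 190.0])
```

    MAPE is scale-free (useful when comparing series with different units, as here), while MSE penalizes large errors more heavily than MAD.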

  2. A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System.

    PubMed

    Yuan, Xianfeng; Song, Mumin; Zhou, Fengyu; Chen, Zhumin; Li, Yan

    2015-01-01

    Wheeled robots have been successfully applied in many areas, such as industrial handling vehicles and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on a Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of the preliminary results, a novel ML-kernel based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by the fusion of the BPAs. Experimental results show that the proposed framework not only is capable of detecting and identifying faults in the robot driving system, but also has better stability and diagnosis accuracy than traditional methods. PMID:26229526
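
    The BPA-fusion step can be illustrated with Dempster's rule of combination for two basic probability assignments. A minimal sketch with a hypothetical two-element frame {'fault', 'ok'} (illustrative, not the paper's BPA definition):

```python
def dempster_combine(m1, m2):
    # Dempster's rule: multiply masses of every focal-element pair,
    # keep non-empty intersections, renormalize by (1 - conflict)
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

theta = frozenset({"fault", "ok"})
m1 = {frozenset({"fault"}): 0.8, theta: 0.2}   # classifier 1's BPA
m2 = {frozenset({"fault"}): 0.6, theta: 0.4}   # classifier 2's BPA
fused = dempster_combine(m1, m2)
```

    When two classifiers agree, the fused mass on the shared hypothesis exceeds either individual mass, which is why fusion sharpens the preliminary diagnosis.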

  3. A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System

    PubMed Central

    Yuan, Xianfeng; Song, Mumin; Zhou, Fengyu; Chen, Zhumin; Li, Yan

    2015-01-01

    The wheeled robots have been successfully applied in many aspects, such as industrial handling vehicles, and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of preliminary results, a novel ML-kernel based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is archived by the fusion of the BPAs. Experimental results show that the proposed framework not only is capable of detecting and identifying the faults in the robot driving system, but also has better performance in stability and diagnosis accuracy compared with the traditional methods. PMID:26229526

  4. The method of tailored sensitivity kernels for GRACE mass change estimates

    NASA Astrophysics Data System (ADS)

    Groh, Andreas; Horwath, Martin

    2016-04-01

    To infer mass changes (such as mass changes of an ice sheet) from time series of GRACE spherical harmonic solutions, two basic approaches (with many variants) exist: The regional integration approach (or direct approach) is based on surface mass changes (equivalent water height, EWH) from GRACE and integrates those with specific integration kernels. The forward modeling approach (or mascon approach, or inverse approach) prescribes a finite set of mass change patterns and adjusts the amplitudes of those patterns (in a least squares sense) to the GRACE gravity field changes. The present study reviews the theoretical framework of both approaches. We recall that forward modeling approaches ultimately estimate mass changes by linear functionals of the gravity field changes. Therefore, they implicitly apply sensitivity kernels and may be considered as special realizations of the regional integration approach. We show examples for sensitivity kernels intrinsic to forward modeling approaches. We then propose to directly tailor sensitivity kernels (or in other words: mass change estimators) by a formal optimization procedure that minimizes the sum of propagated GRACE solution errors and leakage errors. This approach involves the incorporation of information on the structure of GRACE errors and the structure of those mass change signals that are most relevant for leakage errors. We discuss the realization of this method, as applied within the ESA "Antarctic Ice Sheet CCI (Climate Change Initiative)" project. Finally, results for the Antarctic Ice Sheet in terms of time series of mass changes of individual drainage basins and time series of gridded EWH changes are presented.

  5. Verification and large deformation analysis using the reproducing kernel particle method

    SciTech Connect

    Beckwith, Frank

    2015-09-01

    The reproducing kernel particle method (RKPM) is a meshless method used to solve general boundary value problems using the principle of virtual work. RKPM corrects the kernel approximation by introducing reproducing conditions which force the method to be complete to arbitrary-order polynomials selected by the user. Effort in recent years has led to the implementation of RKPM within the Sierra/SM physics software framework. The purpose of this report is to investigate the convergence of RKPM for verification and validation purposes, as well as to demonstrate the large deformation capability of RKPM in problems where the finite element method is known to experience difficulty. Results from analyses using RKPM are compared against finite element analysis. A host of issues associated with RKPM are identified, and a number of potential improvements are discussed for future work.
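
    The reproducing conditions can be illustrated in one dimension: a raw window function is corrected through a moment matrix so the resulting shape functions exactly reproduce constants and linear fields. A minimal sketch with an assumed hat window (not the Sierra/SM implementation):

```python
import numpy as np

def hat(u, a):
    # Simple hat (tent) window with support radius a
    return np.maximum(1.0 - np.abs(u) / a, 0.0)

def rkpm_shapes(x, nodes, a):
    # Corrected kernel: c solves M c = (1, 0) so that the shape functions
    # satisfy sum(psi) = 1 and sum(psi * (x - x_I)) = 0 (linear completeness)
    w = hat(x - nodes, a)
    H = np.vstack([np.ones_like(nodes), x - nodes])  # basis (1, x - x_I)
    M = (H * w) @ H.T                                # 2x2 moment matrix
    c = np.linalg.solve(M, np.array([1.0, 0.0]))
    return (c @ H) * w

nodes = np.linspace(0.0, 1.0, 11)
psi = rkpm_shapes(0.37, nodes, a=0.25)
```

    The same construction extends to higher-order polynomials by enlarging the basis vector and moment matrix.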

  6. Scalable Kernel Methods and Algorithms for General Sequence Analysis

    ERIC Educational Resources Information Center

    Kuksa, Pavel

    2011-01-01

    Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as the document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…

  7. Single corn kernel aflatoxin B1 extraction and analysis method

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aflatoxins are highly carcinogenic compounds produced by the fungus Aspergillus flavus. Aspergillus flavus is a phytopathogenic fungus that commonly infects crops such as cotton, peanuts, and maize. The goal was to design an effective sample preparation method and analysis for the extraction of afla...

  8. a Kernel Method Based on Topic Model for Very High Spatial Resolution (vhsr) Remote Sensing Image Classification

    NASA Astrophysics Data System (ADS)

    Wu, Linmei; Shen, Li; Li, Zhipeng

    2016-06-01

    A kernel-based method for very high spatial resolution remote sensing image classification is proposed in this article. The new kernel method is based on spectral-spatial information as well as structure information, which is acquired from a topic model, the Latent Dirichlet Allocation model. The final kernel function is defined as K = u1Kspec + u2Kspat + u3Kstru, in which Kspec, Kspat, and Kstru are radial basis function (RBF) kernels and u1 + u2 + u3 = 1. In the experiment, a comparison with three other kernel methods, namely the spectral-based, the spectral- and spatial-based, and the spectral- and structure-based methods, is provided for a panchromatic QuickBird image of a suburban area with a size of 900 × 900 pixels and a spatial resolution of 0.6 m. The results show that the overall accuracy of the spectral- and structure-based kernel method is 80 %, which is higher than that of the spectral-based kernel method (67 %) and the spectral- and spatial-based kernel method (74 %). Moreover, the accuracy of the proposed composite kernel method, which jointly uses the spectral, spatial, and structure information, is the highest of the four methods, reaching 83 %. The experimental results also verify the validity of the expression of structure information for the remote sensing image.
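
    The composite kernel K = u1Kspec + u2Kspat + u3Kstru is a convex combination of RBF kernels, which remains a valid (positive semidefinite) kernel. A minimal sketch with random stand-in feature blocks (the block shapes and weights are illustrative assumptions):

```python
import numpy as np

def rbf(X, gamma):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def composite_kernel(feature_blocks, weights, gammas):
    # K = sum_i u_i * K_i with sum(u_i) = 1, each K_i an RBF kernel
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(u * rbf(X, g) for u, X, g in zip(weights, feature_blocks, gammas))

rng = np.random.default_rng(0)
# Stand-ins for spectral, spatial, and structure (topic) features of 20 pixels
blocks = [rng.normal(size=(20, d)) for d in (8, 4, 6)]
K = composite_kernel(blocks, weights=(0.4, 0.3, 0.3), gammas=(1.0, 1.0, 1.0))
```

    The resulting matrix can be passed directly to any kernel classifier, e.g. an SVM with a precomputed kernel.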

  9. Noninvasive reconstruction of cardiac transmembrane potentials using a kernelized extreme learning method

    NASA Astrophysics Data System (ADS)

    Jiang, Mingfeng; Zhang, Heng; Zhu, Lingyan; Cao, Li; Wang, Yaming; Xia, Ling; Gong, Yinglan

    2015-04-01

    Non-invasively reconstructing the cardiac transmembrane potentials (TMPs) from body surface potentials can be cast as a regression problem. The support vector regression (SVR) method is often used to solve such regression problems; however, the SVR training algorithm is usually computationally intensive. In this paper, another learning algorithm, termed the extreme learning machine (ELM), is proposed to reconstruct the cardiac transmembrane potentials. Moreover, ELM can be extended to single-hidden-layer feedforward neural networks with a kernel matrix (kernelized ELM), which can achieve good generalization performance at a fast learning speed. Based on realistic heart-torso models, a normal and two abnormal ventricular activation cases are applied for training and testing the regression model. The experimental results show that the ELM method achieves better regression performance than the single SVR method in terms of TMP reconstruction accuracy and reconstruction speed. Moreover, compared with the ELM method, the kernelized ELM method features good approximation and generalization ability when reconstructing the TMPs.
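
    Kernelized ELM reduces training to one linear solve against a regularized kernel matrix. A minimal regression sketch on synthetic 1-D data (the RBF kernel, regularization constant C, and toy target are assumptions, not the paper's setup):

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(K, targets, C=100.0):
    # Kernelized ELM output weights: solve (I/C + K) alpha = targets
    n = K.shape[0]
    return np.linalg.solve(np.eye(n) / C + K, targets)

X = np.linspace(0.0, 3.0, 20)[:, None]
y = np.sin(X).ravel()
K = rbf(X, X)
alpha = kelm_fit(K, y)
train_pred = K @ alpha                 # predictions on the training inputs
```

    Unlike SVR, there is no iterative quadratic program: training cost is a single dense solve, which is the speed advantage the abstract refers to.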

  10. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
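
    The LSCV criterion used to pick the smoothing amount minimizes an unbiased estimate of integrated squared error. A minimal 1-D sketch with a Gaussian kernel (home-range work is 2-D; the 1-D form keeps the closed-form integral simple):

```python
import numpy as np

def gauss(u, h):
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2.0 * np.pi))

def lscv_score(x, h):
    # LSCV(h) = integral of fhat^2 (closed form for Gaussian kernels:
    # the convolution has bandwidth h*sqrt(2)) minus twice the mean
    # leave-one-out density at the sample points
    n = len(x)
    d = x[:, None] - x[None, :]
    int_f2 = gauss(d, h * np.sqrt(2.0)).sum() / n**2
    loo = (gauss(d, h).sum(axis=0) - gauss(0.0, h)) / (n - 1)
    return int_f2 - 2.0 * loo.mean()

def lscv_bandwidth(x, grid):
    return grid[int(np.argmin([lscv_score(x, h) for h in grid]))]

x = np.random.default_rng(0).normal(size=200)
grid = np.linspace(0.05, 1.0, 20)
h_star = lscv_bandwidth(x, grid)
```

    In practice LSCV scores are noisy for small samples, which is consistent with the paper's recommendation of at least 30 (preferably 50 or more) observations per animal.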

  11. Discretization errors associated with Reproducing Kernel Methods: One-dimensional domains

    SciTech Connect

    Voth, T.E.; Christon, M.A.

    2000-01-10

    The Reproducing Kernel Particle Method (RKPM) is a discretization technique for partial differential equations that uses the method of weighted residuals, classical reproducing kernel theory and modified kernels to produce either ``mesh-free'' or ``mesh-full'' methods. Although RKPM has many appealing attributes, the method is new, and its numerical performance is just beginning to be quantified. In order to address the numerical performance of RKPM, von Neumann analysis is performed for semi-discretizations of three model one-dimensional PDEs. The von Neumann analysis results are used to examine the global and asymptotic behavior of the semi-discretizations. The model PDEs considered for this analysis include the parabolic and hyperbolic (first- and second-order wave) equations. Numerical diffusivity for the former and phase speed for the latter are presented over the range of discrete wavenumbers and in an asymptotic sense as the particle spacing tends to zero. Group speed is also presented for the hyperbolic problems. Excellent diffusive and dispersive characteristics are observed when a consistent mass matrix formulation is used with the proper choice of refinement parameter. In contrast, the row-sum lumped mass matrix formulation severely degrades performance. The asymptotic analysis indicates that very good rates of convergence are possible when the consistent mass matrix formulation is used with an appropriate choice of refinement parameter.

  12. Using nonlinear kernels in seismic tomography: go beyond gradient methods

    NASA Astrophysics Data System (ADS)

    Wu, R.

    2013-05-01

    In quasi-linear inversion, a nonlinear problem is typically solved iteratively, and at each step the nonlinear problem is linearized through the use of a linear functional derivative, the Fréchet derivative. Higher order terms are generally assumed to be insignificant and neglected. The linearization approach leads to the popular gradient method of seismic inversion. However, for the real Earth, the wave equation (and the real wave propagation) is strongly nonlinear with respect to the medium parameter perturbations. Therefore, the quasi-linear inversion may have a serious convergence problem for strong perturbations. In this presentation I will compare the convergence properties of the Taylor-Fréchet series and the renormalized Fréchet series, the De Wolf approximation, and illustrate the improved convergence property with numerical examples. I will also discuss the application of the nonlinear partial derivative to least-squares waveform inversion. References: Bonnans, J., Gilbert, J., Lemarechal, C. and Sagastizabal, C., 2006, Numerical Optimization, Springer. Wu, R.S. and Y. Zheng, 2012, Nonlinear Fréchet derivative and its De Wolf approximation, Expanded Abstracts of the Society of Exploration Geophysicists, SI 8.1.

  13. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice

    PubMed Central

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel “trick” concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Some parametric and kernel methods, namely the least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR), and RKHS regression, were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available. PMID:27555865
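
    The equivalence between primal Ridge regression and its kernelized (dual) form, which underlies the kernel "trick" discussed above, can be checked numerically. A minimal sketch with a linear kernel and synthetic data (not the KRMM package itself):

```python
import numpy as np

def kernel_ridge_alpha(K, y, lam):
    # Dual (kernel) ridge solution: alpha = (K + lam I)^{-1} y
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
y = rng.normal(size=30)
lam = 2.0

# Dual/kernel path with a linear kernel K = X X^T (GBLUP-like)
K = X @ X.T
pred_dual = K @ kernel_ridge_alpha(K, y, lam)

# Primal ridge path: beta = (X^T X + lam I)^{-1} X^T y
beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
pred_primal = X @ beta
```

    Swapping the linear kernel for a Gaussian or ANOVA kernel changes only the K matrix, which is exactly how RKHS regression captures epistatic (non-additive) architectures without altering the solver.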

  14. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    PubMed

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Some parametric and kernel methods, namely the least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR), and RKHS regression, were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available. PMID:27555865

  15. Interferogram interpolation method research on TSMFTIS based on kernel regression with relative deviation

    NASA Astrophysics Data System (ADS)

    Huang, Fengzhen; Li, Jingzhen; Cao, Jun

    2015-02-01

    Temporally and Spatially Modulated Fourier Transform Imaging Spectrometer (TSMFTIS) is a new imaging spectrometer without moving mirrors or slits. As applied in remote sensing, TSMFTIS relies on the push-broom motion of the flying platform to obtain the interferogram of the detected target. If the moving state of the flying platform changes during the imaging process, the target interferogram extracted from the remote sensing image sequence deviates from the ideal interferogram, and the recovered target spectrum then fails to reflect the real characteristics of the ground target. Therefore, in order to achieve a high-precision spectrum recovery of the detected target, the geometric position of the target point on the TSMFTIS image surface can be calculated with a sub-pixel image registration method, and the real point interferogram of the target can be obtained with an image interpolation method. The core idea of the interpolation methods (nearest, bilinear, cubic, etc.) is to obtain the grey value of the point to be interpolated by weighting the grey values of the surrounding pixels with a kernel function constructed from the distance between each surrounding pixel and the point to be interpolated. This paper adopts a Gaussian kernel regression model and presents a kernel function that combines grey information, through the relative deviation, with distance information; the kernel function is controlled by the degree of deviation between the grey values of the surrounding pixels and the mean value, so that the weights are adjusted adaptively. The simulation adopts partial spectrum data obtained by the pushbroom hyperspectral imager (PHI) as the target spectrum, obtains the successively push-broomed motion error image in combination with the relevant parameters of an actual aviation platform, then obtains the interferogram of the target point with the above interpolation method, and finally recovers the spectrogram with the nonuniform fast
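
    The weighting idea described above can be sketched as a Gaussian kernel regression whose weights combine spatial distance with relative grey-level deviation. This is a hypothetical reconstruction for illustration (the function name, bandwidths, and deviation definition are assumptions, not the paper's exact kernel):

```python
import numpy as np

def adaptive_kernel_interp(values, coords, target, h_dist=1.0, h_grey=0.2):
    # Gaussian weights combine spatial distance to the target point with the
    # relative deviation of each pixel's grey value from the local mean
    d2 = ((coords - target) ** 2).sum(axis=1)
    mean_grey = values.mean()
    rel_dev = (values - mean_grey) / (abs(mean_grey) + 1e-12)
    w = np.exp(-d2 / (2.0 * h_dist**2)) * np.exp(-rel_dev**2 / (2.0 * h_grey**2))
    return (w * values).sum() / w.sum()

# Four neighbouring pixels with identical grey values around a sub-pixel target
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([5.0, 5.0, 5.0, 5.0])
g = adaptive_kernel_interp(values, coords, target=np.array([0.4, 0.6]))
```

    Pixels whose grey values deviate strongly from the local mean are down-weighted, which is the self-adaptive behaviour the abstract describes.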

  16. A new approach to a maximum à posteriori-based kernel classification method.

    PubMed

    Nopriadi; Yamashita, Yukihiko

    2012-09-01

    This paper presents a new approach to maximum a posteriori (MAP)-based classification, specifically, MAP-based kernel classification trained by linear programming (MAPLP). Unlike traditional MAP-based classifiers, MAPLP does not directly estimate a posterior probability for classification. Instead, it introduces a kernelized function to an objective function that behaves similarly to a MAP-based classifier. To evaluate the performance of MAPLP, a binary classification experiment was performed with 13 datasets. The results of this experiment are compared with those from conventional MAP-based kernel classifiers and also from other state-of-the-art classification methods. It shows that MAPLP performs promisingly against the other classification methods. It is argued that the proposed approach makes a significant contribution to MAP-based classification research; the approach widens the freedom to choose an objective function, is not constrained to the strict Bayesian sense, and can be solved by linear programming. A substantial advantage of the proposed approach is that the objective function is undemanding, having only a single parameter. This simplicity thus allows for further research development in the future. PMID:22721808

  17. A new method by steering kernel-based Richardson-Lucy algorithm for neutron imaging restoration

    NASA Astrophysics Data System (ADS)

    Qiao, Shuang; Wang, Qiao; Sun, Jia-ning; Huang, Ji-peng

    2014-01-01

    Motivated by industrial applications, neutron radiography has become a powerful tool for non-destructive investigation. However, as a result of the combined effects of neutron flux, beam collimation, the limited spatial resolution of the detector, scattering, and other factors, neutron images are severely degraded by blur and noise. To deal with this, we present a novel restoration method that integrates steering kernel regression into the Richardson-Lucy approach and is capable of suppressing noise while efficiently restoring details of the blurred imaging result. Experimental results show that, compared with other methods, the proposed method improves restoration quality both visually and quantitatively.
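
    The classic Richardson-Lucy iteration that the proposed method builds on multiplies the current estimate by the back-projected ratio of observed to re-blurred data. A minimal 1-D sketch without the steering-kernel regularization step:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    # Multiplicative RL update: est *= backproject(observed / reblurred)
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    est = np.full(observed.shape, observed.mean(), dtype=float)
    for _ in range(iterations):
        reblurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

# Recover a point source blurred by a known 3-tap PSF
truth = np.zeros(21)
truth[10] = 1.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
```

    The multiplicative form keeps the estimate non-negative and conserves total flux, but it amplifies noise after many iterations, which is the weakness the steering-kernel step is meant to address.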

  18. A Kernel-Free Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 4

    NASA Technical Reports Server (NTRS)

    Park, Young-Keun; Fahrenthold, Eric P.

    2004-01-01

    An improved hybrid particle-finite element method has been developed for the simulation of hypervelocity impact problems. Unlike alternative methods, the revised formulation computes the density without reference to any kernel or interpolation functions, for either the density or the rate of dilatation. This simplifies the state space model and leads to a significant reduction in computational cost. The improved method introduces internal energy variables as generalized coordinates in a new formulation of the thermomechanical Lagrange equations. Example problems show good agreement with exact solutions in one dimension and good agreement with experimental data in a three dimensional simulation.

  19. A novel cortical thickness estimation method based on volumetric Laplace-Beltrami operator and heat kernel.

    PubMed

    Wang, Gang; Zhang, Xiaofeng; Su, Qingtang; Shi, Jie; Caselli, Richard J; Wang, Yalin

    2015-05-01

    Cortical thickness estimation in magnetic resonance imaging (MRI) is an important technique for research on brain development and neurodegenerative diseases. This paper presents a heat kernel based cortical thickness estimation algorithm, which is driven by the graph spectrum and the heat kernel theory, to capture the gray matter geometry information from the in vivo brain magnetic resonance (MR) images. First, we construct a tetrahedral mesh that matches the MR images and reflects the inherent geometric characteristics. Second, the harmonic field is computed by the volumetric Laplace-Beltrami operator and the direction of the streamline is obtained by tracing the maximum heat transfer probability based on the heat kernel diffusion. Thereby we can calculate the cortical thickness information between corresponding points on the pial and white matter surfaces. The new method relies on intrinsic brain geometry structure and the computation is robust and accurate. To validate our algorithm, we apply it to study the thickness differences associated with Alzheimer's disease (AD) and mild cognitive impairment (MCI) on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Our preliminary experimental results on 151 subjects (51 AD, 45 MCI, 55 controls) show that the new algorithm may successfully detect statistically significant difference among patients of AD, MCI and healthy control subjects. Our computational framework is efficient and very general. It has the potential to be used for thickness estimation on any biological structures with clearly defined inner and outer surfaces. PMID:25700360
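
    The heat kernel used to trace maximum heat-transfer probability can be illustrated on a small graph: for a Laplacian L, the heat kernel is exp(-tL), computable from the eigendecomposition. A minimal sketch on a 4-node path graph (not the paper's tetrahedral-mesh operator):

```python
import numpy as np

def heat_kernel(L, t):
    # Heat kernel H_t = exp(-t L) via eigendecomposition of a symmetric Laplacian
    lam, V = np.linalg.eigh(L)
    return (V * np.exp(-t * lam)) @ V.T

# Path graph on 4 nodes: Laplacian L = D - A
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
H = heat_kernel(L, t=0.5)
```

    Row i of H gives the heat-diffusion probabilities from node i after time t; following the neighbor with maximum probability is the discrete analogue of the streamline tracing described above.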

  20. A Novel Cortical Thickness Estimation Method based on Volumetric Laplace-Beltrami Operator and Heat Kernel

    PubMed Central

    Wang, Gang; Zhang, Xiaofeng; Su, Qingtang; Shi, Jie; Caselli, Richard J.; Wang, Yalin

    2015-01-01

    Cortical thickness estimation in magnetic resonance imaging (MRI) is an important technique for research on brain development and neurodegenerative diseases. This paper presents a heat kernel based cortical thickness estimation algorithm, which is driven by the graph spectrum and the heat kernel theory, to capture the grey matter geometry information from the in vivo brain magnetic resonance (MR) images. First, we construct a tetrahedral mesh that matches the MR images and reflects the inherent geometric characteristics. Second, the harmonic field is computed by the volumetric Laplace-Beltrami operator and the direction of the streamline is obtained by tracing the maximum heat transfer probability based on the heat kernel diffusion. Thereby we can calculate the cortical thickness information between corresponding points on the pial and white matter surfaces. The new method relies on intrinsic brain geometry structure and the computation is robust and accurate. To validate our algorithm, we apply it to study the thickness differences associated with Alzheimer’s disease (AD) and mild cognitive impairment (MCI) on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. Our preliminary experimental results on 151 subjects (51 AD, 45 MCI, 55 controls) show that the new algorithm may successfully detect statistically significant difference among patients of AD, MCI and healthy control subjects. Our computational framework is efficient and very general. It has the potential to be used for thickness estimation on any biological structures with clearly defined inner and outer surfaces. PMID:25700360

  1. A Particle-Particle Collision Model for Smoothed Profile Method

    NASA Astrophysics Data System (ADS)

    Mohaghegh, Fazlolah; Mousel, John; Udaykumar, H. S.

    2014-11-01

    Smoothed Profile Method (SPM) is a type of continuous forcing approach that couples particles to the fluid through a forcing term. The fluid-structure interaction occurs through a diffuse interface, which avoids a sudden transition from solid to fluid. The SPM simulation, as a monolithic approach, uses an indicator function field in the whole domain, based on the distance from each particle's boundary, to detect where particle-particle interactions can occur. A soft-sphere potential based on the indicator function field is defined to add an artificial pressure to the flow pressure in potentially overlapping regions; the resulting repulsion force prevents overlap. A study of two particles that impulsively start moving in an initially uniform flow shows that the particle in the wake of the other has less acceleration, leading to frequent collisions. Various Reynolds numbers and initial distances have been chosen to test the robustness of the method. A study of the drafting-kissing-tumbling motion of two cylindrical particles shows a deviation from the benchmarks due to the lack of rotation modeling. The method is shown to be accurate enough for simulating particle-particle collisions and can easily be extended to particle-wall modeling and to non-spherical particles.
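
    A soft-sphere repulsion of the kind described above can be sketched as a force proportional to the overlap depth along the line of centers. This is a generic collision-model sketch (the stiffness constant and linear force law are assumptions, not the paper's indicator-function potential):

```python
import numpy as np

def soft_sphere_force(x1, x2, r1, r2, k=100.0):
    # Repulsive force on particle 1 when the smoothed profiles overlap;
    # magnitude grows linearly with overlap depth along the center line
    d = x2 - x1
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0 or dist == 0.0:
        return np.zeros_like(d)
    return -k * overlap * d / dist

# Two unit-spaced cylinders of radius 0.6 overlap by 0.2
f = soft_sphere_force(np.array([0.0, 0.0]), np.array([1.0, 0.0]), 0.6, 0.6)
```

    Because the force vanishes smoothly as the overlap closes, it can be added to the SPM body force without introducing discontinuities at the diffuse interface.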

  2. Weighted Wilcoxon-type Smoothly Clipped Absolute Deviation Method

    PubMed Central

    Wang, Lan; Li, Runze

    2009-01-01

    Shrinkage-type variable selection procedures have recently seen increasing applications in biomedical research. However, their performance can be adversely influenced by outliers in either the response or the covariate space. This paper proposes a weighted Wilcoxon-type smoothly clipped absolute deviation (WW-SCAD) method, which deals with robust variable selection and robust estimation simultaneously. The new procedure can be conveniently implemented with the statistical software R. We establish that the WW-SCAD correctly identifies the set of zero coefficients with probability approaching one and estimates the nonzero coefficients at the rate n^(-1/2). Moreover, with appropriately chosen weights the WW-SCAD is robust with respect to outliers in both the x and y directions. The important special case with constant weights yields an oracle-type estimator with high efficiency in the presence of heavy-tailed random errors. The robustness of the WW-SCAD is partly justified by its asymptotic performance under local shrinking contamination. We propose a BIC-type tuning parameter selector for the WW-SCAD. The performance of the WW-SCAD is demonstrated via simulations and by an application to a study that investigates the effects of personal characteristics and dietary factors on plasma beta-carotene level. PMID:18647294
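    For reference, the smoothly clipped absolute deviation (SCAD) penalty that the WW-SCAD method builds on can be written down directly. This is the standard Fan-Li penalty with its conventional a = 3.7, not code from the paper:

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty of Fan & Li (2001); a = 3.7 is the conventional choice.
    Piecewise: linear near 0, quadratic taper, then constant (no shrinkage
    of large coefficients, which is what yields the oracle property)."""
    b = np.abs(beta)
    linear = lam * b
    taper = (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1))
    flat = lam**2 * (a + 1) / 2
    return np.where(b <= lam, linear, np.where(b <= a * lam, taper, flat))

lam = 1.0
print(scad_penalty(np.array([0.5, 2.0, 10.0]), lam))
```

    Small coefficients are penalized like the lasso, while coefficients beyond a·λ incur a constant penalty, so they are retained without bias.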

  3. A high-order Legendre-WENO kernel density function method for modeling disperse flows

    NASA Astrophysics Data System (ADS)

    Smith, Timothy; Pantano, Carlos

    2015-11-01

    We present a high-order kernel density function (KDF) method for disperse flow. The numerical method used to solve the system of hyperbolic equations utilizes a Roe-like update for equations in non-conservation form. We present the extension of the low-order method to high order using the Legendre-WENO method and demonstrate the improved capability of the method to predict statistics of disperse flows in an accurate, consistent and efficient manner. By construction, the KDF method already enforces many realizability conditions, but others remain. The proposed method also addresses these remaining constraints, and its performance will be discussed. This project was funded by NSF project NSF-DMS 1318161.

  4. Impact of beam smoothing method on direct drive target performance for the NIF

    SciTech Connect

    Rothenberg, J.E.; Weber, S.V.

    1996-11-01

    The impact of the smoothing method on the performance of a direct drive target is modeled and examined in terms of its l-mode spectrum. In particular, two classes of smoothing methods are compared: smoothing by spectral dispersion (SSD) and the induced spatial incoherence (ISI) method. It is found that SSD using sinusoidal phase modulation (FM) results in poor smoothing at low l-modes and therefore inferior target performance at both peak velocity and ignition. Modeling of the hydrodynamic nonlinearity shows that saturation tends to reduce the difference in target performance between the smoothing methods considered. However, SSD with more generalized phase modulation yields a smoothed spatial spectrum, and therefore target performance, identical to that obtained with ISI or similar methods, assuming random phase plates are present in both methods and beam divergence is identical.

  5. Kernel sweeping method for exact diagonalization of spin models - numerical computation of a CSL Hamiltonian

    NASA Astrophysics Data System (ADS)

    Schroeter, Darrell; Kapit, Eliot; Thomale, Ronny; Greiter, Martin

    2007-03-01

    We have recently constructed a Hamiltonian that singles out the chiral spin liquid on a square lattice with periodic boundary conditions as the exact and, apart from the two-fold topological degeneracy, unique ground state [1]. The talk will present a kernel-sweeping method that greatly reduces the numerical effort required to perform the exact diagonalization of the Hamiltonian. Results from the calculation of the model on a 4x4 lattice, including the spectrum of the model, will be presented. [1] D. F. Schroeter, E. Kapit, R. Thomale, and M. Greiter, Phys. Rev. Lett. in review.

  6. A unified kernel regression for diffusion wavelets on manifolds detects aging-related changes in the amygdala and hippocampus.

    PubMed

    Chung, Moo K; Schaefer, Stacey M; Van Reekum, Carien M; Peschke-Schmitz, Lara; Sutterer, Mattew J; Davidson, Richard J

    2014-01-01

    We present a new unified kernel regression framework on manifolds. Starting with a symmetric positive definite kernel, we formulate a new bivariate kernel regression framework that is related to heat diffusion, kernel smoothing and the recently popular diffusion wavelets. Various properties and the performance of the proposed kernel regression framework are demonstrated. The method is subsequently applied to investigate the influence of age and gender on the human amygdala and hippocampus shapes. We detected a significant age effect in the posterior regions of the hippocampi, while no gender effect was present. PMID:25485452

  7. Automated endmember determination and adaptive spectral mixture analysis using kernel methods

    NASA Astrophysics Data System (ADS)

    Rand, Robert S.; Banerjee, Amit; Broadwater, Joshua

    2013-09-01

    Various phenomena in geographic regions cause a scene to contain spectrally mixed pixels. The mixtures may be linear or nonlinear. It could simply be that the pixel size of a sensor is too large, so many pixels contain patches of different materials within them (linear), or there could be microscopic mixtures and multiple scattering occurring within pixels (non-linear). Often enough, scenes may contain both linear and non-linear mixing on a pixel-by-pixel basis. Furthermore, appropriate endmembers in a scene are not always easy to determine. A reference spectral library of materials may or may not be available; yet even if a library is available, using it directly for spectral unmixing may not always be fruitful. This study investigates a generalized kernel-based method for spectral unmixing that attempts to determine whether each pixel in a scene is linear or non-linear, and adapts to compute a mixture model at each pixel accordingly. The effort also investigates a kernel-based support vector method for determining spectral endmembers in a scene. Two scenes of hyperspectral imagery calibrated to reflectance are used to validate the methods. We test the approaches using a HyMAP scene collected over the Waimanalo Bay region in Oahu, Hawaii, as well as an AVIRIS scene collected over the oil spill region in the Gulf of Mexico during the Deepwater Horizon oil incident.

  8. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks.

    PubMed

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-01-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparse rate of model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparse rate of model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298

  9. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks

    PubMed Central

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-01-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparse rate of model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparse rate of model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298

  10. Methods and electrolytes for electrodeposition of smooth films

    SciTech Connect

    Zhang, Jiguang; Xu, Wu; Graff, Gordon L; Chen, Xilin; Ding, Fei; Shao, Yuyan

    2015-03-17

    Electrodeposition involving an electrolyte having a surface-smoothing additive can result in self-healing, instead of self-amplification, of initial protuberant tips that give rise to roughness and/or dendrite formation on the substrate and/or film surface. For electrodeposition of a first conductive material (C1) on a substrate from one or more reactants in an electrolyte solution, the electrolyte solution is characterized by a surface-smoothing additive containing cations of a second conductive material (C2), wherein cations of C2 have an effective electrochemical reduction potential in the solution lower than that of the reactants.

  11. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning (KCL) has been successfully used to achieve robust clustering. However, KCL is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-paralleled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large scale clustering. Empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision than related approximate clustering approaches. PMID:25528318

  12. Representation of fluctuation features in pathological knee joint vibroarthrographic signals using kernel density modeling method.

    PubMed

    Yang, Shanshan; Cai, Suxian; Zheng, Fang; Wu, Yunfeng; Liu, Kaizhi; Wu, Meihong; Zou, Quan; Chen, Jian

    2014-10-01

    This article applies advanced signal processing and computational methods to study the subtle fluctuations in knee joint vibroarthrographic (VAG) signals. Two new features are extracted to characterize the fluctuations of VAG signals. The fractal scaling index parameter is computed using the detrended fluctuation analysis algorithm to describe the fluctuations associated with intrinsic correlations in the VAG signal. The averaged envelope amplitude feature measures the difference between the upper and lower envelopes averaged over an entire VAG signal. Statistical analysis with the Kolmogorov-Smirnov test indicates that both of the fractal scaling index (p=0.0001) and averaged envelope amplitude (p=0.0001) features are significantly different between the normal and pathological signal groups. The bivariate Gaussian kernels are utilized for modeling the densities of normal and pathological signals in the two-dimensional feature space. Based on the feature densities estimated, the Bayesian decision rule makes better signal classifications than the least-squares support vector machine, with the overall classification accuracy of 88% and the area of 0.957 under the receiver operating characteristic (ROC) curve. Such VAG signal classification results are better than those reported in the state-of-the-art literature. The fluctuation features of VAG signals developed in the present study can provide useful information on the pathological conditions of degenerative knee joints. Classification results demonstrate the effectiveness of the kernel feature density modeling method for computer-aided VAG signal analysis. PMID:25096412
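    The density-modeling step can be sketched with bivariate Gaussian kernels on mock two-dimensional features; the synthetic data and fixed bandwidth below stand in for the paper's VAG features and tuning:

```python
import numpy as np

def gauss_kde(points, query, h=0.5):
    """Bivariate Gaussian kernel density estimate at `query` (fixed bandwidth h)."""
    d2 = ((points - query) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * h ** 2)).sum() / (len(points) * 2 * np.pi * h ** 2)

rng = np.random.default_rng(0)
normal = rng.normal([0.0, 0.0], 0.5, size=(200, 2))   # mock "normal" feature vectors
patho = rng.normal([2.0, 2.0], 0.5, size=(200, 2))    # mock "pathological" vectors

x = np.array([1.8, 2.1])                              # feature vector to classify
# Bayesian decision rule with equal priors: pick the class with higher density
label = "pathological" if gauss_kde(patho, x) > gauss_kde(normal, x) else "normal"
print(label)  # pathological
```

    With unequal priors one would compare prior-weighted densities instead; the decision boundary is wherever the two weighted density surfaces intersect.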

  13. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.

    PubMed

    Novosad, Philip; Reader, Andrew J

    2016-06-21

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. 
Furthermore, we demonstrate that a joint spectral/kernel

  14. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    NASA Astrophysics Data System (ADS)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. 
Furthermore, we demonstrate that a joint spectral/kernel

  15. Impact of beam smoothing method on direct drive target performance for the NIF

    SciTech Connect

    Rothenberg, J.E.; Weber, S.V.

    1997-01-01

    The impact of the smoothing method on the performance of a direct drive target is modeled and examined in terms of its l-mode spectrum. In particular, two classes of smoothing methods are compared, smoothing by spectral dispersion (SSD) and the induced spatial incoherence (ISI) method. It is found that SSD using sinusoidal phase modulation (FM) results in poor smoothing at low l-modes and therefore inferior target performance at both peak velocity and ignition. This disparity is most notable if the effective imprinting integration time of the target is small. However, using SSD with more generalized phase modulation can result in smoothing at low l-modes which is identical to that obtained with ISI. For either smoothing method, the calculations indicate that at peak velocity the surface perturbations are about 100 times larger than that which leads to nonlinear hydrodynamics. Modeling of the hydrodynamic nonlinearity shows that saturation can reduce the amplified nonuniformities to the level required to achieve ignition for either smoothing method. The low l-mode behavior at ignition is found to be strongly dependent on the induced divergence of the smoothing method. For the NIF parameters the target performance asymptotes for smoothing divergence larger than ~100 μrad.

  16. Multi-feature-based robust face detection and coarse alignment method via multiple kernel learning

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Zhang, Di; He, Jun; Yu, Lejun; Wu, Xuewen

    2015-10-01

    Face detection and alignment are two crucial tasks in face recognition, a hot topic in the field of defense and security, whether for public safety, personal property, or information and communication security. Common approaches to these tasks in recent years fall into three types: template matching-based, knowledge-based and machine learning-based, which tend to be separate-step, computationally costly or insufficiently robust. After deep analysis of a large set of Chinese face images without hats, we propose a novel face detection and coarse alignment method inspired by all three types of approaches. It fuses multiple features with the Simple Multiple Kernel Learning (SimpleMKL) algorithm. The proposed method is compared with competitive and related algorithms and demonstrated to achieve promising results.

  17. Investigations on reproducing kernel particle method enriched by partition of unity and visibility criterion

    NASA Astrophysics Data System (ADS)

    Zhang, Z. Q.; Zhou, J. X.; Wang, X. M.; Zhang, Y. F.; Zhang, L.

    2004-09-01

    This work introduces a numerical integration technique based on partition of unity (PU) into the reproducing kernel particle method (RKPM) and presents an implementation of the visibility criterion for meshfree methods. Based on the theory of PU and the inherent features of Gaussian quadrature, the convergence property of PU integration is studied. Practical approaches to implementing PU integration are presented for different strategies, and a method to apply the visibility criterion is presented to handle problems with complex domains. Furthermore, numerical examples are presented for h-version and p-like version convergence studies of PU integration and for the validity of the visibility criterion. The results demonstrate that PU integration is a feasible and effective numerical integration technique, and that RKPM enriched by PU integration and the visibility criterion offers greater efficiency, versatility and performance.

  18. Suppression of stochastic pulsation in laser-plasma interaction by smoothing methods

    NASA Astrophysics Data System (ADS)

    Hora, Heinrich; Aydin, Meral

    1992-04-01

    Control of the very complex behavior of a plasma under laser interaction by smoothing with induced spatial incoherence or other methods has been attributed to improving the lateral uniformity of the irradiation. While this is important, numerical hydrodynamic studies show that these smoothing methods also largely suppress the very strong temporal pulsation (stuttering).

  19. Suppression of stochastic pulsation in laser-plasma interaction by smoothing methods

    SciTech Connect

    Hora, H.; Aydin, M.

    1992-04-15

    Control of the very complex behavior of a plasma under laser interaction by smoothing with induced spatial incoherence or other methods has been attributed to improving the lateral uniformity of the irradiation. While this is important, numerical hydrodynamic studies show that these smoothing methods also largely suppress the very strong temporal pulsation (stuttering).

  20. Methods for Smoothing Expectancy Tables Applied to the Prediction of Success in College

    ERIC Educational Resources Information Center

    Perrin, David W.; Whitney, Douglas R.

    1976-01-01

    The gains in accuracy resulting from applying any of the smoothing methods appear sufficient to justify the suggestion that all expectancy tables used by colleges for admission, guidance, or planning purposes should be smoothed. These methods, on average, reduce the criterion measure (an index of inaccuracy) by 30 percent. (Author/MV)
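    A minimal sketch of what smoothing an expectancy table means in practice, with made-up success proportions and illustrative 1-2-1 weights (the paper evaluates several specific smoothing methods, not this one):

```python
import numpy as np

raw = np.array([[0.20, 0.35, 0.30],
                [0.40, 0.45, 0.60],
                [0.55, 0.75, 0.70]])   # hypothetical P(success) by score band
# 1-2-1 weighted average along the score axis, repeating edge rows at the borders
padded = np.pad(raw, ((1, 1), (0, 0)), mode="edge")
smoothed = 0.25 * padded[:-2] + 0.5 * padded[1:-1] + 0.25 * padded[2:]
print(np.all(np.diff(smoothed, axis=0) >= 0))  # True: monotone trend preserved
```

    Smoothing pools information from adjacent score bands, which is what reduces the sampling noise in sparsely populated cells.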

  1. An Evaluation of Kernel Equating: Parallel Equating with Classical Methods in the SAT Subject Tests[TM] Program. Research Report. ETS RR-09-06

    ERIC Educational Resources Information Center

    Grant, Mary C.; Zhang, Lilly; Damiano, Michele

    2009-01-01

    This study investigated kernel equating methods by comparing these methods to operational equatings for two tests in the SAT Subject Tests[TM] program. GENASYS (ETS, 2007) was used for all equating methods and scaled score kernel equating results were compared to Tucker, Levine observed score, chained linear, and chained equipercentile equating…

  2. Scatter kernel estimation with an edge-spread function method for cone-beam computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Li, Heng; Mohan, Radhe; Zhu, X. Ronald

    2008-12-01

    The clinical applications of kilovoltage x-ray cone-beam computed tomography (CBCT) have been compromised by the limited quality of CBCT images, which typically is due to a substantial scatter component in the projection data. In this paper, we describe an experimental method of deriving the scatter kernel of a CBCT imaging system. The estimated scatter kernel can be used to remove the scatter component from the CBCT projection images, thus improving the quality of the reconstructed image. The scattered radiation was approximated as depth-dependent, pencil-beam kernels, which were derived using an edge-spread function (ESF) method. The ESF geometry was achieved with a half-beam block created by a 3 mm thick lead sheet placed on a stack of slab solid-water phantoms. Measurements for ten water-equivalent thicknesses (WET) ranging from 0 cm to 41 cm were taken with (half-blocked) and without (unblocked) the lead sheet, and corresponding pencil-beam scatter kernels or point-spread functions (PSFs) were then derived without assuming any empirical trial function. The derived scatter kernels were verified with phantom studies. Scatter correction was then incorporated into the reconstruction process to improve image quality. For a 32 cm diameter cylinder phantom, the flatness of the reconstructed image was improved from 22% to 5%. When the method was applied to CBCT images for patients undergoing image-guided therapy of the pelvis and lung, the variation in selected regions of interest (ROIs) was reduced from >300 HU to <100 HU. We conclude that the scatter reduction technique utilizing the scatter kernel effectively suppresses the artifact caused by scatter in CBCT.
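    The core ESF idea, that differentiating an edge profile recovers the spread kernel, can be sketched on synthetic data. A Gaussian-blurred edge stands in for the half-blocked measurement here; no claim is made about the actual CBCT kernels:

```python
import numpy as np
from math import erf

x = np.linspace(-5.0, 5.0, 1001)
sigma = 0.8                                # width of the synthetic blur kernel
# synthetic half-blocked edge profile: cumulative of a Gaussian kernel
esf = np.array([0.5 * (1.0 + erf(xi / (sigma * np.sqrt(2.0)))) for xi in x])
lsf = np.gradient(esf, x)                  # differentiate the ESF -> spread function
print(bool(abs(x[np.argmax(lsf)]) < 0.02))  # True: recovered kernel peaks at the edge
```

    In the paper's setting this differentiation is done per water-equivalent thickness, yielding a family of depth-dependent pencil-beam kernels.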

  3. A fast object-oriented Matlab implementation of the Reproducing Kernel Particle Method

    NASA Astrophysics Data System (ADS)

    Barbieri, Ettore; Meo, Michele

    2012-05-01

    Novel numerical methods, known as meshless or meshfree methods and, in a wider perspective, partition of unity methods, promise to overcome most of the disadvantages of traditional finite element techniques. The absence of a mesh makes meshfree methods very attractive for problems involving large deformations, moving boundaries and crack propagation. However, meshfree methods still have significant limitations that prevent their acceptance among researchers and engineers, namely their computational cost. This paper presents an in-depth analysis of computational techniques to speed up the computation of the shape functions in the Reproducing Kernel Particle Method and Moving Least Squares, with particular focus on their bottlenecks: the neighbour search, the inversion of the moment matrix and the assembly of the stiffness matrix. The paper presents numerous computational solutions aimed at a considerable reduction of the computational time: the use of kd-trees for the neighbour search, sparse indexing of the nodes-points connectivity and, most importantly, the explicit and vectorized inversion of the moment matrix without using loops or numerical routines.
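    The kd-tree neighbour search mentioned above can be sketched with SciPy's cKDTree (a stand-in for the paper's Matlab implementation): find all nodes within a support radius of an evaluation point without an O(N²) scan. The node set and radius are invented for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
nodes = rng.random((1000, 2))            # scattered meshfree nodes
tree = cKDTree(nodes)                    # build once, query many times
support_radius = 0.1                     # kernel support size (illustrative)
neighbours = tree.query_ball_point([0.5, 0.5], r=support_radius)
# every returned node really lies inside the kernel support
dists = np.linalg.norm(nodes[neighbours] - np.array([0.5, 0.5]), axis=1)
print(bool(np.all(dists <= support_radius)))  # True
```

    Each query costs roughly O(log N) plus the output size, which is why the neighbour search stops being the bottleneck once the tree is built.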

  4. A kernel-based method for markerless tumor tracking in kV fluoroscopic images.

    PubMed

    Zhang, Xiaoyong; Homma, Noriyasu; Ichiji, Kei; Abe, Makoto; Sugita, Norihiro; Takai, Yoshihiro; Narita, Yuichiro; Yoshizawa, Makoto

    2014-09-01

    Markerless tracking of respiration-induced tumor motion in kilo-voltage (kV) fluoroscopic image sequences is still a challenging task in real-time image-guided radiation therapy (IGRT). Most existing markerless tracking methods are based on a template matching technique or its extensions, which are frequently sensitive to non-rigid tumor deformation and involve expensive computation. This paper presents a kernel-based method that is capable of tracking tumor motion in kV fluoroscopic image sequences with robust performance and low computational cost. The proposed tracking system consists of the following three steps. First, to enhance the contrast of the kV fluoroscopic images, we utilize histogram equalization to transform the intensities of the original images to a wider dynamic intensity range. The tumor target in the first frame is then represented using a histogram-based feature vector. Subsequently, target tracking is formulated as maximizing a Bhattacharyya coefficient that measures the similarity between the tumor target and its candidates in the subsequent frames. The numerical solution for maximizing the Bhattacharyya coefficient is performed by a mean-shift algorithm. The proposed method was evaluated using four clinical kV fluoroscopic image sequences. For comparison, we also implemented four conventional template matching-based methods and compared their performance with our proposed method in terms of tracking accuracy and computational cost. Experimental results demonstrated that the proposed method is superior to conventional template matching-based methods. PMID:25098382
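    The similarity measure at the heart of this tracker is easy to state concretely. Below is a toy sketch with made-up 4-bin histograms; real use would compare a target template against shifted candidate windows, with mean-shift climbing the similarity surface:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms."""
    return float(np.sum(np.sqrt(p * q)))

target = np.array([0.1, 0.4, 0.3, 0.2])   # made-up target intensity histogram
cand_a = np.array([0.1, 0.4, 0.3, 0.2])   # identical candidate
cand_b = np.array([0.7, 0.1, 0.1, 0.1])   # dissimilar candidate
print(round(bhattacharyya(target, cand_a), 6))   # 1.0 for a perfect match
print(bhattacharyya(target, cand_b) < 1.0)       # True
```

    The coefficient equals 1 only for identical distributions, so maximizing it over candidate positions drives the tracker toward the region whose histogram best matches the target.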

  5. A kernel-based method for markerless tumor tracking in kV fluoroscopic images

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyong; Homma, Noriyasu; Ichiji, Kei; Abe, Makoto; Sugita, Norihiro; Takai, Yoshihiro; Narita, Yuichiro; Yoshizawa, Makoto

    2014-09-01

    Markerless tracking of respiration-induced tumor motion in kilo-voltage (kV) fluoroscopic image sequences is still a challenging task in real-time image-guided radiation therapy (IGRT). Most existing markerless tracking methods are based on a template matching technique or its extensions, which are frequently sensitive to non-rigid tumor deformation and involve expensive computation. This paper presents a kernel-based method that is capable of tracking tumor motion in kV fluoroscopic image sequences with robust performance and low computational cost. The proposed tracking system consists of the following three steps. First, to enhance the contrast of the kV fluoroscopic images, we utilize histogram equalization to transform the intensities of the original images to a wider dynamic intensity range. The tumor target in the first frame is then represented using a histogram-based feature vector. Subsequently, target tracking is formulated as maximizing a Bhattacharyya coefficient that measures the similarity between the tumor target and its candidates in the subsequent frames. The numerical solution for maximizing the Bhattacharyya coefficient is performed by a mean-shift algorithm. The proposed method was evaluated using four clinical kV fluoroscopic image sequences. For comparison, we also implemented four conventional template matching-based methods and compared their performance with our proposed method in terms of tracking accuracy and computational cost. Experimental results demonstrated that the proposed method is superior to conventional template matching-based methods.

  6. Alternative methods to smooth the Earth's gravity field

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1981-01-01

    Convolutions on the sphere with corresponding convolution theorems are developed for one and two dimensional functions. Some of these results are used in a study of isotropic smoothing operators or filters. Well known filters in Fourier spectral analysis, such as the rectangular, Gaussian, and Hanning filters, are adapted for data on a sphere. The low-pass filter most often used on gravity data is the rectangular (or Pellinen) filter. However, its spectrum has relatively large sidelobes; and therefore, this filter passes a considerable part of the upper end of the gravity spectrum. The spherical adaptations of the Gaussian and Hanning filters are more efficient in suppressing the high-frequency components of the gravity field since their frequency response functions are strongly tapered at the high frequencies with no, or small, sidelobes. Formulas are given for practical implementation of these new filters.
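    The sidelobe behavior that motivates replacing the rectangular (Pellinen) filter can already be seen in the flat one-dimensional analogue. The sketch below (window length and widths are arbitrary choices, not taken from the paper) compares the peak sidelobe level of a boxcar window with that of a truncated Gaussian:

```python
import numpy as np

n, m = 4096, 64                      # FFT size and window length (arbitrary)
rect = np.zeros(n); rect[:m] = 1.0   # boxcar (rectangular) window
t = np.arange(m) - (m - 1) / 2
gauss = np.zeros(n); gauss[:m] = np.exp(-0.5 * (t / 8.0) ** 2)

def peak_sidelobe_db(w):
    """Peak sidelobe level of a window, in dB relative to the main lobe."""
    mag = np.abs(np.fft.rfft(w / w.sum()))
    i = 1
    while i + 1 < len(mag) and mag[i + 1] < mag[i]:
        i += 1                       # walk down the main lobe to its first null
    return 20 * np.log10(mag[i:].max() / mag[0])

rect_sl = peak_sidelobe_db(rect)     # boxcar: about -13 dB
gauss_sl = peak_sidelobe_db(gauss)   # truncated Gaussian: far lower sidelobes
```

    The -13 dB sidelobes of the boxcar are what let high-frequency gravity signal leak through; the Gaussian's smoothly tapered response suppresses it by orders of magnitude.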

  7. Prediction of posttranslational modification sites from amino acid sequences with kernel methods.

    PubMed

    Xu, Yan; Wang, Xiaobo; Wang, Yongcui; Tian, Yingjie; Shao, Xiaojian; Wu, Ling-Yun; Deng, Naiyang

    2014-03-01

    Post-translational modification (PTM) is the chemical modification of a protein after its translation and one of the later steps in protein biosynthesis for many proteins. It plays an important role in modifying the end product of gene expression and contributes to biological processes and disease conditions. However, experimental methods for identifying PTM sites are both costly and time-consuming; hence, computational methods are highly desirable. In this work, a novel encoding method, PSPM (position-specific propensity matrices), is developed. A support vector machine (SVM) with the kernel matrix computed from PSPM is then applied to predict PTM sites. The experimental results indicate that the performance of the new method is better than or comparable with that of existing methods. The new method is therefore a useful computational resource for the identification of PTM sites. A unified standalone software package, PTMPred, is also provided. It can be used to predict all types of PTM sites if the user provides the training datasets. The software can be freely downloaded from http://www.aporc.org/doc/wiki/PTMPred. PMID:24291233

  8. A numerical study of the Regge calculus and smooth lattice methods on a Kasner cosmology

    NASA Astrophysics Data System (ADS)

    Brewin, Leo

    2015-10-01

    Two lattice based methods for numerical relativity, the Regge calculus and the smooth lattice relativity, will be compared with respect to accuracy and computational speed in a full 3+1 evolution of initial data representing a standard Kasner cosmology. It will be shown that both methods provide convergent approximations to the exact Kasner cosmology. It will also be shown that the Regge calculus is of the order of 110 times slower than the smooth lattice method.

  9. Calculates Thermal Neutron Scattering Kernel.

    Energy Science and Technology Software Center (ESTSC)

    1989-11-10

    Version 00 THRUSH computes the thermal neutron scattering kernel by the phonon expansion method for both coherent and incoherent scattering processes. The calculation of the coherent part is suitable only for calculating the scattering kernel for heavy water.

  10. Single kernel method for detection of 2-acetyl-1-pyrroline in aromatic rice germplasm using SPME-GC/MS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    INTRODUCTION Aromatic rice or fragrant rice, (Oryza sativa L.), has a strong popcorn-like aroma due to the presence of a five-membered N-heterocyclic ring compound known as 2-acetyl-1-pyrroline (2-AP). To date, existing methods for detecting this compound in rice require the use of several kernels. ...

  11. Smoothing methods comparison for CMB E- and B-mode separation

    NASA Astrophysics Data System (ADS)

    Wang, Yi-Fan; Wang, Kai; Zhao, Wen

    2016-04-01

    The anisotropies of the B-mode polarization in the cosmic microwave background radiation play a crucial role in the study of the very early Universe. However, in real observations, partial sky coverage causes a mixture of the E-mode and B-mode, which must be separated before any cosmological interpretation. The separation method developed by Smith (2006) has been widely adopted, in which the edge of the top-hat mask must be smoothed to avoid numerical errors. In this paper, we compare three different smoothing methods and investigate the leakage residuals of the E-B mixture. We find that, if less information loss is required and only a smaller region can be smoothed, the sin- and cos-smoothing methods are better. However, if a cleanly constructed B-mode map is needed, a larger region around the mask edge should be smoothed; in this case, the Gaussian-smoothing method becomes much better. In addition, we find that the leakage caused by numerical errors in the Gaussian-smoothing method is mostly concentrated in two bands, which is quite easy to reduce for further E-B separations.
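    The three tapers differ only in how the mask rises from 0 to 1 across the apodization band. The forms below are common illustrative choices for such windows, not necessarily the exact functions used in the paper:

```python
import numpy as np

delta = 1.0                        # apodization length (arbitrary units)
x = np.linspace(0.0, delta, 201)   # distance into the map from the mask edge

# Each taper rises smoothly from 0 at the edge to ~1 inside the observed region.
w_sin = np.sin(0.5 * np.pi * x / delta)                # "sin"-type taper
w_cos = 0.5 * (1.0 - np.cos(np.pi * x / delta))        # "cos" (Hann-like) taper
w_gauss = 1.0 - np.exp(-0.5 * (3.0 * x / delta) ** 2)  # Gaussian-edge taper
```

    The trade-off described in the abstract is visible here: the sin and cos tapers reach 1 exactly at the end of a short band, sacrificing little sky, while a Gaussian edge needs a wider band before it is effectively flat.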

  12. Rotating vector methods for smooth torque control of a switched reluctance motor drive

    SciTech Connect

    Nagel, N.J.; Lorenz, R.D.

    2000-04-01

    This paper makes two primary contributions to switched reluctance motor (SRM) control: a systematic approach to smooth torque production and a high-performance technique for sensorless motion control. The systematic approach to smooth torque production is based on the development of novel rotating spatial vector methods that can be used to predict the torque produced in an arbitrary SRM. This analysis directly leads to explicit, insightful methods for providing smooth torque control of SRMs. The high-performance technique for sensorless motion control is based on a rotating vector method for high-bandwidth, high-resolution position and velocity estimation suitable for both precise torque and motion control. The sensorless control and smooth torque control methods are both verified experimentally.

  13. Volcano clustering determination: Bivariate Gauss vs. Fisher kernels

    NASA Astrophysics Data System (ADS)

    Cañón-Tapia, Edgardo

    2013-05-01

    Underlying many studies of volcano clustering is the implicit assumption that vent distribution can be studied by using kernels originally devised for distributions on plane surfaces. Nevertheless, an important change in topology in the volcanic context is related to the distortion introduced when features found on the surface of a sphere are projected onto a plane. This work explores the extent to which different topologies of the kernel used to study the spatial distribution of vents can introduce significant changes in the obtained density functions. To this end, a planar (Gauss) and a spherical (Fisher) kernel are compared. The role of the smoothing factor in these two kernels is also explored in some detail. The results indicate that the topology of the kernel is not extremely influential, and that either type of kernel can be used to characterize a planar or a spherical distribution with exactly the same detail (provided that a suitable smoothing factor is selected in each case). It is also shown that there is a limitation on the resolution of the Fisher kernel relative to the typical separation between data that can be accurately described, because data sets with separations lower than 500 km are treated as a single cluster by this method. In contrast, the Gauss kernel can provide adequate resolution for vent distributions over a wider range of separations. In addition, this study shows that the numerical value of the smoothing factor (or bandwidth) of both the Gauss and Fisher kernels has neither a unique nor a direct relationship with the relevant separation among data. In order to establish the relevant distance, it is necessary to take into consideration the value of the respective smoothing factor together with a level of statistical significance at which the contributions to the probability density function will be analyzed. Based on such a reference level, it is possible to create a hierarchy of
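    The planar-versus-spherical kernel question can be illustrated in one dimension, where the spherical (Fisher) kernel reduces to the von Mises kernel on a circle. The sketch below (bandwidth and data invented for illustration) shows the circular kernel correctly treating angles just below 2π as neighbors of angles near 0, which a planar Gaussian on the interval would not:

```python
import numpy as np

def vonmises_kde(theta_eval, data, kappa):
    """Circular KDE with the von Mises kernel -- the 1-D analogue of using a
    Fisher kernel on the sphere instead of a planar Gaussian kernel."""
    k = np.exp(kappa * np.cos(theta_eval[:, None] - data[None, :]))
    return k.sum(axis=1) / (len(data) * 2 * np.pi * np.i0(kappa))

# 6.2 rad lies just below 2*pi, i.e. right next to 0.1 "around the circle";
# a planar kernel on [0, 2*pi) would see it as a distant point.
angles = np.array([0.1, 0.2, 6.2])
grid = np.linspace(-np.pi, np.pi, 512)
dens = vonmises_kde(grid, angles, kappa=20.0)   # kappa plays the bandwidth role
peak = grid[np.argmax(dens)]                    # single mode near 0
```

    Here the concentration parameter kappa is the smoothing factor: as in the abstract, its numerical value has no direct metric meaning until it is related to the data separations of interest.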

  14. Application of dose kernel calculation using a simplified Monte Carlo method to treatment plan for scanned proton beams.

    PubMed

    Mizutani, Shohei; Takada, Yoshihisa; Kohno, Ryosuke; Hotta, Kenji; Tansho, Ryohei; Akimoto, Tetsuo

    2016-01-01

    Full Monte Carlo (FMC) calculation of dose distributions has been recognized to have superior accuracy compared with the pencil beam algorithm (PBA). However, since FMC methods require long calculation times, it is difficult to apply them to routine treatment planning at present. In order to improve the situation, a simplified Monte Carlo (SMC) method has been introduced into the dose kernel calculation applicable to the dose optimization procedure for proton pencil beam scanning. We evaluated the accuracy of the SMC calculation by comparing a dose kernel calculated with the SMC method against one calculated with the FMC method in an inhomogeneous phantom. The dose distribution obtained by the SMC method was in good agreement with that obtained by the FMC method. To assess the usefulness of the SMC calculation in clinical situations, we compared dose calculations using the SMC with those using the PBA method for three clinical cases of tumor treatment. The dose distributions calculated with the PBA dose kernels appear to be homogeneous in the planning target volumes (PTVs). In practice, the dose distributions calculated with the SMC dose kernels with the spot weights optimized with the PBA method show largely inhomogeneous dose distributions in the PTVs, while those with the spot weights optimized with the SMC method have moderately homogeneous distributions in the PTVs. Calculation using the SMC method is faster than that using GEANT4 by three orders of magnitude. In addition, the graphics processing unit (GPU) boosts the calculation speed by 13 times for treatment planning using the SMC method. Thus, the SMC method will be applicable to routine clinical treatment planning for reproduction of complex dose distributions more accurately than the PBA method in a reasonably short time by use of the GPU-based calculation engine. PMID:27074456

  15. Haplotype kernel association test as a powerful method to identify chromosomal regions harboring uncommon causal variants.

    PubMed

    Lin, Wan-Yu; Yi, Nengjun; Lou, Xiang-Yang; Zhi, Degui; Zhang, Kui; Gao, Guimin; Tiwari, Hemant K; Liu, Nianjun

    2013-09-01

    For most complex diseases, the fraction of heritability that can be explained by the variants discovered from genome-wide association studies is minor. Although the so-called "rare variants" (minor allele frequency [MAF] < 1%) have attracted increasing attention, they are unlikely to account for much of the "missing heritability" because very few people may carry these rare variants. The genetic variants that are likely to fill in the "missing heritability" include uncommon causal variants (MAF < 5%), which are generally untyped in association studies using tagging single-nucleotide polymorphisms (SNPs) or commercial SNP arrays. Developing powerful statistical methods can help to identify chromosomal regions harboring uncommon causal variants, while bypassing the genome-wide or exome-wide next-generation sequencing. In this work, we propose a haplotype kernel association test (HKAT) that is equivalent to testing the variance component of random effects for distinct haplotypes. With an appropriate weighting scheme given to haplotypes, we can further enhance the ability of HKAT to detect uncommon causal variants. With scenarios simulated according to the population genetics theory, HKAT is shown to be a powerful method for detecting chromosomal regions harboring uncommon causal variants. PMID:23740760

  16. Haplotype Kernel Association Test as a Powerful Method to Identify Chromosomal Regions Harboring Uncommon Causal Variants

    PubMed Central

    Lin, Wan-Yu; Yi, Nengjun; Lou, Xiang-Yang; Zhi, Degui; Zhang, Kui; Gao, Guimin; Tiwari, Hemant K.; Liu, Nianjun

    2014-01-01

    For most complex diseases, the fraction of heritability that can be explained by the variants discovered from genome-wide association studies is minor. Although the so-called ‘rare variants’ (minor allele frequency [MAF] < 1%) have attracted increasing attention, they are unlikely to account for much of the ‘missing heritability’ because very few people may carry these rare variants. The genetic variants that are likely to fill in the ‘missing heritability’ include uncommon causal variants (MAF < 5%), which are generally untyped in association studies using tagging single-nucleotide polymorphisms (SNPs) or commercial SNP arrays. Developing powerful statistical methods can help to identify chromosomal regions harboring uncommon causal variants, while bypassing the genome-wide or exome-wide next-generation sequencing. In this work, we propose a haplotype kernel association test (HKAT) that is equivalent to testing the variance component of random effects for distinct haplotypes. With an appropriate weighting scheme given to haplotypes, we can further enhance the ability of HKAT to detect uncommon causal variants. With scenarios simulated according to the population genetics theory, HKAT is shown to be a powerful method for detecting chromosomal regions harboring uncommon causal variants. PMID:23740760
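    HKAT is described above as a variance-component test on a haplotype kernel. A generic kernel-machine score statistic of the SKAT family, sketched below, conveys the idea; the kernel, the weights, and the statistic here are illustrative stand-ins for the authors' formulation (which additionally derives a null distribution to obtain p-values), and all data are simulated:

```python
import numpy as np

def haplotype_kernel(H, w):
    """Weighted similarity kernel between subjects' haplotype count vectors.
    H: (subjects x haplotypes) counts; w: per-haplotype weights."""
    return (H * w) @ H.T

def score_statistic(y, H, w):
    """Variance-component score Q = r' K r with the centered trait r.
    Large Q means phenotype similarity tracks haplotype similarity."""
    r = y - y.mean()
    return float(r @ haplotype_kernel(H, w) @ r)

rng = np.random.default_rng(1)
n = 200
H = rng.integers(0, 3, size=(n, 4)).astype(float)   # counts of 4 haplotypes
freq = H.mean(axis=0) / 2.0
w = 1.0 / np.sqrt(freq * (1.0 - freq))              # upweight uncommon haplotypes
y = 0.8 * H[:, 0] + rng.normal(size=n)              # haplotype 0 drives the trait
q_obs = score_statistic(y, H, w)
q_perm = score_statistic(rng.permutation(y), H, w)  # one draw from the null
```

    The weighting line is where the abstract's "appropriate weighting scheme given to haplotypes" enters: downweighting common haplotypes focuses the statistic on uncommon ones.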

  17. A new adaptive exponential smoothing method for non-stationary time series with level shifts

    NASA Astrophysics Data System (ADS)

    Monfared, Mohammad Ali Saniee; Ghandali, Razieh; Esmaeili, Maryam

    2014-07-01

    Simple exponential smoothing (SES) methods are the most commonly used methods in forecasting and time series analysis. However, they are generally insensitive to non-stationary structural events such as level shifts, ramp shifts, and spikes or impulses. As with outliers in stationary time series, these non-stationary events lead to an increased level of error in the forecasting process. This paper generalizes the SES method into a new adaptive method called revised simple exponential smoothing (RSES), as an alternative way to recognize non-stationary level shifts in the time series. We show that the new method improves the accuracy of the forecasting process. This is done by controlling the number of observations and the smoothing parameter in an adaptive approach, in accordance with the laws of statistical control limits and the Bayes rule of conditioning. We use a numerical example to show how the new RSES method outperforms its traditional counterpart, SES.
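    The SES recursion itself is one line; the adaptive idea is to intervene when an observation falls outside statistical control limits. The reset rule below is a deliberately simplified stand-in for RSES, whose exact control-limit and Bayesian updating rules are not reproduced here:

```python
import statistics

def ses(xs, alpha=0.2):
    """Simple exponential smoothing: s_t = alpha*x_t + (1 - alpha)*s_{t-1}."""
    s, out = xs[0], []
    for x in xs:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

def ses_with_reset(xs, alpha=0.2, k=3.0, window=20):
    """SES that restarts its level when an observation breaches simple
    control limits -- a simplified stand-in for the adaptive RSES idea."""
    s, recent, out = xs[0], [], []
    for x in xs:
        if len(recent) >= window:
            mu = statistics.fmean(recent[-window:])
            sd = statistics.stdev(recent[-window:])
            if abs(x - mu) > k * sd + 1e-12:   # outside the control limits
                s = x                          # treat as a level shift
                recent.clear()
        s = alpha * x + (1 - alpha) * s
        recent.append(x)
        out.append(s)
    return out

series = [0.0] * 50 + [10.0] * 50     # a level shift at t = 50
plain = ses(series)
adaptive = ses_with_reset(series)
# plain SES is still catching up at the end; the adaptive version locks on at once
```

    After the shift, plain SES decays toward the new level geometrically, while the reset variant jumps straight to it, which is the behavior the abstract attributes to RSES.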

  18. Smooth statistical torsion angle potential derived from a large conformational database via adaptive kernel density estimation improves the quality of NMR protein structures

    PubMed Central

    Bermejo, Guillermo A; Clore, G Marius; Schwieters, Charles D

    2012-01-01

    Statistical potentials that embody torsion angle probability densities in databases of high-quality X-ray protein structures supplement the incomplete structural information of experimental nuclear magnetic resonance (NMR) datasets. By biasing the conformational search during the course of structure calculation toward highly populated regions in the database, the resulting protein structures display better validation criteria and accuracy. Here, a new statistical torsion angle potential is developed using adaptive kernel density estimation to extract probability densities from a large database of more than 10^6 quality-filtered amino acid residues. Incorporated into the Xplor-NIH software package, the new implementation clearly outperforms an older potential, widely used in NMR structure elucidation, in that it exhibits simultaneously smoother and sharper energy surfaces, and results in protein structures with improved conformation, nonbonded atomic interactions, and accuracy. PMID:23011872
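    Adaptive kernel density estimation of the Abramson type, the general technique named above, can be sketched as follows; the one-dimensional setting and bandwidth constants are illustrative, not the paper's torsion-angle implementation:

```python
import numpy as np

def adaptive_kde(x_eval, data, h0=0.3, alpha=0.5):
    """Abramson-style adaptive KDE: a fixed-bandwidth pilot estimate sets
    per-point bandwidths, shrinking the kernel where data are dense (sharper
    peaks) and widening it in sparse regions (smoother tails)."""
    def fixed_kde(xs, h):
        z = (xs[:, None] - data[None, :]) / h
        return np.exp(-0.5 * z**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))
    pilot = fixed_kde(data, h0)
    g = np.exp(np.mean(np.log(pilot)))         # geometric mean of pilot densities
    h_i = h0 * (pilot / g) ** (-alpha)         # local bandwidth per data point
    z = (x_eval[:, None] - data[None, :]) / h_i[None, :]
    k = np.exp(-0.5 * z**2) / (h_i[None, :] * np.sqrt(2 * np.pi))
    return k.sum(axis=1) / len(data)

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-2, 0.2, 300), rng.normal(2, 1.0, 300)])
grid = np.linspace(-5.0, 6.0, 400)
dens = adaptive_kde(grid, data)    # narrow mode and broad mode, both resolved
```

    This locality of bandwidth is what produces the "simultaneously smoother and sharper" energy surfaces described in the abstract: sharp where the database is dense, smooth where it is thin.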

  19. An intelligent fault diagnosis method of rolling bearings based on regularized kernel Marginal Fisher analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Shi, Tielin; Xuan, Jianping

    2012-05-01

    Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a major challenge to extract optimal features that improve classification while simultaneously decreasing feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small sample size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. So as to directly excavate nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability, by combining a traditional manifold learning algorithm with the Fisher criterion. The optimal low-dimensional features are thereby obtained for better classification and finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves the fault classification performance and outperforms the other conventional approaches.
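    The small-sample-size problem and its regularization fix can be shown in miniature: with fewer samples than dimensions, a scatter-type matrix is singular and cannot be inverted, and adding a small ridge term restores full rank. This is the generic Tikhonov idea behind the "R" in such methods, not the specific RKMFA construction:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(5, 20))            # 5 samples in 20 dimensions
S = X.T @ X                             # 20x20 scatter matrix, rank <= 5
S_reg = S + 1e-3 * np.eye(20)           # ridge term restores full rank
rank_before = np.linalg.matrix_rank(S)  # at most 5: S alone is singular
coef = np.linalg.solve(S_reg, np.ones(20))  # solvable only after regularization
```

    In the kernel setting the same trick is applied to the (kernelized) scatter or Gram matrix before solving the generalized eigenproblem that yields the embedding.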

  20. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    ERIC Educational Resources Information Center

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…

  1. Numerical Convergence In Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zhu, Qirong; Hernquist, Lars; Li, Yuexing

    2015-02-01

    We study the convergence properties of smoothed particle hydrodynamics (SPH) using numerical tests and simple analytic considerations. Our analysis shows that formal numerical convergence is possible in SPH only in the joint limit N → ∞, h → 0, and Nnb → ∞, where N is the total number of particles, h is the smoothing length, and Nnb is the number of neighbor particles within the smoothing volume used to compute smoothed estimates. Previous work has generally assumed that the conditions N → ∞ and h → 0 are sufficient to achieve convergence, while holding Nnb fixed. We demonstrate that if Nnb is held fixed as the resolution is increased, there will be a residual source of error that does not vanish as N → ∞ and h → 0. Formal numerical convergence in SPH is possible only if Nnb is increased systematically as the resolution is improved. Using analytic arguments, we derive an optimal compromise scaling for Nnb by requiring that this source of error balance that present in the smoothing procedure. For typical choices of the smoothing kernel, we find Nnb ∝ N^0.5. This means that if SPH is to be used as a numerically convergent method, the required computational cost does not scale with particle number as O(N), but rather as O(N^(1+δ)), where δ ≈ 0.5, with a weak dependence on the form of the smoothing kernel.
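    The cost claim follows from simple arithmetic: a timestep costs roughly N·Nnb kernel interactions, so the convergence requirement Nnb ∝ √N turns O(N) per step into O(N^1.5):

```python
import math

# Per-timestep cost ~ N * Nnb kernel interactions. Holding Nnb fixed gives
# O(N); the convergence requirement Nnb ~ c*sqrt(N) gives O(N**1.5) instead.
def sph_step_cost(N, c=1.0):
    return N * c * math.sqrt(N)

ratio = sph_step_cost(8_000_000) / sph_step_cost(1_000_000)
# 8x the particles costs 8**1.5 ~ 22.6x the work, not 8x
```

    The constant c (how many neighbors per √N) drops out of the ratio, which is why the scaling argument is independent of the particular kernel.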

  2. A method for computing the kernel of the downwash integral equation for arbitrary complex frequencies

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.; Rowe, W. S.

    1984-01-01

    For the design of active controls to stabilize flight vehicles, which requires the use of unsteady aerodynamics that are valid for arbitrary complex frequencies, algorithms are derived for evaluating the nonelementary part of the kernel of the integral equation that relates unsteady pressure to downwash. This part of the kernel is separated into an infinite limit integral that is evaluated using Bessel and Struve functions and into a finite limit integral that is expanded in series and integrated termwise in closed form. The developed series expansions gave reliable answers for all complex reduced frequencies and executed faster than exponential approximations for many pressure stations.

  3. Study on preparation method of Zanthoxylum bungeanum seeds kernel oil with zero trans-fatty acids.

    PubMed

    Liu, Tong; Yao, Shi-Yong; Yin, Zhong-Yi; Zheng, Xu-Xu; Shen, Yu

    2016-04-01

    The seed of Zanthoxylum bungeanum (Z. bungeanum) is a by-product of pepper production and rich in unsaturated fatty acids, cellulose, and protein. The seed oil obtained from the traditional production process of squeezing or extraction is of poor quality and cannot be used as an edible oil. In this paper, a new preparation method for Z. bungeanum seed kernel oil (ZSKO) was developed by comparing the advantages and disadvantages of alkali saponification-cold squeezing, alkali saponification-solvent extraction, and alkali saponification-supercritical fluid extraction with carbon dioxide (SFE-CO2). The results showed that alkali saponification-cold squeezing was the optimal preparation method for ZSKO, comprising the following steps: Z. bungeanum seed was pretreated by alkali saponification with 10% NaOH (w/w) at a solution temperature of 80 °C and a saponification reaction time of 45 min; the pretreated seed was separated by filtering, water washing, and overnight drying at 50 °C; repeated pressing was then performed at 60 °C with 15% moisture content until no further oil was released; and the ZSKO was finally obtained by centrifugation. The produced ZSKO contained more than 90% unsaturated fatty acids and no trans-fatty acids, and was confirmed to be a good edible oil with low acid and peroxide values. It was demonstrated that the alkali saponification-cold squeezing process could be scaled up and applied to industrialized production of ZSKO. PMID:26268620

  4. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  5. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.

  6. Bladder Smooth Muscle Strip Contractility as a Method to Evaluate Lower Urinary Tract Pharmacology

    PubMed Central

    Kullmann, F. Aura; Daugherty, Stephanie L.; de Groat, William C.; Birder, Lori A.

    2015-01-01

    We describe an in vitro method to measure bladder smooth muscle contractility, and its use for investigating physiological and pharmacological properties of the smooth muscle as well as changes induced by pathology. This method provides critical information for understanding bladder function while overcoming major methodological difficulties encountered in in vivo experiments, such as surgical and pharmacological manipulations that affect stability and survival of the preparations, the use of human tissue, and/or the use of expensive chemicals. It also provides a way to investigate the properties of each bladder component (i.e. smooth muscle, mucosa, nerves) in healthy and pathological conditions. The urinary bladder is removed from an anesthetized animal, placed in Krebs solution and cut into strips. Strips are placed into a chamber filled with warm Krebs solution. One end is attached to an isometric tension transducer to measure contraction force, the other end is attached to a fixed rod. Tissue is stimulated by directly adding compounds to the bath or by electric field stimulation electrodes that activate nerves, similar to triggering bladder contractions in vivo. We demonstrate the use of this method to evaluate spontaneous smooth muscle contractility during development and after an experimental spinal cord injury, the nature of neurotransmission (transmitters and receptors involved), factors involved in modulation of smooth muscle activity, the role of individual bladder components, and species and organ differences in response to pharmacological agents. Additionally, it could be used for investigating intracellular pathways involved in contraction and/or relaxation of the smooth muscle, drug structure-activity relationships and evaluation of transmitter release. 
The in vitro smooth muscle contractility method has been used extensively for over 50 years, and has provided data that significantly contributed to our understanding of bladder function as well as to

  7. Method of smoothing laser range observations by corrections of orbital parameters and station coordinates

    NASA Astrophysics Data System (ADS)

    Lala, P.; Thao, Bui Van

    1986-11-01

    The first step in the treatment of satellite laser ranging data is its smoothing and rejection of incorrect points. The proposed method uses the comparison of observations with ephemerides and iterative matching of corresponding parameters. The method of solution and a program for a minicomputer are described. Examples of results for satellite Starlette are given.
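    A minimal version of the rejection step is iterative sigma-clipping of the observed-minus-ephemeris residuals; the thresholds and data below are invented, and the paper's actual scheme additionally adjusts orbital parameters and station coordinates between iterations:

```python
import statistics

def sigma_clip(residuals, k=3.0, max_iter=10):
    """Iteratively reject points whose observed-minus-ephemeris residual
    exceeds k standard deviations, then re-estimate the spread."""
    kept = list(residuals)
    for _ in range(max_iter):
        mu = statistics.fmean(kept)
        sd = statistics.pstdev(kept)
        new = [r for r in kept if sd == 0 or abs(r - mu) <= k * sd]
        if len(new) == len(kept):
            break
        kept = new
    return kept

# residuals in metres: tracking noise plus one gross outlier
res = [(-1) ** i * 0.1 for i in range(20)] + [50.0]
clean = sigma_clip(res)   # the 50 m point is rejected; the noise survives
```

    Re-estimating the spread after each rejection matters: a gross outlier inflates the initial standard deviation, so a single pass can fail to flag points that later passes catch.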

  8. A Meshfree Cell-based Smoothed Point Interpolation Method for Solid Mechanics Problems

    NASA Astrophysics Data System (ADS)

    Zhang, Guiyong; Liu, Gui-Rong

    2010-05-01

    In the framework of a weakened weak (W2) formulation using a generalized gradient smoothing operation, this paper introduces a novel meshfree cell-based smoothed point interpolation method (CS-PIM) for solid mechanics problems. The W2 formulation seeks solutions from a normed G space which includes both continuous and discontinuous functions and allows the use of many more types of methods to create shape functions for numerical methods [1]. When PIM shape functions are used, the functions constructed are in general not continuous over the entire problem domain and hence are not compatible. Such an interpolation is not in a traditional H1 space, but in a G1 space. By introducing the generalized gradient smoothing operation properly, the requirement on the functions is further weakened beyond the already weakened requirement for functions in an H1 space, and a G1 space can be viewed as a space of functions with a weakened weak (W2) requirement on continuity [1-3]. The cell-based smoothed point interpolation method (CS-PIM) is formulated based on the W2 formulation, in which the displacement field is approximated using the PIM shape functions, which possess the Kronecker delta property facilitating the enforcement of essential boundary conditions [3]. The gradient (strain) field is constructed by the generalized gradient smoothing operation within the cell-based smoothing domains, which are exactly the triangular background cells. A W2 formulation of the generalized smoothed Galerkin (GS-Galerkin) weak form is used to derive the discretized system equations [2]. 
It was found that the CS-PIM possesses the following attractive properties: (1) It is very easy to implement and works well with the simplest linear triangular mesh without introducing additional degrees of freedom; (2) it is at least linearly conforming; (3) this method is temporally stable and works well for dynamic analysis; (4) it possesses a close-to-exact stiffness, which is much softer than the overly-stiff FEM model

  9. A Meshfree Cell-based Smoothed Point Interpolation Method for Solid Mechanics Problems

    SciTech Connect

    Zhang Guiyong; Liu Guirong

    2010-05-21

    In the framework of a weakened weak (W^2) formulation using a generalized gradient smoothing operation, this paper introduces a novel meshfree cell-based smoothed point interpolation method (CS-PIM) for solid mechanics problems. The W^2 formulation seeks solutions from a normed G space which includes both continuous and discontinuous functions and allows the use of many more types of methods to create shape functions for numerical methods. When PIM shape functions are used, the functions constructed are in general not continuous over the entire problem domain and hence are not compatible. Such an interpolation is not in a traditional H^1 space, but in a G^1 space. By introducing the generalized gradient smoothing operation properly, the requirement on the functions is further weakened beyond the already weakened requirement for functions in an H^1 space, and a G^1 space can be viewed as a space of functions with a weakened weak (W^2) requirement on continuity. The cell-based smoothed point interpolation method (CS-PIM) is formulated based on the W^2 formulation, in which the displacement field is approximated using the PIM shape functions, which possess the Kronecker delta property facilitating the enforcement of essential boundary conditions [3]. The gradient (strain) field is constructed by the generalized gradient smoothing operation within the cell-based smoothing domains, which are exactly the triangular background cells. A W^2 formulation of the generalized smoothed Galerkin (GS-Galerkin) weak form is used to derive the discretized system equations. 
It was found that the CS-PIM possesses the following attractive properties: (1) It is very easy to implement and works well with the simplest linear triangular mesh without introducing additional degrees of freedom; (2) it is at least linearly conforming; (3) this method is temporally stable and works well for dynamic analysis; (4) it possesses a close-to-exact stiffness, which is much

  10. Tests of smoothing methods for topological study of galaxy redshift surveys

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Dominik, Kurt G.

    1993-01-01

    Studying the topology of large-scale structure as a way to better understand initial conditions has become more widespread in recent years. Studying the topology of simulations (which have periodic boundary conditions) in redshift space produces results compatible with the real topological characteristics of the simulation; thus we expect to be able to extract useful information from redshift surveys. However, with nonperiodic boundary conditions, the use of smoothing must result in the loss of information at survey boundaries. In this paper, we test different methods of smoothing samples with nonperiodic boundary conditions to see which most efficiently preserves the topological features of the real distribution. We find that a smoothing method which (unlike most previously published analyses) sums only over cells inside the survey volume produces the best results among the schemes tested.
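    The boundary-aware scheme the authors favor can be sketched in one dimension: smooth the masked field and divide by the smoothed mask, so only cells inside the survey contribute and their weights are renormalized. The kernel width and survey geometry below are arbitrary:

```python
import numpy as np

def masked_smooth(field, mask, sigma=2.0, half=6):
    """Gaussian smoothing that sums only over cells inside the survey
    (mask == 1), renormalizing by the kernel weight that fell inside."""
    x = np.arange(-half, half + 1)
    w = np.exp(-0.5 * (x / sigma) ** 2)
    num = np.convolve(field * mask, w, mode="same")
    den = np.convolve(mask.astype(float), w, mode="same")
    out = np.zeros_like(num)
    ok = den > 0
    out[ok] = num[ok] / den[ok]
    return out

field = np.ones(50)                       # constant density field
mask = np.zeros(50); mask[10:40] = 1.0    # survey covers cells 10..39 only
sm = masked_smooth(field, mask)           # stays exactly 1 inside the survey
# a plain convolution would instead dim the field near the survey edges
```

    The renormalization is the point: without it, cells near the boundary receive only part of the kernel's weight, artificially lowering the density there and distorting the isodensity contours used in topology statistics.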

  11. Kernel optimization in discriminant analysis.

    PubMed

    You, Di; Hamsici, Onur C; Martinez, Aleix M

    2011-03-01

    Kernel mapping is one of the most widely used approaches to derive nonlinear classifiers. The idea is to use a kernel function which maps the original nonlinearly separable problem to a space of intrinsically larger dimensionality where the classes are linearly separable. A major problem in the design of kernel methods is to find the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation where the Bayes classifier becomes linear. We illustrate how this result can be successfully applied in several kernel discriminant analysis algorithms. Experimental results, using a large number of databases and classifiers, demonstrate the utility of the proposed approach. The paper also shows (theoretically and experimentally) that a kernel version of Subclass Discriminant Analysis yields the highest recognition rates. PMID:20820072

  12. Full Waveform Inversion Using Waveform Sensitivity Kernels

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    2013-04-01

    We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory; for unit material perturbations these are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995), in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned); some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be carried out before inverting any data. In the context of active seismic experiments, this property may be used to investigate optimal acquisition geometry and expectable resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties than strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver

  13. MC Kernel: Broadband Waveform Sensitivity Kernels for Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; van Driel, Martin; Auer, Ludwig; Hosseini, Kasra; Sigloch, Karin; Nissen-Meyer, Tarje

    2016-04-01

    We present MC Kernel, a software implementation to calculate seismic sensitivity kernels on arbitrary tetrahedral or hexahedral grids across the whole observable seismic frequency band. Seismic sensitivity kernels are the basis for seismic tomography, since they map measurements to model perturbations. Their calculation over the whole frequency range was so far only possible with approximate methods (Dahlen et al. 2000); fully numerical methods were restricted to the lower frequency range (usually below 0.05 Hz, Tromp et al. 2005). With our implementation, it is possible to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (www.axisem.info), and thus on spherically symmetric models. The advantage is that frequencies up to 0.2 Hz and higher can be accessed. Since the usage of irregular, adapted grids is an integral part of regularisation in seismic tomography, MC Kernel works in an inversion-grid-centred fashion: a Monte-Carlo integration method is used to project the kernel onto each basis function, which allows the desired precision of the kernel estimation to be controlled. It also means that the code concentrates calculation effort on regions of interest without prior assumptions on the kernel shape. The code makes extensive use of redundancies in calculating kernels for different receivers or frequency-pass-bands for one earthquake, to facilitate its usage in large-scale global seismic tomography.
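    The precision-controlled Monte-Carlo integration can be sketched generically: keep drawing sample batches until the standard error of the integral estimate meets a requested tolerance. This is an illustration of the idea only, not MC Kernel's code (names, batch size, and stopping rule are our assumptions):

```python
import numpy as np

def mc_integrate(f, lo, hi, rel_err=0.01, batch=4096, max_batches=200, seed=0):
    """Monte-Carlo estimate of the integral of f over a box [lo, hi]^d.
    Batches are drawn until the standard error of the mean falls below
    rel_err * |estimate|, i.e. until the requested precision is met."""
    rng = np.random.default_rng(seed)
    lo = np.atleast_1d(np.asarray(lo, dtype=float))
    hi = np.atleast_1d(np.asarray(hi, dtype=float))
    vol = np.prod(hi - lo)
    samples = []
    for _ in range(max_batches):
        x = rng.uniform(lo, hi, size=(batch, lo.size))
        samples.append(f(x))
        vals = np.concatenate(samples)
        est = vol * vals.mean()
        sem = vol * vals.std(ddof=1) / np.sqrt(vals.size)
        if sem < rel_err * abs(est):
            break
    return est, sem

# integral of x^2 over [0, 1] is 1/3
est, sem = mc_integrate(lambda x: x[:, 0] ** 2, [0.0], [1.0], rel_err=0.005)
```

The loop exits as soon as the standard error drops below the requested relative tolerance, mirroring how such a scheme concentrates effort only where precision demands it.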

  14. A simple method for computing the relativistic Compton scattering kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Prasad, M. K.; Kershaw, D. S.; Beason, J. D.

    1986-01-01

    Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.

  15. Linearized Kernel Dictionary Learning

    NASA Astrophysics Data System (ADS)

    Golts, Alona; Elad, Michael

    2016-06-01

    In this paper we present a new approach to incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), which has been introduced recently, shows an improvement in classification performance relative to its linear counterpart K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost, while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first, we approximate the kernel matrix using a cleverly sampled subset of its columns via the Nyström method; second, as we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several tasks of both supervised and unsupervised classification and show the efficiency of the proposed scheme, its easy integration, and its performance-boosting properties.
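    The two ideas combined here, Nyström column sampling and building "virtual samples" from a decomposition of the landmark kernel, can be sketched in numpy (an illustrative sketch, not the authors' implementation; with all points taken as landmarks the reconstruction is exact):

```python
import numpy as np

def nystrom_virtual_samples(X, n_landmarks, gamma=1.0, seed=0):
    """Nystrom approximation K ~= C W^+ C^T of an RBF kernel matrix,
    followed by an eigendecomposition of W to build virtual samples V
    with V V^T ~= K, so any linear dictionary learning can run on V."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), n_landmarks, replace=False)
    rbf = lambda A, B: np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    C = rbf(X, X[idx])                    # n x m cross-kernel block
    W = rbf(X[idx], X[idx])               # m x m landmark kernel
    vals, vecs = np.linalg.eigh(W)
    keep = vals > 1e-8                    # drop near-null directions
    # V = C U diag(1/sqrt(lambda)) gives V V^T = C W^+ C^T
    return C @ vecs[:, keep] / np.sqrt(vals[keep])

X = np.random.default_rng(1).normal(size=(40, 3))
V = nystrom_virtual_samples(X, n_landmarks=40)        # all points as landmarks
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
err = float(np.abs(V @ V.T - K).max())
```

With fewer landmarks than points the reconstruction becomes approximate, but V stays small, which is what removes the large-kernel-matrix bottleneck.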

  16. Bayesian Kernel Mixtures for Counts

    PubMed Central

    Canale, Antonio; Dunson, David B.

    2011-01-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online. PMID:22523437
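    The rounding construction at the core of the approach is simple to simulate: draw a latent Gaussian and map it onto the nonnegative integers. The thresholding below (floor, negatives clipped to zero) is one simple choice of rounding grid, shown only to illustrate how such kernels escape the Poisson variance-equals-mean constraint:

```python
import numpy as np

def rounded_gaussian_counts(mu, sigma, size, rng=None):
    """Counts from a rounded Gaussian kernel: draw a latent z ~ N(mu, sigma)
    and round it onto the nonnegative integers (floor, negatives -> 0).
    Unlike a Poisson, this can be underdispersed (variance < mean)."""
    if rng is None:
        rng = np.random.default_rng(0)
    z = rng.normal(mu, sigma, size)
    return np.maximum(np.floor(z), 0.0).astype(int)

y = rounded_gaussian_counts(mu=5.0, sigma=0.3, size=20000)
print(y.mean(), y.var())   # variance far below the mean
```

With a small sigma the counts concentrate on one or two adjacent integers, so the variance falls well below the mean, something no Poisson component can do.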

  18. A relativistic smoothed particle hydrodynamics method tested with the shock tube

    NASA Astrophysics Data System (ADS)

    Mann, Patrick J.

    1991-12-01

    The smoothed particle hydrodynamics method is applied to an ADM 3 + 1 formulation of the equations for relativistic fluid flow. In particular the one-dimensional shock tube is addressed. Three codes are described. The first is a straightforward extension of classic SPH, while the other two are modifications which allow for time-dependent smoothing lengths. The first of these modifications approximates the internal energy density, while the second approximates the total energy density. Two smoothing forms are tested: an artificial viscosity and the direct method of A.J. Baker [Finite Element Computational Fluid Mechanics (Hemisphere, New York, 1983)]. The results indicate that the classic SPH code with particle-particle based artificial viscosity is reasonably accurate and very consistent. It gives quite sharp edges and flat plateaus, but the velocity plateau is significantly overestimated, and an oscillation can appear in the rarefaction wave. The modified versions with Baker smoothing produce better results for moderate initial conditions, but begin to show spikes when the initial density jump is large. Generally the results are comparable to those of simple finite element and finite difference methods.
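    Classic SPH, the starting point of all three codes above, builds fields by kernel summation over particles. A 1D density summation with the standard Monaghan cubic-spline kernel (normalization 2/(3h); names are ours) shows the idea:

```python
import numpy as np

def cubic_spline_w(r, h):
    """Monaghan cubic-spline SPH kernel in 1D (normalization 2/(3h))."""
    q = np.abs(r) / h
    return (2.0 / (3.0 * h)) * np.where(
        q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

def sph_density(x, m, h):
    """Classic SPH summation rho_i = sum_j m_j W(x_i - x_j, h)."""
    return (m[None, :] * cubic_spline_w(x[:, None] - x[None, :], h)).sum(axis=1)

x = np.linspace(0.0, 1.0, 101)         # uniform particle positions
m = np.full_like(x, x[1] - x[0])       # unit total mass spread uniformly
rho = sph_density(x, m, h=2.0 * (x[1] - x[0]))
# interior density ~ 1; the edges fall off because half the kernel is empty
```

The edge deficit is the 1D analogue of the boundary effects that the time-dependent smoothing-length variants try to mitigate.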

  19. An Imbricate Finite Element Method (I-FEM) using full, reduced, and smoothed integration

    NASA Astrophysics Data System (ADS)

    Cazes, Fabien; Meschke, Günther

    2013-11-01

    A method to design finite elements that imbricate with each other while being assembled, denoted as imbricate finite element method, is proposed to improve the smoothness and the accuracy of the approximation based upon low order elements. Although these imbricate elements rely on triangular meshes, the approximation stems from the shape functions of bilinear quadrilateral elements. These elements satisfy the standard requirements of the finite element method: continuity, delta function property, and partition of unity. The convergence of the proposed approximation is investigated by means of two numerical benchmark problems comparing three different schemes for the numerical integration including a cell-based smoothed FEM based on a quadratic shape of the elements edges. The method is compared to related existing methods.

  20. A new method for evaluation of the resistance to rice kernel cracking based on moisture absorption in brown rice under controlled conditions

    PubMed Central

    Hayashi, Takeshi; Kobayashi, Asako; Tomita, Katsura; Shimizu, Toyohiro

    2015-01-01

    We developed and evaluated the effectiveness of a new method to detect differences among rice cultivars in their resistance to kernel cracking. The method induces kernel cracking under controlled laboratory conditions through moisture absorption into brown rice. The optimal moisture absorption conditions were determined using two japonica cultivars, ‘Nipponbare’ as a cracking-resistant cultivar and ‘Yamahikari’ as a cracking-susceptible cultivar: 12% initial moisture content of the brown rice, a temperature of 25°C, a duration of 5 h, and only a single absorption treatment. We then evaluated the effectiveness of these conditions using 12 japonica cultivars. The proportion of cracked kernels was significantly correlated with the mean 10-day maximum temperature after heading. In addition, the correlation between the proportions of cracked kernels in the 2 years of the study was higher than that for values obtained using the traditional late-harvest method. The new moisture absorption method could stably evaluate the resistance to kernel cracking, and will help breeders to develop future cultivars with less kernel cracking. PMID:26719740

  1. Numerical Simulation of Crater Creating Process in Dynamic Replacement Method by Smooth Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Danilewicz, Andrzej; Sikora, Zbigniew

    2015-02-01

    The theoretical basis of the SPH method, including the governing equations, a discussion of the importance of the smoothing function length, the contact formulation, boundary treatment, and finally its utilization in hydrocode simulations, is presented. An application of SPH to a real case of large penetrations (crater creation) into the soil caused by a falling mass in the Dynamic Replacement Method is discussed. The influence of particle spacing on the accuracy of the method is presented, and an example calculated with the LS-DYNA software is discussed. The chronological development of Smooth Particle Hydrodynamics is presented. Theoretical basics of SPH method stability, consistency of the SPH formulation, artificial viscosity, and boundary treatment are discussed. Time integration techniques with stability conditions, SPH+FEM coupling, the constitutive equation, and the equation of state (EOS) are presented as well.

  2. Smooth solvation method for d-orbital semiempirical calculations of biological reactions. 1. Implementation.

    PubMed

    Khandogin, Jana; Gregersen, Brent A; Thiel, Walter; York, Darrin M

    2005-05-19

    The present paper describes the extension of a recently developed smooth conductor-like screening model for solvation to a d-orbital semiempirical framework (MNDO/d-SCOSMO) with analytic gradients that can be used for geometry optimizations, transition state searches, and molecular dynamics simulations. The methodology is tested on the potential energy surfaces for separating ions and the dissociative phosphoryl transfer mechanism of methyl phosphate. The convergence behavior of the smooth COSMO method with respect to discretization level is examined and the numerical stability of the energy and gradient are compared to that from conventional COSMO calculations. The present method is further tested in applications to energy minimum and transition state geometry optimizations of neutral and charged metaphosphates, phosphates, and phosphoranes that are models for stationary points in transphosphorylation reaction pathways of enzymes and ribozymes. The results indicate that the smooth COSMO method greatly enhances the stability of quantum mechanical geometry optimization and transition state search calculations that would routinely fail with conventional solvation methods. The present MNDO/d-SCOSMO method has considerable computational advantages over hybrid quantum mechanical/molecular mechanical methods with explicit solvation, and represents a potentially useful tool in the arsenal of multi-scale quantum models used to study biochemical reactions. PMID:16852180

  3. A method for smoothing segmented lung boundary in chest CT images

    NASA Astrophysics Data System (ADS)

    Yim, Yeny; Hong, Helen

    2007-03-01

    To segment low-density lung regions in chest CT images, most methods use the difference in gray-level value of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using a scan line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice; we propose a scan line search to track the points on the lung contour and find rapidly changing curvature efficiently. Finally, to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. Our method has been evaluated with respect to visual inspection, accuracy, and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.
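    The 2D-closing step can be sketched with a plain numpy morphological closing of a binary mask (square structuring element; the scan-line curvature tracking and the coronal sub-volume restriction of the paper are not reproduced here):

```python
import numpy as np

def binary_close(mask, r=1):
    """Morphological closing (dilation then erosion) with a (2r+1)^2 square
    structuring element, implemented via zero-padding and stacked shifts."""
    def shifts(m):
        p = np.pad(m, r)
        return np.stack([p[r + dy:r + dy + m.shape[0], r + dx:r + dx + m.shape[1]]
                         for dy in range(-r, r + 1) for dx in range(-r, r + 1)])
    dilated = shifts(mask).max(axis=0)
    return shifts(dilated).min(axis=0)    # erode the dilated mask

mask = np.zeros((12, 12), dtype=np.uint8)
mask[3:9, 3:9] = 1
mask[3, 5] = 0                            # a one-pixel notch on the boundary
closed = binary_close(mask, r=1)
```

The one-pixel notch on the boundary, the kind of defect left by an excluded vessel or nodule, is filled while the rest of the mask is unchanged.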

  4. Testing local anisotropy using the method of smoothed residuals I — methodology

    SciTech Connect

    Appleby, Stephen; Shafieloo, Arman E-mail: arman@apctp.org

    2014-03-01

    We discuss some details regarding the method of smoothed residuals, which has recently been used to search for anisotropic signals in low-redshift distance measurements (Supernovae). In this short note we focus on some details regarding the implementation of the method, particularly the issue of effectively detecting signals in data that are inhomogeneously distributed on the sky. Using simulated data, we argue that the original method proposed in Colin et al. [1] will not detect spurious signals due to incomplete sky coverage, and that introducing additional Gaussian weighting to the statistic as in [2] can hinder its ability to detect a signal. Issues related to the width of the Gaussian smoothing are also discussed.

  5. A new enzymic method for the isolation and culture of human bladder body smooth muscle cells.

    PubMed

    Ma, F -H; Higashira, H; Ukai, Y; Hanai, T; Kiwamoto, H; Park, Y C; Kurita, T

    2002-01-01

    Cultured cells of the human urinary bladder smooth muscle are useful for investigating bladder function, but methods for culturing them are not well developed. We have now established a novel enzymic technique. The smooth muscle layer was separated out and incubated with 0.2% trypsin for 30 min at 37 degrees C. The samples were then minced and incubated with 0.1% collagenase for 30 min and centrifuged at 900 g. The pellets were resuspended in RPMI-1640 medium containing 10% fetal calf serum (FCS) and centrifuged at 250 g. The smooth muscle cells from the supernatant were cultured in RPMI-1640 containing 10% FCS. The cells grew to confluence after 7-10 days, forming the "hills and valleys" growth pattern characteristic of smooth muscle cells. Immunostaining with anti-alpha-actin, anti-myosin, and anti-caldesmon antibodies demonstrated that 99% of the cells were smooth muscle cells. To investigate the pharmacological properties of the cultured cells, we determined the inhibitory effect of muscarinic receptor antagonists on the binding of [3H]N-methylscopolamine to membranes from cultured cells. The pKi values obtained for six antagonists agreed with the corresponding values for transfected cells expressing the human muscarinic M2 subtype. Furthermore, carbachol produced an increase in the concentration of cytoplasmic free Ca2+, an action that was blocked by 4-diphenylacetoxy-N-methylpiperidine methiodide, an M3 selective antagonist. This result suggests that these cells express functional M3 muscarinic receptors, in addition to M2 receptors. The subcultured cells therefore appear to be unaffected by our new isolation method. PMID:11835427

  6. To the theory of volterra integral equations of the first kind with discontinuous kernels

    NASA Astrophysics Data System (ADS)

    Apartsin, A. S.

    2016-05-01

    A nonclassical Volterra linear integral equation of the first kind describing the dynamics of a developing system with allowance for its age structure is considered. The connection of this equation with the classical Volterra linear integral equation of the first kind with a piecewise-smooth kernel is studied. For solving such equations, the quadrature method is applied.
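    For the classical first-kind equation with a smooth kernel, the quadrature method reduces to forward substitution on a lower-triangular system. A midpoint-rule sketch (our discretization choices; the nonclassical age-structured equation with a moving lower limit is not modeled):

```python
import numpy as np

def volterra_first_kind(K, f, T, n):
    """Solve int_0^t K(t, s) x(s) ds = f(t), 0 < t <= T, by the midpoint
    quadrature method: collocation points t_i = i*h, unknowns at the
    midpoints s_j = (j - 1/2)*h, and forward substitution on the
    resulting lower-triangular system."""
    h = T / n
    t = h * np.arange(1, n + 1)
    s = h * (np.arange(1, n + 1) - 0.5)
    x = np.zeros(n)
    for i in range(n):
        acc = sum(h * K(t[i], s[j]) * x[j] for j in range(i))
        x[i] = (f(t[i]) - acc) / (h * K(t[i], s[i]))
    return s, x

# K = 1 and f(t) = t^2/2 give the exact solution x(t) = t
s, x = volterra_first_kind(lambda t, s: 1.0, lambda t: 0.5 * t * t, T=1.0, n=64)
```

For this constant-kernel test problem the midpoint rule reproduces the exact solution at the midpoints, which makes it a convenient sanity check for the quadrature scheme.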

  7. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  8. A Nonlinear Framework of Delayed Particle Smoothing Method for Vehicle Localization under Non-Gaussian Environment

    PubMed Central

    Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong

    2016-01-01

    In this paper, a novel nonlinear framework of smoothing method, non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student’s t-distribution is adopted in order to compute the probability distribution function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed-delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on the real-world data, which is collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS has significant improvement on the vehicle state accuracy and outperforms the existing filtering and smoothing methods. PMID:27187405

  9. Methods for Least Squares Data Smoothing by Adjustment of Divided Differences

    NASA Astrophysics Data System (ADS)

    Demetriou, I. C.

    2008-09-01

    A brief survey is presented of the main methods used in least squares data smoothing by adjusting the signs of divided differences of the smoothed values. The most distinctive feature of the smoothing approach is that it automatically provides a piecewise monotonic or a piecewise convex/concave fit to the data. The data are measured values of a function of one variable that contain random errors. As a consequence of the errors, the number of sign alterations in the sequence of mth divided differences is usually unacceptably large, where m is a prescribed positive integer. Therefore, we make the least sum of squares change to the measurements by requiring the sequence of divided differences of order m to have at most k-1 sign changes, for some positive integer k. Although it is a combinatorial problem, whose solution can require about O(nk) quadratic programming calculations in n variables and n-m constraints, where n is the number of data, very efficient algorithms have been developed for the cases when m = 1 or m = 2 and k is arbitrary, as well as when m > 2 for small values of k. Attention is paid to the purpose of each method rather than to its details. Some software packages make the methods publicly accessible through library systems.
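    For m = 1 and k = 1 the problem is the classical monotonic least-squares fit, which needs no quadratic programming at all: the pool-adjacent-violators algorithm solves it in linear time. A compact sketch of that special case:

```python
def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit, i.e. the
    m = 1, k = 1 case of smoothing by sign restrictions on first divided
    differences. Each block stores [sum, count]; adjacent blocks are
    merged while a pair of block means still decreases."""
    out = []
    for v in y:
        out.append([v, 1])
        while len(out) > 1 and out[-2][0] / out[-2][1] > out[-1][0] / out[-1][1]:
            s, c = out.pop()
            out[-1][0] += s
            out[-1][1] += c
    fit = []
    for s, c in out:
        fit.extend([s / c] * c)   # each block's mean, repeated over its span
    return fit

print(pava([1.0, 3.0, 2.0, 4.0]))   # -> [1.0, 2.5, 2.5, 4.0]
```

The violating pair (3, 2) is pooled into its mean 2.5, which is exactly the least-squares change that removes the sign alteration in the first divided differences.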

  11. Direct simulation of flows with suspended paramagnetic particles using one-stage smoothed profile method

    NASA Astrophysics Data System (ADS)

    Kang, S.; Suh, Y. K.

    2011-02-01

    The so-called smoothed profile method, originally suggested by Nakayama and Yamamoto and further improved by Luo et al. in 2005 and 2009, respectively, is an efficient numerical solver for fluid-structure interaction problems; it represents the particles by a smoothed profile on a fixed grid and constructs a body force, added to the momentum (Navier-Stokes) equation, that ensures the rigidity of the particles. For numerical simulations, the method first advances the flow and pressure fields by integrating the momentum equation in time without the body-force (momentum impulse) term, and then updates them by separately integrating the body-force term in time, thus requiring an additional Poisson-equation solve for the extra pressure field due to the rigidity of particles, to ensure the divergence-free constraint on the total velocity field. In the present study, we propose a simplified version of the smoothed profile method, the one-stage method, which combines the two stages of the velocity update (temporal integration) into one, eliminating the need for the additional solver and thus significantly reducing the computational cost. To validate the proposed one-stage method, we perform direct numerical simulations of the two-dimensional motion of multiple inertialess paramagnetic particles in a nonmagnetic fluid subjected to an external uniform magnetic field and compare the results with existing benchmark solutions. For the validation, we develop a finite-volume version of the direct simulation method employing the proposed one-stage method. The comparison shows that the proposed one-stage method is very accurate and efficient for direct simulations of such magnetic particulate flows.
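    The smoothed profile that represents each particle on the fixed grid is a smeared indicator function. A sketch using a common tanh profile (the exact profile in the cited papers may differ):

```python
import numpy as np

def smoothed_profile(x, center, radius, xi):
    """Smoothed indicator of a rigid particle: ~1 inside, ~0 outside,
    with a tanh transition of thickness xi across the surface. The body
    force enforcing rigidity is applied where this profile is nonzero."""
    r = np.linalg.norm(x - center, axis=-1)
    return 0.5 * (np.tanh((radius - r) / xi) + 1.0)

xs = np.linspace(-1.0, 1.0, 65)
grid = np.stack(np.meshgrid(xs, xs, indexing="ij"), axis=-1)   # (65, 65, 2)
phi = smoothed_profile(grid, center=np.array([0.0, 0.0]), radius=0.5, xi=0.05)
# phi ~ 1 at the particle center, ~0 in the corners, smooth at the interface
```

Because the transition has finite thickness xi, the interface is resolved on the fixed grid without any body-fitted meshing, which is what makes the approach efficient.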

  12. Spectral-Element Simulations of Wave Propagation in Porous Media: Finite-Frequency Sensitivity Kernels Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Morency, C.; Tromp, J.

    2008-12-01

    We present finite-frequency sensitivity kernels for wave propagation in porous media based upon adjoint methods. We first show that the adjoint equations in porous media are similar to the regular Biot equations upon defining an appropriate adjoint source. Then we present finite-frequency kernels for seismic phases in porous media (e.g., fast P, slow P, and S). These kernels illustrate the sensitivity of seismic observables to structural parameters and form the basis of tomographic inversions. Finally, we show an application of this imaging technique related to the detection of buried landmines and unexploded ordnance (UXO) in porous environments.

  13. A novel method of target recognition based on 3D-color-space locally adaptive regression kernels model

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqi; Han, Jing; Zhang, Yi; Bai, Lianfa

    2015-10-01

    The locally adaptive regression kernels model can describe the edge shapes of images accurately and their overall graphic trends, but it does not consider color information, even though color is an important element of an image. We therefore present a novel method of target recognition based on a 3D-color-space locally adaptive regression kernels model. Rather than treating color as supplementary information, this method directly calculates local similarity features from the 3D data of the color image. The proposed method uses a few examples of an object as a query to detect generic objects with incompact, complex, and changeable shapes. Our method involves three phases. First, we calculate novel color-space descriptors from the RGB color space of the query image, which measure the likeness of a voxel to its surroundings; salient features that include spatial-dimensional and color-dimensional information are extracted from these descriptors and simplified by principal components analysis (PCA) to construct a non-similar local structure feature set of the object class. Second, we compare the salient features with analogous features from the target image, using a matrix generalization of the cosine similarity measure; the similar structures in the target image are then obtained by local similarity structure statistical matching. Finally, we apply non-maxima suppression to the similarity image to extract the object position and mark the object in the test image. Experimental results demonstrate that our approach is effective and accurate in improving the ability to identify targets.
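    The matrix generalization of cosine similarity between two feature sets can be sketched as the Frobenius inner product of the normalized feature matrices (our reading of the measure; an illustrative sketch, not the authors' code):

```python
import numpy as np

def matrix_cosine_similarity(FQ, FT):
    """Matrix cosine similarity between a query feature matrix FQ and a
    same-shaped target feature matrix FT: the Frobenius inner product of
    the Frobenius-normalized matrices, lying in [-1, 1]."""
    fq = FQ / np.linalg.norm(FQ)   # np.linalg.norm on a 2D array = Frobenius
    ft = FT / np.linalg.norm(FT)
    return float(np.sum(fq * ft))

A = np.arange(12.0).reshape(3, 4) + 1.0
s_same = matrix_cosine_similarity(A, 2.0 * A)   # positive rescaling -> ~1
s_anti = matrix_cosine_similarity(A, -A)        # sign flip -> ~-1
```

Invariance to positive rescaling is what makes the measure usable for matching features across images with different contrast.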

  14. Validity of two simple rescaling methods for electron/beta dose point kernels in heterogeneous source-target geometry

    NASA Astrophysics Data System (ADS)

    Cho, Sang Hyun; Reece, Warren D.; Kim, Chan-Hyeong

    2004-03-01

    Dose calculations around electron-emitting metallic spherical sources were performed up to the X90 distance of each electron energy ranging from 0.5 to 3.0 MeV using the MCNP 4C Monte Carlo code and the dose point kernel (DPK) method with the DPKs rescaled using the linear range ratio and physical density ratio, respectively. The results show that the discrepancy between the MCNP and DPK results increases with the atomic number of the source (i.e., heterogeneity in source-target geometry), regardless of the rescaling method used. The observed discrepancies between the MCNP and DPK results were up to 100% for extreme cases such as a platinum source immersed in water.

  15. Shot noise limit of the optical 3D measurement methods for smooth surfaces

    NASA Astrophysics Data System (ADS)

    Pavliček, Pavel; Pech, Miroslav

    2016-03-01

    The measurement uncertainty of optical 3D measurement methods for smooth surfaces caused by shot noise is investigated. The shot noise is a fundamental property of the quantum nature of light. If all noise sources are eliminated, the shot noise represents the ultimate limit of the measurement uncertainty. The measurement uncertainty is calculated for several simple model methods. The analysis shows that the measurement uncertainty depends on the wavelength of used light, the number of photons used for the measurement, and on a factor that is connected with the geometric arrangement of the measurement setup.

  16. Incomplete iterations in multistep backward difference methods for parabolic problems with smooth and nonsmooth data

    SciTech Connect

    Bramble, J. H.; Pasciak, J. E.; Sammon, P. H.; Thomee, V.

    1989-04-01

    Backward difference methods for the discretization of parabolic boundary value problems are considered in this paper. In particular, we analyze the case when the backward difference equations are only solved 'approximately' by a preconditioned iteration. We provide an analysis which shows that these methods remain stable and accurate if a suitable number of iterations (often independent of the spatial discretization and time step size) are used. Results are provided for the smooth as well as nonsmooth initial data cases. Finally, the results of numerical experiments illustrating the algorithms' performance on model problems are given.

  17. Source Region Identification Using Kernel Smoothing

    EPA Science Inventory

    As described in this paper, Nonparametric Wind Regression is a source-to-receptor source apportionment model that can be used to identify and quantify the impact of possible source regions of pollutants as defined by wind direction sectors. It is described in detail with an exam...

  18. A Fast Variational Method for the Construction of Resolution Adaptive C-Smooth Molecular Surfaces.

    PubMed

    Bajaj, Chandrajit L; Xu, Guoliang; Zhang, Qin

    2009-05-01

    We present a variational approach to smooth molecular (protein, nucleic acid) surface constructions, starting from atomic coordinates as available from the protein and nucleic-acid data banks. Molecular dynamics (MD) simulations, traditionally used in understanding protein and nucleic-acid folding processes, are based on molecular force fields and require smooth models of these molecular surfaces. To accelerate MD simulations, a popular methodology is to employ coarse-grained molecular models, which represent clusters of atoms with similar physical properties by pseudo-atoms, resulting in coarser-resolution molecular surfaces. We consider generation of these mixed-resolution or adaptive molecular surfaces. Our approach starts from deriving a general-form second-order geometric partial differential equation in the level-set formulation, by minimizing a first-order energy functional which additionally includes a regularization term to minimize the occurrence of chemically infeasible molecular surface pockets or tunnel-like artifacts. To achieve even higher computational efficiency, a fast cubic B-spline C(2) interpolation algorithm is also utilized. A narrow-band, tri-cubic B-spline level-set method is then used to provide C(2) smooth and resolution adaptive molecular surfaces. PMID:19802355

  19. Image reconstruction for 3D light microscopy with a regularized linear method incorporating a smoothness prior

    NASA Astrophysics Data System (ADS)

    Preza, Chrysanthe; Miller, Michael I.; Conchello, Jose-Angel

    1993-07-01

    We have shown that the linear least-squares (LLS) estimate of the intensities of a 3-D object obtained from a set of optical sections is unstable due to the inversion of small and zero-valued eigenvalues of the point-spread function (PSF) operator. The LLS solution was regularized by constraining it to lie in a subspace spanned by the eigenvectors corresponding to a selected number of the largest eigenvalues. In this paper we extend the regularized LLS solution to a maximum a posteriori (MAP) solution induced by a prior formed from a 'Good's-like' smoothness penalty. This approach also yields a regularized linear estimator which reduces noise as well as edge artifacts in the reconstruction. The advantage of the linear MAP (LMAP) estimate over the current regularized LLS (RLLS) is its ability to regularize the inverse problem by smoothly penalizing components in the image associated with small eigenvalues. Computer simulations were performed using a theoretical PSF and a simple phantom to compare the two regularization techniques. It is shown that the reconstructions using the smoothness prior give superior variance and bias results compared with the RLLS reconstructions. Encouraging reconstructions obtained with the LMAP method from real microscope images of a 10 μm fluorescent bead and a four-cell Volvox embryo are shown.

  20. The multiscale restriction smoothed basis method for fractured porous media (F-MsRSB)

    NASA Astrophysics Data System (ADS)

    Shah, Swej; Møyner, Olav; Tene, Matei; Lie, Knut-Andreas; Hajibeygi, Hadi

    2016-08-01

    A novel multiscale method for multiphase flow in heterogeneous fractured porous media is devised. The discrete fine-scale system is described using an embedded fracture modeling approach, in which the heterogeneous rock (matrix) and highly-conductive fractures are represented on independent grids. Given this fine-scale discrete system, the method first partitions the fine-scale volumetric grid representing the matrix and the lower-dimensional grids representing fractures into independent coarse grids. Then, basis functions for matrix and fractures are constructed by restricted smoothing, which gives a flexible and robust treatment of complex geometrical features and heterogeneous coefficients. From the basis functions one constructs a prolongation operator that maps between the coarse- and fine-scale systems. The resulting method allows for general coupling of matrix and fracture basis functions, giving efficient treatment of a large variety of fracture conductivities. In addition, basis functions can be adaptively updated using efficient global smoothing strategies to account for multiphase flow effects. The method is conservative and, because it is described and implemented in algebraic form, it is straightforward to apply to both rectilinear and unstructured grids. Through a series of challenging test cases for single and multiphase flow, in which synthetic and realistic fracture maps are combined with heterogeneous petrophysical matrix properties, we validate the method and conclude that it is an efficient and accurate approach for simulating flow in complex, large-scale, fractured media.

  1. NUMERICAL CONVERGENCE IN SMOOTHED PARTICLE HYDRODYNAMICS

    SciTech Connect

    Zhu, Qirong; Li, Yuexing; Hernquist, Lars

    2015-02-10

    We study the convergence properties of smoothed particle hydrodynamics (SPH) using numerical tests and simple analytic considerations. Our analysis shows that formal numerical convergence is possible in SPH only in the joint limit N → ∞, h → 0, and N_nb → ∞, where N is the total number of particles, h is the smoothing length, and N_nb is the number of neighbor particles within the smoothing volume used to compute smoothed estimates. Previous work has generally assumed that the conditions N → ∞ and h → 0 are sufficient to achieve convergence, while holding N_nb fixed. We demonstrate that if N_nb is held fixed as the resolution is increased, there will be a residual source of error that does not vanish as N → ∞ and h → 0. Formal numerical convergence in SPH is possible only if N_nb is increased systematically as the resolution is improved. Using analytic arguments, we derive an optimal compromise scaling for N_nb by requiring that this source of error balance that present in the smoothing procedure. For typical choices of the smoothing kernel, we find N_nb ∝ N^0.5. This means that if SPH is to be used as a numerically convergent method, the required computational cost does not scale with particle number as O(N), but rather as O(N^(1+δ)), where δ ≈ 0.5, with a weak dependence on the form of the smoothing kernel.
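
    The cost implication of N_nb ∝ N^0.5 can be made concrete: if each of N particles interacts with N_nb neighbors per step, per-step work grows as N^(1+δ) with δ = 0.5 (a sketch of the scaling argument only):

```python
def sph_step_cost(n_particles, c=1.0, delta=0.5):
    """Per-step interaction count when the neighbor number is scaled as
    N_nb = c * N**delta, giving total work N * N_nb = c * N**(1 + delta)."""
    n_nb = c * n_particles**delta
    return n_particles * n_nb

# Increasing the particle number 100x increases the work 1000x (100**1.5).
ratio = sph_step_cost(10**6) / sph_step_cost(10**4)
print(ratio)  # → 1000.0
```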

  2. Kernel Phase and Kernel Amplitude in Fizeau Imaging

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin J. S.

    2016-09-01

    Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.

  3. Spatial smoothing systematically biases the localization of reward-related brain activity

    PubMed Central

    Sacchet, Matthew D.; Knutson, Brian

    2012-01-01

    Neuroimaging methods with enhanced spatial resolution such as functional magnetic resonance imaging (FMRI) suggest that the subcortical striatum plays a critical role in human reward processing. Analysis of FMRI data requires several preprocessing steps, some of which entail tradeoffs. For instance, while spatial smoothing can enhance statistical power, it may also bias localization towards regions that contain more gray than white matter. In a meta-analysis and reanalysis of an existing dataset, we sought to determine whether spatial smoothing could systematically bias the spatial localization of foci related to reward anticipation in the nucleus accumbens (NAcc). An Activation Likelihood Estimate (ALE) meta-analysis revealed that peak ventral striatal ALE foci for studies that used smaller spatial smoothing kernels (i.e. < 6 mm FWHM) were more anterior than those identified for studies that used larger kernels (i.e. > 7 mm FWHM). Additionally, subtraction analysis of findings for studies that used smaller versus larger smoothing kernels revealed a significant cluster of differential activity in the left relatively anterior NAcc (Talairach coordinates: −10, 9, −1). A second meta-analysis revealed that larger smoothing kernels were correlated with more posterior localizations of NAcc activation foci (p < 0.015), but revealed no significant associations with other potentially relevant parameters (including voxel volume, magnet strength, and publication date). Finally, repeated analysis of a representative dataset processed at different smoothing kernels (i.e., 0–12 mm) also indicated that smoothing systematically yielded more posterior activation foci in the NAcc (p < 0.005). Taken together, these findings indicate that spatial smoothing can systematically bias the spatial localization of striatal activity. These findings have implications both for historical interpretation of past findings related to reward processing and for the analysis of future studies
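
    The mechanism behind this bias can be reproduced in one dimension: a Gaussian kernel with σ = FWHM / (2√(2 ln 2)) ≈ FWHM / 2.355 suppresses a narrow peak more than a nearby broad one, so the post-smoothing maximum migrates toward the broader structure. This is a toy illustration, not the authors' analysis pipeline:

```python
import numpy as np

def gaussian_kernel(sigma, radius=30):
    """Normalized 1-D Gaussian smoothing kernel."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

x = np.arange(100, dtype=float)
signal = (1.0 * np.exp(-(x - 30)**2 / (2 * 2.0**2))     # narrow peak at 30
          + 0.9 * np.exp(-(x - 45)**2 / (2 * 8.0**2)))  # broad peak at 45

fwhm = 12.0                                    # a "large" smoothing kernel
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))    # FWHM -> sigma (~ fwhm / 2.355)
smoothed = np.convolve(signal, gaussian_kernel(sigma), mode="same")

peak_raw = int(np.argmax(signal))       # 30: the narrow peak dominates raw data
peak_smooth = int(np.argmax(smoothed))  # near 45: smoothing favors the broad peak
```

    A narrow Gaussian of width s convolved with a kernel of width k keeps only a fraction s/√(s² + k²) of its amplitude, so the broader structure loses less height and ends up hosting the maximum.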

  4. Smoothed Particle Inference: A Kilo-Parametric Method for X-ray Galaxy Cluster Modeling

    SciTech Connect

    Peterson, John R.; Marshall, P.J.; Andersson, K.; /Stockholm U. /SLAC

    2005-08-05

    We propose an ambitious new method that models the intracluster medium in clusters of galaxies as a set of X-ray emitting smoothed particles of plasma. Each smoothed particle is described by a handful of parameters including temperature, location, size, and elemental abundances. Hundreds to thousands of these particles are used to construct a model cluster of galaxies, with the appropriate complexity estimated from the data quality. This model is then compared iteratively with X-ray data in the form of adaptively binned photon lists via a two-sample likelihood statistic and iterated via Markov Chain Monte Carlo. The complex cluster model is propagated through the X-ray instrument response using direct sampling Monte Carlo methods. Using this approach the method can reproduce many of the features observed in the X-ray emission in a less assumption-dependent way than traditional analyses, and it allows for a more detailed characterization of the density, temperature, and metal abundance structure of clusters. Multi-instrument X-ray analyses and simultaneous X-ray, Sunyaev-Zeldovich (SZ), and lensing analyses are a straightforward extension of this methodology. Significant challenges still exist in understanding the degeneracy in these models and the statistical noise induced by the complexity of the models.

  5. The CACAO Method for Smoothing, Gap Filling, and Characterizing Seasonal Anomalies in Satellite Time Series

    NASA Technical Reports Server (NTRS)

    Verger, Aleixandre; Baret, F.; Weiss, M.; Kandasamy, S.; Vermote, E.

    2013-01-01

    Consistent, continuous, and long time series of global biophysical variables derived from satellite data are required for global change research. A novel climatology-fitting approach called CACAO (Consistent Adjustment of the Climatology to Actual Observations) is proposed to reduce noise and fill gaps in time series by scaling and shifting the seasonal climatological patterns to the actual observations. The shift and scale CACAO parameters, adjusted for each season, quantify shifts in the timing of seasonal phenology and inter-annual variations in magnitude relative to the average climatology. CACAO was assessed first over simulated daily Leaf Area Index (LAI) time series with varying fractions of missing data and noise. Then, performances were analyzed over actual satellite LAI products derived from the AVHRR Long-Term Data Record for the 1981-2000 period over the BELMANIP2 globally representative sample of sites. Comparison with two widely used temporal filtering methods, the asymmetric Gaussian (AG) model and the Savitzky-Golay (SG) filter as implemented in TIMESAT, revealed that CACAO achieved better performance for smoothing AVHRR time series characterized by a high level of noise and frequent missing observations. The resulting smoothed time series captures the vegetation dynamics well and shows no gaps, as compared to the 50-60% of data still missing after AG or SG reconstructions. Results of the simulation experiments, as well as comparison with actual AVHRR time series, indicate that the proposed CACAO method is more robust to noise and missing data than the AG and SG methods for phenology extraction.
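
    The core CACAO idea, shifting the seasonal climatology in time and scaling it in magnitude to match the actual observations, can be sketched as a grid search over integer shifts with a closed-form least-squares scale. This is an illustrative reduction; the operational method fits per-season parameters and handles noise and gap patterns more carefully:

```python
import numpy as np

def fit_shift_and_scale(obs, clim, max_shift=10):
    """Find the integer time shift dt and scale a minimizing
    ||obs - a * clim(t - dt)||, ignoring missing observations (NaNs)."""
    best = (None, None, np.inf)
    valid = ~np.isnan(obs)
    for dt in range(-max_shift, max_shift + 1):
        shifted = np.roll(clim, dt)
        a = obs[valid] @ shifted[valid] / (shifted[valid] @ shifted[valid])
        err = np.nansum((obs - a * shifted) ** 2)
        if err < best[2]:
            best = (dt, a, err)
    return best[0], best[1]

t = np.arange(365, dtype=float)
clim = 2.0 + np.sin(2 * np.pi * t / 365)   # climatological seasonal LAI pattern
obs = 1.5 * np.roll(clim, 7)               # season delayed 7 days, amplified
obs[50:80] = np.nan                        # a gap of missing observations

dt, a = fit_shift_and_scale(obs, clim)     # recovers roughly dt = 7, a = 1.5
```

    The fitted (dt, a) pair is exactly the kind of per-season phenology-timing and magnitude anomaly the abstract describes, and a * clim(t - dt) provides the gap-free smoothed reconstruction.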

  6. Unified framework for anisotropic interpolation and smoothing of diffusion tensor images.

    PubMed

    Mishra, Arabinda; Lu, Yonggang; Meng, Jingjing; Anderson, Adam W; Ding, Zhaohua

    2006-07-15

    To enhance the performance of diffusion tensor imaging (DTI)-based fiber tractography, this study proposes a unified framework for anisotropic interpolation and smoothing of DTI data. The critical component of this framework is an anisotropic sigmoid interpolation kernel which is adaptively modulated by the local image intensity gradient profile. The adaptive modulation of the sigmoid kernel permits image smoothing in homogeneous regions while guaranteeing preservation of structural boundaries. The unified scheme thus allows piecewise-smooth, continuous, and boundary-preserving interpolation of DTI data, so that smooth fiber tracts can be tracked in a continuous manner and confined within the boundaries of the targeted structure. The new interpolation method is compared with conventional interpolation methods on the basis of fiber tracking from synthetic and in vivo DTI data, which demonstrates the effectiveness of this unified framework. PMID:16624586

  7. Method of adiabatic modes in studying problems of smoothly irregular open waveguide structures

    SciTech Connect

    Sevastianov, L. A.; Egorov, A. A.; Sevastyanov, A. L.

    2013-02-15

    Basic steps in developing an original method of adiabatic modes that makes it possible to solve the direct and inverse problems of simulating and designing three-dimensional multilayered smoothly irregular open waveguide structures are described. A new element in the method is that an approximate solution of Maxwell's equations is made to obey 'inclined' boundary conditions at the interfaces between the media being considered. These boundary conditions take into account the obliqueness of planes tangent to nonplanar boundaries between the media and lead to new equations for coupled vector quasiwaveguide hybrid adiabatic modes. Solutions of these equations describe the phenomenon of 'entanglement' of two linear polarizations of an irregular multilayered waveguide, the appearance of a new mode in an entangled state, and the effect of rotation of the polarization plane of quasiwaveguide modes. The efficiency of the method is demonstrated by considering the example of numerically simulating a thin-film generalized waveguide Lueneburg lens.

  8. Development of the smooth orthogonal decomposition method to derive the modal parameters of vehicle suspension system

    NASA Astrophysics Data System (ADS)

    Rezaee, Mousa; Shaterian-Alghalandis, Vahid; Banan-Nojavani, Ali

    2013-04-01

    In this paper, the smooth orthogonal decomposition (SOD) method is extended to lightly damped systems in which the inputs are time-shifted functions of one or more random processes. A practical example of such a case is the vehicle suspension system, in which the random inputs due to road roughness applied to the rear wheels are shifted versions of the same random inputs applied to the front wheels, with a time lag depending on the vehicle wheelbase as well as its velocity. The developed SOD method is applied to determine the natural frequencies and mode shapes of a certain vehicle suspension system, and the results are compared with the true values obtained from the structural eigenvalue problem. The consistency of the results indicates that the SOD method can be applied with a high degree of accuracy to calculate the modal parameters of vibrating systems in which the system inputs are shifted functions of one or more random processes.

  9. Generalized hidden-mapping ridge regression, knowledge-leveraged inductive transfer learning for neural networks, fuzzy systems and kernel methods.

    PubMed

    Deng, Zhaohong; Choi, Kup-Sze; Jiang, Yizhang; Wang, Shitong

    2014-12-01

    Inductive transfer learning has attracted increasing attention for the training of effective models in the target domain by leveraging information from the source domain. However, most transfer learning methods are developed for a specific model, such as the commonly used support vector machine, which makes the methods applicable only to the adopted models. In this regard, the generalized hidden-mapping ridge regression (GHRR) method is introduced in order to train various types of classical intelligence models, including neural networks, fuzzy logical systems and kernel methods. Furthermore, the knowledge-leverage-based transfer learning mechanism is integrated with GHRR to realize the inductive transfer learning method called transfer GHRR (TGHRR). Since the information from the induced knowledge is much clearer and more concise than that from the data in the source domain, it is more convenient to control and balance the similarity and difference of data distributions between the source and target domains. The proposed GHRR and TGHRR algorithms have been evaluated experimentally by performing regression and classification on synthetic and real world datasets. The results demonstrate that the performance of TGHRR is competitive with or even superior to existing state-of-the-art inductive transfer learning algorithms. PMID:24710838
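
    For background, classical kernel ridge regression, one of the model families a ridge-style formulation like GHRR covers, fits dual weights in closed form. This sketch is generic kernel ridge regression, not the authors' GHRR/TGHRR algorithm:

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian RBF kernel matrix between two sets of row vectors."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_ridge(X, y, lam=1e-3, gamma=1.0):
    """Closed-form dual solution alpha = (K + lam * I)^-1 y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    """Predictions are kernel evaluations weighted by the dual coefficients."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

X = np.linspace(0, 1, 20).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
alpha = fit_kernel_ridge(X, y, lam=1e-6, gamma=10.0)
y_hat = predict(X, alpha, X, gamma=10.0)
train_err = float(np.max(np.abs(y_hat - y)))   # small training error
```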

  10. Immersed smoothed finite element method for fluid-structure interaction simulation of aortic valves

    NASA Astrophysics Data System (ADS)

    Yao, Jianyao; Liu, G. R.; Narmoneva, Daria A.; Hinton, Robert B.; Zhang, Zhi-Qian

    2012-12-01

    This paper presents a novel numerical method for simulating the fluid-structure interaction (FSI) problems when blood flows over aortic valves. The method uses the immersed boundary/element method and the smoothed finite element method and hence it is termed as IS-FEM. The IS-FEM is a partitioned approach and does not need a body-fitted mesh for FSI simulations. It consists of three main modules: the fluid solver, the solid solver and the FSI force solver. In this work, the blood is modeled as incompressible viscous flow and solved using the characteristic-based-split scheme with FEM for spatial discretization. The leaflets of the aortic valve are modeled as Mooney-Rivlin hyperelastic materials and solved using the smoothed finite element method (or S-FEM). The FSI force is calculated on the Lagrangian fictitious fluid mesh that is identical to the moving solid mesh. The octree search and neighbor-to-neighbor schemes are used to detect efficiently the FSI pairs of fluid and solid cells. As an example, a 3D idealized model of aortic valve is modeled, and the opening process of the valve is simulated using the proposed IS-FEM. Numerical results indicate that the IS-FEM can serve as an efficient tool in the study of aortic valve dynamics to reveal the details of stresses in the aortic valves, the flow velocities in the blood, and the shear forces on the interfaces. This tool can also be applied to animal models studying disease processes and may ultimately translate into new adaptive methods working with magnetic resonance images, leading to improvements in diagnostic and prognostic paradigms, as well as surgical planning, in the care of patients.

  11. A method for the accurate and smooth approximation of standard thermodynamic functions

    NASA Astrophysics Data System (ADS)

    Coufal, O.

    2013-01-01

    A method is proposed for the calculation of approximations of standard thermodynamic functions. The method is consistent with the physical properties of standard thermodynamic functions. This means that the approximation functions are, in contrast to the hitherto used approximations, continuous and smooth in every temperature interval in which no phase transformations take place. The calculation algorithm was implemented in the SmoothSTF program in the C++ language, which is part of this paper. Program summary: Program title: SmoothSTF; Catalogue identifier: AENH_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENH_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 3807; No. of bytes in distributed program, including test data, etc.: 131965; Distribution format: tar.gz; Programming language: C++; Computer: any computer with the gcc version 4.3.2 compiler; Operating system: Debian GNU/Linux 6.0 (the program can be run under any operating system in which the gcc compiler can be installed, see http://gcc.gnu.org/install/specific.html); RAM: 256 MB is sufficient for a table of standard thermodynamic functions with 500 lines; Classification: 4.9. Nature of problem: Standard thermodynamic functions (STF) of individual substances are given by thermal capacity at constant pressure, entropy and enthalpy. STF are continuous and smooth in every temperature interval in which no phase transformations take place. The temperature dependence of STF, as expressed by a table of values, is approximated by temperature functions for further application. In this paper, a method is proposed for calculating approximation functions which, in contrast to the hitherto used approximations, are continuous and smooth in every temperature interval. Solution method: The approximation functions are

  12. A Kernel Machine Method for Detecting Effects of Interaction Between Multidimensional Variable Sets: An Imaging Genetics Application

    PubMed Central

    Ge, Tian; Nichols, Thomas E.; Ghosh, Debashis; Mormino, Elizabeth C.

    2015-01-01

    Measurements derived from neuroimaging data can serve as markers of disease and/or healthy development, are largely heritable, and have been increasingly utilized as (intermediate) phenotypes in genetic association studies. To date, imaging genetic studies have mostly focused on discovering isolated genetic effects, typically ignoring potential interactions with non-genetic variables such as disease risk factors, environmental exposures, and epigenetic markers. However, identifying significant interaction effects is critical for revealing the true relationship between genetic and phenotypic variables, and shedding light on disease mechanisms. In this paper, we present a general kernel machine based method for detecting effects of interaction between multidimensional variable sets. This method can model the joint and epistatic effect of a collection of single nucleotide polymorphisms (SNPs), accommodate multiple factors that potentially moderate genetic influences, and test for nonlinear interactions between sets of variables in a flexible framework. As a demonstration of application, we applied the method to data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) to detect the effects of the interactions between candidate Alzheimer's disease (AD) risk genes and a collection of cardiovascular disease (CVD) risk factors, on hippocampal volume measurements derived from structural brain magnetic resonance imaging (MRI) scans. Our method identified that two genes, CR1 and EPHA1, demonstrate significant interactions with CVD risk factors on hippocampal volume, suggesting that CR1 and EPHA1 may play a role in influencing AD-related neurodegeneration in the presence of CVD risks. PMID:25600633

  13. A kernel machine method for detecting effects of interaction between multidimensional variable sets: an imaging genetics application.

    PubMed

    Ge, Tian; Nichols, Thomas E; Ghosh, Debashis; Mormino, Elizabeth C; Smoller, Jordan W; Sabuncu, Mert R

    2015-04-01

    Measurements derived from neuroimaging data can serve as markers of disease and/or healthy development, are largely heritable, and have been increasingly utilized as (intermediate) phenotypes in genetic association studies. To date, imaging genetic studies have mostly focused on discovering isolated genetic effects, typically ignoring potential interactions with non-genetic variables such as disease risk factors, environmental exposures, and epigenetic markers. However, identifying significant interaction effects is critical for revealing the true relationship between genetic and phenotypic variables, and shedding light on disease mechanisms. In this paper, we present a general kernel machine based method for detecting effects of the interaction between multidimensional variable sets. This method can model the joint and epistatic effect of a collection of single nucleotide polymorphisms (SNPs), accommodate multiple factors that potentially moderate genetic influences, and test for nonlinear interactions between sets of variables in a flexible framework. As a demonstration of application, we applied the method to the data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) to detect the effects of the interactions between candidate Alzheimer's disease (AD) risk genes and a collection of cardiovascular disease (CVD) risk factors, on hippocampal volume measurements derived from structural brain magnetic resonance imaging (MRI) scans. Our method identified that two genes, CR1 and EPHA1, demonstrate significant interactions with CVD risk factors on hippocampal volume, suggesting that CR1 and EPHA1 may play a role in influencing AD-related neurodegeneration in the presence of CVD risks. PMID:25600633
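
    One standard construction for a kernel that tests interaction between two variable sets is the element-wise (Hadamard) product of the per-set kernels, which the Schur product theorem guarantees is itself a valid (positive semidefinite) kernel. The data shapes, kernel choices, and names below are illustrative assumptions, not the ADNI analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
G = rng.integers(0, 3, size=(n, 20)).astype(float)  # hypothetical SNP dosages (0/1/2)
E = rng.normal(size=(n, 5))                         # hypothetical risk-factor measures

def linear_kernel(X):
    """Gram matrix of column-centered data: a simple linear similarity kernel."""
    Xc = X - X.mean(axis=0)
    return Xc @ Xc.T

K_g = linear_kernel(G)    # genetic similarity between subjects
K_e = linear_kernel(E)    # environmental similarity between subjects
K_gxe = K_g * K_e         # Hadamard product: the gene-by-environment interaction kernel

# PSD up to round-off, so K_gxe can enter a variance-component score test.
eigvals = np.linalg.eigvalsh(K_gxe)
print(eigvals.min() > -1e-8)  # → True
```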

  14. Numerical study of a multigrid method with four smoothing methods for the incompressible Navier-Stokes equations in general coordinates

    NASA Technical Reports Server (NTRS)

    Zeng, S.; Wesseling, P.

    1993-01-01

    The performance of a linear multigrid method using four smoothing methods, called SCGS (Symmetrical Coupled Gauss-Seidel), CLGS (Collective Line Gauss-Seidel), SILU (Scalar ILU), and CILU (Collective ILU), is investigated for the incompressible Navier-Stokes equations in general coordinates, in association with Galerkin coarse grid approximation. Robustness and efficiency are measured and compared by application to test problems. The numerical results show that CILU is the most robust, SILU the least, with CLGS and SCGS in between. CLGS is the best in efficiency, SCGS and CILU follow, and SILU is the worst.
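
    For reference, the scalar Gauss-Seidel sweep that these smoothers couple and collectivize damps error over repeated application. This is a generic textbook sketch on a 1-D Poisson matrix, not the paper's curvilinear-coordinate implementation:

```python
import numpy as np

def gauss_seidel_sweep(A, b, x):
    """One forward Gauss-Seidel sweep, updating x in place."""
    n = len(b)
    for i in range(n):
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# 1-D Poisson matrix: tridiagonal with 2 on the diagonal, -1 off-diagonal.
n = 32
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = np.zeros(n)

r0 = np.linalg.norm(b - A @ x)     # initial residual norm
for _ in range(10):
    gauss_seidel_sweep(A, b, x)
r10 = np.linalg.norm(b - A @ x)    # smaller: the sweep is a convergent smoother
```

    In a multigrid cycle only a few such sweeps are applied per level, since the smoother's job is to damp the high-frequency error components that the coarse-grid correction cannot represent.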

  15. A Fuzzy-Based Control Method for Smoothing Power Fluctuations in Substations along High-Speed Railways

    NASA Astrophysics Data System (ADS)

    Sugio, Tetsuya; Yamamoto, Masayoshi; Funabiki, Shigeyuki

    The use of an SMES (Superconducting Magnetic Energy Storage) for smoothing power fluctuations in a railway substation has been discussed. This paper proposes a smoothing control method based on fuzzy reasoning for reducing the SMES capacity at substations along high-speed railways. The proposed smoothing control method comprises three countermeasures for reduction of the SMES capacity. The first countermeasure involves modification of rule 1 for smoothing out the fluctuating electric power to its average value. The other countermeasures involve the modification of the central value of the stored energy control in the SMES and revision of the membership function in rule 2 for reduction of the SMES capacity. The SMES capacity in the proposed smoothing control method is reduced by 49.5% when compared to that in the nonrevised control method. It is confirmed by computer simulations that the proposed control method is suitable for smoothing out power fluctuations in substations along high-speed railways and for reducing the SMES capacity.

  16. An incompressible smoothed particle hydrodynamics method for the motion of rigid bodies in fluids

    NASA Astrophysics Data System (ADS)

    Tofighi, N.; Ozbulut, M.; Rahmat, A.; Feng, J. J.; Yildiz, M.

    2015-09-01

    A two-dimensional incompressible smoothed particle hydrodynamics scheme is presented for simulation of rigid bodies moving through Newtonian fluids. The scheme relies on combined usage of the rigidity constraints and the viscous penalty method to simulate rigid body motion. Different viscosity ratios and interpolation schemes are tested by simulating a rigid disc descending in quiescent medium. A viscosity ratio of 100 coupled with weighted harmonic averaging scheme has been found to provide satisfactory results. The performance of the resulting scheme is systematically tested for cases with linear motion, rotational motion and their combination. The test cases include sedimentation of a single and a pair of circular discs, sedimentation of an elliptic disc and migration and rotation of a circular disc in linear shear flow. Comparison with previous results at various Reynolds numbers indicates that the proposed method captures the motion of rigid bodies driven by flow or external body forces accurately.

  17. Simulation of explosively driven metallic tubes by the cylindrical smoothed particle hydrodynamics method

    NASA Astrophysics Data System (ADS)

    Yang, G.; Han, X.; Hu, D. A.

    2015-11-01

    Modified cylindrical smoothed particle hydrodynamics (MCSPH) approximation equations are derived for hydrodynamics with material strength in axisymmetric cylindrical coordinates. The momentum equation and internal energy equation are represented to be in the axisymmetric form. The MCSPH approximation equations are applied to simulate the process of explosively driven metallic tubes, which includes strong shock waves, large deformations and large inhomogeneities, etc. The meshless and Lagrangian character of the MCSPH method offers the advantages in treating the difficulties embodied in these physical phenomena. Two test cases, the cylinder test and the metallic tube driven by two head-on colliding detonation waves, are presented. Numerical simulation results show that the new form of the MCSPH method can predict the detonation process of high explosives and the expansion process of metallic tubes accurately and robustly.

  18. Using Taguchi method to optimize differential evolution algorithm parameters to minimize workload smoothness index in SALBP

    NASA Astrophysics Data System (ADS)

    Mozdgir, A.; Mahdavi, Iraj; Seyyedi, I.; Shiraqei, M. E.

    2011-06-01

    An assembly line is a flow-oriented production system in which the productive units performing the operations, referred to as stations, are aligned in a serial manner. The assembly line balancing problem arises and has to be solved when an assembly line is configured or redesigned. The so-called simple assembly line balancing problem (SALBP), a basic version of the general problem, has attracted the attention of researchers and practitioners of operations research for almost half a century. Four types of objective functions are considered for this kind of problem, and the versions of SALBP may be complemented by a secondary objective that consists of smoothing station loads. Because of the computational complexity of the problem and the difficulty of identifying an optimal solution, many heuristics have been proposed for it. In this paper a differential evolution algorithm is developed to minimize the workload smoothness index in SALBP-2, and the algorithm parameters are optimized using the Taguchi method.
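
    The core differential evolution loop whose control parameters (population size, scale factor F, crossover rate CR) a Taguchi design would tune can be sketched as follows; the objective here is a toy stand-in, not the SALBP-2 workload smoothness index:

    ```python
    import random

    def differential_evolution(obj, bounds, pop_size=20, F=0.8, CR=0.9,
                               generations=200, seed=0):
        """Minimal DE/rand/1/bin sketch. F and CR are the control
        parameters a Taguchi design would optimize."""
        rng = random.Random(seed)
        dim = len(bounds)
        pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
        fit = [obj(ind) for ind in pop]
        for _ in range(generations):
            for i in range(pop_size):
                # Mutation: three distinct donors, none equal to i.
                a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
                j_rand = rng.randrange(dim)
                trial = []
                for j in range(dim):
                    if rng.random() < CR or j == j_rand:
                        v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                        lo, hi = bounds[j]
                        v = min(max(v, lo), hi)   # clamp to bounds
                    else:
                        v = pop[i][j]
                    trial.append(v)
                # Greedy selection.
                f_trial = obj(trial)
                if f_trial <= fit[i]:
                    pop[i], fit[i] = trial, f_trial
        best = min(range(pop_size), key=lambda i: fit[i])
        return pop[best], fit[best]

    # Toy objective standing in for the smoothness index.
    sphere = lambda x: sum(v * v for v in x)
    best_x, best_f = differential_evolution(sphere, [(-5, 5)] * 3)
    ```
    
    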

  19. Total phenolics, antioxidant activity, and functional properties of 'Tommy Atkins' mango peel and kernel as affected by drying methods.

    PubMed

    Sogi, Dalbir Singh; Siddiq, Muhammad; Greiby, Ibrahim; Dolan, Kirk D

    2013-12-01

    Mango processing produces a significant amount of waste (peels and kernels) that can be utilized to produce value-added ingredients for various food applications. Mango peel and kernel were dried using different techniques, such as freeze, hot-air, vacuum and infrared drying. Freeze-dried mango waste had higher antioxidant properties than waste dried by the other techniques. The ORAC values of peel and kernel varied from 418-776 and 1547-1819 μmol TE/g db, respectively. The solubility of freeze-dried peel and kernel powder was the highest. The water and oil absorption indices of mango waste powders ranged between 1.83-6.05 and 1.66-3.10, respectively. Freeze-dried powders had the lowest bulk density values among the techniques tried. The cabinet-dried waste powders can potentially be used in food products to enhance their nutritional and antioxidant properties. PMID:23871007

  20. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
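
    The diagnostic being critiqued is the conventional smoothing-error covariance S_s = (A - I) S_a (A - I)^T built from the averaging-kernel matrix A and the a-priori covariance S_a. A minimal numerical sketch, with an assumed exponential-correlation S_a and an illustrative A (both invented here for demonstration):

    ```python
    import numpy as np

    n = 20  # retrieval grid levels
    z = np.arange(n)

    # Assumed a-priori covariance: exponential correlation between levels.
    S_a = np.exp(-np.abs(z[:, None] - z[None, :]) / 3.0)

    # Assumed averaging-kernel matrix of a constrained retrieval
    # (rows sum to 0.9 < 1, mimicking loss of vertical resolution).
    A = np.exp(-np.abs(z[:, None] - z[None, :]) / 1.5)
    A *= 0.9 / A.sum(axis=1, keepdims=True)

    # Conventional smoothing-error covariance; the paper argues this only
    # characterizes deviation from the state *sampled on the retrieval
    # grid*, not from the true continuous atmospheric state.
    I = np.eye(n)
    S_s = (A - I) @ S_a @ (A - I).T
    ```
    
    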

  1. Inversion Theorem Based Kernel Density Estimation for the Ordinary Least Squares Estimator of a Regression Coefficient

    PubMed Central

    Wang, Dongliang; Hutson, Alan D.

    2016-01-01

    The traditional confidence interval associated with the ordinary least squares estimator of a linear regression coefficient is sensitive to non-normality of the underlying distribution. In this article, we develop a novel kernel density estimator for the ordinary least squares estimator by utilizing well-defined inversion-based kernel smoothing techniques to estimate the conditional probability density of the dependent random variable. Simulation results show that, given a small sample size, our method significantly increases power compared with Wald-type CIs. The proposed approach is illustrated via an application to a classic small data set originally from Graybill (1961). PMID:26924882
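
    The paper's inversion-theorem construction is specific to its setting; purely as a generic illustration of kernel-smoothing the sampling distribution of a regression coefficient, one can bootstrap the OLS slope and apply a plain Gaussian KDE (Silverman's rule-of-thumb bandwidth assumed, not the paper's estimator):

    ```python
    import numpy as np

    def gaussian_kde(samples, x_grid, bandwidth=None):
        """Plain Gaussian kernel density estimate with Silverman bandwidth."""
        samples = np.asarray(samples, float)
        if bandwidth is None:
            bandwidth = 1.06 * samples.std(ddof=1) * len(samples) ** -0.2
        u = (x_grid[:, None] - samples[None, :]) / bandwidth
        return (np.exp(-0.5 * u**2).sum(axis=1)
                / (len(samples) * bandwidth * np.sqrt(2 * np.pi)))

    rng = np.random.default_rng(1)
    n = 30
    x = rng.uniform(0, 10, n)
    y = 2.0 + 0.5 * x + rng.standard_t(df=3, size=n)  # non-normal errors

    # Bootstrap the sampling distribution of the OLS slope.
    slopes = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        slope, intercept = np.polyfit(x[idx], y[idx], 1)
        slopes.append(slope)

    grid = np.linspace(min(slopes), max(slopes), 200)
    density = gaussian_kde(np.array(slopes), grid)
    ```
    
    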

  2. A multiscale restriction-smoothed basis method for high contrast porous media represented on unstructured grids

    NASA Astrophysics Data System (ADS)

    Møyner, Olav; Lie, Knut-Andreas

    2016-01-01

    A wide variety of multiscale methods have been proposed in the literature to reduce runtime and provide better scaling for the solution of Poisson-type equations modeling flow in porous media. We present a new multiscale restriction-smoothed basis (MsRSB) method that is designed to be applicable to both rectilinear grids and unstructured grids. Like many other multiscale methods, MsRSB relies on a coarse partition of the underlying fine grid and a set of local prolongation operators (multiscale basis functions) that map unknowns associated with the fine grid cells to unknowns associated with blocks in the coarse partition. These mappings are constructed by restricted smoothing: Starting from a constant, a localized iterative scheme is applied directly to the fine-scale discretization to compute prolongation operators that are consistent with the local properties of the differential operators. The resulting method has three main advantages: First of all, both the coarse and the fine grid can have general polyhedral geometry and unstructured topology. This means that partitions and good prolongation operators can easily be constructed for complex models involving high media contrasts and unstructured cell connections introduced by faults, pinch-outs, erosion, local grid refinement, etc. In particular, the coarse partition can be adapted to geological or flow-field properties represented on cells or faces to improve accuracy. Secondly, the method is accurate and robust when compared to existing multiscale methods and does not need expensive recomputation of local basis functions to account for transient behavior: Dynamic mobility changes are incorporated by continuing to iterate a few extra steps on existing basis functions. This way, the cost of updating the prolongation operators becomes proportional to the amount of change in fluid mobility and one reduces the need for expensive, tolerance-based updates.
Finally, since the MsRSB method is formulated on top of a cell
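
    A much-simplified 1D sketch of the smoothing idea: start from coarse-block indicator functions and apply damped Jacobi iterations to a fine-scale heterogeneous operator, rescaling so the prolongation columns stay a partition of unity. The actual MsRSB method additionally confines each basis function to a local support region, which is omitted here; grid sizes and coefficients are illustrative.

    ```python
    import numpy as np

    n, nb = 40, 4              # fine cells, coarse blocks
    block = n // nb

    # 1D fine-scale Poisson operator with high-contrast coefficients.
    rng = np.random.default_rng(2)
    k = 10.0 ** rng.uniform(-2, 2, n + 1)   # "permeability" on cell faces
    A = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            A[i, i - 1] = -k[i];     A[i, i] += k[i]
        if i < n - 1:
            A[i, i + 1] = -k[i + 1]; A[i, i] += k[i + 1]

    # Initial prolongation: indicator of each coarse block.
    P = np.zeros((n, nb))
    for b in range(nb):
        P[b * block:(b + 1) * block, b] = 1.0

    # Smoothing: damped Jacobi applied to the columns, then rescale rows
    # so the basis functions remain a nonnegative partition of unity.
    D_inv = 1.0 / np.diag(A)
    omega = 2.0 / 3.0
    for _ in range(100):
        P = P - omega * (D_inv[:, None] * (A @ P))
        P = np.maximum(P, 0.0)
        P /= P.sum(axis=1, keepdims=True)
    ```
    
    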

  3. Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks.

    PubMed

    Wu, Wei; Fan, Qinwei; Zurada, Jacek M; Wang, Jian; Yang, Dakun; Liu, Yan

    2014-02-01

    The aim of this paper is to develop a novel method to prune feedforward neural networks by introducing an L1/2 regularization term into the error function. This procedure forces weights to become smaller during training so that they can eventually be removed after training. The usual L1/2 regularization term involves absolute values and is not differentiable at the origin, which typically causes oscillation of the gradient of the error function during training. A key point of this paper is to modify the usual L1/2 regularization term by smoothing it at the origin. This approach offers three advantages: First, it removes the oscillation of the gradient value. Secondly, it gives better pruning, namely the final weights to be removed are smaller than those produced through the usual L1/2 regularization. Thirdly, it makes it possible to prove the convergence of the training. Supporting numerical examples are also provided. PMID:24291693
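
    The paper's exact smoothing polynomial is not reproduced here; the following is one simple smooth surrogate with the same intent (differentiable at the origin, asymptotically equal to |w|^(1/2) away from it), so the gradient no longer oscillates near zero weights:

    ```python
    import numpy as np

    def l_half_smoothed(w, eps=1e-2):
        """Smooth surrogate for |w|**0.5: approaches it for |w| >> eps
        but is differentiable at w = 0."""
        return (w * w + eps * eps) ** 0.25

    def l_half_smoothed_grad(w, eps=1e-2):
        """Gradient of the surrogate; bounded and zero at the origin,
        unlike the gradient of the raw L1/2 term."""
        return 0.5 * w * (w * w + eps * eps) ** -0.75

    # Penalty over a weight vector, as it would enter the error function.
    w = np.linspace(-1, 1, 5)
    penalty = l_half_smoothed(w).sum()
    ```
    
    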

  4. Coupling of Smoothed Particle Hydrodynamics with Finite Volume method for free-surface flows

    NASA Astrophysics Data System (ADS)

    Marrone, S.; Di Mascio, A.; Le Touzé, D.

    2016-04-01

    A new algorithm for the solution of free surface flows with large front deformation and fragmentation is presented. The algorithm is obtained by coupling a classical Finite Volume (FV) approach, that discretizes the Navier-Stokes equations on a block structured Eulerian grid, with an approach based on the Smoothed Particle Hydrodynamics (SPH) method, implemented in a Lagrangian framework. The coupling procedure is formulated in such a way that each solver is applied in the region where its intrinsic characteristics can be exploited in the most efficient and accurate way: the FV solver is used to resolve the bulk flow and the wall regions, whereas the SPH solver is implemented in the free surface region to capture details of the front evolution. The reported results clearly prove that the combined use of the two solvers is convenient from the point of view of both accuracy and computing time.

  5. A friction regulation hybrid driving method for backward motion restraint of the smooth impact drive mechanism

    NASA Astrophysics Data System (ADS)

    Wang, Liang; Chen, Dong; Cheng, Tinghai; He, Pu; Lu, Xiaohui; Zhao, Hongwei

    2016-08-01

    The smooth impact drive mechanism (SIDM) is a type of piezoelectric actuator that has been developed for several decades. As a kind of driving method for the SIDM, the traditional sawtooth (TS) wave is always employed. The kinetic friction force during the rapid contraction stage usually results in the generation of a backward motion. A friction regulation hybrid (FRH) driving method realized by a composite waveform for the backward motion restraint of the SIDM is proposed in this paper. The composite waveform is composed of a sawtooth driving (SD) wave and a sinusoidal friction regulation (SFR) wave which is applied to the rapid deformation stage of the SD wave. A prototype of the SIDM was fabricated and its output performance under the excitation of the FRH driving method and the TS wave driving method was tested. The results indicate that the backward motion can be restrained obviously using the FRH driving method. Compared with the driving effect of the TS wave, the backward rates of the prototype in forward and reverse motions are decreased by 83% and 85%, respectively.
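
    A hypothetical composite waveform in the spirit of the FRH method: a sawtooth driving wave with a sinusoidal friction-regulation component superimposed only on the rapid (return) stage. The amplitudes, duty ratio and regulation frequency below are illustrative, not the paper's values:

    ```python
    import numpy as np

    def frh_waveform(t, period=1.0, duty=0.9, amp=1.0,
                     reg_amp=0.2, reg_freq=40.0):
        """Sketch of a composite FRH signal: slow linear extension for
        duty*period, rapid return afterwards, with a high-frequency
        sinusoid added only during the rapid stage."""
        tau = np.mod(t, period)
        slow = tau < duty * period
        v = np.where(slow,
                     amp * tau / (duty * period),                   # slow stage
                     amp * (period - tau) / ((1 - duty) * period))  # rapid stage
        # Friction-regulation sinusoid only during the rapid stage.
        v = v + np.where(slow, 0.0,
                         reg_amp * np.sin(2 * np.pi * reg_freq * tau))
        return v

    t = np.linspace(0, 2, 2001)
    signal = frh_waveform(t)
    ```
    
    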

  6. Workshop on advances in smooth particle hydrodynamics

    SciTech Connect

    Wingate, C.A.; Miller, W.A.

    1993-12-31

    These proceedings contain viewgraphs presented at the 1993 workshop held at Los Alamos National Laboratory. Discussed topics include: negative stress, reactive flow calculations, interface problems, boundaries and interfaces, energy conservation in viscous flows, linked penetration calculations, stability and consistency of the SPH method, instabilities, wall heating and conservative smoothing, tensors, tidal disruption of stars, breaking the 10,000,000 particle limit, modelling relativistic collapse, SPH without H, relativistic KSPH avoidance of velocity-based kernels, tidal compression and disruption of stars near a supermassive rotating black hole, and finally relativistic SPH viscosity and energy.

  7. GPUs, a New Tool of Acceleration in CFD: Efficiency and Reliability on Smoothed Particle Hydrodynamics Methods

    PubMed Central

    Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.

    2011-01-01

    Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185

  8. Simulation of surface tension in 2D and 3D with smoothed particle hydrodynamics method

    NASA Astrophysics Data System (ADS)

    Zhang, Mingyu

    2010-09-01

    The methods for simulating surface tension with the smoothed particle hydrodynamics (SPH) method in two and three dimensions are developed. In the 2D surface tension model, the SPH particle on the boundary in 2D is detected dynamically according to the algorithm developed by Dilts [G.A. Dilts, Moving least-squares particle hydrodynamics II: conservation and boundaries, International Journal for Numerical Methods in Engineering 48 (2000) 1503-1524]. The boundary curve in 2D is reconstructed locally with a Lagrangian interpolation polynomial. In the 3D surface tension model, the SPH particle on the boundary in 3D is detected dynamically according to the algorithm developed by Haque and Dilts [A. Haque, G.A. Dilts, Three-dimensional boundary detection for particle methods, Journal of Computational Physics 226 (2007) 1710-1730]. The boundary surface in 3D is reconstructed locally with the moving least squares (MLS) method. By transforming the coordinate system, it is guaranteed that the interface function is single-valued in the local coordinate system. The normal vector and curvature of the boundary surface are calculated from the reconstructed boundary surface, and the surface tension force can then be computed. The surface tension force acts only on the boundary particles. Density correction is applied to the boundary particles in order to remove the boundary inconsistency. The surface tension models in 2D and 3D have been applied to benchmark tests for surface tension, demonstrating the applicability of the current method to the simulation of surface tension in 2D and 3D.

  9. Enrollment Forecasting with Double Exponential Smoothing: Two Methods for Objective Weight Factor Selection. AIR Forum 1980 Paper.

    ERIC Educational Resources Information Center

    Gardner, Don E.

    The merits of double exponential smoothing are discussed relative to other types of pattern-based enrollment forecasting methods. The difficulties associated with selecting an appropriate weight factor are discussed, and their potential effects on prediction results are illustrated. Two methods for objectively selecting the "best" weight factor…
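
    Brown's double exponential smoothing, together with one simple objective weight-factor selection (a grid search minimizing one-step-ahead squared error, standing in for the paper's two methods, which the truncated abstract does not specify), can be sketched as follows; the enrollment series is invented:

    ```python
    def brown_double_smoothing(series, alpha):
        """Brown's double exponential smoothing; returns an m-step-ahead
        forecast function. alpha is the weight factor to be selected."""
        s1 = s2 = series[0]
        for x in series[1:]:
            s1 = alpha * x + (1 - alpha) * s1
            s2 = alpha * s1 + (1 - alpha) * s2
        level = 2 * s1 - s2
        trend = alpha / (1 - alpha) * (s1 - s2)
        return lambda m: level + trend * m

    def select_alpha(series, grid=(0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9)):
        """Objective weight-factor choice: minimize one-step-ahead SSE."""
        def sse(alpha):
            s1 = s2 = series[0]
            err = 0.0
            for x in series[1:]:
                level, trend = 2 * s1 - s2, alpha / (1 - alpha) * (s1 - s2)
                err += (x - (level + trend)) ** 2   # forecast before updating
                s1 = alpha * x + (1 - alpha) * s1
                s2 = alpha * s1 + (1 - alpha) * s2
            return err
        return min(grid, key=sse)

    enrollments = [1000, 1050, 1110, 1170, 1240, 1300, 1370]  # invented data
    alpha = select_alpha(enrollments)
    forecast = brown_double_smoothing(enrollments, alpha)(1)
    ```
    
    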

  10. Ab initio-driven nuclear energy density functional method. A proposal for safe/correlated/improvable parametrizations of the off-diagonal EDF kernels

    NASA Astrophysics Data System (ADS)

    Duguet, T.; Bender, M.; Ebran, J.-P.; Lesinski, T.; Somà, V.

    2015-12-01

    This programmatic paper lays down the possibility to reconcile the necessity to resum many-body correlations into the energy kernel with the fact that safe multi-reference energy density functional (EDF) calculations cannot be achieved whenever the Pauli principle is not enforced, as is for example the case when many-body correlations are parametrized under the form of empirical density dependencies. Our proposal is to exploit a newly developed ab initio many-body formalism to guide the construction of safe, explicitly correlated and systematically improvable parametrizations of the off-diagonal energy and norm kernels that lie at the heart of the nuclear EDF method. The many-body formalism of interest relies on the concepts of symmetry breaking and restoration that have made the fortune of the nuclear EDF method and is, as such, amenable to this guidance. After elaborating on our proposal, we briefly outline the project we plan to execute in the years to come.

  11. Shock-produced ejecta from tin: Comparative study by molecular dynamics and smoothed particle hydrodynamics methods

    NASA Astrophysics Data System (ADS)

    Dyachkov, S. A.; Parshikov, A. N.; Zhakhovsky, V. V.

    2015-11-01

    Experimental methods for observing the early stage of shock-induced ejecta from a metal surface with micrometer-sized perturbations are still limited in their ability to follow the complete sequence of processes, which have microscale dimensions and nanoscale durations. Therefore, simulations by the smoothed particle hydrodynamics (SPH) and molecular dynamics (MD) methods can shed light on the details of micro-jet evolution. The size of the simulated sample is severely restricted in MD, but simulations with a large enough number of atoms can be scaled well to the sizes of realistic samples. To validate such scaling, comparative MD and SPH simulations of tin samples are performed. The SPH simulation uses the realistic experimental sizes, while MD uses proportionally scaled sample sizes. It is shown that the velocity and mass distributions along the jets simulated by MD and SPH are in good agreement. The observed difference in spike velocity between MD and experiments can be partially explained by a pronounced effect of surface tension on jets ejected from the small-scale samples.

  12. Biological Rhythms Modelisation of Vigilance and Sleep in Microgravity State with COSINOR and Volterra's Kernels Methods

    NASA Astrophysics Data System (ADS)

    Gaudeua de Gerlicz, C.; Golding, J. G.; Bobola, Ph.; Moutarde, C.; Naji, S.

    2008-06-01

    Spaceflight under microgravity causes biological and physiological imbalances in human beings. Many studies have already been published on this topic, especially on sleep disturbances and on circadian rhythms (vigilance-sleep alternation, body temperature, ...). Factors like space motion sickness, noise, or excitement can cause severe sleep disturbances. For stays of longer than four months in space, gradual increases in the planned duration of sleep were reported. [1] The average sleep in orbit was more than 1.5 hours shorter than during control periods on Earth, where sleep averaged 7.9 hours. [2] Registered alertness and calmness yielded a clear circadian pattern of 24 h, but with a phase delay of 4 h. Calmness showed a biphasic (12 h) component; mean sleep duration was 6.4 h, structured by 3-5 non-REM/REM cycles. Models of the neurophysiologic mechanisms of stress and of the interactions between various physiological and psychological rhythm variables have already been produced with the COSINOR method. [3]
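
    The COSINOR method fits a cosine of known period by ordinary least squares on its linearized form y = M + beta*cos(wt) + gamma*sin(wt). A sketch that recovers mesor, amplitude and acrophase from a synthetic 24 h rhythm (the data are invented):

    ```python
    import numpy as np

    def cosinor_fit(t, y, period=24.0):
        """Single-component cosinor: least-squares fit of
        y = M + A*cos(2*pi*t/period + phi) via the equivalent
        linear model y = M + beta*cos(wt) + gamma*sin(wt)."""
        w = 2 * np.pi / period
        X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
        M, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
        amplitude = np.hypot(beta, gamma)
        acrophase = np.arctan2(-gamma, beta)   # phase angle of the peak
        return M, amplitude, acrophase

    # Synthetic vigilance rhythm: 48 h sampled every 1.5 h.
    t = np.arange(0, 48, 1.5)
    y = 5.0 + 2.0 * np.cos(2 * np.pi * t / 24.0 - 1.0)
    M, amp, phi = cosinor_fit(t, y)
    ```
    
    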

  13. Visualizing and Interacting with Kernelized Data.

    PubMed

    Barbosa, A; Paulovich, F V; Paiva, A; Goldenstein, S; Petronetto, F; Nonato, L G

    2016-03-01

    Kernel-based methods have experienced substantial progress in the last years, turning into an essential mechanism for data classification, clustering and pattern recognition. The effectiveness of kernel-based techniques, though, depends largely on the capability of the underlying kernel to properly embed data in the feature space associated to the kernel. However, visualizing how a kernel embeds the data in a feature space is not so straightforward, as the embedding map and the feature space are implicitly defined by the kernel. In this work, we present a novel technique to visualize the action of a kernel, that is, how the kernel embeds data into a high-dimensional feature space. The proposed methodology relies on a solid mathematical formulation to map kernelized data onto a visual space. Our approach is faster and more accurate than most existing methods while still allowing interactive manipulation of the projection layout, a game-changing trait that other kernel-based projection techniques do not have. PMID:26829242

  14. Do we really need a large number of particles to simulate bimolecular reactive transport with random walk methods? A kernel density estimation approach

    NASA Astrophysics Data System (ADS)

    Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-12-01

    Random walk particle tracking methods are a computationally efficient family of methods to solve reactive transport problems. While the number of particles in most realistic applications is in the order of 10^6-10^9, the number of reactive molecules even in diluted systems might be in the order of fractions of the Avogadro number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in loss of accuracy, but also may lead to an improper reproduction of the mixing process, limited by diffusion. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that allows getting the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the location of molecules represented by that given particle, rigorously represented by a kernel density function. The probability of reaction can be obtained by combining the kernels associated to two potentially reactive particles. We demonstrate that the observed deviation in the reaction vs time curves in numerical experiments reported in the literature could be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process. Our results show that KDEs are powerful tools to improve computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in diluted systems should be modeled based on alternative mechanistic models and not on a
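
    The core idea, treating each tracked particle as a kernel whose width tracks the diffusive scale rather than a fixed support, can be sketched as follows; the diffusion coefficient, time, and particle count are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Each tracked particle stands in for many diffusing molecules;
    # its kernel width grows with the diffusive scale sqrt(2*D*t).
    D, t = 1e-3, 50.0
    particles = rng.normal(0.0, np.sqrt(2 * D * t), 200)

    def kde_concentration(x_grid, positions, D, t):
        """Reconstruct a normalized 1D concentration profile with a
        Gaussian kernel whose size is tied to the diffusion process."""
        h = np.sqrt(2 * D * t)
        u = (x_grid[:, None] - positions[None, :]) / h
        return (np.exp(-0.5 * u**2).sum(axis=1)
                / (len(positions) * h * np.sqrt(2 * np.pi)))

    x = np.linspace(-1, 1, 401)
    c = kde_concentration(x, particles, D, t)
    ```
    
    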

  15. Estimation of mass ratio of the total kernels within a sample of in-shell peanuts using RF Impedance Method

    Technology Transfer Automated Retrieval System (TEKTRAN)

    It would be useful to know the total kernel mass within a given mass of peanuts (mass ratio) while the peanuts are bought or being processed. In this work, the possibility of finding this mass ratio while the peanuts were in their shells was investigated. Capacitance, phase angle and dissipation fa...

  16. New Equating Methods and Their Relationships with Levine Observed Score Linear Equating under the Kernel Equating Framework

    ERIC Educational Resources Information Center

    Chen, Haiwen; Holland, Paul

    2010-01-01

    In this paper, we develop a new curvilinear equating for the nonequivalent groups with anchor test (NEAT) design under the assumption of the classical test theory model, that we name curvilinear Levine observed score equating. In fact, by applying both the kernel equating framework and the mean preserving linear transformation of…

  17. Development of a Smooth Trajectory Maneuver Method to Accommodate the Ares I Flight Control Constraints

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Schmitt, Terri L.; Hanson, John M.

    2008-01-01

    Six degree-of-freedom (DOF) launch vehicle trajectories are designed to follow an optimized 3-DOF reference trajectory. A vehicle has a finite amount of control power that it can allocate to performing maneuvers. Therefore, the 3-DOF trajectory must be designed to refrain from using 100% of the allowable control capability to perform maneuvers, saving control power for handling off-nominal conditions, wind gusts and other perturbations. During the Ares I trajectory analysis, two maneuvers were found to be hard for the control system to implement: a roll maneuver prior to the gravity turn and an angle-of-attack maneuver immediately after the J-2X engine start-up. It was decided to develop an approach for creating smooth maneuvers in the optimized reference trajectories that accounts for the thrust available from the engines. A feature of this method is that no additional angular velocity in the direction of the maneuver is added to the vehicle after the maneuver completion. This paper discusses the equations behind these new maneuvers and their implementation into the Ares I trajectory design cycle. Also discussed is a possible extension to adjusting closed-loop guidance.

  18. Self-consistent Kohn-Sham method based on the adiabatic-connection fluctuation-dissipation theorem and the exact-exchange kernel

    NASA Astrophysics Data System (ADS)

    Bleiziffer, Patrick; Krug, Marcel; Görling, Andreas

    2015-06-01

    A self-consistent Kohn-Sham method based on the adiabatic-connection fluctuation-dissipation (ACFD) theorem, employing the frequency-dependent exact exchange kernel fx, is presented. The resulting SC-exact-exchange-only (EXX)-ACFD method leads to even more accurate correlation potentials than those obtained within the direct random phase approximation (dRPA). In contrast to dRPA methods, not only the Coulomb kernel but also the exact exchange kernel fx is taken into account in the EXX-ACFD correlation, which results in a method that, unlike dRPA methods, is free of self-correlations, i.e., a method that treats exactly all one-electron systems, like, e.g., the hydrogen atom. The self-consistent evaluation of EXX-ACFD total energies improves the accuracy compared to EXX-ACFD total energies evaluated non-self-consistently with EXX or dRPA orbitals and eigenvalues. Reaction energies of a set of small molecules, for which highly accurate experimental reference data are available, are calculated and compared to quantum chemistry methods like Møller-Plesset perturbation theory of second order (MP2) or coupled cluster methods [CCSD, coupled cluster singles, doubles, and perturbative triples (CCSD(T))]. Moreover, we compare our methods to other ACFD variants like dRPA combined with perturbative corrections such as the second order screened exchange corrections or a renormalized singles correction. Similarly, the performance of our EXX-ACFD methods is investigated for the non-covalently bonded dimers of the S22 reference set and for potential energy curves of noble gas, water, and benzene dimers. The computational effort of the SC-EXX-ACFD method exhibits the same scaling of N^5 with respect to the system size N as the non-self-consistent evaluation of only the EXX-ACFD correlation energy; however, the prefactor increases significantly. Reaction energies from the SC-EXX-ACFD method deviate quite little from EXX-ACFD energies obtained non-self-consistently with dRPA orbitals

  19. Nodal synthetic kernel (N-SKN) method for solving radiative heat transfer problems in one- and two-dimensional participating medium with isotropic scattering

    NASA Astrophysics Data System (ADS)

    Altaç, Zekeriya; Tekkalmaz, Mesut

    2013-11-01

    In this study, a nodal method based on the synthetic kernel (SKN) approximation is developed for solving the radiative transfer equation (RTE) in one- and two-dimensional cartesian geometries. The RTE for a two-dimensional node is transformed to one-dimensional RTE, based on face-averaged radiation intensity. At the node interfaces, double P1 expansion is employed to the surface angular intensities with the isotropic transverse leakage assumption. The one-dimensional radiative integral transfer equation (RITE) is obtained in terms of the node-face-averaged incoming/outgoing incident energy and partial heat fluxes. The synthetic kernel approximation is employed to the transfer kernels and nodal-face contributions. The resulting SKN equations are solved analytically. One-dimensional interface-coupling nodal SK1 and SK2 equations (incoming/outgoing incident energy and net partial heat flux) are derived for the small nodal-mesh limit. These equations have simple algebraic and recursive forms which impose burden on neither the memory nor the computational time. The method was applied to one- and two-dimensional benchmark problems including hot/cold medium with transparent/emitting walls. The 2D results are free of ray effect and the results, for geometries of a few mean-free-paths or more, are in excellent agreement with the exact solutions.

  20. Domain transfer multiple kernel learning.

    PubMed

    Duan, Lixin; Tsang, Ivor W; Xu, Dong

    2012-03-01

    Cross-domain learning methods have shown promising results by leveraging labeled patterns from the auxiliary domain to learn a robust classifier for the target domain which has only a limited number of labeled samples. To cope with the considerable change between feature distributions of different domains, we propose a new cross-domain kernel learning framework into which many existing kernel methods can be readily incorporated. Our framework, referred to as Domain Transfer Multiple Kernel Learning (DTMKL), simultaneously learns a kernel function and a robust classifier by minimizing both the structural risk functional and the distribution mismatch between the labeled and unlabeled samples from the auxiliary and target domains. Under the DTMKL framework, we also propose two novel methods by using SVM and prelearned classifiers, respectively. Comprehensive experiments on three domain adaptation data sets (i.e., TRECVID, 20 Newsgroups, and email spam data sets) demonstrate that DTMKL-based methods outperform existing cross-domain learning and multiple kernel learning methods. PMID:21646679
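
    The distribution-mismatch term that DTMKL penalizes is the maximum mean discrepancy (MMD) between the auxiliary and target domains under the combined kernel. A toy sketch with fixed uniform kernel weights (the paper optimizes the weights jointly with the classifier, which is omitted here; data and bandwidths are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    Xa = rng.normal(0.0, 1.0, (30, 5))   # auxiliary-domain samples
    Xt = rng.normal(0.5, 1.0, (20, 5))   # target-domain samples (shifted)
    X = np.vstack([Xa, Xt])

    def rbf_kernel(X, gamma):
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    # Base kernels at several bandwidths, combined linearly as in MKL.
    base = [rbf_kernel(X, g) for g in (0.01, 0.1, 1.0)]

    def mmd2(K, n_a, n_t):
        """Biased squared MMD between the two domains under kernel K --
        the distribution-mismatch term the framework minimizes."""
        s = np.concatenate([np.full(n_a, 1.0 / n_a), np.full(n_t, -1.0 / n_t)])
        return s @ K @ s

    # Uniform simplex weights as a stand-in for the learned combination.
    weights = np.array([1 / 3, 1 / 3, 1 / 3])
    K = sum(w * Km for w, Km in zip(weights, base))
    mismatch = mmd2(K, len(Xa), len(Xt))
    ```
    
    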

  1. Methods of Smoothing Double-Entry Expectancy Tables Applied to the Prediction of Success in College. Research Report No. 91.

    ERIC Educational Resources Information Center

    Kolen, Michael J.; And Others

    Six methods for smoothing double-entry expectancy tables (tables that relate two predictor variables to probability of attaining a selected level of success on a criterion) were compared using data for entering students at 85 colleges and universities. ACT composite scores and self-reported high school grade averages were used to construct…

  2. Methods for Smoothing Expectancy Tables Applied to the Prediction of Success in College. Research Report No. 79.

    ERIC Educational Resources Information Center

    Perrin, David W.; Whitney, Douglas R.

    Six methods for smoothing expectancy tables were compared using data for entering students at 86 colleges and universities. Linear regression analyses were applied to ACT scores and high school grades to obtain predicted first term grade point averages (FGPA's) for students entering each institution in 1969-70. Expectancy tables were constructed…

  3. Three-dimensional neuronal brain activity estimation using shrinking smooth weighted-minimum-norm focal underdetermined-system solver methods

    NASA Astrophysics Data System (ADS)

    Zouch, Wassim; Slima, Mohamed Ben; Feki, Imed; Derambure, Philippe; Taleb-Ahmed, Abdelmalik; Hamida, Ahmed Ben

    2010-12-01

    A new nonparametric method, based on the smooth weighted-minimum-norm (WMN) focal underdetermined-system solver (FOCUSS), for electrical cerebral activity localization using electroencephalography measurements is proposed. This method iteratively adjusts the spatial sources by reducing the size of the lead-field and the weighting matrix, which enhances source localization and reduces the computational complexity. The performance of the proposed method, in terms of localization errors, robustness, and computation time, is compared with the WMN-FOCUSS and nonshrinking smooth WMN-FOCUSS methods as well as with standard generalized inverse methods (unweighted minimum norm, WMN, and FOCUSS). Simulation results for single-source localization confirm the effectiveness and robustness of the proposed method with respect to the reconstruction accuracy of a simulated single dipole.
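FOCUSS-type solvers obtain focal solutions of the underdetermined lead-field system by iteratively reweighting a minimum-norm solve. A bare-bones sketch of the basic (unweighted, non-shrinking) FOCUSS iteration on a toy system; the shrinking smooth WMN variant proposed here additionally smooths the weights and prunes the lead-field between iterations:

```python
import numpy as np

def focuss(A, b, iters=30, eps=1e-8):
    # Basic FOCUSS: repeated reweighted minimum-norm solutions of A x = b.
    # Reweighting by the previous amplitudes concentrates energy on a few
    # entries, yielding a sparse (focal) solution.
    x = np.ones(A.shape[1])                 # flat initial estimate
    for _ in range(iters):
        w = np.abs(x) + eps                 # diagonal weights (as a vector)
        # Min-norm solve of (A W) q = b, then map back: x = W q
        q, *_ = np.linalg.lstsq(A * w, b, rcond=None)
        x = w * q
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 20))                # underdetermined "lead-field"
x_true = np.zeros(20)
x_true[3] = 2.0                             # single focal source
b = A @ x_true
x_hat = focuss(A, b)
```

Each iterate satisfies the data exactly; the reweighting only redistributes the solution's energy toward a few focal entries.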

  4. Marker profile for the evaluation of human umbilical artery smooth muscle cell quality obtained by different isolation and culture methods.

    PubMed

    Mazza, G; Roßmanith, E; Lang-Olip, I; Pfeiffer, D

    2016-08-01

    Although umbilical cord arteries are a common source of vascular smooth muscle cells, the lack of a reliable marker profile has hindered the isolation of human umbilical artery smooth muscle cells (HUASMC). For accurate characterization of HUASMC and the cells in their environment, the expression of smooth muscle and mesenchymal markers was analyzed in umbilical cord tissue sections. The resulting marker profile was then used to evaluate the quality of HUASMC isolation and culture methods. HUASMC and perivascular Wharton's jelly stromal cells (pv-WJSC) showed positive staining for α-smooth muscle actin (α-SMA), smooth muscle myosin heavy chain (SM-MHC), desmin, vimentin and CD90. Anti-CD10 stained only pv-WJSC. Consequently, HUASMC could be characterized as α-SMA+, SM-MHC+, CD10- cells, which are additionally negative for endothelial markers (CD31 and CD34). Enzymatic isolation provided primary HUASMC batches with 90-99% purity, yet under standard culture conditions contaminant CD10+ cells rapidly constituted more than 80% of the total cell population. Contamination was mainly due to the poor adhesion of HUASMC to cell culture plates, regardless of the different protein coatings (fibronectin, collagen I or gelatin). HUASMC showed strong attachment and long-term viability only in 3D matrices. The explant isolation method achieved cultures with only 13-40% purity, with considerable contamination by CD10+ cells. CD10+ cells showed spindle-like morphology and up-regulated expression of α-SMA and SM-MHC upon culture in smooth muscle differentiation medium. Given the high risk of contamination of HUASMC cultures by CD10+ neighboring cells and their phenotypic similarities, precise characterization is mandatory to avoid misleading results. PMID:25535117

  5. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate the usefulness of the method.

  6. Induced Pluripotent Stem Cell-derived Vascular Smooth Muscle Cells: Methods and Application

    PubMed Central

    Dash, Biraja C.; Jiang, Zhengxin; Suh, Carol; Qyang, Yibing

    2015-01-01

    Vascular smooth muscle cells (VSMCs) play a major role in the pathophysiology of cardiovascular diseases. The advent of induced pluripotent stem cell (iPSC) technology, and the capability of iPSCs to differentiate into virtually every cell type in the human body, makes this field a promising avenue for vascular regenerative therapy and for understanding disease mechanisms. In this review, we first discuss recent iPSC technology and embryonic vascular smooth muscle development, and then examine different methodologies for deriving VSMCs from iPSCs and their applications in regenerative therapy and disease modeling. PMID:25559088

  7. Methods and energy storage devices utilizing electrolytes having surface-smoothing additives

    SciTech Connect

    Xu, Wu; Zhang, Jiguang; Graff, Gordon L; Chen, Xilin; Ding, Fei

    2015-11-12

    Electrodeposition and energy storage devices utilizing an electrolyte having a surface-smoothing additive can result in self-healing, instead of self-amplification, of initial protuberant tips that give rise to roughness and/or dendrite formation on the substrate and anode surface. For electrodeposition of a first metal (M1) on a substrate or anode from one or more cations of M1 in an electrolyte solution, the electrolyte solution is characterized by a surface-smoothing additive containing cations of a second metal (M2), wherein cations of M2 have an effective electrochemical reduction potential in the solution lower than that of the cations of M1.

  8. Smoothed Particle Hydrodynamics Continuous Boundary Force method for Navier-Stokes equations subject to a Robin boundary condition

    NASA Astrophysics Data System (ADS)

    Pan, Wenxiao; Bao, Jie; Tartakovsky, Alexandre

    2013-11-01

    A Continuous Boundary Force (CBF) method was developed for implementing a Robin (Navier) boundary condition (BC) that can describe no-slip or slip conditions (slip length from zero to infinity) at the fluid-solid interface. In the CBF method the Robin BC is replaced by a homogeneous Neumann BC and an additional volumetric source term in the governing momentum equation. The formulation is derived based on an approximation of the sharp boundary by a diffuse interface of finite thickness, across which the BC is reformulated by means of a smoothed characteristic function. The CBF method is easy to implement in Lagrangian particle-based methods. We first implemented it in smoothed particle hydrodynamics (SPH) to solve numerically the Navier-Stokes equations subject to spatially independent or dependent Robin BCs in two and three dimensions. The numerical accuracy and convergence are examined through comparisons with the corresponding finite difference or finite element solutions. The CBF method is further implemented in smoothed dissipative particle dynamics (SDPD), a mesoscale scheme, for modeling slip flows commonly encountered in micro/nano channels and microfluidic devices. The authors acknowledge the funding support by the ASCR Program of the Office of Science, U.S. Department of Energy.

  9. Examination of tear film smoothness on corneae after refractive surgeries using a noninvasive interferometric method

    NASA Astrophysics Data System (ADS)

    Szczesna, Dorota H.; Kulas, Zbigniew; Kasprzak, Henryk T.; Stenevi, Ulf

    2009-11-01

    A lateral shearing interferometer was used to examine the smoothness of the tear film. The information about the distribution and stability of the precorneal tear film is carried by the wavefront reflected from the surface of tears and coded in interference fringes. Smooth and regular fringes indicate a smooth tear film surface. On corneae after laser in situ keratomileusis (LASIK) or radial keratotomy (RK) surgery, the interference fringes are seldom regular. The fringes are bent on bright lines, which are interpreted as tear film breakups. The high-intensity pattern seems to appear in a similar location on the corneal surface after refractive surgery. Our purpose was to extract information about the pattern existing under the interference fringes and calculate its shape reproducibility over time and following eye blinks. A low-pass filter was applied and the correlation coefficient was calculated to compare a selected fragment of the template image with each of the following frames in the recorded sequence. High values of the correlation coefficient suggest that irregularities of the corneal epithelium might influence tear film instability and that tear film breakup may be associated with local irregularities of the corneal topography created after the LASIK and RK surgeries.
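The frame-to-template comparison described above comes down to a correlation coefficient between image fragments. A minimal sketch of that step on synthetic data (the patch coordinates and noise level are arbitrary assumptions):

```python
import numpy as np

def ncc(template, patch):
    # Pearson correlation coefficient between two equal-size image fragments
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    return float((t * p).sum() / denom)

rng = np.random.default_rng(2)
frame = rng.random((32, 32))                       # stand-in for one frame
tpl = frame[8:24, 8:24]                            # selected template fragment
r_same = ncc(tpl, frame[8:24, 8:24])               # identical fragment
r_noisy = ncc(tpl, frame[8:24, 8:24] + 0.1 * rng.random((16, 16)))
```

In the study this coefficient is tracked across the recorded sequence; persistently high values indicate that the bright pattern reappears in the same place over time and after blinks.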

  10. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is it's conceptual simplicity, and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

  11. Coronary Stent Artifact Reduction with an Edge-Enhancing Reconstruction Kernel – A Prospective Cross-Sectional Study with 256-Slice CT

    PubMed Central

    Tan, Stéphanie; Soulez, Gilles; Diez Martinez, Patricia; Larrivée, Sandra; Stevens, Louis-Mathieu; Goussard, Yves; Mansour, Samer; Chartrand-Lefebvre, Carl

    2016-01-01

    Purpose: Metallic artifacts can result in an artificial thickening of the coronary stent wall which can significantly impair computed tomography (CT) imaging in patients with coronary stents. The objective of this study is to assess in vivo visualization of coronary stent wall and lumen with an edge-enhancing CT reconstruction kernel, as compared to a standard kernel.
    Methods: This is a prospective cross-sectional study involving the assessment of 71 coronary stents (24 patients), with blinded observers. After 256-slice CT angiography, image reconstruction was done with medium-smooth and edge-enhancing kernels. Stent wall thickness was measured with both orthogonal and circumference methods, averaging thickness from diameter and circumference measurements, respectively. Image quality was assessed quantitatively using objective parameters (noise, signal to noise (SNR) and contrast to noise (CNR) ratios), as well as visually using a 5-point Likert scale.
    Results: Stent wall thickness was decreased with the edge-enhancing kernel in comparison to the standard kernel, either with the orthogonal (0.97 ± 0.02 versus 1.09 ± 0.03 mm, respectively; p<0.001) or the circumference method (1.13 ± 0.02 versus 1.21 ± 0.02 mm, respectively; p = 0.001). The edge-enhancing kernel generated less overestimation from nominal thickness compared to the standard kernel, both with the orthogonal (0.89 ± 0.19 versus 1.00 ± 0.26 mm, respectively; p<0.001) and the circumference (1.06 ± 0.26 versus 1.13 ± 0.31 mm, respectively; p = 0.005) methods. The edge-enhancing kernel was associated with lower SNR and CNR, as well as higher background noise (all p < 0.001), in comparison to the medium-smooth kernel. Stent visual scores were higher with the edge-enhancing kernel (p<0.001).
    Conclusion: In vivo 256-slice CT assessment of coronary stents shows that the edge-enhancing CT reconstruction kernel generates thinner stent walls, less overestimation from nominal thickness, and better image quality

  12. Performance Assessment of Kernel Density Clustering for Gene Expression Profile Data

    PubMed Central

    Zeng, Beiyan; Chen, Yiping P.; Smith, Oscar H.

    2003-01-01

    Kernel density smoothing techniques have been used in classification or supervised learning of gene expression profile (GEP) data, but their applications to clustering or unsupervised learning of those data have not been explored and assessed. Here we report a kernel density clustering method for analysing GEP data and compare its performance with the three most widely used clustering methods: hierarchical clustering, K-means clustering, and multivariate mixture model-based clustering. Using several methods to measure agreement, between-cluster isolation, and within-cluster coherence, such as the Adjusted Rand Index, the Pseudo F test, the r2 test, and the profile plot, we have assessed the effectiveness of kernel density clustering for recovering clusters, and its robustness against noise, on clustering both simulated and real GEP data. Our results show that the kernel density clustering method has excellent performance in recovering clusters from simulated data and in grouping large real expression profile data sets into compact and well-isolated clusters, and that it is the most robust clustering method for analysing noisy expression profile data compared to the other three methods assessed. PMID:18629292
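The clustering idea, grouping points that climb to the same mode of a kernel density estimate, can be sketched with a generic mean-shift-style procedure (an illustrative stand-in, not the authors' exact algorithm; the bandwidth and grouping tolerance are assumptions):

```python
import numpy as np

def mean_shift(X, bandwidth=1.0, iters=50):
    # Each point iteratively moves to the Gaussian-kernel weighted mean of
    # the data, i.e. climbs the kernel density estimate toward a mode.
    Y = X.copy()
    for _ in range(iters):
        sq = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-0.5 * sq / bandwidth ** 2)
        Y = (w[:, :, None] * X[None, :, :]).sum(1) / w.sum(1, keepdims=True)
    # Points whose modes agree (to within a tolerance) form one cluster
    labels = -np.ones(len(Y), dtype=int)
    k = 0
    for i in range(len(Y)):
        if labels[i] == -1:
            close = np.linalg.norm(Y - Y[i], axis=1) < bandwidth / 2
            labels[close] = k
            k += 1
    return labels

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.2, (30, 2)),     # two well-separated groups
               rng.normal(5, 0.2, (30, 2))])
labels = mean_shift(X, bandwidth=1.0)
```

Unlike K-means, the number of clusters is not fixed in advance; it emerges from the number of density modes at the chosen bandwidth.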

  13. How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.

    PubMed

    Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J

    2014-09-01

    Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PMID:24885339
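Silverman's rule of thumb, one of the selectors compared above, is simple to state and implement: h = 0.9 · min(sd, IQR/1.34) · n^(-1/5) (the selector `bw.nrd0` in R). A sketch of Gaussian KDE with that bandwidth; the sample and evaluation grid are arbitrary choices:

```python
import numpy as np

def silverman_bandwidth(x):
    # Silverman's rule of thumb: h = 0.9 * min(sd, IQR/1.34) * n^(-1/5)
    n = len(x)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(x.std(ddof=1), iqr / 1.34) * n ** (-0.2)

def kde(x, grid, h):
    # Gaussian kernel density estimate evaluated on a grid
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(1) / (len(x) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
x = rng.normal(size=500)
grid = np.linspace(-4, 4, 201)
h = silverman_bandwidth(x)
dens = kde(x, grid, h)
```

The rule is derived for roughly normal data; as the article's simulations show, plug-in or adaptive selectors are preferable for skewed or multimodal densities.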

  14. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    NASA Astrophysics Data System (ADS)

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle

    2016-08-01

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N^2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded widespread adoption of integral methods for the solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  15. Adaptive wiener image restoration kernel

    DOEpatents

    Yuan, Ding

    2007-06-05

    A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins by constructing the imaging system's Optical Transfer Function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image with a Wiener restoration kernel.
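The restoration step described in this patent is a Wiener filter applied in the frequency domain. A noise-free toy sketch with a constant noise-to-signal ratio; the adaptive part of the patented kernel (varying that ratio locally) is not reproduced, and the Gaussian OTF below is an assumption for illustration:

```python
import numpy as np

def wiener_restore(image, otf, nsr=0.01):
    # Frequency-domain Wiener restoration: divide out the system OTF while
    # damping frequencies where noise dominates,
    #   F = conj(H) / (|H|^2 + NSR) * G.
    G = np.fft.fft2(image)
    H = otf
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))

# Blur a test image with a known Gaussian OTF, then restore it
n = 64
x = np.zeros((n, n))
x[24:40, 24:40] = 1.0                      # bright square test object
fx = np.fft.fftfreq(n)
H = np.exp(-200.0 * (fx[:, None] ** 2 + fx[None, :] ** 2))
blurred = np.real(np.fft.ifft2(np.fft.fft2(x) * H))
restored = wiener_restore(blurred, H, nsr=1e-4)
```

With noisy data the NSR term keeps the division well behaved where the OTF is small, which is exactly the trade-off the adaptive variant tunes locally.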

  16. Smoothed finite element method implemented in a resultant eight-node solid-shell element for geometrical linear analysis

    NASA Astrophysics Data System (ADS)

    Élie-Dit-Cosaque, Xavier J.-G.; Gakwaya, Augustin; Naceur, Hakim

    2015-01-01

    A smoothed finite element method formulation for the resultant eight-node solid-shell element is presented in this paper for geometrical linear analysis. The smoothing process is successfully performed on the element mid-surface to deal with the membrane and bending effects of the stiffness matrix. The strain smoothing process allows replacing the Cartesian derivatives of shape functions by the product of shape functions with normal vectors to the element mid-surface boundaries. The present formulation remains competitive when compared to the classical finite element formulations since no inverse of the Jacobian matrix is calculated. The three dimensional resultant shell theory allows the element kinematics to be defined only with the displacement degrees of freedom. The assumed natural strain method is used not only to eliminate the transverse shear locking problem encountered in thin-walled structures, but also to reduce trapezoidal effects. The efficiency of the present element is presented and compared with that of standard solid-shell elements through various benchmark problems including some with highly distorted meshes.

  17. Simulation of wave mitigation by coastal vegetation using smoothed particle hydrodynamics method

    NASA Astrophysics Data System (ADS)

    Iryanto; Gunawan, P. H.

    2016-02-01

    The role of coastal vegetation in mitigating waves has recently been studied by several researchers. Vegetation in coastal areas reduces the negative impact of wave propagation. In order to describe the effect of vegetation resistance on the water flow, a modified smoothed particle hydrodynamics model has been constructed. In the Lagrangian framework, Darcy, Manning, and laminar viscosity resistance terms are added. The effect of each resistance is illustrated by numerical simulations. A simulation of wave mitigation on a sloping beach is also given.
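SPH models like the one above interpolate fields with a compactly supported smoothing kernel. A sketch of the standard 2D cubic spline (Monaghan) kernel with a numerical check of its unit normalization; the support radius 2h and the 10/(7πh²) factor are the conventional choices, not specifics from this paper:

```python
import numpy as np

def cubic_spline_W(r, h):
    # Monaghan cubic spline smoothing kernel in 2D (support radius 2h)
    q = np.asarray(r, dtype=float) / h
    sigma = 10.0 / (7.0 * np.pi * h ** 2)   # 2D normalization constant
    W = np.where(q < 1.0,
                 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
                 np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * W

# Check normalization: the integral of W over the plane should be ~1
h = 0.3
r = np.linspace(0.0, 2.0 * h, 20001)
integral = np.sum(cubic_spline_W(r, h) * 2.0 * np.pi * r) * (r[1] - r[0])
```

Every SPH field estimate is a kernel-weighted sum over neighbors within 2h, so the resistance terms above enter the momentum equation through sums of exactly this form.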

  18. Self-consistent Kohn-Sham method based on the adiabatic-connection fluctuation-dissipation theorem and the exact-exchange kernel.

    PubMed

    Bleiziffer, Patrick; Krug, Marcel; Görling, Andreas

    2015-06-28

    A self-consistent Kohn-Sham method based on the adiabatic-connection fluctuation-dissipation (ACFD) theorem, employing the frequency-dependent exact exchange kernel fx is presented. The resulting SC-exact-exchange-only (EXX)-ACFD method leads to even more accurate correlation potentials than those obtained within the direct random phase approximation (dRPA). In contrast to dRPA methods, not only the Coulomb kernel but also the exact exchange kernel fx is taken into account in the EXX-ACFD correlation which results in a method that, unlike dRPA methods, is free of self-correlations, i.e., a method that treats exactly all one-electron systems, like, e.g., the hydrogen atom. The self-consistent evaluation of EXX-ACFD total energies improves the accuracy compared to EXX-ACFD total energies evaluated non-self-consistently with EXX or dRPA orbitals and eigenvalues. Reaction energies of a set of small molecules, for which highly accurate experimental reference data are available, are calculated and compared to quantum chemistry methods like Møller-Plesset perturbation theory of second order (MP2) or coupled cluster methods [CCSD, coupled cluster singles, doubles, and perturbative triples (CCSD(T))]. Moreover, we compare our methods to other ACFD variants like dRPA combined with perturbative corrections such as the second order screened exchange corrections or a renormalized singles correction. Similarly, the performance of our EXX-ACFD methods is investigated for the non-covalently bonded dimers of the S22 reference set and for potential energy curves of noble gas, water, and benzene dimers. The computational effort of the SC-EXX-ACFD method exhibits the same scaling of N^5 with respect to the system size N as the non-self-consistent evaluation of only the EXX-ACFD correlation energy; however, the prefactor increases significantly. Reaction energies from the SC-EXX-ACFD method deviate quite little from EXX-ACFD energies obtained non-self-consistently with dRPA orbitals

  20. A smooth dissipative particle dynamics method for domains with arbitrary-geometry solid boundaries

    NASA Astrophysics Data System (ADS)

    Gatsonis, Nikolaos A.; Potami, Raffaele; Yang, Jun

    2014-01-01

    A smooth dissipative particle dynamics method with dynamic virtual particle allocation (SDPD-DV) for modeling and simulation of mesoscopic fluids in wall-bounded domains is presented. The physical domain in SDPD-DV may contain external and internal solid boundaries of arbitrary geometries, periodic inlets and outlets, and the fluid region. The SDPD-DV method is realized with fluid particles, boundary particles, and dynamically allocated virtual particles. The internal or external solid boundaries of the domain can be of arbitrary geometry and are discretized with a surface grid. These boundaries are represented by boundary particles with assigned properties. The fluid domain is discretized with fluid particles of constant mass and variable volume. Conservative and dissipative force models due to virtual particles exerted on a fluid particle in the proximity of a solid boundary supplement the original SDPD formulation. The dynamic virtual particle allocation approach provides the density and the forces due to virtual particles. The integration of the SDPD equations is accomplished with a velocity-Verlet algorithm for the momentum and a Runge-Kutta for the entropy equation. The velocity integrator is supplemented by a bounce-forward algorithm in cases where the virtual particle force model is not able to prevent particle penetration. For the incompressible isothermal systems considered in this work, the pressure of a fluid particle is obtained by an artificial compressibility formulation for liquids and the ideal gas law for gases. The self-diffusion coefficient is obtained by an implementation of the generalized Einstein and the Green-Kubo relations. Field properties are obtained by sampling SDPD-DV outputs on a post-processing grid that allows harnessing the particle information on desired spatiotemporal scales. The SDPD-DV method is verified and validated with simulations in bounded and periodic domains that cover the hydrodynamic and mesoscopic regimes for

  2. Users manual for Opt-MS: local methods for simplicial mesh smoothing and untangling.

    SciTech Connect

    Freitag, L.

    1999-07-20

    Creating meshes containing good-quality elements is a challenging, yet critical, problem facing computational scientists today. Several researchers have shown that the size of the mesh, the shape of the elements within that mesh, and their relationship to the physical application of interest can profoundly affect the efficiency and accuracy of many numerical approximation techniques. If the application contains anisotropic physics, the mesh can be improved by considering both local characteristics of the approximate application solution and the geometry of the computational domain. If the application is isotropic, regularly shaped elements in the mesh reduce the discretization error, and the mesh can be improved a priori by considering geometric criteria only. The Opt-MS package provides several local node point smoothing techniques that improve elements in the mesh by adjusting grid point locations using geometric criteria. The package is easy to use; only three subroutine calls are required for the user to begin using the software. The package is also flexible; the user may change the technique, function, or dimension of the problem at any time during the mesh smoothing process. Opt-MS is designed to interface with C and C++ codes, and examples for both two- and three-dimensional meshes are provided.
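The simplest local smoothing technique of the kind Opt-MS provides is Laplacian smoothing: each free node is moved toward the centroid of its neighbors while boundary nodes stay fixed. A toy sketch (not Opt-MS's actual API, which is a C/C++ library driven by three subroutine calls):

```python
import numpy as np

def laplacian_smooth(coords, neighbors, iters=10, alpha=0.5):
    # Local Laplacian smoothing: each free node moves a fraction alpha
    # toward the centroid of its neighboring nodes; nodes not listed in
    # `neighbors` (e.g. boundary nodes) stay fixed.
    pts = coords.copy()
    for _ in range(iters):
        for i, nb in neighbors.items():
            centroid = pts[nb].mean(axis=0)
            pts[i] += alpha * (centroid - pts[i])
    return pts

# Tiny example: one misplaced interior node inside a quad of fixed nodes
coords = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0],
                   [0.9, 0.8]])             # node 4 is interior, off-center
neighbors = {4: [0, 1, 2, 3]}               # only node 4 is free to move
smoothed = laplacian_smooth(coords, neighbors)
```

Laplacian smoothing is cheap but purely geometric; Opt-MS's optimization-based techniques instead maximize an element-quality function, which avoids the tangling Laplacian updates can cause on concave patches.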

  3. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint

    PubMed Central

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2015-01-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in a RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance for certain situations, and have comparable performance in other cases compared to that of the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
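Quantile regression, in an RKHS or otherwise, minimizes the pinball (check) loss rather than squared error. A sketch of the loss and of the fact that it is minimized at the target quantile; the Exp(1) sample and grid search are illustrative, not from the paper:

```python
import numpy as np

def pinball_loss(y, f, tau):
    # Check loss for quantile level tau:
    #   tau * (y - f)        if y >= f
    #   (tau - 1) * (y - f)  otherwise
    r = y - f
    return np.where(r >= 0, tau * r, (tau - 1) * r)

# Minimizing the mean pinball loss over a constant recovers the
# empirical tau-quantile of the sample.
rng = np.random.default_rng(5)
y = rng.exponential(size=10000)
grid = np.linspace(0.0, 5.0, 2001)
losses = [pinball_loss(y, c, 0.9).mean() for c in grid]
c_star = grid[int(np.argmin(losses))]
```

In the RKHS setting the constant is replaced by a kernel expansion, and the data sparsity constraint of the paper thresholds that expansion's coefficients.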

  4. A steady and oscillatory kernel function method for interfering surfaces in subsonic, transonic and supersonic flow. [prediction analysis techniques for airfoils

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1976-01-01

    The theory, results, and user instructions for an aerodynamic computer program are presented. The theory is based on linear lifting surface theory, and the method is the kernel function method. The program is applicable to multiple interfering surfaces which may be coplanar or noncoplanar. Local linearization was used to treat nonuniform flow problems without shocks. For cases with embedded shocks, the appropriate boundary conditions were added to account for the flow discontinuities. The data describing nonuniform flow fields must be input from some other source, such as an experiment or a finite difference solution. The results are in the form of small linear perturbations about nonlinear flow fields. The method was applied to a wide variety of problems for which it is demonstrated to be significantly superior to the uniform flow method. Program user instructions are given for easy access.

  5. A New Kernel-Based Fuzzy Level Set Method for Automated Segmentation of Medical Images in the Presence of Intensity Inhomogeneity

    PubMed Central

    Shanbehzadeh, Jamshid

    2014-01-01

    Researchers have recently applied integrative approaches to automate medical image segmentation, combining the strengths of available methods while eliminating their disadvantages. Intensity inhomogeneity is a challenging and open problem in this area that has received comparatively little attention from such approaches, despite its considerable effect on segmentation accuracy. This paper proposes a new kernel-based fuzzy level set algorithm that takes an integrative approach to this problem. It evolves directly from the initial level set obtained by Gaussian Kernel-Based Fuzzy C-Means (GKFCM), and the controlling parameters of the level set evolution are also estimated from the GKFCM results. Moreover, the proposed algorithm is enhanced with locally regularized evolution based on an image model that describes the composition of real-world images, in which intensity inhomogeneity is treated as a component of the image. These improvements make level set manipulation easier and lead to more robust segmentation under intensity inhomogeneity. The proposed algorithm offers automation, invariance to intensity inhomogeneity, and high accuracy. Performance evaluation was carried out on medical images from different modalities, and the results confirm its effectiveness for medical image segmentation. PMID:24624225

  6. An analysis of smoothed particle hydrodynamics

    SciTech Connect

    Swegle, J.W.; Attaway, S.W.; Heinstein, M.W.; Mello, F.J.; Hicks, D.L.

    1994-03-01

    SPH (Smoothed Particle Hydrodynamics) is a gridless Lagrangian technique which is appealing as a possible alternative to numerical techniques currently used to analyze high deformation impulsive loading events. In the present study, the SPH algorithm has been subjected to detailed testing and analysis to determine its applicability in the field of solid dynamics. An important result of the work is a rigorous von Neumann stability analysis which provides a simple criterion for the stability or instability of the method in terms of the stress state and the second derivative of the kernel function. Instability, which typically occurs only for solids in tension, results not from the numerical time integration algorithm, but because the SPH algorithm creates an effective stress with a negative modulus. The analysis provides insight into possible methods for removing the instability. Also, SPH has been coupled into the transient dynamics finite element code PRONTO, and a weighted residual derivation of the SPH equations has been obtained.
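
    The stability criterion described above can be illustrated numerically: growth occurs where the product of the stress and the kernel's second derivative is positive. A minimal sketch with the standard 1D cubic spline kernel (finite-difference second derivative; normalization 2/(3h)):

```python
import numpy as np

def cubic_spline_W(q, h=1.0):
    # standard 1D cubic spline SPH kernel, support |q| < 2
    sigma = 2.0 / (3.0 * h)
    q = np.abs(q)
    w = np.where(q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

# second derivative of the kernel by central differences
x = np.linspace(-2.5, 2.5, 1001)
h_step = x[1] - x[0]
W = cubic_spline_W(x)
W2 = np.gradient(np.gradient(W, h_step), h_step)

# instability where stress * W'' > 0: for the cubic spline, W'' changes
# sign at q = 2/3, so a solid in tension is unstable at larger spacings
tension = 1.0
unstable = (tension * W2) > 0
print("fraction of sampled spacings unstable in tension:", round(float(unstable.mean()), 2))
```

    For the cubic spline, W'' is negative near the origin and positive for spacings beyond q = 2/3, which is why the instability typically appears for solids in tension at ordinary particle spacings.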

  7. Weighted Bergman Kernels and Quantization

    NASA Astrophysics Data System (ADS)

    Engliš, Miroslav

    Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that -log ψ, -log φ are plurisubharmonic, and z ∈ Ω a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion for x, y near z, where φ(x,y) is an almost-analytic extension of φ(x) = φ(x,x) and similarly for ψ. If in addition Ω is of finite type, φ, ψ behave reasonably at the boundary, and -log φ, -log ψ are strictly plurisubharmonic on Ω, we obtain also an analogous asymptotic expansion for the Berezin transform and give applications to the Berezin quantization. Finally, for Ω smoothly bounded and strictly pseudoconvex and φ a smooth strictly plurisubharmonic defining function for Ω, we also obtain results on the Berezin-Toeplitz quantization.

  8. Difference image analysis: automatic kernel design using information criteria

    NASA Astrophysics Data System (ADS)

    Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.

    2016-03-01

    We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.
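
    Solving for a spatially invariant delta-basis-function kernel is a linear least-squares problem, and an information criterion then scores the fit against the number of kernel pixels. The sketch below is a simplified stand-in for the paper's machinery (tiny images, wrap-around shifts, a plain AIC formula), not the authors' implementation.

```python
import numpy as np

def solve_delta_kernel(ref, target, half=1):
    # delta basis functions: each pixel of a (2*half+1)^2 kernel is a free
    # coefficient, so the kernel solution is linear least squares on
    # shifted copies of the reference image
    shifts = [(dy, dx) for dy in range(-half, half + 1)
                       for dx in range(-half, half + 1)]
    A = np.stack([np.roll(ref, s, axis=(0, 1)).ravel() for s in shifts], axis=1)
    b = target.ravel()
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    rss = float(((A @ coef - b) ** 2).sum())
    n, k = b.size, coef.size
    aic = n * np.log(rss / n + 1e-12) + 2 * k   # Akaike information criterion
    return coef.reshape(2 * half + 1, 2 * half + 1), aic

rng = np.random.default_rng(1)
ref = rng.normal(size=(32, 32))
true_kernel = np.array([[0.0, 0.1, 0.0], [0.1, 0.6, 0.1], [0.0, 0.1, 0.0]])
# synthesize a noiseless target as the reference convolved with true_kernel
target = sum(true_kernel[1 + dy, 1 + dx] * np.roll(ref, (dy, dx), axis=(0, 1))
             for dy in (-1, 0, 1) for dx in (-1, 0, 1))
est, aic = solve_delta_kernel(ref, target, half=1)
print(np.round(est, 2))
```

    A model-selection loop would repeat this for candidate kernel sizes and keep the candidate with the lowest information criterion.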

  9. A hybrid smoothed extended finite element/level set method for modeling equilibrium shapes of nano-inhomogeneities

    NASA Astrophysics Data System (ADS)

    Zhao, Xujun; Bordas, Stéphane P. A.; Qu, Jianmin

    2013-12-01

    Interfacial energy plays an important role in the equilibrium morphologies of nanosized microstructures of solid materials due to the high interface-to-volume ratio, and can no longer be neglected as it is in conventional mechanics analysis. When designing nanodevices, and in order to understand the behavior of materials at the nano-scale, this interfacial energy must therefore be taken into account. The present work develops an effective numerical approach by means of a hybrid smoothed extended finite element/level set method to model nanoscale inhomogeneities with interfacial energy effects, in which the finite element mesh can be completely independent of the interface geometry. The Gurtin-Murdoch surface elasticity model is used to account for the interface stress effect, and the Wachspress interpolants are used for the first time to construct the shape functions in the smoothed extended finite element method. Selected numerical results are presented to study the accuracy and efficiency of the proposed method as well as the equilibrium shapes of misfit particles in elastic solids. The presented results compare very well with those obtained from theoretical solutions and experimental observations, and the computational efficiency of the method is shown to be superior to that of its most advanced competitor.

  10. Putting Priors in Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
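
    The core construction, building a kernel from a mixture model's posterior probabilities, is easy to verify as a Mercer kernel: K(x, y) = Σ_k P(k|x) P(k|y) is an inner product of posterior vectors, hence symmetric positive semi-definite. A toy 1D sketch with a fixed (not learned) two-component Gaussian mixture; all parameters are illustrative:

```python
import numpy as np

# fixed two-component 1D Gaussian mixture (illustrative, not fit to data)
means = np.array([-2.0, 2.0])
stds = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])

def responsibilities(x):
    # posterior P(component k | x) under the mixture
    dens = weights * np.exp(-0.5 * ((x[:, None] - means) / stds) ** 2) / stds
    return dens / dens.sum(axis=1, keepdims=True)

def mixture_density_kernel(x, z):
    # K(x, z) = sum_k P(k|x) P(k|z): an inner product of posterior vectors,
    # hence a symmetric positive semi-definite (Mercer) kernel
    return responsibilities(x) @ responsibilities(z).T

x = np.linspace(-4.0, 4.0, 50)
K = mixture_density_kernel(x, x)
eigs = np.linalg.eigvalsh(K)
print("symmetric:", np.allclose(K, K.T), "min eigenvalue:", round(float(eigs.min()), 6))
```

    Two points assigned to the same mixture component with high confidence get kernel value near 1, while points in different components get a value near 0, which is how the mixture's prior structure enters the kernel.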

  11. Learning with box kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-11-01

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, because the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given that dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization. PMID:24051728

  12. Learning with Box Kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-04-12

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, since the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given which dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization. PMID:23589591

  13. Sparse representation with kernels.

    PubMed

    Gao, Shenghua; Tsang, Ivor Wai-Hung; Chia, Liang-Tien

    2013-02-01

    Recent research has shown the initial success of sparse coding (Sc) in solving many computer vision tasks. Motivated by the fact that kernel trick can capture the nonlinear similarity of features, which helps in finding a sparse representation of nonlinear features, we propose kernel sparse representation (KSR). Essentially, KSR is a sparse coding technique in a high dimensional feature space mapped by an implicit mapping function. We apply KSR to feature coding in image classification, face recognition, and kernel matrix approximation. More specifically, by incorporating KSR into spatial pyramid matching (SPM), we develop KSRSPM, which achieves a good performance for image classification. Moreover, KSR-based feature coding can be shown as a generalization of efficient match kernel and an extension of Sc-based SPM. We further show that our proposed KSR using a histogram intersection kernel (HIK) can be considered a soft assignment extension of HIK-based feature quantization in the feature coding process. Besides feature coding, comparing with sparse coding, KSR can learn more discriminative sparse codes and achieve higher accuracy for face recognition. Moreover, KSR can also be applied to kernel matrix approximation in large scale learning tasks, and it demonstrates its robustness to kernel matrix approximation, especially when a small fraction of the data is used. Extensive experimental results demonstrate promising results of KSR in image classification, face recognition, and kernel matrix approximation. All these applications prove the effectiveness of KSR in computer vision and machine learning tasks. PMID:23014744

  14. Iterative software kernels

    SciTech Connect

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: "Current status of user-level sparse BLAS"; "Current status of the sparse BLAS toolkit"; and "Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit".

  15. Kernel PLS Estimation of Single-trial Event-related Potentials

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.

    2004-01-01

    Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative to the estimation of single-trial ERPs and improvement of ERP averages.

  16. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
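
    The global (non-local) part of kernel PLS smoothing can be sketched with plain numpy: extract score vectors from the centered Gram matrix that covary with the response, deflate, and regress the response on the orthonormal scores. This is a simplified single-response sketch; the RBF kernel width, number of components, and data are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def kernel_pls_smooth(x, y, gamma=10.0, n_components=4):
    # kernel PLS (single response): iteratively extract score vectors from
    # the centered Gram matrix, deflate, then regress y on the orthonormal
    # scores to obtain the smoothed fitted values
    n = len(y)
    K = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                       # center the Gram matrix
    yc = y - y.mean()
    scores = []
    for _ in range(n_components):
        t = Kc @ yc                      # direction of maximal covariance
        t /= np.linalg.norm(t)
        scores.append(t)
        P = np.eye(n) - np.outer(t, t)   # deflation projector
        Kc = P @ Kc @ P
        yc = P @ yc
    T = np.stack(scores, axis=1)
    return T @ (T.T @ (y - y.mean())) + y.mean()

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 100)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, 100)
smooth = kernel_pls_smooth(x, y)
print("noisy RMS error:", round(float(np.sqrt(((y - np.sin(2 * np.pi * x)) ** 2).mean())), 3))
print("smoothed RMS error:", round(float(np.sqrt(((smooth - np.sin(2 * np.pi * x)) ** 2).mean())), 3))
```

    The locally-based extension described in the abstract would additionally weight the kernel by prior knowledge of where the curve changes shape; the sketch above covers only the baseline KPLS smoother.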

  17. A three dimensional immersed smoothed finite element method (3D IS-FEM) for fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Zhang, Zhi-Qian; Liu, G. R.; Khoo, Boo Cheong

    2013-02-01

    A three-dimensional immersed smoothed finite element method (3D IS-FEM) using four-node tetrahedral element is proposed to solve 3D fluid-structure interaction (FSI) problems. The 3D IS-FEM is able to determine accurately the physical deformation of the nonlinear solids placed within the incompressible viscous fluid governed by Navier-Stokes equations. The method employs the semi-implicit characteristic-based split scheme to solve the fluid flows and smoothed finite element methods to calculate the transient dynamics responses of the nonlinear solids based on explicit time integration. To impose the FSI conditions, a novel, effective and sufficiently general technique via simple linear interpolation is presented based on Lagrangian fictitious fluid meshes coinciding with the moving and deforming solid meshes. In the comparisons to the referenced works including experiments, it is clear that the proposed 3D IS-FEM ensures stability of the scheme with the second order spatial convergence property; and the IS-FEM is fairly independent of a wide range of mesh size ratio.

  18. [Methods to smooth mortality indicators: application to analysis of inequalities in mortality in Spanish cities (the MEDEA Project)]

    PubMed

    Barceló, M Antònia; Saez, Marc; Cano-Serral, Gemma; Martínez-Beneito, Miguel Angel; Martínez, José Miguel; Borrell, Carme; Ocaña-Riola, Ricardo; Montoya, Imanol; Calvo, Montse; López-Abente, Gonzalo; Rodríguez-Sanz, Maica; Toro, Silvia; Alcalá, José Tomás; Saurina, Carme; Sánchez-Villegas, Pablo; Figueiras, Adolfo

    2008-01-01

    Although there is some experience in the study of mortality inequalities in Spanish cities, there are large urban centers that have not yet been investigated using the census tract as the unit of territorial analysis. The coordinated project was designed to fill this gap, with the participation of 10 groups of researchers in Andalusia, Aragon, Catalonia, Galicia, Madrid, Valencia, and the Basque Country. The MEDEA project has four distinguishing features: a) the census tract is used as the basic geographical area; b) statistical methods that include the geographical structure of the region under study are employed for risk estimation; c) data are drawn from three complementary sources (information on air pollution, information on industrial pollution, and mortality registries); and d) a coordinated, large-scale analysis, favored by the implementation of coordinated research networks, is carried out. The main objective of the present study was to explain the methods for smoothing mortality indicators in the context of the MEDEA project. This study focuses on the methodology and the results of the Besag, York and Mollié (BYM) model in disease mapping. In the MEDEA project, standardized mortality ratios (SMR), corresponding to 17 large groups of causes of death and 28 specific causes, were smoothed by means of the BYM model; in the present study, this methodology was applied to mortality due to cancer of the trachea, bronchi and lung in men and women in the city of Barcelona from 1996 to 2003. As a result of smoothing, a different geographical pattern for the SMR was observed in each gender. In men, an SMR higher than unity was found in highly deprived areas. In contrast, in women, this pattern was observed in more affluent areas. PMID:19080940

  19. Derivation of aerodynamic kernel functions

    NASA Technical Reports Server (NTRS)

    Dowell, E. H.; Ventres, C. S.

    1973-01-01

    The method of Fourier transforms is used to determine the kernel function which relates the pressure on a lifting surface to the prescribed downwash within the framework of Dowell's (1971) shear flow model. This model is intended to improve upon the potential flow aerodynamic model by allowing for the aerodynamic boundary layer effects neglected in the potential flow model. For simplicity, incompressible, steady flow is considered. The proposed method is illustrated by deriving known results from potential flow theory.

  20. Detecting cancer clusters in a regional population with local cluster tests and Bayesian smoothing methods: a simulation study

    PubMed Central

    2013-01-01

    Background There is a rising public and political demand for prospective cancer cluster monitoring. But there is little empirical evidence on the performance of established cluster detection tests under conditions of small and heterogeneous sample sizes and varying spatial scales, such as are the case for most existing population-based cancer registries. Therefore this simulation study aims to evaluate different cluster detection methods, implemented in the open source environment R, in their ability to identify clusters of lung cancer using real-life data from an epidemiological cancer registry in Germany. Methods Risk surfaces were constructed with two different spatial cluster types, representing a relative risk of RR = 2.0 or of RR = 4.0, in relation to the overall background incidence of lung cancer, separately for men and women. Lung cancer cases were sampled from this risk surface as geocodes using an inhomogeneous Poisson process. The realisations of the cancer cases were analysed within small spatial (census tracts, N = 1983) and within aggregated large spatial scales (communities, N = 78). Subsequently, they were submitted to the cluster detection methods. The test accuracy for cluster location was determined in terms of detection rates (DR), false-positive (FP) rates and positive predictive values. The Bayesian smoothing models were evaluated using ROC curves. Results With moderate risk increase (RR = 2.0), local cluster tests showed better DR (for both spatial aggregation scales > 0.90) and lower FP rates (both < 0.05) than the Bayesian smoothing methods. When the cluster RR was raised four-fold, the local cluster tests showed better DR with lower FPs only for the small spatial scale. At a large spatial scale, the Bayesian smoothing methods, especially those implementing a spatial neighbourhood, showed a substantially lower FP rate than the cluster tests. However, the risk increases at this scale were mostly diluted by data

  1. Segmentation of 3D microPET images of the rat brain via the hybrid gaussian mixture method with kernel density estimation.

    PubMed

    Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing

    2012-01-01

    Segmentation of positron emission tomography (PET) is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method needs a prior estimation of parameters such as the number of clusters and appropriate initialized values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial to registration of disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnosis and to estimate standardized uptake values (SUVs) of region of interests (ROIs) in PET images. Therefore, simulation studies are conducted to apply spherical targets to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. The PET images of a rat brain are used to compare the segmented shape and area of the cerebral cortex by the K-Means method and the proposed method by volume rendering. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than those by the K-Means method. PMID:22948355
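
    The hybrid idea, using a kernel density estimate of the intensity distribution to initialize a Gaussian mixture model rather than guessing cluster parameters as K-Means requires, can be sketched in 1D. This is a toy on synthetic intensities, not the authors' pipeline; the bandwidth, component count, and data are assumptions.

```python
import numpy as np

def kde(grid, data, bw):
    # Gaussian kernel density estimate of the intensity distribution
    z = (grid[:, None] - data[None, :]) / bw
    return np.exp(-0.5 * z ** 2).sum(1) / (len(data) * bw * np.sqrt(2 * np.pi))

def em_gmm_1d(data, init_means, n_iter=60):
    # plain 1D EM for a Gaussian mixture, initialized from KDE modes
    k = len(init_means)
    means = np.asarray(init_means, float).copy()
    stds = np.full(k, data.std())
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        dens = weights * np.exp(-0.5 * ((data[:, None] - means) / stds) ** 2) / stds
        resp = dens / dens.sum(1, keepdims=True)   # E-step: responsibilities
        nk = resp.sum(0)                           # M-step: update parameters
        means = (resp * data[:, None]).sum(0) / nk
        stds = np.sqrt((resp * (data[:, None] - means) ** 2).sum(0) / nk) + 1e-9
        weights = nk / len(data)
    return means, stds, weights

rng = np.random.default_rng(3)
# synthetic "voxel" intensities: a background class and a hot-spot class
data = np.concatenate([rng.normal(0.2, 0.05, 600), rng.normal(0.7, 0.08, 400)])
grid = np.linspace(0.0, 1.0, 200)
d = kde(grid, data, bw=0.05)
# local maxima of the KDE initialize the mixture means (keep the two strongest)
is_mode = (d[1:-1] > d[:-2]) & (d[1:-1] > d[2:])
mode_x, mode_d = grid[1:-1][is_mode], d[1:-1][is_mode]
init = mode_x[np.argsort(mode_d)[::-1][:2]]
means, stds, weights = em_gmm_1d(data, init)
print("estimated class means:", np.round(np.sort(means), 2))
```

    Segmentation labels then follow by assigning each voxel to the component with the highest posterior responsibility, with no manually chosen cluster count or initial values.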

  2. Biological sequence classification with multivariate string kernels.

    PubMed

    Kuksa, Pavel P

    2013-01-01

    String kernel-based machine learning methods have yielded great success in practical tasks of structured/sequential data analysis. They often exhibit state-of-the-art performance on many practical tasks of sequence analysis such as biological sequence classification, remote homology detection, or protein superfamily and fold prediction. However, typical string kernel methods rely on the analysis of discrete 1D string data (e.g., DNA or amino acid sequences). In this paper, we address the multiclass biological sequence classification problems using multivariate representations in the form of sequences of features vectors (as in biological sequence profiles, or sequences of individual amino acid physicochemical descriptors) and a class of multivariate string kernels that exploit these representations. On three protein sequence classification tasks, the proposed multivariate representations and kernels show significant 15-20 percent improvements compared to existing state-of-the-art sequence classification methods. PMID:24384708

  3. Biological Sequence Analysis with Multivariate String Kernels.

    PubMed

    Kuksa, Pavel P

    2013-03-01

    String kernel-based machine learning methods have yielded great success in practical tasks of structured/sequential data analysis. They often exhibit state-of-the-art performance on many practical tasks of sequence analysis such as biological sequence classification, remote homology detection, or protein superfamily and fold prediction. However, typical string kernel methods rely on analysis of discrete one-dimensional (1D) string data (e.g., DNA or amino acid sequences). In this work we address the multi-class biological sequence classification problems using multivariate representations in the form of sequences of feature vectors (as in biological sequence profiles, or sequences of individual amino acid physico-chemical descriptors) and a class of multivariate string kernels that exploit these representations. On a number of protein sequence classification tasks, the proposed multivariate representations and kernels show significant 15-20 percent improvements compared to existing state-of-the-art sequence classification methods. PMID:23509193

  4. Determination of particle size distribution by light extinction method using improved pattern search algorithm with Tikhonov smoothing functional

    NASA Astrophysics Data System (ADS)

    Wang, Li; Sun, Xiaogang; Xing, Jian

    2012-12-01

    An inversion technique which combines the pattern search algorithm with the Tikhonov smoothing functional for retrieval of particle size distribution (PSD) by light extinction method is proposed. In the unparameterized shape-independent model, we first transform the PSD inversion problem into an optimization problem, with the Tikhonov smoothing functional employed to model the objective function. The optimization problem is then solved by the pattern search algorithm. To ensure good convergence rate and accuracy of the whole retrieval, a competitive strategy for determining the initial point of the pattern search algorithm is also designed. The accuracy and limitations of the proposed technique are tested by the inversion results of synthetic and real standard polystyrene particles immersed in water. In addition, the issues about the objective function and computation time are further discussed. Both simulation and experimental results show that the technique can be successfully applied to retrieve the PSD with high reliability and stability in the presence of random noise. Compared with the Phillips-Twomey method and genetic algorithm, the proposed technique has certain advantages in terms of reaching a more accurate and steady optimal solution with less computational effort, thus making this technique more suitable for quick and accurate measurement of PSD.
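
    The Tikhonov-regularized objective at the heart of such inversions can be sketched directly. Note the paper minimizes it with a pattern search algorithm; the sketch below instead uses the closed-form normal-equations solution, and the extinction matrix, grids, and regularization weight are all hypothetical stand-ins (real work would use Mie theory for the matrix).

```python
import numpy as np

def second_difference(n):
    # discrete second-derivative operator used as the Tikhonov smoothing term
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def tikhonov_invert(A, b, lam):
    # argmin_f ||A f - b||^2 + lam * ||D f||^2, then clip to non-negative
    # (a size distribution cannot be negative); closed-form normal equations
    D = second_difference(A.shape[1])
    f = np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ b)
    return np.clip(f, 0.0, None)

rng = np.random.default_rng(4)
sizes = np.linspace(0.1, 10.0, 40)        # particle sizes (arbitrary units)
wavelengths = np.linspace(0.4, 0.8, 25)   # measurement wavelengths (a.u.)
# hypothetical smooth extinction matrix standing in for Mie efficiencies
A = np.exp(-((wavelengths[:, None] - sizes[None, :] / 10.0) ** 2) / 0.05)
f_true = np.exp(-0.5 * ((sizes - 4.0) / 1.0) ** 2)   # smooth unimodal PSD
b = A @ f_true + rng.normal(0.0, 0.01, len(wavelengths))
f_est = tikhonov_invert(A, b, lam=0.1)
print("peak of recovered PSD at size:", round(float(sizes[f_est.argmax()]), 2))
```

    The smoothing term penalizes oscillatory solutions, which is what stabilizes this ill-posed inversion against measurement noise; a derivative-free optimizer such as pattern search becomes attractive when constraints or noise models make the closed form unavailable.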

  5. Variable-node plate and shell elements with assumed natural strain and smoothed integration methods for nonmatching meshes

    NASA Astrophysics Data System (ADS)

    Sohn, Dongwoo; Im, Seyoung

    2013-06-01

    In this paper, novel finite elements that include an arbitrary number of additional nodes on each edge of a quadrilateral element are proposed to achieve compatible connection of neighboring nonmatching meshes in plate and shell analyses. The elements, termed variable-node plate elements, are based on two-dimensional variable-node elements with point interpolation and on the Mindlin-Reissner plate theory. Subsequently the flat shell elements, termed variable-node shell elements, are formulated by further extending the plate elements. To eliminate a transverse shear locking phenomenon, the assumed natural strain method is used for plate and shell analyses. Since the variable-node plate and shell elements allow an arbitrary number of additional nodes and overcome locking problems, they make it possible to connect two nonmatching meshes and to provide accurate solutions in local mesh refinement. In addition, the curvature and strain smoothing methods through smoothed integration are adopted to improve the element performance. Several numerical examples are presented to demonstrate the effectiveness of the elements in terms of the accuracy and efficiency of the analyses.

  6. Fast and Reliable Time Delay Estimation of Strong Lens Systems Using the Smoothing and Cross-correlation Methods

    NASA Astrophysics Data System (ADS)

    Aghamousa, Amir; Shafieloo, Arman

    2015-05-01

    The observable time delays between multiple images of strong lensing systems with time variable sources can provide us with some valuable information for probing the expansion history of the universe. Estimating these time delays can be very challenging due to complexities in the observed data caused by seasonal gaps, various noises, and systematics such as unknown microlensing effects. In this paper, we introduce a novel approach for estimating the time delays for strong lensing systems, implementing various statistical methods of data analysis including the smoothing and cross-correlation methods. The method we introduce in this paper has recently been used in the TDC0 and TDC1 Strong Lens Time Delay Challenges and has shown its power in providing reliable and precise estimates of time delays dealing with data with different complexities.
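
    The smoothing-plus-cross-correlation idea can be demonstrated on synthetic light curves: smooth both curves, slide one over trial lags, and take the lag that maximizes the correlation. This toy assumes regular sampling with no seasonal gaps or microlensing, so it is far simpler than the challenge data the method was built for.

```python
import numpy as np

def gaussian_smooth(y, width):
    # fixed-bandwidth kernel smoothing of a (regularly sampled) light curve
    u = np.arange(-3 * width, 3 * width + 1) / width
    k = np.exp(-0.5 * u ** 2)
    return np.convolve(y, k / k.sum(), mode="same")

def estimate_delay(a, b, max_lag):
    # slide curve a over trial lags and pick the lag maximizing the
    # cross-correlation with curve b (edges trimmed to avoid wrap-around)
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.corrcoef(np.roll(a, -l)[max_lag:-max_lag],
                      b[max_lag:-max_lag])[0, 1] for l in lags]
    return int(lags[int(np.argmax(cc))])

rng = np.random.default_rng(5)
source = np.cumsum(rng.normal(0.0, 1.0, 540))   # random-walk source variability
true_delay = 12
img_a = source[:500] + rng.normal(0.0, 1.0, 500)                          # image A
img_b = source[true_delay:true_delay + 500] + rng.normal(0.0, 1.0, 500)   # delayed image B
sa, sb = gaussian_smooth(img_a, 5), gaussian_smooth(img_b, 5)
print("estimated delay:", estimate_delay(sa, sb, max_lag=30))
```

    Smoothing before cross-correlating suppresses the measurement noise that would otherwise flatten the correlation peak; the choice of bandwidth trades bias in the peak location against noise sensitivity.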

  7. Development of a coupled discrete element (DEM)-smoothed particle hydrodynamics (SPH) simulation method for polyhedral particles

    NASA Astrophysics Data System (ADS)

    Nassauer, Benjamin; Liedke, Thomas; Kuna, Meinhard

    2016-03-01

    In the present paper, the direct coupling of a discrete element method (DEM) with polyhedral particles and smoothed particle hydrodynamics (SPH) is presented. The two simulation techniques are fully coupled in both ways through interaction forces between the solid DEM particles and the fluid SPH particles. Thus this simulation method provides the possibility to simulate the individual movement of polyhedral, sharp-edged particles as well as the flow field around these particles in fluid-saturated granular matter which occurs in many technical processes e.g. wire sawing, grinding or lapping. The coupled method is exemplified and validated by the simulation of a particle in a shear flow, which shows good agreement with analytical solutions.

  8. Critical Parameters of the In Vitro Method of Vascular Smooth Muscle Cell Calcification

    PubMed Central

    Hortells, Luis; Sosa, Cecilia; Millán, Ángel; Sorribas, Víctor

    2015-01-01

    Background Vascular calcification (VC) is primarily studied using cultures of vascular smooth muscle cells. However, the use of very different protocols and extreme conditions can provide findings unrelated to VC. In this work we aimed to determine the critical experimental parameters that affect calcification in vitro and to determine the relevance to calcification in vivo. Experimental Procedures and Results Rat VSMC calcification in vitro was studied using different concentrations of fetal calf serum, calcium, and phosphate, in different types of culture media, and using various volumes and rates of change. The bicarbonate content of the media critically affected pH and resulted in supersaturation, depending on the concentration of Ca2+ and Pi. Such supersaturation is a consequence of the high dependence of bicarbonate buffers on CO2 vapor pressure and bicarbonate concentration at pHs above 7.40. Such buffer systems cause considerable pH variations as a result of minor experimental changes. The variations are more critical for DMEM and are negligible when the bicarbonate concentration is reduced to ¼. Particle nucleation and growth were observed by dynamic light scattering and electron microscopy. Using 2 mM Pi, particles of ~200 nm were observed at 24 hours in MEM and at 1 hour in DMEM. These nuclei grew over time, were deposited in the cells, and caused osteogene expression or cell death, depending on the precipitation rate. TEM observations showed that the initial precipitate was amorphous calcium phosphate (ACP), which converts into hydroxyapatite over time. In blood, the scenario is different, because supersaturation is avoided by a tightly controlled pH of 7.4, which prevents the formation of PO4(3-)-containing ACP. Conclusions The precipitation of ACP in vitro is unrelated to VC in vivo. The model needs to be refined through controlled pH and the use of additional procalcifying agents other than Pi in order to reproduce calcium phosphate deposition in vivo

  9. A class of kernel based real-time elastography algorithms.

    PubMed

    Kibria, Md Golam; Hasan, Md Kamrul

    2015-08-01

    In this paper, a novel real-time kernel-based and gradient-based Phase Root Seeking (PRS) algorithm for ultrasound elastography is proposed. The signal-to-noise ratio of the strain image resulting from this method is improved by minimizing the cross-correlation discrepancy between the pre- and post-compression radio frequency signals with an adaptive temporal stretching method and by employing built-in smoothing through an exponentially weighted neighborhood kernel in the displacement calculation. Unlike conventional PRS algorithms, displacement due to tissue compression is estimated from the root of the weighted average of the zero-lag cross-correlation phases of the pairs of corresponding analytic pre- and post-compression windows in the neighborhood kernel. In addition to the proposed algorithm, the other time- and frequency-domain elastography algorithms proposed by our group (Ara et al., 2013; Hussain et al., 2012; Hasan et al., 2012) are also implemented in real time in Java, with the computations executed serially or in parallel on multiple processors with efficient memory management. Simulation results using a finite element modeling phantom show that the proposed method significantly improves strain image quality in terms of elastographic signal-to-noise ratio (SNRe), elastographic contrast-to-noise ratio (CNRe) and mean structural similarity (MSSIM) for strains as high as 4% compared with other techniques reported in the literature. Strain images obtained for an experimental phantom as well as for in vivo breast data of malignant and benign masses also show the efficacy of our proposed method over the other reported techniques. PMID:25929595
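    As an illustration of the phase-based idea (a simplified sketch, not the authors' implementation), the axial displacement of a single window pair can be estimated from the phase of the zero-lag cross-correlation of the analytic signals; the function name, center frequency `f0`, and sound speed `c` are illustrative assumptions:

```python
import numpy as np
from scipy.signal import hilbert

def phase_displacement(pre, post, f0, c=1540.0):
    """Estimate axial displacement between a pre- and post-compression
    RF window from the phase of the zero-lag complex cross-correlation
    of their analytic signals (one non-iterative phase estimate; real
    PRS algorithms iterate and weight over a neighborhood kernel)."""
    a_pre = hilbert(pre)
    a_post = hilbert(post)
    # zero-lag cross-correlation: sum(conj(a_post) * a_pre)
    r = np.vdot(a_post, a_pre)
    tau = np.angle(r) / (2.0 * np.pi * f0)  # time shift in seconds
    return c * tau / 2.0                    # axial displacement in meters
```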

  10. Convex-relaxed kernel mapping for image segmentation.

    PubMed

    Ben Salah, Mohamed; Ben Ayed, Ismail; Jing Yuan; Hong Zhang

    2014-03-01

    This paper investigates a convex-relaxed kernel mapping formulation of image segmentation. We optimize, under some partition constraints, a functional containing two characteristic terms: 1) a data term, which maps the observation space to a higher (possibly infinite) dimensional feature space via a kernel function, thereby evaluating nonlinear distances between the observations and segment parameters, and 2) a total-variation term, which favors smooth segment surfaces (or boundaries). The algorithm iterates two steps: 1) a convex-relaxation optimization with respect to the segments by solving an equivalent constrained problem via the augmented Lagrange multiplier method and 2) a convergent fixed-point optimization with respect to the segment parameters. The proposed algorithm can handle a variety of image types without the need for complex and application-specific statistical modeling, while having the computational benefits of convex relaxation. Our solution is amenable to parallelized implementations on graphics processing units (GPUs) and extends easily to high dimensions. We evaluated the proposed algorithm with several sets of comprehensive experiments and comparisons, including: 1) computational evaluations over 3D medical-imaging examples and high-resolution large-size color photographs, which demonstrate that a parallelized implementation of the proposed method run on a GPU brings a significant speed-up, and 2) accuracy evaluations against five state-of-the-art methods over the Berkeley color-image database and a multimodal synthetic data set, which demonstrate competitive performance of the algorithm. PMID:24723519
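    The data term described above evaluates distances in feature space without an explicit mapping. A minimal sketch of such a kernel-induced squared distance, in the style of kernel k-means with an assumed RBF kernel (not the paper's full functional):

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    """Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def kernel_distance2(x, cluster, gamma=0.5):
    """Squared distance in feature space between phi(x) and the mean of
    phi over `cluster`, computed with the kernel trick only:
    ||phi(x) - m||^2 = k(x,x) - 2*mean_c k(x,c) + mean_{a,b} k(a,b)."""
    kxx = 1.0  # rbf(x, x) for an RBF kernel
    kxc = np.mean([rbf(x, c, gamma) for c in cluster])
    kcc = np.mean([[rbf(a, b, gamma) for b in cluster] for a in cluster])
    return kxx - 2.0 * kxc + kcc
```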

  11. Multiple kernel learning for sparse representation-based classification.

    PubMed

    Shrivastava, Ashish; Patel, Vishal M; Chellappa, Rama

    2014-07-01

    In this paper, we propose a multiple kernel learning (MKL) algorithm that is based on the sparse representation-based classification (SRC) method. Taking advantage of nonlinear kernel SRC in efficiently representing the nonlinearities in the high-dimensional feature space, we propose an MKL method based on the kernel alignment criterion. Our method uses a two-step training procedure to learn the kernel weights and sparse codes. At each iteration, the sparse codes are updated first while fixing the kernel mixing coefficients, and then the kernel mixing coefficients are updated while fixing the sparse codes. These two steps are repeated until a stopping criterion is met. The effectiveness of the proposed method is demonstrated using several publicly available image classification databases, and it is shown that this method can perform significantly better than many competitive image classification algorithms. PMID:24835226
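    The kernel alignment criterion mentioned above can be sketched as follows; the centered-alignment form and the function names are common conventions assumed here, not necessarily the paper's exact formulation:

```python
import numpy as np

def center(K):
    """Center a Gram matrix in feature space: H K H with H = I - 11^T/n."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def alignment(K, Ky):
    """Centered kernel alignment between a base Gram matrix K and the
    ideal target kernel Ky = y y^T built from the labels; the value is
    1 when the (centered) kernels are proportional."""
    Kc, Kyc = center(K), center(Ky)
    return np.sum(Kc * Kyc) / (np.linalg.norm(Kc) * np.linalg.norm(Kyc))
```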

  12. A Parallel Implementation of a Smoothed Particle Hydrodynamics Method on Graphics Hardware Using the Compute Unified Device Architecture

    SciTech Connect

    Wong Unhong; Wong Honcheng; Tang Zesheng

    2010-05-21

    The smoothed particle hydrodynamics (SPH) method, which belongs to the class of meshfree particle methods (MPMs), has a wide range of applications from micro-scale to macro-scale as well as from discrete systems to continuum systems. Graphics hardware, originally designed for computer graphics, now provides unprecedented computational power for scientific computation. Particle systems require a huge amount of computation in physical simulation. In this paper, an efficient parallel implementation of an SPH method on graphics hardware using the Compute Unified Device Architecture is developed for fluid simulation. Compared to the corresponding CPU implementation, our experimental results show that the new approach allows significant speedups of fluid simulation by handling the huge amount of computation in parallel on graphics hardware.

  13. Improving convergence in smoothed particle hydrodynamics simulations without pairing instability

    NASA Astrophysics Data System (ADS)

    Dehnen, Walter; Aly, Hossam

    2012-09-01

    The numerical convergence of smoothed particle hydrodynamics (SPH) can be severely restricted by random force errors induced by particle disorder, especially in shear flows, which are ubiquitous in astrophysics. The increase in the number NH of neighbours when switching to more extended smoothing kernels at fixed resolution (using an appropriate definition for the SPH resolution scale) is insufficient to combat these errors. Consequently, trading resolution for better convergence is necessary, but for traditional smoothing kernels this option is limited by the pairing (or clumping) instability. Therefore, we investigate the suitability of the Wendland functions as smoothing kernels and compare them with the traditional B-splines. Linear stability analysis in three dimensions and test simulations demonstrate that the Wendland kernels avoid the pairing instability for all NH, despite having a vanishing derivative at the origin (disproving traditional ideas about the origin of this instability; instead, we uncover a relation with the kernel Fourier transform and give an explanation in terms of the SPH density estimator). The Wendland kernels are computationally more convenient than the higher-order B-splines, allowing large NH and hence better numerical convergence (note that computational costs rise sublinearly with NH). Our analysis also shows that at low NH the quartic spline kernel with NH ≈ 60 obtains much better convergence than the standard cubic spline.
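    For reference, the 3D cubic spline (M4) and the Wendland C2 kernel can be sketched as follows, both written with support radius 2h so they vanish at q = 2; the normalization constants follow the standard SPH literature, and this is an illustrative sketch rather than the paper's code:

```python
import numpy as np

def cubic_spline(q, h):
    """Standard M4 cubic spline SPH kernel in 3D (support radius 2h)."""
    q = np.asarray(q, dtype=float)
    sigma = 1.0 / (np.pi * h ** 3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
                 np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

def wendland_c2(q, h):
    """Wendland C2 kernel in 3D, written with support radius 2h.
    Its derivative vanishes at q = 0, yet (as the abstract notes)
    it avoids the pairing instability for all neighbour numbers."""
    q = np.asarray(q, dtype=float)
    sigma = 21.0 / (16.0 * np.pi * h ** 3)
    w = np.where(q < 2.0, (1.0 - 0.5 * q) ** 4 * (1.0 + 2.0 * q), 0.0)
    return sigma * w
```

Both kernels integrate to unity over their 3D support, which can be checked numerically.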

  14. Community detection using Kernel Spectral Clustering with memory

    NASA Astrophysics Data System (ADS)

    Langone, Rocco; Suykens, Johan A. K.

    2013-02-01

    This work addresses the problem of community detection in dynamic scenarios, which arises for instance in the segmentation of moving objects, the clustering of telephone traffic data, time-series microarray data, etc. A desirable feature of a clustering model that must capture the evolution of communities over time is temporal smoothness between clusters in successive time-steps. In this way the model is able to track the long-term trend while smoothing out short-term variation due to noise. We use Kernel Spectral Clustering with Memory effect (MKSC), which allows cluster memberships of new nodes to be predicted via out-of-sample extension and has a proper model selection scheme. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness as valid prior knowledge. The latter, in fact, allows the model to cluster the current data well while remaining consistent with the recent history. Here we propose a generalization of the MKSC model with an arbitrary memory, not only one time-step in the past. The experiments conducted on toy problems confirm our expectations: the more memory we add to the model, the smoother the clustering results are over time. We also compare with the Evolutionary Spectral Clustering (ESC) algorithm, a state-of-the-art method, and obtain comparable or better results.

  15. Method of adiabatic modes in research of smoothly irregular integrated optical waveguides: zero approximation

    SciTech Connect

    Egorov, A A; Sevast'yanov, L A; Sevast'yanov, A L

    2014-02-28

    We consider the application of the method of adiabatic waveguide modes to calculating the propagation of electromagnetic radiation in three-dimensional (3D) irregular integrated optical waveguides. The method of adiabatic modes takes into account a three-dimensional distribution of quasi-waveguide modes and explicit ('inclined') tangential boundary conditions. The capabilities of the method are demonstrated through numerical studies of two major elements of integrated optics: a waveguide of 'horn' type and a thin-film generalised waveguide Luneburg lens, using the methods of adiabatic modes and comparative waveguides. (integrated optical waveguides)

  16. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power
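    A minimal sketch of kernel PCA with a fractional power polynomial kernel, keeping only eigenvectors associated with positive eigenvalues as the paper describes; the sign-preserving extension of the fractional power to negative dot products, and all names here, are assumptions for illustration:

```python
import numpy as np

def frac_poly_kernel_pca(X, d=0.8, n_components=2):
    """Kernel PCA with the fractional power polynomial 'kernel'
    k(x, y) = sign(x.y) * |x.y|**d, 0 < d < 1.  Because such a kernel
    need not yield a positive semidefinite Gram matrix, only
    eigenvectors with positive eigenvalues are retained."""
    G = X @ X.T
    K = np.sign(G) * np.abs(G) ** d
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                      # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1]      # sort eigenvalues descending
    vals, vecs = vals[order], vecs[:, order]
    keep = vals > 1e-10                 # discard non-positive eigenvalues
    vals, vecs = vals[keep], vecs[:, keep]
    alphas = vecs[:, :n_components] / np.sqrt(vals[:n_components])
    return Kc @ alphas                  # projections of the training data
```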

  17. Robotic Intelligence Kernel: Communications

    SciTech Connect

    Walton, Mike C.

    2009-09-16

    The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.

  18. Robotic Intelligence Kernel: Driver

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.

  19. SPLASH: An Interactive Visualization Tool for Smoothed Particle Hydrodynamics Simulations

    NASA Astrophysics Data System (ADS)

    Price, Daniel J.

    2011-03-01

    SPLASH (formerly SUPERSPHPLOT) is a visualization tool for output from (astrophysical) simulations using the Smoothed Particle Hydrodynamics (SPH) method in one, two and three dimensions. It is written in Fortran 90 and utilises the PGPLOT graphics subroutine library to do the actual plotting. It is based around a command-line menu structure but utilises the interactive capabilities of PGPLOT to manipulate data interactively in the plotting window. SPLASH is a fully interactive program; visualizations can be changed rapidly at the touch of a button (e.g. zooming, rotating, shifting cross section positions etc). Data is read directly from the code dump format giving rapid access to results and the visualization is advanced forwards and backwards through timesteps by single keystrokes. SPLASH uses the SPH kernel to render plots of not only density but other physical quantities, giving a smooth representation of the data.

  20. Protein interaction sentence detection using multiple semantic kernels

    PubMed Central

    2011-01-01

    Background Detection of sentences that describe protein-protein interactions (PPIs) in biomedical publications is a challenging and unresolved pattern recognition problem. Many state-of-the-art approaches for this task employ kernel classification methods, in particular support vector machines (SVMs). In this work we propose a novel data integration approach that utilises semantic kernels and a kernel classification method that is a probabilistic analogue to SVMs. Semantic kernels are created from statistical information gathered from large amounts of unlabelled text using lexical semantic models. Several semantic kernels are then fused into an overall composite classification space. In this initial study, we use simple features in order to examine whether the use of combinations of kernels constructed using word-based semantic models can improve PPI sentence detection. Results We show that combinations of semantic kernels lead to statistically significant improvements in recognition rates and receiver operating characteristic (ROC) scores over the plain Gaussian kernel, when applied to a well-known labelled collection of abstracts. The proposed kernel composition method also allows us to automatically infer the most discriminative kernels. Conclusions The results from this paper indicate that using semantic information from unlabelled text, and combinations of such information, can be valuable for classification of short texts such as PPI sentences. This study, however, is only a first step in evaluation of semantic kernels and probabilistic multiple kernel learning in the context of PPI detection. The method described herein is modular, and can be applied with a variety of feature types, kernels, and semantic models, in order to facilitate full extraction of interacting proteins. PMID:21569604

  1. Smoothed Particle Hydrodynamics Continuous Boundary Force method for Navier-Stokes equations subject to Robin boundary condition

    SciTech Connect

    Pan, Wenxiao; Bao, Jie; Tartakovsky, Alexandre M.

    2014-02-15

    The Robin boundary condition for the Navier-Stokes equations is used to model slip conditions at fluid-solid boundaries. A novel Continuous Boundary Force (CBF) method is proposed for solving the Navier-Stokes equations subject to the Robin boundary condition. In the CBF method, the Robin boundary condition is replaced by a homogeneous Neumann boundary condition at the boundary and a volumetric force term added to the momentum conservation equation. The Smoothed Particle Hydrodynamics (SPH) method is used to solve the resulting Navier-Stokes equations. We present solutions for two-dimensional and three-dimensional flows in domains bounded by flat and curved boundaries subject to various forms of the Robin boundary condition. The numerical accuracy and convergence are examined through comparison of the SPH-CBF results with the solutions of finite difference or finite element methods. Taking the no-slip boundary condition as a special case of the slip boundary condition, we demonstrate that the SPH-CBF method accurately describes both no-slip and slip conditions.

  2. Smooth Sailing.

    ERIC Educational Resources Information Center

    Price, Beverley; Pincott, Maxine; Rebman, Ashley; Northcutt, Jen; Barsanti, Amy; Silkunas, Betty; Brighton, Susan K.; Reitz, David; Winkler, Maureen

    1999-01-01

    Presents discipline tips from several teachers to keep classrooms running smoothly all year. Some of the suggestions include the following: a bear-cave warning system, peer mediation, a motivational mystery, problem students acting as the teacher's assistant, a positive-behavior-reward chain, a hallway scavenger hunt (to ensure quiet passage…

  3. Well-tempered metadynamics: a smoothly converging and tunable free-energy method.

    PubMed

    Barducci, Alessandro; Bussi, Giovanni; Parrinello, Michele

    2008-01-18

    We present a method for determining the free-energy dependence on a selected number of collective variables using an adaptive bias. The formalism provides a unified description which has metadynamics and canonical sampling as limiting cases. Convergence and errors can be rigorously and easily controlled. The parameters of the simulation can be tuned so as to focus the computational effort only on the physically relevant regions of the order parameter space. The algorithm is tested on the reconstruction of an alanine dipeptide free-energy landscape. PMID:18232845
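    The tempered bias deposition can be sketched for a 1-D collective variable as follows: each new Gaussian's height is scaled by exp(-V/(kB*DeltaT)), so deposition slows smoothly as the bias accumulates. The parameter values and function name are illustrative, not the authors' implementation:

```python
import numpy as np

def wt_metadynamics(traj_s, s_grid, w0=1.0, sigma=0.2, dT=5.0):
    """Accumulate a well-tempered metadynamics bias on a 1-D collective
    variable.  For each visited value s_t, a Gaussian of width `sigma`
    is added whose height is w0 * exp(-V(s_t)/dT), where dT plays the
    role of kB*DeltaT in energy units."""
    V = np.zeros_like(s_grid)
    for s_t in traj_s:
        height = w0 * np.exp(-np.interp(s_t, s_grid, V) / dT)
        V += height * np.exp(-(s_grid - s_t) ** 2 / (2.0 * sigma ** 2))
    return V
```

Because each deposition reduces the next height, the accumulated bias grows only logarithmically with the number of Gaussians dropped at the same point, which is the smooth-convergence property described in the abstract.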

  4. Adaptive particle refinement and derefinement applied to the smoothed particle hydrodynamics method

    NASA Astrophysics Data System (ADS)

    Barcarolo, D. A.; Le Touzé, D.; Oger, G.; de Vuyst, F.

    2014-09-01

    SPH simulations are usually performed with a uniform particle distribution. New techniques have been recently proposed to enable the use of spatially varying particle distributions, which encouraged the development of automatic adaptivity and particle refinement/derefinement algorithms. All these efforts resulted in very interesting and promising procedures leading to more efficient and faster SPH simulations. In this article, a family of particle refinement techniques is reviewed and a new derefinement technique is proposed and validated through several test cases involving both free-surface and viscous flows. Besides, this new procedure allows higher resolutions in the regions requiring increased accuracy. Moreover, several levels of refinement can be used with this new technique, as often encountered in adaptive mesh refinement techniques in mesh-based methods.

  5. A method of smooth bivariate interpolation for data given on a generalized curvilinear grid

    NASA Technical Reports Server (NTRS)

    Zingg, David W.; Yarrow, Maurice

    1992-01-01

    A method of locally bicubic interpolation is presented for data given at the nodes of a two-dimensional generalized curvilinear grid. The physical domain is transformed to a computational domain in which the grid is uniform and rectangular by a generalized curvilinear coordinate transformation. The metrics of the transformation are obtained by finite differences in the computational domain. Metric derivatives are determined by repeated application of the chain rule for partial differentiation. Given the metrics and the metric derivatives, the partial derivatives required to determine a locally bicubic interpolant can be estimated at each data point using finite differences in the computational domain. A bilinear transformation is used to analytically transform the individual quadrilateral cells in the physical domain into unit squares, thus allowing the use of simple formulas for bicubic interpolation.
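    The bilinear transformation of a quadrilateral cell mentioned above can be sketched with a generic forward map from the unit square to an arbitrary quadrilateral (names and corner ordering are illustrative assumptions):

```python
import numpy as np

def bilinear_map(corners, xi, eta):
    """Map a point (xi, eta) in the unit square [0,1]^2 to the physical
    quadrilateral whose corners are given counter-clockwise as
    (p00, p10, p11, p01)."""
    p00, p10, p11, p01 = [np.asarray(c, dtype=float) for c in corners]
    return ((1 - xi) * (1 - eta) * p00 + xi * (1 - eta) * p10
            + xi * eta * p11 + (1 - xi) * eta * p01)
```

Inverting this map (analytically, as the abstract describes) locates a query point's cell-local coordinates, at which the bicubic interpolant is then evaluated.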

  6. Modern methods for calculating ground-wave field strength over a smooth spherical Earth

    NASA Astrophysics Data System (ADS)

    Eckert, R. P.

    1986-02-01

    The report makes available the computer program that produces the proposed new FCC ground-wave propagation prediction curves for the new band of standard broadcast frequencies between 1605 and 1705 kHz. The curves are included in recommendations to the U.S. Department of State in preparation for an International Telecommunication Union Radio Conference. The history of the FCC curves is traced from the early 1930s, when the Federal Radio Commission and later the FCC faced an intensifying need for technical information concerning interference distances. A family of curves satisfactorily meeting this need was published in 1940. The FCC reexamined the matter recently in connection with the planned expansion of the AM broadcast band, and the resulting new curves are a precise representation of the mathematical theory. Mathematical background is furnished so that the computer program can be critically evaluated. This will be particularly valuable to persons implementing the program on other computers or adapting it for special applications. Technical references are identified for each of the formulas used by the program, and the history of the development of mathematical methods is outlined.

  7. HS-SPME-GC-MS/MS Method for the Rapid and Sensitive Quantitation of 2-Acetyl-1-pyrroline in Single Rice Kernels.

    PubMed

    Hopfer, Helene; Jodari, Farman; Negre-Zakharov, Florence; Wylie, Phillip L; Ebeler, Susan E

    2016-05-25

    Demand for aromatic rice varieties (e.g., Basmati) is increasing in the US. Aromatic varieties typically have elevated levels of the aroma compound 2-acetyl-1-pyrroline (2AP). Due to its very low aroma threshold, analysis of 2AP provides a useful screening tool for rice breeders. Methods for 2AP analysis in rice should quantitate 2AP at or below sensory threshold level, avoid artifactual 2AP generation, and be able to analyze single rice kernels in cases where only small sample quantities are available (e.g., breeding trials). We combined headspace solid phase microextraction with gas chromatography tandem mass spectrometry (HS-SPME-GC-MS/MS) for analysis of 2AP, using an extraction temperature of 40 °C and a stable isotopologue as internal standard. 2AP calibrations were linear between the concentrations of 53 and 5380 pg/g, with detection limits below the sensory threshold of 2AP. Forty-eight aromatic and nonaromatic, milled rice samples from three harvest years were screened with the method for their 2AP content, and overall reproducibility, observed for all samples, ranged from 5% for experimental aromatic lines to 33% for nonaromatic lines. PMID:27133457

  8. Kernel mucking in top

    SciTech Connect

    LeFebvre, W.

    1994-08-01

    For many years, the popular program top has aided system administrators in the examination of process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also with what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.

  9. A Real-Time Orbit Determination Method for Smooth Transition from Optical Tracking to Laser Ranging of Debris

    PubMed Central

    Li, Bin; Sang, Jizhang; Zhang, Zhongping

    2016-01-01

    A critical requirement for achieving high efficiency of debris laser tracking is to have sufficiently accurate orbit predictions (OP) in both the pointing direction (better than 20 arc seconds) and the distance from the tracking station to the debris objects, with the former more important than the latter because of the narrow laser beam. When the two-line element (TLE) is used to provide the orbit predictions, the resultant pointing errors are usually on the order of tens to hundreds of arc seconds. In practice, therefore, angular observations of debris objects are first collected using an optical tracking sensor, and then used to guide the laser beam pointing to the objects. The manual guidance may cause interruptions of the laser tracking, and consequently the loss of valuable laser tracking data. This paper presents a real-time orbit determination (OD) and prediction method to realize smooth and efficient debris laser tracking. The method uses TLE-computed positions and angles over a short arc of less than 2 min as observations in an OD process where simplified force models are considered. After the OD convergence, the OP is performed from the last observation epoch to the end of the tracking pass. Simulation and real tracking data processing results show that the pointing prediction errors are usually less than 10″ and the distance errors less than 100 m; the prediction accuracy is therefore sufficient for blind laser tracking. PMID:27347958

  10. Tsunami Simulator Integrating the Smoothed-Particle Hydrodynamics Method and the Nonlinear Shallow Water Wave Model with High Performance Computer

    NASA Astrophysics Data System (ADS)

    Suwa, T.; Imamura, F.; Sugawara, D.; Ogasawara, K.; Watanabe, M.; Hirahara, T.

    2014-12-01

    A tsunami simulator integrating a 3-D fluid simulation technology based on the smoothed-particle hydrodynamics (SPH) method, which runs on large-scale parallel computers, has been developed together with a 2-D tsunami propagation simulation technique using a nonlinear shallow water wave model. We use the 2-D simulation to calculate tsunami propagation over a scale of about 1000 km from the epicenter to near shore. The 3-D SPH method can be used to calculate the water surface and the hydraulic force that a tsunami can exert on a building, and to simulate flooding patterns in urban areas at scales of up to a kilometer. With our simulator we can also see three-dimensional fluid features such as the complex changes a tsunami undergoes as it interacts with coastal topography or structures. It is hoped that, for example, the ability of structures to dissipate the energy of waves passing over them can be elucidated. The authors utilize the simulator in the third of five fields of the Strategic Programs for Innovative Research, "Advanced Prediction Researches for Natural Disaster Prevention and Reduction," under the theme "Improvement of the tsunami forecasting system on the HPCI computer." The results of tsunami simulations using the K computer will be reported. We plan to apply the simulator to real disaster-prevention problems in the future.

  12. An experimental investigation of kernels on graphs for collaborative recommendation and semisupervised classification.

    PubMed

    Fouss, François; Francoisse, Kevin; Yen, Luh; Pirotte, Alain; Saerens, Marco

    2012-07-01

    This paper presents a survey as well as an empirical comparison and evaluation of seven kernels on graphs and two related similarity matrices, that we globally refer to as "kernels on graphs" for simplicity. They are the exponential diffusion kernel, the Laplacian exponential diffusion kernel, the von Neumann diffusion kernel, the regularized Laplacian kernel, the commute-time (or resistance-distance) kernel, the random-walk-with-restart similarity matrix, and finally, a kernel first introduced in this paper (the regularized commute-time kernel) and two kernels defined in some of our previous work and further investigated in this paper (the Markov diffusion kernel and the relative-entropy diffusion matrix). The kernel-on-graphs approach is simple and intuitive. It is illustrated by applying the nine kernels to a collaborative-recommendation task, viewed as a link prediction problem, and to a semisupervised classification task, both on several databases. The methods compute proximity measures between nodes that help study the structure of the graph. Our comparisons suggest that the regularized commute-time and the Markov diffusion kernels perform best on the investigated tasks, closely followed by the regularized Laplacian kernel. PMID:22497802
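    Three of the surveyed kernels can be computed directly from the adjacency matrix of an undirected graph; a minimal sketch with illustrative parameter values:

```python
import numpy as np
from scipy.linalg import expm

def graph_kernels(A, alpha=0.5, t=1.0):
    """Compute three of the surveyed kernels on graphs from the
    adjacency matrix A of an undirected graph:
      - exponential diffusion kernel       expm(t * A)
      - regularized Laplacian kernel       (I + alpha * L)^-1, L = D - A
      - regularized commute-time kernel    (D - alpha * A)^-1
    """
    D = np.diag(A.sum(axis=1))
    L = D - A
    n = A.shape[0]
    K_exp = expm(t * A)
    K_rl = np.linalg.inv(np.eye(n) + alpha * L)
    K_rct = np.linalg.inv(D - alpha * A)
    return K_exp, K_rl, K_rct
```

All three are symmetric proximity measures in which directly connected nodes score higher than nodes two hops apart, which is the property the link-prediction and classification tasks exploit.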

  13. Analog forecasting with dynamics-adapted kernels

    NASA Astrophysics Data System (ADS)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
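    The kernel-weighted ensemble of analogs with delay-coordinate features can be sketched for a scalar time series as follows; this is a simplified illustration (Gaussian similarity kernel, fixed bandwidth), not the authors' full method:

```python
import numpy as np

def kernel_analog_forecast(history, x0, lead=1, eps=0.1, delay=2):
    """Forecast `lead` steps ahead by weighting every admissible analog
    in `history` with a Gaussian kernel on the distance between its
    Takens delay-coordinate vector and the current delay vector `x0`,
    then averaging the analogs' successors with those weights."""
    H = np.asarray(history, dtype=float)
    idx = range(delay - 1, len(H) - lead)
    # squared distance of each delay vector (ending at sample i) to x0
    d2 = np.array([np.sum((H[i - delay + 1:i + 1] - x0) ** 2) for i in idx])
    w = np.exp(-d2 / eps)          # similarity-kernel weights
    w /= w.sum()
    successors = np.array([H[i + lead] for i in idx])
    return float(np.sum(w * successors))
```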

  14. An Investigation of Methods for Improving Estimation of Test Score Distributions.

    ERIC Educational Resources Information Center

    Hanson, Bradley A.

    Three methods of estimating test score distributions that may improve on using the observed frequencies (OBFs) as estimates of a population test score distribution are considered: the kernel method (KM); the polynomial method (PM); and the four-parameter beta binomial method (FPBBM). The assumption each method makes about the smoothness of the…

  15. Learning Spatially-Smooth Mappings in Non-Rigid Structure from Motion.

    PubMed

    Hamsici, Onur C; Gotardo, Paulo F U; Martinez, Aleix M

    2012-01-01

    Non-rigid structure from motion (NRSFM) is a classical underconstrained problem in computer vision. A common approach to make NRSFM more tractable is to constrain 3D shape deformation to be smooth over time. This constraint has been used to compress the deformation model and reduce the number of unknowns that are estimated. However, temporal smoothness cannot be enforced when the data lacks temporal ordering and its benefits are less evident when objects undergo abrupt deformations. This paper proposes a new NRSFM method that addresses these problems by considering deformations as spatial variations in shape space and then enforcing spatial, rather than temporal, smoothness. This is done by modeling each 3D shape coefficient as a function of its input 2D shape. This mapping is learned in the feature space of a rotation invariant kernel, where spatial smoothness is intrinsically defined by the mapping function. As a result, our model represents shape variations compactly using custom-built coefficient bases learned from the input data, rather than a pre-specified set such as the Discrete Cosine Transform. The resulting kernel-based mapping is a by-product of the NRSFM solution and leads to another fundamental advantage of our approach: for a newly observed 2D shape, its 3D shape is recovered by simply evaluating the learned function. PMID:23946937

  16. The Relationship Between Single Wheat Kernel Particle Size Distribution and the Perten SKCS 4100 Hardness Index

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Perten Single Kernel Characterization System (SKCS) is the current reference method to determine single wheat kernel texture. However, the SKCS calibration method is based on bulk samples, and there is no method to determine the measurement error on single kernel hardness. The objective of thi...

  17. Robotic Intelligence Kernel: Architecture

    Energy Science and Technology Software Center (ESTSC)

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  18. Kernel structures for Clouds

    NASA Technical Reports Server (NTRS)

    Spafford, Eugene H.; Mckendry, Martin S.

    1986-01-01

An overview of the internal structure of the Clouds kernel is presented, together with an indication of how these structures will interact in the prototype Clouds implementation. Many specific details have yet to be determined and await experimentation with an actual working system.

  19. Robotic Intelligence Kernel: Visualization

    Energy Science and Technology Software Center (ESTSC)

    2009-09-16

    The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.

  20. Isolation of bacterial endophytes from germinated maize kernels.

    PubMed

    Rijavec, Tomaz; Lapanje, Ales; Dermastia, Marina; Rupnik, Maja

    2007-06-01

The germination of surface-sterilized maize kernels under aseptic conditions proved to be a suitable method for the isolation of kernel-associated bacterial endophytes. Bacterial strains identified by partial 16S rRNA gene sequencing as Pantoea sp., Microbacterium sp., Frigoribacterium sp., Bacillus sp., Paenibacillus sp., and Sphingomonas sp. were isolated from kernels of 4 different maize cultivars. The genus Pantoea was associated with a specific maize cultivar. The kernels of this cultivar were often overgrown with the fungus Lecanicillium aphanocladii; however, those exhibiting Pantoea growth were never colonized with it. Furthermore, the isolated bacterial strain inhibited fungal growth in vitro. PMID:17668041

  1. A Kernel-based Account of Bibliometric Measures

    NASA Astrophysics Data System (ADS)

    Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji

    The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on the three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of `relativity,' or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.

  2. The method of normal forms for singularly perturbed systems of Fredholm integro-differential equations with rapidly varying kernels

    SciTech Connect

    Bobodzhanov, A A; Safonov, V F

    2013-07-31

The paper deals with extending the Lomov regularization method to classes of singularly perturbed Fredholm-type integro-differential systems which have not so far been studied, namely those in which the limiting operator is discretely noninvertible. Such systems are commonly known as problems with unstable spectrum. Separating out the essential singularities in the solutions to these problems presents great difficulties, the principal one being to give an adequate description of the singularities induced by 'instability points' of the spectrum. A methodology for separating singularities by using normal forms is developed; it is applied to the above type of system and substantiated for these systems. Bibliography: 10 titles.

  3. SiteSeek: Post-translational modification analysis using adaptive locality-effective kernel methods and new profiles

    PubMed Central

    Yoo, Paul D; Ho, Yung Shwen; Zhou, Bing Bing; Zomaya, Albert Y

    2008-01-01

Background Post-translational modifications have a substantial influence on the structure and functions of proteins. Post-translational phosphorylation is one of the most common modifications occurring in intracellular proteins. Accurate prediction of protein phosphorylation sites is of great importance for the understanding of diverse cellular signalling processes in both humans and animals. In this study, we propose a new machine-learning-based protein phosphorylation site predictor, SiteSeek. SiteSeek is trained using a novel compact evolutionary and hydrophobicity profile to detect possible protein phosphorylation sites in a target sequence. The newly proposed method proves to be more accurate and exhibits much more stable predictive performance than currently existing phosphorylation site predictors. Results The performance of the proposed model was compared with nine different existing machine learning models and four widely known phosphorylation site predictors on the newly proposed PS-Benchmark_1 dataset to contrast their accuracy, sensitivity, specificity and correlation coefficient. SiteSeek showed better predictive performance, with 86.6% accuracy, 83.8% sensitivity, 92.5% specificity and a 0.77 correlation coefficient, on the four main kinase families (CDK, CK2, PKA, and PKC). Conclusion The methods used in SiteSeek were shown to be useful for the identification of protein phosphorylation sites, as SiteSeek performed much better than widely known predictors on the newly built PS-Benchmark_1 dataset. PMID:18541042

  4. Bivariate discrete beta Kernel graduation of mortality data.

    PubMed

    Mazza, Angelo; Punzo, Antonio

    2015-07-01

Various parametric and nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, such as kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidth selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make the simulations realistic, a bivariate dataset, based on probabilities of dying recorded for US males, is used. The simulations confirm the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors. PMID:25084764
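A heavily simplified univariate sketch of discrete beta kernel graduation: each age is graduated with weights drawn from a beta density whose mode tracks the evaluation age, so the kernel adapts its shape near the boundaries of the age range. The (a, b) parameterization, the bandwidth h, and the toy data below are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np
from math import lgamma, exp

def beta_pdf(u, a, b):
    """Beta(a, b) density evaluated at u in (0, 1)."""
    log_norm = lgamma(a + b) - lgamma(a) - lgamma(b)
    return exp(log_norm + (a - 1) * np.log(u) + (b - 1) * np.log(1 - u))

def discrete_beta_smooth(raw, h=0.1):
    """Graduate raw rates over ages 0..m with discrete beta kernel weights.

    The (a, b) choice below (kernel mode at the evaluation age) is an
    illustrative assumption."""
    m = len(raw) - 1
    ages = np.arange(m + 1)
    u = (ages + 0.5) / (m + 1)           # map ages into (0, 1)
    smoothed = np.empty_like(raw, dtype=float)
    for x in ages:
        a = x / (m * h) + 1.0
        b = (m - x) / (m * h) + 1.0
        w = np.array([beta_pdf(ui, a, b) for ui in u])
        w /= w.sum()                      # discrete kernel: weights sum to 1
        smoothed[x] = np.dot(w, raw)
    return smoothed

# Toy mortality-like rates with multiplicative noise.
rng = np.random.default_rng(0)
true = np.exp(np.linspace(-6, -1, 21))
raw = true * rng.lognormal(0.0, 0.2, size=21)
grad = discrete_beta_smooth(raw)
```

Because the weights at each age are nonnegative and sum to one, the graduated rates stay within the range of the raw rates while smoothing out the sampling noise.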

  5. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation. PMID:19897106
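For a Gaussian kernel density estimate, the quadratic (order-2 Rényi) entropy needs no numerical integration: the integral of the squared density reduces to a double sum of Gaussians with doubled variance. A minimal univariate sketch with an arbitrary fixed bandwidth (the paper's contribution, optimal bandwidth and kernel selection, is not reproduced here):

```python
import numpy as np

def quadratic_entropy(x, h):
    """Renyi quadratic entropy H2 = -log(integral of p_hat^2) for a
    Gaussian KDE with bandwidth h.

    For a Gaussian kernel, the integral has the closed form
    (1/n^2) * sum_ij N(x_i - x_j; 0, 2 h^2)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x[:, None] - x[None, :]
    var = 2.0 * h * h
    phi = np.exp(-d ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return -np.log(phi.sum() / n ** 2)

rng = np.random.default_rng(1)
h = 0.3
H_tight = quadratic_entropy(rng.normal(0.0, 0.5, 500), h)  # concentrated data
H_wide = quadratic_entropy(rng.normal(0.0, 2.0, 500), h)   # spread-out data
```

As expected, the more dispersed sample yields the higher entropy; the double sum inside the logarithm is an estimate of the integral of the squared density, which is essentially the Friedman-Tukey index mentioned above.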

  6. A method for three-dimensional quantification of vascular smooth muscle orientation: application in viable murine carotid arteries.

    PubMed

    Spronck, Bart; Megens, Remco T A; Reesink, Koen D; Delhaas, Tammo

    2016-04-01

When studying in vivo arterial mechanical behaviour using constitutive models, smooth muscle cells (SMCs) should be considered, since they play an important role in regulating arterial vessel tone. Current constitutive models assume a strictly circumferential SMC orientation, without any dispersion. We hypothesised that SMC orientation would show considerable dispersion in three dimensions and that helical dispersion would be greater than transversal dispersion. To test these hypotheses, we developed a method to quantify the 3D orientation of arterial SMCs. Fluorescently labelled SMC nuclei of left and right carotid arteries of ten mice were imaged using two-photon laser scanning microscopy. Arteries were imaged at a range of luminal pressures. 3D image processing was used to identify individual nuclei and their orientations. SMCs were found to be arranged in two distinct layers. Orientations were quantified by fitting a Bingham distribution to the observed orientations. As hypothesised, orientation dispersion was much larger helically than transversally. With increasing luminal pressure, transversal dispersion decreased significantly, whereas helical dispersion remained unaltered. Additionally, SMC orientations showed a statistically significant ([Formula: see text]) mean right-handed helix angle in both left and right arteries and in both layers, which is a relevant finding from a developmental biology perspective. In conclusion, vascular SMC orientation (1) can be quantified in 3D; (2) shows considerable dispersion, predominantly in the helical direction; and (3) has a distinct right-handed helical component in both left and right carotid arteries. The obtained quantitative distribution data are instrumental for constitutive modelling of the artery wall and illustrate the merit of our method. PMID:26174758

  7. Single-kernel NIR analysis for evaluating wheat samples for fusarium head blight resistance

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A method to estimate bulk deoxynivalenol (DON) content of wheat grain samples using single kernel DON levels estimated by a single kernel near infrared (SKNIR) system combined with single kernel weights is described. This method estimated bulk DON levels in 90% of 160 grain samples within 6.7 ppm DO...

  8. High speed sorting of Fusarium-damaged wheat kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Recent studies have found that resistance to Fusarium fungal infection can be inherited in wheat from one generation to another. However, there is not yet available a cost effective method to separate Fusarium-damaged wheat kernels from undamaged kernels so that wheat breeders can take advantage of...

  9. Covariant Perturbation Expansion of Off-Diagonal Heat Kernel

    NASA Astrophysics Data System (ADS)

    Gou, Yu-Zi; Li, Wen-Du; Zhang, Ping; Dai, Wu-Sheng

    2016-07-01

    Covariant perturbation expansion is an important method in quantum field theory. In this paper an expansion up to arbitrary order for off-diagonal heat kernels in flat space based on the covariant perturbation expansion is given. In literature, only diagonal heat kernels are calculated based on the covariant perturbation expansion.

  10. Kernel Continuum Regression.

    PubMed

    Lee, Myung Hee; Liu, Yufeng

    2013-12-01

    The continuum regression technique provides an appealing regression framework connecting ordinary least squares, partial least squares and principal component regression in one family. It offers some insight on the underlying regression model for a given application. Moreover, it helps to provide deep understanding of various regression techniques. Despite the useful framework, however, the current development on continuum regression is only for linear regression. In many applications, nonlinear regression is necessary. The extension of continuum regression from linear models to nonlinear models using kernel learning is considered. The proposed kernel continuum regression technique is quite general and can handle very flexible regression model estimation. An efficient algorithm is developed for fast implementation. Numerical examples have demonstrated the usefulness of the proposed technique. PMID:24058224

  11. Discrimination of Maize Haploid Seeds from Hybrid Seeds Using Vis Spectroscopy and Support Vector Machine Method.

    PubMed

    Liu, Jin; Guo, Ting-ting; Li, Hao-chuan; Jia, Shi-qiang; Yan, Yan-lu; An, Dong; Zhang, Yao; Chen, Shao-jiang

    2015-11-01

Doubled haploid (DH) lines are routinely applied in the hybrid maize breeding programs of many institutes and companies for their advantages of complete homozygosity and short breeding cycle length. A key issue in this approach is an efficient screening system to identify haploid kernels among the hybrid kernels crossed with the inducer. At present, haploid kernel selection is carried out manually using the "red-crown" kernel trait (the haploid kernel has a non-pigmented embryo and pigmented endosperm) controlled by the R1-nj gene. Manual selection is time-consuming and unreliable. Furthermore, the color of the kernel embryo is concealed by the pericarp. Here, we establish a novel approach for identifying maize haploid kernels based on visible (Vis) spectroscopy and support vector machine (SVM) pattern recognition technology. The diffuse transmittance spectra of individual kernels (141 haploid kernels and 141 hybrid kernels from 9 genotypes) were collected using a portable UV-Vis spectrometer and integrating sphere. The raw spectral data were preprocessed using smoothing and vector normalization methods. The desired feature wavelengths were selected based on the results of the Kolmogorov-Smirnov test. The wavelengths with p values above 0.05 were eliminated because the distributions of absorbance data at these wavelengths show no significant difference between haploid and hybrid kernels. Principal component analysis was then performed to reduce the number of variables. The SVM model was evaluated by 9-fold cross-validation. In each round, samples of one genotype were used as the testing set, while those of the other genotypes were used as the training set. The mean rate of correct discrimination was 92.06%. This result demonstrates the feasibility of using Vis spectroscopy to identify haploid maize kernels. The method would help develop a rapid and accurate automated screening system for haploid kernels. PMID:26978947
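The two preprocessing steps named in the abstract, smoothing and vector normalization, can be sketched as follows; the moving-average filter and its window width are assumptions, since the abstract does not specify which smoothing filter was used:

```python
import numpy as np

def smooth(spectrum, window=5):
    """Moving-average smoothing of one spectrum (a simple stand-in for the
    unspecified smoothing filter)."""
    kernel = np.ones(window) / window
    return np.convolve(spectrum, kernel, mode="same")

def vector_normalize(spectrum):
    """Scale the spectrum to unit Euclidean norm."""
    return spectrum / np.linalg.norm(spectrum)

# Toy "kernel spectrum": a noisy absorption peak over 100 wavelength bins.
rng = np.random.default_rng(2)
wl = np.linspace(400, 800, 100)
raw = np.exp(-((wl - 600) / 50) ** 2) + rng.normal(0, 0.05, wl.size)
pre = vector_normalize(smooth(raw))
```

After preprocessing, each spectrum has unit Euclidean norm and reduced high-frequency noise; feature selection and PCA would then operate on these vectors.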

  12. Semiparametric regression of multidimensional genetic pathway data: least-squares kernel machines and linear mixed models.

    PubMed

    Liu, Dawei; Lin, Xihong; Ghosh, Debashis

    2007-12-01

    We consider a semiparametric regression model that relates a normal outcome to covariates and a genetic pathway, where the covariate effects are modeled parametrically and the pathway effect of multiple gene expressions is modeled parametrically or nonparametrically using least-squares kernel machines (LSKMs). This unified framework allows a flexible function for the joint effect of multiple genes within a pathway by specifying a kernel function and allows for the possibility that each gene expression effect might be nonlinear and the genes within the same pathway are likely to interact with each other in a complicated way. This semiparametric model also makes it possible to test for the overall genetic pathway effect. We show that the LSKM semiparametric regression can be formulated using a linear mixed model. Estimation and inference hence can proceed within the linear mixed model framework using standard mixed model software. Both the regression coefficients of the covariate effects and the LSKM estimator of the genetic pathway effect can be obtained using the best linear unbiased predictor in the corresponding linear mixed model formulation. The smoothing parameter and the kernel parameter can be estimated as variance components using restricted maximum likelihood. A score test is developed to test for the genetic pathway effect. Model/variable selection within the LSKM framework is discussed. The methods are illustrated using a prostate cancer data set and evaluated using simulations. PMID:18078480
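Stripped of the covariates and the mixed-model estimation machinery, the LSKM estimate of the pathway effect takes the kernel ridge form h_hat = K(K + λI)⁻¹y. A sketch with a Gaussian kernel and arbitrary tuning values (in the paper, the smoothing and kernel parameters would instead be estimated as variance components by REML):

```python
import numpy as np

def gaussian_kernel(X, rho=1.0):
    """Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2 / rho)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / rho)

def lskm_fit(X, y, lam=0.1, rho=1.0):
    """Least-squares kernel machine estimate of a nonparametric effect:
    h_hat = K (K + lam * I)^-1 y  (ridge form; covariate effects omitted
    for simplicity)."""
    K = gaussian_kernel(X, rho)
    n = len(y)
    alpha = np.linalg.solve(K + lam * np.eye(n), y)
    return K @ alpha

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 5))            # 80 subjects, 5 "gene expressions"
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
h_hat = lskm_fit(X, y)
```

The ridge form makes the mixed-model connection concrete: the same h_hat arises as the best linear unbiased predictor when the pathway effect is treated as a random effect with covariance proportional to K.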

  13. Feasibility of near infrared spectroscopy for analyzing corn kernel damage and viability of soybean and corn kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The current US corn grading system accounts for the portion of damaged kernels, which is measured by time-consuming and inaccurate visual inspection. Near infrared spectroscopy (NIRS), a non-destructive and fast analytical method, was tested as a tool for discriminating corn kernels with heat and f...

  14. Smooth Programs and Languages.

    ERIC Educational Resources Information Center

    Foulk, Clinton R.; Juelich, Otto C.

A smooth program is defined to be one which is "go to"-free in the sense that it can be represented by a flowchart consisting only of concatenation, alternation, and iteration elements. Three methods of eliminating the "go to" statement from a program have been proposed: (1) the introduction of additional Boolean variables or the equivalent…

  15. Gaussian kernel width optimization for sparse Bayesian learning.

    PubMed

    Mohsenzadeh, Yalda; Sheikhzadeh, Hamid

    2015-04-01

    Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on the commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters. PMID:25794377

  16. KERNEL PHASE IN FIZEAU INTERFEROMETRY

    SciTech Connect

    Martinache, Frantz

    2010-11-20

The detection of high contrast companions at small angular separation appears feasible in conventional direct images using the self-calibration properties of interferometric observable quantities. The friendly notion of closure phase, which is key to the recent observational successes of non-redundant aperture masking interferometry used with adaptive optics, appears to be one example of a wide family of observable quantities that are not contaminated by phase noise. In the high-Strehl regime, soon to be available thanks to the coming generation of extreme adaptive optics systems on ground-based telescopes, and already available from space, closure-phase-like information can be extracted from any direct image, even one taken with a redundant aperture. These new phase-noise-immune observable quantities, called kernel phases, are determined a priori from knowledge of the geometry of the pupil only. Re-analysis of archive data acquired with the Hubble Space Telescope NICMOS instrument using this new kernel-phase algorithm demonstrates the power of the method, as it clearly detects and locates with milliarcsecond precision a known companion to a star at angular separation less than the diffraction limit.

  17. Application of the matrix exponential kernel

    NASA Technical Reports Server (NTRS)

    Rohach, A. F.

    1972-01-01

    A point matrix kernel for radiation transport, developed by the transmission matrix method, has been used to develop buildup factors and energy spectra through slab layers of different materials for a point isotropic source. Combinations of lead-water slabs were chosen for examples because of the extreme differences in shielding properties of these two materials.

  18. A TWO-DIMENSIONAL METHOD OF MANUFACTURED SOLUTIONS BENCHMARK SUITE BASED ON VARIATIONS OF LARSEN'S BENCHMARK WITH ESCALATING ORDER OF SMOOTHNESS OF THE EXACT SOLUTION

    SciTech Connect

    Sebastian Schunert; Yousry Y. Azmy

    2011-05-01

The quantification of the discretization error associated with the spatial discretization of the Discrete Ordinates (DO) equations in multidimensional Cartesian geometries is the central problem in error estimation of spatial discretization schemes for transport theory, as well as in computer code verification. Traditionally, fine-mesh solutions are employed as references, because analytical solutions exist only in the absence of scattering. This approach, however, is inadequate when the discretization error associated with the reference solution is not small compared to the discretization error associated with the mesh under scrutiny. Typically this situation occurs if the mesh of interest is only a couple of refinement levels away from the reference solution, or if the order of accuracy of the numerical method (and hence of the reference as well) is lower than expected. In this work we present a Method of Manufactured Solutions (MMS) benchmark suite with variable order of smoothness of the underlying exact solution for two-dimensional Cartesian geometries, which provides analytical solutions averaged over arbitrary orthogonal meshes for scattering and non-scattering media. It should be emphasized that the developed MMS benchmark suite, first, eliminates the aforementioned limitation of fine-mesh reference solutions, since it secures knowledge of the underlying true solution, and, second, allows for an arbitrary order of smoothness of the underlying exact solution. The latter is of importance because, even for smooth parameters and boundary conditions, the DO equations can feature exact solutions with limited smoothness. Moreover, the degree of smoothness is crucial for both the order of accuracy and the magnitude of the discretization error for any spatial discretization scheme.

  19. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
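The two discretization schemes compared above can be sketched for a 1-D Gaussian kernel: the cell-center method samples the density at cell midpoints, while the cell-integration method integrates it over each unit cell via the normal CDF. For σ small relative to the cell size the two disagree noticeably, consistent with the errors reported for small kernels:

```python
import numpy as np
from math import erf, sqrt

def cell_center_kernel(sigma, radius):
    """1-D Gaussian dispersal kernel sampled at cell centers, renormalized."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def cell_integrated_kernel(sigma, radius):
    """1-D Gaussian kernel integrated over each unit cell via the normal CDF."""
    def Phi(z):
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))
    edges = np.arange(-radius - 0.5, radius + 1.5)
    cdf = np.array([Phi(e / sigma) for e in edges])
    k = np.diff(cdf)
    return k / k.sum()

# With sigma well below the cell size, the two discretizations diverge:
# sampling at the midpoint overweights the central cell relative to the
# true mass falling inside it.
kc = cell_center_kernel(0.4, 3)
ki = cell_integrated_kernel(0.4, 3)
```

Both kernels sum to one by construction; the disagreement shows up in how mass is split between the central cell and its neighbors, which is exactly the error that compounds under repeated convolution.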

  20. Numerical solution of the nonlinear Schrödinger equation using smoothed-particle hydrodynamics.

    PubMed

    Mocz, Philip; Succi, Sauro

    2015-05-01

    We formulate a smoothed-particle hydrodynamics numerical method, traditionally used for the Euler equations for fluid dynamics in the context of astrophysical simulations, to solve the nonlinear Schrödinger equation in the Madelung formulation. The probability density of the wave function is discretized into moving particles, whose properties are smoothed by a kernel function. The traditional fluid pressure is replaced by a quantum pressure tensor, for which a robust discretization is found. We demonstrate our numerical method on a variety of numerical test problems involving the simple harmonic oscillator, soliton-soliton collision, Bose-Einstein condensates, collapsing singularities, and dark matter halos governed by the Gross-Pitaevskii-Poisson equation. Our method is conservative, applicable to unbounded domains, and is automatically adaptive in its resolution, making it well suited to study problems with collapsing solutions. PMID:26066276
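The kernel-smoothing step at the heart of SPH, summing particle masses weighted by a compact kernel to estimate a density, can be sketched in 1-D with the standard cubic spline kernel (the paper's scheme is three-dimensional and adds the quantum pressure tensor, which is not reproduced here):

```python
import numpy as np

def cubic_spline_W(r, h):
    """Standard 1-D cubic spline SPH kernel (normalization 2 / (3 h))."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

def sph_density(x, m, h):
    """SPH density estimate rho_i = sum_j m_j W(x_i - x_j, h)."""
    dx = x[:, None] - x[None, :]
    return (m[None, :] * cubic_spline_W(dx, h)).sum(axis=1)

# Uniformly spaced particles with masses chosen so the true density is 1.
x = np.linspace(0.0, 10.0, 101)   # spacing 0.1
m = np.full(101, 0.1)
rho = sph_density(x, m, h=0.2)
```

With h equal to twice the particle spacing, the cubic spline reproduces the uniform interior density essentially exactly; only particles within 2h of the domain edges show the usual boundary deficit.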

  2. Study of the Impact of Tissue Density Heterogeneities on 3-Dimensional Abdominal Dosimetry: Comparison Between Dose Kernel Convolution and Direct Monte Carlo Methods

    PubMed Central

    Dieudonné, Arnaud; Hobbs, Robert F.; Lebtahi, Rachida; Maurel, Fabien; Baechler, Sébastien; Wahl, Richard L.; Boubaker, Ariane; Le Guludec, Dominique; Sgouros, Georges; Gardin, Isabelle

    2014-01-01

Dose kernel convolution (DK) methods have been proposed to speed up absorbed dose calculations in molecular radionuclide therapy. Our aim was to evaluate the impact of tissue density heterogeneities (TDH) on dosimetry when using a DK method and to propose a simple density-correction method. Methods This study has been conducted on 3 clinical cases: case 1, non-Hodgkin lymphoma treated with 131I-tositumomab; case 2, a neuroendocrine tumor treatment simulated with 177Lu-peptides; and case 3, hepatocellular carcinoma treated with 90Y-microspheres. Absorbed dose calculations were performed using a direct Monte Carlo approach accounting for TDH (3D-RD), and a DK approach (VoxelDose, or VD). For each individual voxel, the VD absorbed dose, DVD, calculated assuming uniform density, was corrected for density, giving DVDd. The average 3D-RD absorbed dose values, D3DRD, were compared with DVD and DVDd, using the relative difference ΔVD/3DRD. At the voxel level, density-binned ΔVD/3DRD and ΔVDd/3DRD were plotted against ρ and fitted with a linear regression. Results The DVD calculations showed a good agreement with D3DRD. ΔVD/3DRD was less than 3.5%, except for the tumor of case 1 (5.9%) and the renal cortex of case 2 (5.6%). At the voxel level, the ΔVD/3DRD range was 0%–14% for cases 1 and 2, and −3% to 7% for case 3. All 3 cases showed a linear relationship between voxel bin-averaged ΔVD/3DRD and density, ρ: case 1 (Δ = −0.56ρ + 0.62, R2 = 0.93), case 2 (Δ = −0.91ρ + 0.96, R2 = 0.99), and case 3 (Δ = −0.69ρ + 0.72, R2 = 0.91). The density correction improved the agreement of the DK method with the Monte Carlo approach (ΔVDd/3DRD < 1.1%), but to a lesser extent for the tumor of case 1 (3.1%). At the voxel level, the ΔVDd/3DRD range decreased for the 3 clinical cases (case 1, −1% to 4%; case 2, −0.5% to 1.5%, and −1.5% to 2%). No more linear regression existed for cases 2 and 3, contrary to case 1 (Δ = 0.41ρ − 0.38, R2 = 0.88) although
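A plausible form of the simple density correction, assumed here since the abstract does not give the formula, scales each voxel's uniform-density kernel dose by ρ_ref/ρ, dose being energy per unit mass:

```python
import numpy as np

def density_correct(dose_vd, rho, rho_ref=1.0):
    """Voxel-wise density correction for dose-kernel convolution (assumed
    form, not necessarily the paper's): scale dose by rho_ref / rho, since
    dose = energy / mass and a denser voxel spreads the same deposited
    energy over more mass."""
    return dose_vd * (rho_ref / rho)

dose_vd = np.array([1.0, 1.0, 1.0])   # uniform-density kernel dose (Gy)
rho = np.array([0.3, 1.0, 1.9])       # lung-, water-, and bone-like densities
dose_vdd = density_correct(dose_vd, rho)
```

Under this correction the water-density voxel is unchanged, while low-density (lung-like) voxels receive a higher corrected dose than high-density (bone-like) ones.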

  3. Estimating the Bias of Local Polynomial Approximations Using the Peano Kernel

    SciTech Connect

    Blair, J., and Machorro, E.

    2012-03-22

    These presentation visuals define local polynomial approximations, give formulas for bias and random components of the error, and express bias error in terms of the Peano kernel. They further derive constants that give figures of merit, and show the figures of merit for 3 common weighting functions. The Peano kernel theorem yields estimates for the bias error for local-polynomial-approximation smoothing that are superior in several ways to the error estimates in the current literature.

  4. Prediction of kernel density of corn using single-kernel near infrared spectroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Corn hardness is an important property for dry and wet-millers, food processors and corn breeders developing hybrids for specific markets. Of the several methods used to measure hardness, kernel density measurements are one of the more repeatable methods to quantify hardness. Near infrared spec...

  5. Diffusion tensor smoothing through weighted Karcher means.

    PubMed

    Carmichael, Owen; Chen, Jun; Paul, Debashis; Peng, Jie

    2013-01-01

    Diffusion tensor magnetic resonance imaging (DTI) quantifies the spatial distribution of water diffusion at each voxel on a regular grid of locations in a biological specimen by diffusion tensors, 3 × 3 positive definite matrices. Removal of noise from DTI is an important problem due to the high scientific relevance of DTI and the relatively low signal-to-noise ratio it provides. Leading approaches to this problem amount to estimation of weighted Karcher means of diffusion tensors within spatial neighborhoods, under various metrics imposed on the space of tensors. However, it is unclear how the behavior of these estimators varies with the magnitude of DTI sensor noise (the noise resulting from the thermal effects of MRI scanning) as well as the geometric structure of the underlying diffusion tensor neighborhoods. In this paper, we combine theoretical analysis, empirical analysis of simulated DTI data, and empirical analysis of real DTI scans to compare the noise removal performance of three kernel-based DTI smoothers that are based on Euclidean, log-Euclidean, and affine-invariant metrics. The results suggest, contrary to conventional wisdom, that imposing a simplistic Euclidean metric may in fact provide comparable or superior noise removal, especially in relatively unstructured regions and/or in the presence of moderate to high levels of sensor noise. In contrast, log-Euclidean and affine-invariant metrics may lead to better noise removal in highly structured anatomical regions, especially when the sensor noise is of low magnitude. These findings emphasize the importance of considering the interplay of sensor noise magnitude and tensor field geometric structure when assessing diffusion tensor smoothing options. They also point to the necessity for continued development of smoothing methods that perform well across a large range of scenarios. PMID:25419264
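
The log-Euclidean variant of the weighted Karcher mean discussed above has a closed form: take matrix logarithms of the tensors, average them with the given weights, and exponentiate back. A minimal numpy sketch (the tensors and weights are illustrative, not the paper's data):

```python
import numpy as np

def sym_logm(S):
    """Matrix logarithm of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def sym_expm(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def log_euclidean_mean(tensors, weights):
    """Weighted Karcher mean under the log-Euclidean metric:
    the exponential of the weighted average of matrix logs."""
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()
    L = sum(w * sym_logm(T) for w, T in zip(weights, tensors))
    return sym_expm(L)

# Two diagonal diffusion tensors, equal weights
A = np.diag([1.0, 1.0, 4.0])
B = np.diag([1.0, 1.0, 1.0])
M = log_euclidean_mean([A, B], [0.5, 0.5])
# For commuting tensors this reduces to the element-wise geometric mean: diag(1, 1, 2)
```

For commuting (here diagonal) tensors the log-Euclidean mean coincides with the element-wise geometric mean, which is a quick sanity check on the implementation.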

  6. [Selection of Characteristic Wavelengths Using SPA and Qualitative Discrimination of Mildew Degree of Corn Kernels Based on SVM].

    PubMed

    Yuan, Ying; Wang, Wei; Chu, Xuan; Xi, Ming-jie

    2016-01-01

    The feasibility of Fourier transform near infrared (FT-NIR) spectroscopy with spectral range between 833 and 2 500 nm to detect the moldy corn kernels with different levels of mildew was verified in this paper. Firstly, to avoid the influence of noise, moving average smoothing was used for spectral data preprocessing after four common pretreatment methods were compared. Then to improve the prediction performance of the model, SPXY (sample set partitioning based on joint x-y distance) was selected and used for sample set partition. Furthermore, in order to reduce the dimensions of the original spectral data, successive projection algorithm (SPA) was adopted and ultimately 7 characteristic wavelengths were extracted; the characteristic wavelengths were 833, 927, 1 208, 1 337, 1 454, 1 861, 2 280 nm. The experimental results showed that when the spectrum data of the 7 characteristic wavelengths were taken as the input of SVM, the radial basic function (RBF) used as the kernel function, and kernel parameter C = 7 760 469, γ = 0.017 003, the classification accuracies of the established SVM model were 97.78% and 93.33% for the training and testing sets respectively. In addition, the independent validation set was selected in the same standard, and used to verify the model. At last, the classification accuracy of 91.11% for the independent validation set was achieved. The result indicated that it is feasible to identify and classify different degrees of mildew in corn kernels using SPA and SVM, and the characteristic wavelengths selected by SPA in this paper also lay a foundation for the online NIR detection of mildew corn kernels. PMID:27228772
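
As an illustration of the kernel at the heart of the SVM model above, here is a small numpy sketch of the Gaussian RBF kernel matrix K(x, y) = exp(−γ‖x − y‖²); the toy data and the γ value are invented for the example, not the paper's calibration:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix: K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2)."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))  # clip tiny negative round-off

# Toy spectra: 4 samples at 7 selected wavelengths (mirroring the SPA step)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 7))
K = rbf_kernel(X, X, gamma=0.017)
# A valid RBF Gram matrix is symmetric with a unit diagonal
```

An SVM solver then operates entirely on such Gram matrices, so the choice of γ directly controls how quickly similarity decays with spectral distance.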

  7. The vectorial radiative transfer equation problem in the small angle modification of the spherical harmonics method with the determination of the solution smooth part

    NASA Astrophysics Data System (ADS)

    Budak, V. P.; Korkin, S. V.

    2006-12-01

    The paper deals with the vectorial radiative transfer equation (VRTE) problem for a homogeneous, strongly anisotropically scattering slab illuminated by a plane unidirectional source of light with an arbitrary angle of irradiance and polarization state. The problem is a theoretical basis for polarized satellite remote sensing (POLDER, PARASOL and others). Decomposition of the VRTE boundary problem allows reduction to the case of a nonreflecting bottom, with its polarization properties included subsequently. We give a complete analysis of the smooth, non-small-angle part of the solution for the vectorial small angle modification of the spherical harmonics method (VMSH), built upon the smoothness of the spatial spectrum of the light-field distribution vector-function, which follows from the mathematical singularities of the top-boundary condition for the VRTE boundary problem and the anisotropy of many natural scattering media (clouds, ocean). The VMSH itself is described as well.

  8. Appropriate coating methods and other conditions for enzyme-linked immunosorbent assay of smooth, rough, and neutral lipopolysaccharides of Pseudomonas aeruginosa.

    PubMed

    Bantroch, S; Bühler, T; Lam, J S

    1994-01-01

    Smooth, rough, and neutral forms of lipopolysaccharide (LPS) from Pseudomonas aeruginosa were used to assess the appropriate conditions for effective enzyme-linked immunosorbent assay (ELISA) of LPS. Each of these forms of well-defined LPS was tested for the efficiency of antigen coating by various methods as well as to identify an appropriate type of microtiter plate to use. For smooth LPS, the standard carbonate-bicarbonate buffer method was as efficient as the other sensitivity-enhancing plate-coating methods compared. The rough LPS, which has an overall hydrophobic characteristic, was shown to adhere effectively, regardless of the coating method used, to only one type of microtiter plate, CovaLink. This type of plate has secondary amine groups attached on its polystyrene surface by carbon chain spacers, which likely favors hydrophobic interactions between the rough LPS and the well surfaces. Dehydration methods were effective for coating microtiter plates with the neutral LPS examined, which is composed predominantly of a D-rhamnan. For the two dehydration procedures, LPS suspended in water or the organic solvent chloroform-ethanol was added directly to the wells, and the solvent was allowed to dehydrate or evaporate overnight. Precoating of plates with either polymyxin or poly-L-lysine did not give any major improvement in coating with the various forms of LPS. The possibility of using proteinase K- and sodium dodecyl sulfate-treated LPS preparations for ELISAs was also investigated. Smooth LPS prepared by this method was as effective in ELISA as LPS prepared by the hot water-phenol method, while the rough and neutral LPSs prepared this way were not satisfactory for ELISA. PMID:7496923

  9. Arbitrary-resolution global sensitivity kernels

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Fournier, A.; Dahlen, F.

    2007-12-01

    Extracting observables out of any part of a seismogram (e.g. including diffracted phases such as Pdiff) necessitates the knowledge of 3-D time-space wavefields for the Green functions that form the backbone of Fréchet sensitivity kernels. While known for a while, this idea is still computationally intractable in 3-D, facing major simulation and storage issues when high-frequency wavefields are considered at the global scale. We recently developed a new "collapsed-dimension" spectral-element method that solves the 3-D system of elastodynamic equations in a 2-D space, based on exploring symmetry considerations of the seismic-wave radiation patterns. We will present the technical background on the computation of waveform kernels, various examples of time- and frequency-dependent sensitivity kernels and subsequently extracted time-window kernels (e.g. banana-doughnuts). Given the computationally lightweight 2-D nature, we will explore some crucial parameters such as excitation type, source time functions, frequency, azimuth, discontinuity locations, and phase type, i.e. an a priori view into how, when, and where seismograms carry 3-D Earth signature. A once-and-for-all database of 2-D waveforms for various source depths shall then serve as a complete set of global time-space sensitivity for a given spherically symmetric background model, thereby allowing for tomographic inversions with arbitrary frequencies, observables, and phases.

  10. A meshfree unification: reproducing kernel peridynamics

    NASA Astrophysics Data System (ADS)

    Bessa, M. A.; Foster, J. T.; Belytschko, T.; Liu, Wing Kam

    2014-06-01

    This paper is the first investigation establishing the link between the meshfree state-based peridynamics method and other meshfree methods, in particular with the moving least squares reproducing kernel particle method (RKPM). It is concluded that the discretization of state-based peridynamics leads directly to an approximation of the derivatives that can be obtained from RKPM. However, state-based peridynamics obtains the same result at a significantly lower computational cost which motivates its use in large-scale computations. In light of the findings of this study, an update to the method is proposed such that the limitations regarding application of boundary conditions and the use of non-uniform grids are corrected by using the reproducing kernel approximation.
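
The reproducing kernel correction that links RKPM to the discretized peridynamic approximation can be sketched in one dimension: a moment-matrix correction of the window function restores exact reproduction of constant and linear fields even on non-uniform grids, which is precisely the property plain SPH lacks. A minimal numpy illustration (the Gaussian window and node layout are arbitrary choices, not from the paper):

```python
import numpy as np

def rk_shape_functions(x, nodes, h):
    """First-order reproducing kernel (corrected SPH) shape functions at x.
    The 2x2 moment-matrix correction enforces exact reproduction of
    constants and linear fields on arbitrary node distributions."""
    d = nodes - x
    w = np.exp(-(d / h) ** 2)            # smooth window of support ~h
    P = np.vstack([np.ones_like(d), d])  # linear basis [1, x_j - x]
    M = (P * w) @ P.T                    # moment matrix sum_j w_j P_j P_j^T
    c = np.linalg.solve(M, np.array([1.0, 0.0]))
    return (c @ P) * w

nodes = np.array([0.0, 0.3, 0.7, 1.2, 2.0])  # deliberately non-uniform
psi = rk_shape_functions(0.9, nodes, h=0.6)
# By construction: sum(psi) == 1 and sum(psi * nodes) == 0.9
```

The two reproduction identities in the final comment are exactly the consistency conditions that fail for uncorrected SPH on non-uniform particle spacings.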

  11. A new parameter identification method to obtain change in smooth muscle contraction state due to mechanical skin irritation

    NASA Astrophysics Data System (ADS)

    Bauer, Daniela

    2005-03-01

    A light scratch with a needle induces histamine and neuropeptide release on the line of stroke and in the surrounding tissue. Histamine and neuropeptides are vasodilators. They create vasodilation by changing the contraction state of the vascular smooth muscles and hence vessel compliance. Smooth muscle contraction state is very difficult to measure. We propose an identification procedure that determines change in compliance. The procedure is based on numerical and experimental results. Blood flow is measured by Laser Doppler Velocimetry. Numerical data is obtained by a continuous model of hierarchically arranged porous media of the vascular network [1]. We show that compliance increases after the stroke in the entire tissue. Then, compliance decreases in the surrounding tissue, while it keeps increasing on the line of stroke. Hence, blood is transported from the surrounding tissue to the line of stroke. Thus, higher blood volume on the line of stroke is obtained. [1] Bauer, D., Grebe, R., Ehrlacher, A., 2004. A three layer continuous model of porous media to describe the first phase of skin irritation. J. Theoret. Biol., in press.

  12. Image filtering as an alternative to the application of a different reconstruction kernel in CT imaging: Feasibility study in lung cancer screening

    SciTech Connect

    Ohkubo, Masaki; Wada, Shinichi; Kayugawa, Akihiro; Matsumoto, Toru; Murao, Kohei

    2011-07-15

    Purpose: While the acquisition of projection data in a computed tomography (CT) scanner is generally carried out once, the projection data is often removed from the system, making further reconstruction with a different reconstruction filter impossible. The reconstruction kernel is one of the most important parameters. To have access to all the reconstructions, either prior reconstructions with multiple kernels must be performed or the projection data must be stored. Each of these requirements would increase the burden on data archiving. This study aimed to design an effective method to achieve similar image quality using an image filtering technique in the image space, instead of a reconstruction filter in the projection space for CT imaging. The authors evaluated the clinical feasibility of the proposed method in lung cancer screening. Methods: The proposed technique is essentially the same as common image filtering, which performs processing in the spatial-frequency domain with a filter function. However, the filter function was determined based on the quantitative analysis of the point spread functions (PSFs) measured in the system. The modulation transfer functions (MTFs) were derived from the PSFs, and the ratio of the MTFs was used as the filter function. Therefore, using an image reconstructed with a kernel, an image reconstructed with a different kernel was obtained by filtering, which used the ratio of the MTFs obtained for the two kernels. The performance of the method was evaluated by using routine clinical images obtained from CT screening for lung cancer in five subjects. Results: Filtered images for all combinations of three types of reconstruction kernels ("smooth," "standard," and "sharp" kernels) showed good agreement with original reconstructed images regarded as the gold standard. On the filtered images, abnormal shadows suspected as being lung cancers were identical to those on the reconstructed images. The standard deviations (SDs) for
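
Under the shift-invariant-PSF assumption the method described above reduces to multiplying the image spectrum by the ratio of the two kernels' MTFs. A toy numpy sketch with made-up isotropic Gaussian MTFs (not measured CT data):

```python
import numpy as np

def kernel_conversion(image, mtf_src, mtf_dst, eps=1e-6):
    """Approximate an image reconstructed with a different CT kernel by
    multiplying its 2-D spectrum by the ratio of the destination and source
    MTFs (both supplied on the same centred frequency grid)."""
    F = np.fft.fft2(image)
    ratio = np.fft.ifftshift(mtf_dst / np.maximum(mtf_src, eps))
    return np.real(np.fft.ifft2(F * ratio))

# Illustrative Gaussian MTFs standing in for 'sharp' and 'smooth' kernels
n = 64
f = np.fft.fftshift(np.fft.fftfreq(n))
fy, fx = np.meshgrid(f, f, indexing="ij")
r2 = fx**2 + fy**2
mtf_smooth = np.exp(-r2 / 0.02)
mtf_sharp = np.exp(-r2 / 0.08)

rng = np.random.default_rng(1)
img = rng.normal(size=(n, n))
# Make a 'sharp'-kernel image look as if reconstructed with the 'smooth' kernel
smoothed = kernel_conversion(img, mtf_sharp, mtf_smooth)
```

Going from sharp to smooth keeps the ratio bounded by one, so the conversion is well conditioned; the reverse direction amplifies high frequencies and, as in the paper, noise behavior is the quantity to watch.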

  13. Conservative smoothing versus artificial viscosity

    SciTech Connect

    Guenther, C.; Hicks, D.L.; Swegle, J.W.

    1994-08-01

    This report was stimulated by some recent investigations of S.P.H. (Smoothed Particle Hydrodynamics method). Solid dynamics computations with S.P.H. show symptoms of instabilities which are not eliminated by artificial viscosities. Both analysis and experiment indicate that conservative smoothing eliminates the instabilities in S.P.H. computations which artificial viscosities cannot. Questions were raised as to whether conservative smoothing might smear solutions more than artificial viscosity. Conservative smoothing, properly used, can produce more accurate solutions than the von Neumann-Richtmyer-Landshoff artificial viscosity which has been the standard for many years. The authors illustrate this using the vNR scheme on a test problem with known exact solution involving a shock collision in an ideal gas. They show that the norms of the errors with conservative smoothing are significantly smaller than the norms of the errors with artificial viscosity.
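
A minimal sketch of one conservative smoothing pass in one dimension, written in flux form so that the discrete total (a "mass") is preserved exactly while oscillations are damped; the stencil and coefficient are illustrative, not the report's exact scheme:

```python
import numpy as np

def conservative_smooth(u, theta=0.5):
    """One conservative smoothing pass: each pair of neighbouring cells
    exchanges theta/2 of their difference through a common face, so the
    sum of u is conserved to round-off while local extrema are damped."""
    flux = 0.5 * theta * np.diff(u)  # flux through each interior face
    out = u.copy()
    out[:-1] += flux                 # left cell gains
    out[1:] -= flux                  # right cell loses the same amount
    return out

# A step profile with a spurious oscillation behind the 'shock'
u = np.array([1.0, 1.0, 1.4, 0.6, 0.0, 0.0])
s = conservative_smooth(u)
# The total is unchanged and the overshoot at u[2] is reduced
```

Unlike artificial viscosity, which adds a dissipative pressure term to the momentum equation, this operates directly on the field values, which is why it can suppress the tensile instability that viscosity leaves untouched.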

  14. Seismic hazard assessment in Central Asia using smoothed seismicity approaches

    NASA Astrophysics Data System (ADS)

    Ullah, Shahid; Bindi, Dino; Zuccolo, Elisa; Mikhailova, Natalia; Danciu, Laurentiu; Parolai, Stefano

    2014-05-01

    Central Asia has a long history of frequent moderate-to-large seismicity and is therefore considered one of the most seismically active regions in the world, with a high hazard level. In the hazard map produced at global scale by the GSHAP project in 1999 (Giardini, 1999), Central Asia is characterized by peak ground accelerations for a return period of 475 years as high as 4.8 m/s². Central Asia was therefore selected as a target area for the EMCA project (Earthquake Model Central Asia), a regional project of GEM (Global Earthquake Model). In the framework of EMCA, a new generation of seismic hazard maps is foreseen in terms of macroseismic intensity, in turn to be used to obtain seismic risk maps for the region. To this end, an Intensity Prediction Equation (IPE) was developed for the region based on the distribution of intensity data for earthquakes that have occurred in Central Asia since the end of the 19th century (Bindi et al. 2011). The same observed intensity distribution was used to assess the seismic hazard following the site approach (Bindi et al. 2012). In this study, we present the probabilistic seismic hazard assessment of Central Asia in terms of MSK-64 intensity based on two kernel estimation methods. We consider the smoothed seismicity approaches of Frankel (1995), modified to use the adaptive kernel proposed by Stock and Smith (2002), and of Woo (1996), modified to consider a grid of sites and to estimate a separate bandwidth for each site. Activity rate maps from the Frankel approach are shown, illustrating the effects of fixed and adaptive kernels. The hazard is estimated for rock site conditions based on a 10% probability of exceedance in 50 years. A maximum intensity of about 9 is observed in the Hindu Kush region.
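
Frankel's fixed-kernel smoothed seismicity estimate replaces each grid cell's event count with a Gaussian-weighted average over all cells, with a fixed correlation distance c. A toy numpy sketch (grid, counts, and correlation distance are invented for illustration, not the EMCA data):

```python
import numpy as np

def frankel_smooth(counts, xs, ys, c_km=50.0):
    """Fixed-kernel smoothed seismicity (Frankel-style): each cell's rate is
    a normalized Gaussian-weighted average of all cell counts, with the
    kernel width set by the correlation distance c_km."""
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / c_km**2)
    rates = (w @ counts.ravel()) / w.sum(axis=1)
    return rates.reshape(counts.shape)

xs = np.arange(0.0, 100.0, 20.0)  # toy grid coordinates in km
ys = np.arange(0.0, 100.0, 20.0)
counts = np.zeros((5, 5))
counts[2, 2] = 10.0               # a single active cell
rates = frankel_smooth(counts, xs, ys)
# Smoothing spreads the activity across the grid; the peak stays at the active cell
```

The adaptive-kernel variants mentioned in the abstract replace the single c_km with a bandwidth that shrinks where events are dense and widens where they are sparse.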

  15. Classification of maize kernels using NIR hyperspectral imaging.

    PubMed

    Williams, Paul J; Kucheryavskiy, Sergey

    2016-10-15

    NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual kernels and did not give acceptable results because of high misclassification. However by using a predefined threshold and classifying entire kernels based on the number of correctly predicted pixels, improved results were achieved (sensitivity and specificity of 0.75 and 0.97). Object-wise classification was performed using two methods for feature extraction - score histograms and mean spectra. The model based on score histograms performed better for hard kernel classification (sensitivity and specificity of 0.93 and 0.97), while that of mean spectra gave better results for medium kernels (sensitivity and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale. PMID:27173544

  16. Robotic intelligence kernel

    SciTech Connect

    Bruemmer, David J.

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors, that incorporate robot attributes and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.

  17. Progress in smooth particle hydrodynamics

    SciTech Connect

    Wingate, C.A.; Dilts, G.A.; Mandell, D.A.; Crotzer, L.A.; Knapp, C.E.

    1998-07-01

    Smooth Particle Hydrodynamics (SPH) is a meshless, Lagrangian numerical method for hydrodynamics calculations where calculational elements are fuzzy particles which move according to the hydrodynamic equations of motion. Each particle carries local values of density, temperature, pressure and other hydrodynamic parameters. A major advantage of SPH is that it is meshless, thus large deformation calculations can be easily done with no connectivity complications. Interface positions are known and there are no problems with advecting quantities through a mesh that typical Eulerian codes have. These underlying SPH features make fracture physics easy and natural and in fact, much of the applications work revolves around simulating fracture. Debris particles from impacts can be easily transported across large voids with SPH. While SPH has considerable promise, there are some problems inherent in the technique that have so far limited its usefulness. The most serious problem is the well-known instability in tension leading to particle clumping and numerical fracture. Another problem is that the SPH interpolation is only correct when particles are uniformly spaced a half particle apart, leading to incorrect strain rates, accelerations and other quantities for general particle distributions. SPH calculations are also sensitive to particle locations. The standard artificial viscosity treatment in SPH leads to spurious viscosity in shear flows. This paper will demonstrate solutions for these problems that the authors and others have been developing. The most promising is to replace the SPH interpolant with the moving least squares (MLS) interpolant invented by Lancaster and Salkauskas in 1981. SPH and MLS are closely related with MLS being essentially SPH with corrected particle volumes. When formulated correctly, MLS is conservative, stable in both compression and tension, does not have the SPH boundary problems and is not sensitive to particle placement. The other approach to

  18. A Kernel Classification Framework for Metric Learning.

    PubMed

    Wang, Faqiang; Zuo, Wangmeng; Zhang, Lei; Meng, Deyu; Zhang, David

    2015-09-01

    Learning a distance metric from the given training samples plays a crucial role in many machine learning tasks, and various models and optimization algorithms have been proposed in the past decade. In this paper, we generalize several state-of-the-art metric learning methods, such as large margin nearest neighbor (LMNN) and information theoretic metric learning (ITML), into a kernel classification framework. First, doublets and triplets are constructed from the training samples, and a family of degree-2 polynomial kernel functions is proposed for pairs of doublets or triplets. Then, a kernel classification framework is established to generalize many popular metric learning methods such as LMNN and ITML. The proposed framework can also suggest new metric learning methods, which can be efficiently implemented, interestingly, using the standard support vector machine (SVM) solvers. Two novel metric learning methods, namely, doublet-SVM and triplet-SVM, are then developed under the proposed framework. Experimental results show that doublet-SVM and triplet-SVM achieve competitive classification accuracies with state-of-the-art metric learning methods but with significantly less training time. PMID:25347887

  19. Effectiveness of Analytic Smoothing in Equipercentile Equating.

    ERIC Educational Resources Information Center

    Kolen, Michael J.

    1984-01-01

    An analytic procedure for smoothing in equipercentile equating using cubic smoothing splines is described and illustrated. The effectiveness of the procedure is judged by comparing the results from smoothed equipercentile equating with those from other equating methods using multiple cross-validations for a variety of sample sizes. (Author/JKS)

  20. Large-eddy simulations of 3D Taylor-Green vortex: comparison of Smoothed Particle Hydrodynamics, Lattice Boltzmann and Finite Volume methods

    NASA Astrophysics Data System (ADS)

    Kajzer, A.; Pozorski, J.; Szewc, K.

    2014-08-01

    In the paper we present Large-eddy simulation (LES) results of the 3D Taylor-Green vortex obtained by three different computational approaches: Smoothed Particle Hydrodynamics (SPH), Lattice Boltzmann Method (LBM) and Finite Volume Method (FVM). The Smagorinsky model was chosen as a subgrid-scale closure in LES for all considered methods and a selection of spatial resolutions has been investigated. The SPH and LBM computations have been carried out with the use of the in-house codes executed on GPU and compared, for validation purposes, with the FVM results obtained using the open-source CFD software OpenFOAM. A comparative study in terms of one-point statistics and turbulent energy spectra shows a good agreement of LES results for all methods. An analysis of the GPU code efficiency and implementation difficulties has been made. It is shown that both SPH and LBM may offer a significant advantage over mesh-based CFD methods.

  1. An Ensemble Approach to Building Mercer Kernels with Prior Information

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite, dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance over standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
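
One simple way to see how mixture-model posteriors yield a valid Mercer kernel: map each point to its vector of component responsibilities and take inner products, which gives a symmetric positive semidefinite Gram matrix by construction. The numpy sketch below is a simplification of the paper's ensemble construction (the centers and data are made up, and a single hand-set mixture stands in for the fitted ensemble):

```python
import numpy as np

def mixture_density_kernel(X, centers, sigma=1.0):
    """Simplified mixture-density kernel: each point is mapped to its vector
    of posterior responsibilities over mixture components; the kernel is the
    inner product K = R R^T, which is positive semidefinite by construction."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    R = np.exp(-d2 / (2 * sigma**2))
    R /= R.sum(axis=1, keepdims=True)  # responsibilities p(component | x)
    return R @ R.T

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 2))
centers = np.array([[-1.0, 0.0], [1.0, 0.0]])  # hypothetical component means
K = mixture_density_kernel(X, centers)
# K is symmetric and its eigenvalues are all >= 0 (up to round-off)
```

Two points land close in feature space exactly when the mixture model assigns them similar component memberships, which is how prior (model-based) knowledge enters the kernel.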

  2. Smoothed Standardization Assessment of Testlet Level DIF on a Math Free-Response Item Type.

    ERIC Educational Resources Information Center

    Lyu, C. Felicia; And Others

    A smoothed version of standardization, which merges kernel smoothing with the traditional standardization differential item functioning (DIF) approach, was used to examine DIF for student-produced response (SPR) items on the Scholastic Assessment Test (SAT) I mathematics test at both the item and testlet levels. This nonparametric technique avoids…

  3. A three-step estimation procedure using local polynomial smoothing for inconsistently sampled longitudinal data.

    PubMed

    Ye, Lei; Youk, Ada O; Sereika, Susan M; Anderson, Stewart J; Burke, Lora E

    2016-09-10

    Parametric mixed-effects models are useful in longitudinal data analysis when the sampling frequencies of a response variable and the associated covariates are the same. We propose a three-step estimation procedure using local polynomial smoothing and demonstrate with data where the variables to be assessed are repeatedly sampled with different frequencies within the same time frame. We first insert pseudo data for the less frequently sampled variable based on the observed measurements to create a new dataset. Then standard simple linear regressions are fitted at each time point to obtain raw estimates of the association between dependent and independent variables. Last, local polynomial smoothing is applied to smooth the raw estimates. Rather than use a kernel function to assign weights, only analytical weights that reflect the importance of each raw estimate are used. The standard errors of the raw estimates and the distance between the pseudo data and the observed data are considered as the measure of the importance of the raw estimates. We applied the proposed method to a weight loss clinical trial, and it efficiently estimated the correlation between the inconsistently sampled longitudinal data. Our approach was also evaluated via simulations. The results showed that the proposed method works better when the residual variances of the standard linear regressions are small and the within-subjects correlations are high. Also, using analytic weights instead of kernel function during local polynomial smoothing is important when raw estimates have extreme values, or the association between the dependent and independent variable is nonlinear. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27122363
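
Step three of the procedure, local polynomial smoothing of the raw estimates with analytic rather than kernel weights, can be sketched as a windowed weighted least-squares fit. A minimal numpy illustration (the window rule and the weights are placeholders, not the paper's exact choices):

```python
import numpy as np

def local_linear_smooth(t, raw, weights, t_eval, span=3.0):
    """Local linear smoothing of raw estimates. Following the abstract's idea,
    the fit uses analytic weights (e.g. inverse variances of the raw
    estimates) rather than a kernel of distance alone; a hard window of
    half-width `span` selects the points entering each local fit."""
    out = np.empty_like(t_eval)
    for k, t0 in enumerate(t_eval):
        m = np.abs(t - t0) <= span
        W = weights[m]
        A = np.vstack([np.ones(m.sum()), t[m] - t0]).T
        # Weighted least squares: solve (A^T W A) beta = A^T W y
        beta = np.linalg.solve(A.T @ (W[:, None] * A), A.T @ (W * raw[m]))
        out[k] = beta[0]  # local intercept = smoothed value at t0
    return out

t = np.arange(10.0)
raw = 2.0 * t + 1.0   # noise-free linear 'raw estimates'
w = np.ones_like(t)
sm = local_linear_smooth(t, raw, w, t_eval=np.array([4.0, 5.0]))
# A local linear fit reproduces a linear trend exactly: sm == [9.0, 11.0]
```

With real raw estimates the weights would reflect their standard errors and the pseudo-data distance, down-weighting the least reliable points in each local fit.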

  4. A Study of Diffusion Tensor Imaging by Tissue-Specific, Smoothing-Compensated Voxel-Based Analysis

    PubMed Central

    Lee, Jee Eun; Chung, Moo K.; Lazar, Mariana; DuBray, Molly B.; Kim, Jinsuh; Bigler, Erin D.; Lainhart, Janet E.; Alexander, Andrew L.

    2009-01-01

    Voxel-based analysis (VBA) is commonly used for statistical analysis of image data, including the detection of significant signal differences between groups. Typically, images are co-registered and then smoothed with an isotropic Gaussian kernel to compensate for image misregistration, to improve the signal-to-noise ratio (SNR), to reduce the number of multiple comparisons, and to apply random field theory. Problems with typical implementations of VBA include poor tissue specificity from image misregistration and smoothing. In this study, we developed a new tissue-specific, smoothing-compensated (T-SPOON) method for the VBA of diffusion tensor imaging (DTI) data with improved tissue specificity and compensation for image misregistration and smoothing. When compared with conventional VBA methods, the T-SPOON method introduced substantially smaller errors in the normalized and smoothed DTI maps. Another confound of conventional DTI-VBA is that it is difficult to differentiate between differences in morphometry and DTI measures that describe tissue microstructure. T-SPOON VBA decreased the effects of differential morphometry in the DTI VBA studies. T-SPOON and conventional VBA were applied to a DTI study of white matter in autism. T-SPOON VBA results were found to be more consistent with region of interest (ROI) measurements in the corpus callosum and temporal lobe regions. The T-SPOON method may also be applicable to other quantitative imaging maps such as T1 or T2 relaxometry, magnetization transfer, or PET tracer maps. PMID:18976713

  5. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 8 2011-01-01 2011-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...

  6. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...

  7. Variational Dirichlet Blur Kernel Estimation.

    PubMed

    Zhou, Xu; Mateos, Javier; Zhou, Fugen; Molina, Rafael; Katsaggelos, Aggelos K

    2015-12-01

    Blind image deconvolution involves two key objectives: latent image estimation and blur estimation. For latent image estimation, we propose a fast deconvolution algorithm, which uses an image prior of nondimensional Gaussianity measure to enforce sparsity and an undetermined boundary condition methodology to reduce boundary artifacts. For blur estimation, a linear inverse problem with normalization and nonnegativity constraints must be solved. However, the normalization constraint is ignored in many blind image deblurring methods, mainly because it makes the problem less tractable. In this paper, we show that the normalization constraint can be very naturally incorporated into the estimation process by using a Dirichlet distribution to approximate the posterior distribution of the blur. Making use of a variational Dirichlet approximation, we provide a blur posterior approximation that considers the uncertainty of the estimate and removes noise in the estimated kernel. Experiments with synthetic and real data demonstrate that the proposed method is very competitive with state-of-the-art blind image restoration methods. PMID:26390458
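The appeal of a Dirichlet approximation to the blur posterior can be seen in a few lines: every Dirichlet draw, and the distribution's mean, automatically satisfies the nonnegativity and sum-to-one constraints on a blur kernel. The concentration values below are arbitrary illustrative numbers, not quantities from the paper.

```python
import numpy as np

# Concentrations for a hypothetical 1-D, 5-tap blur (illustrative only).
alpha = np.array([0.2, 1.5, 4.0, 1.5, 0.2])

# The posterior-mean kernel estimate is alpha / sum(alpha): nonnegative
# and summing to one by construction, i.e. the normalization constraint
# is built into the parameterization rather than imposed afterwards.
mean_kernel = alpha / alpha.sum()

# Any sample from the Dirichlet satisfies the same constraints.
sample = np.random.default_rng(6).dirichlet(alpha)
print(mean_kernel.sum(), sample.min() >= 0)
```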

  8. Cusp Kernels for Velocity-Changing Collisions

    NASA Astrophysics Data System (ADS)

    McGuyer, B. H.; Marsland, R., III; Olsen, B. A.; Happer, W.

    2012-05-01

    We introduce an analytical kernel, the “cusp” kernel, to model the effects of velocity-changing collisions on optically pumped atoms in low-pressure buffer gases. Like the widely used Keilson-Storer kernel [J. Keilson and J. E. Storer, Q. Appl. Math. 10, 243 (1952)], cusp kernels are characterized by a single parameter and preserve a Maxwellian velocity distribution. Cusp kernels and their superpositions are more useful than Keilson-Storer kernels, because they are more similar to real kernels inferred from measurements or theory and are easier to invert to find steady-state velocity distributions.
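The Maxwellian-preserving property shared by both kernel families can be checked numerically in a few lines, here for the Keilson-Storer form. The grid range and the memory parameter are illustrative choices, not values from the paper.

```python
import numpy as np

v = np.linspace(-6.0, 6.0, 401)        # velocity grid in units of v_th
dv = v[1] - v[0]
alpha = 0.7                            # Keilson-Storer memory parameter

# K(v | v') ∝ exp(-(v - alpha v')^2 / (1 - alpha^2)); normalize each
# column so that it integrates to one (probability conservation).
K = np.exp(-(v[:, None] - alpha * v[None, :]) ** 2 / (1.0 - alpha ** 2))
K /= K.sum(axis=0, keepdims=True) * dv

maxwellian = np.exp(-v ** 2) / np.sqrt(np.pi)   # 1-D Maxwellian, v_th = 1
image = K @ maxwellian * dv                     # one collision step

# The Maxwellian is (numerically) a fixed point of the collision kernel.
print(np.max(np.abs(image - maxwellian)))
```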

  9. A Gabor-block-based kernel discriminative common vector approach using cosine kernels for human face recognition.

    PubMed

    Kar, Arindam; Bhattacharjee, Debotosh; Basu, Dipak Kumar; Nasipuri, Mita; Kundu, Mahantapas

    2012-01-01

    In this paper a nonlinear Gabor Wavelet Transform (GWT) discriminant feature extraction approach for enhanced face recognition is proposed. First, the low-energized blocks from Gabor wavelet transformed images are extracted. Second, the nonlinear discriminating features are analyzed and extracted from the selected low-energized blocks by the generalized Kernel Discriminative Common Vector (KDCV) method. The KDCV method is extended to include a cosine kernel function in the discriminating method. The KDCV with the cosine kernel is then applied to the extracted low-energized discriminating feature vectors to obtain the real component of a complex quantity for face recognition. In order to derive positive kernel discriminative vectors, we apply only those kernel discriminative eigenvectors that are associated with nonzero eigenvalues. The feasibility of the low-energized Gabor-block-based generalized KDCV method with the cosine kernel function has been successfully tested for classification using the L1 and L2 distance measures and the cosine similarity measure, on both frontal and pose-angled face recognition. Experimental results on the FRAV2D and FERET databases demonstrate the effectiveness of this new approach. PMID:23365559

  10. Kernel spectral clustering with memory effect

    NASA Astrophysics Data System (ADS)

    Langone, Rocco; Alzate, Carlos; Suykens, Johan A. K.

    2013-05-01

    Evolving graphs describe many natural phenomena changing over time, such as social relationships, trade markets, metabolic networks etc. In this framework, performing community detection and analyzing the cluster evolution represents a critical task. Here we propose a new model for this purpose, where the smoothness of the clustering results over time can be considered as a valid prior knowledge. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness. The latter allows the model to cluster the current data well and to be consistent with the recent history. We also propose new model selection criteria in order to carefully choose the hyper-parameters of our model, which is a crucial issue in achieving good performance. We successfully test the model on four toy problems and on a real world network. We also compare our model with Evolutionary Spectral Clustering, which is a state-of-the-art algorithm for community detection of evolving networks, illustrating that the kernel spectral clustering with memory effect can achieve better or equal performance.

  11. Nonlinear stochastic system identification of skin using volterra kernels.

    PubMed

    Chen, Yi; Hunter, Ian W

    2013-04-01

    Volterra kernel stochastic system identification is a technique that can be used to capture and model nonlinear dynamics in biological systems, including the nonlinear properties of skin during indentation. A high-bandwidth, high-stroke Lorentz-force linear actuator system was developed and used to test the mechanical properties of bulk skin and underlying tissue in vivo using a non-white input force and measuring an output position. These short tests (5 s) were conducted in an indentation configuration normal to the skin surface and in an extension configuration tangent to the skin surface. Volterra kernel solution methods were used, including a fast least-squares procedure and an orthogonalization solution method. The practical modifications, such as frequency-domain filtering, necessary for working with low-pass filtered inputs are also described. A simple linear stochastic system identification technique had a variance accounted for (VAF) of less than 75%. Representations using the first and second Volterra kernels had a much higher VAF (90-97%) as well as a lower Akaike information criterion (AICc), indicating that the Volterra kernel models were more efficient. The experimental second Volterra kernel matches well with results from a dynamic-parameter nonlinearity model with fixed mass as a function of depth, as well as stiffness and damping that increase with depth into the skin. A study with 16 subjects showed that the kernel peak values have mean coefficients of variation (CV) that ranged from 3 to 8% and showed that the kernel principal components were correlated with location on the body, subject mass, body mass index (BMI), and gender. These fast and robust methods for Volterra kernel stochastic system identification can be applied to the characterization of biological tissues, diagnosis of skin diseases, and determination of consumer product efficacy. PMID:23264003
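A minimal noise-free version of least-squares Volterra kernel identification can be sketched as follows: a second-order Volterra system with known kernels generates data, and the kernels are recovered by linear regression on first- and second-order input products. The kernel values and memory length are invented for the sketch; the paper's actual solution methods (fast least squares, orthogonalization) are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 3                                    # kernel memory length (illustrative)
h1 = np.array([1.0, 0.5, -0.2])          # "true" first-order kernel
h2 = np.array([[0.3, 0.1, 0.0],          # symmetric "true" second-order kernel
               [0.1, -0.2, 0.05],
               [0.0, 0.05, 0.1]])

u = rng.standard_normal(2000)
N = len(u)
y = np.zeros(N)
for n in range(M - 1, N):
    w = u[n - M + 1:n + 1][::-1]         # window [u(n), u(n-1), u(n-2)]
    y[n] = h1 @ w + w @ h2 @ w

# Regression matrix: one column per first-order tap plus one per unique
# second-order product u(n-i)u(n-j), i <= j.
rows = []
for n in range(M - 1, N):
    w = u[n - M + 1:n + 1][::-1]
    quad = np.outer(w, w)[np.triu_indices(M)]
    rows.append(np.concatenate([w, quad]))
X = np.array(rows)
theta, *_ = np.linalg.lstsq(X, y[M - 1:], rcond=None)

h1_est = theta[:M]                       # recovers h1 exactly (no noise)
# Off-diagonal second-order coefficients absorb a factor of 2 because the
# symmetric pair (i, j) and (j, i) is represented by a single regressor.
print(np.round(h1_est, 6))
```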

  12. Smoothing and Equating Methods Applied to Different Types of Test Score Distributions and Evaluated with Respect to Multiple Equating Criteria. Research Report. ETS RR-11-20

    ERIC Educational Resources Information Center

    Moses, Tim; Liu, Jinghua

    2011-01-01

    In equating research and practice, equating functions that are smooth are typically assumed to be more accurate than equating functions with irregularities. This assumption presumes that population test score distributions are relatively smooth. In this study, two examples were used to reconsider common beliefs about smoothing and equating. The…

  13. A Collocation Method for Volterra Integral Equations with Diagonal and Boundary Singularities

    NASA Astrophysics Data System (ADS)

    Kolk, Marek; Pedas, Arvet; Vainikko, Gennadi

    2009-08-01

    We propose a smoothing technique associated with piecewise polynomial collocation methods for solving linear weakly singular Volterra integral equations of the second kind with kernels which, in addition to a diagonal singularity, may have a singularity at the initial point of the interval of integration.

  14. A kernel adaptive algorithm for quaternion-valued inputs.

    PubMed

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations. PMID:25594982
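The real-valued scalar analogue of the kernel LMS recursion may make the construction concrete: each new sample becomes a kernel center whose weight is the step size times the prediction error. The Gaussian kernel width, step size, and target function below are illustrative; the quaternion version additionally requires the quaternion RKHS and HR-calculus machinery the abstract describes.

```python
import numpy as np

def klms(u, d, eta=0.2, gamma=2.0):
    """Real-valued kernel LMS with a Gaussian kernel (scalar inputs)."""
    centers, alphas = [], []
    preds = np.zeros(len(d))
    for n in range(len(d)):
        if centers:
            k = np.exp(-gamma * (np.array(centers) - u[n]) ** 2)
            preds[n] = np.dot(alphas, k)
        err = d[n] - preds[n]
        centers.append(u[n])             # grow the kernel expansion by one term
        alphas.append(eta * err)
    return preds

rng = np.random.default_rng(5)
u = rng.uniform(-1, 1, 400)
d = np.sin(2.0 * u)                      # a nonlinear target map
preds = klms(u, d)
mse_tail = np.mean((d[-100:] - preds[-100:]) ** 2)
print(mse_tail)                          # small relative to the signal power
```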

  15. A maximum entropy kernel density estimator with applications to function interpolation and texture segmentation

    NASA Astrophysics Data System (ADS)

    Balakrishnan, Nikhil; Schonfeld, Dan

    2006-02-01

    In this paper, we develop a new algorithm to estimate an unknown probability density function given a finite data sample using a tree shaped kernel density estimator. The algorithm formulates an integrated squared error based cost function which minimizes the quadratic divergence between the kernel density and the Parzen density estimate. The cost function reduces to a quadratic programming problem which is minimized within the maximum entropy framework. The maximum entropy principle acts as a regularizer which yields a smooth solution. A smooth density estimate enables better generalization to unseen data and offers distinct advantages in high dimensions and cases where there is limited data. We demonstrate applications of the hierarchical kernel density estimator for function interpolation and texture segmentation problems. When applied to function interpolation, the kernel density estimator improves performance considerably in situations where the posterior conditional density of the dependent variable is multimodal. The kernel density estimator allows flexible nonparametric modeling of textures which improves performance in texture segmentation algorithms. We demonstrate performance on a text labeling problem, which illustrates the algorithm's behavior in high dimensions. The hierarchical nature of the density estimator enables multiresolution solutions depending on the complexity of the data. The algorithm is fast and has at most quadratic scaling in the number of kernels.
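The Parzen density estimate that the proposed estimator is matched against is straightforward to sketch in 1-D. The Silverman rule-of-thumb bandwidth here is a common default, an assumption of this sketch rather than the paper's choice.

```python
import numpy as np

def parzen_kde(samples, query, bandwidth=None):
    """Gaussian Parzen-window density estimate at the query points (1-D)."""
    samples = np.asarray(samples, dtype=float)
    if bandwidth is None:                # Silverman's rule of thumb (assumed)
        bandwidth = 1.06 * samples.std() * len(samples) ** (-1 / 5)
    z = (query[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / bandwidth

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 500)
x = np.linspace(-4, 4, 9)
dens = parzen_kde(data, x)
print(dens)                              # largest near x = 0 for unit-normal data
```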

  16. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    NASA Astrophysics Data System (ADS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-01

    The solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under meteorological conditions in Malaysia. Drying of the sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. In general, the drying rate plots require more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) was found to be effective for the moisture-time curves. The idea of this method is to approximate the data by a CS regression having first and second derivatives; analytical differentiation of the spline regression then permits the instantaneous drying rate to be determined directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics was fitted with six published exponential thin-layer drying models, using the coefficient of determination (R²) and the root mean square error (RMSE) as goodness-of-fit criteria. The results showed that the Two Term model best describes the drying behavior. In addition, the drying rate smoothed using the CS proves to be an effective estimator for the moisture-time curves, as well as for missing moisture content data, for seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.
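The spline-differentiation step can be sketched directly: fit a cubic spline to moisture-time data and differentiate it analytically to get the instantaneous drying rate, including between measurements. The moisture readings below are made-up illustrative numbers (only the 93.4% and 8.2% endpoints echo the abstract), and an interpolating spline stands in for the paper's regression spline.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical moisture-content readings (%, wet basis) over drying time (h).
t = np.array([0.0, 8.0, 16.0, 24.0, 32.0, 40.0, 48.0])
moisture = np.array([93.4, 74.0, 52.0, 34.0, 21.0, 13.0, 8.2])

spline = CubicSpline(t, moisture)        # piecewise-cubic fit to the curve
rate = spline.derivative()               # dM/dt, analytically, as another spline

# Instantaneous drying rate (-dM/dt, %/h) at an arbitrary time between samples.
print(float(-rate(12.0)))
```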

  17. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    SciTech Connect

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-19

    The solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under meteorological conditions in Malaysia. Drying of the sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. In general, the drying rate plots require more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) was found to be effective for the moisture-time curves. The idea of this method is to approximate the data by a CS regression having first and second derivatives; analytical differentiation of the spline regression then permits the instantaneous drying rate to be determined directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics was fitted with six published exponential thin-layer drying models, using the coefficient of determination (R²) and the root mean square error (RMSE) as goodness-of-fit criteria. The results showed that the Two Term model best describes the drying behavior. In addition, the drying rate smoothed using the CS proves to be an effective estimator for the moisture-time curves, as well as for missing moisture content data, for seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  18. Progress on development and application of single kernel NIR sorting technology for assessment of FHB resistance in wheat germplasm

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Plant breeders working on developing Fusarium resistant wheat varieties need to evaluate kernels coming from a multitude of crosses for Fusarium Damaged Kernels (FDKs). We are developing Near Infrared (NIR) spectroscopic methods to sort FDKs from sound kernels and to determine DON levels of FDKs non...

  19. A Comparison of Methods for Estimating Conditional Item Score Differences in Differential Item Functioning (DIF) Assessments. Research Report. ETS RR-10-15

    ERIC Educational Resources Information Center

    Moses, Tim; Miao, Jing; Dorans, Neil

    2010-01-01

    This study compared the accuracies of four differential item functioning (DIF) estimation methods, where each method makes use of only one of the following: raw data, logistic regression, loglinear models, or kernel smoothing. The major focus was on the estimation strategies' potential for estimating score-level, conditional DIF. A secondary focus…

  20. Determining the optimal smoothing length scale for actuator line models of wind turbine blades

    NASA Astrophysics Data System (ADS)

    Martinez, Luis; Meneveau, Charles

    2015-11-01

    The actuator line model (ALM) is a widely used tool for simulating wind turbines in Large-Eddy Simulations. The ALM projects the lift and drag forces onto the grid with a smearing kernel ηε(r) = 1/(ε³π^(3/2)) exp(-r²/ε²), where r is the distance to an actuator point and ε is the smoothing length scale that sets the kernel width. In this work, we develop formulations to establish the optimum value of the smoothing length scale ε based on physical arguments instead of purely numerical constraints. This parameter plays a very important role in the ALM by providing a length scale, which may, for example, be related to the chord of the airfoil being studied. In the proposed approach, we compare features (such as the vertical pressure gradient) of a potential-flow solution for flow over a lifting surface with features of the solution of the Euler equations with a body-force term. The potential-flow solution over a lifting surface is used as a general representation of an airfoil. The method aims to minimize the difference between these flow-field features as a function of the smearing length scale ε, in order to obtain its optimum value. This work is supported by NSF (IGERT and IIA-1243482) and computations use XSEDE resources.
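A quick numerical sanity check on the smearing kernel: integrating ηε(r) = exp(-r²/ε²)/(ε³π^(3/2)) over all of 3-D space gives one, so the projected body force conserves the total lift and drag regardless of ε. The value of ε below is arbitrary.

```python
import numpy as np

eps = 0.35                               # illustrative smoothing length scale
r = np.linspace(0.0, 10.0 * eps, 20001)  # radial grid; kernel is negligible beyond
eta = np.exp(-(r / eps) ** 2) / (eps ** 3 * np.pi ** 1.5)

# Integrate over spherical shells: ∫ eta dV = ∫ 4*pi*r^2 * eta(r) dr.
f = 4.0 * np.pi * r ** 2 * eta
dr = r[1] - r[0]
integral = ((f[1:] + f[:-1]) * 0.5 * dr).sum()   # trapezoidal rule
print(integral)                          # ~1.0 for any eps
```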

  1. A new classification algorithm based on multi-kernel Support Vector Machine on infrared cloud background image

    NASA Astrophysics Data System (ADS)

    Wang, Tiebing; Zhou, Yiyu; Xu, Shenda; Cheng, Chuxiong

    2015-11-01

    A new classification algorithm based on a multi-kernel support vector machine (SVM) was proposed for classification problems on infrared cloud background images. The method integrates the advantages of polynomial kernel functions, Gaussian radial basis kernel functions, and multilayer perceptron kernel functions. Compared with the traditional single-kernel SVM classification method, the proposed method performs better in both local interpolation and global extrapolation, and is more suitable for SVM classification problems when the training sample size is small. Experimental results confirm the superiority of the proposed algorithm.
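One common way to build a multi-kernel SVM is to form the Gram matrix as a weighted combination of the individual kernels, which can then be handed to any kernel SVM solver. The weights and kernel hyperparameters below are illustrative assumptions, not the paper's values (and the sigmoid/MLP kernel is only conditionally positive definite, so such combinations need care in practice).

```python
import numpy as np

def combined_gram(X, weights=(0.4, 0.4, 0.2), degree=2, gamma=0.5):
    """Weighted sum of polynomial, Gaussian RBF, and sigmoid (MLP) kernels."""
    lin = X @ X.T                                     # pairwise inner products
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * lin        # pairwise squared distances
    k_poly = (1.0 + lin) ** degree
    k_rbf = np.exp(-gamma * d2)
    k_mlp = np.tanh(0.1 * lin)                        # "multilayer perceptron" kernel
    w1, w2, w3 = weights
    return w1 * k_poly + w2 * k_rbf + w3 * k_mlp

X = np.random.default_rng(2).standard_normal((5, 3))
K = combined_gram(X)
print(K.shape)                                        # a 5x5 symmetric Gram matrix
```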

  2. Multiple kernel learning for dimensionality reduction.

    PubMed

    Lin, Yen-Yu; Liu, Tyng-Luh; Fuh, Chiou-Shann

    2011-06-01

    In solving complex visual learning tasks, adopting multiple descriptors to more precisely characterize the data has been a feasible way for improving performance. The resulting data representations are typically high-dimensional and assume diverse forms. Hence, finding a way of transforming them into a unified space of lower dimension generally facilitates the underlying tasks such as object recognition or clustering. To this end, the proposed approach (termed MKL-DR) generalizes the framework of multiple kernel learning for dimensionality reduction, and distinguishes itself with the following three main contributions: First, our method provides the convenience of using diverse image descriptors to describe useful characteristics of various aspects about the underlying data. Second, it extends a broad set of existing dimensionality reduction techniques to consider multiple kernel learning, and consequently improves their effectiveness. Third, by focusing on the techniques pertaining to dimensionality reduction, the formulation introduces a new class of applications with the multiple kernel learning framework to address not only the supervised learning problems but also the unsupervised and semi-supervised ones. PMID:20921580

  3. Semi-Supervised Kernel Mean Shift Clustering.

    PubMed

    Anand, Saket; Mittal, Sushil; Tuzel, Oncel; Meer, Peter

    2014-06-01

    Mean shift clustering is a powerful nonparametric technique that does not require prior knowledge of the number of clusters and does not constrain the shape of the clusters. However, being completely unsupervised, its performance suffers when the original distance metric fails to capture the underlying cluster structure. Despite recent advances in semi-supervised clustering methods, there has been little effort towards incorporating supervision into mean shift. We propose a semi-supervised framework for kernel mean shift clustering (SKMS) that uses only pairwise constraints to guide the clustering procedure. The points are first mapped to a high-dimensional kernel space where the constraints are imposed by a linear transformation of the mapped points. This is achieved by modifying the initial kernel matrix by minimizing a log det divergence-based objective function. We show the advantages of SKMS by evaluating its performance on various synthetic and real datasets while comparing with state-of-the-art semi-supervised clustering algorithms. PMID:26353281
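The unsupervised base procedure that SKMS constrains is plain mean shift: each point is iteratively moved to the kernel-weighted mean of its neighborhood until it settles on a density mode. The Gaussian bandwidth and toy data below are illustrative; the pairwise-constraint machinery of SKMS is not shown.

```python
import numpy as np

def mean_shift(points, bandwidth=0.8, n_iter=50):
    """Gaussian mean-shift: move every point toward a local density mode."""
    x = points.copy()
    for _ in range(n_iter):
        d2 = ((x[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        w = np.exp(-0.5 * d2 / bandwidth ** 2)
        x = (w @ points) / w.sum(axis=1, keepdims=True)
    return x

rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(-3, 0.3, (30, 2)),   # two well-separated blobs
                 rng.normal(3, 0.3, (30, 2))])
modes = mean_shift(pts)
# All 60 points collapse onto two modes, one per blob; note that the number
# of clusters was never specified in advance.
print(modes[0], modes[30])
```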

  4. Resummed memory kernels in generalized system-bath master equations

    SciTech Connect

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
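The failure mode the abstract reports, Padé resummation turning unphysical while an exponential resummation stays well behaved, can be mimicked on a toy series. Both resummations below match the same truncated series 1 - x + x²/2 (the second-order expansion of e^(-x)); this is a didactic analogue, not the paper's memory-kernel calculation.

```python
import numpy as np

def pade_11(x):
    """[1/1] Pade approximant matching 1 - x + x^2/2 through second order."""
    return (1.0 - 0.5 * x) / (1.0 + 0.5 * x)

def exp_resummed(x):
    """Exponential resummation exp(c1*x + (c2 - c1^2/2)*x^2) of the same series."""
    return np.exp(-x)                    # here the quadratic correction vanishes

x = 3.0
# At large x (the analogue of strong coupling) the Pade form changes sign,
# which for a population would be unphysical; the exponential form does not.
print(pade_11(x), exp_resummed(x))
```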

  5. Resummed memory kernels in generalized system-bath master equations

    NASA Astrophysics Data System (ADS)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  6. Resummed memory kernels in generalized system-bath master equations.

    PubMed

    Mavros, Michael G; Van Voorhis, Troy

    2014-08-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics. PMID:25106575

  7. Sparse kernel learning with LASSO and Bayesian inference algorithm.

    PubMed

    Gao, Junbin; Kwan, Paul W; Shi, Daming

    2010-03-01

    Kernelized LASSO (Least Absolute Selection and Shrinkage Operator) has been investigated in two separate recent papers [Gao, J., Antolovich, M., & Kwan, P. H. (2008). L1 LASSO and its Bayesian inference. In W. Wobcke, & M. Zhang (Eds.), Lecture notes in computer science: Vol. 5360 (pp. 318-324); Wang, G., Yeung, D. Y., & Lochovsky, F. (2007). The kernel path in kernelized LASSO. In International conference on artificial intelligence and statistics (pp. 580-587). San Juan, Puerto Rico: MIT Press]. This paper is concerned with learning kernels under the LASSO formulation via adopting a generative Bayesian learning and inference approach. A new robust learning algorithm is proposed which produces a sparse kernel model with the capability of learning regularized parameters and kernel hyperparameters. A comparison with state-of-the-art methods for constructing sparse regression models such as the relevance vector machine (RVM) and the local regularization assisted orthogonal least squares regression (LROLS) is given. The new algorithm is also demonstrated to possess considerable computational advantages. PMID:19604671

  8. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    PubMed

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation for differential equation models is a challenging problem because of the high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables; the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different orders: Euler's method, the trapezoidal rule, and a Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in approximating the derivative, which produces a more accurate estimate, but at a higher computational cost. To balance computational cost against estimation accuracy, we demonstrate via simulation studies that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. PMID:22376200
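A toy version of the trapezoidal estimating equation for the one-parameter ODE dx/dt = -θx: plug state values into the trapezoidal step and recover θ by linear regression. For the sketch, exact solution values stand in for the penalized-spline state estimates, a simplifying assumption.

```python
import numpy as np

theta_true = 0.7
t = np.linspace(0.0, 5.0, 51)
x = np.exp(-theta_true * t)              # exact states; a spline fit in practice

# Trapezoidal rule for dx/dt = -theta*x:
#   x_{i+1} - x_i = -theta * (dt/2) * (x_i + x_{i+1})
dt = t[1] - t[0]
y = x[1:] - x[:-1]                       # left-hand side of each step
z = -0.5 * dt * (x[1:] + x[:-1])         # regressor multiplying theta
theta_hat = (z @ y) / (z @ z)            # one-parameter least squares
print(theta_hat)                         # close to 0.7, up to O(dt^2) bias
```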

  9. Smooth halos in the cosmic web

    NASA Astrophysics Data System (ADS)

    Gaite, José

    2015-04-01

    Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of inequality previously used in economy and with the advantage of being connected with standard methods of multifractal analysis already used for characterizing the cosmic web structure in cold dark matter N-body simulations. These entropic measures provide us with a quantitative description of the transition from the small scales portrayed as a distribution of halos to the larger scales portrayed as a cosmic web and, therefore, allow us to assign definite sizes to halos. However, these ``smoothness sizes'' have no direct relation to the virial radii. Finally, we discuss the influence of N-body discreteness parameters on smoothness.

  10. Classification of corn kernels contaminated with aflatoxins using fluorescence and reflectance hyperspectral images analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Fengle; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert; Bhatnagar, Deepak; Cleveland, Thomas

    2015-05-01

    Aflatoxins are secondary metabolites produced by certain fungal species of the genus Aspergillus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive. This study employed fluorescence and reflectance visible near-infrared (VNIR) hyperspectral images to classify aflatoxin-contaminated corn kernels rapidly and non-destructively. Corn ears were artificially inoculated in the field with toxigenic A. flavus spores at the early dough stage of kernel development. After harvest, a total of 300 kernels were collected from the inoculated ears. Fluorescence hyperspectral imagery with UV excitation and reflectance hyperspectral imagery with halogen illumination were acquired on both the endosperm and germ sides of the kernels. All kernels were then subjected to chemical analysis individually to determine aflatoxin concentrations. A region of interest (ROI) was created for each kernel to extract averaged spectra. Compared with healthy kernels, fluorescence spectral peaks for contaminated kernels shifted to longer wavelengths with lower intensity, and reflectance values for contaminated kernels were lower, with a different spectral shape in the 700-800 nm region. Principal component analysis was applied for data compression before classifying kernels as contaminated or healthy, based on a 20 ppb threshold, using the k-nearest neighbors algorithm. The best overall accuracy achieved was 92.67% for the germ side in the fluorescence data analysis. The germ side generally performed better than the endosperm side. Fluorescence and reflectance image data achieved similar accuracy.
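    The classification pipeline (spectral compression by principal component analysis followed by k-nearest neighbors) can be sketched on synthetic stand-in spectra. The data, the class separation, and the leave-one-out evaluation below are illustrative assumptions, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in spectra: "contaminated" spectra sit lower (mimicking the
# reported lower reflectance); labels play the role of the 20 ppb threshold
healthy = rng.normal(1.0, 0.05, size=(40, 50))
contam = rng.normal(0.7, 0.05, size=(40, 50))
X = np.vstack([healthy, contam])
y = np.array([0] * 40 + [1] * 40)

# PCA by SVD for data compression before classification
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T          # keep 5 principal components

def knn_predict(train, labels, query, k=3):
    """Majority vote among the k nearest training spectra."""
    dist = np.linalg.norm(train - query, axis=1)
    nearest = labels[np.argsort(dist)[:k]]
    return np.bincount(nearest).argmax()

# leave-one-out accuracy on the synthetic data
correct = sum(
    knn_predict(np.delete(scores, i, 0), np.delete(y, i), scores[i]) == y[i]
    for i in range(len(y))
)
accuracy = correct / len(y)
```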

  11. RTOS kernel in portable electrocardiograph

    NASA Astrophysics Data System (ADS)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All of the medical device's digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which the uCOS-II RTOS can be embedded. The decision to use this kernel is based on its benefits: its license for educational use and its built-in time control and peripheral management. The feasibility of its use on the electrocardiograph is evaluated against the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and evaluation of the resources used by each process. After this feasibility analysis, the cyclic code was migrated to a structure based on separate processes, or tasks, able to synchronize events, resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on an RTOS.

  12. A non-intrusive partitioned approach to couple smoothed particle hydrodynamics and finite element methods for transient fluid-structure interaction problems with large interface motion

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Leduc, Julien; Nunez-Ramirez, Jorge; Combescure, Alain; Marongiu, Jean-Christophe

    2015-04-01

    We propose a non-intrusive numerical coupling method for transient fluid-structure interaction (FSI) problems simulated by means of different discretization methods: the smoothed particle hydrodynamics (SPH) and finite element (FE) methods for the fluid and solid sub-domains, respectively. As a partitioned coupling method, the present algorithm ensures zero interface energy during the whole period of numerical simulation, even in the presence of large interface motion. In other words, the time integrations of the two sub-domains (a second-order Runge-Kutta scheme for the fluid and a Newmark integrator for the solid) are synchronized. Thanks to this energy-conserving feature, one can preserve the minimal order of accuracy in time and the numerical stability of the FSI simulations, which are validated with trivial 1D and 2D numerical test cases. Additionally, other 2D FSI simulations involving large interface motion have also been carried out with the proposed SPH-FE coupling method. Finally, an example of an aquaplaning problem is given to show the feasibility of such a coupling method in multi-dimensional applications with complicated structural geometries.

  13. Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models

    NASA Astrophysics Data System (ADS)

    Kusche, Jürgen

    2007-11-01

    We discuss a new method for approximately decorrelating and non-isotropically filtering the monthly gravity fields provided by the Gravity Recovery and Climate Experiment (GRACE) twin-satellite mission. The procedure is more efficient than conventional Gaussian-type isotropic filters in reducing stripes and spurious patterns, while retaining the signal magnitudes. One of the problems that users of GRACE level 2 monthly gravity field solutions face is the effect of increasing noise in higher frequencies. Simply truncating the spherical harmonic solution at low degrees causes the loss of a significant portion of the signal, which is not an option if one is interested in geophysical phenomena on a scale of a few hundred to a few thousand km. The common approach is to filter the published solutions, that is, to convolve them with an isotropic kernel that allows an interpretation as smoothed averaging. The downside of this approach is an amplitude bias and the fact that it accounts neither for the variable data density, which increases towards the poles where the orbits converge, nor for the anisotropic error correlation structure that the solutions exhibit. Here a relatively simple regularization procedure is outlined, which allows one to take the latter two effects into account, on the basis of published level 2 products. This leads to a series of approximate decorrelation transformations applied to the monthly solutions, which enable a successive smoothing to reduce the noise in the higher frequencies. This smoothing effect may be used to generate solutions that behave, on average over all possible directions, very close to Gaussian-type filtered ones. The localizing and smoothing properties of our non-isotropic kernels are compared with Gaussian kernels in terms of the kernel variance and the resulting amplitude bias for a standard signal. Examples involving real GRACE level 2 fields as well as geophysical models are used to demonstrate the techniques.
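    For comparison, the conventional Gaussian-type isotropic filter mentioned above applies degree-dependent weights to the spherical-harmonic coefficients. A minimal sketch of the commonly used Jekeli-style recursion; the normalization W_0 = 1 and the 500 km averaging radius are illustrative choices, not values from this record.

```python
import numpy as np

def gaussian_weights(lmax, radius_km, earth_radius_km=6371.0):
    """Degree-dependent Gaussian averaging weights W_l (Jekeli-style
    recursion, as commonly used to smooth GRACE spherical-harmonic fields,
    here normalized so that W_0 = 1)."""
    b = np.log(2.0) / (1.0 - np.cos(radius_km / earth_radius_km))
    W = np.zeros(lmax + 1)
    W[0] = 1.0
    W[1] = (1.0 + np.exp(-2.0 * b)) / (1.0 - np.exp(-2.0 * b)) - 1.0 / b
    for l in range(1, lmax):
        # three-term recursion for the Legendre coefficients of the kernel
        W[l + 1] = -(2 * l + 1) / b * W[l] + W[l - 1]
    return W

# weights up to degree 60 for a 500 km smoothing radius
W = gaussian_weights(60, 500.0)
```

The weights decay monotonically with degree, which is exactly the high-frequency damping (and the associated amplitude bias) that the decorrelation approach of this record tries to improve upon.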

  14. Removing blur kernel noise via a hybrid ℓp norm

    NASA Astrophysics Data System (ADS)

    Yu, Xin; Zhang, Shunli; Zhao, Xiaolin; Zhang, Li

    2015-01-01

    When estimating a sharp image from a blurred one, blur kernel noise often leads to inaccurate recovery. We develop an effective method to estimate a blur kernel that is able to remove kernel noise and prevent the production of an overly sparse kernel. Our method is based on an iterative framework which alternately recovers the sharp image and estimates the blur kernel. In the image recovery step, we utilize total variation (TV) regularization to recover latent images. In solving the TV regularization, we propose a new criterion which adaptively terminates the iterations before convergence; this improves efficiency without degrading the quality of the final results. In the kernel estimation step, we develop a metric to measure the usefulness of image edges, by which we can reduce the ambiguity of kernel estimation caused by small-scale edges. We also propose a hybrid ℓp norm, composed of an ℓ2 norm and an ℓp norm with 0.7≤p<1, to construct a sparsity constraint. Using the hybrid ℓp norm, we reduce a wider range of kernel noise and recover a more accurate blur kernel. The experiments show that the proposed method achieves promising results on both synthetic and real images.
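    A minimal sketch of such a hybrid ℓp regularizer, evaluated on a hypothetical noisy kernel. The mixing weights w2 and wp are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hybrid_lp(k, p=0.8, w2=0.5, wp=0.5):
    """Hybrid regularizer mixing an l2 term (discourages noisy spikes)
    with an lp term, 0.7 <= p < 1 (promotes sparsity without driving the
    kernel to be overly sparse). Weights w2/wp are illustrative."""
    k = np.asarray(k, dtype=float)
    return w2 * np.sum(k ** 2) + wp * np.sum(np.abs(k) ** p)

# hypothetical blur-kernel estimate: one dominant tap plus small noise taps
noisy_kernel = np.array([0.9, 0.05, -0.04, 0.02])
penalty = hybrid_lp(noisy_kernel)
```

The ℓp term penalizes the small noise taps relatively more heavily than a pure ℓ2 penalty would, while the ℓ2 term keeps the solution from collapsing onto a few isolated taps.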

  15. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.

    PubMed

    Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou

    2011-06-01

    As a kernel-based method, the performance of the least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible enough to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. PMID:21441012
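    Once kernel weights have been chosen (the paper optimizes them via SDP; a fixed equal-weight combination of two RBF kernels stands in here), fitting an LS-SVM reduces to one linear system in the bias b and the dual coefficients alpha. A minimal sketch under these assumptions:

```python
import numpy as np

def lssvm_fit(K, y, gamma):
    """Solve the standard LS-SVM dual linear system
        [ 0   1^T         ] [b]       [0]
        [ 1   K + I/gamma ] [alpha] = [y]
    for the bias b and dual coefficients alpha."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]

def rbf(X, Z, s=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

# toy 1-D classification; equal weights stand in for the SDP-optimized ones
X = np.array([[-2.0], [-1.5], [-1.0], [1.0], [1.5], [2.0]])
y = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])
K = 0.5 * rbf(X, X, 0.5) + 0.5 * rbf(X, X, 2.0)
b, alpha = lssvm_fit(K, y, gamma=10.0)
pred = np.sign(K @ alpha + b)      # decision values at the training points
```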

  16. Smooth eigenvalue correction

    NASA Astrophysics Data System (ADS)

    Hendrikse, Anne; Veldhuis, Raymond; Spreeuwers, Luuk

    2013-12-01

    Second-order statistics play an important role in data modeling. Nowadays, there is a tendency toward measuring more signals with higher resolution (e.g., high-resolution video), causing a rapid increase in the dimensionality of the measured samples, while the number of samples remains more or less the same. As a result, the eigenvalue estimates are significantly biased, as described by the Marčenko-Pastur equation in the limit of both the number of samples and their dimensionality going to infinity. By introducing a smoothness factor, we show that the Marčenko-Pastur equation can be used in practical situations where both the number of samples and their dimensionality remain finite. Based on this result we derive methods, one already known and one to our knowledge new, to estimate the sample eigenvalues when the population eigenvalues are known. However, usually the sample eigenvalues are known and the population eigenvalues are required. We therefore applied one of these methods in a feedback loop, resulting in an eigenvalue bias correction method. We compare this eigenvalue correction method with state-of-the-art methods and show that our method outperforms the others, particularly in real-life situations often encountered in biometrics: underdetermined configurations, high-dimensional configurations, and configurations where the eigenvalues are exponentially distributed.
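    The eigenvalue bias at issue can be illustrated directly: for data with identity population covariance (all population eigenvalues equal to 1), the sample eigenvalues spread across the Marčenko-Pastur support instead of concentrating at 1. A minimal sketch with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 200, 400            # dimensionality p, sample size n
gamma = p / n              # aspect ratio driving the bias

# population covariance = identity, so every population eigenvalue is 1
X = rng.standard_normal((n, p))
sample_eigs = np.linalg.eigvalsh(X.T @ X / n)

# Marchenko-Pastur support edges for a unit population eigenvalue
upper = (1 + np.sqrt(gamma)) ** 2
lower = (1 - np.sqrt(gamma)) ** 2
```

Even though every population eigenvalue is 1, the sample eigenvalues spread over roughly [lower, upper]; bias-correction methods such as the one in this record attempt to invert this distortion.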

  17. Polar lipids from oat kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Oat (Avena sativa L.) kernels appear to contain much higher polar lipid concentrations than other plant tissues. We have extracted, identified, and quantified polar lipids from 18 oat genotypes grown in replicated plots in three environments in order to determine genotypic or environmental variation...

  18. Accelerating the Original Profile Kernel

    PubMed Central

    Hamp, Tobias; Goldberg, Tatyana; Rost, Burkhard

    2013-01-01

    One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical applications in large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster, making the kernel possibly the top contender in terms of its speed-to-performance ratio. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel. PMID:23825697

  19. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We reason that a virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. To further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training sample can be used in our method. We use noised face images to obtain virtual face samples; the noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work thus offers a simple and feasible way to obtain virtual face samples: imposing Gaussian noise (and other types of noise) on the original training samples to obtain possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.

  20. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  1. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  2. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  3. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  4. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  5. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating the best possible manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of kernel selection in the analysis of high-dimensional medical images. PMID:25008538

  6. ibr: Iterative bias reduction multivariate smoothing

    SciTech Connect

    Hengartner, Nicholas W; Cornillon, Pierre-andre; Matzner - Lober, Eric

    2009-01-01

    Regression is a fundamental data analysis tool for relating a univariate response variable Y to a multivariate predictor X ∈ R^d from the observations (X_i, Y_i), i = 1,...,n. Traditional nonparametric regression uses the assumption that the regression function varies smoothly in the independent variable x to locally estimate the conditional expectation m(x) = E[Y|X = x]. The resulting vector of predicted values Ŷ_i at the observed covariates X_i is called a regression smoother, or simply a smoother, because the predicted values Ŷ_i are less variable than the original observations Y_i. Linear smoothers are linear in the response variable Y and are operationally written as m̂ = S_λY, where S_λ is an n x n smoothing matrix. The smoothing matrix S_λ typically depends on a tuning parameter, which we denote by λ, that governs the trade-off between the smoothness of the estimate and the goodness-of-fit of the smoother to the data by controlling the effective size of the local neighborhood over which the responses are averaged. We parameterize the smoothing matrix such that large values of λ are associated with smoothers that average over larger neighborhoods and produce very smooth curves, while small values of λ are associated with smoothers that average over smaller neighborhoods to produce a more wiggly curve that tends to interpolate the data. The parameter λ is the bandwidth for a kernel smoother, the span size for a running-mean or bin smoother, and the penalty factor for a spline smoother.
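    A minimal sketch of one such linear smoother, a Nadaraya-Watson estimate with a Gaussian kernel, showing the smoothing matrix S_λ and the bandwidth trade-off described above:

```python
import numpy as np

def smoother_matrix(x, lam):
    """n x n Nadaraya-Watson smoothing matrix S_lam with a Gaussian kernel;
    the linear smoother is m_hat = S_lam @ y, linear in the response y."""
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / lam) ** 2)
    return K / K.sum(axis=1, keepdims=True)   # rows sum to one

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x)
S_small = smoother_matrix(x, 0.02)   # small bandwidth: wiggly, near-interpolating
S_large = smoother_matrix(x, 0.5)    # large bandwidth: very smooth, high bias
fit_small = S_small @ y
fit_large = S_large @ y
```

On this noiseless sine the small-bandwidth smoother nearly interpolates while the large-bandwidth smoother averages the signal away, which is the bias side of the trade-off governed by λ.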

  7. Verification of Chare-kernel programs

    SciTech Connect

    Bhansali, S.; Kale, L.V. )

    1989-01-01

    Experience with concurrent programming has shown that concurrent programs can conceal bugs even after extensive testing. Thus, there is a need for practical techniques which can establish the correctness of parallel programs. This paper proposes a method for proving the partial correctness of programs written in the Chare-kernel language, which is a language designed to support the parallel execution of computations with irregular structures. The proof is based on the lattice proof technique and is divided into two parts. The first part is concerned with the program behavior within a single chare instance, whereas the second part captures the inter-chare interaction.

  8. Smooth, seamless, and structured grid generation with flexibility in resolution distribution on a sphere based on conformal mapping and the spring dynamics method

    NASA Astrophysics Data System (ADS)

    Iga, Shin-ichi

    2015-09-01

    A generation method for smooth, seamless, and structured triangular grids on a sphere with flexibility in resolution distribution is proposed. This method is applicable to many fields that deal with a sphere on which the required resolution is not uniform. The grids were generated using the spring dynamics method, and adjustments were made using analytical functions. The mesh topology determined its resolution distribution, derived from a combination of conformal mapping factors: polar stereographic projection (PSP), Lambert conformal conic projection (LCCP), and Mercator projection (MP). Their combination generated, for example, a tropically fine grid that had a nearly constant high-resolution belt around the equator, with a gradual decrease in resolution distribution outside of the belt. This grid can be applied to boundary-less simulations of tropical meteorology. The other example involves a regionally fine grid with a nearly constant high-resolution circular region and a gradually decreasing resolution distribution outside of the region. This is applicable to regional atmospheric simulations without grid nesting. The proposed grids are compatible with computer architecture because they possess a structured form. Each triangle of the proposed grids was highly regular, implying a high local isotropy in resolution. Finally, the proposed grids were examined by advection and shallow water simulations.

  9. Delimiting Areas of Endemism through Kernel Interpolation

    PubMed Central

    Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The new approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between each centroid and the farthest point of occurrence of that species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
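    The centroid-plus-influence-radius construction at the heart of GIE can be sketched with hypothetical occurrence records. The overlap test below is a simplified illustration of how overlapping areas of influence suggest a shared area of endemism, not the full kernel-interpolation workflow.

```python
import numpy as np

def centroid_and_radius(points):
    """Centroid of a species' occurrence points and its area of influence,
    taken (as in GIE) as the distance from the centroid to the farthest
    occurrence record of that species."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)
    r = np.linalg.norm(pts - c, axis=1).max()
    return c, r

# two hypothetical species with overlapping ranges
sp1 = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
sp2 = [(0.8, 0.8), (1.8, 0.8), (0.8, 1.8)]
(c1, r1), (c2, r2) = centroid_and_radius(sp1), centroid_and_radius(sp2)

# overlapping areas of influence hint at a shared area of endemism
overlap = np.linalg.norm(c1 - c2) < (r1 + r2)
```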

  10. Phase discontinuity predictions using a machine-learning trained kernel.

    PubMed

    Sawaf, Firas; Groves, Roger M

    2014-08-20

    Phase unwrapping is one of the key steps of interferogram analysis, and its accuracy relies primarily on the correct identification of phase discontinuities. This can be especially challenging for inherently noisy phase fields, such as those produced through shearography and other speckle-based interferometry techniques. We showed in a recent work how a relatively small 10×10 pixel kernel was trained, through machine learning methods, to predict the locations of phase discontinuities within noisy wrapped phase maps. Here we describe how this kernel can be applied in a sliding-window fashion, such that each pixel undergoes 100 phase-discontinuity examinations, one test for each of its possible positions relative to its neighbors within the kernel's extent. We explore how the resulting predictions can be accumulated and aggregated through a voting system, and demonstrate that the reliability of this method outperforms processing the image by segmenting it into more conventional 10×10 nonoverlapping tiles. When used in this way, our 10×10 pixel kernel is large enough for effective processing of full-field interferograms, avoiding the need for the substantially more formidable computational resources that training a significantly larger kernel would have required. PMID:25321117
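    The sliding-window voting scheme can be sketched as follows. The thresholding "kernel" below is a trivial stand-in for the machine-learning-trained predictor, and the step-edge image is synthetic; only the accumulate-and-vote mechanics mirror the record.

```python
import numpy as np

def sliding_vote(binary_pred_fn, image, k=10):
    """Apply a k x k kernel predictor in a sliding-window fashion and
    accumulate per-pixel votes: every pixel is examined once for each
    window covering it (up to k*k positions for interior pixels)."""
    H, W = image.shape
    votes = np.zeros((H, W))
    counts = np.zeros((H, W))
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            patch_pred = binary_pred_fn(image[i:i + k, j:j + k])
            votes[i:i + k, j:j + k] += patch_pred
            counts[i:i + k, j:j + k] += 1
    return votes / counts      # fraction of examinations voting "discontinuity"

# stand-in 'kernel': flags pixels above the patch mean as discontinuities
img = np.zeros((30, 30))
img[:, 15:] = 1.0              # vertical step edge
score = sliding_vote(lambda p: (p > p.mean()).astype(float), img, k=10)
```

For a 10×10 kernel each interior pixel is covered by 100 windows, matching the "100 phase-discontinuity examinations" described in the abstract.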

  11. Multiple kernel sparse representations for supervised and unsupervised learning.

    PubMed

    Thiagarajan, Jayaraman J; Ramamurthy, Karthikeyan Natesan; Spanias, Andreas

    2014-07-01

    In complex visual recognition tasks, it is typical to adopt multiple descriptors, which describe different aspects of the images, for obtaining an improved recognition performance. Descriptors that have diverse forms can be fused into a unified feature space in a principled manner using kernel methods. Sparse models that generalize well to the test data can be learned in the unified kernel space, and appropriate constraints can be incorporated for application in supervised and unsupervised learning. In this paper, we propose to perform sparse coding and dictionary learning in the multiple kernel space, where the weights of the ensemble kernel are tuned based on graph-embedding principles such that class discrimination is maximized. In our proposed algorithm, dictionaries are inferred using multiple levels of 1D subspace clustering in the kernel space, and the sparse codes are obtained using a simple levelwise pursuit scheme. Empirical results for object recognition and image clustering show that our algorithm outperforms existing sparse coding based approaches, and compares favorably to other state-of-the-art methods. PMID:24833593

  12. Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Theodoridis, Sergios

    2008-12-01

    Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.

  13. Thermal-to-visible face recognition using multiple kernel learning

    NASA Astrophysics Data System (ADS)

    Hu, Shuowen; Gurram, Prudhvi; Kwon, Heesung; Chan, Alex L.

    2014-06-01

    Recognizing faces acquired in the thermal spectrum from a gallery of visible face images is a desired capability for the military and homeland security, especially for nighttime surveillance and intelligence gathering. However, thermal-to-visible face recognition is a highly challenging problem due to the large modality gap between thermal and visible imaging. In this paper, we propose a thermal-to-visible face recognition approach based on multiple kernel learning (MKL) with support vector machines (SVMs). We first subdivide the face into non-overlapping spatial regions or blocks using a method based on coalitional game theory. For comparison purposes, we also investigate uniform spatial subdivisions. Following this subdivision, histogram of oriented gradients (HOG) features are extracted from each block and used to compute a kernel for each region. We apply sparse multiple kernel learning (SMKL), an MKL-based approach that learns a set of sparse kernel weights, as well as the decision function of a one-vs-all SVM classifier, for each of the subjects in the gallery. We also apply equal (non-sparse) kernel weights and obtain one-vs-all SVM models for the same subjects in the gallery. Only visible images of each subject are used for MKL training, while thermal images are used as probe images during testing. With the subdivision generated by game theory, we achieved a Rank-1 identification rate of 50.7% for SMKL and 93.6% for equal kernel weighting using a multimodal dataset of 65 subjects. With uniform subdivisions, we achieved a Rank-1 identification rate of 88.3% for SMKL, but 92.7% for equal kernel weighting.

  14. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing

    PubMed Central

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562

  16. Fast image search with locality-sensitive hashing and homogeneous kernels map.

    PubMed

    Li, Jun-yi; Li, Jian-hua

    2015-01-01

    Fast image search with efficient additive kernels and kernel locality-sensitive hashing is proposed. To preserve the kernel functions, recent work has explored methods of constructing locality-sensitive hashing that guarantee linear query time; however, existing methods do not fully solve the problems of the locality-sensitive hashing (LSH) algorithm, and they sacrifice accuracy of the search results in order to allow fast queries. To improve the search accuracy, we show how to apply explicit feature maps of the homogeneous kernels, which help in feature transformation, and combine them with kernel locality-sensitive hashing. We evaluate our method on several large datasets and show that it improves accuracy relative to commonly used methods, making object classification and content-based retrieval faster and more accurate. PMID:25893210
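
    The locality-sensitive hashing side of such a pipeline can be sketched with plain random-hyperplane LSH for cosine similarity (an illustrative baseline on made-up toy data; the paper's homogeneous-kernel feature map is not reproduced here):

```python
import numpy as np

# Random-hyperplane LSH for cosine similarity (illustrative baseline).
# Each hyperplane contributes one bit: the sign of the projection.
rng = np.random.default_rng(4)

def lsh_codes(X, planes):
    """Binary hash codes, one bit per random hyperplane."""
    return (X @ planes.T > 0).astype(np.uint8)

dim, n_bits = 64, 16
planes = rng.normal(size=(n_bits, dim))   # random hyperplane normals
X = rng.normal(size=(100, dim))           # toy database vectors
codes = lsh_codes(X, planes)

# A small perturbation of a database vector should collide on most bits,
# while an unrelated vector should differ on roughly half of them.
q = X[0]
near = q + 0.01 * rng.normal(size=dim)
far = rng.normal(size=dim)
ham_near = int(np.sum(lsh_codes(near[None, :], planes) != codes[0]))
ham_far = int(np.sum(lsh_codes(far[None, :], planes) != codes[0]))
```

    Nearby vectors agree on most hash bits, so candidate lookups can be restricted to colliding buckets instead of scanning the whole database.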

  17. Kernel-based variance component estimation and whole-genome prediction of pre-corrected phenotypes and progeny tests for dairy cow health traits

    PubMed Central

    Morota, Gota; Boddhireddy, Prashanth; Vukasinovic, Natascha; Gianola, Daniel; DeNise, Sue

    2014-01-01

    Prediction of complex trait phenotypes in the presence of unknown gene action is an ongoing challenge in animals, plants, and humans. Development of flexible predictive models that perform well irrespective of genetic and environmental architectures is desirable. Methods that can address non-additive variation in a non-explicit manner are gaining attention for this purpose and, in particular, semi-parametric kernel-based methods have been applied to diverse datasets, mostly providing encouraging results. On the other hand, the gains obtained from these methods have been smaller when smoothed values such as estimated breeding value (EBV) have been used as response variables. However, less emphasis has been placed on the choice of phenotypes to be used in kernel-based whole-genome prediction. This study aimed to evaluate differences between semi-parametric and parametric approaches using two types of response variables and molecular markers as inputs. Pre-corrected phenotypes (PCP) and EBV obtained for dairy cow health traits were used for this comparison. We observed that non-additive genetic variances were major contributors to total genetic variances in PCP, whereas additivity was the largest contributor to variability of EBV, as expected. Within the kernels evaluated, non-parametric methods yielded slightly better predictive performance across traits relative to their additive counterparts regardless of the type of response variable used. This reinforces the view that non-parametric kernels aiming to capture non-linear relationships between a panel of SNPs and phenotypes are appealing for complex trait prediction. However, like past studies, the gain in predictive correlation was not large for either PCP or EBV. We conclude that capturing non-additive genetic variation, especially epistatic variation, in a cross-validation framework remains a significant challenge even when it is important, as seems to be the case for health traits in dairy cows. PMID:24715901

  18. Kernel regression estimation of fiber orientation mixtures in diffusion MRI.

    PubMed

    Cabeen, Ryan P; Bastin, Mark E; Laidlaw, David H

    2016-02-15

    We present and evaluate a method for kernel regression estimation of fiber orientations and associated volume fractions for diffusion MR tractography and population-based atlas construction in clinical imaging studies of brain white matter. This is a model-based image processing technique in which representative fiber models are estimated from collections of component fiber models in model-valued image data. This extends prior work in nonparametric image processing and multi-compartment processing to provide computational tools for image interpolation, smoothing, and fusion with fiber orientation mixtures. In contrast to related work on multi-compartment processing, this approach is based on directional measures of divergence and includes data-adaptive extensions for model selection and bilateral filtering. This is useful for reconstructing complex anatomical features in clinical datasets analyzed with the ball-and-sticks model, and our framework's data-adaptive extensions are potentially useful for general multi-compartment image processing. We experimentally evaluate our approach with both synthetic data from computational phantoms and in vivo clinical data from human subjects. With synthetic data experiments, we evaluate performance based on errors in fiber orientation, volume fraction, compartment count, and tractography-based connectivity. With in vivo data experiments, we first show improved scan-rescan reproducibility and reliability of quantitative fiber bundle metrics, including mean length, volume, streamline count, and mean volume fraction. We then demonstrate the creation of a multi-fiber tractography atlas from a population of 80 human subjects. In comparison to single tensor atlasing, our multi-fiber atlas shows more complete features of known fiber bundles and includes reconstructions of the lateral projections of the corpus callosum and complex fronto-parietal connections of the superior longitudinal fasciculus I, II, and III. PMID:26691524

  19. A Novel Method for Quantifying Smooth Regional Variations in Myocardial Contractility Within an Infarcted Human Left Ventricle Based on Delay-Enhanced Magnetic Resonance Imaging.

    PubMed

    Genet, Martin; Chuan Lee, Lik; Ge, Liang; Acevedo-Bolton, Gabriel; Jeung, Nick; Martin, Alastair; Cambronero, Neil; Boyle, Andrew; Yeghiazarians, Yerem; Kozerke, Sebastian; Guccione, Julius M

    2015-08-01

    Heart failure is increasing at an alarming rate, making it a worldwide epidemic. As the population ages and life expectancy increases, this trend is not likely to change. Myocardial infarction (MI)-induced adverse left ventricular (LV) remodeling is responsible for nearly 70% of heart failure cases. The adverse remodeling process involves an extension of the border zone (BZ) adjacent to an MI, which is normally perfused but shows myofiber contractile dysfunction. To improve patient-specific modeling of cardiac mechanics, we sought to create a finite element model of the human LV with BZ and MI morphologies integrated directly from delayed-enhancement magnetic resonance (DE-MR) images. Instead of separating the LV into discrete regions (e.g., the MI, BZ, and remote regions) with each having a homogeneous myocardial material property, we assumed a functional relation between the DE-MR image pixel intensity and myocardial stiffness and contractility--we considered a linear variation of material properties as a function of DE-MR image pixel intensity, which is known to improve the accuracy of the model's response. The finite element model was then calibrated using measurements obtained from the same patient--namely, 3D strain measurements-using complementary spatial modulation of magnetization magnetic resonance (CSPAMM-MR) images. This led to an average circumferential strain error of 8.9% across all American Heart Association (AHA) segments. We demonstrate the utility of our method for quantifying smooth regional variations in myocardial contractility using cardiac DE-MR and CSPAMM-MR images acquired from a 78-yr-old woman who experienced an MI approximately 1 yr prior. We found a remote myocardial diastolic stiffness of C(0) = 0.102 kPa, and a remote myocardial contractility of T(max) = 146.9 kPa, which are both in the range of previously published normal human values. 
Moreover, we found a normalized pixel intensity range of 30% for the BZ, which is consistent with

  20. A Comparison of Methods for Nonparametric Estimation of Item Characteristic Curves for Binary Items

    ERIC Educational Resources Information Center

    Lee, Young-Sun

    2007-01-01

    This study compares the performance of three nonparametric item characteristic curve (ICC) estimation procedures: isotonic regression, smoothed isotonic regression, and kernel smoothing. Smoothed isotonic regression, employed along with an appropriate kernel function, provides better estimates and also satisfies the assumption of strict…
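
    For context, the kernel-smoothing estimator compared in such studies is typically of Nadaraya-Watson form; a minimal sketch on synthetic binary responses (illustrative data, bandwidth, and function names, not the study's procedure):

```python
import numpy as np

# Nadaraya-Watson kernel smoothing of binary item responses: the smoothed
# ICC at ability t is a kernel-weighted average of the 0/1 scores.
def nw_kernel_smooth(x, y, grid, h):
    # Gaussian kernel weights, shape (len(grid), len(x))
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

# Toy data: responses generated from a logistic (2PL-style) true ICC
rng = np.random.default_rng(0)
theta = rng.normal(size=2000)                       # examinee proficiencies
p_true = 1.0 / (1.0 + np.exp(-1.5 * (theta - 0.2)))
resp = (rng.random(2000) < p_true).astype(float)    # binary item scores

grid = np.linspace(-2.0, 2.0, 9)
icc_hat = nw_kernel_smooth(theta, resp, grid, h=0.3)
```

    Because the weights are positive and the responses are 0/1, the estimate stays in [0, 1] and tends to increase with ability when the true ICC does.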

  1. The Kernel Adaptive Autoregressive-Moving-Average Algorithm.

    PubMed

    Li, Kan; Príncipe, José C

    2016-02-01

    In this paper, we present a novel kernel adaptive recurrent filtering algorithm based on the autoregressive-moving-average (ARMA) model, which is trained with recurrent stochastic gradient descent in the reproducing kernel Hilbert spaces. This kernelized recurrent system, the kernel adaptive ARMA (KAARMA) algorithm, brings together the theories of adaptive signal processing and recurrent neural networks (RNNs), extending the current theory of kernel adaptive filtering (KAF) using the representer theorem to include feedback. Compared with classical feedforward KAF methods, the KAARMA algorithm provides general nonlinear solutions for complex dynamical systems in a state-space representation, with a deferred teacher signal, by propagating forward the hidden states. We demonstrate its capabilities to provide exact solutions with compact structures by solving a set of benchmark nondeterministic polynomial-complete problems involving grammatical inference. Simulation results show that the KAARMA algorithm outperforms equivalent input-space recurrent architectures using first- and second-order RNNs, demonstrating its potential as an effective learning solution for the identification and synthesis of deterministic finite automata. PMID:25935049
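
    For comparison, the classical feedforward KAF baseline that KAARMA extends is kernel least-mean-squares (KLMS); a minimal online sketch (simplified: no sparsification, illustrative step size and kernel width, not the paper's algorithm):

```python
import numpy as np

# Kernel least-mean-squares (KLMS): an online, feedforward kernel adaptive
# filter. Each sample becomes a center; coefficients are scaled prediction
# errors. Simplified sketch without sparsification/novelty criteria.
def klms(X, d, eta=0.5, gamma=1.0):
    centers, alphas = [], []
    preds = np.zeros(len(d))
    for i, x in enumerate(X):
        if centers:
            C = np.asarray(centers)
            k = np.exp(-gamma * np.sum((C - x) ** 2, axis=1))  # RBF kernel
            preds[i] = np.dot(alphas, k)
        err = d[i] - preds[i]
        centers.append(x)
        alphas.append(eta * err)
    return preds

# Learn y = sin(x) online from streaming samples
rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(400, 1))
d = np.sin(X[:, 0])
preds = klms(X, d)

mse_first = float(np.mean((d[:50] - preds[:50]) ** 2))
mse_last = float(np.mean((d[-50:] - preds[-50:]) ** 2))
```

    The prediction error shrinks as the filter accumulates centers, which is the behavior KAARMA generalizes to recurrent, state-space settings.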

  2. [Utilizable value of wild economic plant resource--acorn kernel].

    PubMed

    He, R; Wang, K; Wang, Y; Xiong, T

    2000-04-01

    Peking white breeding hens were selected, and the true metabolizable energy (TME) method was used to evaluate the available nutritive value of acorn kernel, with maize and rice as controls. The results showed that the contents of gross energy (GE), apparent metabolizable energy (AME), true metabolizable energy (TME) and crude protein (CP) in the acorn kernel were 16.53 MJ.kg-1, 11.13 MJ.kg-1, 11.66 MJ.kg-1 and 10.63%, respectively. The apparent availability and true availability of crude protein were 45.55% and 49.83%. The gross contents of 17 amino acids and of essential plus semiessential amino acids were 9.23% and 4.84%, respectively. The true availability of amino acid and the content of true available amino acid were 60.85% and 6.09%. The contents of tannin and hydrocyanic acid were 4.55% and 0.98% in acorn kernel. The available nutritive value of acorn kernel is similar to that of maize or slightly lower, but slightly higher than that of rice. Acorn kernel is a wild economic plant resource worth exploiting, but it contains relatively high levels of tannin and hydrocyanic acid. PMID:11767593

  3. SMOOTH MUSCLE STEM CELLS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Vascular smooth muscle cells (SMCs) originate from multiple types of progenitor cells. In the embryo, the most well-studied SMC progenitor is the cardiac neural crest stem cell. Smooth muscle differentiation in the neural crest lineage is controlled by a combination of cell intrinsic factors, includ...

  4. Low cost real time sorting of in-shell pistachio nuts from kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A high speed, non-destructive, low cost sorting machine has been developed to separate pistachio product streams. An optical method was used to differentiate pistachio kernels from in-shell nuts, with recognition rates of 97.9% for kernels (2.1% false negatives) and 99.3% for in-shell nuts (0.7% fa...

  5. Protein Analysis Meets Visual Word Recognition: A Case for String Kernels in the Brain

    ERIC Educational Resources Information Center

    Hannagan, Thomas; Grainger, Jonathan

    2012-01-01

    It has been recently argued that some machine learning techniques known as Kernel methods could be relevant for capturing cognitive and neural mechanisms (Jakel, Scholkopf, & Wichmann, 2009). We point out that "String kernels," initially designed for protein function prediction and spam detection, are virtually identical to one contending proposal…

  6. DFT calculations of molecular excited states using an orbital-dependent nonadiabatic exchange kernel

    SciTech Connect

    Ipatov, A. N.

    2010-02-15

    A density functional method for computing molecular excitation spectra is presented that uses a frequency-dependent kernel and takes into account the nonlocality of exchange interaction. Owing to its high numerical stability and the use of a nonadiabatic (frequency-dependent) exchange kernel, the proposed approach provides a qualitatively correct description of the asymptotic behavior of charge-transfer excitation energies.

  7. Resistant-starch Formation in High-amylose Maize Starch During Kernel Development

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this study was to understand the resistant-starch (RS) formation during the kernel development of high-amylose maize, GEMS-0067 line. RS content of the starch, determined using AOAC Method 991.43 for total dietary fiber, increased with kernel maturation and the increase in amylose/...

  8. Volterra kernels measurement and the application in post calibration of ADCs

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Yueyang, Chen; Shun'an, Zhong; Minshu, Ma

    As the proper choice for the digital post-calibration of Analog to Digital Converters (ADCs), the Volterra series has been developed on a firm mathematical and logical foundation for decades. The most important issue in the Volterra series method is how to measure the Volterra kernels. This paper describes how frequency-domain Volterra kernels can be effectively separated by compounding the Vandermonde matrix.
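
    The order-separation idea can be illustrated in its simplest time-domain form: for a memoryless Volterra system, responses measured at several input amplitudes form a Vandermonde system whose solution separates the kernels by order (a toy sketch with made-up coefficients, not the paper's frequency-domain procedure):

```python
import numpy as np

# Amplitude-scaling separation of Volterra kernels (illustrative sketch).
# For a memoryless 3rd-order system, the response to input a*x0 is
# sum_k h_k * (a*x0)^k, so responses at several amplitudes a form a
# Vandermonde system in a whose solution separates the orders.
h_true = np.array([2.0, -0.5, 0.3])              # h1, h2, h3 (toy values)

def system(x):
    return h_true[0] * x + h_true[1] * x**2 + h_true[2] * x**3

amps = np.array([0.5, 1.0, 1.5])                 # three probe amplitudes
x0 = 0.7                                         # fixed probe input
responses = system(amps * x0)

V = np.vander(amps, 4, increasing=True)[:, 1:]   # columns: a, a^2, a^3
c = np.linalg.solve(V, responses)                # c_k = h_k * x0^k
h_est = c / x0 ** np.arange(1, 4)                # recover the kernels
```

    With distinct amplitudes the Vandermonde matrix is invertible, so the kernel coefficients are recovered exactly in this noise-free toy case.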

  9. Straight-chain halocarbon forming fluids for TRISO fuel kernel production - Tests with yttria-stabilized zirconia microspheres

    NASA Astrophysics Data System (ADS)

    Baker, M. P.; King, J. C.; Gorman, B. P.; Braley, J. C.

    2015-03-01

    Current methods of TRISO fuel kernel production in the United States use a sol-gel process with trichloroethylene (TCE) as the forming fluid. After contact with radioactive materials, the spent TCE becomes a mixed hazardous waste, and high costs are associated with its recycling or disposal. Reducing or eliminating this mixed waste stream would not only benefit the environment, but would also enhance the economics of kernel production. Previous research yielded three candidates for testing as alternatives to TCE: 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane. This study considers the production of yttria-stabilized zirconia (YSZ) kernels in silicone oil and the three chosen alternative formation fluids, with subsequent characterization of the produced kernels and used forming fluid. Kernels formed in silicone oil and bromotetradecane were comparable to those produced by previous kernel production efforts, while those produced in chlorooctadecane and iodododecane experienced gelation issues leading to poor kernel formation and geometry.

  10. Diamond Smoothing Tools

    NASA Technical Reports Server (NTRS)

    Voronov, Oleg

    2007-01-01

    Diamond smoothing tools have been proposed for use in conjunction with diamond cutting tools that are used in many finish-machining operations. Diamond machining (including finishing) is often used, for example, in fabrication of precise metal mirrors. A diamond smoothing tool according to the proposal would have a smooth spherical surface. For a given finish machining operation, the smoothing tool would be mounted next to the cutting tool. The smoothing tool would slide on the machined surface left behind by the cutting tool, plastically deforming the surface material and thereby reducing the roughness of the surface, closing microcracks and otherwise generally reducing or eliminating microscopic surface and subsurface defects, and increasing the microhardness of the surface layer. It has been estimated that if smoothing tools of this type were used in conjunction with cutting tools on sufficiently precise lathes, it would be possible to reduce the roughness of machined surfaces to as little as 3 nm. A tool according to the proposal would consist of a smoothing insert in a metal holder. The smoothing insert would be made from a diamond/metal functionally graded composite rod preform, which, in turn, would be made by sintering together a bulk single-crystal or polycrystalline diamond, a diamond powder, and a metallic alloy at high pressure. To form the spherical smoothing tip, the diamond end of the preform would be subjected to flat grinding, conical grinding, spherical grinding using diamond wheels, and finally spherical polishing and/or buffing using diamond powders. If the diamond were a single crystal, then it would be crystallographically oriented, relative to the machining motion, to minimize its wear and maximize its hardness. Spherically polished diamonds could also be useful for purposes other than smoothing in finish machining: They would likely also be suitable for use as heat-resistant, wear-resistant, unlubricated sliding-fit bearing inserts.

  11. Estimations of the smoothing operator response characteristics

    NASA Technical Reports Server (NTRS)

    Yatskiv, Y. S.

    1974-01-01

    The mean response characteristic of the graphical smoothing method is discussed. The method is illustrated by analysis of latitude observations at Washington from 1915.9 to 1941.0. Spectral density, frequency distribution, and distribution functions are also discussed.

  12. Kernel Near Principal Component Analysis

    SciTech Connect

    MARTIN, SHAWN B.

    2002-07-01

    We propose a novel algorithm based on Principal Component Analysis (PCA). First, we present an interesting approximation of PCA using Gram-Schmidt orthonormalization. Next, we combine our approximation with the kernel functions from Support Vector Machines (SVMs) to provide a nonlinear generalization of PCA. After benchmarking our algorithm in the linear case, we explore its use in both the linear and nonlinear cases. We include applications to face data analysis, handwritten digit recognition, and fluid flow.
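
    A minimal sketch of standard kernel PCA (exact eigendecomposition of the centered Gram matrix, not the Gram-Schmidt approximation proposed here) may clarify the nonlinear generalization being approximated:

```python
import numpy as np

# Standard kernel PCA (sketch): eigendecompose the centered RBF Gram matrix
# and project the training points onto the leading components.
def kernel_pca(X, n_components, gamma=1.0):
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one     # center in feature space
    vals, vecs = np.linalg.eigh(Kc)                # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]    # take the largest
    vals, vecs = vals[idx], vecs[:, idx]
    return vecs * np.sqrt(np.maximum(vals, 0.0))   # projected coordinates

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))                       # toy data
Z = kernel_pca(X, 2)
```

    The projected coordinates are mutually orthogonal, as in linear PCA, but the components are nonlinear functions of the inputs.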

  13. Kernel CMAC with improved capability.

    PubMed

    Horváth, Gábor; Szabó, Tamás

    2007-02-01

    The cerebellar model articulation controller (CMAC) has some attractive features, namely fast learning capability and the possibility of efficient digital hardware implementation. Although CMAC was proposed many years ago, several open questions have been left even for today. The most important ones are about its modeling and generalization capabilities. The limits of its modeling capability were addressed in the literature, and recently, certain questions of its generalization property were also investigated. This paper deals with both the modeling and the generalization properties of CMAC. First, a new interpolation model is introduced. Then, a detailed analysis of the generalization error is given, and an analytical expression of this error for some special cases is presented. It is shown that this generalization error can be rather significant, and a simple regularized training algorithm to reduce this error is proposed. The results related to the modeling capability show that there are differences between the one-dimensional (1-D) and the multidimensional versions of CMAC. This paper discusses the reasons of this difference and suggests a new kernel-based interpretation of CMAC. The kernel interpretation gives a unified framework. Applying this approach, both the 1-D and the multidimensional CMACs can be constructed with similar modeling capability. Finally, this paper shows that the regularized training algorithm can be applied for the kernel interpretations too, which results in a network with significantly improved approximation capabilities. PMID:17278566

  14. RKRD: Runtime Kernel Rootkit Detection

    NASA Astrophysics Data System (ADS)

    Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.

    In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.

  15. Iris Image Blur Detection with Multiple Kernel Learning

    NASA Astrophysics Data System (ADS)

    Pan, Lili; Xie, Mei; Mao, Ling

    In this letter, we analyze the influence of motion and out-of-focus blur on both frequency spectrum and cepstrum of an iris image. Based on their characteristics, we define two new discriminative blur features represented by Energy Spectral Density Distribution (ESDD) and Singular Cepstrum Histogram (SCH). To merge the two features for blur detection, a merging kernel which is a linear combination of two kernels is proposed when employing Support Vector Machine. Extensive experiments demonstrate the validity of our method by showing the improved blur detection performance on both synthetic and real datasets.

  16. Kernel approximate Bayesian computation in population genetic inferences.

    PubMed

    Nakagome, Shigeki; Fukumizu, Kenji; Mano, Shuhei

    2013-12-01

    Approximate Bayesian computation (ABC) is a likelihood-free approach for Bayesian inferences based on a rejection algorithm method that applies a tolerance of dissimilarity between summary statistics from observed and simulated data. Although several improvements to the algorithm have been proposed, none of these improvements avoid the following two sources of approximation: 1) lack of sufficient statistics: sampling is not from the true posterior density given data but from an approximate posterior density given summary statistics; and 2) non-zero tolerance: sampling from the posterior density given summary statistics is achieved only in the limit of zero tolerance. The first source of approximation can be improved by adding a summary statistic, but an increase in the number of summary statistics could introduce additional variance caused by the low acceptance rate. Consequently, many researchers have attempted to develop techniques to choose informative summary statistics. The present study evaluated the utility of a kernel-based ABC method [Fukumizu, K., L. Song and A. Gretton (2010): "Kernel Bayes' rule: Bayesian inference with positive definite kernels," arXiv, 1009.5736 and Fukumizu, K., L. Song and A. Gretton (2011): "Kernel Bayes' rule. Advances in Neural Information Processing Systems 24." In: J. Shawe-Taylor and R. S. Zemel and P. Bartlett and F. Pereira and K. Q. Weinberger, (Eds.), pp. 1549-1557., NIPS 24: 1549-1557] for complex problems that demand many summary statistics. Specifically, kernel ABC was applied to population genetic inference. We demonstrate that, in contrast to conventional ABCs, kernel ABC can incorporate a large number of summary statistics while maintaining high performance of the inference. PMID:24150124
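
    The baseline that kernel ABC improves on is plain rejection ABC; a minimal sketch for inferring a Gaussian mean from one summary statistic (illustrative prior, tolerance, and data, not the paper's kernel-based method):

```python
import numpy as np

# Plain rejection ABC (sketch): infer the mean of a N(theta, 1) model from
# the sample mean as the single summary statistic. Draws from the prior are
# kept when the simulated summary lands within a tolerance of the observed one.
rng = np.random.default_rng(2)
observed = rng.normal(loc=3.0, scale=1.0, size=100)
s_obs = observed.mean()

n_draws, tol = 20000, 0.05
theta = rng.uniform(0.0, 6.0, size=n_draws)   # draws from a uniform prior
# Simulate the summary directly: the sample mean of 100 unit-variance
# draws is distributed N(theta, 1/sqrt(100)).
s_sim = rng.normal(loc=theta, scale=0.1)
accepted = theta[np.abs(s_sim - s_obs) < tol]
posterior_mean = float(accepted.mean())
```

    The accepted draws approximate the posterior given the summary statistic; kernel ABC replaces the hard accept/reject step with kernel-based weighting, which scales better as summary statistics are added.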

  17. Broadband Waveform Sensitivity Kernels for Large-Scale Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Stähler, S. C.; van Driel, M.; Hosseini, K.; Auer, L.; Sigloch, K.

    2015-12-01

    Seismic sensitivity kernels, i.e. the basis for mapping misfit functionals to structural parameters in seismic inversions, have received much attention in recent years. Their computation has been conducted via ray-theory based approaches (Dahlen et al., 2000) or fully numerical solutions based on the adjoint-state formulation (e.g. Tromp et al., 2005). The core problem is the exuberant computational cost due to the large number of source-receiver pairs, each of which require solutions to the forward problem. This is exacerbated in the high-frequency regime where numerical solutions become prohibitively expensive. We present a methodology to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (abstract ID# 77891, www.axisem.info), and thus on spherically symmetric models. As a consequence of this method's numerical efficiency even in high-frequency regimes, kernels can be computed in a time- and frequency-dependent manner, thus providing the full generic mapping from perturbed waveform to perturbed structure. Such waveform kernels can then be used for a variety of misfit functions, structural parameters and refiltered into bandpasses without recomputing any wavefields. A core component of the kernel method presented here is the mapping from numerical wavefields to inversion meshes. This is achieved by a Monte-Carlo approach, allowing for convergent and controllable accuracy on arbitrarily shaped tetrahedral and hexahedral meshes. We test and validate this accuracy by comparing to reference traveltimes, show the projection onto various locally adaptive inversion meshes and discuss computational efficiency for ongoing tomographic applications in the range of millions of observed body-wave data between periods of 2-30s.

  18. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    PubMed

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, the current kernel learning approaches are based on local optimization techniques, and hard to have good time performances, especially for large datasets. Thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through using a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. And the objective programming problem can then be converted to a SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which need not repeat the searching procedure with different starting points to locate the best local minimum. Also, the proposed method can be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets, and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency performance and good classification performance. PMID:24929345
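
    The kernel target alignment criterion itself is simple to compute; the sketch below scores candidate Gaussian kernel widths by alignment on toy two-class data (a simple grid search for illustration, not the paper's global optimization):

```python
import numpy as np

# Kernel target alignment: A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F).
# Higher alignment means the kernel's similarity structure matches the labels.
def target_alignment(K, y):
    Y = np.outer(y, y)
    return float(np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y)))

def rbf(X, gamma):
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

# Two well-separated toy classes; score a small grid of kernel widths.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2.0, 0.5, (30, 2)), rng.normal(2.0, 0.5, (30, 2))])
y = np.array([-1.0] * 30 + [1.0] * 30)

gammas = [0.01, 0.1, 1.0, 10.0]
scores = [target_alignment(rbf(X, g), y) for g in gammas]
best_gamma = gammas[int(np.argmax(scores))]
```

    The paper's contribution is to optimize this criterion globally over the kernel parameter rather than scanning a grid or relying on local search.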

  19. Physics Integration KErnels (PIKE)

    Energy Science and Technology Software Center (ESTSC)

    2014-07-31

    Pike is a software library for coupling and solving multiphysics applications. It provides basic interfaces and utilities for performing code-to-code coupling. It provides simple “black-box” Picard iteration methods for solving the coupled system of equations including Jacobi and Gauss-Seidel solvers. Pike was developed originally to couple neutronics and thermal fluids codes to simulate a light water nuclear reactor for the Consortium for Simulation of Light-water Reactors (CASL) DOE Energy Innovation Hub. The Pike library contains no physics and just provides interfaces and utilities for coupling codes. It will be released open source under a BSD license as part of the Trilinos solver framework (trilinos.org) which is also BSD. This code provides capabilities similar to other open source multiphysics coupling libraries such as LIME, AMP, and MOOSE.

  20. Physics Integration KErnels (PIKE)

    SciTech Connect

    Pawlowski, Roger

    2014-07-31

    Pike is a software library for coupling and solving multiphysics applications. It provides basic interfaces and utilities for performing code-to-code coupling. It provides simple “black-box” Picard iteration methods for solving the coupled system of equations including Jacobi and Gauss-Seidel solvers. Pike was developed originally to couple neutronics and thermal fluids codes to simulate a light water nuclear reactor for the Consortium for Simulation of Light-water Reactors (CASL) DOE Energy Innovation Hub. The Pike library contains no physics and just provides interfaces and utilities for coupling codes. It will be released open source under a BSD license as part of the Trilinos solver framework (trilinos.org) which is also BSD. This code provides capabilities similar to other open source multiphysics coupling libraries such as LIME, AMP, and MOOSE.
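
    Black-box Picard coupling of the kind Pike provides can be illustrated on a toy pair of coupled "codes" solved by Gauss-Seidel iteration (a hypothetical sketch, not Pike's actual API):

```python
import math

# Gauss-Seidel Picard iteration on a toy coupled system standing in for two
# black-box physics codes (hypothetical example, not Pike's API):
#   code 1:  x = cos(y)
#   code 2:  y = x / 3
def solve_gauss_seidel(tol=1e-10, max_iter=100):
    x, y = 0.0, 0.0
    for i in range(max_iter):
        x_new = math.cos(y)        # code 1 runs with the latest y
        y_new = x_new / 3.0        # code 2 immediately sees the new x
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            return x_new, y_new, i + 1
        x, y = x_new, y_new
    return x, y, max_iter

x, y, iters = solve_gauss_seidel()
```

    A Jacobi variant would evaluate both codes from the previous iterate before exchanging data; Gauss-Seidel reuses the freshest values within a sweep and typically converges in fewer iterations.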

  1. Popping the Kernel Modeling the States of Matter

    ERIC Educational Resources Information Center

    Hitt, Austin; White, Orvil; Hanson, Debbie

    2005-01-01

    This article discusses how to use popcorn to engage students in model building and to teach them about the nature of matter. Popping kernels is a simple and effective method to connect the concepts of heat, motion, and volume with the different phases of matter. Before proceeding with the activity the class should discuss the nature of scientific…

  2. Metabolite identification through multiple kernel learning on fragmentation trees

    PubMed Central

    Shen, Huibin; Dührkop, Kai; Böcker, Sebastian; Rousu, Juho

    2014-01-01

    Motivation: Metabolite identification from tandem mass spectrometric data is a key task in metabolomics. Various computational methods have been proposed for the identification of metabolites from tandem mass spectra. Fragmentation tree methods explore the space of possible ways in which the metabolite can fragment, and base the metabolite identification on scoring of these fragmentation trees. Machine learning methods have been used to map mass spectra to molecular fingerprints; predicted fingerprints, in turn, can be used to score candidate molecular structures. Results: Here, we combine fragmentation tree computations with kernel-based machine learning to predict molecular fingerprints and identify molecular structures. We introduce a family of kernels capturing the similarity of fragmentation trees, and combine these kernels using recently proposed multiple kernel learning approaches. Experiments on two large reference datasets show that the new methods significantly improve molecular fingerprint prediction accuracy. These improvements result in better metabolite identification, doubling the number of metabolites ranked at the top position of the candidates list. Contact: huibin.shen@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931979

  3. Molecular method for sex identification of half-smooth tongue sole (Cynoglossus semilaevis) using a novel sex-linked microsatellite marker.

    PubMed

    Liao, Xiaolin; Xu, Genbo; Chen, Song-Lin

    2014-01-01

    Half-smooth tongue sole (Cynoglossus semilaevis) is one of the most important flatfish species for aquaculture in China. To produce a monosex population, we attempted to develop a marker-assisted sex control technique in this sexually size dimorphic fish. In this study, we identified a co-dominant sex-linked marker (i.e., CyseSLM) by screening genomic microsatellites and further developed a novel molecular method for sex identification in the tongue sole. CyseSLM has a sequence similarity of 73%-75% with stickleback, medaka, Fugu and Tetraodon. At this locus, two alleles (i.e., A244 and A234) were amplified from 119 tongue sole individuals with primer pairs CyseSLM-F1 and CyseSLM-R. Allele A244 was present in all individuals, while allele A234 (female-associated allele, FAA) was mostly present in females with exceptions in four male individuals. Compared with the sequence of A244, A234 has a 10-bp deletion and 28 SNPs. A specific primer (CyseSLM-F2) was then designed based on the A234 sequence, which amplified a 204 bp fragment in all females and four males with primer CyseSLM-R. A time-efficient multiplex PCR program was developed using primers CyseSLM-F2, CyseSLM-R and the newly designed primer CyseSLM-F3. The multiplex PCR products with co-dominant pattern could be detected by agarose gel electrophoresis, which accurately identified the genetic sex of the tongue sole. Therefore, we have developed a rapid and reliable method for sex identification in tongue sole with a newly identified sex-linked microsatellite marker. PMID:25054319

  4. A Novel Method for Differentiation of Human Mesenchymal Stem Cells into Smooth Muscle-Like Cells on Clinically Deliverable Thermally Induced Phase Separation Microspheres

    PubMed Central

    Parmar, Nina; Ahmadi, Raheleh

    2015-01-01

    Muscle degeneration is a prevalent disease, particularly in aging societies where it has a huge impact on quality of life and incurs colossal health costs. Suitable donor sources of smooth muscle cells are limited and minimally invasive therapeutic approaches are sought that will augment muscle volume by delivering cells to damaged or degenerated areas of muscle. For the first time, we report the use of highly porous microcarriers produced using thermally induced phase separation (TIPS) to expand and differentiate adipose-derived mesenchymal stem cells (AdMSCs) into smooth muscle-like cells in a format that requires minimal manipulation before clinical delivery. AdMSCs readily attached to the surface of TIPS microcarriers and proliferated while maintained in suspension culture for 12 days. Switching the incubation medium to a differentiation medium containing 2 ng/mL transforming growth factor beta-1 resulted in a significant increase in both the mRNA and protein expression of cell contractile apparatus components caldesmon, calponin, and myosin heavy chains, indicative of a smooth muscle cell-like phenotype. Growth of smooth muscle cells on the surface of the microcarriers caused no change to the integrity of the polymer microspheres making them suitable for a cell-delivery vehicle. Our results indicate that TIPS microspheres provide an ideal substrate for the expansion and differentiation of AdMSCs into smooth muscle-like cells as well as a microcarrier delivery vehicle for the attached cells ready for therapeutic applications. PMID:25205072

  5. Image texture analysis of crushed wheat kernels

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Martin, C. R.; Steele, James L.; Dempster, Richard E.

    1992-03-01

    The development of new approaches for wheat hardness assessment may impact the grain industry in marketing, milling, and breeding. This study used image texture features for wheat hardness evaluation. Application of digital imaging to grain for grading purposes is principally based on morphometrical (shape and size) characteristics of the kernels. A composite sample of 320 kernels from 17 wheat varieties was collected after testing and crushing with a single-kernel hardness characterization meter. Six wheat classes were represented: HRW, HRS, SRW, SWW, Durum, and Club. In this study, parameters which characterize texture, or the spatial distribution of gray levels of an image, were determined and used to classify images of crushed wheat kernels. The texture parameters of crushed wheat kernel images differed with the class, hardness, and variety of the wheat. Image texture analysis of crushed wheat kernels showed promise for use in class, hardness, milling quality, and variety discrimination.
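    Texture parameters characterizing the spatial distribution of gray levels can be sketched with a minimal gray-level co-occurrence matrix (GLCM). The abstract does not specify the paper's exact feature set, so the three Haralick-style features below are generic examples (integer-valued images assumed).

    ```python
    import numpy as np

    def glcm(img, levels, dx=1, dy=0):
        """Normalised co-occurrence matrix of gray-level pairs at offset (dy, dx)."""
        P = np.zeros((levels, levels))
        h, w = img.shape
        for i in range(h - dy):
            for j in range(w - dx):
                P[img[i, j], img[i + dy, j + dx]] += 1
        return P / P.sum()

    def texture_features(P):
        """Three classic texture parameters computed from a GLCM."""
        i, j = np.indices(P.shape)
        return {"contrast": float(np.sum(P * (i - j) ** 2)),
                "homogeneity": float(np.sum(P / (1.0 + np.abs(i - j)))),
                "energy": float(np.sum(P ** 2))}
    ```

    A perfectly uniform image has zero contrast and maximal energy, while a checkerboard maximises horizontal contrast.
    
    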

  6. Single aflatoxin contaminated corn kernel analysis with fluorescence hyperspectral image

    NASA Astrophysics Data System (ADS)

    Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Ononye, Ambrose; Brown, Robert L.; Cleveland, Thomas E.

    2010-04-01

    Aflatoxins are toxic secondary metabolites of the fungi Aspergillus flavus and Aspergillus parasiticus, among others. Aflatoxin-contaminated corn is toxic to domestic animals when ingested in feed and is a known carcinogen associated with liver and lung cancer in humans. Consequently, aflatoxin levels in food and feed are regulated by the Food and Drug Administration (FDA) in the US, with limits of 20 ppb (parts per billion) in food and 100 ppb in feed for interstate commerce. Currently, aflatoxin detection and quantification methods are based on analytical tests including thin-layer chromatography (TLC) and high performance liquid chromatography (HPLC). These analytical tests require the destruction of samples, and are costly and time consuming. Thus, the ability to detect aflatoxin in a rapid, nondestructive way is crucial to the grain industry, particularly to the corn industry. Hyperspectral imaging technology offers a non-invasive approach toward screening for food safety inspection and quality control based on spectral signatures. The focus of this paper is to classify aflatoxin-contaminated single corn kernels using fluorescence hyperspectral imagery. Field-inoculated corn kernels were used in the study. Contaminated and control kernels under long-wavelength ultraviolet excitation were imaged using a visible near-infrared (VNIR) hyperspectral camera. The imaged kernels were chemically analyzed to provide reference information for image analysis. This paper describes a procedure to process corn kernels located in different images for statistical training and classification. Two classification algorithms, Maximum Likelihood and Binary Encoding, were used to classify each corn kernel as "control" or "contaminated" through pixel classification. The Binary Encoding approach had a slightly better performance, with accuracies of 87% and 88% when 20 ppb and 100 ppb, respectively, were used as the classification threshold.
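    Of the two classifiers named above, Maximum Likelihood can be sketched generically as per-pixel Gaussian discrimination. This is a simplified stand-in, not the authors' implementation; all names and shapes are assumptions.

    ```python
    import numpy as np

    def fit_class(pixels):
        """Per-class Gaussian statistics from training pixel spectra (n, bands)."""
        mu = pixels.mean(axis=0)
        cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(pixels.shape[1])
        return mu, cov

    def log_likelihood(x, mu, cov):
        """Log of the multivariate Gaussian density at spectrum x (up to a constant)."""
        d = x - mu
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (logdet + d @ np.linalg.solve(cov, d))

    def classify(x, classes):
        """Assign pixel spectrum x to the class with highest Gaussian likelihood."""
        return int(np.argmax([log_likelihood(x, mu, cov) for mu, cov in classes]))
    ```

    Kernel-level labels would then follow from a vote or threshold over the per-pixel decisions.
    
    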

  7. Near-infrared spectroscopic method for the identification of Fusarium head blight damage and prediction of deoxynivalenol in single wheat kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Fusarium Head Blight (FHB), or scab, can result in significant crop yield losses and contaminated grain in wheat (Triticum aestivum L.). Growing less susceptible varieties is one of the most effective methods for managing FHB and for reducing deoxynivalenol (DON) levels in grain, but breeding progra...

  8. FABRICATION PROCESS AND PRODUCT QUALITY IMPROVEMENTS IN ADVANCED GAS REACTOR UCO KERNELS

    SciTech Connect

    Charles M Barnes

    2008-09-01

    A major element of the Advanced Gas Reactor (AGR) program is developing fuel fabrication processes to produce high quality uranium-containing kernels, TRISO-coated particles and fuel compacts needed for planned irradiation tests. The goals of the AGR program also include developing the fabrication technology to mass produce this fuel at low cost. Kernels for the first AGR test (AGR-1) consisted of uranium oxycarbide (UCO) microspheres that were produced by an internal gelation process followed by high temperature steps to convert the UO3 + C “green” microspheres to first UO2 + C and then UO2 + UCx. The high temperature steps also densified the kernels. Babcock and Wilcox (B&W) fabricated UCO kernels for the AGR-1 irradiation experiment, which went into the Advanced Test Reactor (ATR) at Idaho National Laboratory in December 2006. An evaluation of the kernel process following AGR-1 kernel production led to several recommendations to improve the fabrication process. These recommendations included testing alternative methods of dispersing carbon during broth preparation, evaluating the method of broth mixing, optimizing the broth chemistry, optimizing sintering conditions, and demonstrating fabrication of larger diameter UCO kernels needed for the second AGR irradiation test. Based on these recommendations and requirements, a test program was defined and performed. Certain portions of the test program were performed by Oak Ridge National Laboratory (ORNL), while tests at larger scale were performed by B&W. The tests at B&W have demonstrated improvements in both kernel properties and process operation. Changes in the form of carbon black used and the method of mixing the carbon prior to forming kernels led to improvements in the phase distribution in the sintered kernels, greater consistency in kernel properties, a reduction in forming run time, and simplifications to the forming process. Process parameter variation tests in both forming and sintering steps led

  9. Privacy preserving RBF kernel support vector machine.

    PubMed

    Li, Haoran; Xiong, Li; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2014-01-01

    Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed, but they neither consider the characteristics of biomedical data nor make full use of the available information, which often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid privacy-preserving differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs that are built from private data. Our method demonstrated very close performance metrics compared to nonprivate SVMs trained on the private data. PMID:25013805

  10. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions

    NASA Astrophysics Data System (ADS)

    Song, Bongyong; Park, Justin C.; Song, William Y.

    2014-11-01

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires ‘at most one function evaluation’ in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a ‘smoothed TV’ or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3D CBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied it to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on an NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using the Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence properties compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs the standard, 364-projection FDK reconstruction quality image using only 60 projections. We then applied the algorithm to the clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces visibly equivalent quality CBCT
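    The Barzilai-Borwein 2-point step size at the heart of the method can be sketched on an unconstrained quadratic. The projection and selective function evaluation steps of GPBB-SFE are omitted here; this is an illustration under assumed parameters, not the authors' code.

    ```python
    import numpy as np

    def bb_gradient_descent(grad, x0, iters=50, alpha0=1e-3):
        """Gradient descent with the BB1 two-point step size."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        alpha = alpha0
        for _ in range(iters):
            x_new = x - alpha * g
            g_new = grad(x_new)
            s, y = x_new - x, g_new - g
            denom = s @ y
            # BB1 step alpha = s^T s / s^T y mimics the inverse local curvature
            alpha = (s @ s) / denom if abs(denom) > 1e-15 else alpha0
            x, g = x_new, g_new
        return x

    # Quadratic f(x) = 0.5 x^T A x - b^T x; the minimiser solves A x = b
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    x_star = bb_gradient_descent(lambda x: A @ x - b, np.zeros(2))
    ```

    The appeal of the BB step is that it achieves quasi-Newton-like speed while needing only gradients, which is why avoiding extra function evaluations (as GPBB-SFE does) matters for the overall cost.
    
    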

  12. Molecular Hydrodynamics from Memory Kernels.

    PubMed

    Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin

    2016-04-01

    The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^{-3/2}. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius. PMID:27104730

  13. Spline-Based Smoothing of Airfoil Curvatures

    NASA Technical Reports Server (NTRS)

    Li, W.; Krist, S.

    2008-01-01

    Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS, as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting error tolerance. CFACS has been
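    The deviation-from-smoothness measure described above (the sum of squared jumps in the third derivative of a cubic-spline interpolant) can be sketched as follows; the function name is illustrative, not from CFACS itself.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def third_derivative_jump_measure(x, y):
        """Sum of squared jumps of the cubic-spline third derivative at the knots."""
        cs = CubicSpline(x, y)
        # cs.c[0] holds the leading (cubic) coefficient on each interval, so the
        # piecewise-constant third derivative there is 6 * cs.c[0].
        d3 = 6.0 * cs.c[0]
        return float(np.sum(np.diff(d3) ** 2))
    ```

    Data sampled from a single cubic polynomial yields a measure of essentially zero, while noisy data produces large third-derivative jumps, which is exactly what the smoothing formulation penalises.
    
    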

  14. Kernel-based least squares policy iteration for reinforcement learning.

    PubMed

    Xu, Xin; Hu, Dewen; Lu, Xicheng

    2007-07-01

    In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge on dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that address the main difficulties of existing approaches. One is the better convergence and (near) optimality guarantee obtained by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantees for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called the acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information of uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating
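    The ALD-based sparsification step mentioned above admits a compact sketch: a sample joins the kernel dictionary only if its feature-space image cannot be approximated by the current dictionary members. The threshold nu, the Gaussian kernel, and all names below are illustrative assumptions, not the paper's exact procedure.

    ```python
    import numpy as np

    def rbf(x, y, gamma=1.0):
        """Gaussian (RBF) kernel between two sample vectors."""
        return np.exp(-gamma * np.sum((x - y) ** 2))

    def ald_dictionary(samples, nu=0.1, gamma=1.0):
        """Greedy dictionary: admit x only if its ALD residual exceeds nu."""
        dictionary = [samples[0]]
        for x in samples[1:]:
            K = np.array([[rbf(a, b, gamma) for b in dictionary] for a in dictionary])
            k = np.array([rbf(a, x, gamma) for a in dictionary])
            coeffs = np.linalg.solve(K + 1e-10 * np.eye(len(dictionary)), k)
            delta = rbf(x, x, gamma) - k @ coeffs   # feature-space approximation error
            if delta > nu:
                dictionary.append(x)
        return dictionary
    ```

    Near-duplicate samples are rejected while genuinely novel states are admitted, which is what keeps the KLSTD-Q solution sparse.
    
    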

  15. Smoothed particle hydrodynamics with smoothed pseudo-density

    NASA Astrophysics Data System (ADS)

    Yamamoto, Satoko; Saitoh, Takayuki R.; Makino, Junichiro

    2015-06-01

    In this paper, we present a new formulation of smoothed particle hydrodynamics (SPH), which, unlike the standard SPH (SSPH), is well behaved at the contact discontinuity. The SSPH scheme cannot handle discontinuities in density (e.g., the contact discontinuity and the free surface), because it requires that the density of fluid is positive and continuous everywhere. Thus there is inconsistency in the formulation of the SSPH scheme at discontinuities of the fluid density. To solve this problem, we introduce a new quantity associated with particles and the "density" of that quantity. This "density" evolves through the usual continuity equation with an additional artificial diffusion term, in order to guarantee the continuity of the "density." We use this "density," or pseudo-density, instead of the mass density, to formulate our SPH scheme. We call our new method SPH with smoothed pseudo-density, and we show that it is physically consistent and can handle discontinuities quite well.
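    For reference, the conventional SPH mass-density summation that the pseudo-density scheme modifies looks like this in 1-D. This is a generic textbook sketch with a standard cubic-spline smoothing kernel, not the authors' code.

    ```python
    import numpy as np

    def cubic_spline_kernel(q):
        """Normalised 1-D cubic spline kernel with compact support |q| < 2."""
        q = np.abs(q)
        w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
            np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
        return (2.0 / 3.0) * w   # sigma = 2/3 normalises the kernel in 1-D

    def sph_density(positions, masses, h):
        """Standard SPH summation rho_i = sum_j m_j W((x_i - x_j) / h) / h."""
        q = (positions[:, None] - positions[None, :]) / h
        return (cubic_spline_kernel(q) * masses[None, :]).sum(axis=1) / h
    ```

    For uniformly spaced particles of equal mass this recovers the uniform density in the interior; the failure of this estimate at density discontinuities is precisely what motivates the smoothed pseudo-density formulation above.
    
    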

  16. Computer-aided identification of the water diffusion coefficient for maize kernels dried in a thin layer

    NASA Astrophysics Data System (ADS)

    Kujawa, Sebastian; Weres, Jerzy; Olek, Wiesław

    2016-07-01

    Uncertainties in mathematical modelling of water transport in cereal grain kernels during drying and storage are mainly due to implementing unreliable values of the water diffusion coefficient and simplifying the geometry of kernels. In the present study an attempt was made to reduce the uncertainties by developing a method for computer-aided identification of the water diffusion coefficient and more accurate 3D geometry modelling for individual kernels using original inverse finite element algorithms. The approach was exemplified by identifying the water diffusion coefficient for maize kernels subjected to drying. On the basis of the developed method, values of the water diffusion coefficient were estimated, 3D geometry of a maize kernel was represented by isoparametric finite elements, and the moisture content inside maize kernels dried in a thin layer was predicted. Validation of the results against experimental data showed significantly lower error values than in the case of results obtained for the water diffusion coefficient values available in the literature.
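    A much-simplified version of the identification task can be sketched by fitting the leading term of the analytical sphere-drying solution to measured moisture ratios with a least-squares grid search. The paper instead uses inverse finite-element algorithms on true 3-D kernel geometry, so everything below (the spherical model, radius, and candidate grid) is an illustrative assumption.

    ```python
    import numpy as np

    def moisture_ratio(t, D, r):
        """Leading term of the analytical solution for diffusion out of a sphere."""
        return (6.0 / np.pi ** 2) * np.exp(-np.pi ** 2 * D * t / r ** 2)

    def identify_D(t, mr_measured, r, candidates):
        """Pick the candidate diffusion coefficient with least squared error."""
        errors = [np.sum((moisture_ratio(t, D, r) - mr_measured) ** 2)
                  for D in candidates]
        return candidates[int(np.argmin(errors))]
    ```

    With synthetic drying data generated from a known coefficient, the grid search recovers that coefficient, which is the basic sanity check any inverse-identification scheme must pass.
    
    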

  17. A novel kernel extreme learning machine algorithm based on self-adaptive artificial bee colony optimisation strategy

    NASA Astrophysics Data System (ADS)

    Ma, Chao; Ouyang, Jihong; Chen, Hui-Ling; Ji, Jin-Chao

    2016-04-01

    In this paper, we propose a novel learning algorithm, named SABC-MKELM, based on a kernel extreme learning machine (KELM) method for single-hidden-layer feedforward networks. In SABC-MKELM, a combination of Gaussian kernels is used as the activation function of KELM instead of simple fixed kernel learning, where the related parameters of the kernels and the weights of the kernels can be optimised simultaneously by a novel self-adaptive artificial bee colony (SABC) approach. SABC-MKELM outperforms six other state-of-the-art approaches in general, as it can effectively determine solution-updating strategies and suitable parameters to produce a flexible kernel function within SABC. Simulations have demonstrated that the proposed algorithm not only self-adaptively determines suitable parameters and solution-updating strategies by learning from previous experiences, but also achieves better generalisation performance than several related methods, and the results show good stability of the proposed algorithm.
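    The closed-form KELM solution that SABC-MKELM builds on can be sketched with a single Gaussian kernel. The multi-kernel combination and SABC optimisation are omitted, and the parameter values below are assumptions, not the paper's settings.

    ```python
    import numpy as np

    def gaussian_kernel(A, B, gamma):
        """Pairwise Gaussian kernel matrix between row-sample arrays A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)

    class KELM:
        """Kernel ELM: output weights solve (K + I/C) beta = y in closed form."""
        def __init__(self, C=1e6, gamma=10.0):
            self.C, self.gamma = C, gamma

        def fit(self, X, y):
            self.X = X
            K = gaussian_kernel(X, X, self.gamma)
            self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, y)
            return self

        def predict(self, Xq):
            return gaussian_kernel(Xq, self.X, self.gamma) @ self.beta
    ```

    Because the output weights have a closed form, training requires no iterative optimisation; the outer SABC loop in the paper only tunes the kernel parameters and weights around this solve.
    
    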

  18. Online multiple kernel similarity learning for visual search.

    PubMed

    Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

    2014-03-01

    Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in content-based image retrieval (CBIR). Despite their successes, most existing methods on distance metric learning are limited in two aspects. First, they usually assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multimodal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel similarity learning framework for learning kernel-based proximity functions which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel online multiple kernel similarity (OMKS) learning method which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets in which encouraging results show that OMKS outperforms the state-of-the-art techniques significantly. PMID:24457509

  19. Heat kernel asymptotic expansions for the Heisenberg sub-Laplacian and the Grushin operator

    PubMed Central

    Chang, Der-Chen; Li, Yutian

    2015-01-01

    The sub-Laplacian on the Heisenberg group and the Grushin operator are typical examples of sub-elliptic operators. Their heat kernels are both given in the form of Laplace-type integrals. By using Laplace's method, the method of stationary phase and the method of steepest descent, we derive the small-time asymptotic expansions for these heat kernels, which are related to the geodesic structure of the induced geometries. PMID:25792966
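    For context, Laplace's method, the first of the asymptotic tools named above, can be stated minimally as follows (generic notation, not taken from the paper):

    ```latex
    % For smooth f with a unique interior minimum at x_0, f''(x_0) > 0,
    % and continuous g with g(x_0) \neq 0:
    \int_a^b g(x)\, e^{-f(x)/t}\, dx
      \;\sim\; g(x_0)\, e^{-f(x_0)/t}\, \sqrt{\frac{2\pi t}{f''(x_0)}},
      \qquad t \to 0^{+}.
    ```

    Applied to the Laplace-type integral representations of the heat kernels, the minimiser of the phase corresponds to the geodesic, which is how the small-time expansions reflect the induced geometry.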

  20. FFBSKAT: fast family-based sequence kernel association test.

    PubMed

    Svishcheva, Gulnara R; Belonogova, Nadezhda M; Axenovich, Tatiana I

    2014-01-01

    The kernel machine-based regression is an efficient approach to region-based association analysis aimed at identification of rare genetic variants. However, this method is computationally complex. The running time of kernel-based association analysis becomes especially long for samples with genetic (sub) structures, thus increasing the need to develop new and effective methods, algorithms, and software packages. We have developed a new R-package called fast family-based sequence kernel association test (FFBSKAT) for analysis of quantitative traits in samples of related individuals. This software implements a score-based variance component test to assess the association of a given set of single nucleotide polymorphisms with a continuous phenotype. We compared the performance of our software with that of two existing software for family-based sequence kernel association testing, namely, ASKAT and famSKAT, using the Genetic Analysis Workshop 17 family sample. Results demonstrate that FFBSKAT is several times faster than other available programs. In addition, the calculations of the three-compared software were similarly accurate. With respect to the available analysis modes, we combined the advantages of both ASKAT and famSKAT and added new options to empower FFBSKAT users. The FFBSKAT package is fast, user-friendly, and provides an easy-to-use method to perform whole-exome kernel machine-based regression association analysis of quantitative traits in samples of related individuals. The FFBSKAT package, along with its manual, is available for free download at http://mga.bionet.nsc.ru/soft/FFBSKAT/. PMID:24905468
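    The kernel-machine score statistic at the core of SKAT-type tests can be sketched as follows. The null model is reduced to an intercept, the p-value computation (a mixture of chi-square distributions) is omitted, and the function name is illustrative, not from FFBSKAT.

    ```python
    import numpy as np

    def skat_Q(y, G, weights):
        """Score statistic Q = r^T K r with K = G diag(w) G^T, where r are the
        residuals of an intercept-only null model, G is the (samples x variants)
        genotype matrix and w are per-variant weights."""
        resid = y - y.mean()
        K = (G * weights) @ G.T      # weighted linear kernel over variants
        return float(resid @ K @ resid)
    ```

    Since K is positive semi-definite for non-negative weights, Q is non-negative, and it vanishes when the phenotype carries no variation; family structure, as in FFBSKAT, enters through a richer null model than the intercept used here.
    
    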

  1. Point-Kernel Shielding Code System.

    Energy Science and Technology Software Center (ESTSC)

    1982-02-17

    Version 00 QAD-BSA is a three-dimensional, point-kernel shielding code system based upon the CCC-48/QAD series. It is designed to calculate photon dose rates and heating rates using exponential attenuation and infinite medium buildup factors. Calculational provisions include estimates of fast neutron penetration using data computed by the moments method. Included geometry routines can describe complicated source and shield geometries. An internal library contains data for many frequently used structural and shielding materials, enabling the code to solve most problems with only source strengths and problem geometry required as input. This code system adapts especially well to problems requiring multiple sources and sources with asymmetrical geometry. In addition to being edited separately, the total interaction rates from many sources may be edited at each detector point. Calculated photon interaction rates agree closely with those obtained using QAD-P5A.
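    The point-kernel attenuation idea that codes of this family implement can be sketched with the textbook expression for uncollided flux from an isotropic point source. The constant buildup factor B is an assumption for illustration, not QAD-BSA's actual routines (which use infinite-medium buildup factor data).

    ```python
    import math

    def point_kernel_flux(S, mu, r, B=1.0):
        """Photon flux at distance r from an isotropic point source of strength S,
        attenuated by a shield with linear attenuation coefficient mu; B is an
        (assumed constant) buildup factor accounting for scattered photons."""
        return B * S * math.exp(-mu * r) / (4.0 * math.pi * r * r)
    ```

    A full point-kernel code integrates this kernel over distributed sources and traces mu * r through each shield region along the source-detector ray.
    
    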

  2. Statistical Analysis of Photopyroelectric Signals using Histogram and Kernel Density Estimation for differentiation of Maize Seeds

    NASA Astrophysics Data System (ADS)

    Rojas-Lima, J. E.; Domínguez-Pacheco, A.; Hernández-Aguilar, C.; Cruz-Orea, A.

    2016-09-01

    Considering the necessity of photothermal alternative approaches for characterizing nonhomogeneous materials like maize seeds, the objective of this research work was to analyze statistically the amplitude variations of photopyroelectric signals, by means of nonparametric techniques such as the histogram and the kernel density estimator, and the probability density function of the amplitude variations of two genotypes of maize seeds with different pigmentations and structural components: crystalline and floury. The histogram was examined first to determine whether the probability density function had a known parametric form; since it did not, the kernel density estimator with a Gaussian kernel, with an efficiency of 95% in density estimation, was used to obtain the probability density function. The results obtained indicated that maize seeds could be differentiated in terms of the statistical values for floury and crystalline seeds, such as the mean (93.11, 159.21), variance (1.64 × 10³, 1.48 × 10³), and standard deviation (40.54, 38.47), obtained from the amplitude variations of photopyroelectric signals in the case of the histogram approach. For the kernel density estimator, seeds can be differentiated in terms of the kernel bandwidth, or smoothing constant, h of 9.85 and 6.09 for floury and crystalline seeds, respectively.
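    The Gaussian kernel density estimator with smoothing constant h, as used above, has a compact form. This is a generic sketch, not the authors' code.

    ```python
    import numpy as np

    def gaussian_kde(data, h):
        """Return a density function: the mean of Gaussian bumps of width h
        centred on the data points."""
        data = np.asarray(data, dtype=float)
        def density(x):
            x = np.atleast_1d(np.asarray(x, dtype=float))
            u = (x[None, :] - data[:, None]) / h          # (n_data, n_eval)
            return np.exp(-0.5 * u ** 2).sum(axis=0) / (data.size * h * np.sqrt(2 * np.pi))
        return density
    ```

    Larger h smooths the estimate more, which is why the fitted bandwidths (9.85 vs 6.09) act as a discriminating statistic between the two seed types.
    
    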

  3. Excitons in solids with time-dependent density-functional theory: the bootstrap kernel and beyond

    NASA Astrophysics Data System (ADS)

    Byun, Young-Moo; Yang, Zeng-Hui; Ullrich, Carsten

    Time-dependent density-functional theory (TDDFT) is an efficient method to describe the optical properties of solids. Lately, a series of bootstrap-type exchange-correlation (xc) kernels have been reported to produce accurate excitons in solids, but different bootstrap-type kernels exist in the literature, with mixed results. In this presentation, we reveal the origin of the confusion and show a new empirical TDDFT xc kernel to compute excitonic properties of semiconductors and insulators efficiently and accurately. Our method can be used for high-throughput screening calculations and large unit cell calculations. Work supported by NSF Grant DMR-1408904.

  4. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  5. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  6. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...

  7. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  8. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...

  9. Single infrared image super-resolution combining non-local means with kernel regression

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Chen, Fu-sheng; Zhang, Zhi-jie; Wang, Chen-sheng

    2013-11-01

    In many infrared imaging systems, the focal plane array is not sufficiently dense to adequately sample the scene with the desired field of view, so infrared images generally lack high-frequency detail. Super-resolution (SR) technology can be used to increase the resolution of a low-resolution (LR) infrared image. In this paper, a novel super-resolution algorithm is proposed based on non-local means (NLM) and steering kernel regression (SKR). Because an infrared image contains a large number of similar patches, the NLM method can extract this non-local similarity information and use it to estimate the values of the high-resolution (HR) pixels. SKR is derived from the local smoothness of natural images; here it supplies the regularization term, which suppresses image noise while preserving edges. The SR image is estimated by minimizing a cost function. In the experiments the proposed algorithm is compared with state-of-the-art algorithms. The comparison results show that the proposed method is robust to noise and restores higher-quality images both in quantitative terms and in visual effect.
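
    As an illustration of the non-local means component described above, here is a minimal single-pixel NLM estimator in pure Python (hypothetical parameter names; the paper's full algorithm additionally couples NLM with steering kernel regression inside a cost function):

```python
import math

def nlm_pixel(image, r, c, patch=1, search=2, h=10.0):
    """Estimate one pixel by non-local means: a weighted average of pixels
    whose surrounding patches resemble the patch around (r, c).
    h controls how fast weights decay with patch dissimilarity."""
    rows, cols = len(image), len(image[0])
    def get_patch(i, j):
        # clamp indices at the borders
        return [image[min(max(i + di, 0), rows - 1)][min(max(j + dj, 0), cols - 1)]
                for di in range(-patch, patch + 1) for dj in range(-patch, patch + 1)]
    ref = get_patch(r, c)
    num = den = 0.0
    for i in range(max(r - search, 0), min(r + search + 1, rows)):
        for j in range(max(c - search, 0), min(c + search + 1, cols)):
            p = get_patch(i, j)
            d2 = sum((a - b) ** 2 for a, b in zip(ref, p)) / len(ref)
            w = math.exp(-d2 / (h * h))
            num += w * image[i][j]
            den += w
    return num / den
```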

  10. Improved beam smoothing with SSD using generalized phase modulation

    SciTech Connect

    Rothenberg, J.E.

    1997-01-01

    The smoothing of the spatial illumination of an inertial confinement fusion target is examined through its spatial frequency content. It is found that the smoothing-by-spectral-dispersion method, although efficient for glass lasers, can yield poor smoothing at low spatial frequency. The dependence of the smoothed spatial spectrum on the characteristics of the phase modulation and dispersion is examined for both sinusoidal and more general phase modulation. It is shown that smoothing with non-sinusoidal phase modulation can produce spatial spectra substantially identical to those obtained with the induced spatial incoherence or similar methods, assuming that random phase plates are present in both methods and that the beam divergence is identical.

  11. Corn kernel oil and corn fiber oil

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Unlike most edible plant oils that are obtained directly from oil-rich seeds by either pressing or solvent extraction, corn seeds (kernels) have low levels of oil (4%) and commercial corn oil is obtained from the corn germ (embryo) which is an oil-rich portion of the kernel. Commercial corn oil cou...

  12. Classification of oat and groat kernels using NIR hyperspectral imaging.

    PubMed

    Serranti, Silvia; Cesare, Daniela; Marini, Federico; Bonifazi, Giuseppe

    2013-01-15

    An innovative procedure to classify oat and groat kernels based on coupling hyperspectral imaging (HSI) in the near infrared (NIR) range (1006-1650 nm) with chemometrics was designed, developed and validated. According to market requirements, the amount of groat, that is, the hull-less oat kernels, is one of the most important quality characteristics of oats. Hyperspectral images of oat and groat samples were acquired using a NIR spectral camera (Specim, Finland), and the resulting data hypercubes were analyzed by applying Principal Component Analysis (PCA) for exploratory purposes and Partial Least Squares-Discriminant Analysis (PLS-DA) to build the classification models discriminating the two kernel typologies. Results showed that it is possible to accurately recognize oat and groat single kernels by HSI (prediction accuracy was almost 100%). The study also demonstrated that good classification results could be obtained using only three wavelengths (1132, 1195 and 1608 nm), selected by means of a bootstrap-VIP procedure, making it possible to speed up the classification processing for industrial applications. The developed objective and non-destructive HSI-based method can be used for quality control purposes and/or to define innovative sorting logics for oat grains. PMID:23200388
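
    The paper's classifier is PLS-DA on three selected wavelengths; as a simplified stand-in for how a handful of selected bands can separate two kernel classes, here is a hypothetical nearest-centroid sketch (illustrative names and data, not the authors' PLS-DA model):

```python
def train_centroids(X, y):
    """Mean spectrum per class from training reflectance vectors."""
    groups = {}
    for xi, yi in zip(X, y):
        groups.setdefault(yi, []).append(xi)
    return {c: [sum(col) / len(rows) for col in zip(*rows)]
            for c, rows in groups.items()}

def classify(x, centroids):
    """Assign a spectrum to the class with the nearest (Euclidean) centroid."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda c: d2(x, centroids[c]))
```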

  13. Classification of Microarray Data Using Kernel Fuzzy Inference System

    PubMed Central

    Kumar Rath, Santanu

    2014-01-01

    The DNA microarray classification technique has gained popularity in both research and practice. In real data analysis, microarray datasets contain huge numbers of insignificant and irrelevant features, which can obscure useful information. Feature selection retains the classes of high relevance and the feature sets of high significance that determine the classification of samples into their respective classes. In this paper, the kernel fuzzy inference system (K-FIS) algorithm is applied to classify microarray data (leukemia), using the t-test as a feature selection method. Kernel functions are used to map the original data points into a higher-dimensional (possibly infinite-dimensional) feature space defined by a (usually nonlinear) function ϕ through a mathematical device called the kernel trick. This paper also presents a comparative study of classification using K-FIS and the support vector machine (SVM) for different sets of features (genes). Performance measures available in the literature, such as precision, recall, specificity, F-measure, ROC curve, and accuracy, are used to analyze the efficiency of the classification models. The results show that the K-FIS model obtains results similar to those of the SVM model, indicating that much of the performance of the proposed approach derives from the kernel function.
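
    Two building blocks mentioned above, t-test feature ranking and the kernel trick, can be sketched as follows (illustrative helper names, not the paper's K-FIS implementation):

```python
import math

def t_score(a, b):
    """Two-sample, equal-variance t statistic used to rank genes:
    a large |t| means the gene separates the two classes well."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) kernel: the inner product <phi(x), phi(z)> in an
    implicit infinite-dimensional feature space, computed without phi."""
    return math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(x, z)))
```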

  14. Effective face recognition using bag of features with additive kernels

    NASA Astrophysics Data System (ADS)

    Yang, Shicai; Bebis, George; Chu, Yongjie; Zhao, Lindu

    2016-01-01

    In past decades, many techniques have been used to improve face recognition performance. The most common and well-studied approaches use the whole face image to build a subspace via dimensionality reduction. Differing from the methods above, we consider face recognition as an image classification problem: the face images of the same person are considered to fall into the same category, and each category and each face image are represented by a simple pyramid histogram. Spatial dense scale-invariant feature transform features and the bag-of-features method are used to build the categories and face representations. To make the method more efficient, a linear support vector machine solver, Pegasos, is used for classification in the kernel space with additive kernels instead of nonlinear SVMs. Our experimental results demonstrate that the proposed method achieves very high recognition accuracy on the ORL, YALE, and FERET databases.
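
    The classification step described above combines an additive kernel with the linear Pegasos solver. A minimal sketch of both pieces, under assumed parameter choices (not the paper's configuration):

```python
import random

def intersection_kernel(h1, h2):
    """Histogram intersection kernel, an additive kernel commonly used with
    bag-of-features histograms: K(h, h') = sum_i min(h_i, h'_i)."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def pegasos_train(X, y, lam=0.1, iters=1000, seed=0):
    """Pegasos: stochastic sub-gradient descent for a linear SVM.
    Labels y must be +1/-1; returns the weight vector w."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    for t in range(1, iters + 1):
        i = rng.randrange(len(X))
        eta = 1.0 / (lam * t)
        margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
        # shrink w, then step toward the example if its margin is violated
        w = [(1 - eta * lam) * wj for wj in w]
        if margin < 1:
            w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w
```

    In the paper's setting, additive kernels admit explicit feature maps, so the histograms can be mapped once and the linear solver applied directly, avoiding the cost of a nonlinear SVM.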

  15. Undersampled dynamic magnetic resonance imaging using kernel principal component analysis.

    PubMed

    Wang, Yanhua; Ying, Leslie

    2014-01-01

    Compressed sensing (CS) is a promising approach to accelerate dynamic magnetic resonance imaging (MRI). Most existing CS methods employ linear sparsifying transforms. The recent developments in non-linear or kernel-based sparse representations have been shown to outperform the linear transforms. In this paper, we present an iterative non-linear CS dynamic MRI reconstruction framework that uses the kernel principal component analysis (KPCA) to exploit the sparseness of the dynamic image sequence in the feature space. Specifically, we apply KPCA to represent the temporal profiles of each spatial location and reconstruct the images through a modified pre-image problem. The underlying optimization algorithm is based on variable splitting and fixed-point iteration method. Simulation results show that the proposed method outperforms conventional CS method in terms of aliasing artifact reduction and kinetic information preservation. PMID:25570262
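
    The KPCA step at the heart of this method can be sketched as follows: build an RBF kernel matrix, double-center it in feature space, and take its leading eigenvectors. This is a generic illustration (hypothetical gamma and component count), not the authors' reconstruction code:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.5):
    """Kernel PCA with an RBF kernel: center the kernel matrix in feature
    space, then use its leading eigenvectors as nonlinear components."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one  # double centering
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # scale eigenvectors so rows give feature-space projections
    return vecs * np.sqrt(np.maximum(vals, 1e-12))
```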

  16. Reproducing kernel Hilbert space based single infrared image super resolution

    NASA Astrophysics Data System (ADS)

    Chen, Liangliang; Deng, Liangjian; Shen, Wei; Xi, Ning; Zhou, Zhanxin; Song, Bo; Yang, Yongliang; Cheng, Yu; Dong, Lixin

    2016-07-01

    The spatial resolution of infrared (IR) images is limited by lens optical diffraction, sensor array pitch size, and pixel dimension. In this work, a robust model is proposed to reconstruct a high-resolution infrared image from a single low-resolution sample; the image features are discussed and classified as reflective, cooled emissive, or uncooled emissive according to the infrared irradiation source. A spline-based reproducing kernel Hilbert space and an approximated Heaviside function are deployed to model the smooth part and the edge component of the image, respectively. By adjusting the parameters of the Heaviside function, the proposed model can enhance distinct parts of the image. The experimental results show that the model is applicable to both reflective and emissive low-resolution infrared images and improves thermal contrast. The overall outcome is a high-resolution IR image, which gives the IR camera better measurement accuracy and reveals more detail at long distances.

  17. Fast metabolite identification with Input Output Kernel Regression

    PubMed Central

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-01-01

    Motivation: An important problem in metabolomics is to identify metabolites from tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the molecule space. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. The method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a pre-image problem, which consists of mapping the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method decreases the running times for both the training step and the test step by several orders of magnitude relative to the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307628
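
    Phase one of the two-phase scheme described above is, in essence, kernel ridge regression from the input kernel to output feature vectors. A minimal sketch with a precomputed kernel matrix (illustrative names; the pre-image phase and the actual spectra/molecule kernels are omitted):

```python
import numpy as np

def kernel_ridge_fit(K, Y, lam=1e-2):
    """Fit kernel ridge regression in the dual: given training kernel
    matrix K and output feature vectors Y (rows), solve
    (K + lam*n*I) A = Y for the dual coefficients A."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * n * np.eye(n), Y)

def kernel_ridge_predict(k_new, A):
    """Predict the output feature vector for a new input, given its kernel
    evaluations k_new[i] = K(x_new, x_i) against the training inputs."""
    return k_new @ A
```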

  18. SIFT fusion of kernel eigenfaces for face recognition

    NASA Astrophysics Data System (ADS)

    Kisku, Dakshina R.; Tistarelli, Massimo; Gupta, Phalguni; Sing, Jamuna K.

    2015-10-01

    In this paper, we investigate an application that integrates a holistic appearance-based method and a feature-based method for face recognition. The automatic face recognition system makes use of multiscale Kernel PCA (Principal Component Analysis) to characterize approximated face images and reduces the number of invariant SIFT (Scale Invariant Feature Transform) keypoints extracted from the projected face feature space. To achieve higher variance between inter-class face images, we compute principal components in a higher-dimensional feature space and project each face image onto a set of approximated kernel eigenfaces. As long as the feature spaces retain their distinctive characteristics, a reduced number of SIFT keypoints is detected for a number of principal components; the keypoints are then fused using a user-dependent weighting scheme to form a feature vector. The proposed method is tested on the ORL face database, and the efficacy of the system is demonstrated by the test results computed using the proposed algorithm.

  19. A Classification of Remote Sensing Image Based on Improved Compound Kernels of Svm

    NASA Astrophysics Data System (ADS)

    Zhao, Jianing; Gao, Wanlin; Liu, Zili; Mou, Guifen; Lu, Lin; Yu, Lina

    SVM, which is developed from statistical learning theory, achieves high accuracy in remote sensing (RS) classification even with a small number of training samples, which makes SVM-based RS classification satisfactory. The traditional RS classification method combines visual interpretation with computer classification; the SVM-based method improves on it considerably, because it saves much of the labor and time otherwise spent interpreting images and collecting training samples. Kernel functions play an important part in the SVM algorithm. The proposed approach uses an improved compound kernel function and therefore achieves higher classification accuracy on RS images. Moreover, the compound kernel improves the generalization and learning ability of the SVM.
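
    A compound kernel in the sense described above is typically a convex combination of valid kernels, which is itself a valid Mercer kernel. A minimal sketch (the weight and the component kernels here are illustrative assumptions, not the paper's exact formulation):

```python
import math

def rbf(x, z, gamma=0.5):
    """Local (RBF) kernel."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def poly(x, z, degree=2, c=1.0):
    """Global (polynomial) kernel."""
    return (sum(a * b for a, b in zip(x, z)) + c) ** degree

def compound_kernel(x, z, w=0.6):
    """Convex combination of two valid kernels, itself a valid kernel;
    mixing a local RBF with a global polynomial kernel is one common
    compound-kernel construction for RS classification."""
    return w * rbf(x, z) + (1 - w) * poly(x, z)
```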

  20. Protein analysis meets visual word recognition: a case for string kernels in the brain.

    PubMed

    Hannagan, Thomas; Grainger, Jonathan

    2012-01-01

    It has been recently argued that some machine learning techniques known as Kernel methods could be relevant for capturing cognitive and neural mechanisms (Jäkel, Schölkopf, & Wichmann, 2009). We point out that "String kernels," initially designed for protein function prediction and spam detection, are virtually identical to one contending proposal for how the brain encodes orthographic information during reading. We suggest some reasons for this connection and we derive new ideas for visual word recognition that are successfully put to the test. We argue that the versatility and performance of String kernels makes a compelling case for their implementation in the brain. PMID:22433060
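
    A common concrete instance of a string kernel is the k-spectrum kernel, which compares strings via their shared length-k substrings, close in spirit to the open-bigram coding schemes discussed for orthography. A minimal sketch:

```python
def spectrum_kernel(s, t, k=2):
    """k-spectrum string kernel: the inner product of k-mer count vectors.
    Two strings are similar when they share many length-k substrings."""
    def counts(u):
        c = {}
        for i in range(len(u) - k + 1):
            sub = u[i:i + k]
            c[sub] = c.get(sub, 0) + 1
        return c
    cs, ct = counts(s), counts(t)
    return sum(v * ct.get(sub, 0) for sub, v in cs.items())
```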

  1. Nonequilibrium flows with smooth particle applied mechanics

    SciTech Connect

    Kum, O.

    1995-07-01

    Smooth particle methods are relatively new methods for simulating solid and fluid flows, though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be separately controlled. The gradient algorithm, based on differentiating the smooth particle expression for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector quantities, which require second derivatives in space when Fourier's heat-flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel with molecular dynamics: for the inviscid Euler equation, with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for three weighting functions: B spline, Lucy, and Cusp. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Benard problems, all in the laminar regime, to corresponding highly accurate grid-based numerical solutions of continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number, where grid-based methods fail.
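
    Lucy's weighting function, one of the three kernels evaluated above, can be written down directly. This sketch uses the 1D normalization (the thesis works in higher dimensions, where the normalization constant differs):

```python
def lucy_kernel(r, h):
    """Lucy's smooth-particle weighting function with 1D normalization:
    w(r) = (5 / (4h)) * (1 + 3r/h) * (1 - r/h)^3 for r < h, else 0.
    It is smooth, has compact support, and integrates to 1 over [-h, h]."""
    if r >= h:
        return 0.0
    q = r / h
    return (5.0 / (4.0 * h)) * (1.0 + 3.0 * q) * (1.0 - q) ** 3
```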

  2. Walk-weighted subsequence kernels for protein-protein interaction extraction

    PubMed Central

    2010-01-01

    Background The construction of interaction networks between proteins is central to understanding the underlying biological processes. However, since many useful relations are excluded in databases and remain hidden in raw text, a study on automatic interaction extraction from text is important in bioinformatics field. Results Here, we suggest two kinds of kernel methods for genic interaction extraction, considering the structural aspects of sentences. First, we improve our prior dependency kernel by modifying the kernel function so that it can involve various substructures in terms of (1) e-walks, (2) partial match, (3) non-contiguous paths, and (4) different significance of substructures. Second, we propose the walk-weighted subsequence kernel to parameterize non-contiguous syntactic structures as well as semantic roles and lexical features, which makes learning structural aspects from a small amount of training data effective. Furthermore, we distinguish the significances of parameters such as syntactic locality, semantic roles, and lexical features by varying their weights. Conclusions We addressed the genic interaction problem with various dependency kernels and suggested various structural kernel scenarios based on the directed shortest dependency path connecting two entities. Consequently, we obtained promising results over genic interaction data sets with the walk-weighted subsequence kernel. The results are compared using automatically parsed third party protein-protein interaction (PPI) data as well as perfectly syntactic labeled PPI data. PMID:20184736

  3. Origin-Destination Flow Data Smoothing and Mapping.

    PubMed

    Guo, Diansheng; Zhu, Xi

    2014-12-01

    This paper presents a new approach to flow mapping that extracts inherent patterns from massive geographic mobility data and constructs effective visual representations of the data for the understanding of complex flow trends. This approach involves a new method for origin-destination flow density estimation and a new method for flow map generalization, which together can remove spurious data variance, normalize flows with control population, and detect high-level patterns that are not discernable with existing approaches. The approach achieves three main objectives in addressing the challenges for analyzing and mapping massive flow data. First, it removes the effect of size differences among spatial units via kernel-based density estimation, which produces a measurement of flow volume between each pair of origin and destination. Second, it extracts major flow patterns in massive flow data through a new flow sampling method, which filters out duplicate information in the smoothed flows. Third, it enables effective flow mapping and allows intuitive perception of flow patterns among origins and destinations without bundling or altering flow paths. The approach can work with both point-based flow data (such as taxi trips with GPS locations) and area-based flow data (such as county-to-county migration). Moreover, the approach can be used to detect and compare flow patterns at different scales or in relatively sparse flow datasets, such as migration for each age group. We evaluate and demonstrate the new approach with case studies of U.S. migration data and experiments with synthetic data. PMID:26356918
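
    The kernel-based flow density estimation described in the first objective can be sketched as a product of Gaussian kernels at the two endpoints of each trip (hypothetical function name and bandwidth, for illustration only):

```python
import math

def flow_density(trips, o, d, h=1.0):
    """Kernel-smoothed origin-destination flow volume: each trip
    ((ox, oy), (dx, dy)) contributes according to how close BOTH its
    endpoints are to the query origin o and destination d."""
    def k(p, q):
        return math.exp(-((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) / (2 * h * h))
    return sum(k(to, o) * k(td, d) for to, td in trips)
```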

  4. Predicting activity approach based on new atoms similarity kernel function.

    PubMed

    Abu El-Atta, Ahmed H; Moussa, M I; Hassanien, Aboul Ella

    2015-07-01

    Drug design is a high-cost, long-term process; to reduce the time and cost of drug discovery, new techniques are needed. The field of chemoinformatics applies informational techniques and computer science, such as machine learning and graph theory, to discover chemical compound properties such as toxicity or biological activity by analyzing molecular structure (the molecular graph). There is therefore an increasing need for algorithms that analyze and classify graph data to predict the activity of molecules. Kernel methods provide a powerful framework that combines machine learning with graph-theory techniques, and they have produced impressive performance in several chemoinformatics problems such as biological activity prediction. This paper presents a new approach based on kernel functions for solving the activity prediction problem for chemical compounds. First, we encode every atom according to its neighbors; we then use these codes to find relationships among the atoms, and use the relations between different atoms to compute the similarity between chemical compounds. The proposed approach was compared with many other classification methods, and the results show accuracy competitive with those methods. PMID:26117822
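
    The atom-encoding idea described above, coding each atom by its neighborhood and then comparing compounds by shared codes, can be sketched as follows (a simplified, hypothetical variant in the spirit of one Morgan-algorithm refinement step, not the authors' exact kernel):

```python
from collections import Counter

def atom_codes(adjacency, labels):
    """Encode each atom by its own label plus the sorted labels of its
    neighbors (one neighborhood-refinement step)."""
    return [labels[i] + "".join(sorted(labels[j] for j in adjacency[i]))
            for i in range(len(labels))]

def compound_similarity(codes_a, codes_b):
    """Kernel-style similarity: size of the multiset intersection of atom
    codes, normalized by the larger compound."""
    ca, cb = Counter(codes_a), Counter(codes_b)
    inter = sum((ca & cb).values())
    return inter / max(len(codes_a), len(codes_b))
```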

  5. Backward smoothing for precise GNSS applications

    NASA Astrophysics Data System (ADS)

    Vaclavovic, Pavel; Dousa, Jan

    2015-10-01

    The Extended Kalman filter is widely used for its robustness and simple implementation. Parameters estimated when solving dynamical systems usually require a certain time to converge and need to be smoothed by a dedicated algorithm. The purpose of our study was to implement smoothing algorithms for processing both code and carrier-phase observations with the Precise Point Positioning method. We implemented and used the well-known Rauch-Tung-Striebel smoother (RTS). We found that the RTS suffers from significant numerical instability in the determination of the smoothed state covariance matrix; we improved the processing with algorithms based on Singular Value Decomposition, which were more robust. Observations from many permanent stations were processed with the final orbits and clocks provided by the International GNSS Service (IGS), and the smoothing improved stability and precision in all cases. Moreover, (re)convergence of the parameters was always successfully eliminated.
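
    For reference, the forward filter plus RTS backward pass has a compact scalar form. This sketch uses a random-walk state model with hypothetical noise values; a PPP implementation works with full state vectors and covariance matrices:

```python
def kalman_filter(zs, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter. Returns per-epoch filtered
    (x, P) and predicted (x_prior, P_prior) values for the smoother."""
    xs, Ps, xps, Pps = [], [], [], []
    x, P = x0, p0
    for z in zs:
        xp, Pp = x, P + q           # predict (identity state transition)
        K = Pp / (Pp + r)           # Kalman gain
        x = xp + K * (z - xp)       # measurement update
        P = (1 - K) * Pp
        xs.append(x); Ps.append(P); xps.append(xp); Pps.append(Pp)
    return xs, Ps, xps, Pps

def rts_smoother(xs, Ps, xps, Pps):
    """Rauch-Tung-Striebel backward pass: refine each filtered estimate
    with information from later epochs (identity transition assumed)."""
    n = len(xs)
    xs_s, Ps_s = xs[:], Ps[:]
    for t in range(n - 2, -1, -1):
        C = Ps[t] / Pps[t + 1]      # smoother gain
        xs_s[t] = xs[t] + C * (xs_s[t + 1] - xps[t + 1])
        Ps_s[t] = Ps[t] + C ** 2 * (Ps_s[t + 1] - Pps[t + 1])
    return xs_s, Ps_s
```

    The direct covariance recursion shown here is exactly the step the paper reports as numerically fragile, which motivates the SVD-based reformulation.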

  6. Quasi-kernel polynomials and convergence results for quasi-minimal residual iterations

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1992-01-01

    Recently, Freund and Nachtigal have proposed a novel polynomial-based iteration, the quasi-minimal residual algorithm (QMR), for solving general nonsingular non-Hermitian linear systems. Motivated by the QMR method, we have introduced the general concept of quasi-kernel polynomials, and we have shown that the QMR algorithm is based on a particular instance of quasi-kernel polynomials. In this paper, we continue our study of quasi-kernel polynomials. In particular, we derive bounds for the norms of quasi-kernel polynomials. These results are then applied to obtain convergence theorems both for the QMR method and for a transpose-free variant of QMR, the TFQMR algorithm.

  7. Reduction of Aflatoxins in Apricot Kernels by Electronic and Manual Color Sorting

    PubMed Central

    Zivoli, Rosanna; Gambacorta, Lucia; Piemontese, Luca; Solfrizzo, Michele

    2016-01-01

    The efficacy of color sorting on reducing aflatoxin levels in shelled apricot kernels was assessed. Naturally-contaminated kernels were submitted to an electronic optical sorter or blanched, peeled, and manually sorted to visually identify and sort discolored kernels (dark and spotted) from healthy ones. The samples obtained from the two sorting approaches were ground, homogenized, and analysed by HPLC-FLD for their aflatoxin content. A mass balance approach was used to measure the distribution of aflatoxins in the collected fractions. Aflatoxin B1 and B2 were identified and quantitated in all collected fractions at levels ranging from 1.7 to 22,451.5 µg/kg of AFB1 + AFB2, whereas AFG1 and AFG2 were not detected. Excellent results were obtained by manual sorting of peeled kernels since the removal of discolored kernels (2.6%–19.9% of total peeled kernels) removed 97.3%–99.5% of total aflatoxins. The combination of peeling and visual/manual separation of discolored kernels is a feasible strategy to remove 97%–99% of aflatoxins accumulated in naturally-contaminated samples. Electronic optical sorter gave highly variable results since the amount of AFB1 + AFB2 measured in rejected fractions (15%–18% of total kernels) ranged from 13% to 59% of total aflatoxins. An improved immunoaffinity-based HPLC-FLD method having low limits of detection for the four aflatoxins (0.01–0.05 µg/kg) was developed and used to monitor the occurrence of aflatoxins in 47 commercial products containing apricot kernels and/or almonds commercialized in Italy. Low aflatoxin levels were found in 38% of the tested samples and ranged from 0.06 to 1.50 μg/kg for AFB1 and from 0.06 to 1.79 μg/kg for total aflatoxins. PMID:26797635

  8. Reduction of Aflatoxins in Apricot Kernels by Electronic and Manual Color Sorting.

    PubMed

    Zivoli, Rosanna; Gambacorta, Lucia; Piemontese, Luca; Solfrizzo, Michele

    2016-01-01

    The efficacy of color sorting on reducing aflatoxin levels in shelled apricot kernels was assessed. Naturally-contaminated kernels were submitted to an electronic optical sorter or blanched, peeled, and manually sorted to visually identify and sort discolored kernels (dark and spotted) from healthy ones. The samples obtained from the two sorting approaches were ground, homogenized, and analysed by HPLC-FLD for their aflatoxin content. A mass balance approach was used to measure the distribution of aflatoxins in the collected fractions. Aflatoxin B₁ and B₂ were identified and quantitated in all collected fractions at levels ranging from 1.7 to 22,451.5 µg/kg of AFB₁ + AFB₂, whereas AFG₁ and AFG₂ were not detected. Excellent results were obtained by manual sorting of peeled kernels since the removal of discolored kernels (2.6%-19.9% of total peeled kernels) removed 97.3%-99.5% of total aflatoxins. The combination of peeling and visual/manual separation of discolored kernels is a feasible strategy to remove 97%-99% of aflatoxins accumulated in naturally-contaminated samples. Electronic optical sorter gave highly variable results since the amount of AFB₁ + AFB₂ measured in rejected fractions (15%-18% of total kernels) ranged from 13% to 59% of total aflatoxins. An improved immunoaffinity-based HPLC-FLD method having low limits of detection for the four aflatoxins (0.01-0.05 µg/kg) was developed and used to monitor the occurrence of aflatoxins in 47 commercial products containing apricot kernels and/or almonds commercialized in Italy. Low aflatoxin levels were found in 38% of the tested samples and ranged from 0.06 to 1.50 μg/kg for AFB₁ and from 0.06 to 1.79 μg/kg for total aflatoxins. PMID:26797635

  9. Rapid and Nondestructive Determination of Moisture Content in Peanut Kernels from Microwave Measurement of Dielectric Properties of Pods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A method for moisture determination in peanut kernels from measurement of the dielectric properties of peanut pods at microwave frequencies is presented. The dielectric properties of peanut kernels and pods were measured in free space with a vector network analyzer and a pair of focused beam horn-l...

  10. Measurement of Wheat Hardness by Seed Scarifier and Barley Pearler And Comparison with Single-Kernel Characterization System

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A new procedure based on a seed scarifier (SS) for measuring wheat hardness was described and investigated along with methods of barley pearler (BP) and single kernel characterization system (SKCS). Hardness measured by SS and BP was expressed as percentage of kernel weight remained after abrading ...

  11. Relationship between processing score and kernel-fraction particle size in whole-plant corn silage.

    PubMed

    Dias Junior, G S; Ferraretto, L F; Salvati, G G S; de Resende, L C; Hoffman, P C; Pereira, M N; Shaver, R D

    2016-04-01

    Kernel processing increases starch digestibility in whole-plant corn silage (WPCS). Corn silage processing score (CSPS), the percentage of starch passing through a 4.75-mm sieve, is widely used to assess degree of kernel breakage in WPCS. However, the geometric mean particle size (GMPS) of the kernel-fraction that passes through the 4.75-mm sieve has not been well described. Therefore, the objectives of this study were (1) to evaluate particle size distribution and digestibility of kernels cut in varied particle sizes; (2) to propose a method to measure GMPS in WPCS kernels; and (3) to evaluate the relationship between CSPS and GMPS of the kernel fraction in WPCS. Composite samples of unfermented, dried kernels from 110 corn hybrids commonly used for silage production were kept whole (WH) or manually cut in 2, 4, 8, 16, 32 or 64 pieces (2P, 4P, 8P, 16P, 32P, and 64P, respectively). Dry sieving to determine GMPS, surface area, and particle size distribution using 9 sieves with nominal square apertures of 9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, and 0.59 mm and pan, as well as ruminal in situ dry matter (DM) digestibilities were performed for each kernel particle number treatment. Incubation times were 0, 3, 6, 12, and 24 h. The ruminal in situ DM disappearance of unfermented kernels increased with the reduction in particle size of corn kernels. Kernels kept whole had the lowest ruminal DM disappearance for all time points with maximum DM disappearance of 6.9% at 24 h and the greatest disappearance was observed for 64P, followed by 32P and 16P. Samples of WPCS (n=80) from 3 studies representing varied theoretical length of cut settings and processor types and settings were also evaluated. Each WPCS sample was divided in 2 and then dried at 60 °C for 48 h. The CSPS was determined in duplicate on 1 of the split samples, whereas on the other split sample the kernel and stover fractions were separated using a hydrodynamic separation procedure. After separation, the
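
    The GMPS computation from dry-sieving data referred to above is a mass-weighted geometric mean over the sieve fractions; a simplified sketch (assumed aperture handling in the spirit of the ASABE S424 procedure, not necessarily the study's exact method):

```python
import math

def geometric_mean_particle_size(apertures_mm, mass_fractions):
    """Geometric mean particle size from dry-sieving data: a mass-weighted
    mean of log particle size, assigning each sieve's retained particles
    the geometric mean of its aperture and the next larger aperture.
    Assumption: apertures are sorted descending, and the top fraction's
    upper size is taken as twice the largest aperture."""
    sizes = []
    for i, a in enumerate(apertures_mm):
        upper = apertures_mm[i - 1] if i > 0 else a * 2
        sizes.append(math.sqrt(a * upper))
    log_mean = (sum(f * math.log(s) for f, s in zip(mass_fractions, sizes))
                / sum(mass_fractions))
    return math.exp(log_mean)
```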

  12. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2013-01-01 2013-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...

  13. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296... STANDARDS) United States Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2296 Three-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more...

  14. UPDATE OF GRAY KERNEL DISEASE OF MACADAMIA - 2006

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Gray kernel is an important disease of macadamia that affects the quality of kernels with gray discoloration and a permeating, foul odor that can render entire batches of nuts unmarketable. We report on the successful production of gray kernel in raw macadamia kernels artificially inoculated with s...

  15. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...

  16. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...

  17. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...

  18. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...

  19. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...

  20. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...

  1. Kernel MAD Algorithm for Relative Radiometric Normalization

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Tang, Ping; Hu, Changmiao

    2016-06-01

    The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments with both the linear CCA and KCCA versions of the MAD algorithm, using Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization, as it can describe well the nonlinear relationship between multi-temporal images. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.
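
The linear-CCA core of the MAD transform can be sketched compactly: whiten both band matrices, take the SVD of the whitened cross-covariance, and form MAD variates as differences of canonical variates. This is a generic CCA sketch, not the authors' implementation; the ridge term `eps` and variable names are illustrative assumptions:

```python
import numpy as np

def mad_variates(X, Y, eps=1e-9):
    """MAD variates of two co-registered images, given as
    (pixels x bands) matrices X and Y. Returns U - V, the differences
    of canonical variates ordered by decreasing correlation."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = len(X)
    Sxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])  # ridge for stability
    Syy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n
    # whiten with Cholesky factors, then SVD the cross-covariance
    Lx = np.linalg.cholesky(Sxx)
    Ly = np.linalg.cholesky(Syy)
    Kw = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T
    A, s, Bt = np.linalg.svd(Kw)                    # s = canonical correlations
    a = np.linalg.solve(Lx.T, A)                    # canonical weights for X
    b = np.linalg.solve(Ly.T, Bt.T)                 # canonical weights for Y
    return Xc @ a - Yc @ b                          # MAD variates
```

For two identical (unchanged) images the canonical correlations are all 1 and the MAD variates vanish, which is the no-change baseline the normalization exploits.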

  2. Three-body-continuum Coulomb problem using a compact-kernel-integral-equation approach

    NASA Astrophysics Data System (ADS)

    Silenou Mengoue, M.

    2013-02-01

    We present an approach associated with the Jacobi matrix method to calculate a three-body wave function that describes the double continuum of an atomic two-electron system. In this approach, a symmetrized product of two Coulomb waves is used to describe the asymptotic wave function, while a smooth cutoff function is introduced to the dielectronic potential that enters its integral part in order to have a compact kernel of the corresponding Lippmann-Schwinger-type equation to be solved. As an application, the integral equation for the (e-,e-,He2+) system is solved numerically; the fully fivefold differential cross sections (FDCSs) for (e,3e) processes in helium are presented within the first-order Born approximation. The calculation is performed for a coplanar geometry in which the incident electron is fast (˜6 keV) and for a symmetric energy sharing between both slow ejected electrons at excess energy of 20 eV. The experimental and theoretical FDCSs agree satisfactorily both in shape and in magnitude. Full convergence in terms of the basis size is reached and presented.

  3. Finite-frequency sensitivity kernels of seismic waves to fault zone structures

    NASA Astrophysics Data System (ADS)

    Allam, A. A.; Tape, C.; Ben-Zion, Y.

    2015-12-01

    We analyse the volumetric sensitivity of fault zone seismic head and trapped waves by constructing finite-frequency sensitivity (Fréchet) kernels for these phases using a suite of idealized and tomographically derived velocity models of fault zones. We first validate numerical calculations by waveform comparisons with analytical results for two simple fault zone models: a vertical bimaterial interface separating two solids of differing elastic properties, and a `vertical sandwich' with a vertical low velocity zone surrounded on both sides by higher velocity media. Establishing numerical accuracy up to 12 Hz, we compute sensitivity kernels for various phases that arise in these and more realistic models. In contrast to direct P body waves, which have little or no sensitivity to the internal fault zone structure, the sensitivity kernels for head waves have sharp peaks with high values near the fault in the faster medium. Surface wave kernels show the broadest spatial distribution of sensitivity, while trapped wave kernels are extremely narrow with sensitivity focused entirely inside the low-velocity fault zone layer. Trapped waves are shown to exhibit sensitivity patterns similar to Love waves, with decreasing width as a function of frequency and multiple Fresnel zones of alternating polarity. In models that include smoothing of the boundaries of the low velocity zone, there is little effect on the trapped wave kernels, which are focused in the central core of the low velocity zone. When the source is located outside a shallow fault zone layer, trapped waves propagate through the surrounding medium with body wave sensitivity before becoming confined. The results provide building blocks for full waveform tomography of fault zone regions combining high-frequency head, trapped, body, and surface waves. Such an imaging approach can constrain fault zone structure across a larger range of scales than has previously been possible.

  4. KITTEN Lightweight Kernel 0.1 Beta

    Energy Science and Technology Software Center (ESTSC)

    2007-12-12

    The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general purpose OS kernels.

  5. Knowledge Driven Image Mining with Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Oza, Nikunj

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven image mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational costs. In this paper, we present the theory of Mercer Kernels, describe its use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Imaging Spectroradiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods on the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.
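
Mercer's condition can be checked empirically on any finite sample: the Gram matrix of a valid kernel must be symmetric positive semidefinite, which is exactly what licenses running linear algorithms in the induced feature space. A small sketch (the RBF kernel and the `gamma` value are illustrative stand-ins, not the paper's data-derived mixture-density kernels):

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """Gaussian RBF Gram matrix, a classic Mercer kernel."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Empirical Mercer check: symmetric and no negative eigenvalues
X = np.random.default_rng(0).normal(size=(20, 3))
K = rbf_kernel(X)
eigs = np.linalg.eigvalsh(K)
```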

  6. Anti-smooth muscle antibody

    MedlinePlus

    ... medlineplus.gov/ency/article/003531.htm Anti-smooth muscle antibody is a blood test that detects the ...

  7. TICK: Transparent Incremental Checkpointing at Kernel Level

    Energy Science and Technology Software Center (ESTSC)

    2004-10-25

    TICK is a software package implemented in Linux 2.6 that allows user processes to be saved and restored, without any change to the user code or binary. With TICK, a process can be suspended by the Linux kernel upon receiving an interrupt and saved in a file. This file can later be restored ("thawed") on another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module, in Linux version 2.6.5.

  8. Utility of a novel error-stepping method to improve gradient-based parameter identification by increasing the smoothness of the local objective surface: a case-study of pulmonary mechanics.

    PubMed

    Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut

    2014-05-01

    Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001) despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. PMID:23910223

  9. Integrating semantic information into multiple kernels for protein-protein interaction extraction from biomedical literatures.

    PubMed

    Li, Lishuang; Zhang, Panpan; Zheng, Tianfu; Zhang, Hongying; Jiang, Zhenchao; Huang, Degen

    2014-01-01

    Protein-Protein Interaction (PPI) extraction is an important task in biomedical information extraction. Presently, many machine learning methods for PPI extraction have achieved promising results. However, the performance is still not satisfactory. One reason is that semantic resources have largely been ignored. In this paper, we propose a multiple-kernel learning-based approach to extract PPIs, combining the feature-based kernel, tree kernel and semantic kernel. In particular, we extend the shortest path-enclosed tree kernel (SPT) with a dynamic extension strategy to capture richer syntactic information. Our semantic kernel calculates the protein-protein pair similarity and the context similarity based on two semantic resources: WordNet and Medical Subject Headings (MeSH). We evaluate our method with a Support Vector Machine (SVM) and achieve an F-score of 69.40% and an AUC of 92.00%, which shows that our method outperforms most of the state-of-the-art systems by integrating semantic information. PMID:24622773

  10. Integrating Semantic Information into Multiple Kernels for Protein-Protein Interaction Extraction from Biomedical Literatures

    PubMed Central

    Li, Lishuang; Zhang, Panpan; Zheng, Tianfu; Zhang, Hongying; Jiang, Zhenchao; Huang, Degen

    2014-01-01

    Protein-Protein Interaction (PPI) extraction is an important task in biomedical information extraction. Presently, many machine learning methods for PPI extraction have achieved promising results. However, the performance is still not satisfactory. One reason is that semantic resources have largely been ignored. In this paper, we propose a multiple-kernel learning-based approach to extract PPIs, combining the feature-based kernel, tree kernel and semantic kernel. In particular, we extend the shortest path-enclosed tree kernel (SPT) with a dynamic extension strategy to capture richer syntactic information. Our semantic kernel calculates the protein-protein pair similarity and the context similarity based on two semantic resources: WordNet and Medical Subject Headings (MeSH). We evaluate our method with a Support Vector Machine (SVM) and achieve an F-score of 69.40% and an AUC of 92.00%, which shows that our method outperforms most of the state-of-the-art systems by integrating semantic information. PMID:24622773
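
The kernel-combination step in approaches like this rests on a closure property: a weighted sum of valid kernels with nonnegative weights is itself a valid kernel, which is what allows feature-based, tree and semantic kernels to be fused and handed to a single SVM as a precomputed Gram matrix. A toy sketch (the two base kernels and the weights are placeholders, not the paper's actual kernels):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(15, 4))

def linear_kernel(X):
    # stands in for the feature-based kernel
    return X @ X.T

def rbf_kernel(X, gamma=0.3):
    # stands in for the tree or semantic kernel
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Convex combination of PSD Gram matrices is again PSD, hence a
# valid kernel that an SVM can consume as kernel='precomputed'.
weights = (0.6, 0.4)  # illustrative mixing weights
K = weights[0] * linear_kernel(X) + weights[1] * rbf_kernel(X)
```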

  11. Analyzing Sparse Dictionaries for Online Learning With Kernels

    NASA Astrophysics Data System (ADS)

    Honeine, Paul

    2015-12-01

    Many signal processing and machine learning methods share essentially the same linear-in-the-parameters model, with as many parameters as available samples, as in kernel-based machines. Sparse approximation is essential in many disciplines, with new challenges emerging in online learning with kernels. To this end, several sparsity measures have been proposed in the literature to quantify sparse dictionaries and to construct relevant ones, the most prominent being the distance, the approximation, the coherence and the Babel measures. In this paper, we analyze sparse dictionaries based on these measures. By conducting an eigenvalue analysis, we show that these sparsity measures share many properties, including the linear independence condition and the inducement of a well-posed optimization problem. Furthermore, we prove that there exists a quasi-isometry between the parameter (i.e., dual) space and the dictionary's induced feature space.
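
Of the sparsity measures named above, the coherence is the simplest to state: the largest absolute inner product between distinct unit-norm dictionary atoms, evaluated here in the kernel-induced feature space via the Gram matrix. A sketch (assumes a precomputed Gram matrix `K` of the dictionary atoms; not tied to any particular kernel):

```python
import numpy as np

def coherence(K):
    """Coherence of a kernel dictionary from its Gram matrix K:
    max |<phi_i, phi_j>| over i != j after normalizing atoms to
    unit norm in feature space. 0 means an orthonormal dictionary;
    values near 1 flag nearly redundant atoms."""
    d = np.sqrt(np.diag(K))
    G = K / np.outer(d, d)          # normalized Gram matrix
    off = G - np.eye(len(K))        # zero out the diagonal
    return np.abs(off).max()
```

In online learning, a candidate atom is typically admitted to the dictionary only if adding it keeps the coherence below a chosen threshold.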

  12. Heat kernel expansion in the background field formalism

    NASA Astrophysics Data System (ADS)

    Barvinsky, Andrei O.

    2015-06-01

    Heat kernel expansion and background field formalism represent the combination of two calculational methods within the functional approach to quantum field theory. This approach implies construction of generating functionals for matrix elements and expectation values of physical observables. These are functionals of arbitrary external sources or the mean field of a generic configuration -- the background field. Exact calculation of quantum effects on a generic background is impossible. However, a special integral (proper time) representation for the Green's function of the wave operator -- the propagator of the theory -- and its expansion in the ultraviolet and infrared limits of respectively short and late proper time parameter allow one to construct approximations which are valid on generic background fields. Current progress of quantum field theory, its renormalization properties, model building in unification of fundamental physical interactions and QFT applications in high energy physics, gravitation and cosmology critically rely on efficiency of the heat kernel expansion and background field formalism.
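
The short proper-time expansion referred to here is the standard Schwinger-DeWitt series; schematically, for a second-order wave operator on a d-dimensional background (normalizations and sign conventions vary by author):

```latex
K(s; x, x') \simeq \frac{\Delta^{1/2}(x,x')}{(4\pi s)^{d/2}}\,
  e^{-\sigma(x,x')/2s} \sum_{n=0}^{\infty} s^{\,n}\, a_n(x,x')
```

where σ(x,x') is the geodetic interval (Synge's world function), Δ the Van Vleck-Morette determinant, and the coefficients a_n(x,x') carry all the dependence on the background field, curvature and potential.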

  13. Improved Rotating Kernel Transformation Based Contourlet Domain Image Denoising Framework

    PubMed Central

    Guo, Qing; Dong, Fangmin; Ren, Xuhong; Feng, Shiyu; Gao, Bruce Zhi

    2016-01-01

    A contourlet domain image denoising framework based on a novel Improved Rotating Kernel Transformation (IRKT) is proposed, in which the difference between subbands in the contourlet domain is taken into account. In detail: (1) a novel Improved Rotating Kernel Transformation is proposed to calculate the direction statistic of the image; the validity of the IRKT is verified by comparing the extracted edge information with a state-of-the-art edge detection algorithm. (2) The direction statistic represents the difference between subbands and is introduced into the threshold-function-based contourlet domain denoising approaches in the form of weights, yielding the novel framework. The proposed framework is used to improve the contourlet soft-thresholding (CTSoft) and contourlet bivariate-thresholding (CTB) algorithms. The denoising results on conventional test images and Optical Coherence Tomography (OCT) medical images show that the proposed methods improve the existing contourlet-based thresholding denoising algorithms, especially for the medical images. PMID:27148597
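
The thresholding rule that such frameworks reweight is the standard soft-shrinkage function applied to each transform coefficient. A generic one-liner (the per-subband weighting of the threshold by the direction statistic is assumed for illustration, not the exact CTSoft/CTB scheme):

```python
import math

def soft_threshold(c, t):
    """Shrink transform coefficient c toward zero by threshold t.
    In a weighted scheme, t would be scaled per subband by a
    direction statistic before this shrinkage is applied."""
    return math.copysign(max(abs(c) - t, 0.0), c)
```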

  14. Beam-smoothing investigation on Heaven I

    NASA Astrophysics Data System (ADS)

    Xiang, Yi-huai; Gao, Zhi-xing; Tong, Xiao-hui; Dai, Hui; Tang, Xiu-zhang; Shan, Yu-sheng

    2007-01-01

    Directly driven targets for inertial confinement fusion (ICF) require laser beams with extremely smooth irradiance profiles to prevent hydrodynamic instabilities that destroy the spherical symmetry of the target during implosion. Such instabilities can break up and mix together the target's wall and fuel material, preventing it from reaching the density and temperature required for fusion ignition. 1,2 Measurements in equation of state (EOS) experiments require laser beams with flat-topped profiles to generate uniform shockwaves 3. Some method for beam smoothing is thus needed. A technique called echelon-free induced spatial incoherence (EFISI) is proposed for producing smooth target beam profiles with large KrF lasers. The idea is basically an image projection technique that projects the desired time-averaged spatial profile onto the target via the laser system, using partially coherent broadband light. Using this technique, we are developing a beam-smoothing investigation on "Heaven I". At the China Institute of Atomic Energy, a new angular-multiplexing system providing a beam-smoothing function has been developed: the total energy is 158 J, the energy stability is 4%, the pulse duration is 25 ns, the effective diameter of the focal spot is 400 um, the nonuniformity is about 1.6%, and the power density on the target is about 3.7×10^12 W/cm^2. At present, the system has provided steady and smooth laser irradiation for EOS experiments.

  15. X-ray photoelectron spectroscopic analysis of rice kernels and flours: Measurement of surface chemical composition.

    PubMed

    Nawaz, Malik A; Gaiani, Claire; Fukai, Shu; Bhandari, Bhesh

    2016-12-01

    The objectives of this study were to evaluate the ability of X-ray photoelectron spectroscopy (XPS) to differentiate rice macromolecules and to calculate the surface composition of rice kernels and flours. The surface composition of uncooked kernels and flours of the two selected rice varieties, Thadokkham-11 (TDK11) and Doongara (DG), demonstrated an over-expression of lipids and proteins and an under-expression of starch compared to the bulk composition. The results of the study showed that XPS was able to differentiate rice polysaccharides (mainly starch), proteins and lipids in uncooked rice kernels and flours. Nevertheless, it was unable to distinguish components in cooked rice samples, possibly due to complex interactions between gelatinized starch, denatured proteins and lipids. High resolution imaging methods (Scanning Electron Microscopy and Confocal Laser Scanning Microscopy) were employed to obtain complementary information about the properties and location of starch, proteins and lipids in rice kernels and flours. PMID:27374542

  16. A Gaussian-like immersed-boundary kernel with three continuous derivatives and improved translational invariance

    NASA Astrophysics Data System (ADS)

    Bao, Yuanxun; Kaye, Jason; Peskin, Charles S.

    2016-07-01

    The immersed boundary (IB) method is a general mathematical framework for studying problems involving fluid-structure interactions in which an elastic structure is immersed in a viscous incompressible fluid. In the IB formulation, the fluid described by Eulerian variables is coupled with the immersed structure described by Lagrangian variables via the use of the Dirac delta function. From a numerical standpoint, the Lagrangian force spreading and the Eulerian velocity interpolation are carried out by a regularized, compactly supported discrete delta function, which is assumed to be a tensor product of a single-variable immersed-boundary kernel. IB kernels are derived from a set of postulates designed to achieve approximate grid translational invariance, interpolation accuracy and computational efficiency. In this note, we present a new 6-point immersed-boundary kernel that is C³ and yields a substantially improved translational invariance compared to other common IB kernels.
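
For reference, the postulates mentioned here pin down the classic 4-point IB kernel of Peskin in closed form (the baseline the new, smoother 6-point kernel improves upon); a sketch, with the zeroth-moment (partition-of-unity) property shown via the spreading weights:

```python
import math

def phi4(r):
    """Peskin's standard 4-point immersed-boundary kernel."""
    a = abs(r)
    if a < 1.0:
        return (3 - 2 * a + math.sqrt(1 + 4 * a - 4 * a * a)) / 8
    if a < 2.0:
        return (5 - 2 * a - math.sqrt(-7 + 12 * a - 4 * a * a)) / 8
    return 0.0

def spread_weights(r):
    """Weights spreading a Lagrangian force at fractional grid offset
    r in [0, 1) to the 4 nearest grid points; the postulates force
    them to sum to exactly 1 for every r."""
    return [phi4(r - j) for j in (-1, 0, 1, 2)]
```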

  17. New analytical TEMOM solutions for a class of collision kernels in the theory of Brownian coagulation

    NASA Astrophysics Data System (ADS)

    He, Qing; Shchekin, Alexander K.; Xie, Ming-Liang

    2015-06-01

    New analytical solutions in the theory of Brownian coagulation with a wide class of collision kernels have been found using the Taylor-series expansion method of moments (TEMOM). It has been shown, for different power exponents in the collision kernels of this class and for arbitrary initial conditions, that the relative rates of change of the zeroth and second moments of the particle volume distribution have the same long-time behavior, with power exponent -1, while the dimensionless particle moment related to the geometric standard deviation tends to a constant value equal to 2. The power exponent of the collision kernel in the class studied affects the time needed to approach the self-preserving distribution: the smaller the value of the exponent, the longer the time. It has also been shown that a constant collision kernel gives results for the moments in Brownian coagulation which are very close to those in the continuum regime.
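
The constant-kernel case mentioned at the end is the one with a textbook closed form: the zeroth moment (number density) obeys dM0/dt = -(1/2)βM0², whose solution is M0(t) = M0(0)/(1 + βM0(0)t/2). A small RK4 sketch to check the moment ODE against that analytic result (the function and parameter names are illustrative):

```python
def m0_constant_kernel(n0, beta, t, steps=10000):
    """Zeroth moment under coagulation with constant collision kernel
    beta, integrating dM0/dt = -(1/2)*beta*M0**2 with classical RK4.
    Analytic solution for comparison: n0 / (1 + beta*n0*t/2)."""
    f = lambda n: -0.5 * beta * n * n
    h = t / steps
    n = n0
    for _ in range(steps):
        k1 = f(n)
        k2 = f(n + 0.5 * h * k1)
        k3 = f(n + 0.5 * h * k2)
        k4 = f(n + h * k3)
        n += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return n
```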

  18. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    PubMed Central

    2016-01-01

    This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for a robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain the low-dimensional shape space, which is trained on data with the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space obtained by the nonlinear manifold learning technique and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track the object position and contour and to avoid background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking the human position and describing the human contour. PMID:27379165
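
The fixed-shape baseline that the adaptive kernel generalizes is plain mean-shift mode seeking: repeatedly move the window center to the mean of the points inside it until it stops moving. A minimal sketch with a flat (uniform) circular kernel (the adaptive method replaces this fixed window with a learned, shape-adapted one):

```python
import numpy as np

def mean_shift(points, start, bandwidth=1.0, iters=100):
    """Mode seeking with a flat kernel of radius `bandwidth`."""
    points = np.asarray(points, dtype=float)
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        dist = np.linalg.norm(points - x, axis=1)
        inside = points[dist <= bandwidth]   # samples under the kernel
        if len(inside) == 0:
            break
        nxt = inside.mean(axis=0)            # shift to the local mean
        if np.linalg.norm(nxt - x) < 1e-9:   # converged to a mode
            return nxt
        x = nxt
    return x
```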

  19. Phylodynamic Inference with Kernel ABC and Its Application to HIV Epidemiology

    PubMed Central

    Poon, Art F.Y.

    2015-01-01

    The shapes of phylogenetic trees relating virus populations are determined by the adaptation of viruses within each host, and by the transmission of viruses among hosts. Phylodynamic inference attempts to reverse this flow of information, estimating parameters of these processes from the shape of a virus phylogeny reconstructed from a sample of genetic sequences from the epidemic. A key challenge to phylodynamic inference is quantifying the similarity between two trees in an efficient and comprehensive way. In this study, I demonstrate that a new distance measure, based on a subset tree kernel function from computational linguistics, confers a significant improvement over previous measures of tree shape for classifying trees generated under different epidemiological scenarios. Next, I incorporate this kernel-based distance measure into an approximate Bayesian computation (ABC) framework for phylodynamic inference. ABC bypasses the need for an analytical solution of model likelihood, as it only requires the ability to simulate data from the model. I validate this “kernel-ABC” method for phylodynamic inference by estimating parameters from data simulated under a simple epidemiological model. Results indicate that kernel-ABC attained greater accuracy for parameters associated with virus transmission than leading software on the same data sets. Finally, I apply the kernel-ABC framework to study a recent outbreak of a recombinant HIV subtype in China. Kernel-ABC provides a versatile framework for phylodynamic inference because it can fit a broader range of models than methods that rely on the computation of exact likelihoods. PMID:26006189
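
The ABC machinery described here reduces, in its simplest rejection form, to: draw parameters from the prior, simulate data, and keep draws whose simulation is close to the observation under the chosen distance. In kernel-ABC that distance is a tree-kernel dissimilarity between phylogenies; the sketch below keeps it as an arbitrary callable (all names are illustrative):

```python
import random

def abc_rejection(observed, simulate, distance, prior_sample,
                  n_draws=2000, eps=0.5):
    """Rejection-ABC: accept theta ~ prior whenever the simulated
    data fall within eps of the observed data."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return accepted
```

With a toy deterministic model `simulate(theta) = theta` and absolute-difference distance, the accepted draws concentrate around the observed value, approximating the posterior without ever evaluating a likelihood.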

  20. Dusty gas with one fluid in smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Laibe, Guillaume; Price, Daniel J.

    2014-05-01

    In a companion paper we have shown how the equations describing gas and dust as two fluids coupled by a drag term can be re-formulated to describe the system as a single-fluid mixture. Here, we present a numerical implementation of the one-fluid dusty gas algorithm using smoothed particle hydrodynamics (SPH). The algorithm preserves the conservation properties of the SPH formalism. In particular, the total gas and dust mass, momentum, angular momentum and energy are all exactly conserved. Shock viscosity and conductivity terms are generalized to handle the two-phase mixture accordingly. The algorithm is benchmarked against a comprehensive suite of problems: DUSTYBOX, DUSTYWAVE, DUSTYSHOCK and DUSTYOSCILL, each of them addressing different properties of the method. We compare the performance of the one-fluid algorithm to the standard two-fluid approach. The one-fluid algorithm is found to solve both of the fundamental limitations of the two-fluid algorithm: it is no longer possible to concentrate dust below the resolution of the gas (they have the same resolution by definition), and the spatial resolution criterion h < c_s t_s, required in two-fluid codes to avoid over-damping of kinetic energy, is unnecessary. Implicit time-stepping is straightforward. As a result, the algorithm is up to ten billion times more efficient for 3D simulations of small grains. Additional benefits include the use of half as many particles, a single kernel and fewer SPH interpolations. The only limitation is that it does not capture multi-streaming of dust in the limit of zero coupling, suggesting that in this case a hybrid approach may be required.
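
The "single kernel" the mixture scheme evaluates is, in most SPH codes, the standard M4 cubic spline. A sketch of its 3D form, normalized so that it integrates to one over its compact support of radius 2h (the specific kernel used by the authors' code is not stated here, so this is the generic default):

```python
import math

def w_cubic(r, h):
    """M4 cubic-spline SPH kernel in 3D, support radius 2h."""
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)     # 3D normalization constant
    if q < 1.0:
        return sigma * (1 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2 - q) ** 3
    return 0.0
```

A quick sanity check is that the radial integral 4π ∫ r² W(r, h) dr over [0, 2h] equals 1, which is what makes SPH density sums consistent.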