A method of smoothed particle hydrodynamics using spheroidal kernels
NASA Technical Reports Server (NTRS)
Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.
1995-01-01
We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
Cen, Guanjun; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao
2015-01-01
In insects, the frequency distribution of measurements of sclerotized body parts is generally used to classify larval instars; such distributions are multimodal, with overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Moreover, fixed bandwidths have historically been chosen somewhat subjectively. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks' rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into eight and ten instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby's growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods. PMID:26546689
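The record above relies on kernel density estimation with variable (adaptive) bandwidths. As a generic illustration of that idea, the sketch below implements Abramson-style adaptive KDE, where a pilot fixed-bandwidth estimate sets a local bandwidth at each observation; this is a standard textbook construction, not the paper's Brooks-rule bandwidth selector, and the toy data and function names are ours.

```python
import math

def fixed_kde(x, data, h):
    """Gaussian kernel density estimate with a single fixed bandwidth h."""
    c = 1.0 / (h * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) / len(data)

def adaptive_kde(x, data, h):
    """Abramson-style adaptive KDE: the bandwidth shrinks where a pilot
    estimate says the density is high, sharpening multimodal structure."""
    pilot = [fixed_kde(d, data, h) for d in data]
    g = math.exp(sum(math.log(p) for p in pilot) / len(pilot))  # geometric mean
    local_h = [h * math.sqrt(g / p) for p in pilot]             # local bandwidths
    return sum(
        math.exp(-0.5 * ((x - d) / hi) ** 2) / (hi * math.sqrt(2.0 * math.pi))
        for d, hi in zip(data, local_h)
    ) / len(data)

# Bimodal toy measurements standing in for two instar stages
data = [1.00, 1.05, 1.10, 1.15, 1.20, 2.40, 2.45, 2.50, 2.55, 2.60]
```

Because each Gaussian component integrates to one, the adaptive estimate remains a proper density regardless of how the local bandwidths vary.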
A high-order fast method for computing convolution integral with smooth kernel
Qiang, Ji
2009-09-28
In this paper we report on a high-order fast method to numerically calculate a convolution integral with a smooth non-periodic kernel. The method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for the discrete summation. In principle the method can attain arbitrarily high-order accuracy, depending on the number of points used in the integral approximation, at a computational cost of O(N log(N)), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h^4), where h is the size of the computational grid. Applications of the Simpson rule based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.
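The quadrature half of the method above can be sketched directly: composite Simpson weights turn the continuous Gauss transform into a weighted discrete sum. The version below uses plain O(N²) summation for clarity (the paper's FFT step evaluates the same sum in O(N log N) because the Simpson-weighted sum is itself a discrete convolution); the function names are ours.

```python
import math

def gauss_transform_simpson(targets, a, b, n, f, sigma=1.0):
    """Continuous Gauss transform g(x) = integral_a^b K(x - y) f(y) dy with a
    unit-mass Gaussian kernel K, discretized by the composite Simpson rule.
    n must be even (n + 1 quadrature nodes).  Direct summation here; an FFT
    would accelerate it since the weighted sum is a discrete convolution."""
    h = (b - a) / n
    nodes = [a + i * h for i in range(n + 1)]
    # Composite Simpson weights: h/3 * [1, 4, 2, 4, ..., 2, 4, 1]
    w = [h / 3.0 * (1 if i in (0, n) else 4 if i % 2 else 2) for i in range(n + 1)]
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    K = lambda r: norm * math.exp(-0.5 * (r / sigma) ** 2)
    return [sum(wj * K(x - yj) * f(yj) for yj, wj in zip(nodes, w)) for x in targets]

# Convolving a standard normal density with a unit Gaussian kernel gives a
# N(0, 2) density, so the numerical result can be checked analytically.
f = lambda y: math.exp(-0.5 * y * y) / math.sqrt(2.0 * math.pi)
g0, g1 = gauss_transform_simpson([0.0, 1.0], -8.0, 8.0, 800, f)
```

With h = 0.02 the Simpson rule's O(h^4) error is far below the analytic check tolerance, consistent with the accuracy claim in the abstract.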
Pérez, Isidro A; Sánchez, M Luisa; García, M Ángeles; Pardo, Nuria
2013-07-01
CO₂ concentrations recorded for two years with a Picarro G1301 analyser at a rural site were studied using two procedures. First, the smoothing kernel method, which to date had been used with one linear and one circular variable, was applied to pairs of circular variables: wind direction, time of day, and time of year. This showed that the daily cycle was the prevailing cyclical evolution and that the highest concentrations were explained by the influence of a nearby city source, which was revealed only by directional analysis. Second, histograms were obtained; these showed that most observations lay between 380 and 410 ppm and that there was a sharp contrast during the year. Finally, the histograms were fitted with 14 distributions, the best known by analytical procedures and the remainder numerically. RMSE was used as the goodness-of-fit indicator to compare and select distributions. Most functions provided similar RMSE values. However, the best fits were obtained with numerical procedures owing to their greater flexibility, the triangular distribution being the simplest function of this kind. This distribution allowed us to identify directions and months of noticeable CO₂ input (SSE and April-May, respectively) as well as the daily cycle of the distribution symmetry. Among the functions whose parameters were calculated analytically, Erlang distributions provided satisfactory fits for the monthly analysis, and gamma distributions for the rest. By contrast, the Rayleigh and Weibull distributions gave the worst RMSE values. PMID:23602977
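Kernel smoothing over a circular variable such as wind direction, as used in the record above, typically replaces the Gaussian kernel with a von Mises kernel so that weights wrap around correctly at 0/2π. The sketch below is a minimal Nadaraya-Watson smoother of that kind on synthetic directional data; the data, the peak direction, and the function names are illustrative assumptions, not the paper's dataset.

```python
import math
import random

def vm_smooth(theta0, thetas, values, kappa=4.0):
    """Nadaraya-Watson smoother with a von Mises kernel on a circular
    predictor (e.g. wind direction); kappa acts as an inverse bandwidth."""
    w = [math.exp(kappa * math.cos(theta0 - t)) for t in thetas]
    return sum(wi * v for wi, v in zip(w, values)) / sum(w)

# Toy directional record: concentrations peak when the wind blows from ~SSE
# (about 166 degrees), loosely mimicking a single nearby city source.
random.seed(0)
dirs = [random.uniform(0.0, 2.0 * math.pi) for _ in range(200)]
conc = [395.0 + 10.0 * math.exp(math.cos(d - 2.9)) for d in dirs]
```

Smoothing a constant response returns the constant exactly (the weights cancel), and the smoothed curve recovers the directional maximum, which is the kind of structure the abstract says only directional analysis revealed.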
A short-time Beltrami kernel for smoothing images and manifolds.
Spira, Alon; Kimmel, Ron; Sochen, Nir
2007-06-01
We introduce a short-time kernel for the Beltrami image enhancing flow. The flow is implemented by "convolving" the image with a space dependent kernel in a similar fashion to the solution of the heat equation by a convolution with a Gaussian kernel. The kernel is appropriate for smoothing regular (flat) 2-D images, for smoothing images painted on manifolds, and for simultaneously smoothing images and the manifolds they are painted on. The kernel combines the geometry of the image and that of the manifold into one metric tensor, thus enabling a natural unified approach for the manipulation of both. Additionally, the derivation of the kernel gives a better geometrical understanding of the Beltrami flow and shows that the bilateral filter is a Euclidean approximation of it. On a practical level, the use of the kernel allows arbitrarily large time steps as opposed to the existing explicit numerical schemes for the Beltrami flow. In addition, the kernel works with equal ease on regular 2-D images and on images painted on parametric or triangulated manifolds. We demonstrate the denoising properties of the kernel by applying it to various types of images and manifolds. PMID:17547140
Estimating Mixture of Gaussian Processes by Kernel Smoothing.
Huang, Mian; Li, Runze; Wang, Hansheng; Yao, Weixin
2014-01-01
When the functional data are not homogeneous, e.g., there exist multiple classes of functional curves in the dataset, traditional estimation methods may fail. In this paper, we propose a new estimation procedure for the Mixture of Gaussian Processes, to incorporate both functional and inhomogeneous properties of the data. Our method can be viewed as a natural extension of high-dimensional normal mixtures. However, the key difference is that smoothed structures are imposed for both the mean and covariance functions. The model is shown to be identifiable, and can be estimated efficiently by a combination of the ideas from EM algorithm, kernel regression, and functional principal component analysis. Our methodology is empirically justified by Monte Carlo simulations and illustrated by an analysis of a supermarket dataset. PMID:24976675
Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K
2015-05-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. PMID:25791435
Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.
2014-01-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
Smoothing Methods for Estimating Test Score Distributions.
ERIC Educational Resources Information Center
Kolen, Michael J.
1991-01-01
Estimation/smoothing methods that are flexible enough to fit a wide variety of test score distributions are reviewed: the kernel method, a strong true-score model-based method, and a method that uses polynomial log-linear models. Applications of these methods include describing/comparing test score distributions, estimating norms, and estimating…
Jointly optimal bandwidth selection for the planar kernel-smoothed density-ratio.
Davies, Tilman M
2013-06-01
The kernel-smoothed density-ratio or 'relative risk' function for planar point data is a useful tool for examining disease rates over a certain geographical region. Instrumental to the quality of the resulting risk surface estimate is the choice of bandwidth for computation of the required numerator and denominator densities. The challenge associated with finding some 'optimal' smoothing parameter for standalone implementation of the kernel estimator given observed data is compounded when we deal with the density-ratio per se. To date, only one method specifically designed for calculation of density-ratio optimal bandwidths has received any notable attention in the applied literature. However, this method exhibits significant variability in the estimated smoothing parameters. In this work, the first practical comparison of this selector with a little-known alternative technique is provided. The possibility of exploiting an asymptotic MISE formulation in an effort to control excess variability is also examined, and numerical results seem promising. PMID:23725887
Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels.
Jayasumana, Sadeep; Hartley, Richard; Salzmann, Mathieu; Li, Hongdong; Harandi, Mehrtash
2015-12-01
In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels. PMID:26539851
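A minimal sketch of the kernel construction in the record above: the Gaussian RBF defined with the log-Euclidean metric on SPD matrices, one of the cases the paper identifies as positive definite. To stay dependency-free the sketch restricts to diagonal SPD matrices, for which the matrix logarithm is an elementwise log (full SPD matrices would need an eigendecomposition-based logm); the toy matrices and function names are ours.

```python
import math

def log_euclidean_dist(A, B):
    """Log-Euclidean distance between two SPD matrices, here restricted to
    diagonal matrices (lists of positive diagonal entries) so that the
    matrix logarithm reduces to an elementwise log."""
    return math.sqrt(sum((math.log(a) - math.log(b)) ** 2 for a, b in zip(A, B)))

def gaussian_rbf_gram(mats, gamma=0.5):
    """Gram matrix of k(X, Y) = exp(-gamma * d_LE(X, Y)^2).  The
    log-Euclidean metric embeds SPD matrices into a Euclidean space, which
    is why the Gaussian RBF built on it stays positive definite."""
    n = len(mats)
    return [[math.exp(-gamma * log_euclidean_dist(mats[i], mats[j]) ** 2)
             for j in range(n)] for i in range(n)]

spd = [[1.0, 2.0], [0.5, 1.0], [3.0, 0.2]]  # diagonals of three 2x2 SPD matrices
K = gaussian_rbf_gram(spd)
```

Positive definiteness of the resulting 3x3 Gram matrix can be confirmed with Sylvester's criterion (all leading principal minors positive).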
Adaptive Optimal Kernel Smooth-Windowed Wigner-Ville Distribution for Digital Communication Signal
NASA Astrophysics Data System (ADS)
Tan, Jo Lynn; Sha'ameri, Ahmad Zuribin
2009-12-01
Time-frequency distributions (TFDs) are powerful tools for representing the energy content of a time-varying signal in both the time and frequency domains simultaneously, but they suffer from interference due to cross-terms. Various methods have been described to remove these cross-terms, and they are typically signal-dependent. Thus, there is no single TFD with a fixed window or kernel that can produce an accurate time-frequency representation (TFR) for all types of signals. In this paper, a globally adaptive optimal kernel smooth-windowed Wigner-Ville distribution (AOK-SWWVD) is designed for digital modulation signals such as ASK, FSK, and M-ary FSK, where its separable kernel is determined automatically from the input signal, without prior knowledge of the signal. This optimum kernel is capable of removing the cross-terms and maintaining an accurate time-frequency representation at SNRs as low as 0 dB. It is shown that this system is comparable to a system with prior knowledge of the signal.
NASA Astrophysics Data System (ADS)
García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin
2014-10-01
Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has a second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overhead. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
PET image reconstruction using kernel method.
Wang, Guobao; Qi, Jinyi
2015-01-01
Image reconstruction from low-count positron emission tomography (PET) projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4-D dynamic PET patient dataset showed promising results. PMID:25095249
PET Image Reconstruction Using Kernel Method
Wang, Guobao; Qi, Jinyi
2014-01-01
Image reconstruction from low-count PET projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization (EM) algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4D dynamic PET patient dataset showed promising results. PMID:25095249
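The kernelized EM update described in the two records above writes the image as x = Kα and iterates multiplicatively on the coefficients, α ← α ⊙ KᵀAᵀ(y / (AKα)) / (KᵀAᵀ1). The sketch below implements that update on a deliberately tiny toy problem; the system matrix A, kernel matrix K, data y, and iteration count are illustrative assumptions, not the authors' setup or code.

```python
def matvec(M, v):
    """M @ v for a matrix stored as a list of rows."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def tmatvec(M, v):
    """M^T @ v without forming the transpose."""
    return [sum(M[i][j] * v[i] for i in range(len(M))) for j in range(len(M[0]))]

def kernel_em(A, K, y, n_iter=1000):
    """Kernelized MLEM: image x = K @ alpha, multiplicative update on alpha:
    alpha <- alpha * K^T A^T (y / (A K alpha)) / (K^T A^T 1)."""
    m = len(A)
    alpha = [1.0] * len(A[0])
    sens = tmatvec(K, tmatvec(A, [1.0] * m))       # K^T A^T 1
    for _ in range(n_iter):
        yhat = matvec(A, matvec(K, alpha))          # forward projection
        ratio = [yi / max(yh, 1e-12) for yi, yh in zip(y, yhat)]
        back = tmatvec(K, tmatvec(A, ratio))        # kernelized backprojection
        alpha = [a * b / s for a, b, s in zip(alpha, back, sens)]
    return matvec(K, alpha)

# Toy 4-pixel image, 4 detector bins, smoothing kernel from pixel adjacency.
A = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1]]
K = [[0.8, 0.2, 0.0, 0.0], [0.2, 0.6, 0.2, 0.0],
     [0.0, 0.2, 0.6, 0.2], [0.0, 0.0, 0.2, 0.8]]
x_true = [2.0, 1.0, 1.0, 2.0]
y = matvec(A, x_true)          # noiseless toy data
x_hat = kernel_em(A, K, y)
```

The multiplicative form keeps the estimate nonnegative, and on consistent noiseless data the forward projection of the reconstruction converges to the data.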
NASA Astrophysics Data System (ADS)
Juan-Mian, Lei; Xue-Ying, Peng
2016-02-01
Kernel gradient free smoothed particle hydrodynamics (KGF-SPH) is a modified smoothed particle hydrodynamics (SPH) method with higher precision than conventional SPH. However, the Laplacian in KGF-SPH is approximated by a two-pass model, which increases the computational cost. A new discretization scheme for the Laplacian is proposed in this paper, and a method with higher precision and better stability, called Improved KGF-SPH, is developed by modifying KGF-SPH with this new Laplacian model. One-dimensional (1D) and two-dimensional (2D) heat conduction problems are used to test the precision and stability of the Improved KGF-SPH. The numerical results demonstrate that the Improved KGF-SPH is more accurate than SPH and more stable than KGF-SPH. Natural convection in a closed square cavity at different Rayleigh numbers is modeled by the Improved KGF-SPH with shifting particle position, and the results are compared with those of SPH and the finite volume method (FVM). The numerical results demonstrate that the Improved KGF-SPH is a more accurate method for studying and modeling heat transfer problems.
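For context on the Laplacian discretizations compared above, the sketch below implements the conventional SPH Laplacian (the Brookshaw-type approximation used in standard SPH heat-conduction tests), not the KGF-SPH or Improved KGF-SPH schemes themselves; the 1D setup, particle spacing, and smoothing length are illustrative choices.

```python
import math

def w_prime(r, h):
    """Derivative dW/dr of the 1D cubic spline kernel (normalization 2/(3h))."""
    q = r / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (-3.0 * q + 2.25 * q * q) / h
    if q < 2.0:
        return sigma * (-0.75 * (2.0 - q) ** 2) / h
    return 0.0

def laplacian_brookshaw(xs, fs, m, rho, h):
    """Brookshaw-type SPH Laplacian at each particle:
    lap f_i ~ 2 * sum_j (m/rho) (f_i - f_j) W'(|x_ij|) / |x_ij|."""
    lap = []
    for i, xi in enumerate(xs):
        s = 0.0
        for j, xj in enumerate(xs):
            if i == j:
                continue
            r = abs(xi - xj)
            if r < 2.0 * h:
                s += 2.0 * (m / rho) * (fs[i] - fs[j]) * w_prime(r, h) / r
        lap.append(s)
    return lap

# Uniform 1D particle arrangement; f(x) = x^2 has Laplacian 2 everywhere.
dx = 0.05
xs = [i * dx for i in range(-40, 41)]
fs = [x * x for x in xs]
lap = laplacian_brookshaw(xs, fs, m=dx, rho=1.0, h=1.2 * dx)
```

On this well-sampled quadratic field the interior estimate lands within a few percent of the exact value 2; near the boundaries the truncated kernel support degrades it, which is the kind of error higher-precision schemes such as KGF-SPH aim to reduce.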
ERIC Educational Resources Information Center
Zheng, Yinggan; Gierl, Mark J.; Cui, Ying
2010-01-01
This study combined the kernel smoothing procedure and a nonparametric differential item functioning statistic--Cochran's Z--to statistically test the difference between the kernel-smoothed item response functions for reference and focal groups. Simulation studies were conducted to investigate the Type I error and power of the proposed…
Kernel method and linear recurrence system
NASA Astrophysics Data System (ADS)
Hou, Qing-Hu; Mansour, Toufik
2008-06-01
Based on the kernel method, we present systematic methods for solving systems of equations for generating functions in two variables. Using these methods, we obtain the generating functions for the number of permutations avoiding 1234 and 12k(k-1)...3 and of permutations avoiding 1243 and 12...k.
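The kernel method referred to here, in its simplest single-equation form, cancels an unknown boundary series by substituting a root of the "kernel" polynomial. A standard textbook illustration (Dyck paths, not the pattern-avoidance system of the paper) runs as follows:

```latex
% Dyck prefixes counted by length (z) and final height (u): a step goes up
% (factor zu) or, when the height is positive, down (factor z/u):
F(z,u) = 1 + zu\,F(z,u) + \frac{z}{u}\bigl(F(z,u) - F(z,0)\bigr).
% Collecting F(z,u) exposes the kernel K(z,u) = 1 - zu - z/u:
\Bigl(1 - zu - \tfrac{z}{u}\Bigr) F(z,u) = 1 - \frac{z}{u}\,F(z,0).
% Substituting the root u_0(z) = \frac{1-\sqrt{1-4z^2}}{2z} of K(z,u_0)=0
% annihilates the left-hand side and determines the unknown boundary term:
F(z,0) = \frac{u_0(z)}{z} = \frac{1-\sqrt{1-4z^2}}{2z^2}
       = \sum_{n\ge 0} C_n\, z^{2n},
% the generating function of the Catalan numbers C_n.
```

The two-variable systems treated in the paper generalize this: several unknown series are eliminated by substituting several kernel roots.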
Nonlinear projection trick in kernel methods: an alternative to the kernel trick.
Kwak, Nojun
2013-12-01
In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named the nonlinear projection trick in contrast to the kernel trick. With this technique, the applicability of the kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses L1-norm instead of L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach. PMID:24805227
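The core idea above is an explicit finite-dimensional map whose inner products reproduce the kernel matrix. The paper obtains it from the eigenvalue decomposition of K; as a dependency-free sketch of the same idea, the code below uses a Cholesky factorization K = LLᵀ instead (a deliberate stand-in, since any factor with matching Gram matrix gives such a map), with a toy scalar dataset and our own function names.

```python
import math

def rbf_gram(X, gamma=1.0):
    """Gaussian RBF kernel matrix for a list of scalar inputs."""
    return [[math.exp(-gamma * (xi - xj) ** 2) for xj in X] for xi in X]

def cholesky(K):
    """Lower-triangular L with K = L L^T, for K symmetric positive definite."""
    n = len(K)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = K[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    return L

X = [0.0, 0.5, 1.3, 2.0]
K = rbf_gram(X)
L = cholesky(K)   # row i of L: explicit n-dimensional image of X[i]
```

Any algorithm that works on plain vectors can now be run on the rows of L, and dot products between those rows recover the kernel values exactly, which is the property the nonlinear projection trick exploits.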
Multiobjective Optimization for Model Selection in Kernel Methods in Regression
You, Di; Benitez-Quiroz, C. Fabian; Martinez, Aleix M.
2016-01-01
Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is given by the bias-vs-variance trade-off. If the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large. If the model is fixed (i.e., low complexity), the bias will be large. The second problem is to define an approach for selecting the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which measures the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a trade-off between the bias and the variance of the learned function. That is, the goal is to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors as compared to methods in the state of the art. PMID:25291740
Multiobjective optimization for model selection in kernel methods in regression.
You, Di; Benitez-Quiroz, Carlos Fabian; Martinez, Aleix M
2014-10-01
Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is given by the bias-versus-variance tradeoff. If the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large. If the model is fixed (i.e., low complexity), the bias will be large. The second problem is to define an approach for selecting the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which measures the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a tradeoff between the bias and the variance of the learned function. That is, the goal is to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition, and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors as compared with methods in the state of the art. PMID:25291740
Modified wavelet kernel methods for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Hsu, Pai-Hui; Huang, Xiu-Man
2015-10-01
Hyperspectral images capture the earth's surface in several hundred spectral bands. Such abundant spectral data should improve the ability to classify land use/cover types. However, due to the high dimensionality of hyperspectral data, traditional classification methods are not suitable for hyperspectral data classification. The common remedy is dimensionality reduction via feature extraction before classification. Kernel methods such as the support vector machine (SVM) and multiple kernel learning (MKL) have been successfully applied to hyperspectral image classification. In kernel method applications, the selection of the kernel function plays an important role. Wavelet kernels built from multidimensional wavelet functions can find an optimal approximation of the data in feature space for classification. SVMs with wavelet kernels (WSVM) have also been applied to hyperspectral data and improve classification accuracy. In this study, a wavelet kernel method combining the multiple kernel learning algorithm with wavelet kernels is proposed for hyperspectral image classification. After an appropriate linear combination of kernel functions is selected, the hyperspectral data are transformed into the wavelet feature space, which should have an optimal data distribution for kernel learning and classification. Finally, the proposed methods were compared with existing methods on a real hyperspectral data set. The results show that the proposed wavelet kernel methods perform well and would be an appropriate tool for hyperspectral image classification.
Application of smoothed particle hydrodynamics method in aerodynamics
NASA Astrophysics Data System (ADS)
Cortina, Miguel
2014-11-01
Smoothed Particle Hydrodynamics (SPH) is a meshless Lagrangian method in which the domain is represented by particles. Each particle is assigned properties such as mass, pressure, density, temperature, and velocity. These properties are then evaluated at the particle positions using a smoothing kernel that integrates over the values of the surrounding particles. In the present study the SPH method is first used to obtain numerical solutions for fluid flow over a cylinder; the same approach is then applied to flow over an airfoil.
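The kernel summation described in the abstract above can be sketched concretely for the simplest case, the SPH density estimate ρ_i = Σ_j m_j W(x_i − x_j, h), here in 1D with the standard cubic spline kernel; the uniform toy particle arrangement and smoothing length are our illustrative choices.

```python
import math

def cubic_spline_1d(r, h):
    """Standard 1D cubic spline (M4) smoothing kernel with unit integral."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(xs, m, h):
    """SPH density estimate rho_i = sum_j m_j W(x_i - x_j, h)."""
    return [sum(m * cubic_spline_1d(xi - xj, h) for xj in xs) for xi in xs]

# Uniformly spaced unit-mass particles: interior density should be ~ m / dx.
dx = 0.1
xs = [i * dx for i in range(-30, 31)]
rho = sph_density(xs, m=1.0, h=1.2 * dx)
```

Pressure, temperature, or velocity fields are interpolated with the same kernel-weighted sum, with each neighbor's contribution weighted by m_j/ρ_j.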
ERIC Educational Resources Information Center
Holland, Paul W.; Thayer, Dorothy T.
A new and unified approach to test equating is described that is based on log-linear models for smoothing score distributions and on the kernel method of nonparametric density estimation. The new method contains both linear and standard equipercentile methods as special cases and can handle several important equating data collection designs. An…
Kernel map compression for speeding the execution of kernel-based methods.
Arif, Omar; Vela, Patricio A
2011-06-01
The use of Mercer kernel methods in statistical learning theory provides for strong learning capabilities, as seen in kernel principal component analysis and support vector machines. Unfortunately, after learning, the computational complexity of execution through a kernel is of the order of the size of the training set, which is quite large for many applications. This paper proposes a two-step procedure for arriving at a compact and computationally efficient execution procedure. After learning in the kernel space, the proposed extension exploits the universal approximation capabilities of generalized radial basis function neural networks to efficiently approximate and replace the projections onto the empirical kernel map used during execution. Sample applications demonstrate significant compression of the kernel representation with graceful performance loss. PMID:21550884
Huang, Lulu; Massa, Lou
2010-01-01
The Kernel Energy Method (KEM) provides a way to calculate the ab initio energy of very large biological molecules. The results are accurate, and the computational time is reduced. However, by using a list of double kernel interactions, a significant additional reduction of computational effort may be achieved while still retaining ab initio accuracy. A numerical comparison of the indices that name the known double interactions in question allows one to list higher-order interactions having the property of topological continuity within the full molecule of interest. When that list of interactions is unpacked as a kernel expansion, which weights the relative importance of each kernel in an expression for the total molecular energy, the result is high accuracy and a further significant reduction in computational effort. As an illustration, a KEM molecular energy calculation based upon the HF/STO-3G chemical model is applied to the protein insulin. PMID:21243065
Comparison of Kernel Equating and Item Response Theory Equating Methods
ERIC Educational Resources Information Center
Meng, Yu
2012-01-01
The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…
Introduction to Kernel Methods: Classification of Multivariate Data
NASA Astrophysics Data System (ADS)
Fauvel, M.
2016-05-01
In this chapter, kernel methods are presented for the classification of multivariate data. An introductory example illustrates the main idea of kernel methods. Emphasis is then placed on the support vector machine (SVM): structural risk minimization is presented, and linear and non-linear SVMs are described. Finally, a full example of SVM classification is given on simulated hyperspectral data.
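A minimal concrete instance of the chapter's SVM material: a linear primal hinge-loss SVM trained by subgradient descent (a Pegasos-style scheme, chosen here for brevity rather than the dual solvers usually presented); the toy dataset, step size, and regularization constant are illustrative assumptions.

```python
def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Primal hinge-loss SVM trained by subgradient descent.
    X: list of feature lists, y: labels in {-1, +1}."""
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wk * xk for wk, xk in zip(w, xi)) + b)
            if margin < 1.0:   # point violates the margin: hinge subgradient
                w = [wk - lr * (lam * wk - yi * xk) for wk, xk in zip(w, xi)]
                b += lr * yi
            else:              # only the L2 regularizer acts
                w = [wk - lr * lam * wk for wk in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wk * xk for wk, xk in zip(w, x)) + b >= 0.0 else -1

# Linearly separable toy problem
X = [[1.0, 2.0], [2.0, 3.0], [2.5, 1.8], [-1.0, -1.5], [-2.0, -1.0], [-1.5, -2.5]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```

Replacing the inner products with kernel evaluations is exactly the step that turns this linear machine into the non-linear SVM the chapter develops.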
Puso, M A; Laursen, T A
2002-05-02
Smoothing of contact surfaces can be used to eliminate the chatter typically seen with node-on-facet contact and to give a better representation of the actual contact surface. The latter effect is well demonstrated for problems with interference fits. In this work we present two methods for the smoothing of contact surfaces for 3D finite element contact. In the first method, we employ Gregory patches to smooth the faceted surface in a node-on-facet implementation. In the second method, we employ a Bezier interpolation of the faceted surface in a mortar method implementation of contact. As is well known, node-on-facet approaches can exhibit locking due to the failure of the Babuska-Brezzi condition and in some instances fail the patch test. The mortar method implementation is stable and provides optimal convergence in the energy norm of the error. In this work we demonstrate the superiority of the smoothed over the non-smoothed node-on-facet implementations. We also show where the node-on-facet method fails and present some results from the smoothed mortar method implementation.
NASA Astrophysics Data System (ADS)
Fomin, Fedor V.
Preprocessing (data reduction or kernelization) as a strategy for coping with hard problems is used in almost every practical implementation. The history of preprocessing, such as applying reduction rules to simplify truth functions, can be traced back to the 1950s [6]. A natural question in this regard is how to measure the quality of preprocessing rules proposed for a specific problem. For a long time the mathematical analysis of polynomial-time preprocessing algorithms was neglected. The basic reason for this anomaly is that if we start with an instance I of an NP-hard problem and can show that, in polynomial time, it can be replaced with an equivalent instance I' with |I'| < |I|, then that would imply P=NP in classical complexity theory.
Intelligent classification methods of grain kernels using computer vision analysis
NASA Astrophysics Data System (ADS)
Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo
2011-06-01
In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.
Constructing Bayesian formulations of sparse kernel learning methods.
Cawley, Gavin C; Talbot, Nicola L C
2005-01-01
We present here a simple technique that simplifies the construction of Bayesian treatments of a variety of sparse kernel learning algorithms. An incomplete Cholesky factorisation is employed to modify the dual parameter space, such that the Gaussian prior over the dual model parameters is whitened. The regularisation term then corresponds to the usual weight-decay regulariser, allowing the Bayesian analysis to proceed via the evidence framework of MacKay. There is, in addition, a useful by-product of the incomplete Cholesky factorisation algorithm: it also identifies a subset of the training data forming an approximate basis for the entire dataset in the kernel-induced feature space, resulting in a sparse model. Bayesian treatments of the kernel ridge regression (KRR) algorithm, with both constant and heteroscedastic (input-dependent) variance structures, and kernel logistic regression (KLR) are provided as illustrative examples of the proposed method, which we hope will be more widely applicable. PMID:16085387
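A minimal pure-Python sketch of the kernel ridge regression model used above as an illustrative example (without the Bayesian treatment; the RBF kernel, bandwidth `gamma` and regularisation `lam` are illustrative assumptions, not values from the paper). KRR solves the dual system (K + lam*I) alpha = y:

```python
import math

def rbf(x, z, gamma=1.0):
    # Gaussian (RBF) kernel on scalars
    return math.exp(-gamma * (x - z) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (small dense systems only)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(xs, ys, lam=0.01, gamma=1.0):
    # Solve (K + lam*I) alpha = y for the dual coefficients alpha
    n = len(xs)
    K = [[rbf(xs[i], xs[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, list(ys))

def krr_predict(xs, alpha, x, gamma=1.0):
    # Prediction is a kernel expansion over the training points
    return sum(a * rbf(xi, x, gamma) for a, xi in zip(alpha, xs))
```

The weight-decay term `lam` plays exactly the role of the Gaussian prior variance that the paper's evidence framework tunes automatically.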
Local coding based matching kernel method for image classification.
Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong
2014-01-01
This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method. PMID:25119982
Chebyshev moment problems: Maximum entropy and kernel polynomial methods
Silver, R.N.; Roeder, H.; Voter, A.F.; Kress, J.D.
1995-12-31
Two Chebyshev recursion methods are presented for calculations with very large sparse Hamiltonians: the kernel polynomial method (KPM) and the maximum entropy method (MEM). They are applicable to physical properties involving large numbers of eigenstates, such as densities of states, spectral functions, thermodynamics, total energies for Monte Carlo simulations and forces for tight-binding molecular dynamics. This paper emphasizes efficient algorithms.
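The core of the kernel polynomial method is a Chebyshev moment expansion damped by a kernel such as Jackson's; a minimal pure-Python sketch for a scalar function on [-1, 1] (a Hamiltonian version would obtain the moments from stochastic traces of Chebyshev matrix polynomials instead of quadrature):

```python
import math

def chebyshev_moments(f, order, quad_pts=400):
    # mu_n = (2 - delta_{n0})/pi * integral of f(x) T_n(x)/sqrt(1-x^2),
    # evaluated by Gauss-Chebyshev quadrature: x_k = cos(t_k)
    mus = []
    for n in range(order):
        s = sum(f(math.cos(t)) * math.cos(n * t)
                for t in (math.pi * (k + 0.5) / quad_pts for k in range(quad_pts)))
        mus.append((1.0 if n == 0 else 2.0) * s / quad_pts)
    return mus

def jackson(order):
    # Jackson damping factors g_n, which suppress Gibbs oscillations
    N, c = order, math.pi / (order + 1)
    return [((N - n + 1) * math.cos(n * c) + math.sin(n * c) / math.tan(c)) / (N + 1)
            for n in range(N)]

def kpm_eval(mus, gs, x):
    # f(x) ~ sum_n g_n mu_n T_n(x), using T_n(cos t) = cos(n t)
    t = math.acos(max(-1.0, min(1.0, x)))
    return sum(g * m * math.cos(n * t) for n, (m, g) in enumerate(zip(mus, gs)))
```

Even a discontinuous target (e.g. a step in an integrated density of states) is recovered smoothly, at resolution set by the expansion order.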
Multiple predictor smoothing methods for sensitivity analysis.
Helton, Jon Craig; Storlie, Curtis B.
2006-08-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
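Of the smoothing procedures listed, LOESS is the simplest to sketch; a pure-Python local linear version with tricube weights (the span fraction and other details are generic textbook choices, not the study's settings):

```python
import math

def loess_point(xs, ys, x0, frac=0.5):
    # Locally weighted linear regression (LOESS) estimate at x0:
    # fit a weighted straight line using the nearest frac of the data.
    n = len(xs)
    k = max(2, math.ceil(frac * n))
    h = sorted(abs(x - x0) for x in xs)[k - 1] or 1e-12
    # tricube weights vanish smoothly at distance h
    w = [max(0.0, 1.0 - (abs(x - x0) / h) ** 3) ** 3 for x in xs]
    sw = sum(w)
    xbar = sum(wi * x for wi, x in zip(w, xs)) / sw
    ybar = sum(wi * y for wi, y in zip(w, ys)) / sw
    sxx = sum(wi * (x - xbar) ** 2 for wi, x in zip(w, xs))
    sxy = sum(wi * (x - xbar) * (y - ybar) for wi, x, y in zip(w, xs, ys))
    slope = sxy / sxx if sxx else 0.0
    return ybar + slope * (x0 - xbar)
```

In a sensitivity analysis, scatter of a model output against one input is smoothed this way; the amount of variation the smooth explains indicates that input's importance even when the relationship is nonlinear.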
Input space versus feature space in kernel-based methods.
Schölkopf, B; Mika, S; Burges, C C; Knirsch, P; Müller, K R; Rätsch, G; Smola, A J
1999-01-01
This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use the technique to reduce the computational complexity of SV decision functions; second, we combine it with the kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data. PMID:18252603
Method for producing smooth inner surfaces
Cooper, Charles A.
2016-05-17
The invention provides a method for preparing superconducting cavities, the method comprising causing polishing media to tumble by centrifugal barrel polishing within the cavities for a time sufficient to attain a surface smoothness of less than 15 nm root mean square roughness over approximately a 1 mm² scan area. The method also provides for a method for preparing superconducting cavities, the method comprising causing polishing media bound to a carrier to tumble within the cavities. The method also provides for a method for preparing superconducting cavities, the method comprising causing polishing media in a slurry to tumble within the cavities.
A locally adaptive kernel regression method for facies delineation
NASA Astrophysics Data System (ADS)
Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.
2015-12-01
Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology that uses kernel regression methods as an effective tool for facies delineation. The method uses both the spatial locations and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run in a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method improves significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curve performance.
Smooth electrode and method of fabricating same
Weaver, Stanton Earl; Kennerly, Stacey Joy; Aimi, Marco Francesco
2012-08-14
A smooth electrode is provided. The smooth electrode includes at least one metal layer having thickness greater than about 1 micron; wherein an average surface roughness of the smooth electrode is less than about 10 nm.
An Extended Method of SIRMs Connected Fuzzy Inference Method Using Kernel Method
NASA Astrophysics Data System (ADS)
Seki, Hirosato; Mizuguchi, Fuhito; Watanabe, Satoshi; Ishii, Hiroaki; Mizumoto, Masaharu
The single input rule modules connected fuzzy inference method (SIRMs method) by Yubazaki et al. can decrease the number of fuzzy rules drastically in comparison with conventional fuzzy inference methods. Moreover, Seki et al. have proposed a functional-type SIRMs method which generalizes the consequent part of the SIRMs method to a function. However, these SIRMs methods cannot be applied to XOR (exclusive OR). In this paper, we propose a "kernel-type SIRMs method" which applies the kernel trick to the SIRMs method, and show that this method can treat XOR. Further, a learning algorithm for the proposed SIRMs method is derived using the steepest descent method, and compared with those of the conventional SIRMs method and the kernel perceptron by applying them to the identification of nonlinear functions, a medical diagnostic system and discriminant analysis of Iris data.
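To illustrate why a kernelised model can treat XOR while a linear one cannot (this is a generic kernel perceptron for comparison, not the authors' SIRMs formulation; the kernel and learning settings are assumptions):

```python
import math

def rbf(u, v, gamma=2.0):
    # Gaussian (RBF) kernel between two feature vectors
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def train_kernel_perceptron(X, y, epochs=20, gamma=2.0):
    # Dual perceptron: alpha[i] counts the mistakes made on example i
    alpha = [0] * len(X)
    for _ in range(epochs):
        for i, xi in enumerate(X):
            s = sum(a * yj * rbf(xj, xi, gamma)
                    for a, yj, xj in zip(alpha, y, X))
            if y[i] * s <= 0:  # misclassified (or undecided)
                alpha[i] += 1
    return alpha

def predict(alpha, X, y, x, gamma=2.0):
    s = sum(a * yj * rbf(xj, x, gamma) for a, yj, xj in zip(alpha, y, X))
    return 1 if s > 0 else -1

X = [(0, 0), (0, 1), (1, 0), (1, 1)]  # XOR inputs
y = [-1, 1, 1, -1]                    # XOR labels, not linearly separable
alpha = train_kernel_perceptron(X, y)
```

No linear decision function over (x1, x2) classifies these four points correctly, but the RBF-kernelised perceptron separates them after a couple of passes.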
Hardness methods for testing maize kernels.
Fox, Glen; Manley, Marena
2009-07-01
Maize is a highly important crop in many countries around the world, both through the sale of the crop to domestic processors and the subsequent production of maize products, and as a staple food on subsistence farms in developing countries. In many countries, there have been long-term research efforts to develop a suitable hardness method that could assist the maize industry in improving efficiency in processing as well as possibly providing a quality specification for maize growers, which could attract a premium. This paper focuses specifically on hardness and reviews a number of methodologies used internationally, as well as important biochemical aspects of maize that contribute to maize hardness. Numerous foods are produced from maize, and hardness has been described as having an impact on food quality. However, the basis of hardness and the measurement of hardness are very general and would apply to any use of maize from any country. From the published literature, it would appear that one of the simpler methods used to measure hardness is a grinding step followed by a sieving step, using multiple sieve sizes. This would allow the range in hardness within a sample as well as the average particle size and/or coarse/fine ratio to be calculated. Any of these parameters could easily be used as reference values for the development of near-infrared (NIR) spectroscopy calibrations. The development of precise NIR calibrations will provide an excellent tool for breeders, handlers, and processors to deliver specific cultivars in the case of growers and bulk loads in the case of handlers, thereby ensuring the most efficient use of maize by domestic and international processors. This paper also considers previous research describing the biochemical aspects of maize that have been related to maize hardness. Both starch and protein affect hardness, with most research focusing on the storage proteins (zeins). Both the content and composition of the zein fractions affect
A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896
ERIC Educational Resources Information Center
Ferrando, Pere J.
2004-01-01
This study used kernel-smoothing procedures to estimate the item characteristic functions (ICFs) of a set of continuous personality items. The nonparametric ICFs were compared with the ICFs estimated (a) by the linear model and (b) by Samejima's continuous-response model. The study was based on a conditioned approach and used an error-in-variables…
Kernel methods for large-scale genomic data analysis
Xing, Eric P.; Schaid, Daniel J.
2015-01-01
Machine learning, particularly kernel methods, has been demonstrated as a promising new tool to tackle the challenges imposed by today's explosive data growth in genomics. These methods provide a practical and principled approach to learning how a large number of genetic variants are associated with complex phenotypes, helping to reveal the complexity in the relationship between the genetic markers and the outcome of interest. In this review, we highlight the potential key role they will have in modern genomic data processing, especially with regard to integration with classical methods for gene prioritization, prediction and data fusion. PMID:25053743
Kernel weights optimization for error diffusion halftoning method
NASA Astrophysics Data System (ADS)
Fedoseev, Victor
2015-02-01
This paper describes a study to find the best error diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of values. As an objective measure of quality, WSNR was used. The problem of multidimensional optimization was solved numerically using several well-known algorithms: Nelder-Mead, BFGS, and others. The study found a kernel that provides a quality gain of about 5% in comparison with the best of the commonly used kernels, introduced by Floyd and Steinberg. The other kernels obtained make it possible to significantly reduce the computational complexity of the halftoning process without reducing its quality.
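For reference, the classic Floyd-Steinberg kernel mentioned above distributes each pixel's quantization error to its unprocessed neighbours with weights (7, 3, 5, 1)/16; a minimal sketch of error diffusion halftoning with this kernel:

```python
def floyd_steinberg(img):
    # img: list of rows of grayscale floats in [0, 1]; returns a 0/1 halftone.
    h, w = len(img), len(img[0])
    px = [row[:] for row in img]          # working copy that accumulates error
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = px[y][x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y][x] = int(new)
            err = old - new
            # diffuse the error with the Floyd-Steinberg weights (7,3,5,1)/16
            if x + 1 < w:
                px[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    px[y + 1][x - 1] += err * 3 / 16
                px[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    px[y + 1][x + 1] += err * 1 / 16
    return out
```

Because the diffused error is (almost entirely) conserved, the local density of black and white dots tracks the input gray level, which is exactly the property the optimized kernels in the study must preserve.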
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods
Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
Estimating the Bias of Local Polynomial Approximation Methods Using the Peano Kernel
Blair, J.; Machorro, E.; Luttman, A.
2013-03-01
The determination of uncertainty of an estimate requires both the variance and the bias of the estimate. Calculating the variance of local polynomial approximation (LPA) estimates is straightforward. We present a method, using the Peano Kernel Theorem, to estimate the bias of LPA estimates and show how this can be used to optimize the LPA parameters in terms of the bias-variance tradeoff. Figures of merit are derived and values calculated for several common methods. The results in the literature are expanded by giving bias error bounds that are valid for all lengths of the smoothing interval, generalizing the currently available asymptotic results that are only valid in the limit as the length of this interval goes to zero.
Optimal Bandwidth Selection in Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Häggström, Jenny; Wiberg, Marie
2014-01-01
The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…
Linear and kernel methods for multi- and hypervariate change detection
NASA Astrophysics Data System (ADS)
Nielsen, Allan A.; Canty, Morton J.
2010-10-01
The iteratively re-weighted multivariate alteration detection (IR-MAD) algorithm may be used both for unsupervised change detection in multi- and hyperspectral remote sensing imagery as well as for automatic radiometric normalization of multi- or hypervariate multitemporal image sequences. Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA), kernel MAF and kernel MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In image analysis the Gram matrix is often prohibitively large (its size is the number of pixels in the image squared). In this case we may sub-sample the image and carry out the kernel eigenvalue analysis on a set of training data samples only. To obtain a transformed version of the entire image we then project all pixels, which we call the test data, mapped nonlinearly onto the primal eigenvectors. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization and kernel PCA/MAF/MNF transformations have been written.
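The "data enter only via inner products in the Gram matrix" idea can be made concrete: kernel PCA first builds the Gram matrix with the kernel trick and then double-centres it, which centres the implicit feature vectors without ever computing the nonlinear mappings (the RBF kernel and bandwidth below are assumptions for illustration):

```python
import math

def rbf_gram(X, gamma=0.5):
    # Gram matrix K[i][j] = k(x_i, x_j); the mapping itself is never formed
    k = lambda u, v: math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))
    return [[k(u, v) for v in X] for u in X]

def center_gram(K):
    # Double centring: Kc = K - 1K/n - K1/n + 1K1/n^2.
    # This subtracts the feature-space mean from every mapped sample,
    # expressed purely in terms of kernel evaluations.
    n = len(K)
    row = [sum(r) / n for r in K]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + tot for j in range(n)]
            for i in range(n)]
```

An eigendecomposition of the centred Gram matrix then yields the kernel principal components; sub-sampling, as the abstract notes, keeps this matrix tractable for full images.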
Tracking flame base movement and interaction with ignition kernels using topological methods
NASA Astrophysics Data System (ADS)
Mascarenhas, A.; Grout, R. W.; Yoo, C. S.; Chen, J. H.
2009-07-01
We segment the stabilization region in a simulation of a lifted jet flame based on its topology induced by the Y_OH field. Our segmentation method yields regions that correspond to the flame base and to potential auto-ignition kernels. We apply a region-overlap based tracking method to follow the flame base and the kernels over time, to study the evolution of kernels, and to detect when the kernels merge with the flame. The combination of our segmentation and tracking methods allows us to observe flame stabilization via merging between the flame base and kernels; we also obtain Y_CH2O histories inside the kernels and detect a distinct decrease in radical concentration during the transition to a developed flame.
Decoding intracranial EEG data with multiple kernel learning method
Schrouff, Jessica; Mourão-Miranda, Janaina; Phillips, Christophe; Parvizi, Josef
2016-01-01
Background: Machine learning models have been successfully applied to neuroimaging data to make predictions about behavioral and cognitive states of interest. While these multivariate methods have greatly advanced the field of neuroimaging, their application to electrophysiological data has been less common, especially in the analysis of human intracranial electroencephalography (iEEG, also known as electrocorticography or ECoG) data, which contains a rich spectrum of signals recorded from a relatively high number of recording sites. New method: In the present work, we introduce a novel approach to determine the contribution of different bandwidths of EEG signal in different recording sites across different experimental conditions using the Multiple Kernel Learning (MKL) method. Comparison with existing method: To validate and compare the usefulness of our approach, we applied this method to an ECoG dataset that was previously analysed and published with univariate methods. Results: Our findings proved the usefulness of the MKL method in detecting changes in the power of various frequency bands during a given task and selecting automatically the most contributory signal in the most contributory site(s) of recording. Conclusions: With a single computation, the contribution of each frequency band in each recording site in the estimated multivariate model can be highlighted, which then allows formulation of hypotheses that can be tested a posteriori with univariate methods if needed. PMID:26692030
Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization
Zhao, Qiangfu; Liu, Yong
2015-01-01
A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply enough accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established according to this principle. In the feature space, we design a linear classifier as a human model to capture user preference knowledge, which cannot be represented linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that our proposed model and method can enhance IEC search significantly. PMID:25879050
Simulating non-Newtonian flows with the moving particle semi-implicit method with an SPH kernel
NASA Astrophysics Data System (ADS)
Xiang, Hao; Chen, Bin
2015-02-01
The moving particle semi-implicit (MPS) method and smoothed particle hydrodynamics (SPH) are commonly used mesh-free particle methods for free surface flows. The MPS method has superiority in incompressible flow simulation and simple programming. However, its crude kernel function is not accurate enough for the discretization of the divergence of the shear stress tensor, owing to particle inconsistency, when the MPS method is extended to non-Newtonian flows. This paper presents an improved MPS method with an SPH kernel to simulate non-Newtonian flows. To improve the consistency of the partial derivatives, the SPH cubic spline kernel and the Taylor series expansion are combined with the MPS method. This approach is suitable for all non-Newtonian fluids that can be described with τ = μ(|γ|)Δ (where τ is the shear stress tensor, μ is the viscosity, |γ| is the shear rate, and Δ is the strain tensor), e.g., the Casson and Cross fluids. Two examples are simulated, including Newtonian Poiseuille flow and the container filling process of a Cross fluid. The results for Poiseuille flow are more accurate than those of the traditional MPS method, and different filling processes are obtained in good agreement with previous results, which validates the new algorithm. For the Cross fluid, the jet fracture length can be correlated with We^0.28 Fr^0.78 (We is the Weber number, Fr is the Froude number).
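The SPH cubic spline kernel referred to above is, in Monaghan's standard form with 3D normalisation (the paper's exact variant and smoothing-length convention may differ):

```python
import math

def cubic_spline_w(r, h):
    # Monaghan cubic spline SPH kernel with compact support of radius 2h.
    # 3D normalisation sigma = 1/(pi h^3) makes the kernel integrate to 1.
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0
```

Compact support is what keeps SPH/MPS sums local: only neighbours within 2h of a particle contribute to its field estimates.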
Huang, Jessie Y.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.; Eklund, David; Childress, Nathan L.
2013-12-15
Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm.Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels.Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found
Improvements to the kernel function method of steady, subsonic lifting surface theory
NASA Technical Reports Server (NTRS)
Medan, R. T.
1974-01-01
The application of a kernel function lifting surface method to three dimensional, thin wing theory is discussed. A technique for determining the influence functions is presented. The technique is shown to require fewer quadrature points, while still calculating the influence functions accurately enough to guarantee convergence with an increasing number of spanwise quadrature points. The method also treats control points on the wing leading and trailing edges. The report introduces and employs an aspect of the kernel function method which apparently has never been used before and which significantly enhances the efficiency of the kernel function approach.
On the collocation methods for singular integral equations with Hilbert kernel
NASA Astrophysics Data System (ADS)
Du, Jinyuan
2009-06-01
In the present paper, we introduce singular integral operators, singular quadrature operators, and discretization matrices for singular integral equations with Hilbert kernel. These results both improve the classical theory of singular integral equations and develop the theory of singular quadrature with Hilbert kernel. Using them, we give a unified framework for the various collocation methods used to solve singular integral equations with Hilbert kernel numerically. Within this framework, the coincidence theorem of collocation methods follows simply and directly, and the existence and convergence of the approximate solutions are then established on the basis of that theorem.
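For reference, the equations in question can be written down concretely. In standard notation (ours, not necessarily the paper's), the dominant singular integral equation with Hilbert kernel reads

```latex
a(t)\,\varphi(t) + \frac{b(t)}{2\pi} \int_{0}^{2\pi} \varphi(s)\,\cot\frac{s-t}{2}\,\mathrm{d}s = f(t),
\qquad 0 \le t < 2\pi,
```

where the integral is understood as a Cauchy principal value. Collocation methods replace the unknown φ by a trigonometric interpolant at quadrature nodes and enforce the equation at a finite set of collocation points.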
LoCoH: Non-parameteric kernel methods for constructing home ranges and utilization distributions
Getz, Wayne M.; Fortmann-Roe, Scott; Cross, Paul C.; Lyons, Andrew J.; Ryan, Sadie J.; Wilmers, Christopher C.
2007-01-01
Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: ‘‘fixed sphere-of-influence,’’ or r -LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an ‘‘adaptive sphere-of-influence,’’ or a -LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a ), and compare them to the original ‘‘fixed-number-of-points,’’ or k -LoCoH (all kernels constructed from k -1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a -LoCoH is generally superior to k - and r -LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).
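The three neighbor-selection rules are concrete enough to sketch in code. Below is a minimal Python illustration (function names and data are ours; the published software additionally builds local convex hulls from each neighbor set and unions them into density isopleths, which is omitted here):

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def k_locoh(points, root, k):
    """k-LoCoH: the root point plus its k-1 nearest neighbors."""
    others = sorted((p for p in points if p != root), key=lambda p: dist(p, root))
    return [root] + others[:k - 1]

def r_locoh(points, root, r):
    """r-LoCoH: all points within a fixed radius r of the root."""
    return [p for p in points if dist(p, root) <= r]

def a_locoh(points, root, a):
    """a-LoCoH: nearest points whose distances to the root sum to at most a."""
    ordered = sorted(points, key=lambda p: dist(p, root))
    chosen, total = [], 0.0
    for p in ordered:
        total += dist(p, root)
        if total > a:
            break
        chosen.append(p)
    return chosen
```

Note that a-LoCoH adapts automatically: in dense regions many points satisfy the cumulative-distance budget, while in sparse regions only a few nearby points do.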
Technology Transfer Automated Retrieval System (TEKTRAN)
Solid-phase microextraction (SPME) in conjunction with GC/MS was used to distinguish non-aromatic rice (Oryza sativa, L.) kernels from aromatic rice kernels. In this method, single kernels along with 10 µl of 0.1 ng 2,4,6-Trimethylpyridine (TMP) were placed in sealed vials and heated to 80°C for 18...
A Non-smooth Newton Method for Multibody Dynamics
Erleben, K.; Ortiz, R.
2008-09-01
In this paper we deal with the simulation of rigid bodies. Rigid body dynamics has become very important for simulating motion in interactive applications, such as computer games or virtual reality. We present a novel way of computing contact forces using a Newton method. The contact problem is reformulated as a system of non-linear and non-smooth equations, and we solve this system using a non-smooth version of Newton's method. One of the main contributions of this paper is the reformulation of the complementarity problems, used to model impacts, as a system of equations that can be solved using traditional methods.
Postprocessing Fourier spectral methods: The case of smooth solutions
Garcia-Archilla, B.; Novo, J.; Titi, E.S.
1998-11-01
A postprocessing technique to improve the accuracy of Galerkin methods, when applied to dissipative partial differential equations, is examined in the particular case of smooth solutions. Pseudospectral methods are shown to perform poorly. This performance is analyzed and a refined postprocessing technique is proposed.
A Comprehensive Benchmark of Kernel Methods to Extract Protein–Protein Interactions from Literature
Tikk, Domonkos; Thomas, Philippe; Palaga, Peter; Hakenberg, Jörg; Leser, Ulf
2010-01-01
The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein–protein interactions (PPIs) reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed - convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study shows that three
Hyperbolic Divergence Cleaning Method for Godunov Smoothed Particle Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Iwasaki, K.; Inutsuka, S.-I.
2013-04-01
In this paper, we implement a divergence cleaning method into Godunov smoothed particle magnetohydrodynamics (GSPM). In the GSPM, to describe MHD shocks accurately, a Riemann solver is applied to the SPH method instead of artificial viscosity and resistivity that have been used in previous works. We confirmed that the divergence cleaning method reduces divergence errors significantly. The performance of the method is demonstrated in the numerical simulations of a strongly magnetized gas and bipolar outflow from the first core.
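For context, the hyperbolic cleaning idea (in the generalized Lagrange multiplier form of Dedner et al., which SPH schemes adapt; notation ours, details of the GSPM implementation differ) couples the divergence error to an additional scalar field ψ:

```latex
\frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{v} \times \mathbf{B}) - \nabla \psi,
\qquad
\frac{\partial \psi}{\partial t} = -c_h^2\, \nabla \cdot \mathbf{B} - \frac{\psi}{\tau},
```

so that divergence errors are propagated away at the cleaning speed c_h and damped on a timescale τ, rather than accumulating locally.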
A Simple Method for Solving the SVM Regularization Path for Semidefinite Kernels.
Sentelle, Christopher G; Anagnostopoulos, Georgios C; Georgiopoulos, Michael
2016-04-01
The support vector machine (SVM) remains a popular classifier for its excellent generalization performance and applicability of kernel methods; however, it still requires tuning of a regularization parameter, C, to achieve optimal performance. Regularization path-following algorithms efficiently solve for the solution at all possible values of the regularization parameter, relying on the fact that the SVM solution is piecewise linear in C. The SVMPath originally introduced by Hastie et al., while representing a significant theoretical contribution, does not work with semidefinite kernels. Ong et al. introduced the improved SVMPath (ISVMP) algorithm, which addresses the semidefinite kernel; however, singular value decomposition or QR factorizations are required, and a linear programming solver is required to find the next C value at each iteration. We introduce a simple implementation of the path-following algorithm that automatically handles semidefinite kernels without requiring a method to detect singular matrices, specialized factorizations, or an external solver. We provide theoretical results showing how this method resolves issues associated with the semidefinite kernel, and discuss, in detail, the potential sources of degeneracy and cycling and how cycling is resolved. Moreover, we introduce an initialization method for unequal class sizes based upon artificial variables that works within the context of the existing path-following algorithm and does not require an external solver. Experiments compare performance with the ISVMP algorithm introduced by Ong et al. and show that the proposed method is competitive in terms of training time while also maintaining high accuracy. PMID:26011894
Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.
ERIC Educational Resources Information Center
Butler, Ronald W.
The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
Smoothness Evaluation of Cotton Nonwovens Using Quality Energy Method
Technology Transfer Automated Retrieval System (TEKTRAN)
Nonwovens are finding enhanced use in next-to-skin applications such as wipes. The global wipe industry is estimated at $6-8 billion. One important attribute of a wipe is its smoothness, as it determines its end-use applications. Although there are a number of methods and techniques ...
A detailed error analysis of 13 kernel methods for protein–protein interaction extraction
2013-01-01
Background Kernel-based classification is the current state-of-the-art for extracting pairs of interacting proteins (PPIs) from free text. Various proposals have been put forward, which diverge especially in the specific kernel function, the type of input representation, and the feature sets. These proposals are regularly compared to each other regarding their overall performance on different gold standard corpora, but little is known about their respective performance at the instance level. Results We report on a detailed analysis of the shared characteristics and the differences between 13 current methods using five PPI corpora. We identified a large number of rather difficult (misclassified by most methods) and easy (correctly classified by most methods) PPIs. We show that kernels using the same input representation perform similarly on these pairs and that building ensembles using dissimilar kernels leads to significant performance gains. However, our analysis also reveals that the characteristics shared between difficult pairs are few, which lowers the hope that new methods, if built along the same lines as current ones, will deliver breakthroughs in extraction performance. Conclusions Our experiments show that current methods do not seem to do very well in capturing the shared characteristics of positive PPI pairs, which must also be attributed to the heterogeneity of the (still very few) available corpora. Our analysis suggests that performance improvements should be sought in novel feature sets rather than in novel kernel functions. PMID:23323857
NASA Astrophysics Data System (ADS)
Stein, David B.; Guy, Robert D.; Thomases, Becca
2016-01-01
The Immersed Boundary method is a simple, efficient, and robust numerical scheme for solving PDE in general domains, yet it only achieves first-order spatial accuracy near embedded boundaries. In this paper, we introduce a new high-order numerical method which we call the Immersed Boundary Smooth Extension (IBSE) method. The IBSE method achieves high-order accuracy by smoothly extending the unknown solution of the PDE from a given smooth domain to a larger computational domain, enabling the use of simple Cartesian-grid discretizations (e.g. Fourier spectral methods). The method preserves much of the flexibility and robustness of the original IB method. In particular, it requires minimal geometric information to describe the boundary and relies only on convolution with regularized delta-functions to communicate information between the computational grid and the boundary. We present a fast algorithm for solving elliptic equations, which forms the basis for simple, high-order implicit-time methods for parabolic PDE and implicit-explicit methods for related nonlinear PDE. We apply the IBSE method to solve the Poisson, heat, Burgers', and Fitzhugh-Nagumo equations, and demonstrate fourth-order pointwise convergence for Dirichlet problems and third-order pointwise convergence for Neumann problems.
Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.
2014-01-01
In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood
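The nearest-neighbor bandwidth rule described above (a unique smoothing distance per epicenter, taken from the distance to the n-th nearest neighbor) is simple to state in code. A minimal sketch follows; the Gaussian kernel shape and normalization are our assumptions for illustration, and the actual ASHM rate models differ in detail:

```python
import math

def adaptive_bandwidths(epicenters, n):
    """Adaptive smoothing distances: for each epicenter, the distance to its
    n-th nearest neighbor (small in dense clusters, large where seismicity
    is sparse)."""
    bw = []
    for i, (x, y) in enumerate(epicenters):
        d = sorted(math.hypot(x - u, y - v)
                   for j, (u, v) in enumerate(epicenters) if j != i)
        bw.append(d[n - 1])
    return bw

def smoothed_rate(grid_pt, epicenters, bandwidths):
    """Relative seismicity rate at a grid point: sum of 2-D Gaussian kernels,
    one per epicenter, each with its own bandwidth."""
    gx, gy = grid_pt
    rate = 0.0
    for (x, y), h in zip(epicenters, bandwidths):
        r2 = (gx - x) ** 2 + (gy - y) ** 2
        rate += math.exp(-r2 / (2 * h * h)) / (2 * math.pi * h * h)
    return rate
```

With a clustered catalog, the cluster members receive short smoothing distances (locally concentrated rates) while isolated events are spread broadly, which is exactly the behavior the abstract describes.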
Early discriminant method of infected kernel based on the erosion effects of laser ultrasonics
NASA Astrophysics Data System (ADS)
Fan, Chao
2015-07-01
To discriminate infected wheat kernels as early as possible, a new detection method for hidden insects, especially in the egg and larval stages, is put forward in this paper based on the erosion effect of laser ultrasonics. The surface of the grain is exposed to a pulsed laser; the absorbed energy excites ultrasound in the kernel, and infected kernels can be recognized by appropriate signal analysis. Firstly, the detection principle is derived from the classical wave equation and the experimental platform is established. Then, the detected ultrasonic signal is processed in both the time domain and the frequency domain using the FFT and DCT, and six significant features are selected as the characteristic parameters of the signal by stepwise discriminant analysis. Finally, a BP neural network is designed with these six parameters as input to classify infected kernels from normal ones. Numerous experiments were performed using twenty wheat varieties; the results show that infected kernels can be recognized effectively, with false negative and false positive error rates of 12% and 9%, respectively. The discriminant method for infected kernels based on the erosion effect of laser ultrasonics is thus feasible.
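The time/frequency feature-extraction step can be illustrated generically. The sketch below uses a naive DFT in pure Python; the paper's six specific features are not reproduced in the abstract, so the descriptors here (RMS, peak frequency bin, spectral centroid) are illustrative stand-ins only:

```python
import math, cmath

def dft_magnitudes(signal):
    """Naive DFT magnitudes (O(N^2)); adequate for short diagnostic windows."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def spectral_features(signal):
    """A few generic time- and frequency-domain descriptors of a signal."""
    mags = dft_magnitudes(signal)
    rms = math.sqrt(sum(x * x for x in signal) / len(signal))
    peak_bin = max(range(1, len(mags)), key=lambda k: mags[k])
    total = sum(mags[1:]) or 1.0
    centroid = sum(k * m for k, m in enumerate(mags[1:], start=1)) / total
    return {"rms": rms, "peak_bin": peak_bin, "centroid": centroid}
```

Features such as these would then be screened (e.g., by stepwise discriminant analysis, as in the paper) before feeding a classifier.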
NASA Astrophysics Data System (ADS)
Woolley, J. W.; Wilson, H. B.; Woodbury, K. A.
2008-11-01
Thermocouples or other measuring devices are often embedded into a solid to provide data for an inverse calculation. It is well documented that such installations will result in erroneous (biased) sensor readings unless the thermal properties of the measurement wires and surrounding insulation can be carefully matched to those of the parent domain. Since this rarely can be done, or doing so is prohibitively expensive, an alternative is to include a sensor model in the solution of the inverse problem. In this paper we consider a technique in which a thermocouple model is used to generate a correction kernel for use in the inverse solver. The technique yields a kernel function with terms in the Laplace domain. The challenge of determining the values of the correction kernel function is the focus of this paper. An adaptation of the sequential function specification method [1] as well as numerical Laplace transform inversion techniques are considered for determination of the kernel function values. Each inversion method is evaluated with analytical test functions which provide simulated "measurements". Reconstruction of the undisturbed temperature from the "measured" temperature and the correction kernel is demonstrated.
Chemical method for producing smooth surfaces on silicon wafers
Yu, Conrad
2003-01-01
An improved method for producing optically smooth surfaces in silicon wafers during wet chemical etching involves a pre-treatment rinse of the wafers before etching and a post-etching rinse. The pre-treatment with an organic solvent provides a well-wetted surface that ensures uniform mass transfer during etching, which results in optically smooth surfaces. The post-etching treatment with an acetic acid solution stops the etching instantly, preventing any uneven etching that leads to surface roughness. This method can be used to etch silicon surfaces to a depth of 200 µm or more, while the finished surfaces have a surface roughness of only 15-50 Å (RMS).
A Fourier-series-based kernel-independent fast multipole method
Zhang Bo; Huang Jingfang; Pitsianis, Nikos P.; Sun Xiaobai
2011-07-01
We present in this paper a new kernel-independent fast multipole method (FMM), named FKI-FMM, for pairwise particle interactions with translation-invariant kernel functions. FKI-FMM creates, using numerical techniques, sufficiently accurate and compressive representations of a given kernel function over multi-scale interaction regions in the form of a truncated Fourier series. It also provides economical operators for the multipole-to-multipole, multipole-to-local, and local-to-local translations that are typical and essential in FMM algorithms. The multipole-to-local translation operator, in particular, is readily diagonal and does not dominate the arithmetic operations. FKI-FMM provides an alternative and competitive option, among other kernel-independent FMM algorithms, for an efficient application of the FMM, especially for applications where the kernel function consists of multi-physics and multi-scale components such as those arising in recent studies of biological systems. We present the complexity analysis and demonstrate with experimental results the performance of FKI-FMM in accuracy and efficiency.
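The core idea, compressing a translation-invariant kernel into a truncated Fourier series over an interaction region, can be illustrated in one dimension. This is a simplified sketch of the representation only (FKI-FMM itself builds multi-scale regions and the associated translation operators):

```python
import math, cmath

def fourier_coeffs(kernel, period, n_terms, n_quad=512):
    """Fourier coefficients c_m of kernel(d) over one period, computed with
    the rectangle rule (spectrally accurate for smooth periodic data)."""
    cs = {}
    for m in range(-n_terms, n_terms + 1):
        s = 0j
        for q in range(n_quad):
            d = -period / 2 + period * q / n_quad
            s += kernel(d) * cmath.exp(-2j * math.pi * m * d / period)
        cs[m] = s / n_quad
    return cs

def eval_series(cs, period, d):
    """Evaluate the truncated Fourier representation at separation d."""
    return sum(c * cmath.exp(2j * math.pi * m * d / period)
               for m, c in cs.items()).real
```

For a smooth kernel, a modest number of Fourier terms already reproduces the pairwise interaction to high accuracy, which is what makes the compressed representation useful inside an FMM.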
NASA Astrophysics Data System (ADS)
Yang, Chunwei; Yao, Junping; Sun, Dawei; Wang, Shicheng; Liu, Huaping
2016-05-01
Automatic target recognition in infrared imagery is a challenging problem. In this paper, a kernel sparse coding method for infrared target recognition using a covariance descriptor is proposed. First, a covariance descriptor combining the gray intensity and gradient information of the infrared target is extracted as a feature representation. Then, because covariance descriptors lie on a non-Euclidean manifold, kernel sparse coding theory is used to address this difficulty. We verify the efficacy of the proposed algorithm in terms of the confusion matrices on real images consisting of seven categories of infrared vehicle targets.
Standard Errors of the Kernel Equating Methods under the Common-Item Design.
ERIC Educational Resources Information Center
Liou, Michelle; And Others
This research derives simplified formulas for computing the standard error of the frequency estimation method for equating score distributions that are continuized using a uniform or Gaussian kernel function (P. W. Holland, B. F. King, and D. T. Thayer, 1989; Holland and Thayer, 1987). The simplified formulas are applicable to equating both the…
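The Gaussian-kernel continuization underlying these equating methods can be sketched directly. The version below omits the moment-preserving linear adjustment used in the actual kernel equating literature (von Davier et al.), so it is illustrative only:

```python
import math

def gaussian_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def continuized_cdf(scores, probs, h, x):
    """Simplified Gaussian-kernel continuization of a discrete score
    distribution: a mixture of normal CDFs centered at the score points.

    Note: omits the moment-preserving rescaling of the full kernel
    equating method; for illustration only.
    """
    return sum(p * gaussian_cdf((x - s) / h) for s, p in zip(scores, probs))
```

Equating then proceeds by mapping scores through one continuized CDF and the inverse of the other; the standard errors studied in the report quantify the sampling variability of that mapping.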
Method for smoothing the surface of a protective coating
Sangeeta, D.; Johnson, Curtis Alan; Nelson, Warren Arthur
2001-01-01
A method for smoothing the surface of a ceramic-based protective coating which exhibits roughness is disclosed. The method includes the steps of applying a ceramic-based slurry or gel coating to the protective coating surface; heating the slurry/gel coating to remove volatile material; and then further heating the slurry/gel coating to cure the coating and bond it to the underlying protective coating. The slurry/gel coating is often based on yttria-stabilized zirconia, and precursors of an oxide matrix. Related articles of manufacture are also described.
ERIC Educational Resources Information Center
Wang, Tianyou
2008-01-01
Von Davier, Holland, and Thayer (2004) laid out a five-step framework of test equating that can be applied to various data collection designs and equating methods. In the continuization step, they presented an adjusted Gaussian kernel method that preserves the first two moments. This article proposes an alternative continuization method that…
A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.
Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying
2015-09-01
Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. PMID:26139508
Modeling Electrokinetic Flows by the Smoothed Profile Method
Luo, Xian; Beskok, Ali; Karniadakis, George Em
2010-01-01
We propose an efficient modeling method for electrokinetic flows based on the Smoothed Profile Method (SPM) [1–4] and spectral element discretizations. The new method allows for arbitrary differences in the electrical conductivities between the charged surfaces and the surrounding electrolyte solution. The electrokinetic forces are included in the flow equations so that the Poisson-Boltzmann and electric charge continuity equations are cast into forms suitable for SPM. The method is validated by benchmark problems of electroosmotic flow in straight channels and electrophoresis of charged cylinders. We also present simulation results of electrophoresis of charged microtubules, and show that the simulated electrophoretic mobility and anisotropy agree with the experimental values. PMID:20352076
Arima model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison is focused on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data on the Price of Crude Palm Oil (RM/tonne), the Exchange Rate of Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the Price of SMR 20 Rubber Type (cents/kg), comprising three different time series, are used in the comparison process. The forecasting accuracy of each model is then measured by examining the prediction errors using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the time series for Exchange Rates. On the contrary, the Exponential Smoothing Method can produce better forecasts for the Exchange Rates, whose time series has a narrow range from one point to another, but cannot produce a better prediction for a longer forecasting period.
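As a concrete reference for the error measures named above, here is simple exponential smoothing (the most basic member of the exponential smoothing family; the study's exact model variant is not specified in the abstract) together with MSE, MAD, and MAPE in plain Python:

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: the forecast for t+1 is the smoothed
    level after observing t. Returns one-step-ahead forecasts for series[1:]."""
    level = series[0]
    forecasts = [level]              # forecast for series[1]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        forecasts.append(level)
    return forecasts[:-1]

def mse(actual, pred):
    """Mean Squared Error."""
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def mad(actual, pred):
    """Mean Absolute Deviation."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    """Mean Absolute Percentage Error (percent)."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)
```

Fitting both an ARIMA model and a smoother to the same hold-out period and comparing these three scores is the comparison procedure the study describes.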
A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System
Yuan, Xianfeng; Song, Mumin; Zhou, Fengyu; Chen, Zhumin; Li, Yan
2015-01-01
The wheeled robots have been successfully applied in many aspects, such as industrial handling vehicles, and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of preliminary results, a novel ML-kernel based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is archived by the fusion of the BPAs. Experimental results show that the proposed framework not only is capable of detecting and identifying the faults in the robot driving system, but also has better performance in stability and diagnosis accuracy compared with the traditional methods. PMID:26229526
The method of tailored sensitivity kernels for GRACE mass change estimates
NASA Astrophysics Data System (ADS)
Groh, Andreas; Horwath, Martin
2016-04-01
To infer mass changes (such as mass changes of an ice sheet) from time series of GRACE spherical harmonic solutions, two basic approaches (with many variants) exist: The regional integration approach (or direct approach) is based on surface mass changes (equivalent water height, EWH) from GRACE and integrates those with specific integration kernels. The forward modeling approach (or mascon approach, or inverse approach) prescribes a finite set of mass change patterns and adjusts the amplitudes of those patterns (in a least squares sense) to the GRACE gravity field changes. The present study reviews the theoretical framework of both approaches. We recall that forward modeling approaches ultimately estimate mass changes by linear functionals of the gravity field changes. Therefore, they implicitly apply sensitivity kernels and may be considered as special realizations of the regional integration approach. We show examples for sensitivity kernels intrinsic to forward modeling approaches. We then propose to directly tailor sensitivity kernels (or in other words: mass change estimators) by a formal optimization procedure that minimizes the sum of propagated GRACE solution errors and leakage errors. This approach involves the incorporation of information on the structure of GRACE errors and the structure of those mass change signals that are most relevant for leakage errors. We discuss the realization of this method, as applied within the ESA "Antarctic Ice Sheet CCI (Climate Change Initiative)" project. Finally, results for the Antarctic Ice Sheet in terms of time series of mass changes of individual drainage basins and time series of gridded EWH changes are presented.
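The key point above, that any such estimator is ultimately a linear functional of the surface mass change field, can be sketched in a few lines. The grid, kernel weights, and cell area below are invented for illustration and are not from the CCI processing:

```python
def regional_integration(ewh, kernel, cell_area):
    """Mass change as a linear functional of the EWH field:
    rho_w * cell_area * sum_i kernel_i * ewh_i (kg), rho_w = 1000 kg/m^3."""
    rho_w = 1000.0
    return rho_w * cell_area * sum(k * h for k, h in zip(kernel, ewh))

# Toy 4-cell "basin": kernel weight 1 inside, tapered at the edge, 0 outside.
ewh = [0.10, 0.20, 0.05, 0.00]   # equivalent water height changes (m)
kernel = [1.0, 1.0, 0.5, 0.0]    # tailored sensitivity weights
cell_area = 1.0e10               # 100 km x 100 km cells (m^2)
mass_gt = regional_integration(ewh, kernel, cell_area) / 1.0e12
print(round(mass_gt, 2), "Gt")   # -> 3.25 Gt
```

Tailoring the kernel then amounts to choosing the weights so that propagated GRACE errors plus leakage are minimized.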
Verification and large deformation analysis using the reproducing kernel particle method
Beckwith, Frank
2015-09-01
The reproducing kernel particle method (RKPM) is a meshless method used to solve general boundary value problems using the principle of virtual work. RKPM corrects the kernel approximation by introducing reproducing conditions which force the method to be complete to polynomials of arbitrary order selected by the user. Effort in recent years has led to the implementation of RKPM within the Sierra/SM physics software framework. The purpose of this report is to investigate convergence of RKPM for verification and validation purposes as well as to demonstrate the large deformation capability of RKPM in problems where the finite element method is known to experience difficulty. Results from analyses using RKPM are compared against finite element analysis. A host of issues associated with RKPM are identified and a number of potential improvements are discussed for future work.
NASA Astrophysics Data System (ADS)
Wu, Linmei; Shen, Li; Li, Zhipeng
2016-06-01
A kernel-based method for very high spatial resolution remote sensing image classification is proposed in this article. The new kernel is built from spectral, spatial, and structure information, the last acquired from a topic model, latent Dirichlet allocation. The final kernel function is defined as K = u1Kspec + u2Kspat + u3Kstru, in which Kspec, Kspat, and Kstru are radial basis function (RBF) kernels and u1 + u2 + u3 = 1. In the experiment, a comparison with three other kernel methods (spectral-based, spectral- and spatial-based, and spectral- and structure-based) is provided for a panchromatic QuickBird image of a suburban area with a size of 900 × 900 pixels and a spatial resolution of 0.6 m. The results show that the overall accuracy of the spectral- and structure-based kernel method is 80%, higher than the spectral-based method (67%) and the spectral- and spatial-based method (74%). Moreover, the accuracy of the proposed composite kernel method, which jointly uses spectral, spatial, and structure information, is the highest of the four, at 83%. The experiment also verifies the validity of the structure-information representation for remote sensing imagery.
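The composite kernel defined above (K = u1Kspec + u2Kspat + u3Kstru with u1 + u2 + u3 = 1) is simple to implement, since a convex combination of positive definite kernels is again positive definite. The feature splits, weights, and RBF width below are illustrative assumptions, not the paper's settings:

```python
import math

def rbf(u, v, gamma=1.0):
    """Radial basis function kernel between two feature vectors."""
    d2 = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-gamma * d2)

def composite_kernel(x, y, u1=0.4, u2=0.3, u3=0.3):
    """Weighted sum of spectral, spatial, and structure RBF kernels.
    x and y are (spectral, spatial, structure) feature-vector triples."""
    assert abs(u1 + u2 + u3 - 1.0) < 1e-12
    return (u1 * rbf(x[0], y[0]) +
            u2 * rbf(x[1], y[1]) +
            u3 * rbf(x[2], y[2]))

# Identical samples: each RBF term equals 1, so K sums to ~1.0.
x = ([0.2, 0.5], [1.0], [0.1, 0.9, 0.3])
print(composite_kernel(x, x))  # ~ 1.0
```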
Single corn kernel aflatoxin B1 extraction and analysis method
Technology Transfer Automated Retrieval System (TEKTRAN)
Aflatoxins are highly carcinogenic compounds produced by the fungus Aspergillus flavus. Aspergillus flavus is a phytopathogenic fungus that commonly infects crops such as cotton, peanuts, and maize. The goal was to design an effective sample preparation method and analysis for the extraction of afla...
Scalable Kernel Methods and Algorithms for General Sequence Analysis
ERIC Educational Resources Information Center
Kuksa, Pavel
2011-01-01
Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as the document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…
NASA Astrophysics Data System (ADS)
Jiang, Mingfeng; Zhang, Heng; Zhu, Lingyan; Cao, Li; Wang, Yaming; Xia, Ling; Gong, Yinglan
2015-04-01
Non-invasively reconstructing the cardiac transmembrane potentials (TMPs) from body surface potentials can be cast as a regression problem. The support vector regression (SVR) method is often used to solve such problems, but its training algorithm is computationally intensive. In this paper, another learning algorithm, termed the extreme learning machine (ELM), is proposed to reconstruct the cardiac TMPs. Moreover, ELM can be extended to single-hidden-layer feedforward neural networks with a kernel matrix (kernelized ELM), which can achieve good generalization performance at a fast learning speed. Based on realistic heart-torso models, one normal and two abnormal ventricular activation cases are used for training and testing the regression model. The experimental results show that the ELM method achieves better regression performance than the single SVR method in terms of TMP reconstruction accuracy and speed. Moreover, compared with the plain ELM method, the kernelized ELM method features good approximation and generalization ability when reconstructing the TMPs.
Effects of sample size on KERNEL home range estimates
Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.
1999-01-01
Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
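As background for the abstract above, a fixed-kernel utilization-distribution estimate is only a few lines; the bandwidth h is the "amount of smoothing" being selected. The normal-reference rule below is a simplified stand-in for the REF method (LSCV is omitted), and the relocation data are invented:

```python
import math

def gaussian_kde(points, h):
    """Fixed-kernel density estimate with a Gaussian kernel and bandwidth h."""
    n = len(points)
    def density(x):
        return sum(math.exp(-0.5 * ((x - p) / h) ** 2)
                   for p in points) / (n * h * math.sqrt(2 * math.pi))
    return density

def reference_bandwidth(points):
    """Simplified normal-reference bandwidth: h = sigma * n^(-1/5)."""
    n = len(points)
    mean = sum(points) / n
    sigma = math.sqrt(sum((p - mean) ** 2 for p in points) / (n - 1))
    return sigma * n ** (-0.2)

# One-dimensional relocations clustered around two activity centers.
locs = [1.0, 1.2, 0.8, 1.1, 0.9, 5.0, 5.1, 4.9, 5.2, 4.8]
f = gaussian_kde(locs, reference_bandwidth(locs))
# The estimated density is higher at the centers than between them.
print(f(1.0) > f(3.0) and f(5.0) > f(3.0))  # -> True
```

With too few points, the estimated density surface becomes unstable, which is the sample-size effect the study quantifies.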
Discretization errors associated with Reproducing Kernel Methods: One-dimensional domains
Voth, T.E.; Christon, M.A.
2000-01-10
The Reproducing Kernel Particle Method (RKPM) is a discretization technique for partial differential equations that uses the method of weighted residuals, classical reproducing kernel theory and modified kernels to produce either "mesh-free" or "mesh-full" methods. Although RKPM has many appealing attributes, the method is new, and its numerical performance is just beginning to be quantified. In order to address the numerical performance of RKPM, von Neumann analysis is performed for semi-discretizations of three model one-dimensional PDEs. The von Neumann analysis results are used to examine the global and asymptotic behavior of the semi-discretizations. The model PDEs considered for this analysis include the parabolic and hyperbolic (first and second-order wave) equations. Numerical diffusivity for the former and phase speed for the latter are presented over the range of discrete wavenumbers and in an asymptotic sense as the particle spacing tends to zero. Group speed is also presented for the hyperbolic problems. Excellent diffusive and dispersive characteristics are observed when a consistent mass matrix formulation is used with the proper choice of refinement parameter. In contrast, the row-sum lumped mass matrix formulation severely degrades performance. The asymptotic analysis indicates that very good rates of convergence are possible when the consistent mass matrix formulation is used with an appropriate choice of refinement parameter.
Using nonlinear kernels in seismic tomography: go beyond gradient methods
NASA Astrophysics Data System (ADS)
Wu, R.
2013-05-01
In quasi-linear inversion, a nonlinear problem is typically solved iteratively and at each step the nonlinear problem is linearized through the use of a linear functional derivative, the Fréchet derivative. Higher order terms generally are assumed to be insignificant and neglected. The linearization approach leads to the popular gradient method of seismic inversion. However, for the real Earth, the wave equation (and the real wave propagation) is strongly nonlinear with respect to the medium parameter perturbations. Therefore, the quasi-linear inversion may have a serious convergence problem for strong perturbations. In this presentation I will compare the convergence properties of the Taylor-Fréchet series and the renormalized Fréchet series, the De Wolf approximation, and illustrate the improved convergence property with numerical examples. I'll also discuss the application of nonlinear partial derivatives to least-squares waveform inversion. References: Bonnans, J., Gilbert, J., Lemarechal, C. and Sagastizabal, C., 2006, Numerical Optimization, Springer. Wu, R.S. and Y. Zheng, 2012. Nonlinear Fréchet derivative and its De Wolf approximation, Expanded Abstracts of Society of Exploration Geophysicists, SI 8.1.
Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah
2016-01-01
One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel “trick” concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Some parametric and kernel methods; least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available. PMID:27555865
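For a fixed kernel, the RKHS/ridge regression discussed above reduces to solving the linear system (K + lam*I) alpha = y and predicting with f(x) = sum_i alpha_i k(x, x_i); this is the kernel "trick" in its simplest form. The sketch below is a toy pure-Python version, not the KRMM package; the Gaussian kernel, data, and parameters are illustrative:

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian kernel on scalars."""
    return math.exp(-gamma * (x - y) ** 2)

def solve(A, b):
    """Solve A a = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    a = [0.0] * n
    for i in range(n - 1, -1, -1):
        a[i] = (M[i][n] - sum(M[i][j] * a[j] for j in range(i + 1, n))) / M[i][i]
    return a

def krr_fit(xs, ys, lam=1e-3, gamma=1.0):
    """Kernel ridge regression: alpha = (K + lam*I)^(-1) y."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)
    def predict(x):
        return sum(a * rbf(x, xi, gamma) for a, xi in zip(alpha, xs))
    return predict

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x ** 2 for x in xs]          # a nonlinear target
f = krr_fit(xs, ys)
print(abs(f(1.0) - 1.0) < 0.2)     # fits the training data closely -> True
```

Swapping the kernel function (Laplacian, polynomial, ANOVA, ...) changes the hypothesis space without changing the fitting code, which is the flexibility the abstract emphasizes.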
NASA Astrophysics Data System (ADS)
Huang, Fengzhen; Li, Jingzhen; Cao, Jun
2015-02-01
Temporally and Spatially Modulated Fourier Transform Imaging Spectrometer (TSMFTIS) is a new imaging spectrometer without moving mirrors or slits. In remote sensing, TSMFTIS relies on the push-broom motion of the flying platform to obtain the interferogram of the target. If the motion state of the platform changes during imaging, the target interferogram extracted from the remote sensing image sequence deviates from the ideal interferogram, and the recovered spectrum no longer reflects the true characteristics of the ground target. Therefore, to achieve high-precision spectrum recovery of the target, the geometric position of the target point on the TSMFTIS image plane can be calculated with a sub-pixel image registration method, and the true point interferogram of the target obtained by image interpolation. The core idea of interpolation methods (nearest-neighbor, bilinear, cubic, etc.) is to obtain the grey value at the point to be interpolated by weighting the grey values of the surrounding pixels with a kernel function built from their distances to that point. This paper adopts a Gauss-based kernel regression model and presents a kernel function that combines grey-level information (through the relative deviation) with distance information; the kernel is controlled by the degree of deviation between the grey values of the surrounding pixels and their mean, so that the weights adjust adaptively. The simulation uses partial spectral data from the push-broom hyperspectral imager (PHI) as the target spectrum and generates the successively push-broomed motion-error image sequence from the parameters of an actual aviation platform; it then obtains the interferogram of the target point with the above interpolation method and finally recovers the spectrogram with the nonuniform fast…
A new approach to a maximum a posteriori-based kernel classification method.
Nopriadi; Yamashita, Yukihiko
2012-09-01
This paper presents a new approach to maximum a posteriori (MAP)-based classification, specifically MAP-based kernel classification trained by linear programming (MAPLP). Unlike traditional MAP-based classifiers, MAPLP does not directly estimate a posterior probability for classification. Instead, it introduces a kernelized function into an objective function that behaves similarly to a MAP-based classifier. To evaluate the performance of MAPLP, a binary classification experiment was performed on 13 datasets. The results are compared with those from conventional MAP-based kernel classifiers and from other state-of-the-art classification methods, and show that MAPLP performs promisingly against them. It is argued that the proposed approach makes a significant contribution to MAP-based classification research: it widens the freedom to choose an objective function, is not constrained to the strict Bayesian sense, and can be solved by linear programming. A substantial advantage of the proposed approach is that the objective function is undemanding, having only a single parameter. This simplicity allows for further development in future research. PMID:22721808
A new method by steering kernel-based Richardson-Lucy algorithm for neutron imaging restoration
NASA Astrophysics Data System (ADS)
Qiao, Shuang; Wang, Qiao; Sun, Jia-ning; Huang, Ji-peng
2014-01-01
Motivated by industrial applications, neutron radiography has become a powerful non-destructive investigation technique. However, owing to the combined effects of neutron flux, beam collimation, the limited spatial resolution of the detector, scattering, etc., neutron images are severely degraded by blur and noise. To deal with this, by integrating steering kernel regression into the Richardson-Lucy approach, we present a novel restoration method in this paper, which is capable of suppressing noise while efficiently restoring details of the blurred imaging result. Experimental results show that, compared with the other methods, the proposed method improves restoration quality both visually and quantitatively.
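For context, the classical Richardson-Lucy iteration that the method above builds on (here without the steering-kernel regularization the paper adds) can be sketched in one dimension. The point-spread function and signal are invented toy data:

```python
def convolve(signal, psf):
    """Circular 1-D convolution with a centered PSF (length preserved)."""
    n, m = len(signal), len(psf)
    half = m // 2
    return [sum(signal[(i - j + half) % n] * psf[j] for j in range(m))
            for i in range(n)]

def richardson_lucy(observed, psf, iters=50):
    """Classical Richardson-Lucy deconvolution iteration."""
    psf_mirror = psf[::-1]
    estimate = [1.0] * len(observed)   # flat non-negative start
    for _ in range(iters):
        blurred = convolve(estimate, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

psf = [0.25, 0.5, 0.25]               # simple symmetric blur
truth = [0, 0, 0, 4, 0, 0, 0, 0]
observed = convolve(truth, psf)       # blurred "measurement"
restored = richardson_lucy(observed, psf)
# The restored peak is sharper than the blurred one.
print(max(restored) > max(observed))  # -> True
```

The multiplicative update keeps the estimate non-negative; the paper's contribution is to steer the kernel locally so that this sharpening does not also amplify noise.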
A Kernel-Free Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 4
NASA Technical Reports Server (NTRS)
Park, Young-Keun; Fahrenthold, Eric P.
2004-01-01
An improved hybrid particle-finite element method has been developed for the simulation of hypervelocity impact problems. Unlike alternative methods, the revised formulation computes the density without reference to any kernel or interpolation functions, for either the density or the rate of dilatation. This simplifies the state space model and leads to a significant reduction in computational cost. The improved method introduces internal energy variables as generalized coordinates in a new formulation of the thermomechanical Lagrange equations. Example problems show good agreement with exact solutions in one dimension and good agreement with experimental data in a three dimensional simulation.
Wang, Gang; Zhang, Xiaofeng; Su, Qingtang; Shi, Jie; Caselli, Richard J; Wang, Yalin
2015-05-01
Cortical thickness estimation in magnetic resonance imaging (MRI) is an important technique for research on brain development and neurodegenerative diseases. This paper presents a heat kernel based cortical thickness estimation algorithm, which is driven by the graph spectrum and the heat kernel theory, to capture the gray matter geometry information from the in vivo brain magnetic resonance (MR) images. First, we construct a tetrahedral mesh that matches the MR images and reflects the inherent geometric characteristics. Second, the harmonic field is computed by the volumetric Laplace-Beltrami operator and the direction of the steamline is obtained by tracing the maximum heat transfer probability based on the heat kernel diffusion. Thereby we can calculate the cortical thickness information between the point on the pial and white matter surfaces. The new method relies on intrinsic brain geometry structure and the computation is robust and accurate. To validate our algorithm, we apply it to study the thickness differences associated with Alzheimer's disease (AD) and mild cognitive impairment (MCI) on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Our preliminary experimental results on 151 subjects (51 AD, 45 MCI, 55 controls) show that the new algorithm may successfully detect statistically significant difference among patients of AD, MCI and healthy control subjects. Our computational framework is efficient and very general. It has the potential to be used for thickness estimation on any biological structures with clearly defined inner and outer surfaces. PMID:25700360
A Particle-Particle Collision Model for Smoothed Profile Method
NASA Astrophysics Data System (ADS)
Mohaghegh, Fazlolah; Mousel, John; Udaykumar, H. S.
2014-11-01
The smoothed profile method (SPM) is a continuous-forcing approach that couples particles to the fluid through a forcing term. The fluid-structure interaction takes place across a diffuse interface, which avoids a sudden transition from solid to fluid. As a monolithic approach, the SPM simulation uses an indicator-function field over the whole domain, based on the distance from each particle's boundary, where particle-particle interactions can occur. A soft-sphere potential based on the indicator-function field is defined to add an artificial pressure to the flow pressure in regions of potential overlap, producing a repulsion force that prevents overlap. A study of two particles impulsively started in an initially uniform flow shows that the particle in the wake of the other has lower acceleration, leading to frequent collisions. Various Reynolds numbers and initial distances are chosen to test the robustness of the method. A study of the drafting-kissing-tumbling of two cylindrical particles shows a deviation from the benchmarks due to the lack of rotation modeling. The method is shown to be accurate enough for simulating particle-particle collisions and can easily be extended to particle-wall modeling and to non-spherical particles.
Weighted Wilcoxon-type Smoothly Clipped Absolute Deviation Method
Wang, Lan; Li, Runze
2009-01-01
Shrinkage-type variable selection procedures have recently seen increasing applications in biomedical research. However, their performance can be adversely influenced by outliers in either the response or the covariate space. This paper proposes a weighted Wilcoxon-type smoothly clipped absolute deviation (WW-SCAD) method, which deals with robust variable selection and robust estimation simultaneously. The new procedure can be conveniently implemented with the statistical software R. We establish that the WW-SCAD correctly identifies the set of zero coefficients with probability approaching one and estimates the nonzero coefficients at the rate n^(-1/2). Moreover, with appropriately chosen weights the WW-SCAD is robust with respect to outliers in both the x and y directions. The important special case with constant weights yields an oracle-type estimator with high efficiency in the presence of heavier-tailed random errors. The robustness of the WW-SCAD is partly justified by its asymptotic performance under local shrinking contamination. We propose a BIC-type tuning parameter selector for the WW-SCAD. The performance of the WW-SCAD is demonstrated via simulations and by an application to a study that investigates the effects of personal characteristics and dietary factors on plasma beta-carotene level. PMID:18647294
A high-order Legendre-WENO kernel density function method for modeling disperse flows
NASA Astrophysics Data System (ADS)
Smith, Timothy; Pantano, Carlos
2015-11-01
We present a high-order kernel density function (KDF) method for disperse flow. The numerical method used to solve the system of hyperbolic equations utilizes a Roe-like update for equations in non-conservation form. We will present the extension of the low-order method to high order using the Legendre-WENO method and demonstrate the improved capability of the method to predict statistics of disperse flows in an accurate, consistent and efficient manner. By construction, the KDF method already enforces many realizability conditions, but others remain. The proposed method also considers these constraints, and their performance will be discussed. This project was funded by NSF project NSF-DMS 1318161.
Impact of beam smoothing method on direct drive target performance for the NIF
Rothenberg, J.E.; Weber, S.V.
1996-11-01
The impact of smoothing method on the performance of a direct drive target is modeled and examined in terms of its l-mode spectrum. In particular, two classes of smoothing methods are compared, smoothing by spectral dispersion (SSD) and the induced spatial incoherence (ISI) method. It is found that SSD using sinusoidal phase modulation (FM) results in poor smoothing at low l-modes and therefore inferior target performance at both peak velocity and ignition. Modeling of the hydrodynamic nonlinearity shows that saturation tends to reduce the difference between target performance for the smoothing methods considered. However, using SSD with more generalized phase modulation results in a smoothed spatial spectrum, and therefore target performance, which is identical to that obtained with the ISI or similar method where random phase plates are present in both methods and identical beam divergence is assumed.
NASA Astrophysics Data System (ADS)
Schroeter, Darrell; Kapit, Eliot; Thomale, Ronny; Greiter, Martin
2007-03-01
We have recently constructed a Hamiltonian that singles out the chiral spin liquid on a square lattice with periodic boundary conditions as the exact and, apart from the two-fold topological degeneracy, unique ground state [1]. The talk will present a kernel-sweeping method that greatly reduces the numerical effort required to perform the exact diagonalization of the Hamiltonian. Results from the calculation of the model on a 4x4 lattice, including the spectrum of the model, will be presented. [1] D. F. Schroeter, E. Kapit, R. Thomale, and M. Greiter, Phys. Rev. Lett. in review.
Chung, Moo K; Schaefer, Stacey M; Van Reekum, Carien M; Peschke-Schmitz, Lara; Sutterer, Mattew J; Davidson, Richard J
2014-01-01
We present a new unified kernel regression framework on manifolds. Starting with a symmetric positive definite kernel, we formulate a new bivariate kernel regression framework that is related to heat diffusion, kernel smoothing and recently popular diffusion wavelets. Various properties and performance of the proposed kernel regression framework are demonstrated. The method is subsequently applied in investigating the influence of age and gender on the human amygdala and hippocampus shapes. We detected a significant age effect on the posterior regions of hippocampi while there is no gender effect present. PMID:25485452
A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks.
Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong
2016-01-01
In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1-norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparse rate of model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparse rate of model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298
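The sparsity that the ℓ1 penalty induces can be illustrated with a centralized toy: iterative soft-thresholding (ISTA) for ℓ1-regularized least squares. This is a stand-in for, not a reproduction of, the paper's distributed KMSE algorithm; the data and parameters are invented:

```python
def soft_threshold(z, t):
    """Proximal operator of the l1 norm: shrinks z toward zero by t."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def ista(X, y, lam=0.5, step=0.01, iters=2000):
    """ISTA for l1-regularized least squares: min 0.5*||Xw - y||^2 + lam*||w||_1."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        resid = [sum(X[i][j] * w[j] for j in range(d)) - y[i] for i in range(n)]
        grad = [sum(X[i][j] * resid[i] for i in range(n)) for j in range(d)]
        w = [soft_threshold(w[j] - step * grad[j], step * lam) for j in range(d)]
    return w

# y depends only on the first feature; the l1 term drives the second weight to 0,
# which is the "sparse model" that keeps inter-node communication cheap.
X = [[1.0, 0.3], [2.0, -0.1], [3.0, 0.2], [4.0, 0.05]]
y = [2.0, 4.0, 6.0, 8.0]
w = ista(X, y)
print(round(w[0], 1), w[1] == 0.0)  # -> 2.0 True
```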
Automated endmember determination and adaptive spectral mixture analysis using kernel methods
NASA Astrophysics Data System (ADS)
Rand, Robert S.; Banerjee, Amit; Broadwater, Joshua
2013-09-01
Various phenomena in geographic regions cause a scene to contain spectrally mixed pixels. The mixtures may be linear or non-linear. It could simply be that the pixel size of a sensor is too large, so that many pixels contain patches of different materials within them (linear), or there could be microscopic mixtures and multiple scattering occurring within pixels (non-linear). Often enough, scenes contain both linear and non-linear mixing on a pixel-by-pixel basis. Furthermore, appropriate endmembers in a scene are not always easy to determine. A reference spectral library of materials may or may not be available; even if a library is available, using it directly for spectral unmixing may not always be fruitful. This study investigates a generalized kernel-based method for spectral unmixing that attempts to determine whether each pixel in a scene is linear or non-linear, and adapts to compute a mixture model at each pixel accordingly. The effort also investigates a kernel-based support vector method for determining spectral endmembers in a scene. Two scenes of hyperspectral imagery calibrated to reflectance are used to validate the methods: a HyMAP scene collected over the Waimanalo Bay region in Oahu, Hawaii, and an AVIRIS scene collected over the oil spill region in the Gulf of Mexico during the Deepwater Horizon oil incident.
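The linear mixing baseline that the kernel-based method generalizes is easy to state: a pixel spectrum is a convex combination of endmember spectra. A minimal sketch for the two-endmember case, with a closed-form least-squares abundance clipped to the physical range (the paper's adaptive, possibly non-linear model is more general; the spectra below are invented):

```python
def unmix_two_endmembers(pixel, e1, e2):
    """Least-squares abundance a in pixel ≈ a*e1 + (1-a)*e2, clipped to [0, 1].
    Derived by projecting (pixel - e2) onto the direction (e1 - e2)."""
    num = sum((p - b) * (a - b) for p, a, b in zip(pixel, e1, e2))
    den = sum((a - b) ** 2 for a, b in zip(e1, e2))
    return min(1.0, max(0.0, num / den))
```

With more endmembers the same idea becomes a constrained least-squares problem (non-negativity and sum-to-one), typically solved per pixel with a quadratic programming or NNLS routine.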
Methods and electrolytes for electrodeposition of smooth films
Zhang, Jiguang; Xu, Wu; Graff, Gordon L; Chen, Xilin; Ding, Fei; Shao, Yuyan
2015-03-17
Electrodeposition involving an electrolyte having a surface-smoothing additive can result in self-healing, instead of self-amplification, of initial protuberant tips that give rise to roughness and/or dendrite formation on the substrate and/or film surface. For electrodeposition of a first conductive material (C1) on a substrate from one or more reactants in an electrolyte solution, the electrolyte solution is characterized by a surface-smoothing additive containing cations of a second conductive material (C2), wherein cations of C2 have an effective electrochemical reduction potential in the solution lower than that of the reactants.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning (KCL) has been successfully used to achieve robust clustering. However, KCL is not scalable to large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which performs kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis of why the proposed approximation works for kernel competitive learning, and furthermore we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-paralleled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. Empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance, in terms of clustering precision, than related approximate clustering approaches. PMID:25528318
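The "competitive" step at the heart of such methods is a winner-take-all assignment in the kernel-induced feature space. When prototypes are restricted to data points, the feature-space distance follows from the kernel trick alone, with no explicit feature map. A minimal sketch (illustrative only, not the authors' AKCL/PAKCL procedure; kernel width and data are arbitrary):

```python
import math

def rbf(x, z, gamma=2.0):
    return math.exp(-gamma * (x - z) ** 2)

def kernel_distance_sq(x, c, gamma=2.0):
    # squared feature-space distance ||phi(x) - phi(c)||^2 via the kernel trick
    return rbf(x, x, gamma) - 2.0 * rbf(x, c, gamma) + rbf(c, c, gamma)

def assign(points, prototypes, gamma=2.0):
    """Winner-take-all assignment step of kernel competitive learning:
    each point is claimed by its nearest prototype in feature space."""
    return [min(range(len(prototypes)),
                key=lambda j: kernel_distance_sq(x, prototypes[j], gamma))
            for x in points]

labels = assign([0.0, 0.1, 1.0, 1.1], [0.05, 1.05])
```

The approximation in AKCL amounts to evaluating such distances against a sampled subset rather than the full kernel matrix.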
Yang, Shanshan; Cai, Suxian; Zheng, Fang; Wu, Yunfeng; Liu, Kaizhi; Wu, Meihong; Zou, Quan; Chen, Jian
2014-10-01
This article applies advanced signal processing and computational methods to study the subtle fluctuations in knee joint vibroarthrographic (VAG) signals. Two new features are extracted to characterize the fluctuations of VAG signals. The fractal scaling index parameter is computed using the detrended fluctuation analysis algorithm to describe the fluctuations associated with intrinsic correlations in the VAG signal. The averaged envelope amplitude feature measures the difference between the upper and lower envelopes averaged over an entire VAG signal. Statistical analysis with the Kolmogorov-Smirnov test indicates that both the fractal scaling index (p=0.0001) and the averaged envelope amplitude (p=0.0001) features are significantly different between the normal and pathological signal groups. Bivariate Gaussian kernels are utilized for modeling the densities of normal and pathological signals in the two-dimensional feature space. Based on the estimated feature densities, the Bayesian decision rule makes better signal classifications than the least-squares support vector machine, with an overall classification accuracy of 88% and an area of 0.957 under the receiver operating characteristic (ROC) curve. These VAG signal classification results are better than those reported in the state-of-the-art literature. The fluctuation features of VAG signals developed in the present study can provide useful information on the pathological conditions of degenerative knee joints. The classification results demonstrate the effectiveness of the kernel feature density modeling method for computer-aided VAG signal analysis. PMID:25096412
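The kernel-density-plus-Bayes-rule classifier described above can be sketched in miniature: estimate one 2-D Gaussian kernel density per class and label a point by the larger density (equal priors). This toy uses a product Gaussian kernel with an arbitrary bandwidth and made-up feature values, not the VAG data or any tuned bandwidth from the study:

```python
import math

def kde(point, samples, h=0.5):
    """2-D Gaussian kernel density estimate at `point` (product kernel, bandwidth h)."""
    s = sum(math.exp(-((point[0] - sx) ** 2 + (point[1] - sy) ** 2) / (2.0 * h * h))
            for sx, sy in samples)
    return s / (len(samples) * 2.0 * math.pi * h * h)

def classify(point, normal, pathological, h=0.5):
    # Bayes decision rule with equal priors: pick the class with higher density
    return "normal" if kde(point, normal, h) >= kde(point, pathological, h) \
        else "pathological"

normal = [(0.0, 0.0), (0.2, 0.1), (-0.1, 0.2)]
path = [(2.0, 2.0), (2.2, 1.9), (1.9, 2.1)]
label = classify((0.1, 0.1), normal, path)
```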
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.
Novosad, Philip; Reader, Andrew J
2016-06-21
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. 
Furthermore, we demonstrate that a joint spectral/kernel
Impact of beam smoothing method on direct drive target performance for the NIF
Rothenberg, J.E.; Weber, S.V.
1997-01-01
The impact of smoothing method on the performance of a direct drive target is modeled and examined in terms of its l-mode spectrum. In particular, two classes of smoothing methods are compared, smoothing by spectral dispersion (SSD) and the induced spatial incoherence (ISI) method. It is found that SSD using sinusoidal phase modulation (FM) results in poor smoothing at low l-modes and therefore inferior target performance at both peak velocity and ignition. This disparity is most notable if the effective imprinting integration time of the target is small. However, using SSD with more generalized phase modulation can result in smoothing at low l-modes which is identical to that obtained with ISI. For either smoothing method, the calculations indicate that at peak velocity the surface perturbations are about 100 times larger than that which leads to nonlinear hydrodynamics. Modeling of the hydrodynamic nonlinearity shows that saturation can reduce the amplified nonuniformities to the level required to achieve ignition for either smoothing method. The low l-mode behavior at ignition is found to be strongly dependent on the induced divergence of the smoothing method. For the NIF parameters the target performance asymptotes for smoothing divergence larger than approximately 100 μrad.
NASA Astrophysics Data System (ADS)
Zhang, Z. Q.; Zhou, J. X.; Wang, X. M.; Zhang, Y. F.; Zhang, L.
2004-09-01
This work introduces a numerical integration technique based on partition of unity (PU) into the reproducing kernel particle method (RKPM) and presents an implementation of the visibility criterion for meshfree methods. Based on the theory of PU and the inherent features of Gaussian quadrature, the convergence properties of PU integration are studied. Practical approaches to implementing PU integration are presented for different strategies, and a method for carrying out the visibility criterion is presented to handle problems with complex domains. Furthermore, numerical examples are given for h-version and p-like version convergence studies of PU integration and for the validity of the visibility criterion. The results demonstrate that PU integration is a feasible and effective numerical integration technique, and that RKPM enriched by PU integration and the visibility criterion offers greater efficiency, versatility and performance.
Multi-feature-based robust face detection and coarse alignment method via multiple kernel learning
NASA Astrophysics Data System (ADS)
Sun, Bo; Zhang, Di; He, Jun; Yu, Lejun; Wu, Xuewen
2015-10-01
Face detection and alignment are two crucial tasks in face recognition, a hot topic in the field of defense and security, whether for public safety and personal property or for information and communication security. Common approaches to these tasks in recent years fall into three types: template matching-based, knowledge-based and machine learning-based, which tend to be multi-step, computationally expensive, or lacking in robustness. After deep analysis of a large set of Chinese face images without hats, we propose a novel face detection and coarse alignment method inspired by all three types of methods. It fuses multiple features using the Simple Multiple Kernel Learning (SimpleMKL) algorithm. The proposed method is compared with competitive and related algorithms, and is demonstrated to achieve promising results.
Methods for Smoothing Expectancy Tables Applied to the Prediction of Success in College
ERIC Educational Resources Information Center
Perrin, David W.; Whitney, Douglas R.
1976-01-01
The gains in accuracy resulting from applying any of the smoothing methods appear sufficient to justify the suggestion that all expectancy tables used by colleges for admission, guidance, or planning purposes should be smoothed. These methods, on average, reduce the criterion measure (an index of inaccuracy) by 30 percent. (Author/MV)
Suppression of stochastic pulsation in laser-plasma interaction by smoothing methods
NASA Astrophysics Data System (ADS)
Hora, Heinrich; Aydin, Meral
1992-04-01
The control of the very complex behavior of a plasma with laser interaction by smoothing with induced spatial incoherence or other methods was related to improving the lateral uniformity of the irradiation. While this is important, it is shown from numerical hydrodynamic studies that the very strong temporal pulsation (stuttering) will mostly be suppressed by these smoothing methods too.
ERIC Educational Resources Information Center
Grant, Mary C.; Zhang, Lilly; Damiano, Michele
2009-01-01
This study investigated kernel equating methods by comparing these methods to operational equatings for two tests in the SAT Subject Tests[TM] program. GENASYS (ETS, 2007) was used for all equating methods and scaled score kernel equating results were compared to Tucker, Levine observed score, chained linear, and chained equipercentile equating…
NASA Astrophysics Data System (ADS)
Li, Heng; Mohan, Radhe; Zhu, X. Ronald
2008-12-01
The clinical applications of kilovoltage x-ray cone-beam computed tomography (CBCT) have been compromised by the limited quality of CBCT images, which typically is due to a substantial scatter component in the projection data. In this paper, we describe an experimental method of deriving the scatter kernel of a CBCT imaging system. The estimated scatter kernel can be used to remove the scatter component from the CBCT projection images, thus improving the quality of the reconstructed image. The scattered radiation was approximated as depth-dependent, pencil-beam kernels, which were derived using an edge-spread function (ESF) method. The ESF geometry was achieved with a half-beam block created by a 3 mm thick lead sheet placed on a stack of slab solid-water phantoms. Measurements for ten water-equivalent thicknesses (WET) ranging from 0 cm to 41 cm were taken with (half-blocked) and without (unblocked) the lead sheet, and corresponding pencil-beam scatter kernels or point-spread functions (PSFs) were then derived without assuming any empirical trial function. The derived scatter kernels were verified with phantom studies. Scatter correction was then incorporated into the reconstruction process to improve image quality. For a 32 cm diameter cylinder phantom, the flatness of the reconstructed image was improved from 22% to 5%. When the method was applied to CBCT images for patients undergoing image-guided therapy of the pelvis and lung, the variation in selected regions of interest (ROIs) was reduced from >300 HU to <100 HU. We conclude that the scatter reduction technique utilizing the scatter kernel effectively suppresses the artifact caused by scatter in CBCT.
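The edge-spread-function step can be illustrated in miniature: a 1-D slice of the pencil-beam kernel (the line-spread function) is the derivative of the measured edge profile. The profile values below are invented for illustration; the paper's depth-dependent 2-D kernels and normalization are more involved.

```python
def lsf_from_esf(esf):
    """Line-spread function as the discrete derivative of an edge-spread profile.
    For a profile rising from 0 (fully blocked) to 1 (unblocked), the LSF sums to 1."""
    return [esf[i + 1] - esf[i] for i in range(len(esf) - 1)]

# Hypothetical half-beam-block edge profile sampled across the penumbra
esf = [0.0, 0.0, 0.1, 0.5, 0.9, 1.0, 1.0]
lsf = lsf_from_esf(esf)
```

In practice the blocked and unblocked measurements are differenced first to isolate the scatter component, and the resulting kernels are tabulated per water-equivalent thickness.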
A kernel-based method for markerless tumor tracking in kV fluoroscopic images
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyong; Homma, Noriyasu; Ichiji, Kei; Abe, Makoto; Sugita, Norihiro; Takai, Yoshihiro; Narita, Yuichiro; Yoshizawa, Makoto
2014-09-01
Markerless tracking of respiration-induced tumor motion in kilo-voltage (kV) fluoroscopic image sequences is still a challenging task in real-time image-guided radiation therapy (IGRT). Most existing markerless tracking methods are based on a template matching technique, or extensions of it, that are frequently sensitive to non-rigid tumor deformation and involve expensive computation. This paper presents a kernel-based method that is capable of tracking tumor motion in kV fluoroscopic image sequences with robust performance and low computational cost. The proposed tracking system consists of three steps. First, to enhance the contrast of the kV fluoroscopic images, we use histogram equalization to transform the intensities of the original images to a wider dynamic intensity range. The tumor target in the first frame is then represented by a histogram-based feature vector. Subsequently, target tracking is formulated as maximizing a Bhattacharyya coefficient that measures the similarity between the tumor target and its candidates in the subsequent frames. The numerical solution for maximizing the Bhattacharyya coefficient is obtained by a mean-shift algorithm. The proposed method was evaluated using four clinical kV fluoroscopic image sequences. For comparison, we also implemented four conventional template matching-based methods and compared their performance with our proposed method in terms of tracking accuracy and computational cost. Experimental results demonstrated that the proposed method is superior to conventional template matching-based methods.
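The similarity measure named in the abstract, the Bhattacharyya coefficient between two normalized histograms, is a one-liner; mean-shift then climbs this score over candidate target positions. A minimal sketch (the histograms below are toy values, not image data):

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms.
    1.0 means identical distributions; smaller values mean less overlap."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

p = [0.25, 0.25, 0.25, 0.25]
q = [0.25, 0.25, 0.25, 0.25]
r = [1.0, 0.0, 0.0, 0.0]
same = bhattacharyya(p, q)
diff = bhattacharyya(p, r)
```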
A fast object-oriented Matlab implementation of the Reproducing Kernel Particle Method
NASA Astrophysics Data System (ADS)
Barbieri, Ettore; Meo, Michele
2012-05-01
Novel numerical methods, known as Meshless Methods or Meshfree Methods and, in a wider perspective, Partition of Unity Methods, promise to overcome most of the disadvantages of traditional finite element techniques. The absence of a mesh makes meshfree methods very attractive for problems involving large deformations, moving boundaries and crack propagation. However, meshfree methods still have a significant limitation that prevents their acceptance among researchers and engineers, namely their computational cost. This paper presents an in-depth analysis of computational techniques to speed up the computation of the shape functions in the Reproducing Kernel Particle Method and Moving Least Squares, with particular focus on their bottlenecks: the neighbour search, the inversion of the moment matrix and the assembly of the stiffness matrix. The paper presents numerous computational solutions aimed at a considerable reduction of the computational times: the use of kd-trees for the neighbour search, sparse indexing of the nodes-points connectivity and, most importantly, the explicit and vectorized inversion of the moment matrix without using loops and numerical routines.
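The neighbour-search bottleneck comes from the naive all-pairs scan. The paper uses kd-trees; the sketch below shows a uniform-grid bucket search, a simpler alternative with the same goal, which works well in meshfree codes when the kernel support radius h is roughly uniform (the points and radius are arbitrary illustrations):

```python
def build_grid(points, h):
    """Hash 2-D points into square cells of side h (the kernel support radius)."""
    grid = {}
    for idx, (x, y) in enumerate(points):
        grid.setdefault((int(x // h), int(y // h)), []).append(idx)
    return grid

def neighbours(grid, points, q, h):
    """Indices of points within distance h of query q, checking only the
    3x3 block of cells around q instead of every point."""
    cx, cy = int(q[0] // h), int(q[1] // h)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for idx in grid.get((cx + dx, cy + dy), []):
                px, py = points[idx]
                if (px - q[0]) ** 2 + (py - q[1]) ** 2 <= h * h:
                    out.append(idx)
    return sorted(out)

pts = [(0.0, 0.0), (0.4, 0.0), (3.0, 3.0)]
g = build_grid(pts, 1.0)
near = neighbours(g, pts, (0.1, 0.1), 1.0)
```

Build is O(n) and each query touches only nearby buckets, which is the same asymptotic win a kd-tree provides for quasi-uniform point clouds.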
Alternative methods to smooth the Earth's gravity field
NASA Technical Reports Server (NTRS)
Jekeli, C.
1981-01-01
Convolutions on the sphere with corresponding convolution theorems are developed for one and two dimensional functions. Some of these results are used in a study of isotropic smoothing operators or filters. Well known filters in Fourier spectral analysis, such as the rectangular, Gaussian, and Hanning filters, are adapted for data on a sphere. The low-pass filter most often used on gravity data is the rectangular (or Pellinen) filter. However, its spectrum has relatively large sidelobes; therefore, this filter passes a considerable part of the upper end of the gravity spectrum. The spherical adaptations of the Gaussian and Hanning filters are more efficient in suppressing the high-frequency components of the gravity field since their frequency response functions are strongly tapered at the high frequencies with no, or small, sidelobes. Formulas are given for practical implementation of these new filters.
Prediction of posttranslational modification sites from amino acid sequences with kernel methods.
Xu, Yan; Wang, Xiaobo; Wang, Yongcui; Tian, Yingjie; Shao, Xiaojian; Wu, Ling-Yun; Deng, Naiyang
2014-03-01
Post-translational modification (PTM) is the chemical modification of a protein after its translation and one of the later steps in protein biosynthesis for many proteins. It plays an important role in modifying the end product of gene expression and contributes to biological processes and disease conditions. However, experimental methods for identifying PTM sites are both costly and time-consuming; hence computational methods are highly desired. In this work, a novel encoding method, PSPM (position-specific propensity matrices), is developed. A support vector machine (SVM) with the kernel matrix computed from PSPM is then applied to predict PTM sites. The experimental results indicate that the performance of the new method is better than or comparable with that of existing methods. Therefore, the new method is a useful computational resource for the identification of PTM sites. A unified standalone software package, PTMPred, is developed. It can be used to predict all types of PTM sites if the user provides the training datasets. The software can be freely downloaded from http://www.aporc.org/doc/wiki/PTMPred. PMID:24291233
A numerical study of the Regge calculus and smooth lattice methods on a Kasner cosmology
NASA Astrophysics Data System (ADS)
Brewin, Leo
2015-10-01
Two lattice based methods for numerical relativity, the Regge calculus and the smooth lattice relativity, will be compared with respect to accuracy and computational speed in a full 3+1 evolution of initial data representing a standard Kasner cosmology. It will be shown that both methods provide convergent approximations to the exact Kasner cosmology. It will also be shown that the Regge calculus is of the order of 110 times slower than the smooth lattice method.
Smoothing methods comparison for CMB E- and B-mode separation
NASA Astrophysics Data System (ADS)
Wang, Yi-Fan; Wang, Kai; Zhao, Wen
2016-04-01
The anisotropies of the B-mode polarization in the cosmic microwave background radiation play a crucial role in the study of the very early Universe. However, in real observations, a mixture of the E-mode and B-mode can be caused by partial sky surveys, which must be separated before being applied to a cosmological explanation. The separation method developed by Smith (2006) has been widely adopted, where the edge of the top-hat mask should be smoothed to avoid numerical errors. In this paper, we compare three different smoothing methods and investigate leakage residuals of the E-B mixture. We find that, if less information loss is needed and a smaller region is smoothed in the analysis, the sin- and cos-smoothing methods are better. However, if we need a cleanly constructed B-mode map, the larger region around the mask edge should be smoothed. In this case, the Gaussian-smoothing method becomes much better. In addition, we find that the leakage caused by numerical errors in the Gaussian-smoothing method is mostly concentrated in two bands, which is quite easy to reduce for further E-B separations.
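The three mask-edge smoothings compared above are all tapers that take the top-hat mask from 1 inside the observed region to 0 outside, over a transition band around the edge. A minimal 1-D sketch of sin-, cos- and Gaussian-style tapers follows; the exact functional forms and normalizations in the paper may differ, and the edge position and width here are arbitrary:

```python
import math

def apodize(x, edge, width, kind="cos"):
    """Taper a top-hat mask near its edge: 1 well inside, 0 outside,
    smooth in the transition band [edge, edge + width]."""
    if x <= edge:
        return 1.0
    if x >= edge + width:
        return 0.0
    t = (x - edge) / width  # 0 at the inner edge, 1 at the outer edge
    if kind == "cos":
        return 0.5 * (1.0 + math.cos(math.pi * t))
    if kind == "sin":
        return 1.0 - math.sin(0.5 * math.pi * t)
    # Gaussian-style taper (truncated; not exactly zero at t=1 before the cutoff)
    return math.exp(-0.5 * (3.0 * t) ** 2)

vals = [apodize(x, 1.0, 1.0, "cos") for x in (0.5, 1.5, 2.5)]
```

The trade-off discussed in the abstract is visible here: a wider `width` loses more sky but gives a gentler gradient at the edge, which is what suppresses the numerical E-B leakage.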
Technology Transfer Automated Retrieval System (TEKTRAN)
INTRODUCTION: Aromatic rice, or fragrant rice (Oryza sativa L.), has a strong popcorn-like aroma due to the presence of a five-membered N-heterocyclic ring compound known as 2-acetyl-1-pyrroline (2-AP). To date, existing methods for detecting this compound in rice require the use of several kernels. ...
Calculates Thermal Neutron Scattering Kernel.
Energy Science and Technology Software Center (ESTSC)
1989-11-10
Version 00 THRUSH computes the thermal neutron scattering kernel by the phonon expansion method for both coherent and incoherent scattering processes. The calculation of the coherent part is suitable only for calculating the scattering kernel for heavy water.
Rotating vector methods for smooth torque control of a switched reluctance motor drive
Nagel, N.J.; Lorenz, R.D.
2000-04-01
This paper makes two primary contributions to switched reluctance motor (SRM) control: a systematic approach to smooth torque production and a high-performance technique for sensorless motion control. The systematic approach to smooth torque production is based on the development of novel rotating spatial vector methods that can be used to predict the torque produced in an arbitrary SRM. This analysis leads directly to explicit, insightful methods for smooth torque control of SRMs. The high-performance technique for sensorless motion control is based on a rotating vector method for high-bandwidth, high-resolution position and velocity estimation suitable for both precise torque and motion control. The sensorless control and smooth torque control methods are both verified experimentally.
Volcano clustering determination: Bivariate Gauss vs. Fisher kernels
NASA Astrophysics Data System (ADS)
Cañón-Tapia, Edgardo
2013-05-01
Underlying many studies of volcano clustering is the implicit assumption that vent distribution can be studied using kernels originally devised for distributions on plane surfaces. Nevertheless, an important change in topology in the volcanic context is related to the distortion introduced when features found on the surface of a sphere are projected onto a plane. This work explores the extent to which different topologies of the kernel used to study the spatial distribution of vents can introduce significant changes in the obtained density functions. To this end, a planar (Gauss) and a spherical (Fisher) kernel are compared. The role of the smoothing factor in these two kernels is also explored in some detail. The results indicate that the topology of the kernel is not extremely influential, and that either type of kernel can be used to characterize a planar or a spherical distribution with exactly the same detail (provided that a suitable smoothing factor is selected in each case). It is also shown that there is a limitation on the resolution of the Fisher kernel relative to the typical separation between data that can be accurately described, because data sets with separations lower than 500 km are considered a single cluster using this method. In contrast, the Gauss kernel can provide adequate resolution for vent distributions over a wider range of separations. In addition, this study shows that the numerical value of the smoothing factor (or bandwidth) of both the Gauss and Fisher kernels has no unique or direct relationship with the relevant separation among data. In order to establish the relevant distance, it is necessary to take into consideration the value of the respective smoothing factor together with a level of statistical significance at which the contributions to the probability density function will be analyzed. Based on such a reference level, it is possible to create a hierarchy of
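A common choice for a spherical (Fisher) kernel is the von Mises-Fisher density, where the concentration parameter plays the role of the smoothing factor. A minimal sketch, assuming this standard form rather than the paper's exact parameterization, with a numerical check that the kernel integrates to 1 over the sphere:

```python
import math

def fisher_kernel(cos_theta, kappa):
    """von Mises-Fisher (Fisher) kernel on the unit sphere, normalized so its
    integral over the sphere is 1; larger kappa means a narrower kernel."""
    return kappa / (4.0 * math.pi * math.sinh(kappa)) * math.exp(kappa * cos_theta)

def density(point, vents, kappa):
    """Kernel density estimate at unit vector `point` from unit-vector vent
    locations: the average of one Fisher kernel per vent."""
    dots = [sum(p * v for p, v in zip(point, vent)) for vent in vents]
    return sum(fisher_kernel(d, kappa) for d in dots) / len(vents)

# Midpoint-rule check of the normalization: integrate over cos(theta) in [-1, 1];
# the azimuthal integral contributes a factor of 2*pi.
n = 2000
integral = 2.0 * math.pi * sum(
    fisher_kernel(-1.0 + (i + 0.5) * (2.0 / n), 4.0) * (2.0 / n) for i in range(n))
```

The Gauss kernel analogue replaces `cos_theta` with a planar squared distance, which is exactly the projection-induced distortion the abstract discusses.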
Mizutani, Shohei; Takada, Yoshihisa; Kohno, Ryosuke; Hotta, Kenji; Tansho, Ryohei; Akimoto, Tetsuo
2016-01-01
Full Monte Carlo (FMC) calculation of dose distributions has been recognized to have superior accuracy compared with the pencil beam algorithm (PBA). However, since FMC methods require long calculation times, it is difficult to apply them to routine treatment planning at present. In order to improve the situation, a simplified Monte Carlo (SMC) method has been introduced into the dose kernel calculation applicable to the dose optimization procedure for proton pencil beam scanning. We have evaluated the accuracy of the SMC calculation by comparing a result of the dose kernel calculation using the SMC method with that using the FMC method in an inhomogeneous phantom. The dose distribution obtained by the SMC method was in good agreement with that obtained by the FMC method. To assess the usefulness of SMC calculation in clinical situations, we have compared results of the dose calculation using the SMC method with those using the PBA method for three clinical cases of tumor treatment. The dose distributions calculated with the PBA dose kernels appear to be homogeneous in the planning target volumes (PTVs). In practice, the dose distributions calculated with the SMC dose kernels with the spot weights optimized with the PBA method show largely inhomogeneous dose distributions in the PTVs, while those with the spot weights optimized with the SMC method have moderately homogeneous distributions in the PTVs. Calculation using the SMC method is faster than that using GEANT4 by three orders of magnitude. In addition, the graphics processing unit (GPU) boosts the calculation speed by 13 times for treatment planning using the SMC method. Hence, the SMC method will be applicable to routine clinical treatment planning for reproduction of complex dose distributions more accurately than the PBA method, in a reasonably short time, by use of the GPU-based calculation engine. PMID:27074456
Lin, Wan-Yu; Yi, Nengjun; Lou, Xiang-Yang; Zhi, Degui; Zhang, Kui; Gao, Guimin; Tiwari, Hemant K; Liu, Nianjun
2013-09-01
For most complex diseases, the fraction of heritability that can be explained by the variants discovered from genome-wide association studies is minor. Although the so-called "rare variants" (minor allele frequency [MAF] < 1%) have attracted increasing attention, they are unlikely to account for much of the "missing heritability" because very few people may carry these rare variants. The genetic variants that are likely to fill in the "missing heritability" include uncommon causal variants (MAF < 5%), which are generally untyped in association studies using tagging single-nucleotide polymorphisms (SNPs) or commercial SNP arrays. Developing powerful statistical methods can help to identify chromosomal regions harboring uncommon causal variants, while bypassing the genome-wide or exome-wide next-generation sequencing. In this work, we propose a haplotype kernel association test (HKAT) that is equivalent to testing the variance component of random effects for distinct haplotypes. With an appropriate weighting scheme given to haplotypes, we can further enhance the ability of HKAT to detect uncommon causal variants. With scenarios simulated according to the population genetics theory, HKAT is shown to be a powerful method for detecting chromosomal regions harboring uncommon causal variants. PMID:23740760
A new adaptive exponential smoothing method for non-stationary time series with level shifts
NASA Astrophysics Data System (ADS)
Monfared, Mohammad Ali Saniee; Ghandali, Razieh; Esmaeili, Maryam
2014-07-01
Simple exponential smoothing (SES) methods are among the most commonly used methods in forecasting and time series analysis. However, they are generally insensitive to non-stationary structural events such as level shifts, ramp shifts, and spikes or impulses. As with outliers in stationary time series, these non-stationary events lead to an increased level of error in the forecasting process. This paper generalizes the SES method into a new adaptive method called revised simple exponential smoothing (RSES), as an alternative method to recognize non-stationary level shifts in the time series. We show that the new method improves the accuracy of the forecasting process. This is done by controlling the number of observations and the smoothing parameter in an adaptive approach, in accordance with the laws of statistical control limits and the Bayes rule of conditioning. We use a numerical example to show how the new RSES method outperforms its traditional counterpart, SES.
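The details of RSES are not given in the abstract; the following is a minimal sketch of plain SES next to a crude adaptive variant that resets the level when the one-step error breaches a control limit. The k-sigma rule, window size, and function names are illustrative assumptions, not the authors' RSES.

```python
import statistics

def ses_forecast(series, alpha=0.3):
    """Classic simple exponential smoothing: sequence of smoothed levels."""
    level = series[0]
    levels = [level]
    for y in series[1:]:
        level += alpha * (y - level)   # level_t = alpha*y_t + (1-alpha)*level_{t-1}
        levels.append(level)
    return levels

def adaptive_ses(series, alpha=0.3, k=3.0, window=10):
    """Crude adaptive variant (an assumption, not the paper's RSES):
    when the one-step error exceeds k standard deviations of recent
    errors, treat it as a level shift and jump straight to the new value."""
    level = series[0]
    levels, errors = [level], []
    for y in series[1:]:
        err = y - level
        sd = statistics.pstdev(errors[-window:]) if len(errors) >= window else None
        if sd is not None and abs(err) > k * max(sd, 1e-9):
            level = y               # shift detected: reset the level
        else:
            level += alpha * err    # ordinary SES update
        errors.append(err)
        levels.append(level)
    return levels
```

On a series with a single level shift, the plain SES level lags the shift geometrically, while the adaptive variant locks onto the new level in one step.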
Bermejo, Guillermo A; Clore, G Marius; Schwieters, Charles D
2012-01-01
Statistical potentials that embody torsion angle probability densities in databases of high-quality X-ray protein structures supplement the incomplete structural information of experimental nuclear magnetic resonance (NMR) datasets. By biasing the conformational search during the course of structure calculation toward highly populated regions in the database, the resulting protein structures display better validation criteria and accuracy. Here, a new statistical torsion angle potential is developed using adaptive kernel density estimation to extract probability densities from a large database of more than 106 quality-filtered amino acid residues. Incorporated into the Xplor-NIH software package, the new implementation clearly outperforms an older potential, widely used in NMR structure elucidation, in that it exhibits simultaneously smoother and sharper energy surfaces, and results in protein structures with improved conformation, nonbonded atomic interactions, and accuracy. PMID:23011872
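Adaptive kernel density estimation, as used here and in the instar-classification work above, can be illustrated with an Abramson-style variable-bandwidth estimator in one dimension. This is a generic sketch, not the Xplor-NIH implementation; the pilot-density/geometric-mean bandwidth scheme is a standard choice and an assumption on my part.

```python
import math

def gaussian_kde(xs, grid, h):
    """Fixed-bandwidth Gaussian KDE evaluated at each grid point."""
    n = len(xs)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return [c * sum(math.exp(-0.5 * ((g - x) / h) ** 2) for x in xs) for g in grid]

def adaptive_kde(xs, grid, h0):
    """Abramson-style adaptive KDE: per-sample bandwidths
    h_i = h0 * (pilot(x_i)/g)^(-1/2), with g the geometric mean of the
    pilot density, so sparse regions get wider kernels and dense
    regions sharper ones."""
    pilot = gaussian_kde(xs, xs, h0)
    g = math.exp(sum(math.log(p) for p in pilot) / len(pilot))
    hs = [h0 * math.sqrt(g / p) for p in pilot]
    n = len(xs)
    c = 1.0 / (n * math.sqrt(2.0 * math.pi))
    return [c * sum(math.exp(-0.5 * ((pt - x) / h) ** 2) / h
                    for x, h in zip(xs, hs)) for pt in grid]
```

Because each kernel is a properly normalized Gaussian, the adaptive estimate still integrates to one over a sufficiently wide grid.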
NASA Astrophysics Data System (ADS)
Jiang, Li; Shi, Tielin; Xuan, Jianping
2012-05-01
Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a major challenge to extract optimal features for improving classification while simultaneously decreasing feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small sample size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. To directly excavate nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability, by combining a traditional manifold learning algorithm with the Fisher criterion. The optimal low-dimensional features are thereby obtained for better classification and finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves fault classification performance and outperforms the other conventional approaches.
ERIC Educational Resources Information Center
Choi, Sae Il
2009-01-01
This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…
Numerical Convergence In Smoothed Particle Hydrodynamics
NASA Astrophysics Data System (ADS)
Zhu, Qirong; Hernquist, Lars; Li, Yuexing
2015-02-01
We study the convergence properties of smoothed particle hydrodynamics (SPH) using numerical tests and simple analytic considerations. Our analysis shows that formal numerical convergence is possible in SPH only in the joint limit N → ∞, h → 0, and Nnb → ∞, where N is the total number of particles, h is the smoothing length, and Nnb is the number of neighbor particles within the smoothing volume used to compute smoothed estimates. Previous work has generally assumed that the conditions N → ∞ and h → 0 are sufficient to achieve convergence, while holding Nnb fixed. We demonstrate that if Nnb is held fixed as the resolution is increased, there will be a residual source of error that does not vanish as N → ∞ and h → 0. Formal numerical convergence in SPH is possible only if Nnb is increased systematically as the resolution is improved. Using analytic arguments, we derive an optimal compromise scaling for Nnb by requiring that this source of error balance that present in the smoothing procedure. For typical choices of the smoothing kernel, we find Nnb ∝ N^0.5. This means that if SPH is to be used as a numerically convergent method, the required computational cost does not scale with particle number as O(N), but rather as O(N^(1+δ)), where δ ≈ 0.5, with a weak dependence on the form of the smoothing kernel.
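The quoted cost scaling follows directly from the neighbour loop: each of the N particles sums over Nnb neighbours per step, so with Nnb ∝ N^0.5 the per-step work grows as N^1.5. A toy check (the constant c is arbitrary):

```python
def sph_step_cost(n, c=1.0, delta=0.5):
    """Kernel-sum operations per timestep when the neighbour number is
    scaled as N_nb = c * N**delta (delta ~ 0.5 for typical kernels)."""
    n_nb = c * n ** delta
    return n * n_nb   # every particle sums over its neighbours

# Doubling N at fixed N_nb doubles the cost; with the convergent
# scaling N_nb ∝ N^0.5 the cost instead grows by 2^1.5 ≈ 2.83.
```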
NASA Technical Reports Server (NTRS)
Desmarais, R. N.; Rowe, W. S.
1984-01-01
For the design of active controls to stabilize flight vehicles, which requires the use of unsteady aerodynamics that are valid for arbitrary complex frequencies, algorithms are derived for evaluating the nonelementary part of the kernel of the integral equation that relates unsteady pressure to downwash. This part of the kernel is separated into an infinite limit integral that is evaluated using Bessel and Struve functions and into a finite limit integral that is expanded in series and integrated termwise in closed form. The developed series expansions gave reliable answers for all complex reduced frequencies and executed faster than exponential approximations for many pressure stations.
Study on preparation method of Zanthoxylum bungeanum seeds kernel oil with zero trans-fatty acids.
Liu, Tong; Yao, Shi-Yong; Yin, Zhong-Yi; Zheng, Xu-Xu; Shen, Yu
2016-04-01
The seed of Zanthoxylum bungeanum (Z. bungeanum) is a by-product of pepper production and is rich in unsaturated fatty acids, cellulose, and protein. The seed oil obtained from the traditional production process, by squeezing or solvent extraction, is of poor quality and cannot be used as edible oil. In this paper, a new preparation method for Z. bungeanum seed kernel oil (ZSKO) was developed by comparing the advantages and disadvantages of alkali saponification-cold squeezing, alkali saponification-solvent extraction, and alkali saponification-supercritical fluid extraction with carbon dioxide (SFE-CO2). The results showed that alkali saponification-cold squeezing was the optimal preparation method for ZSKO, comprising the following steps: the Z. bungeanum seed was pretreated by alkali saponification (addition of 10% NaOH (w/w), solution temperature 80 °C, saponification reaction time 45 min); the pretreated seed was separated by filtering, washed with water, and dried overnight at 50 °C; repeated squeezing was then applied at 60 °C and 15% moisture content until no more oil was generated; finally, the ZSKO was recovered by centrifugation. The produced ZSKO contained more than 90% unsaturated fatty acids, contained no trans-fatty acids, and was verified to be a good edible oil with low acid and peroxide values. It was demonstrated that the alkali saponification-cold squeezing process could be scaled up and applied to industrialized production of ZSKO. PMID:26268620
NASA Technical Reports Server (NTRS)
Lan, C. E.; Lamar, J. E.
1977-01-01
A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.
Evaluating the Gradient of the Thin Wire Kernel
NASA Technical Reports Server (NTRS)
Wilton, Donald R.; Champagne, Nathan J.
2008-01-01
Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.
Bladder Smooth Muscle Strip Contractility as a Method to Evaluate Lower Urinary Tract Pharmacology
Kullmann, F. Aura; Daugherty, Stephanie L.; de Groat, William C.; Birder, Lori A.
2015-01-01
We describe an in vitro method to measure bladder smooth muscle contractility, and its use for investigating physiological and pharmacological properties of the smooth muscle as well as changes induced by pathology. This method provides critical information for understanding bladder function while overcoming major methodological difficulties encountered in in vivo experiments, such as surgical and pharmacological manipulations that affect stability and survival of the preparations, the use of human tissue, and/or the use of expensive chemicals. It also provides a way to investigate the properties of each bladder component (i.e. smooth muscle, mucosa, nerves) in healthy and pathological conditions. The urinary bladder is removed from an anesthetized animal, placed in Krebs solution and cut into strips. Strips are placed into a chamber filled with warm Krebs solution. One end is attached to an isometric tension transducer to measure contraction force, the other end is attached to a fixed rod. Tissue is stimulated by directly adding compounds to the bath or by electric field stimulation electrodes that activate nerves, similar to triggering bladder contractions in vivo. We demonstrate the use of this method to evaluate spontaneous smooth muscle contractility during development and after an experimental spinal cord injury, the nature of neurotransmission (transmitters and receptors involved), factors involved in modulation of smooth muscle activity, the role of individual bladder components, and species and organ differences in response to pharmacological agents. Additionally, it could be used for investigating intracellular pathways involved in contraction and/or relaxation of the smooth muscle, drug structure-activity relationships and evaluation of transmitter release. The in vitro smooth muscle contractility method has been used extensively for over 50 years, and has provided data that significantly contributed to our understanding of bladder function as well as to
NASA Astrophysics Data System (ADS)
Lala, P.; Thao, Bui Van
1986-11-01
The first step in the treatment of satellite laser ranging data is smoothing and the rejection of incorrect points. The proposed method compares observations with ephemerides and iteratively matches the corresponding parameters. The method of solution and a program for a minicomputer are described. Examples of results for the satellite Starlette are given.
A Meshfree Cell-based Smoothed Point Interpolation Method for Solid Mechanics Problems
NASA Astrophysics Data System (ADS)
Zhang, Guiyong; Liu, Gui-Rong
2010-05-01
In the framework of a weakened weak (W2) formulation using a generalized gradient smoothing operation, this paper introduces a novel meshfree cell-based smoothed point interpolation method (CS-PIM) for solid mechanics problems. The W2 formulation seeks solutions from a normed G space which includes both continuous and discontinuous functions and allows the use of many more types of methods to create shape functions for numerical methods [1]. When PIM shape functions are used, the functions constructed are in general not continuous over the entire problem domain and hence are not compatible. Such an interpolation is not in a traditional H1 space, but in a G1 space. By introducing the generalized gradient smoothing operation properly, the requirement on the functions is further weakened beyond the already weakened requirement for functions in an H1 space, and the G1 space can be viewed as a space of functions with a weakened weak (W2) requirement on continuity [1-3]. The cell-based smoothed point interpolation method (CS-PIM) is formulated based on the W2 formulation, in which the displacement field is approximated using the PIM shape functions, which possess the Kronecker delta property, facilitating the enforcement of essential boundary conditions [3]. The gradient (strain) field is constructed by the generalized gradient smoothing operation within the cell-based smoothing domains, which are exactly the triangular background cells. A W2 formulation of the generalized smoothed Galerkin (GS-Galerkin) weak form is used to derive the discretized system equations [2]. It was found that the CS-PIM possesses the following attractive properties: (1) it is very easy to implement and works well with the simplest linear triangular mesh without introducing additional degrees of freedom; (2) it is at least linearly conforming; (3) the method is temporally stable and works well for dynamic analysis; (4) it possesses a close-to-exact stiffness, which is much softer than the overly-stiff FEM model
Tests of smoothing methods for topological study of galaxy redshift surveys
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Dominik, Kurt G.
1993-01-01
Studying the topology of large-scale structure as a way to better understand initial conditions has become more widespread in recent years. Studying the topology of simulations (which have periodic boundary conditions) in redshift space produces results compatible with the real topological characteristics of the simulation. Thus we expect we can extract useful information from redshift surveys. However, with nonperiodic boundary conditions, the use of smoothing must result in the loss of information at survey boundaries. In this paper, we test different methods of smoothing samples with nonperiodic boundary conditions to see which most efficiently preserves the topological features of the real distribution. We find that a smoothing method which (unlike most previously published analyses) sums only over cells inside the survey volume produces the best results among the schemes tested.
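The boundary scheme the authors favour, summing only over cells inside the survey volume, amounts to renormalising the kernel by the weight that actually falls inside, rather than implicitly padding with zeros. A 1-D sketch (the 3-cell kernel, field, and mask are placeholders):

```python
def masked_smooth(field, inside, weights):
    """1-D kernel smoothing that sums only over cells inside the survey
    volume, renormalising by the kernel weight that actually falls
    inside (instead of implicitly padding with zeros at the boundary)."""
    n, half = len(field), len(weights) // 2
    out = []
    for i in range(n):
        num = den = 0.0
        for j, w in enumerate(weights):
            idx = i + j - half
            if 0 <= idx < n and inside[idx]:
                num += w * field[idx]
                den += w
        out.append(num / den if den > 0.0 else 0.0)
    return out
```

A quick way to see the point: a constant density field stays exactly constant right up to the survey edge under this scheme, whereas zero-padded smoothing would spuriously depress it there.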
Full Waveform Inversion Using Waveform Sensitivity Kernels
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
2013-04-01
We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory; for unit material perturbations these are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995), in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned); some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be performed without inverting any data. In the context of active seismic experiments, this property may be used to investigate optimal acquisition geometry and expectable resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties compared with strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver
Kernel optimization in discriminant analysis.
You, Di; Hamsici, Onur C; Martinez, Aleix M
2011-03-01
Kernel mapping is one of the most used approaches to intrinsically derive nonlinear classifiers. The idea is to use a kernel function which maps the original nonlinearly separable problem to a space of intrinsically larger dimensionality where the classes are linearly separable. A major problem in the design of kernel methods is to find the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation where the Bayes classifier becomes linear. We illustrate how this result can be successfully applied in several kernel discriminant analysis algorithms. Experimental results, using a large number of databases and classifiers, demonstrate the utility of the proposed approach. The paper also shows (theoretically and experimentally) that a kernel version of Subclass Discriminant Analysis yields the highest recognition rates. PMID:20820072
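The core idea restated in this abstract — a kernel implicitly maps a nonlinearly separable problem into a space where it becomes linear — can be seen on the XOR problem with a kernel perceptron and an RBF kernel. This illustrates kernel mapping in general, not the paper's Bayes-linearity criterion for choosing kernel parameters.

```python
import math

def rbf_kernel(u, v, gamma=1.0):
    """Gaussian RBF kernel: an implicit map to an infinite-dimensional space."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def kernel_perceptron(X, y, gamma=1.0, epochs=20):
    """Dual-form perceptron: uses only kernel evaluations, never an
    explicit feature map."""
    alpha = [0.0] * len(X)
    for _ in range(epochs):
        for i, xi in enumerate(X):
            s = sum(a * yj * rbf_kernel(xj, xi, gamma)
                    for a, yj, xj in zip(alpha, y, X))
            if y[i] * s <= 0:       # misclassified: strengthen this sample
                alpha[i] += 1.0
    return alpha

def predict(alpha, X, y, x, gamma=1.0):
    s = sum(a * yj * rbf_kernel(xj, x, gamma)
            for a, yj, xj in zip(alpha, y, X))
    return 1 if s > 0 else -1

# XOR: not linearly separable in the plane, but linearly separable
# after the implicit RBF mapping.
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
y = [-1, 1, 1, -1]
alpha = kernel_perceptron(X, y)
```

No linear classifier on the raw coordinates can label all four XOR points correctly, yet the kernelized perceptron converges on them in a couple of epochs.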
MC Kernel: Broadband Waveform Sensitivity Kernels for Seismic Tomography
NASA Astrophysics Data System (ADS)
Stähler, Simon C.; van Driel, Martin; Auer, Ludwig; Hosseini, Kasra; Sigloch, Karin; Nissen-Meyer, Tarje
2016-04-01
We present MC Kernel, a software implementation to calculate seismic sensitivity kernels on arbitrary tetrahedral or hexahedral grids across the whole observable seismic frequency band. Seismic sensitivity kernels are the basis for seismic tomography, since they map measurements to model perturbations. Their calculation over the whole frequency range was so far only possible with approximative methods (Dahlen et al. 2000). Fully numerical methods were restricted to the lower frequency range (usually below 0.05 Hz, Tromp et al. 2005). With our implementation, it is possible to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (www.axisem.info), and thus on spherically symmetric models. The advantage is that frequencies up to 0.2 Hz and higher can be accessed. Since the usage of irregular, adapted grids is an integral part of regularisation in seismic tomography, MC Kernel works in an inversion-grid-centred fashion: a Monte Carlo integration method is used to project the kernel onto each basis function, which allows control of the desired precision of the kernel estimation. It also means that the code concentrates calculation effort on regions of interest without prior assumptions about the kernel shape. The code makes extensive use of redundancies in calculating kernels for different receivers or frequency pass-bands for one earthquake, to facilitate its usage in large-scale global seismic tomography.
A simple method for computing the relativistic Compton scattering kernel for radiative transfer
NASA Technical Reports Server (NTRS)
Prasad, M. K.; Kershaw, D. S.; Beason, J. D.
1986-01-01
Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.
Linearized Kernel Dictionary Learning
NASA Astrophysics Data System (ADS)
Golts, Alona; Elad, Michael
2016-06-01
In this paper we present a new approach to incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), which has been introduced recently, shows an improvement in classification performance relative to its linear counterpart K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost, while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first, we approximate the kernel matrix using a cleverly sampled subset of its columns using the Nyström method; secondly, as we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several tasks of both supervised and unsupervised classification and show the efficiency of the proposed scheme, its easy integration and performance-boosting properties.
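The two ingredients combined here — a Nyström column-sample approximation of the kernel matrix and an explicit factorisation into "virtual samples" — can be sketched as follows. This is a generic RBF-kernel version under my own naming and uniform-sampling assumptions, not the authors' LKDL implementation.

```python
import numpy as np

def nystrom_virtual_samples(X, m, gamma, seed=0):
    """Approximate the RBF kernel matrix K ~ C W^{-1} C^T from m sampled
    columns, then factor W = U diag(s) U^T so that the rows of
    F = C U diag(s)^{-1/2} are 'virtual samples' with F F^T ~ K."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=m, replace=False)
    sq = ((X[:, None, :] - X[None, idx, :]) ** 2).sum(-1)
    C = np.exp(-gamma * sq)              # n x m sampled columns of K
    W = C[idx]                           # m x m intersection block
    s, U = np.linalg.eigh(W)             # W is symmetric PSD
    F = C @ U / np.sqrt(np.maximum(s, 1e-12))
    return F                             # feed F to any linear dictionary learner
```

With m = n the reconstruction F F^T recovers K exactly (for an invertible kernel matrix); in practice m ≪ n, trading a small approximation error for linear-method costs.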
Bayesian Kernel Mixtures for Counts
Canale, Antonio; Dunson, David B.
2011-01-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online. PMID:22523437
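The "rounded Gaussian" idea can be made concrete with one common thresholding scheme: a count Y equals 0 when the latent Gaussian X falls below 0, and equals k when k−1 ≤ X < k. The paper's framework allows general thresholds and mixes nonparametrically over the kernel parameters; the fixed thresholds below are an illustrative assumption.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rounded_gaussian_pmf(k, mu, sigma):
    """PMF of a count obtained by rounding a latent N(mu, sigma^2):
    Y = 0 if X < 0, else Y = k when k-1 <= X < k (one common threshold
    choice). Unlike a Poisson, this kernel can be underdispersed:
    variance less than the mean is easily achieved with small sigma."""
    if k == 0:
        return norm_cdf((0.0 - mu) / sigma)
    return norm_cdf((k - mu) / sigma) - norm_cdf((k - 1 - mu) / sigma)
```

Mixing (mu, sigma) over a nonparametric prior then yields the kind of flexible count model the abstract describes; the single-kernel case already shows the underdispersion a Poisson mixture cannot capture.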
An Imbricate Finite Element Method (I-FEM) using full, reduced, and smoothed integration
NASA Astrophysics Data System (ADS)
Cazes, Fabien; Meschke, Günther
2013-11-01
A method to design finite elements that imbricate with each other while being assembled, denoted as imbricate finite element method, is proposed to improve the smoothness and the accuracy of the approximation based upon low order elements. Although these imbricate elements rely on triangular meshes, the approximation stems from the shape functions of bilinear quadrilateral elements. These elements satisfy the standard requirements of the finite element method: continuity, delta function property, and partition of unity. The convergence of the proposed approximation is investigated by means of two numerical benchmark problems comparing three different schemes for the numerical integration including a cell-based smoothed FEM based on a quadratic shape of the elements edges. The method is compared to related existing methods.
A relativistic smoothed particle hydrodynamics method tested with the shock tube
NASA Astrophysics Data System (ADS)
Mann, Patrick J.
1991-12-01
The smoothed particle hydrodynamics method is applied to an ADM 3 + 1 formulation of the equations for relativistic fluid flow. In particular, the one-dimensional shock tube is addressed. Three codes are described. The first is a straightforward extension of classic SPH, while the other two are modifications which allow for time-dependent smoothing lengths. The first of these modifications approximates the internal energy density, while the second approximates the total energy density. Two smoothing forms are tested: an artificial viscosity and the direct method of A.J. Baker [Finite Element Computational Fluid Mechanics (Hemisphere, New York, 1983)]. The results indicate that the classic SPH code with particle-particle based artificial viscosity is reasonably accurate and very consistent. It gives quite sharp edges and flat plateaus, but the velocity plateau is significantly overestimated, and an oscillation can appear in the rarefaction wave. The modified versions with Baker smoothing produce better results for moderate initial conditions, but begin to show spikes when the initial density jump is large. Generally the results are comparable to those of simple finite element and finite difference methods.
Hayashi, Takeshi; Kobayashi, Asako; Tomita, Katsura; Shimizu, Toyohiro
2015-01-01
We developed and evaluated the effectiveness of a new method to detect differences among rice cultivars in their resistance to kernel cracking. The method induces kernel cracking under laboratory controlled condition by moisture absorption to brown rice. The optimal moisture absorption conditions were determined using two japonica cultivars, ‘Nipponbare’ as a cracking-resistant cultivar and ‘Yamahikari’ as a cracking-susceptible cultivar: 12% initial moisture content of the brown rice, a temperature of 25°C, a duration of 5 h, and only a single absorption treatment. We then evaluated the effectiveness of these conditions using 12 japonica cultivars. The proportion of cracked kernels was significantly correlated with the mean 10-day maximum temperature after heading. In addition, the correlation between the proportions of cracked kernels in the 2 years of the study was higher than that for values obtained using the traditional late harvest method. The new moisture absorption method could stably evaluate the resistance to kernel cracking, and will help breeders to develop future cultivars with less cracking of the kernels. PMID:26719740
NASA Astrophysics Data System (ADS)
Danilewicz, Andrzej; Sikora, Zbigniew
2015-02-01
The theoretical basis of the SPH method is presented, including the governing equations, a discussion of the importance of the smoothing function length, contact formulation, boundary treatment, and utilization in hydrocode simulations. An application of SPH to a real case of large penetrations (crater creation) into the soil caused by falling mass in the Dynamic Replacement Method is discussed. The influence of particle spacing on method accuracy is presented, along with an example calculated with the LS-DYNA software. The chronological development of Smooth Particle Hydrodynamics is reviewed. Stability and consistency of the SPH formulation, artificial viscosity, and boundary treatment are discussed. Time integration techniques with stability conditions, SPH+FEM coupling, the constitutive equation, and the equation of state (EOS) are presented as well.
Khandogin, Jana; Gregersen, Brent A; Thiel, Walter; York, Darrin M
2005-05-19
The present paper describes the extension of a recently developed smooth conductor-like screening model for solvation to a d-orbital semiempirical framework (MNDO/d-SCOSMO) with analytic gradients that can be used for geometry optimizations, transition state searches, and molecular dynamics simulations. The methodology is tested on the potential energy surfaces for separating ions and the dissociative phosphoryl transfer mechanism of methyl phosphate. The convergence behavior of the smooth COSMO method with respect to discretization level is examined and the numerical stability of the energy and gradient are compared to that from conventional COSMO calculations. The present method is further tested in applications to energy minimum and transition state geometry optimizations of neutral and charged metaphosphates, phosphates, and phosphoranes that are models for stationary points in transphosphorylation reaction pathways of enzymes and ribozymes. The results indicate that the smooth COSMO method greatly enhances the stability of quantum mechanical geometry optimization and transition state search calculations that would routinely fail with conventional solvation methods. The present MNDO/d-SCOSMO method has considerable computational advantages over hybrid quantum mechanical/molecular mechanical methods with explicit solvation, and represents a potentially useful tool in the arsenal of multi-scale quantum models used to study biochemical reactions. PMID:16852180
A method for smoothing segmented lung boundary in chest CT images
NASA Astrophysics Data System (ADS)
Yim, Yeny; Hong, Helen
2007-03-01
To segment low-density lung regions in chest CT images, most methods use the difference in gray-level value of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using scan line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice. We propose a scan line search to track the points on the lung contour and find rapidly changing curvature efficiently. Finally, to provide consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. We evaluated our method in terms of visual inspection, accuracy, and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.
Testing local anisotropy using the method of smoothed residuals I — methodology
Appleby, Stephen; Shafieloo, Arman E-mail: arman@apctp.org
2014-03-01
We discuss some details regarding the method of smoothed residuals, which has recently been used to search for anisotropic signals in low-redshift distance measurements (Supernovae). In this short note we focus on some details regarding the implementation of the method, particularly the issue of effectively detecting signals in data that are inhomogeneously distributed on the sky. Using simulated data, we argue that the original method proposed in Colin et al. [1] will not detect spurious signals due to incomplete sky coverage, and that introducing additional Gaussian weighting to the statistic as in [2] can hinder its ability to detect a signal. Issues related to the width of the Gaussian smoothing are also discussed.
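The core of such a smoothed-residual statistic can be sketched as a Gaussian-weighted average of residuals over directions on the sky. The dipole residual pattern, smoothing width, and point distribution below are illustrative toy choices, not the exact estimator of Colin et al.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy catalogue: N unit vectors on the sphere carrying a pure dipole residual
# pattern res_i = n_i . d for an assumed dipole d along +z.
N = 2000
v = rng.normal(size=(N, 3))
n = v / np.linalg.norm(v, axis=1, keepdims=True)
res = n[:, 2]  # dipole residuals along z

def smoothed_residual(direction, n, res, delta=0.5):
    """Gaussian smoothing on the sphere: weight residuals by angular distance."""
    cosang = np.clip(n @ direction, -1.0, 1.0)
    theta = np.arccos(cosang)
    w = np.exp(-theta**2 / (2.0 * delta**2))
    return np.sum(w * res) / np.sum(w)

q_north = smoothed_residual(np.array([0.0, 0.0, 1.0]), n, res)
q_south = smoothed_residual(np.array([0.0, 0.0, -1.0]), n, res)
print(q_north, q_south)  # opposite signs: the statistic picks out the dipole axis
```

Scanning `smoothed_residual` over a grid of directions and taking the extremes is the basic anisotropy search; the issues discussed in the paper concern how inhomogeneous sky coverage and the choice of the width `delta` affect this scan.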
A new enzymic method for the isolation and culture of human bladder body smooth muscle cells.
Ma, F -H; Higashira, H; Ukai, Y; Hanai, T; Kiwamoto, H; Park, Y C; Kurita, T
2002-01-01
Cultured cells of the human urinary bladder smooth muscle are useful for investigating bladder function, but methods for culturing them are not well developed. We have now established a novel enzymic technique. The smooth muscle layer was separated out and incubated with 0.2% trypsin for 30 min at 37 degrees C. The samples were then minced and incubated with 0.1% collagenase for 30 min and centrifuged at 900 g. The pellets were resuspended in RPMI-1640 medium containing 10% fetal calf serum (FCS) and centrifuged at 250 g. The smooth muscle cells from the supernatant were cultured in RPMI-1640 containing 10% FCS. The cells grew to confluence after 7-10 days, forming the "hills and valleys" growth pattern characteristic of smooth muscle cells. Immunostaining with anti-alpha-actin, anti-myosin, and anti-caldesmon antibodies demonstrated that 99% of the cells were smooth muscle cells. To investigate the pharmacological properties of the cultured cells, we determined the inhibitory effect of muscarinic receptor antagonists on the binding of [3H]N-methylscopolamine to membranes from cultured cells. The pKi values obtained for six antagonists agreed with the corresponding values for transfected cells expressing the human muscarinic M2 subtype. Furthermore, carbachol produced an increase in the concentration of cytoplasmic free Ca2+, an action that was blocked by 4-diphenylacetoxy-N-methylpiperidine methiodide, an M3 selective antagonist. This result suggests that these cells express functional M3 muscarinic receptors, in addition to M2 receptors. The subcultured cells therefore appear to be unaffected by our new isolation method. PMID:11835427
To the theory of volterra integral equations of the first kind with discontinuous kernels
NASA Astrophysics Data System (ADS)
Apartsin, A. S.
2016-05-01
A nonclassical Volterra linear integral equation of the first kind describing the dynamics of a developing system with allowance for its age structure is considered. The connection of this equation with the classical Volterra linear integral equation of the first kind with a piecewise-smooth kernel is studied. For solving such equations, the quadrature method is applied.
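The quadrature method for a first-kind Volterra equation ∫₀ᵗ K(t,s) u(s) ds = f(t) can be sketched as a midpoint rule solved by forward substitution over a lower-triangular system. The kernel and right-hand side below are a smooth test case with a known solution, not the nonclassical age-structure model of the paper.

```python
import numpy as np

def solve_volterra_first_kind(K, f, T, n):
    """Midpoint-rule quadrature for int_0^t K(t,s) u(s) ds = f(t).

    Unknowns are u at the midpoints s_j = (j + 1/2) h; equations are
    collocated at t_i = (i + 1) h, giving a lower-triangular system
    solved by forward substitution.
    """
    h = T / n
    s = (np.arange(n) + 0.5) * h
    t = (np.arange(n) + 1.0) * h
    u = np.zeros(n)
    for i in range(n):
        acc = sum(K(t[i], s[j]) * u[j] for j in range(i))
        u[i] = (f(t[i]) / h - acc) / K(t[i], s[i])
    return s, u

# Test case with known solution u(s) = 1: K = 1, f(t) = t.
s, u = solve_volterra_first_kind(lambda t, s: 1.0, lambda t: t, T=1.0, n=50)
print(np.max(np.abs(u - 1.0)))  # ~0: the midpoint rule is exact here
```

The forward substitution works only because K(t, t) stays away from zero on the diagonal; discontinuous or degenerate kernels, which are the subject of the paper, require more careful treatment.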
Local Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
2014-01-01
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
Methods for Least Squares Data Smoothing by Adjustment of Divided Differences
NASA Astrophysics Data System (ADS)
Demetriou, I. C.
2008-09-01
A brief survey is presented for the main methods that are used in least squares data smoothing by adjusting the signs of divided differences of the smoothed values. The most distinctive feature of the smoothing approach is that it provides automatically a piecewise monotonic or a piecewise convex/concave fit to the data. The data are measured values of a function of one variable that contain random errors. As a consequence of the errors, the number of sign alterations in the sequence of mth divided differences is usually unacceptably large, where m is a prescribed positive integer. Therefore, we make the least sum of squares change to the measurements by requiring the sequence of the divided differences of order m to have at most k-1 sign changes, for some positive integer k. Although it is a combinatorial problem, whose solution can require about O(nk) quadratic programming calculations in n variables and n-m constraints, where n is the number of data, very efficient algorithms have been developed for the cases when m = 1 or m = 2 and k is arbitrary, as well as when m>2 for small values of k. Attention is paid to the purpose of each method instead of to its details. Some software packages make the methods publicly accessible through library systems.
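The quantity this approach controls, the number of sign changes in the sequence of mth divided differences, is easy to compute directly. For equally spaced data, divided differences reduce to ordinary forward differences up to a constant factor (an assumption made here for simplicity), so the sign pattern is the same:

```python
import numpy as np

def sign_changes(y, m):
    """Count sign alternations in the m-th forward differences of y."""
    d = np.diff(y, n=m)
    signs = np.sign(d[d != 0])  # ignore exact zeros
    return int(np.sum(signs[1:] != signs[:-1]))

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
clean = x**2                  # convex: all second differences positive
noisy = clean + rng.normal(scale=0.01, size=x.size)

print(sign_changes(clean, 2))  # 0: convexity means no sign change in the 2nd differences
print(sign_changes(noisy, 2))  # typically large: noise flips many signs
```

The smoothing methods surveyed above find the least-squares perturbation of `noisy` that drives this count back down to at most k-1.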
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-01-01
In this paper, a novel nonlinear framework of smoothing method, non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student’s t-distribution is adopted in order to compute the probability distribution function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed-delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on the real-world data, which is collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS has significant improvement on the vehicle state accuracy and outperforms the existing filtering and smoothing methods. PMID:27187405
NASA Astrophysics Data System (ADS)
Kang, S.; Suh, Y. K.
2011-02-01
The so-called smoothed profile method, originally suggested by Nakayama and Yamamoto and further improved by Luo et al. in 2005 and 2009, respectively, is an efficient numerical solver for fluid-structure interaction problems, which represents the particles by a certain smoothed profile on a fixed grid and constructs some form of body force added into the momentum (Navier-Stokes) equation by ensuring the rigidity of particles. For numerical simulations, the method first advances the flow and pressure fields by integrating the momentum equation except the body-force (momentum impulse) term in time and next updates them by separately taking temporal integration of the body-force term, thus requiring one more Poisson-equation solver for the extra pressure field due to the rigidity of particles to ensure the divergence-free constraint of the total velocity field. In the present study, we propose a simplified version of the smoothed profile method or the one-stage method, which combines the two stages of velocity update (temporal integration) into one to eliminate the necessity for the additional solver and, thus, significantly save the computational cost. To validate the proposed one-stage method, we perform the so-called direct numerical simulations on the two-dimensional motion of multiple inertialess paramagnetic particles in a nonmagnetic fluid subjected to an external uniform magnetic field and compare their results with the existing benchmark solutions. For the validation, we develop the finite-volume version of the direct simulation method by employing the proposed one-stage method. Comparison shows that the proposed one-stage method is very accurate and efficient in direct simulations of such magnetic particulate flows.
NASA Astrophysics Data System (ADS)
Morency, C.; Tromp, J.
2008-12-01
We present finite-frequency sensitivity kernels for wave propagation in porous media based upon adjoint methods. We first show that the adjoint equations in porous media are similar to the regular Biot equations upon defining an appropriate adjoint source. Then we present finite-frequency kernels for seismic phases in porous media (e.g., fast P, slow P, and S). These kernels illustrate the sensitivity of seismic observables to structural parameters and form the basis of tomographic inversions. Finally, we show an application of this imaging technique related to the detection of buried landmines and unexploded ordnance (UXO) in porous environments.
NASA Astrophysics Data System (ADS)
Liu, Jiaqi; Han, Jing; Zhang, Yi; Bai, Lianfa
2015-10-01
The locally adaptive regression kernels model can describe the edge shapes and overall graphic trends of images accurately, but it does not consider color information, which is an important element of an image. We therefore present a novel method of target recognition based on a 3-D-color-space locally adaptive regression kernels model. Unlike approaches that treat color as supplementary information, this method directly calculates local similarity features from the 3-D data of a color image. The proposed method uses a few examples of an object as a query to detect generic objects with incompact, complex, and changeable shapes. Our method involves three phases. First, we calculate novel color-space descriptors from the RGB color space of the query image, which measure the likeness of a voxel to its surroundings; salient features that include spatial-dimensional and color-dimensional information are extracted from these descriptors and simplified by principal components analysis (PCA) to construct a non-similar local structure feature set of the object class. Second, we compare the salient features with analogous features from the target image, using a matrix generalization of the cosine similarity measure; the similar structures in the target image are then obtained by local similarity structure statistical matching. Finally, we apply non-maxima suppression to the similarity image to extract the object position and mark the object in the test image. Experimental results demonstrate that our approach is effective and accurate in improving the ability to identify targets.
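The "matrix generalization of the cosine similarity measure" used in the comparison phase can be sketched as a normalized Frobenius inner product between feature matrices. The feature matrices below are invented placeholders, not actual locally adaptive regression kernel descriptors.

```python
import numpy as np

def matrix_cosine_similarity(A, B):
    """Cosine of the angle between two feature matrices under the Frobenius inner product."""
    num = np.sum(A * B)                              # <A, B>_F
    den = np.linalg.norm(A) * np.linalg.norm(B)      # ||A||_F * ||B||_F
    return num / den

query = np.array([[1.0, 0.5], [0.2, 0.9]])
same = query.copy()
other = np.array([[0.0, 1.0], [1.0, 0.0]])

print(matrix_cosine_similarity(query, same))   # 1.0: identical feature structure
print(matrix_cosine_similarity(query, other))  # smaller: dissimilar structure
```

In the full method this scalar is computed between the query's salient-feature matrix and a sliding window of target-image features, producing the similarity image on which non-maxima suppression is run.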
NASA Astrophysics Data System (ADS)
Cho, Sang Hyun; Reece, Warren D.; Kim, Chan-Hyeong
2004-03-01
Dose calculations around electron-emitting metallic spherical sources were performed up to the X90 distance of each electron energy ranging from 0.5 to 3.0 MeV using the MCNP 4C Monte Carlo code and the dose point kernel (DPK) method with the DPKs rescaled using the linear range ratio and physical density ratio, respectively. The results show that the discrepancy between the MCNP and DPK results increases with the atomic number of the source (i.e., heterogeneity in source-target geometry), regardless of the rescaling method used. The observed discrepancies between the MCNP and DPK results were up to 100% for extreme cases such as a platinum source immersed in water.
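The physical-density rescaling of a dose point kernel can be sketched as evaluating a water kernel at a density-scaled radius. The kernel shape below is a made-up exponential placeholder, not a measured beta DPK, and real rescaling schemes also apply amplitude factors that are omitted here.

```python
import numpy as np

def rescale_dpk(dpk_water, r, rho_medium, rho_water=1.0):
    """Physical-density rescaling: sample the water kernel at a rho-scaled radius.

    In a denser medium electrons travel a shorter physical distance, so the
    water kernel is evaluated at r * (rho_medium / rho_water).
    """
    return dpk_water(r * rho_medium / rho_water)

# Placeholder water kernel: exponentially decaying dose vs radius (illustrative only).
dpk_water = lambda r: np.exp(-r / 0.2)

r = 0.1
# In a medium twice as dense, the scaled radius is larger, so the kernel
# value at the same physical r is smaller than in water.
print(rescale_dpk(dpk_water, r, rho_medium=2.0), dpk_water(r))
```

The abstract's point is that such simple rescalings (by linear range ratio or density ratio) degrade as the source-target geometry becomes more heterogeneous, e.g. a high-Z metallic source in water.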
Shot noise limit of the optical 3D measurement methods for smooth surfaces
NASA Astrophysics Data System (ADS)
Pavliček, Pavel; Pech, Miroslav
2016-03-01
The measurement uncertainty of optical 3D measurement methods for smooth surfaces caused by shot noise is investigated. The shot noise is a fundamental property of the quantum nature of light. If all noise sources are eliminated, the shot noise represents the ultimate limit of the measurement uncertainty. The measurement uncertainty is calculated for several simple model methods. The analysis shows that the measurement uncertainty depends on the wavelength of used light, the number of photons used for the measurement, and on a factor that is connected with the geometric arrangement of the measurement setup.
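The dependence on wavelength and photon number can be illustrated with the textbook shot-noise limit of a two-beam interferometric height measurement. The 1/√N phase-noise law is standard; the geometric factors specific to the methods analyzed in the paper are not reproduced here.

```python
import math

def shot_noise_height_uncertainty(wavelength, n_photons):
    """Shot-noise-limited height uncertainty of a two-beam interferometer.

    Phase uncertainty sigma_phi ~ 1/sqrt(N); reflection doubles the path,
    so a height change dh produces a phase shift of (4*pi/lambda)*dh.
    """
    sigma_phi = 1.0 / math.sqrt(n_photons)
    return wavelength / (4.0 * math.pi) * sigma_phi

lam = 633e-9  # He-Ne wavelength in metres
print(shot_noise_height_uncertainty(lam, 1e6))  # ~5e-11 m with a million photons
```

Doubling the photon count reduces the uncertainty only by √2, which is why the photon budget, together with the wavelength and setup geometry, sets the ultimate resolution floor the paper analyzes.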
Bramble, J. H.; Pasciak, J. E.; Sammon, P. H.; Thomee, V.
1989-04-01
Backward difference methods for the discretization of parabolic boundary value problems are considered in this paper. In particular, we analyze the case when the backward difference equations are only solved 'approximately' by a preconditioned iteration. We provide an analysis which shows that these methods remain stable and accurate if a suitable number of iterations (often independent of the spatial discretization and time step size) are used. Results are provided for the smooth as well as nonsmooth initial data cases. Finally, the results of numerical experiments illustrating the algorithms' performance on model problems are given.
Source Region Identification Using Kernel Smoothing
As described in this paper, Nonparametric Wind Regression is a source-to-receptor source apportionment model that can be used to identify and quantify the impact of possible source regions of pollutants as defined by wind direction sectors. It is described in detail with an exam...
NASA Astrophysics Data System (ADS)
Preza, Chrysanthe; Miller, Michael I.; Conchello, Jose-Angel
1993-07-01
We have shown that the linear least-squares (LLS) estimate of the intensities of a 3-D object obtained from a set of optical sections is unstable due to the inversion of small and zero-valued eigenvalues of the point-spread function (PSF) operator. The LLS solution was regularized by constraining it to lie in a subspace spanned by the eigenvectors corresponding to a selected number of the largest eigenvalues. In this paper we extend the regularized LLS solution to a maximum a posteriori (MAP) solution induced by a prior formed from a 'Good's like' smoothness penalty. This approach also yields a regularized linear estimator which reduces noise as well as edge artifacts in the reconstruction. The advantage of the linear MAP (LMAP) estimate over the current regularized LLS (RLLS) is its ability to regularize the inverse problem by smoothly penalizing components in the image associated with small eigenvalues. Computer simulations were performed using a theoretical PSF and a simple phantom to compare the two regularization techniques. It is shown that the reconstructions using the smoothness prior give superior variance and bias results compared to the RLLS reconstructions. Encouraging reconstructions obtained with the LMAP method from real microscopical images of a 10-micrometer fluorescent bead and a four-cell Volvox embryo are shown.
A Fast Variational Method for the Construction of Resolution Adaptive C-Smooth Molecular Surfaces.
Bajaj, Chandrajit L; Xu, Guoliang; Zhang, Qin
2009-05-01
We present a variational approach to smooth molecular (proteins, nucleic acids) surface constructions, starting from atomic coordinates, as available from the protein and nucleic-acid data banks. Molecular dynamics (MD) simulations traditionally used in understanding protein and nucleic-acid folding processes are based on molecular force fields, and require smooth models of these molecular surfaces. To accelerate MD simulations, a popular methodology is to employ coarse-grained molecular models, which represent clusters of atoms with similar physical properties by pseudo-atoms, resulting in coarser resolution molecular surfaces. We consider generation of these mixed-resolution or adaptive molecular surfaces. Our approach starts from deriving a general-form second-order geometric partial differential equation in the level-set formulation, by minimizing a first-order energy functional which additionally includes a regularization term to minimize the occurrence of chemically infeasible molecular surface pockets or tunnel-like artifacts. To achieve even higher computational efficiency, a fast cubic B-spline C(2) interpolation algorithm is also utilized. A narrow band, tri-cubic B-spline level-set method is then used to provide C(2) smooth and resolution adaptive molecular surfaces. PMID:19802355
The multiscale restriction smoothed basis method for fractured porous media (F-MsRSB)
NASA Astrophysics Data System (ADS)
Shah, Swej; Møyner, Olav; Tene, Matei; Lie, Knut-Andreas; Hajibeygi, Hadi
2016-08-01
A novel multiscale method for multiphase flow in heterogeneous fractured porous media is devised. The discrete fine-scale system is described using an embedded fracture modeling approach, in which the heterogeneous rock (matrix) and highly-conductive fractures are represented on independent grids. Given this fine-scale discrete system, the method first partitions the fine-scale volumetric grid representing the matrix and the lower-dimensional grids representing fractures into independent coarse grids. Then, basis functions for matrix and fractures are constructed by restricted smoothing, which gives a flexible and robust treatment of complex geometrical features and heterogeneous coefficients. From the basis functions one constructs a prolongation operator that maps between the coarse- and fine-scale systems. The resulting method allows for general coupling of matrix and fracture basis functions, giving efficient treatment of a large variety of fracture conductivities. In addition, basis functions can be adaptively updated using efficient global smoothing strategies to account for multiphase flow effects. The method is conservative and, because it is described and implemented in algebraic form, it is straightforward to apply it on both rectilinear and unstructured grids. Through a series of challenging test cases for single and multiphase flow, in which synthetic and realistic fracture maps are combined with heterogeneous petrophysical matrix properties, we validate the method and conclude that it is an efficient and accurate approach for simulating flow in complex, large-scale, fractured media.
NUMERICAL CONVERGENCE IN SMOOTHED PARTICLE HYDRODYNAMICS
Zhu, Qirong; Li, Yuexing; Hernquist, Lars
2015-02-10
We study the convergence properties of smoothed particle hydrodynamics (SPH) using numerical tests and simple analytic considerations. Our analysis shows that formal numerical convergence is possible in SPH only in the joint limit N → ∞, h → 0, and N_nb → ∞, where N is the total number of particles, h is the smoothing length, and N_nb is the number of neighbor particles within the smoothing volume used to compute smoothed estimates. Previous work has generally assumed that the conditions N → ∞ and h → 0 are sufficient to achieve convergence, while holding N_nb fixed. We demonstrate that if N_nb is held fixed as the resolution is increased, there will be a residual source of error that does not vanish as N → ∞ and h → 0. Formal numerical convergence in SPH is possible only if N_nb is increased systematically as the resolution is improved. Using analytic arguments, we derive an optimal compromise scaling for N_nb by requiring that this source of error balance that present in the smoothing procedure. For typical choices of the smoothing kernel, we find N_nb ∝ N^0.5. This means that if SPH is to be used as a numerically convergent method, the required computational cost does not scale with particle number as O(N), but rather as O(N^(1+δ)), where δ ≈ 0.5, with a weak dependence on the form of the smoothing kernel.
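The cost scaling derived above can be made concrete in a few lines: per-step work is proportional to N · N_nb, and growing N_nb ∝ N^0.5 gives O(N^1.5) total. The proportionality constants below are arbitrary illustrative choices.

```python
def sph_cost(n_particles, nnb0=32, n0=1e4):
    """Per-step cost model: work ~ N * N_nb, with N_nb grown as N^0.5
    relative to a reference run of n0 particles and nnb0 neighbors."""
    nnb = nnb0 * (n_particles / n0) ** 0.5
    return n_particles * nnb

ratio = sph_cost(4e4) / sph_cost(1e4)
print(ratio)  # 8.0: quadrupling N costs 4**1.5 = 8x, not the 4x an O(N) method would give
```

This is the practical content of the δ ≈ 0.5 result: convergent SPH pays a superlinear cost penalty relative to the fixed-N_nb practice it replaces.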
Kernel Phase and Kernel Amplitude in Fizeau Imaging
NASA Astrophysics Data System (ADS)
Pope, Benjamin J. S.
2016-09-01
Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.
Spatial smoothing systematically biases the localization of reward-related brain activity
Sacchet, Matthew D.; Knutson, Brian
2012-01-01
Neuroimaging methods with enhanced spatial resolution such as functional magnetic resonance imaging (FMRI) suggest that the subcortical striatum plays a critical role in human reward processing. Analysis of FMRI data requires several preprocessing steps, some of which entail tradeoffs. For instance, while spatial smoothing can enhance statistical power, it may also bias localization towards regions that contain more gray than white matter. In a meta-analysis and reanalysis of an existing dataset, we sought to determine whether spatial smoothing could systematically bias the spatial localization of foci related to reward anticipation in the nucleus accumbens (NAcc). An Activation Likelihood Estimate (ALE) meta-analysis revealed that peak ventral striatal ALE foci for studies that used smaller spatial smoothing kernels (i.e. < 6 mm FWHM) were more anterior than those identified for studies that used larger kernels (i.e. > 7 mm FWHM). Additionally, subtraction analysis of findings for studies that used smaller versus larger smoothing kernels revealed a significant cluster of differential activity in the left relatively anterior NAcc (Talairach coordinates: −10, 9, −1). A second meta-analysis revealed that larger smoothing kernels were correlated with more posterior localizations of NAcc activation foci (p < 0.015), but revealed no significant associations with other potentially relevant parameters (including voxel volume, magnet strength, and publication date). Finally, repeated analysis of a representative dataset processed at different smoothing kernels (i.e., 0–12 mm) also indicated that smoothing systematically yielded more posterior activation foci in the NAcc (p < 0.005). Taken together, these findings indicate that spatial smoothing can systematically bias the spatial localization of striatal activity. These findings have implications both for historical interpretation of past findings related to reward processing and for the analysis of future studies.
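The localization bias can be reproduced in one dimension: Gaussian smoothing pulls the apparent peak of a narrow, strong activation toward an adjacent broader mass. The signal shapes and kernel width below are illustrative constructions, not fMRI data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

signal = np.zeros(100)
signal[40] = 1.0      # narrow, strong focus (e.g. a small nucleus)
signal[45:60] = 0.6   # broader, weaker neighbouring mass

smoothed = gaussian_filter1d(signal, sigma=4.0)

print(int(np.argmax(signal)))    # 40: true peak location
print(int(np.argmax(smoothed)))  # shifted into the broad region (> 40)
```

The narrow peak's energy is spread thin by the kernel while the broad region's is largely preserved, so the smoothed maximum migrates, which is the 1-D analogue of the anterior-to-posterior NAcc shift reported above.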
NASA Technical Reports Server (NTRS)
Verger, Aleixandre; Baret, F.; Weiss, M.; Kandasamy, S.; Vermote, E.
2013-01-01
Consistent, continuous, and long time series of global biophysical variables derived from satellite data are required for global change research. A novel climatology fitting approach called CACAO (Consistent Adjustment of the Climatology to Actual Observations) is proposed to reduce noise and fill gaps in time series by scaling and shifting the seasonal climatological patterns to the actual observations. The shift and scale CACAO parameters adjusted for each season allow quantifying shifts in the timing of seasonal phenology and inter-annual variations in magnitude as compared to the average climatology. CACAO was assessed first over simulated daily Leaf Area Index (LAI) time series with varying fractions of missing data and noise. Then, performances were analyzed over actual satellite LAI products derived from the AVHRR Long-Term Data Record for the 1981-2000 period over the BELMANIP2 globally representative sample of sites. Comparison with two widely used temporal filtering methods, the asymmetric Gaussian (AG) model and the Savitzky-Golay (SG) filter as implemented in TIMESAT, revealed that CACAO achieved better performances for smoothing AVHRR time series characterized by a high level of noise and frequent missing observations. The resulting smoothed time series captures the vegetation dynamics well and shows no gaps, compared with the 50-60% of data still missing after AG or SG reconstruction. Results of simulation experiments as well as confrontation with actual AVHRR time series indicate that the proposed CACAO method is more robust to noise and missing data than the AG and SG methods for phenology extraction.
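The Savitzky-Golay filter used as a baseline above is available directly in SciPy; here is a minimal comparison on a noisy synthetic seasonal curve (illustrative data, not AVHRR LAI, and without the missing-data gaps that motivate CACAO).

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
truth = 3.0 * np.sin(2.0 * np.pi * t) ** 2  # smooth seasonal-like cycle
noisy = truth + rng.normal(scale=0.3, size=t.size)

# Savitzky-Golay: local quadratic fit over an 11-sample moving window.
smoothed = savgol_filter(noisy, window_length=11, polyorder=2)

mse_noisy = np.mean((noisy - truth) ** 2)
mse_smooth = np.mean((smoothed - truth) ** 2)
print(mse_noisy, mse_smooth)  # smoothing cuts the error substantially
```

Like all fixed-window local fits, SG breaks down when long runs of observations are missing, which is the regime where the climatology-anchored CACAO approach is argued to do better.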
Smoothed Particle Inference: A Kilo-Parametric Method for X-ray Galaxy Cluster Modeling
Peterson, John R.; Marshall, P.J.; Andersson, K.; /Stockholm U. /SLAC
2005-08-05
We propose an ambitious new method that models the intracluster medium in clusters of galaxies as a set of X-ray emitting smoothed particles of plasma. Each smoothed particle is described by a handful of parameters including temperature, location, size, and elemental abundances. Hundreds to thousands of these particles are used to construct a model cluster of galaxies, with the appropriate complexity estimated from the data quality. This model is then compared iteratively with X-ray data in the form of adaptively binned photon lists via a two-sample likelihood statistic and iterated via Markov Chain Monte Carlo. The complex cluster model is propagated through the X-ray instrument response using direct sampling Monte Carlo methods. Using this approach the method can reproduce many of the features observed in the X-ray emission in a less assumption-dependent way than traditional analyses, and it allows for a more detailed characterization of the density, temperature, and metal abundance structure of clusters. Multi-instrument X-ray analyses and simultaneous X-ray, Sunyaev-Zeldovich (SZ), and lensing analyses are a straightforward extension of this methodology. Significant challenges still exist in understanding the degeneracy in these models and the statistical noise induced by the complexity of the models.
Unified framework for anisotropic interpolation and smoothing of diffusion tensor images.
Mishra, Arabinda; Lu, Yonggang; Meng, Jingjing; Anderson, Adam W; Ding, Zhaohua
2006-07-15
To enhance the performance of diffusion tensor imaging (DTI)-based fiber tractography, this study proposes a unified framework for anisotropic interpolation and smoothing of DTI data. The critical component of this framework is an anisotropic sigmoid interpolation kernel which is adaptively modulated by the local image intensity gradient profile. The adaptive modulation of the sigmoid kernel permits image smoothing in homogeneous regions and meanwhile guarantees preservation of structural boundaries. The unified scheme thus allows piecewise-smooth, continuous, and boundary-preserving interpolation of DTI data, so that smooth fiber tracts can be tracked in a continuous manner and confined within the boundaries of the targeted structure. The new interpolation method is compared with conventional interpolation methods on the basis of fiber tracking from synthetic and in vivo DTI data, which demonstrates the effectiveness of this unified framework. PMID:16624586
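The idea of modulating a smoothing kernel by the local intensity profile so that edges stop the smoothing can be sketched with a simple 1-D bilateral-style filter. This is a generic edge-aware scheme in the spirit of the paper's gradient-modulated sigmoid kernel, not its actual DTI formulation.

```python
import numpy as np

def edge_aware_smooth(y, spatial_sigma=2.0, range_sigma=0.1, radius=6):
    """Bilateral-style 1-D filter: spatial Gaussian weight multiplied by an
    intensity-difference Gaussian that suppresses averaging across edges."""
    out = np.empty_like(y)
    for i in range(y.size):
        j = np.arange(max(0, i - radius), min(y.size, i + radius + 1))
        w = (np.exp(-((j - i) ** 2) / (2 * spatial_sigma**2))
             * np.exp(-((y[j] - y[i]) ** 2) / (2 * range_sigma**2)))
        out[i] = np.sum(w * y[j]) / np.sum(w)
    return out

step = np.where(np.arange(60) < 30, 0.0, 1.0)  # sharp boundary at index 30
smoothed = edge_aware_smooth(step)

# The intensity-difference term suppresses mixing across the step, so the
# boundary stays sharp while flat regions would still be denoised.
print(smoothed[29], smoothed[30])  # still ~0.0 and ~1.0
```

A plain Gaussian of the same width would blur the step over several samples; the range term is what plays the role of the "boundary preservation" the abstract describes.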
Method of adiabatic modes in studying problems of smoothly irregular open waveguide structures
Sevastianov, L. A.; Egorov, A. A.; Sevastyanov, A. L.
2013-02-15
Basic steps in developing an original method of adiabatic modes that makes it possible to solve the direct and inverse problems of simulating and designing three-dimensional multilayered smoothly irregular open waveguide structures are described. A new element in the method is that an approximate solution of Maxwell's equations is made to obey 'inclined' boundary conditions at the interfaces between the media being considered. These boundary conditions take into account the obliqueness of planes tangent to nonplanar boundaries between the media and lead to new equations for coupled vector quasiwaveguide hybrid adiabatic modes. Solutions of these equations describe the phenomenon of 'entanglement' of two linear polarizations of an irregular multilayered waveguide, the appearance of a new mode in an entangled state, and the effect of rotation of the polarization plane of quasiwaveguide modes. The efficiency of the method is demonstrated by considering the example of numerically simulating a thin-film generalized waveguide Lueneburg lens.
NASA Astrophysics Data System (ADS)
Rezaee, Mousa; Shaterian-Alghalandis, Vahid; Banan-Nojavani, Ali
2013-04-01
In this paper, the smooth orthogonal decomposition (SOD) method is extended to lightly damped systems in which the inputs are time-shifted functions of one or more random processes. An example of such a practical case is the vehicle suspension system, in which the random inputs due to road roughness applied to the rear wheels are shifted functions of the same random inputs on the front wheels, with a time lag depending on the vehicle wheelbase as well as its velocity. The developed SOD method is applied to determine the natural frequencies and mode shapes of a certain vehicle suspension system and the results are compared with the true values obtained from the structural eigenvalue problem. The consistency of the results indicates that the SOD method can be applied with a high degree of accuracy to calculate the modal parameters of vibrating systems in which the system inputs are shifted functions of one or more random processes.
Deng, Zhaohong; Choi, Kup-Sze; Jiang, Yizhang; Wang, Shitong
2014-12-01
Inductive transfer learning has attracted increasing attention for the training of effective models in the target domain by leveraging information from the source domain. However, most transfer learning methods are developed for a specific model, such as the commonly used support vector machine, which makes the methods applicable only to the adopted models. In this regard, the generalized hidden-mapping ridge regression (GHRR) method is introduced in order to train various types of classical intelligence models, including neural networks, fuzzy logical systems and kernel methods. Furthermore, the knowledge-leverage based transfer learning mechanism is integrated with GHRR to realize the inductive transfer learning method called transfer GHRR (TGHRR). Since the information from the induced knowledge is much clearer and more concise than that from the data in the source domain, it is more convenient to control and balance the similarity and difference of data distributions between the source and target domains. The proposed GHRR and TGHRR algorithms have been evaluated experimentally by performing regression and classification on synthetic and real-world datasets. The results demonstrate that the performance of TGHRR is competitive with or even superior to existing state-of-the-art inductive transfer learning algorithms. PMID:24710838
Immersed smoothed finite element method for fluid-structure interaction simulation of aortic valves
NASA Astrophysics Data System (ADS)
Yao, Jianyao; Liu, G. R.; Narmoneva, Daria A.; Hinton, Robert B.; Zhang, Zhi-Qian
2012-12-01
This paper presents a novel numerical method for simulating fluid-structure interaction (FSI) problems when blood flows over aortic valves. The method uses the immersed boundary/element method and the smoothed finite element method, and hence is termed the IS-FEM. The IS-FEM is a partitioned approach and does not need a body-fitted mesh for FSI simulations. It consists of three main modules: the fluid solver, the solid solver and the FSI force solver. In this work, the blood is modeled as incompressible viscous flow and solved using the characteristic-based-split scheme with FEM for spatial discretization. The leaflets of the aortic valve are modeled as Mooney-Rivlin hyperelastic materials and solved using the smoothed finite element method (or S-FEM). The FSI force is calculated on the Lagrangian fictitious fluid mesh that is identical to the moving solid mesh. The octree search and neighbor-to-neighbor schemes are used to detect efficiently the FSI pairs of fluid and solid cells. As an example, a 3D idealized model of the aortic valve is built, and the opening process of the valve is simulated using the proposed IS-FEM. Numerical results indicate that the IS-FEM can serve as an efficient tool in the study of aortic valve dynamics to reveal the details of stresses in the aortic valves, the flow velocities in the blood, and the shear forces on the interfaces. This tool can also be applied to animal models studying disease processes and may ultimately translate to new adaptive methods working with magnetic resonance images, leading to improvements in diagnostic and prognostic paradigms, as well as surgical planning, in the care of patients.
A method for the accurate and smooth approximation of standard thermodynamic functions
NASA Astrophysics Data System (ADS)
Coufal, O.
2013-01-01
A method is proposed for the calculation of approximations of standard thermodynamic functions. The method is consistent with the physical properties of standard thermodynamic functions. This means that the approximation functions are, in contrast to the hitherto used approximations, continuous and smooth in every temperature interval in which no phase transformations take place. The calculation algorithm was implemented by the SmoothSTF program in the C++ language which is part of this paper.
Program summary
Program title: SmoothSTF
Catalogue identifier: AENH_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENH_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 3807
No. of bytes in distributed program, including test data, etc.: 131965
Distribution format: tar.gz
Programming language: C++
Computer: Any computer with gcc version 4.3.2 compiler
Operating system: Debian GNU Linux 6.0. The program can be run in operating systems in which the gcc compiler can be installed, see http://gcc.gnu.org/install/specific.html
RAM: 256 MB are sufficient for the table of standard thermodynamic functions with 500 lines
Classification: 4.9
Nature of problem: Standard thermodynamic functions (STF) of individual substances are given by thermal capacity at constant pressure, entropy and enthalpy. STF are continuous and smooth in every temperature interval in which no phase transformations take place. The temperature dependence of STF as expressed by the table of its values is for further application approximated by temperature functions. In the paper, a method is proposed for calculating approximation functions which, in contrast to the hitherto used approximations, are continuous and smooth in every temperature interval.
Solution method: The approximation functions are
NASA Technical Reports Server (NTRS)
Zeng, S.; Wesseling, P.
1993-01-01
The performance of a linear multigrid method using four smoothing methods, called SCGS (Symmetrical Coupled Gauss-Seidel), CLGS (Collective Line Gauss-Seidel), SILU (Scalar ILU), and CILU (Collective ILU), is investigated for the incompressible Navier-Stokes equations in general coordinates, in association with Galerkin coarse grid approximation. Robustness and efficiency are measured and compared by application to test problems. The numerical results show that CILU is the most robust, SILU the least, with CLGS and SCGS in between. CLGS is the best in efficiency, SCGS and CILU follow, and SILU is the worst.
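A minimal sketch of one of the smoothers compared above, a lexicographic Gauss-Seidel sweep, applied here to a 1D Poisson model problem rather than the Navier-Stokes equations of the paper:

```python
def gauss_seidel_sweep(u, f, h):
    # One lexicographic Gauss-Seidel sweep for the 1D Poisson problem
    # -u'' = f with homogeneous Dirichlet boundary values u[0] = u[-1] = 0.
    for i in range(1, len(u) - 1):
        u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual_norm(u, f, h):
    # Max-norm of the discrete residual f - A u.
    r = 0.0
    for i in range(1, len(u) - 1):
        ri = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
        r = max(r, abs(ri))
    return r

n = 33
h = 1.0 / (n - 1)
f = [1.0] * n
u = [0.0] * n
r0 = residual_norm(u, f, h)
for _ in range(100):
    gauss_seidel_sweep(u, f, h)
r1 = residual_norm(u, f, h)
```

In a multigrid cycle such sweeps are used only to damp high-frequency error; the remaining smooth error is handled on coarser grids.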
Ge, Tian; Nichols, Thomas E.; Ghosh, Debashis; Mormino, Elizabeth C.
2015-01-01
Measurements derived from neuroimaging data can serve as markers of disease and/or healthy development, are largely heritable, and have been increasingly utilized as (intermediate) phenotypes in genetic association studies. To date, imaging genetic studies have mostly focused on discovering isolated genetic effects, typically ignoring potential interactions with non-genetic variables such as disease risk factors, environmental exposures, and epigenetic markers. However, identifying significant interaction effects is critical for revealing the true relationship between genetic and phenotypic variables, and shedding light on disease mechanisms. In this paper, we present a general kernel machine based method for detecting effects of interaction between multidimensional variable sets. This method can model the joint and epistatic effect of a collection of single nucleotide polymorphisms (SNPs), accommodate multiple factors that potentially moderate genetic influences, and test for nonlinear interactions between sets of variables in a flexible framework. As a demonstration of application, we applied the method to data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) to detect the effects of the interactions between candidate Alzheimer's disease (AD) risk genes and a collection of cardiovascular disease (CVD) risk factors, on hippocampal volume measurements derived from structural brain magnetic resonance imaging (MRI) scans. Our method identified that two genes, CR1 and EPHA1, demonstrate significant interactions with CVD risk factors on hippocampal volume, suggesting that CR1 and EPHA1 may play a role in influencing AD-related neurodegeneration in the presence of CVD risks. PMID:25600633
Ge, Tian; Nichols, Thomas E; Ghosh, Debashis; Mormino, Elizabeth C; Smoller, Jordan W; Sabuncu, Mert R
2015-04-01
Measurements derived from neuroimaging data can serve as markers of disease and/or healthy development, are largely heritable, and have been increasingly utilized as (intermediate) phenotypes in genetic association studies. To date, imaging genetic studies have mostly focused on discovering isolated genetic effects, typically ignoring potential interactions with non-genetic variables such as disease risk factors, environmental exposures, and epigenetic markers. However, identifying significant interaction effects is critical for revealing the true relationship between genetic and phenotypic variables, and shedding light on disease mechanisms. In this paper, we present a general kernel machine based method for detecting effects of the interaction between multidimensional variable sets. This method can model the joint and epistatic effect of a collection of single nucleotide polymorphisms (SNPs), accommodate multiple factors that potentially moderate genetic influences, and test for nonlinear interactions between sets of variables in a flexible framework. As a demonstration of application, we applied the method to the data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) to detect the effects of the interactions between candidate Alzheimer's disease (AD) risk genes and a collection of cardiovascular disease (CVD) risk factors, on hippocampal volume measurements derived from structural brain magnetic resonance imaging (MRI) scans. Our method identified that two genes, CR1 and EPHA1, demonstrate significant interactions with CVD risk factors on hippocampal volume, suggesting that CR1 and EPHA1 may play a role in influencing AD-related neurodegeneration in the presence of CVD risks. PMID:25600633
NASA Astrophysics Data System (ADS)
Sugio, Tetsuya; Yamamoto, Masayoshi; Funabiki, Shigeyuki
The use of an SMES (Superconducting Magnetic Energy Storage) for smoothing power fluctuations in a railway substation has been discussed. This paper proposes a smoothing control method based on fuzzy reasoning for reducing the SMES capacity at substations along high-speed railways. The proposed smoothing control method comprises three countermeasures for reduction of the SMES capacity. The first countermeasure involves modification of rule 1 for smoothing out the fluctuating electric power to its average value. The other countermeasures involve the modification of the central value of the stored energy control in the SMES and revision of the membership function in rule 2 for reduction of the SMES capacity. The SMES capacity in the proposed smoothing control method is reduced by 49.5% when compared to that in the nonrevised control method. It is confirmed by computer simulations that the proposed control method is suitable for smoothing out power fluctuations in substations along high-speed railways and for reducing the SMES capacity.
An incompressible smoothed particle hydrodynamics method for the motion of rigid bodies in fluids
NASA Astrophysics Data System (ADS)
Tofighi, N.; Ozbulut, M.; Rahmat, A.; Feng, J. J.; Yildiz, M.
2015-09-01
A two-dimensional incompressible smoothed particle hydrodynamics scheme is presented for simulation of rigid bodies moving through Newtonian fluids. The scheme relies on combined usage of the rigidity constraints and the viscous penalty method to simulate rigid body motion. Different viscosity ratios and interpolation schemes are tested by simulating a rigid disc descending in quiescent medium. A viscosity ratio of 100 coupled with weighted harmonic averaging scheme has been found to provide satisfactory results. The performance of the resulting scheme is systematically tested for cases with linear motion, rotational motion and their combination. The test cases include sedimentation of a single and a pair of circular discs, sedimentation of an elliptic disc and migration and rotation of a circular disc in linear shear flow. Comparison with previous results at various Reynolds numbers indicates that the proposed method captures the motion of rigid bodies driven by flow or external body forces accurately.
NASA Astrophysics Data System (ADS)
Yang, G.; Han, X.; Hu, D. A.
2015-11-01
Modified cylindrical smoothed particle hydrodynamics (MCSPH) approximation equations are derived for hydrodynamics with material strength in axisymmetric cylindrical coordinates. The momentum equation and internal energy equation are represented to be in the axisymmetric form. The MCSPH approximation equations are applied to simulate the process of explosively driven metallic tubes, which includes strong shock waves, large deformations and large inhomogeneities, etc. The meshless and Lagrangian character of the MCSPH method offers the advantages in treating the difficulties embodied in these physical phenomena. Two test cases, the cylinder test and the metallic tube driven by two head-on colliding detonation waves, are presented. Numerical simulation results show that the new form of the MCSPH method can predict the detonation process of high explosives and the expansion process of metallic tubes accurately and robustly.
NASA Astrophysics Data System (ADS)
Mozdgir, A.; Mahdavi, Iraj; Seyyedi, I.; Shiraqei, M. E.
2011-06-01
An assembly line is a flow-oriented production system where the productive units performing the operations, referred to as stations, are aligned in a serial manner. The assembly line balancing problem arises and has to be solved when an assembly line has to be configured or redesigned. The so-called simple assembly line balancing problem (SALBP), a basic version of the general problem, has attracted the attention of researchers and practitioners of operations research for almost half a century. Four types of objective functions are considered for this kind of problem. The versions of SALBP may be complemented by a secondary objective which consists of smoothing station loads. Because of the problem's computational complexity and the difficulty of identifying an optimal solution, many heuristics have been proposed for it. In this paper a differential evolution algorithm is developed to minimize the workload smoothness index in SALBP-2, and the algorithm parameters are optimized using the Taguchi method.
Sogi, Dalbir Singh; Siddiq, Muhammad; Greiby, Ibrahim; Dolan, Kirk D
2013-12-01
Mango processing produces a significant amount of waste (peels and kernels) that can be utilized for the production of value-added ingredients for various food applications. Mango peel and kernel were dried using different techniques, such as freeze drying, hot air, vacuum and infrared. Freeze-dried mango waste had higher antioxidant properties than those from other techniques. The ORAC values of peel and kernel varied from 418-776 and 1547-1819 μmol TE/g db, respectively. The solubility of freeze-dried peel and kernel powder was the highest. The water and oil absorption index of mango waste powders ranged between 1.83-6.05 and 1.66-3.10, respectively. Freeze-dried powders had the lowest bulk density values among the different techniques tried. The cabinet-dried waste powders can potentially be used in food products to enhance their nutritional and antioxidant properties. PMID:23871007
NASA Astrophysics Data System (ADS)
von Clarmann, T.
2014-09-01
The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
Wang, Dongliang; Hutson, Alan D.
2016-01-01
The traditional confidence interval associated with the ordinary least squares estimator of a linear regression coefficient is sensitive to non-normality of the underlying distribution. In this article, we develop a novel kernel density estimator for the ordinary least squares estimator by utilizing well-defined inversion-based kernel smoothing techniques to estimate the conditional probability density of the dependent random variable. Simulation results show that, given a small sample size, our method significantly increases the power compared with Wald-type confidence intervals. The proposed approach is illustrated via an application to a classic small data set originally from Graybill (1961). PMID:26924882
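The general idea of estimating an estimator's density by kernel smoothing can be sketched as follows; this sketch uses a plain Gaussian kernel over resampled OLS slopes from synthetic data, not the paper's inversion-based construction:

```python
import math
import random

def gaussian_kde(points, bandwidth):
    # Plain Gaussian kernel density estimator; the paper's inversion-based
    # kernel is replaced here by this standard estimator for illustration.
    n = len(points)
    def density(x):
        s = 0.0
        for p in points:
            u = (x - p) / bandwidth
            s += math.exp(-0.5 * u * u)
        return s / (n * bandwidth * math.sqrt(2 * math.pi))
    return density

rng = random.Random(0)
# Repeatedly simulate y = 2x + noise (hypothetical data) and collect
# the OLS slope estimate from each replicate.
slopes = []
for _ in range(200):
    xs = [i / 10 for i in range(10)]
    ys = [2 * x + rng.gauss(0, 0.1) for x in xs]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    slopes.append(b)
f = gaussian_kde(slopes, 0.05)
```

A confidence interval can then be read off from the quantiles of the estimated density rather than from a normal approximation.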
NASA Astrophysics Data System (ADS)
Møyner, Olav; Lie, Knut-Andreas
2016-01-01
A wide variety of multiscale methods have been proposed in the literature to reduce runtime and provide better scaling for the solution of Poisson-type equations modeling flow in porous media. We present a new multiscale restricted-smoothed basis (MsRSB) method that is designed to be applicable to both rectilinear grids and unstructured grids. Like many other multiscale methods, MsRSB relies on a coarse partition of the underlying fine grid and a set of local prolongation operators (multiscale basis functions) that map unknowns associated with the fine grid cells to unknowns associated with blocks in the coarse partition. These mappings are constructed by restricted smoothing: Starting from a constant, a localized iterative scheme is applied directly to the fine-scale discretization to compute prolongation operators that are consistent with the local properties of the differential operators. The resulting method has three main advantages: First of all, both the coarse and the fine grid can have general polyhedral geometry and unstructured topology. This means that partitions and good prolongation operators can easily be constructed for complex models involving high media contrasts and unstructured cell connections introduced by faults, pinch-outs, erosion, local grid refinement, etc. In particular, the coarse partition can be adapted to geological or flow-field properties represented on cells or faces to improve accuracy. Secondly, the method is accurate and robust when compared to existing multiscale methods and does not need expensive recomputation of local basis functions to account for transient behavior: Dynamic mobility changes are incorporated by continuing to iterate a few extra steps on existing basis functions. This way, the cost of updating the prolongation operators becomes proportional to the amount of change in fluid mobility and one reduces the need for expensive, tolerance-based updates. Finally, since the MsRSB method is formulated on top of a cell
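The restricted-smoothing construction of prolongation operators can be sketched in 1D: each coarse block starts from an indicator function, which is relaxed by damped Jacobi on a constant-coefficient Laplacian and rescaled to a partition of unity. This sketch omits the support-restriction logic of the actual MsRSB method; sizes and the damping factor are illustrative:

```python
def msrsb_prolongation(n_fine, n_blocks, n_iter=50, omega=0.6):
    # Each coarse block b starts with an indicator basis function over
    # its fine cells; damped Jacobi smoothing spreads it out, and a
    # rescaling keeps the basis functions summing to one in every cell.
    size = n_fine // n_blocks
    basis = [[1.0 if i // size == b else 0.0 for i in range(n_fine)]
             for b in range(n_blocks)]
    for _ in range(n_iter):
        for b in range(n_blocks):
            old = basis[b][:]
            for i in range(n_fine):
                left = old[i - 1] if i > 0 else old[i]
                right = old[i + 1] if i < n_fine - 1 else old[i]
                basis[b][i] = (1 - omega) * old[i] + omega * 0.5 * (left + right)
        # Rescale so the basis functions form a partition of unity.
        for i in range(n_fine):
            total = sum(basis[b][i] for b in range(n_blocks))
            for b in range(n_blocks):
                basis[b][i] /= total
    return basis

basis = msrsb_prolongation(20, 4)
```

The resulting rows are the multiscale basis functions; in the real method the iteration acts on the fine-scale discretization of the actual media properties, so the basis adapts to contrasts and unstructured connections.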
Coupling of Smoothed Particle Hydrodynamics with Finite Volume method for free-surface flows
NASA Astrophysics Data System (ADS)
Marrone, S.; Di Mascio, A.; Le Touzé, D.
2016-04-01
A new algorithm for the solution of free surface flows with large front deformation and fragmentation is presented. The algorithm is obtained by coupling a classical Finite Volume (FV) approach, that discretizes the Navier-Stokes equations on a block structured Eulerian grid, with an approach based on the Smoothed Particle Hydrodynamics (SPH) method, implemented in a Lagrangian framework. The coupling procedure is formulated in such a way that each solver is applied in the region where its intrinsic characteristics can be exploited in the most efficient and accurate way: the FV solver is used to resolve the bulk flow and the wall regions, whereas the SPH solver is implemented in the free surface region to capture details of the front evolution. The reported results clearly prove that the combined use of the two solvers is convenient from the point of view of both accuracy and computing time.
Wu, Wei; Fan, Qinwei; Zurada, Jacek M; Wang, Jian; Yang, Dakun; Liu, Yan
2014-02-01
The aim of this paper is to develop a novel method to prune feedforward neural networks by introducing an L1/2 regularization term into the error function. This procedure forces weights to become smaller during the training, so that they can eventually be removed after the training. The usual L1/2 regularization term involves absolute values and is not differentiable at the origin, which typically causes oscillation of the gradient of the error function during the training. A key point of this paper is to modify the usual L1/2 regularization term by smoothing it at the origin. This approach offers three advantages: First, it removes the oscillation of the gradient value. Secondly, it gives better pruning, namely, the final weights to be removed are smaller than those produced through the usual L1/2 regularization. Thirdly, it makes it possible to prove the convergence of the training. Supporting numerical examples are also provided. PMID:24291693
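The central idea, smoothing the L1/2 regularizer at the origin so its gradient stops oscillating, can be illustrated with one simple smooth surrogate; the particular choice (w^2 + eps)^(1/4) is an assumption for illustration, not necessarily the piecewise smoothing used in the paper:

```python
def smoothed_l_half(w, eps=1e-4):
    # Smooth surrogate for |w|^(1/2): replace |w| by sqrt(w^2 + eps),
    # which is differentiable everywhere, including at the origin.
    return (w * w + eps) ** 0.25

def grad_smoothed_l_half(w, eps=1e-4):
    # d/dw (w^2 + eps)^(1/4) = 0.5 * w * (w^2 + eps)^(-3/4)
    return 0.5 * w * (w * w + eps) ** (-0.75)

# The gradient is now bounded and vanishes at the origin, so it cannot
# oscillate during training the way the subgradient of |w|^(1/2) does,
# while for |w| well above sqrt(eps) the penalty is close to |w|^(1/2).
value_at_one = smoothed_l_half(1.0)
grad_at_zero = grad_smoothed_l_half(0.0)
```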
NASA Astrophysics Data System (ADS)
Wang, Liang; Chen, Dong; Cheng, Tinghai; He, Pu; Lu, Xiaohui; Zhao, Hongwei
2016-08-01
The smooth impact drive mechanism (SIDM) is a type of piezoelectric actuator that has been developed for several decades. As a kind of driving method for the SIDM, the traditional sawtooth (TS) wave is always employed. The kinetic friction force during the rapid contraction stage usually results in the generation of a backward motion. A friction regulation hybrid (FRH) driving method realized by a composite waveform for the backward motion restraint of the SIDM is proposed in this paper. The composite waveform is composed of a sawtooth driving (SD) wave and a sinusoidal friction regulation (SFR) wave which is applied to the rapid deformation stage of the SD wave. A prototype of the SIDM was fabricated and its output performance under the excitation of the FRH driving method and the TS wave driving method was tested. The results indicate that the backward motion can be restrained obviously using the FRH driving method. Compared with the driving effect of the TS wave, the backward rates of the prototype in forward and reverse motions are decreased by 83% and 85%, respectively.
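A composite drive signal of this kind, a sawtooth with a sinusoidal friction-regulation wave superposed only on the rapid contraction stage, can be sketched as follows; every parameter name, frequency, and amplitude here is hypothetical rather than taken from the prototype:

```python
import math

def frh_waveform(t, period=1.0, contraction=0.1, amp=1.0,
                 reg_amp=0.2, reg_freq=20.0):
    # Hypothetical composite FRH-style drive signal: a sawtooth whose
    # slow-rise / rapid-fall shape plays the role of the SD wave, with
    # a sinusoidal friction-regulation (SFR) component added only
    # during the rapid contraction stage.
    tau = t % period
    rise = period - contraction
    if tau < rise:
        return amp * tau / rise          # slow extension stage
    # rapid contraction stage with the SFR wave superposed
    base = amp * (period - tau) / contraction
    return base + reg_amp * math.sin(2 * math.pi * reg_freq * tau)
```

Sampling this function at the actuator's drive rate would give one period of the composite waveform; the high-frequency component is confined to the contraction stage where backward motion is generated.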
Workshop on advances in smooth particle hydrodynamics
Wingate, C.A.; Miller, W.A.
1993-12-31
This proceedings contains viewgraphs presented at the 1993 workshop held at Los Alamos National Laboratory. Discussed topics include: negative stress, reactive flow calculations, interface problems, boundaries and interfaces, energy conservation in viscous flows, linked penetration calculations, stability and consistency of the SPH method, instabilities, wall heating and conservative smoothing, tensors, tidal disruption of stars, breaking the 10,000,000 particle limit, modelling relativistic collapse, SPH without H, relativistic KSPH avoidance of velocity based kernels, tidal compression and disruption of stars near a supermassive rotating black hole, and finally relativistic SPH viscosity and energy.
Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.
2011-01-01
Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185
Simulation of surface tension in 2D and 3D with smoothed particle hydrodynamics method
NASA Astrophysics Data System (ADS)
Zhang, Mingyu
2010-09-01
The methods for simulating surface tension with the smoothed particle hydrodynamics (SPH) method in two dimensions and three dimensions are developed. In the 2D surface tension model, the SPH particle on the boundary in 2D is detected dynamically according to the algorithm developed by Dilts [G.A. Dilts, Moving least-squares particle hydrodynamics II: conservation and boundaries, International Journal for Numerical Methods in Engineering 48 (2000) 1503-1524]. The boundary curve in 2D is reconstructed locally with a Lagrangian interpolation polynomial. In the 3D surface tension model, the SPH particle on the boundary in 3D is detected dynamically according to the algorithm developed by Haque and Dilts [A. Haque, G.A. Dilts, Three-dimensional boundary detection for particle methods, Journal of Computational Physics 226 (2007) 1710-1730]. The boundary surface in 3D is reconstructed locally with the moving least squares (MLS) method. By transforming the coordinate system, it is guaranteed that the interface function is one-valued in the local coordinate system. The normal vector and curvature of the boundary surface are calculated according to the reconstructed boundary surface, and then the surface tension force can be calculated. The surface tension force acts only on the boundary particles. Density correction is applied to the boundary particles in order to remove the boundary inconsistency. The surface tension models in 2D and 3D have been applied to benchmark tests for surface tension, and the ability of the current method to simulate surface tension in 2D and 3D is demonstrated.
ERIC Educational Resources Information Center
Gardner, Don E.
The merits of double exponential smoothing are discussed relative to other types of pattern-based enrollment forecasting methods. The difficulties associated with selecting an appropriate weight factor are discussed, and their potential effects on prediction results are illustrated. Two methods for objectively selecting the "best" weight factor…
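Double exponential smoothing itself is compact enough to state directly; below is a standard Holt-style formulation with illustrative weight factors and toy enrollment data, not the report's own procedure:

```python
def double_exponential_smoothing(series, alpha, beta):
    # Holt's linear method: the level and the trend are each updated
    # with their own smoothing weight; the report's concern is how
    # such weight factors (here alpha and beta) should be chosen.
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        last_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return level + trend  # one-step-ahead forecast

enrollments = [1000, 1050, 1100, 1150, 1200]  # perfectly linear toy data
forecast = double_exponential_smoothing(enrollments, alpha=0.5, beta=0.5)
```

On perfectly linear data the method reproduces the trend exactly; on real enrollment series the forecast's sensitivity to alpha and beta is precisely the weight-selection problem the report discusses.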
NASA Astrophysics Data System (ADS)
Dyachkov, S. A.; Parshikov, A. N.; Zhakhovsky, V. V.
2015-11-01
Experimental methods of observing the early stage of shock-induced ejecta from a metal surface with micrometer-sized perturbations are still limited in terms of following the complete sequence of processes having microscale dimensions and nanoscale times. Therefore, simulations by the smoothed particle hydrodynamics (SPH) and molecular dynamics (MD) methods can shed light on the details of micro-jet evolution. The size of the simulated sample is too restricted in MD, but simulations with a large enough number of atoms can be scaled well to the sizes of realistic samples. To validate such scaling, comparative MD and SPH simulations of tin samples are performed. The SPH simulation takes the realistic experimental sizes, while MD uses proportionally scaled sample sizes. It is shown that the velocity and mass distributions along the jets simulated by MD and SPH are in good agreement. The observed difference in spike velocity between MD and experiments can be partially explained by a profound effect of surface tension on jets ejected from the small-scale samples.
NASA Astrophysics Data System (ADS)
Duguet, T.; Bender, M.; Ebran, J.-P.; Lesinski, T.; Somà, V.
2015-12-01
This programmatic paper lays down the possibility to reconcile the necessity to resum many-body correlations into the energy kernel with the fact that safe multi-reference energy density functional (EDF) calculations cannot be achieved whenever the Pauli principle is not enforced, as is for example the case when many-body correlations are parametrized under the form of empirical density dependencies. Our proposal is to exploit a newly developed ab initio many-body formalism to guide the construction of safe, explicitly correlated and systematically improvable parametrizations of the off-diagonal energy and norm kernels that lie at the heart of the nuclear EDF method. The many-body formalism of interest relies on the concepts of symmetry breaking and restoration that have made the fortune of the nuclear EDF method and is, as such, amenable to this guidance. After elaborating on our proposal, we briefly outline the project we plan to execute in the years to come.
NASA Astrophysics Data System (ADS)
Gaudeua de Gerlicz, C.; Golding, J. G.; Bobola, Ph.; Moutarde, C.; Naji, S.
2008-06-01
Spaceflight under microgravity causes biological and physiological imbalances in human beings. Many studies have already been published on this topic, especially on sleep disturbances and circadian rhythms (the vigilance-sleep alternation, body temperature, etc.). Factors like space motion sickness, noise, or excitement can cause severe sleep disturbances. For stays of longer than four months in space, gradual increases in the planned duration of sleep were reported. [1] The average sleep in orbit was more than 1.5 hours shorter than during control periods on Earth, where sleep averaged 7.9 hours. [2] Alertness and calmness showed a clear 24-h circadian pattern but with a phase delay of 4 h. Calmness showed a biphasic (12-h) component; mean sleep duration was 6.4 h, structured in 3-5 non-REM/REM cycles. Models of the neurophysiological mechanisms of stress and of the interactions between various physiological and psychological rhythm variables have already been developed with the COSINOR method. [3]
Visualizing and Interacting with Kernelized Data.
Barbosa, A; Paulovich, F V; Paiva, A; Goldenstein, S; Petronetto, F; Nonato, L G
2016-03-01
Kernel-based methods have made substantial progress in recent years, turning out to be an essential mechanism for data classification, clustering, and pattern recognition. The effectiveness of kernel-based techniques, though, depends largely on the capability of the underlying kernel to properly embed data in the feature space associated with the kernel. However, visualizing how a kernel embeds the data in a feature space is not straightforward, as the embedding map and the feature space are implicitly defined by the kernel. In this work, we present a novel technique to visualize the action of a kernel, that is, how the kernel embeds data into a high-dimensional feature space. The proposed methodology relies on a solid mathematical formulation to map kernelized data onto a visual space. Our approach is faster and more accurate than most existing methods while still allowing interactive manipulation of the projection layout, a game-changing trait that other kernel-based projection techniques do not have. PMID:26829242
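As a minimal illustration of the implicit embedding the abstract refers to (not the authors' projection technique), feature-space distances can be recovered directly from kernel values, since ||φ(x) − φ(y)||² = k(x,x) + k(y,y) − 2k(x,y). A sketch with a Gaussian (RBF) kernel:

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def feature_space_distance(x, y, gamma=1.0):
    """Distance between the implicit images phi(x), phi(y) in feature space,
    computed purely from kernel evaluations (the 'kernel trick')."""
    d2 = rbf_kernel(x, x, gamma) + rbf_kernel(y, y, gamma) - 2.0 * rbf_kernel(x, y, gamma)
    return math.sqrt(max(d2, 0.0))

points = [(0.0, 0.0), (0.0, 0.1), (3.0, 3.0)]
# Nearby inputs stay close in feature space; distant inputs approach the
# maximal separation sqrt(2) for a normalized kernel.
d_near = feature_space_distance(points[0], points[1])
d_far = feature_space_distance(points[0], points[2])
```

A visualization method like the one described must lay out points so that these implicit distances are faithfully reflected on screen.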
NASA Astrophysics Data System (ADS)
Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier
2015-12-01
Random walk particle tracking methods are a computationally efficient family of methods for solving reactive transport problems. While the number of particles in most realistic applications is on the order of 10^6-10^9, the number of reactive molecules even in dilute systems might be on the order of fractions of Avogadro's number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in loss of accuracy, but may also lead to an improper reproduction of the mixing process, which is limited by diffusion. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that allows obtaining the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the location of molecules represented by that given particle, rigorously represented by a kernel density function. The probability of reaction can be obtained by combining the kernels associated with two potentially reactive particles. We demonstrate that the observed deviation in the reaction vs. time curves in numerical experiments reported in the literature could be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process. Our results show that KDEs are powerful tools to improve computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in dilute systems should be modeled based on alternative mechanistic models and not on a
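The core idea, reconstructing a concentration field by spreading each particle over a kernel rather than binning it at a point, can be sketched in a few lines (an illustrative 1-D Gaussian KDE, not the authors' code; particle positions and bandwidth are hypothetical):

```python
import math

def gaussian_kernel_density(x, particles, h):
    """Reconstruct a 1-D normalized concentration at x from particle positions.

    Each particle is treated as a Gaussian kernel of bandwidth h rather than
    a point mass (i.e., variable rather than fixed particle support)."""
    norm = 1.0 / (len(particles) * h * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in particles)

# A small symmetric cloud of tracked particles around the origin.
particles = [0.0, 0.1, -0.1, 0.05, -0.05]
c = gaussian_kernel_density(0.0, particles, h=0.2)
```

In the paper's setting the bandwidth would grow with time in step with the diffusion process, so that each kernel tracks the spreading of the molecules the particle represents.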
Technology Transfer Automated Retrieval System (TEKTRAN)
It would be useful to know the total kernel mass within a given mass of peanuts (mass ratio) while the peanuts are being bought or processed. In this work, the possibility of finding this mass ratio while the peanuts were in their shells was investigated. Capacitance, phase angle and dissipation fa...
ERIC Educational Resources Information Center
Chen, Haiwen; Holland, Paul
2010-01-01
In this paper, we develop a new curvilinear equating for the nonequivalent groups with anchor test (NEAT) design under the assumption of the classical test theory model, which we name curvilinear Levine observed score equating. In fact, by applying both the kernel equating framework and the mean preserving linear transformation of…
NASA Technical Reports Server (NTRS)
Pinson, Robin M.; Schmitt, Terri L.; Hanson, John M.
2008-01-01
Six degree-of-freedom (DOF) launch vehicle trajectories are designed to follow an optimized 3-DOF reference trajectory. A vehicle has a finite amount of control power that it can allocate to performing maneuvers. Therefore, the 3-DOF trajectory must be designed to refrain from using 100% of the allowable control capability to perform maneuvers, saving control power for handling off-nominal conditions, wind gusts and other perturbations. During the Ares I trajectory analysis, two maneuvers were found to be difficult for the control system to implement: a roll maneuver prior to the gravity turn and an angle-of-attack maneuver immediately after the J-2X engine start-up. It was decided to develop an approach for creating smooth maneuvers in the optimized reference trajectories that accounts for the thrust available from the engines. A feature of this method is that no additional angular velocity in the direction of the maneuver remains on the vehicle after the maneuver is complete. This paper discusses the equations behind these new maneuvers and their implementation into the Ares I trajectory design cycle. Also discussed is a possible extension to adjusting closed-loop guidance.
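A standard way to satisfy the "no residual angular velocity" property is a rest-to-rest polynomial blend whose rate and acceleration vanish at both endpoints. The quintic profile below is a generic illustration of that idea, not necessarily the Ares I implementation:

```python
def quintic_maneuver(theta0, theta1, T, t):
    """Attitude angle (deg) during a rest-to-rest maneuver of duration T (s).

    The quintic blend 10s^3 - 15s^4 + 6s^5 has zero first and second
    derivatives at s = 0 and s = 1, so the maneuver adds no residual
    angular velocity or acceleration at completion."""
    s = min(max(t / T, 0.0), 1.0)
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5
    return theta0 + (theta1 - theta0) * blend

def rate(theta0, theta1, T, t, dt=1e-6):
    """Angular rate (deg/s) by central finite difference."""
    return (quintic_maneuver(theta0, theta1, T, t + dt)
            - quintic_maneuver(theta0, theta1, T, t - dt)) / (2 * dt)
```

The peak rate occurs at mid-maneuver, so checking it against available control torque is how such a profile would be bounded by the thrust available from the engines.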
NASA Astrophysics Data System (ADS)
Bleiziffer, Patrick; Krug, Marcel; Görling, Andreas
2015-06-01
A self-consistent Kohn-Sham method based on the adiabatic-connection fluctuation-dissipation (ACFD) theorem, employing the frequency-dependent exact exchange kernel f_x is presented. The resulting SC-exact-exchange-only (EXX)-ACFD method leads to even more accurate correlation potentials than those obtained within the direct random phase approximation (dRPA). In contrast to dRPA methods, not only the Coulomb kernel but also the exact exchange kernel f_x is taken into account in the EXX-ACFD correlation which results in a method that, unlike dRPA methods, is free of self-correlations, i.e., a method that treats exactly all one-electron systems, like, e.g., the hydrogen atom. The self-consistent evaluation of EXX-ACFD total energies improves the accuracy compared to EXX-ACFD total energies evaluated non-self-consistently with EXX or dRPA orbitals and eigenvalues. Reaction energies of a set of small molecules, for which highly accurate experimental reference data are available, are calculated and compared to quantum chemistry methods like Møller-Plesset perturbation theory of second order (MP2) or coupled cluster methods [CCSD, coupled cluster singles, doubles, and perturbative triples (CCSD(T))]. Moreover, we compare our methods to other ACFD variants like dRPA combined with perturbative corrections such as the second order screened exchange corrections or a renormalized singles correction. Similarly, the performance of our EXX-ACFD methods is investigated for the non-covalently bonded dimers of the S22 reference set and for potential energy curves of noble gas, water, and benzene dimers. The computational effort of the SC-EXX-ACFD method exhibits the same scaling of N^5 with respect to the system size N as the non-self-consistent evaluation of only the EXX-ACFD correlation energy; however, the prefactor increases significantly. Reaction energies from the SC-EXX-ACFD method deviate quite little from EXX-ACFD energies obtained non-self-consistently with dRPA orbitals
NASA Astrophysics Data System (ADS)
Altaç, Zekeriya; Tekkalmaz, Mesut
2013-11-01
In this study, a nodal method based on the synthetic kernel (SKN) approximation is developed for solving the radiative transfer equation (RTE) in one- and two-dimensional Cartesian geometries. The RTE for a two-dimensional node is transformed into a one-dimensional RTE based on face-averaged radiation intensity. At the node interfaces, a double P1 expansion is applied to the surface angular intensities with the isotropic transverse leakage assumption. The one-dimensional radiative integral transfer equation (RITE) is obtained in terms of the node-face-averaged incoming/outgoing incident energy and partial heat fluxes. The synthetic kernel approximation is applied to the transfer kernels and nodal-face contributions. The resulting SKN equations are solved analytically. One-dimensional interface-coupling nodal SK1 and SK2 equations (incoming/outgoing incident energy and net partial heat flux) are derived for the small nodal-mesh limit. These equations have simple algebraic and recursive forms which impose little burden on either memory or computational time. The method was applied to one- and two-dimensional benchmark problems, including hot/cold media with transparent/emitting walls. The 2D results are free of ray effects, and the results for geometries of a few mean free paths or more are in excellent agreement with the exact solutions.
Domain transfer multiple kernel learning.
Duan, Lixin; Tsang, Ivor W; Xu, Dong
2012-03-01
Cross-domain learning methods have shown promising results by leveraging labeled patterns from the auxiliary domain to learn a robust classifier for the target domain which has only a limited number of labeled samples. To cope with the considerable change between feature distributions of different domains, we propose a new cross-domain kernel learning framework into which many existing kernel methods can be readily incorporated. Our framework, referred to as Domain Transfer Multiple Kernel Learning (DTMKL), simultaneously learns a kernel function and a robust classifier by minimizing both the structural risk functional and the distribution mismatch between the labeled and unlabeled samples from the auxiliary and target domains. Under the DTMKL framework, we also propose two novel methods by using SVM and prelearned classifiers, respectively. Comprehensive experiments on three domain adaptation data sets (i.e., TRECVID, 20 Newsgroups, and email spam data sets) demonstrate that DTMKL-based methods outperform existing cross-domain learning and multiple kernel learning methods. PMID:21646679
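The "distribution mismatch between the labeled and unlabeled samples" that DTMKL minimizes is commonly measured by the maximum mean discrepancy (MMD) in the kernel's feature space. The sketch below is a biased sample estimate of squared MMD on 1-D toy data, purely for illustration, not the DTMKL algorithm itself:

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian kernel on scalars."""
    return math.exp(-gamma * (x - y) ** 2)

def mmd_squared(xs, ys, gamma=0.5):
    """Biased estimate of squared maximum mean discrepancy between two samples:
    MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2.0 * kxy

# Similar distributions give a small discrepancy, shifted ones a large one.
same = mmd_squared([0.0, 0.1, -0.1], [0.05, -0.05, 0.0])
far = mmd_squared([0.0, 0.1, -0.1], [5.0, 5.1, 4.9])
```

In a transfer-learning setting, xs and ys would be auxiliary-domain and target-domain samples, and the learned kernel would be chosen to keep this quantity small while the classifier's structural risk stays low.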
NASA Astrophysics Data System (ADS)
Zouch, Wassim; Slima, Mohamed Ben; Feki, Imed; Derambure, Philippe; Taleb-Ahmed, Abdelmalik; Hamida, Ahmed Ben
2010-12-01
A new nonparametric method, based on the smooth weighted-minimum-norm (WMN) focal underdetermined-system solver (FOCUSS), for electrical cerebral activity localization using electroencephalography measurements is proposed. This method iteratively adjusts the spatial sources by reducing the size of the lead-field and the weighting matrix. Thus, an enhancement of source localization is obtained, as well as a reduction of the computational complexity. The performance of the proposed method, in terms of localization errors, robustness, and computation time, is compared with the WMN-FOCUSS and nonshrinking smooth WMN-FOCUSS methods as well as with standard generalized inverse methods (unweighted minimum norm, WMN, and FOCUSS). Simulation results for single-source localization confirm the effectiveness and robustness of the proposed method with respect to the reconstruction accuracy of a simulated single dipole.
ERIC Educational Resources Information Center
Kolen, Michael J.; And Others
Six methods for smoothing double-entry expectancy tables (tables that relate two predictor variables to probability of attaining a selected level of success on a criterion) were compared using data for entering students at 85 colleges and universities. ACT composite scores and self-reported high school grade averages were used to construct…
ERIC Educational Resources Information Center
Perrin, David W.; Whitney, Douglas R.
Six methods for smoothing expectancy tables were compared using data for entering students at 86 colleges and universities. Linear regression analyses were applied to ACT scores and high school grades to obtain predicted first term grade point averages (FGPA's) for students entering each institution in 1969-70. Expectancy tables were constructed…
Mazza, G; Roßmanith, E; Lang-Olip, I; Pfeiffer, D
2016-08-01
Even though umbilical cord arteries are a common source of vascular smooth muscle cells, the lack of reliable marker profiles has hampered the isolation of human umbilical artery smooth muscle cells (HUASMC). For accurate characterization of HUASMC and the cells in their environment, the expression of smooth muscle and mesenchymal markers was analyzed in umbilical cord tissue sections. The resulting marker profile was then used to evaluate the quality of HUASMC isolation and culture methods. HUASMC and perivascular Wharton's jelly stromal cells (pv-WJSC) showed positive staining for α-smooth muscle actin (α-SMA), smooth muscle myosin heavy chain (SM-MHC), desmin, vimentin and CD90. Anti-CD10 stained only pv-WJSC. Consequently, HUASMC could be characterized as α-SMA+, SM-MHC+, CD10- cells, which are additionally negative for endothelial markers (CD31 and CD34). Enzymatic isolation provided primary HUASMC batches with 90-99 % purity, yet under standard culture conditions contaminant CD10+ cells rapidly constituted more than 80 % of the total cell population. Contamination was mainly due to the poor adhesion of HUASMC to cell culture plates, regardless of the different protein coatings (fibronectin, collagen I or gelatin). HUASMC showed strong attachment and long-term viability only in 3D matrices. The explant isolation method achieved cultures with only 13-40 % purity, with considerable contamination by CD10+ cells. CD10+ cells showed spindle-like morphology and up-regulated expression of α-SMA and SM-MHC upon culture in smooth muscle differentiation medium. Considering the high contamination risk of HUASMC cultures by neighboring CD10+ cells and their phenotypic similarities, precise characterization is mandatory to avoid misleading results. PMID:25535117
Kernel Partial Least Squares for Nonlinear Regression and Discrimination
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate the usefulness of the method.
Induced Pluripotent Stem Cell-derived Vascular Smooth Muscle Cells: Methods and Application
Dash, Biraja C.; Jiang, Zhengxin; Suh, Carol; Qyang, Yibing
2015-01-01
Vascular smooth muscle cells (VSMCs) play a major role in the pathophysiology of cardiovascular diseases. The advent of induced pluripotent stem cell (iPSC) technology and its capability to differentiate into virtually every cell type in the human body make this field promising for vascular regenerative therapy and for understanding disease mechanisms. In this review, we first discuss recent iPSC technology and vascular smooth muscle development from the embryo, and then examine different methodologies to derive VSMCs from iPSCs and their applications in regenerative therapy and disease modeling. PMID:25559088
Methods and energy storage devices utilizing electrolytes having surface-smoothing additives
Xu, Wu; Zhang, Jiguang; Graff, Gordon L; Chen, Xilin; Ding, Fei
2015-11-12
Electrodeposition and energy storage devices utilizing an electrolyte having a surface-smoothing additive can result in self-healing, instead of self-amplification, of initial protuberant tips that give rise to roughness and/or dendrite formation on the substrate and anode surface. For electrodeposition of a first metal (M1) on a substrate or anode from one or more cations of M1 in an electrolyte solution, the electrolyte solution is characterized by a surface-smoothing additive containing cations of a second metal (M2), wherein cations of M2 have an effective electrochemical reduction potential in the solution lower than that of the cations of M1.
NASA Astrophysics Data System (ADS)
Pan, Wenxiao; Bao, Jie; Tartakovsky, Alexandre
2013-11-01
A Continuous Boundary Force (CBF) method was developed for implementing the Robin (Navier) boundary condition (BC), which can describe no-slip or slip conditions (slip length from zero to infinity) at the fluid-solid interface. In the CBF method the Robin BC is replaced by a homogeneous Neumann BC and an additional volumetric source term in the governing momentum equation. The formulation is derived based on an approximation of the sharp boundary with a diffuse interface of finite thickness, across which the BC is reformulated by means of a smoothed characteristic function. The CBF method is easy to implement in Lagrangian particle-based methods. We first implemented it in smoothed particle hydrodynamics (SPH) to solve numerically the Navier-Stokes equations subject to spatially independent or dependent Robin BCs in two and three dimensions. The numerical accuracy and convergence are examined through comparisons with the corresponding finite difference or finite element solutions. The CBF method is further implemented in smoothed dissipative particle dynamics (SDPD), a mesoscale scheme, for modeling slip flows commonly found in micro/nano channels and microfluidic devices. The authors acknowledge the funding support by the ASCR Program of the Office of Science, U.S. Department of Energy.
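The diffuse-interface reformulation hinges on a smoothed characteristic function that ramps from 0 (fluid) to 1 (solid) across a finite interface thickness. One common choice is the cosine-type smoothed Heaviside shown below; it is used here purely for illustration and is not necessarily the form used in the paper:

```python
import math

def smoothed_characteristic(d, eps):
    """Smoothed characteristic function of the solid region.

    d: signed distance to the fluid-solid interface (positive inside the solid);
    eps: half-thickness of the diffuse interface. Outside the band |d| < eps
    the function equals the sharp indicator; inside it ramps smoothly."""
    if d <= -eps:
        return 0.0
    if d >= eps:
        return 1.0
    return 0.5 * (1.0 + d / eps + math.sin(math.pi * d / eps) / math.pi)
```

In a CBF-style scheme, the gradient of such a function concentrates the volumetric source term in a thin band around the boundary, which is what replaces the sharp Robin condition.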
NASA Astrophysics Data System (ADS)
Szczesna, Dorota H.; Kulas, Zbigniew; Kasprzak, Henryk T.; Stenevi, Ulf
2009-11-01
A lateral shearing interferometer was used to examine the smoothness of the tear film. Information about the distribution and stability of the precorneal tear film is carried by the wavefront reflected from the surface of the tears and encoded in interference fringes. Smooth and regular fringes indicate a smooth tear film surface. On corneae after laser in situ keratomileusis (LASIK) or radial keratotomy (RK) surgery, the interference fringes are seldom regular. The fringes are bent along bright lines, which are interpreted as tear film breakups. The high-intensity pattern seems to appear in a similar location on the corneal surface after refractive surgery. Our purpose was to extract information about the pattern existing under the interference fringes and calculate its shape reproducibility over time and following eye blinks. A low-pass filter was applied and a correlation coefficient was calculated to compare a selected fragment of the template image to each of the following frames in the recorded sequence. High values of the correlation coefficient suggest that irregularities of the corneal epithelium might influence tear film instability and that tear film breakup may be associated with local irregularities of the corneal topography created by the LASIK and RK surgeries.
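The frame-to-template comparison described above reduces to a correlation coefficient over pixel intensities. A minimal version for flattened image fragments (illustrative; the fragment values are hypothetical and this is not the authors' processing pipeline):

```python
import math

def correlation(a, b):
    """Pearson correlation between two equally sized image fragments,
    given as flat lists of pixel intensities. Invariant to brightness
    offsets and contrast scaling, so it compares pattern shape only."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

template = [10, 20, 30, 40]
same_pattern = [12, 22, 32, 42]   # same structure, shifted brightness
different = [40, 10, 35, 15]      # unrelated structure
```

A sequence of high correlation values across frames and blinks would indicate, as in the study, that the underlying high-intensity pattern recurs in the same place.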
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
Tan, Stéphanie; Soulez, Gilles; Diez Martinez, Patricia; Larrivée, Sandra; Stevens, Louis-Mathieu; Goussard, Yves; Mansour, Samer; Chartrand-Lefebvre, Carl
2016-01-01
Purpose Metallic artifacts can result in an artificial thickening of the coronary stent wall, which can significantly impair computed tomography (CT) imaging in patients with coronary stents. The objective of this study is to assess in vivo visualization of the coronary stent wall and lumen with an edge-enhancing CT reconstruction kernel, as compared to a standard kernel. Methods This is a prospective cross-sectional study involving the assessment of 71 coronary stents (24 patients), with blinded observers. After 256-slice CT angiography, image reconstruction was done with medium-smooth and edge-enhancing kernels. Stent wall thickness was measured with both orthogonal and circumference methods, averaging thickness from diameter and circumference measurements, respectively. Image quality was assessed quantitatively using objective parameters (noise, signal-to-noise (SNR) and contrast-to-noise (CNR) ratios), as well as visually using a 5-point Likert scale. Results Stent wall thickness was decreased with the edge-enhancing kernel in comparison to the standard kernel, either with the orthogonal (0.97 ± 0.02 versus 1.09 ± 0.03 mm, respectively; p<0.001) or the circumference method (1.13 ± 0.02 versus 1.21 ± 0.02 mm, respectively; p = 0.001). The edge-enhancing kernel generated less overestimation from nominal thickness compared to the standard kernel, both with the orthogonal (0.89 ± 0.19 versus 1.00 ± 0.26 mm, respectively; p<0.001) and the circumference (1.06 ± 0.26 versus 1.13 ± 0.31 mm, respectively; p = 0.005) methods. The edge-enhancing kernel was associated with lower SNR and CNR, as well as higher background noise (all p < 0.001), in comparison to the medium-smooth kernel. Stent visual scores were higher with the edge-enhancing kernel (p<0.001). Conclusion In vivo 256-slice CT assessment of coronary stents shows that the edge-enhancing CT reconstruction kernel generates thinner stent walls, less overestimation from nominal thickness, and better image quality.
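For a circular cross-section, the two measurement conventions reduce to simple geometry: thickness from diameters versus thickness from circumferences. This is one plausible reading of the protocol (the study's exact averaging over observers and views is not shown), sketched for illustration:

```python
import math

def thickness_orthogonal(d_outer, d_inner):
    """Wall thickness (mm) from orthogonal outer/inner diameter measurements."""
    return (d_outer - d_inner) / 2.0

def thickness_circumference(c_outer, c_inner):
    """Wall thickness (mm) from outer/inner circumference measurements."""
    return (c_outer - c_inner) / (2.0 * math.pi)

# For a perfectly circular stent the two conventions agree exactly.
t1 = thickness_orthogonal(4.0, 2.0)
t2 = thickness_circumference(4.0 * math.pi, 2.0 * math.pi)
```

On real, blooming-affected images the two estimates diverge, which is why the study reports both.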
Performance Assessment of Kernel Density Clustering for Gene Expression Profile Data
Zeng, Beiyan; Chen, Yiping P.; Smith, Oscar H.
2003-01-01
Kernel density smoothing techniques have been used in classification or supervised learning of gene expression profile (GEP) data, but their applications to clustering or unsupervised learning of those data have not been explored and assessed. Here we report a kernel density clustering method for analysing GEP data and compare its performance with the three most widely used clustering methods: hierarchical clustering, K-means clustering, and multivariate mixture model-based clustering. Using several methods to measure agreement, between-cluster isolation, and within-cluster coherence, such as the Adjusted Rand Index, the pseudo-F test, the r^2 test, and the profile plot, we have assessed the effectiveness of kernel density clustering for recovering clusters, and its robustness against noise, on both simulated and real GEP data. Our results show that the kernel density clustering method has excellent performance in recovering clusters from simulated data and in grouping large real expression profile data sets into compact and well-isolated clusters, and that it is the most robust clustering method for analysing noisy expression profile data compared to the other three methods assessed. PMID:18629292
How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.
Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J
2014-09-01
Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PMID:24885339
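Among the compared selectors, Silverman's rule of thumb has a simple closed form, h = 0.9 · min(σ̂, IQR/1.34) · n^(−1/5). A sketch using only the standard library (the article's exact quartile convention may differ):

```python
import statistics

def silverman_bandwidth(data):
    """Silverman's rule-of-thumb bandwidth for Gaussian KDE:
    h = 0.9 * min(sample sd, IQR / 1.34) * n ** (-1/5)."""
    n = len(data)
    sd = statistics.stdev(data)
    q1, _, q3 = statistics.quantiles(data, n=4)  # quartile cut points
    iqr = q3 - q1
    return 0.9 * min(sd, iqr / 1.34) * n ** (-0.2)

# Bandwidth shrinks as the sample grows (at the n^(-1/5) rate),
# so larger samples get less smoothing.
h100 = silverman_bandwidth([float(i % 5) for i in range(100)])
h1000 = silverman_bandwidth([float(i % 5) for i in range(1000)])
```

The min(sd, IQR/1.34) term is what makes the rule somewhat robust to heavy tails, though, as the simulations above indicate, it still oversmooths strongly non-normal shapes compared with Sheather-Jones or adaptive estimators.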
NASA Astrophysics Data System (ADS)
Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle
2016-08-01
Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N^2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded widespread adoption of integral methods for the solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.
Adaptive Wiener image restoration kernel
Yuan, Ding
2007-06-05
A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system's optical transfer function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image with a Wiener restoration kernel.
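In the frequency domain the Wiener restoration kernel is W = H* / (|H|² + K), where H is the optical transfer function and K a noise-to-signal power ratio. The sketch below applies it frequency by frequency to a toy spectrum (an illustration of the classical Wiener filter, not the patented device's implementation):

```python
def wiener_restore(G, H, nsr):
    """Apply the Wiener restoration kernel frequency by frequency.

    G: observed image spectrum, H: optical transfer function (both as
    lists of complex numbers), nsr: noise-to-signal power ratio."""
    return [g * h.conjugate() / (abs(h) ** 2 + nsr) for g, h in zip(G, H)]

# In the noise-free limit (nsr = 0) the filter inverts the blur exactly:
# the observed spectrum is G = H * F, and restoration returns F.
F = [1 + 0j, 2 + 1j, 0.5 - 0.5j]          # true object spectrum
H = [1 + 0j, 0.8 + 0j, 0.5 + 0j]          # blur transfer function
G = [f * h for f, h in zip(F, H)]         # observed (blurred) spectrum
F_hat = wiener_restore(G, H, nsr=0.0)
```

With a nonzero nsr the kernel attenuates frequencies where the blur (and hence the signal) is weak, which is what keeps noise from being amplified; an adaptive variant would vary nsr with local image statistics.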
NASA Astrophysics Data System (ADS)
Élie-Dit-Cosaque, Xavier J.-G.; Gakwaya, Augustin; Naceur, Hakim
2015-01-01
A smoothed finite element method formulation for the resultant eight-node solid-shell element is presented in this paper for geometrically linear analysis. The smoothing process is successfully performed on the element mid-surface to deal with the membrane and bending effects of the stiffness matrix. The strain smoothing process allows replacing the Cartesian derivatives of shape functions with the product of shape functions and normal vectors to the element mid-surface boundaries. The present formulation remains competitive when compared to classical finite element formulations since no inverse of the Jacobian matrix is calculated. The three-dimensional resultant shell theory allows the element kinematics to be defined with the displacement degrees of freedom only. The assumed natural strain method is used not only to eliminate the transverse shear locking problem encountered in thin-walled structures, but also to reduce trapezoidal effects. The efficiency of the present element is demonstrated and compared with that of standard solid-shell elements through various benchmark problems, including some with highly distorted meshes.
Simulation of wave mitigation by coastal vegetation using smoothed particle hydrodynamics method
NASA Astrophysics Data System (ADS)
Iryanto; Gunawan, P. H.
2016-02-01
Wave mitigation by vegetation in coastal areas has recently been studied by several researchers. Vegetation forests in coastal areas minimize the negative impact of wave propagation. In order to describe the effect of vegetation resistance on the water flow, a modified smoothed particle hydrodynamics model has been constructed. In the Lagrangian framework, Darcy, Manning, and laminar viscosity resistance terms are added. The effect of each resistance term is shown in several numerical simulations. A simulation of wave mitigation on a sloping beach is also given.
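The resistance terms enter the momentum equation as drag forces on each particle: Darcy drag is linear in velocity, while Manning friction is quadratic (the standard shallow-water form g·n²·u|u|/h^(4/3)). The explicit velocity update below is a sketch under these standard forms, with hypothetical coefficients; it is not the paper's numerical scheme:

```python
def decelerate(u, dt, c_darcy=0.0, n_manning=0.0, depth=1.0, g=9.81):
    """One explicit step of velocity damping by vegetation resistance.

    Darcy term:   -c_darcy * u              (linear drag)
    Manning term: -g * n^2 * u*|u| / h^(4/3) (quadratic bed/vegetation friction)
    """
    drag = c_darcy * u + g * n_manning ** 2 * u * abs(u) / depth ** (4.0 / 3.0)
    return u - dt * drag

# A fluid parcel entering the vegetated zone at 2 m/s slows down
# monotonically over 5 s of simulated time.
u = 2.0
for _ in range(100):
    u = decelerate(u, dt=0.05, c_darcy=0.5, n_manning=0.03)
```

In an SPH implementation such a drag term would be evaluated per particle inside the vegetated region, alongside the pressure and viscosity forces.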
Bleiziffer, Patrick; Krug, Marcel; Görling, Andreas
2015-06-28
A self-consistent Kohn-Sham method based on the adiabatic-connection fluctuation-dissipation (ACFD) theorem, employing the frequency-dependent exact exchange kernel f_x, is presented. The resulting SC-exact-exchange-only (EXX)-ACFD method leads to even more accurate correlation potentials than those obtained within the direct random phase approximation (dRPA). In contrast to dRPA methods, not only the Coulomb kernel but also the exact exchange kernel f_x is taken into account in the EXX-ACFD correlation, which results in a method that, unlike dRPA methods, is free of self-correlations, i.e., a method that treats exactly all one-electron systems, e.g., the hydrogen atom. The self-consistent evaluation of EXX-ACFD total energies improves the accuracy compared to EXX-ACFD total energies evaluated non-self-consistently with EXX or dRPA orbitals and eigenvalues. Reaction energies of a set of small molecules, for which highly accurate experimental reference data are available, are calculated and compared to quantum chemistry methods such as Møller-Plesset perturbation theory of second order (MP2) or coupled cluster methods [CCSD; coupled cluster singles, doubles, and perturbative triples, CCSD(T)]. Moreover, we compare our methods to other ACFD variants, such as dRPA combined with perturbative corrections like the second-order screened exchange correction or a renormalized singles correction. Similarly, the performance of our EXX-ACFD methods is investigated for the non-covalently bonded dimers of the S22 reference set and for potential energy curves of noble gas, water, and benzene dimers. The computational effort of the SC-EXX-ACFD method exhibits the same scaling of N^5 with respect to the system size N as the non-self-consistent evaluation of only the EXX-ACFD correlation energy; however, the prefactor increases significantly. Reaction energies from the SC-EXX-ACFD method deviate little from EXX-ACFD energies obtained non-self-consistently with dRPA orbitals.
A smooth dissipative particle dynamics method for domains with arbitrary-geometry solid boundaries
NASA Astrophysics Data System (ADS)
Gatsonis, Nikolaos A.; Potami, Raffaele; Yang, Jun
2014-01-01
A smooth dissipative particle dynamics method with dynamic virtual particle allocation (SDPD-DV) for modeling and simulation of mesoscopic fluids in wall-bounded domains is presented. The physical domain in SDPD-DV may contain external and internal solid boundaries of arbitrary geometries, periodic inlets and outlets, and the fluid region. The SDPD-DV method is realized with fluid particles, boundary particles, and dynamically allocated virtual particles. The internal or external solid boundaries of the domain can be of arbitrary geometry and are discretized with a surface grid. These boundaries are represented by boundary particles with assigned properties. The fluid domain is discretized with fluid particles of constant mass and variable volume. Conservative and dissipative force models due to virtual particles exerted on a fluid particle in the proximity of a solid boundary supplement the original SDPD formulation. The dynamic virtual particle allocation approach provides the density and the forces due to virtual particles. The integration of the SDPD equations is accomplished with a velocity-Verlet algorithm for the momentum and a Runge-Kutta for the entropy equation. The velocity integrator is supplemented by a bounce-forward algorithm in cases where the virtual particle force model is not able to prevent particle penetration. For the incompressible isothermal systems considered in this work, the pressure of a fluid particle is obtained by an artificial compressibility formulation for liquids and the ideal gas law for gases. The self-diffusion coefficient is obtained by an implementation of the generalized Einstein and the Green-Kubo relations. Field properties are obtained by sampling SDPD-DV outputs on a post-processing grid that allows harnessing the particle information on desired spatiotemporal scales. The SDPD-DV method is verified and validated with simulations in bounded and periodic domains that cover the hydrodynamic and mesoscopic regimes for
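The velocity-Verlet update used above for the momentum equation can be sketched generically. The harmonic force law below is a stand-in chosen for illustration, not the SDPD-DV force model, and the bounce-forward boundary supplement is omitted:

```python
import numpy as np

def velocity_verlet_step(x, v, a, accel, dt):
    """One velocity-Verlet step: advance positions, recompute accelerations,
    then advance velocities with the averaged acceleration."""
    x_new = x + v * dt + 0.5 * a * dt ** 2
    a_new = accel(x_new)
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new, a_new

# Stand-in force law (unit-mass harmonic oscillator), NOT the SDPD-DV model.
accel = lambda x: -x
x, v = np.array([1.0]), np.array([0.0])
a = accel(x)
for _ in range(1000):
    x, v, a = velocity_verlet_step(x, v, a, accel, dt=0.01)
# The symplectic integrator keeps the energy 0.5*v**2 + 0.5*x**2 near 0.5.
```

The long-time energy behavior is the reason velocity-Verlet is a common choice for the momentum update in particle methods.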
Users manual for Opt-MS: local methods for simplicial mesh smoothing and untangling
Freitag, L.
1999-07-20
Creating meshes containing good-quality elements is a challenging, yet critical, problem facing computational scientists today. Several researchers have shown that the size of the mesh, the shape of the elements within that mesh, and their relationship to the physical application of interest can profoundly affect the efficiency and accuracy of many numerical approximation techniques. If the application contains anisotropic physics, the mesh can be improved by considering both local characteristics of the approximate application solution and the geometry of the computational domain. If the application is isotropic, regularly shaped elements in the mesh reduce the discretization error, and the mesh can be improved a priori by considering geometric criteria only. The Opt-MS package provides several local node point smoothing techniques that improve elements in the mesh by adjusting grid point locations using geometric criteria. The package is easy to use; only three subroutine calls are required for the user to begin using the software. The package is also flexible; the user may change the technique, function, or dimension of the problem at any time during the mesh smoothing process. Opt-MS is designed to interface with C and C++ codes, and examples for both two- and three-dimensional meshes are provided.
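As a hedged illustration of the simplest technique in this family of local node-point smoothers, here is a generic Laplacian smoothing sketch; the function name and data layout are assumptions for illustration, not the Opt-MS API:

```python
import numpy as np

def laplacian_smooth(points, neighbors, fixed, iters=10):
    """Move each free node toward the centroid of its mesh neighbors
    (classic Laplacian smoothing); boundary nodes in `fixed` stay put."""
    pts = points.copy()
    for _ in range(iters):
        new = pts.copy()
        for i, nbrs in neighbors.items():
            if i in fixed:
                continue
            new[i] = pts[list(nbrs)].mean(axis=0)
        pts = new
    return pts

# Tiny 2D example: a perturbed interior node surrounded by four fixed corners.
pts = np.array([[0, 0], [2, 0], [2, 2], [0, 2], [0.3, 1.7]], float)
nbrs = {4: [0, 1, 2, 3]}
smoothed = laplacian_smooth(pts, nbrs, fixed={0, 1, 2, 3})
# The interior node moves to the centroid (1, 1) of its neighbors.
```

Optimization-based smoothers such as those in Opt-MS improve on this by maximizing an element-quality measure rather than averaging positions, which avoids the element inversion Laplacian smoothing can cause near concave boundaries.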
On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint
Zhang, Chong; Liu, Yufeng; Wu, Yichao
2015-01-01
For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in an RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can achieve competitive prediction performance in certain situations, and comparable performance in other cases, relative to the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
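As a rough sketch of the ingredients involved (not the authors' estimator or tuning procedure), the quantile check loss, an RBF kernel, and hard thresholding of the kernel coefficients can be written as:

```python
import numpy as np

def pinball_loss(y, f, tau):
    """Quantile (check) loss: tau-weighted absolute residuals."""
    r = y - f
    return np.mean(np.maximum(tau * r, (tau - 1) * r))

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix between row-sample matrices X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sparsify(alpha, threshold):
    """Hard-threshold small kernel coefficients; zeroed entries drop the
    corresponding training points from f(x) = sum_i alpha_i * K(x, x_i)."""
    out = alpha.copy()
    out[np.abs(out) < threshold] = 0.0
    return out
```

Zeroing a coefficient removes that training point from the fitted function's representation, which is the sense in which the sparsity acts on the *data* rather than on feature weights.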
NASA Technical Reports Server (NTRS)
Cunningham, A. M., Jr.
1976-01-01
The theory, results, and user instructions for an aerodynamic computer program are presented. The theory is based on linear lifting surface theory, and the solution method is the kernel function method. The program is applicable to multiple interfering surfaces which may be coplanar or noncoplanar. Local linearization was used to treat nonuniform flow problems without shocks; for cases with embedded shocks, the appropriate boundary conditions were added to account for the flow discontinuities. The data describing nonuniform flow fields must be input from some other source such as an experiment or a finite difference solution. The results are in the form of small linear perturbations about nonlinear flow fields. The method was applied to a wide variety of problems for which it is demonstrated to be significantly superior to the uniform flow method. Program user instructions are given for easy access.
Shanbehzadeh, Jamshid
2014-01-01
Researchers have recently applied an integrative approach to automating medical image segmentation, drawing on the benefits of available methods while eliminating their disadvantages. Intensity inhomogeneity is a challenging and open problem in this area which has received less attention from this approach, and it has considerable effects on segmentation accuracy. This paper proposes a new kernel-based fuzzy level set algorithm using an integrative approach to deal with this problem. It can evolve directly from the initial level set obtained by Gaussian Kernel-Based Fuzzy C-Means (GKFCM), and the controlling parameters of level set evolution are also estimated from the results of GKFCM. Moreover, the proposed algorithm is enhanced with locally regularized evolution based on an image model that describes the composition of real-world images, in which intensity inhomogeneity is assumed to be a component of the image. These improvements make level set manipulation easier and lead to more robust segmentation in the presence of intensity inhomogeneity. The proposed algorithm has valuable benefits including automation, invariance to intensity inhomogeneity, and high accuracy. Performance evaluation of the proposed algorithm was carried out on medical images from different modalities, and the results confirm its effectiveness for medical image segmentation. PMID:24624225
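The GKFCM initialization stage builds on fuzzy c-means. The sketch below implements plain FCM only, with the Gaussian-kernel-induced distance of GKFCM left out for brevity; the data and initial centers are illustrative assumptions:

```python
import numpy as np

def fcm(X, centers, m=2.0, iters=50):
    """Plain fuzzy c-means (fuzzifier m). GKFCM replaces the squared
    Euclidean distance below with a Gaussian-kernel-induced distance."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        # Squared distances from every sample to every center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
        # Memberships: u_ik proportional to d_ik^(-2/(m-1)), rows sum to 1.
        inv = d2 ** (-1.0 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)
        # Centers: membership-weighted means of the samples.
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return centers, u

# Two jittered blobs around (0,0) and (5,5); initial centers are rough guesses.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
centers, u = fcm(X, centers=np.array([[1.0, 1.0], [4.0, 4.0]]))
```

In the paper's pipeline, the converged memberships seed the initial level set and its controlling parameters; here they simply recover the two blob centers.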
An analysis of smoothed particle hydrodynamics
Swegle, J.W.; Attaway, S.W.; Heinstein, M.W.; Mello, F.J.; Hicks, D.L.
1994-03-01
SPH (Smoothed Particle Hydrodynamics) is a gridless Lagrangian technique which is appealing as a possible alternative to numerical techniques currently used to analyze high deformation impulsive loading events. In the present study, the SPH algorithm has been subjected to detailed testing and analysis to determine its applicability in the field of solid dynamics. An important result of the work is a rigorous von Neumann stability analysis which provides a simple criterion for the stability or instability of the method in terms of the stress state and the second derivative of the kernel function. Instability, which typically occurs only for solids in tension, results not from the numerical time integration algorithm, but because the SPH algorithm creates an effective stress with a negative modulus. The analysis provides insight into possible methods for removing the instability. Also, SPH has been coupled into the transient dynamics finite element code PRONTO, and a weighted residual derivation of the SPH equations has been obtained.
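The criterion's dependence on the sign of the kernel's second derivative can be illustrated with the standard cubic spline kernel, a common SPH choice assumed here for concreteness (the paper's analysis applies to general kernels):

```python
import numpy as np

def cubic_spline_W(q, h):
    """Standard 1-D cubic spline SPH kernel with support 2h, where q = r/h."""
    sigma = 2.0 / (3.0 * h)  # 1-D normalization
    q = np.abs(q)
    return sigma * np.where(q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
                            np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))

def d2W(q, h, eps=1e-4):
    """Second derivative of the kernel via central differences."""
    return (cubic_spline_W(q + eps, h) - 2.0 * cubic_spline_W(q, h)
            + cubic_spline_W(q - eps, h)) / eps ** 2

# W'' changes sign at q = 2/3: negative close to the particle, positive
# beyond. The stability analysis flags instability when stress * W'' > 0,
# so tensile stress combined with typical neighbor spacing (q near 1,
# where W'' > 0) produces the tensile instability described above.
```

This is consistent with the abstract's observation that the instability typically appears for solids in tension and stems from the kernel, not from the time integrator.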
Weighted Bergman Kernels and Quantization
NASA Astrophysics Data System (ADS)
Engliš, Miroslav
Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that -log ψ, -log φ are plurisubharmonic, and z ∈ Ω a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion
NASA Astrophysics Data System (ADS)
Zhao, Xujun; Bordas, Stéphane P. A.; Qu, Jianmin
2013-12-01
Interfacial energy plays an important role in the equilibrium morphologies of nanosized microstructures of solid materials due to the high interface-to-volume ratio, and can no longer be neglected as it is in conventional mechanics analysis. When designing nanodevices and seeking to understand the behavior of materials at the nanoscale, this interfacial energy must therefore be taken into account. The present work develops an effective numerical approach by means of a hybrid smoothed extended finite element/level set method to model nanoscale inhomogeneities with the interfacial energy effect, in which the finite element mesh can be completely independent of the interface geometry. The Gurtin-Murdoch surface elasticity model is used to account for the interface stress effect, and the Wachspress interpolants are used for the first time to construct the shape functions in the smoothed extended finite element method. Selected numerical results are presented to study the accuracy and efficiency of the proposed method as well as the equilibrium shapes of misfit particles in elastic solids. The presented results compare very well with those obtained from theoretical solutions and experimental observations, and the computational efficiency of the method is shown to be superior to that of its most advanced competitor.
Difference image analysis: automatic kernel design using information criteria
NASA Astrophysics Data System (ADS)
Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.
2016-03-01
We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.
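A minimal sketch of the model-selection loop, assuming a 1-D circular convolution and Gaussian-likelihood AIC (the paper works with 2-D images and compares several design algorithms and criteria):

```python
import numpy as np

def fit_kernel(ref, target, half_width):
    """Least-squares fit of a discrete convolution kernel of the given
    half-width; design-matrix columns are circular shifts of the reference."""
    shifts = range(-half_width, half_width + 1)
    A = np.column_stack([np.roll(ref, s) for s in shifts])
    k = np.linalg.lstsq(A, target, rcond=None)[0]
    rss = np.sum((A @ k - target) ** 2)
    return k, rss, len(k)

def aic(rss, n_data, n_params):
    """AIC under a Gaussian likelihood: n * log(RSS / n) + 2 * k."""
    return n_data * np.log(rss / n_data) + 2 * n_params

rng = np.random.default_rng(0)
ref = rng.normal(size=200)
# Circularly blurred target with true kernel [0.2, 0.6, 0.2] plus noise.
target = (0.2 * np.roll(ref, 1) + 0.6 * ref + 0.2 * np.roll(ref, -1)
          + 0.01 * rng.normal(size=200))

scores = {}
for hw in range(4):
    _, rss, n_params = fit_kernel(ref, target, hw)
    scores[hw] = aic(rss, n_data=len(target), n_params=n_params)
best = min(scores, key=scores.get)
```

Each delta basis function adds one free parameter, so the information criterion trades fit quality against kernel complexity, which is the paper's mechanism for picking the simplest adequate kernel model.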
Putting Priors in Mixture Density Mercer Kernels
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
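One common construction behind mixture-density kernels is the inner product of mixture-component posteriors, which is symmetric and positive semidefinite and hence a valid Mercer kernel. The toy 1-D version below is a hedged sketch of that idea, not the paper's Bayesian formulation or the AUTOBAYES-generated code:

```python
import numpy as np

def gmm_posteriors(x, means, sigmas, weights):
    """Posterior membership P(c | x) for a 1-D Gaussian mixture."""
    x = np.asarray(x)[:, None]
    dens = (weights * np.exp(-0.5 * ((x - means) / sigmas) ** 2)
            / (sigmas * np.sqrt(2.0 * np.pi)))
    return dens / dens.sum(axis=1, keepdims=True)

def mixture_density_kernel(P):
    """K[i, j] = sum_c P(c|x_i) * P(c|x_j): symmetric PSD, a Mercer kernel."""
    return P @ P.T

# Two well-separated components; pairs of nearby sample points.
x = np.array([-2.0, -1.9, 2.0, 2.1])
P = gmm_posteriors(x, means=np.array([-2.0, 2.0]),
                   sigmas=np.array([1.0, 1.0]), weights=np.array([0.5, 0.5]))
K = mixture_density_kernel(P)
# Points assigned to the same component get kernel values near 1,
# points from different components get values near 0.
```

Because the mixture model is fit to data (and its priors chosen by the analyst), the resulting kernel adapts to the data distribution, which is the property the abstract highlights.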