NASA Astrophysics Data System (ADS)
Du, Peijun; Tan, Kun; Xing, Xiaoshi
2010-12-01
Combining the Support Vector Machine (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in a Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, the SVM faces the bottleneck of kernel parameter selection, which is time-consuming and results in low classification accuracy. The wavelet kernel in an RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing imagery with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to evaluate the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. Compared with some traditional classifiers, including Spectral Angle Mapping (SAM) and Minimum Distance Classification (MDC), as well as an SVM classifier using the Radial Basis Function kernel, the proposed wavelet SVM classifier using the wavelet kernel function in a Reproducing Kernel Hilbert Space noticeably improves classification accuracy.
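As an illustration of the idea, the sketch below trains an SVM with a custom wavelet kernel in scikit-learn. The Morlet-style mother wavelet h(u) = cos(1.75u)·exp(-u²/2), the dilation parameter `a`, and the toy data are illustrative stand-ins (the paper's best results use a Coiflet kernel), not the authors' implementation.

```python
# Minimal sketch: SVM with a product-form wavelet kernel.
import numpy as np
from sklearn.svm import SVC

def wavelet_kernel(X, Y, a=1.0):
    # K(x, y) = prod_i h((x_i - y_i) / a) with a Morlet-style h
    U = (X[:, None, :] - Y[None, :, :]) / a
    H = np.cos(1.75 * U) * np.exp(-0.5 * U ** 2)
    return H.prod(axis=2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # stand-in for per-pixel spectra
y = (X[:, :4].sum(axis=1) > 0).astype(int)

clf = SVC(kernel=lambda A, B: wavelet_kernel(A, B, a=1.5)).fit(X, y)
print("training accuracy:", clf.score(X, y))
```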
1981-07-01
...process is observed over all of (0,1], the reproducing kernel Hilbert space (RKHS) techniques developed by Parzen (1961a, 1961b) may be used to construct... The covariance kernel R for the process (1.1) is the reproducing kernel for a reproducing kernel Hilbert space (RKHS), which will be denoted H(R) (cf. ...). ...(2.6), it is known (cf. Eubank, Smith and Smith (1981a, 1981b)) that: i) H(R) is a Hilbert function space consisting of functions which satisfy, for f ∈ H...
A kernel adaptive algorithm for quaternion-valued inputs.
Paul, Thomas K; Ogunfunmi, Tokunbo
2015-10-01
The use of quaternion data can provide benefits in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefits of the Quat-KLMS and its widely linear forms in learning nonlinear transformations of quaternion data are illustrated with simulations.
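A minimal kernel LMS loop, which Quat-KLMS generalizes, looks as follows. This sketch works with real vectors and a Gaussian kernel; the quaternion version would replace the arithmetic with quaternion algebra and derive the update via the modified HR calculus. The step size and kernel width are illustrative.

```python
# Minimal KLMS sketch: each sample becomes a kernel center,
# with coefficient proportional to the instantaneous error.
import numpy as np

def gauss(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
d = np.sin(X).sum(axis=1) + 0.05 * rng.normal(size=300)  # nonlinear target

eta, centers, alphas = 0.5, [], []
for x, dn in zip(X, d):
    yhat = sum(a * gauss(c, x) for c, a in zip(centers, alphas))
    e = dn - yhat           # prediction error drives the update
    centers.append(x)       # new kernel center
    alphas.append(eta * e)  # coefficient = step size times error
print("final error:", e)
```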
Structured functional additive regression in reproducing kernel Hilbert spaces.
Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen
2014-06-01
Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of a data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting the nonlinear additive components has been less studied. In this work, we propose a new regularization framework for structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of functional principal components, which greatly facilitates implementation and theoretical analysis. Selection and estimation are achieved by penalized least squares using a penalty which encourages a sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application.
On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint
Zhang, Chong; Liu, Yufeng; Wu, Yichao
2015-01-01
For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in an RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can achieve competitive prediction performance in certain situations, and comparable performance in other cases, relative to the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
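A rough sketch of the idea follows, assuming a Gaussian kernel and subgradient descent on the pinball (quantile) loss with an l1 penalty on the kernel coefficients to mimic the data sparsity constraint; the quantile level tau, penalty lam, and step size are illustrative choices, not the authors' algorithm or tuning.

```python
# Sparse kernel quantile regression via subgradient descent.
import numpy as np

def gram(X, sigma=1.0):
    D = np.sum((X[:, None] - X[None, :]) ** 2, axis=2)
    return np.exp(-D / (2 * sigma ** 2))

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(150, 1))
y = np.sin(2 * X[:, 0]) + 0.3 * rng.normal(size=150)

K, tau, lam, step = gram(X), 0.5, 1e-3, 0.05
alpha = np.zeros(150)
for _ in range(500):
    r = y - K @ alpha                        # residuals
    g_pin = np.where(r > 0, -tau, 1 - tau)   # pinball subgradient wrt f
    grad = K @ g_pin / len(y) + lam * np.sign(alpha)
    alpha -= step * grad
print("nonzero coefficients:", np.count_nonzero(np.abs(alpha) > 1e-4))
```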
Limitations of shallow nets approximation.
Lin, Shao-Bo
2017-10-01
In this paper, we aim to analyze the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets is realized for all functions in balls of a reproducing kernel Hilbert space with high probability, which differs from the classical minimax approximation error estimates. This result, together with the existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets.
Adaptive learning in complex reproducing kernel Hilbert spaces employing Wirtinger's subgradients.
Bouboulis, Pantelis; Slavakis, Konstantinos; Theodoridis, Sergios
2012-03-01
This paper presents a wide framework for nonlinear online supervised learning tasks in the context of complex-valued signal processing. The (complex) input data are mapped into a complex reproducing kernel Hilbert space (RKHS), where the learning phase takes place. Both pure complex kernels and real kernels (via the complexification trick) can be employed. Moreover, any convex, continuous, and not necessarily differentiable function can be used to measure the loss between the output of the specific system and the desired response. The only requirement is that the subgradient of the adopted loss function be available in analytic form. In order to derive the subgradients analytically, the principles of the recently developed Wirtinger calculus in complex RKHSs are exploited. Furthermore, both linear and widely linear (in the RKHS) estimation filters are considered. To cope with the problem of increasing memory requirements, which is present in almost all online schemes in RKHSs, a sparsification scheme based on projection onto closed balls has been adopted. We demonstrate the effectiveness of the proposed framework in a nonlinear channel identification task, a nonlinear channel equalization problem, and a quadrature phase shift keying equalization scheme, using both circular and noncircular synthetic signal sources.
NASA Astrophysics Data System (ADS)
Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.
2014-10-01
In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include the linear, polynomial, and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in the classification. The experimental tests show that an SVM classifier with an RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% compared with a linear kernel function, and by up to 1% compared with a 3rd-degree polynomial kernel function.
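The kernel comparison described above can be reproduced in miniature with scikit-learn; the synthetic features below merely stand in for the multi-temporal polarimetric (H/A/alpha) features, and the data generator is an assumption for illustration.

```python
# Compare linear, polynomial and RBF kernels by cross-validated accuracy.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 9))            # e.g., alpha features x 3 dates
y = (np.sin(X[:, 0]) + X[:, 1] * X[:, 2] > 0).astype(int)

for kernel, kw in [("linear", {}), ("poly", {"degree": 3}), ("rbf", {})]:
    acc = cross_val_score(SVC(kernel=kernel, **kw), X, y, cv=5).mean()
    print(f"{kernel:6s} overall accuracy: {acc:.3f}")
```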
Online Pairwise Learning Algorithms.
Ying, Yiming; Zhou, Ding-Xuan
2016-04-01
Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples, among which the most notable are bipartite ranking, metric learning, and AUC maximization. In this letter we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS), which we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works (Kar, Sriperumbudur, Jain, & Karnick, 2013; Wang, Khardon, Pechyony, & Jones, 2012), which require that the iterates be restricted to a bounded domain or that the loss function be strongly convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem that guarantees the almost sure convergence of the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely used kernels in the setting of pairwise learning and illustrate the convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs by their associated integral operators and on probability inequalities for random variables with values in a Hilbert space.
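A hedged sketch of an OPERA-style update: with the least-square pairwise loss, each incoming example is paired with a past one, and the unconstrained RKHS iterate gains two kernel centers per step. The pairing rule and the polynomially decaying step size below are illustrative choices, not the paper's exact scheme.

```python
# Online pairwise learning with a least-square pairwise loss in an RKHS.
import numpy as np

def k(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 2))
y = X[:, 0] ** 2 - X[:, 1]           # target whose differences we learn

centers, coefs = [], []
def f(x):
    return sum(a * k(c, x) for c, a in zip(centers, coefs))

for t in range(1, len(X)):
    j = rng.integers(0, t)           # pair with a past example
    gamma = 0.5 * t ** (-0.5)        # polynomially decaying step size
    err = (f(X[t]) - f(X[j])) - (y[t] - y[j])
    centers += [X[t], X[j]]          # two new kernel centers per step
    coefs += [-gamma * err, gamma * err]
print("last pairwise error:", err)
```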
Single image super-resolution via an iterative reproducing kernel Hilbert space method.
Deng, Liang-Jian; Guo, Weihong; Huang, Ting-Zhu
2016-11-01
Image super-resolution, a process to enhance image resolution, has important applications in satellite imaging, high-definition television, medical imaging, etc. Many existing approaches use multiple low-resolution images to recover one high-resolution image. In this paper, we present an iterative scheme to solve single image super-resolution problems. It recovers a high-quality, high-resolution image from a single low-resolution image without using a training data set. We solve the problem from an image intensity function estimation perspective and assume that the image contains smooth and edge components. We model the smooth components of an image using a thin-plate reproducing kernel Hilbert space (RKHS) and the edges using approximated Heaviside functions. The proposed method is applied to image patches, aiming to reduce computation and storage. Visual and quantitative comparisons with some competitive approaches show the effectiveness of the proposed method.
Study of the convergence behavior of the complex kernel least mean square algorithm.
Paul, Thomas K; Ogunfunmi, Tokunbo
2013-09-01
The complex kernel least mean square (CKLMS) algorithm was recently derived and allows online kernel adaptive learning for complex data. Kernel adaptive methods can be used to find solutions for neural network and machine learning applications. The derivation of CKLMS involved the development of a modified Wirtinger calculus for Hilbert spaces to obtain the cost function gradient. We analyze the convergence of the CKLMS with different kernel forms for complex data. The expressions obtained enable us to generate theory-predicted mean-square error curves that account for the circularity of the complex input signals and its effect on nonlinear learning. Simulations are used to verify the analysis results.
Out-of-Sample Extensions for Non-Parametric Kernel Methods.
Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang
2017-02-01
Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies have been devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information in the data, which may potentially characterize the data similarity better. Kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension to out-of-sample data points and thus cannot be applied to inductive learning. In this paper, we show how to make nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over state-of-the-art parametric kernel methods.
NASA Astrophysics Data System (ADS)
Shiju, S.; Sumitra, S.
2017-12-01
In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, and hence the data modelling problem involves the computation of two decision boundaries, one related to kernel learning and the other to the input data. In our approach, they are found with the aid of a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data, and searching that global RKHS for the function which can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance in comparison with the existing two-stage function approximation formulation of MKL, where the decision functions of kernel learning and input data are found separately using two different cost functions. This is because the single-stage representation enables knowledge transfer between the computation procedures for finding the decision boundaries of kernel learning and input data, which in turn boosts the generalisation capacity of the model.
Towards the Geometry of Reproducing Kernels
NASA Astrophysics Data System (ADS)
Galé, J. E.
2010-11-01
It is shown here how one is naturally led to consider a category whose objects are reproducing kernels of Hilbert spaces, and how in this way a differential geometry for such kernels may be established.
Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)
2002-01-01
We present a novel smoothing approach to non-parametric regression curve fitting. It is based on kernel partial least squares (PLS) regression in a reproducing kernel Hilbert space. Our concern is to apply the methodology to smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities, or points where the desired function changes its curvature is known a priori or can be derived from the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines, and wavelet shrinkage techniques on two generated data sets.
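A compact kernel PLS smoother in the spirit of the method: score vectors are extracted from a centered Gaussian Gram matrix by NIPALS-style deflation, and the fitted curve is the projection of the response onto those scores. The component count and kernel width (the smoothing knobs) are illustrative; the locally-based extension would adapt them to prior knowledge.

```python
# Kernel PLS smoothing of a noisy 1-D signal.
import numpy as np

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 1, 120))
y = np.sin(4 * np.pi * x) + 0.2 * rng.normal(size=120)

D = (x[:, None] - x[None, :]) ** 2
K = np.exp(-D / (2 * 0.05 ** 2))
n = len(x)
C = np.eye(n) - np.ones((n, n)) / n
K, r = C @ K @ C, y - y.mean()          # center kernel and response

T = []
for _ in range(6):                       # number of latent components
    t = K @ r
    t /= np.linalg.norm(t)
    T.append(t)
    P = np.eye(n) - np.outer(t, t)       # deflate kernel and residual
    K, r = P @ K @ P, r - t * (t @ r)
T = np.array(T).T
y_fit = y.mean() + T @ (T.T @ (y - y.mean()))
print("residual std:", np.std(y - y_fit))
```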
NASA Astrophysics Data System (ADS)
Azarnavid, Babak; Parand, Kourosh; Abbasbandy, Saeid
2018-06-01
This article discusses an iterative reproducing kernel method with respect to its effectiveness and capability of solving a fourth-order boundary value problem with nonlinear boundary conditions modeling beams on elastic foundations. Since there is no method of obtaining a reproducing kernel which satisfies nonlinear boundary conditions, standard reproducing kernel methods cannot be used directly to solve boundary value problems with nonlinear boundary conditions, as there is no knowledge about the existence and uniqueness of the solution. The aim of this paper is therefore to construct an iterative method using a combination of the reproducing kernel Hilbert space method and a shooting-like technique to solve the mentioned problems. Error estimation for reproducing kernel Hilbert space methods for nonlinear boundary value problems has yet to be discussed in the literature. In this paper, we present error estimation for the reproducing kernel method for nonlinear boundary value problems, probably for the first time. Some numerical results are given to demonstrate the applicability of the method.
Recio-Spinoso, Alberto; Fan, Yun-Hui; Ruggero, Mario A
2011-05-01
Basilar-membrane responses to white Gaussian noise were recorded using laser velocimetry at basal sites of the chinchilla cochlea with characteristic frequencies near 10 kHz, and first-order Wiener kernels were computed by cross-correlation of the stimuli and the responses. The presence or absence of minimum-phase behavior was explored by fitting the kernels with discrete linear filters with rational transfer functions. Excellent fits to the kernels were obtained with filters whose transfer functions include zeros located outside the unit circle, implying nonminimum-phase behavior. These filters accurately predicted basilar-membrane responses to other noise stimuli presented at the same level as the stimulus used for the kernel computation. Fits with all-pole and other minimum-phase discrete filters were inferior to fits with nonminimum-phase filters. Minimum-phase functions predicted from the amplitude functions of the Wiener kernels by Hilbert transforms differed from the measured phase curves. These results, which suggest that basilar-membrane responses do not have the minimum-phase property, challenge the validity of models of cochlear processing that incorporate minimum-phase behavior.
Aveiro method in reproducing kernel Hilbert spaces under complete dictionary
NASA Astrophysics Data System (ADS)
Mai, Weixiong; Qian, Tao
2017-12-01
The Aveiro Method is a sparse representation method in reproducing kernel Hilbert spaces (RKHS) that gives orthogonal projections in linear combinations of reproducing kernels over uniqueness sets. It, however, suffers from the determination of uniqueness sets in the underlying RKHS. In fact, in general spaces, uniqueness sets are not easy to identify, let alone the convergence speed of the Aveiro Method. To avoid those difficulties we propose a new Aveiro Method based on a dictionary and the matching pursuit idea. In fact, we do more: the new Aveiro Method is related to the recently proposed Pre-Orthogonal Greedy Algorithm (P-OGA), involving completion of a given dictionary. The new method is called the Aveiro Method Under Complete Dictionary (AMUCD). The complete dictionary consists of all directional derivatives of the underlying reproducing kernels. We show that, under the boundary vanishing condition, which holds for the classical Hardy and Paley-Wiener spaces, the complete dictionary enables an efficient expansion of any given element in the Hilbert space. The proposed method reveals new and advanced aspects of both the Aveiro Method and the greedy algorithm.
Fredholm-Volterra Integral Equation with a Generalized Singular Kernel and its Numerical Solutions
NASA Astrophysics Data System (ADS)
El-Kalla, I. L.; Al-Bugami, A. M.
2010-11-01
In this paper, the existence and uniqueness of the solution of the Fredholm-Volterra integral equation (F-VIE), with a generalized singular kernel, are discussed and proved in the space L2(Ω) × C(0,T). The Fredholm integral term (FIT) is considered in position, while the Volterra integral term (VIT) is considered in time. Using a numerical technique, we obtain a system of Fredholm integral equations (SFIEs). This system of integral equations can be reduced to a linear algebraic system (LAS) of equations by using two different methods: the Toeplitz matrix method and the Product Nyström method. Numerical examples are considered when the generalized kernel takes the following forms: the Carleman function, the logarithmic form, the Cauchy kernel, and the Hilbert kernel.
Kernel Partial Least Squares for Nonlinear Regression and Discrimination
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHSs. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate the usefulness of the method.
Li, Kan; Príncipe, José C.
2018-01-01
This paper presents a novel real-time dynamic framework for quantifying time-series structure in spoken words using spikes. Audio signals are converted into multi-channel spike trains using a biologically-inspired leaky integrate-and-fire (LIF) spike generator. These spike trains are mapped into a function space of infinite dimension, i.e., a Reproducing Kernel Hilbert Space (RKHS) using point-process kernels, where a state-space model learns the dynamics of the multidimensional spike input using gradient descent learning. This kernelized recurrent system is very parsimonious and achieves the necessary memory depth via feedback of its internal states when trained discriminatively, utilizing the full context of the phoneme sequence. A main advantage of modeling nonlinear dynamics using state-space trajectories in the RKHS is that it imposes no restriction on the relationship between the exogenous input and its internal state. We are free to choose the input representation with an appropriate kernel, and changing the kernel does not impact the system nor the learning algorithm. Moreover, we show that this novel framework can outperform both traditional hidden Markov model (HMM) speech processing as well as neuromorphic implementations based on spiking neural networks (SNN), yielding accurate and ultra-low power word spotters. As a proof of concept, we demonstrate its capabilities using the benchmark TI-46 digit corpus for isolated-word automatic speech recognition (ASR) or keyword spotting. Compared to HMM using Mel-frequency cepstral coefficient (MFCC) front-end without time-derivatives, our MFCC-KAARMA offered improved performance. For spike-train front-end, spike-KAARMA also outperformed state-of-the-art SNN solutions. Furthermore, compared to MFCCs, spike trains provided enhanced noise robustness in certain low signal-to-noise ratio (SNR) regimes. PMID:29666568
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and regularization error and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
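A sketch of the two-kernel case: writing f = f1 + f2 with a large-scale and a small-scale Gaussian kernel and minimizing the penalized least squares criterion leads to a block linear system in the two coefficient vectors. The widths, penalties, and the particular block system below are an illustrative realization, not the paper's implementation.

```python
# Sum-space regression: wide kernel for low frequencies,
# narrow kernel for high frequencies.
import numpy as np

def gram(x, sigma):
    return np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 1, 150))
y = np.sin(2 * np.pi * x) + 0.3 * np.sin(20 * np.pi * x)

K1, K2 = gram(x, 0.3), gram(x, 0.02)     # large- and small-scale kernels
l1, l2, n = 1e-2, 1e-2, len(x)
A = np.block([[K1 + l1 * np.eye(n), K2],
              [K1, K2 + l2 * np.eye(n)]])
ab = np.linalg.solve(A, np.concatenate([y, y]))
a, b = ab[:n], ab[n:]
y_fit = K1 @ a + K2 @ b                  # f = f1 + f2 in the sum space
print("fit RMSE:", np.sqrt(np.mean((y - y_fit) ** 2)))
```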
NASA Astrophysics Data System (ADS)
Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim
2018-01-01
The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.
Comparing fixed and variable-width Gaussian networks.
Kůrková, Věra; Kainen, Paul C
2014-09-01
The role of the width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-function (RBF) networks, where both widths and centers vary, and Gaussian kernel networks, which have fixed widths but varying centers. The effect of width on functional equivalence, the universal approximation property, and the form of norms in reproducing kernel Hilbert spaces (RKHSs) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while the sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. The embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described, and the growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described.
Generalized time-dependent Schrödinger equation in two dimensions under constraints
NASA Astrophysics Data System (ADS)
Sandev, Trifce; Petreska, Irina; Lenzi, Ervin K.
2018-01-01
We investigate a generalized two-dimensional time-dependent Schrödinger equation on a comb with a memory kernel. A Dirac delta term is introduced in the Schrödinger equation so that the quantum motion along the x-direction is constrained at y = 0. The wave function is analyzed by using Green's function approach for several forms of the memory kernel, which are of particular interest. Closed form solutions for the cases of Dirac delta and power-law memory kernels in terms of Fox H-function, as well as for a distributed order memory kernel, are obtained. Further, a nonlocal term is also introduced and investigated analytically. It is shown that the solution for such a case can be represented in terms of infinite series in Fox H-functions. Green's functions for each of the considered cases are analyzed and plotted for the most representative ones. Anomalous diffusion signatures are evident from the presence of the power-law tails. The normalized Green's functions obtained in this work are of broader interest, as they are an important ingredient for further calculations and analyses of some interesting effects in the transport properties in low-dimensional heterogeneous media.
FAST TRACK COMMUNICATION: General approach to $\mathfrak{SU}(n)$ quasi-distribution functions
NASA Astrophysics Data System (ADS)
Klimov, Andrei B.; de Guise, Hubert
2010-10-01
We propose an operational form for the kernel of a mapping between an operator acting in a Hilbert space of a quantum system with an $\mathfrak{SU}(n)$ symmetry group and its symbol in the corresponding classical phase space. For symmetric irreps of $\mathfrak{SU}(n)$, this mapping is bijective. We briefly discuss complications that will occur in the general case.
[Rapid identification of hogwash oil by using synchronous fluorescence spectroscopy].
Sun, Yan-Hui; An, Hai-Yang; Jia, Xiao-Li; Wang, Juan
2012-10-01
To identify hogwash oil quickly, the characteristic Δλ of hogwash oil was determined by three-dimensional fluorescence spectroscopy with parallel factor analysis, and a model was built using synchronous fluorescence spectroscopy with support vector machines (SVM). The results showed that the characteristic Δλ of hogwash oil was 60 nm. Collecting the original spectra of different samples at the characteristic Δλ of 60 nm, the best model was established when 5 principal components were selected from the original spectra and the radial basis function (RBF) was used as the kernel function; the optimal penalty factor C and kernel parameter g, obtained by grid searching and 6-fold cross validation, were 512 and 0.5, respectively. The discrimination rate of the model was 100% for both training and prediction sets. Thus, synchronous fluorescence spectroscopy is a quick and accurate approach to the identification of hogwash oil.
Tao, Chenyang; Feng, Jianfeng
2016-03-15
Quantifying associations in neuroscience (and many other scientific disciplines) is often challenged by high dimensionality, nonlinearity, and noisy observations. Many classic methods have either poor power or poor scalability on data sets of the same or different scales, such as genetic, physiological, and imaging data. Based on the framework of reproducing kernel Hilbert spaces, we propose a new nonlinear association criterion (NAC) with an efficient numerical algorithm and a p-value approximation scheme. We also present mathematical justification that links the proposed method to related methods such as kernel generalized variance, kernel canonical correlation analysis, and the Hilbert-Schmidt independence criterion. NAC allows the detection of association between arbitrary input domains as long as a characteristic kernel is defined. A MATLAB package is provided to facilitate applications. Extensive simulation examples and four real-world neuroscience examples, including functional MRI causality, calcium imaging, and imaging genetics studies on autism [Brain, 138(5):1382-1393 (2015)] and alcohol addiction [PNAS, 112(30):E4085-E4093 (2015)], are used to benchmark NAC. It demonstrates superior performance over the existing procedures we tested and also yields biologically significant results for the real-world examples. NAC beats its linear counterparts when nonlinearity is present in the data. It also shows more robustness against different experimental setups than its nonlinear counterparts. In this work we present a new and robust statistical approach, NAC, for measuring associations. It could serve as an interesting alternative to existing methods for data sets where nonlinearity and other confounding factors are present.
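The Hilbert-Schmidt independence criterion mentioned above, a close relative of NAC, can be estimated in a few lines; the sketch below uses Gaussian kernels with median-heuristic bandwidths as an illustrative choice and is not the authors' MATLAB package.

```python
# Biased HSIC estimator: trace(K H L H) / (n - 1)^2.
import numpy as np

def gram(X, sigma):
    D = np.sum((X[:, None] - X[None, :]) ** 2, axis=2)
    return np.exp(-D / (2 * sigma ** 2))

def median_bw(X):
    D = np.sqrt(np.sum((X[:, None] - X[None, :]) ** 2, axis=2))
    return np.median(D[D > 0])            # median heuristic bandwidth

def hsic(X, Y):
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    K, L = gram(X, median_bw(X)), gram(Y, median_bw(Y))
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(7)
x = rng.normal(size=(200, 1))
print("dependent:  ", hsic(x, x ** 2))              # nonlinear relation
print("independent:", hsic(x, rng.normal(size=(200, 1))))
```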
On Hilbert-Schmidt norm convergence of Galerkin approximation for operator Riccati equations
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1988-01-01
An abstract approximation framework for the solution of operator algebraic Riccati equations is developed. The approach taken is based on a formulation of the Riccati equation as an abstract nonlinear operator equation on the space of Hilbert-Schmidt operators. Hilbert-Schmidt norm convergence of solutions to generic finite dimensional Galerkin approximations to the Riccati equation to the solution of the original infinite dimensional problem is argued. The application of the general theory is illustrated via an operator Riccati equation arising in the linear-quadratic design of an optimal feedback control law for a 1-D heat/diffusion equation. Numerical results demonstrating the convergence of the associated Hilbert-Schmidt kernels are included.
Generalization Performance of Regularized Ranking With Multiscale Kernels.
Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin
2016-05-01
The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah
2016-01-01
One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods, such as Ridge regression [i.e., the genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression, were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick," exploited by kernel methods in the context of epistatic genetic architectures, over the parametric frameworks used by conventional methods. Several parametric and kernel methods, namely the least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR), and RKHS regression, were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP, and RKHS regression with a Gaussian, Laplacian, polynomial, or ANOVA kernel in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
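As an illustration of the GBLUP-versus-RKHS contrast, the sketch below fits kernel ridge regression with a linear kernel (GBLUP-style) and with a Gaussian kernel on simulated marker data carrying an epistatic interaction; the data generator, penalty, and bandwidth are illustrative assumptions, not the KRMM package.

```python
# GBLUP-style (linear kernel) vs Gaussian-kernel RKHS regression.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(8)
M = rng.integers(0, 3, size=(300, 100)).astype(float)   # SNP codes 0/1/2
y = M[:, 0] - M[:, 1] + 2.0 * (M[:, 2] * M[:, 3]) + rng.normal(size=300)

tr, te = slice(0, 200), slice(200, 300)
for name, model in [("GBLUP (linear)", KernelRidge(kernel="linear", alpha=1.0)),
                    ("RKHS (Gaussian)", KernelRidge(kernel="rbf", gamma=0.01, alpha=1.0))]:
    model.fit(M[tr], y[tr])
    r = np.corrcoef(model.predict(M[te]), y[te])[0, 1]
    print(f"{name}: predictive correlation {r:.2f}")
```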
NASA Astrophysics Data System (ADS)
Kidon, Lyran; Wilner, Eli Y.; Rabani, Eran
2015-12-01
The generalized quantum master equation provides a powerful tool to describe the dynamics in quantum impurity models driven away from equilibrium. Two complementary approaches, one based on Nakajima-Zwanzig-Mori time-convolution (TC) and the other on the Tokuyama-Mori time-convolutionless (TCL) formulations provide a starting point to describe the time-evolution of the reduced density matrix. A key in both approaches is to obtain the so called "memory kernel" or "generator," going beyond second or fourth order perturbation techniques. While numerically converged techniques are available for the TC memory kernel, the canonical approach to obtain the TCL generator is based on inverting a super-operator in the full Hilbert space, which is difficult to perform and thus, nearly all applications of the TCL approach rely on a perturbative scheme of some sort. Here, the TCL generator is expressed using a reduced system propagator which can be obtained from system observables alone and requires the calculation of super-operators and their inverse in the reduced Hilbert space rather than the full one. This makes the formulation amenable to quantum impurity solvers or to diagrammatic techniques, such as the nonequilibrium Green's function. We implement the TCL approach for the resonant level model driven away from equilibrium and compare the time scales for the decay of the generator with that of the memory kernel in the TC approach. Furthermore, the effects of temperature, source-drain bias, and gate potential on the TCL/TC generators are discussed.
Adaptive multiregression in reproducing kernel Hilbert spaces: the multiaccess MIMO channel case.
Slavakis, Konstantinos; Bouboulis, Pantelis; Theodoridis, Sergios
2012-02-01
This paper introduces a wide framework for online, i.e., time-adaptive, supervised multiregression tasks. The problem is formulated in a general infinite-dimensional reproducing kernel Hilbert space (RKHS). In this context, a fairly large number of nonlinear multiregression models fall out as special cases, including the linear case. Any convex, continuous, and not necessarily differentiable function can be used as a loss function in order to quantify the disagreement between the output of the system and the desired response. The only requirement is that the subgradient of the adopted loss function be available in analytic form. To this end, we demonstrate a way to calculate the subgradients of robust loss functions suitable for the multiregression task. As is by now well documented, when dealing with online schemes in RKHSs, the memory keeps increasing with each iteration step. To attack this problem, a simple sparsification strategy is utilized, which leads to an algorithmic scheme of linear complexity with respect to the number of unknown parameters. A convergence analysis of the technique, based on arguments of convex analysis, is also provided. To demonstrate the capacity of the proposed method, the multiregressor is applied to the multiaccess multiple-input multiple-output channel equalization task for a setting with poor resources and unavailable channel information. Numerical results verify the potential of the method when its performance is compared with those of state-of-the-art linear techniques, which, in contrast, use space-time coding, more antenna elements, and full channel information.
NASA Astrophysics Data System (ADS)
Zamzamir, Zamzana; Murid, Ali H. M.; Ismail, Munira
2014-06-01
The numerical solution of a uniquely solvable exterior Riemann-Hilbert problem on a region with corners, at off-corner points, has been explored by discretizing the related integral equation using the Picard iteration method without any modifications to the left-hand side (LHS) and right-hand side (RHS) of the integral equation. The numerical errors for all iterations converge to the required solution. However, for certain problems, this gives lower accuracy. Hence, this paper presents a new numerical approach for the problem by treating the generalized Neumann kernel on the LHS and the function on the RHS of the integral equation. Due to the existence of the corner points, Gaussian quadrature is employed, which avoids the corner points during numerical integration. A numerical example on a test region is presented to demonstrate the effectiveness of this formulation.
Hermite polynomials and quasi-classical asymptotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, S. Twareque, E-mail: twareque.ali@concordia.ca; Engliš, Miroslav, E-mail: englis@math.cas.cz
2014-04-15
We study an unorthodox variant of the Berezin-Toeplitz type of quantization scheme, on a reproducing kernel Hilbert space generated by the real Hermite polynomials and work out the associated quasi-classical asymptotics.
NASA Technical Reports Server (NTRS)
Milman, M. H.
1985-01-01
A factorization approach is presented for deriving approximations to the optimal feedback gain for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the feedback kernels.
Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.
Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli
2016-05-01
Epileptic seizure detection plays an important role in the diagnosis of epilepsy and in reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures in long-term EEG recordings employing log-Euclidean Gaussian kernel-based sparse representation (SR). Unlike traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs, and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over a dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which is faster and more efficient than traditional seizure detection methods.
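The core kernel computation can be sketched directly: each EEG epoch is summarized by an SPD covariance descriptor, and the log-Euclidean Gaussian kernel compares matrix logarithms in Frobenius norm. The epoch sizes, the small ridge term, and sigma below are illustrative, not the paper's settings.

```python
# Log-Euclidean Gaussian kernel between SPD covariance descriptors.
import numpy as np
from scipy.linalg import logm

def cov_descriptor(epoch):
    # epoch: channels x samples; a small ridge keeps the matrix SPD
    return np.cov(epoch) + 1e-6 * np.eye(epoch.shape[0])

def le_gauss_kernel(S1, S2, sigma=1.0):
    d = np.linalg.norm(logm(S1) - logm(S2), "fro")
    return np.exp(-d ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(9)
e1, e2 = rng.normal(size=(8, 256)), rng.normal(size=(8, 256))
S1, S2 = cov_descriptor(e1), cov_descriptor(e2)
print("k(S1, S2) =", le_gauss_kernel(S1, S2))
```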
Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.
Kwak, Nojun
2016-05-20
Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally because an ever-growing kernel matrix must be treated as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centering step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
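A sketch of the (uncentered) projection trick underlying INPT: explicit RKHS coordinates come from an eigendecomposition of the kernel matrix, and a new sample's coordinates follow from its kernel vector against the training set without recomputing the old coordinates. The data, kernel width, and eigenvalue cutoff are illustrative.

```python
# Uncentered nonlinear projection trick with an out-of-sample extension.
import numpy as np

def gram(A, B, sigma=1.0):
    D = np.sum((A[:, None] - B[None, :]) ** 2, axis=2)
    return np.exp(-D / (2 * sigma ** 2))

rng = np.random.default_rng(10)
X = rng.normal(size=(50, 3))
K = gram(X, X)

lam, U = np.linalg.eigh(K)
keep = lam > 1e-10                        # drop numerically null directions
lam, U = lam[keep], U[:, keep]
coords = U * np.sqrt(lam)                 # row i = coordinates of sample i

x_new = rng.normal(size=(1, 3))
k_new = gram(x_new, X)                    # kernel vector vs training data
coords_new = (k_new @ U) / np.sqrt(lam)   # out-of-sample coordinates

# inner products of coordinates should reproduce kernel values
err = np.abs(coords @ coords_new.T - gram(X, x_new)).max()
print("max reconstruction error:", err)
```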
1985-02-01
Here Q denotes the midplane of the plate (assumed to be Lipschitzian) with a smooth boundary, and H(Q) and H(Q) are the Hilbert spaces of... Using a reproducing kernel Hilbert space approach, Weinert et al. [8,9] developed a structural correspondence between spline interpolation and linear...
Reconstruction of Sensory Stimuli Encoded with Integrate-and-Fire Neurons with Random Thresholds
Lazar, Aurel A.; Pnevmatikakis, Eftychios A.
2013-01-01
We present a general approach to the reconstruction of sensory stimuli encoded with leaky integrate-and-fire neurons with random thresholds. The stimuli are modeled as elements of a Reproducing Kernel Hilbert Space. The reconstruction is based on finding a stimulus that minimizes a regularized quadratic optimality criterion. We discuss in detail the reconstruction of sensory stimuli modeled as absolutely continuous functions as well as stimuli with absolutely continuous first-order derivatives. Reconstruction results are presented for stimuli encoded with single as well as a population of neurons. Examples are given that demonstrate the performance of the reconstruction algorithms as a function of threshold variability. PMID:24077610
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sevast'yanov, E A; Sadekova, E Kh
The Bulgarian mathematicians Sendov, Popov, and Boyanov have well-known results on the asymptotic behaviour of the least deviations of 2π-periodic functions in the classes H^ω from trigonometric polynomials in the Hausdorff metric. However, the asymptotics they give are not adequate to detect a difference in, for example, the rate of approximation of functions f whose moduli of continuity ω(f; δ) differ by factors of the form (log(1/δ))^β. Furthermore, a more detailed determination of the asymptotic behaviour by traditional methods becomes very difficult. This paper develops an approach based on using trigonometric snakes as approximating polynomials. The snakes of order n inscribed in the Minkowski δ-neighbourhood of the graph of the approximated function f provide, in a number of cases, the best approximation for f (for the appropriate choice of δ). The choice of δ depends on n and f and is based on constructing polynomial kernels adjusted to the Hausdorff metric and polynomials with special oscillatory properties. Bibliography: 19 titles.
Gradient descent for robust kernel-based regression
NASA Astrophysics Data System (ADS)
Guo, Zheng-Chu; Hu, Ting; Shi, Lei
2018-06-01
In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, and can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on such losses: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
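A hedged sketch of the scheme, assuming the Welsch loss (windowing function G(u) = 1 - exp(-u/2)) as the robust loss: functional gradient descent in a Gaussian RKHS downweights outliers through the factor exp(-r²/(2σ²)), and early stopping is realized as a fixed iteration budget. The scale, step size, and stopping time are illustrative choices.

```python
# Functional gradient descent with a robust Welsch loss in an RKHS.
import numpy as np

rng = np.random.default_rng(11)
x = np.sort(rng.uniform(0, 1, 120))
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=120)
y[::15] += 3.0                             # heavy-tailed outliers

K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.05 ** 2))
alpha, eta, sigma, T = np.zeros(len(x)), 2.0, 0.5, 500
for _ in range(T):                         # early stopping: T iterations
    r = y - K @ alpha
    w = np.exp(-r ** 2 / (2 * sigma ** 2))  # outliers get small weight
    alpha += eta * (w * r) / len(x)        # functional gradient step
clean = np.arange(120) % 15 != 0
print("clean-point RMSE:", np.sqrt(np.mean((y - K @ alpha)[clean] ** 2)))
```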
Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan
2018-02-01
In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization, which iteratively improves the classification accuracy by adaptive data selection. We motivate each part separately and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing, and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed techniques.
Pure endmember extraction using robust kernel archetypoid analysis for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Yang, Gang; Wu, Ke; Li, Weiyue; Zhang, Dianfa
2017-09-01
A robust kernel archetypoid analysis (RKADA) method is proposed to extract pure endmembers from hyperspectral imagery (HSI). The RKADA assumes that each pixel is a sparse linear mixture of all endmembers and that each endmember corresponds to a real pixel in the image scene. First, it improves regular archetypal analysis with a new binary sparse constraint, and the adoption of the kernel function constructs the principal convex hull in an infinite Hilbert space and enlarges the divergences between pairwise pixels. Second, the RKADA transforms the pure endmember extraction problem into an optimization problem by minimizing residual errors with the Huber loss function. The Huber loss function reduces the effects of large noises and outliers in the convergence procedure of RKADA and enhances the robustness of the optimization function. Third, random kernel sinks for fast kernel matrix approximation and a two-stage algorithm for optimizing the initial pure endmembers are utilized to improve its computational efficiency in realistic implementations of RKADA. The optimization equation of RKADA is solved using the block coordinate descent scheme, and the desired pure endmembers are finally obtained. Six state-of-the-art pure endmember extraction methods are employed for comparison with the RKADA on both synthetic and real Cuprite HSI datasets, including three geometrical algorithms, namely vertex component analysis (VCA), alternative volume maximization (AVMAX), and orthogonal subspace projection (OSP), and three matrix factorization algorithms, namely the preconditioning for successive projection algorithm (PreSPA), hierarchical clustering based on rank-two nonnegative matrix factorization (H2NMF), and self-dictionary multiple measurement vector (SDMMV). Experimental results show that the RKADA outperforms all six methods in terms of spectral angle distance (SAD) and root-mean-square error (RMSE). Moreover, the RKADA has short computational times in offline operations and shows significant improvement in identifying pure endmembers for ground objects with smaller spectral differences. Therefore, the RKADA could be an alternative for pure endmember extraction from hyperspectral images.
Direct discriminant locality preserving projection with Hammerstein polynomial expansion.
Chen, Xi; Zhang, Jiashu; Li, Defang
2012-12-01
Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, one can optimize the objective function of DLPP in a reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) a large computational burden; 2) no explicit mapping functions, which results in more computational burden when projecting a new sample into the low-dimensional subspace; and 3) KDLPP cannot obtain the optimal discriminant vectors that would maximally optimize the objective of DLPP. To overcome the weaknesses of KDLPP, in this paper, a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP) is proposed. The proposed HPDDLPP directly implements the objective of DLPP in a high-dimensional second-order Hammerstein polynomial space without matrix inversion, which extracts the optimal discriminant vectors for DLPP without a large computational burden. Compared with some other related classical methods, experimental results for face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kidon, Lyran; The Sackler Center for Computational Molecular and Materials Science, Tel Aviv University, Tel Aviv 69978; Wilner, Eli Y.
2015-12-21
The generalized quantum master equation provides a powerful tool to describe the dynamics in quantum impurity models driven away from equilibrium. Two complementary approaches, one based on the Nakajima–Zwanzig–Mori time-convolution (TC) formulation and the other on the Tokuyama–Mori time-convolutionless (TCL) formulation, provide a starting point to describe the time evolution of the reduced density matrix. A key step in both approaches is to obtain the so-called "memory kernel" or "generator" beyond second- or fourth-order perturbation techniques. While numerically converged techniques are available for the TC memory kernel, the canonical approach to obtaining the TCL generator is based on inverting a super-operator in the full Hilbert space, which is difficult to perform; thus, nearly all applications of the TCL approach rely on a perturbative scheme of some sort. Here, the TCL generator is expressed using a reduced system propagator which can be obtained from system observables alone and requires the calculation of super-operators and their inverses in the reduced Hilbert space rather than the full one. This makes the formulation amenable to quantum impurity solvers or to diagrammatic techniques, such as the nonequilibrium Green's function. We implement the TCL approach for the resonant level model driven away from equilibrium and compare the time scales for the decay of the generator with those of the memory kernel in the TC approach. Furthermore, the effects of temperature, source-drain bias, and gate potential on the TCL/TC generators are discussed.
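The central object, the time-local TCL generator, can be read off from the reduced propagator alone. A minimal numerical sketch, assuming the propagator acting on the vectorized reduced density matrix has been sampled on a uniform time grid:

```python
import numpy as np

def tcl_generator(U, dt):
    """TCL generator R(t) = dU/dt . U(t)^{-1} from the reduced propagator
    U, an array of shape [nt, d, d] on a uniform time grid with step dt.
    The inverse lives in the reduced Hilbert space, so d is small; the
    construction fails at times where U(t) becomes singular."""
    dU = np.gradient(U, dt, axis=0)  # centered finite differences in time
    return np.stack([dU[k] @ np.linalg.inv(U[k]) for k in range(len(U))])
```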
Numerical method for solving the nonlinear four-point boundary value problems
NASA Astrophysics Data System (ADS)
Lin, Yingzhen; Lin, Jinnan
2010-12-01
In this paper, a new reproducing kernel space is constructed in order to solve a class of nonlinear four-point boundary value problems. The exact solution of the linear problem can be expressed in the form of a series, and the approximate solution of the nonlinear problem is given by an iterative formula. Compared with known investigations, the advantages of our method are that the representation of the exact solution is obtained in a new reproducing kernel Hilbert space and that the accuracy of the numerical computation is higher. We also present the convergence theorem, complexity analysis, and error estimation. The performance of the new method is illustrated with several numerical examples.
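The property that makes such series representations possible is the reproducing property, stated here in standard form (the specific kernel space constructed in the paper is not reproduced):

```latex
% Reproducing property of an RKHS H with kernel K:
f(x) = \langle f, K(\cdot, x) \rangle_{H} \quad \text{for all } f \in H,
% so the solution admits a kernel series expansion of the form
u(x) = \sum_{i=1}^{\infty} c_i \, K(x, x_i),
% truncated after finitely many terms for the numerical approximation.
```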
A Primer on Vibrational Ball Bearing Feature Generation for Prognostics and Diagnostics Algorithms
2015-03-01
Atlas-Marks (Cone-Shaped Kernel); Hilbert-Huang Transform … bearing surface and eventually progress to the surface where the material will separate. Also known as pitting, spalling, or flaking. • Wear: normal degradation caused by dirt and foreign particles causing abrasion of the contact surfaces over time, resulting in alterations in the raceway and …
NASA Astrophysics Data System (ADS)
Hsieh, M.; Zhao, L.; Ma, K.
2010-12-01
The finite-frequency approach enables seismic tomography to fully utilize the spatial and temporal distributions of the seismic wavefield to improve resolution. In achieving this goal, one of the most important tasks is to compute efficiently and accurately the (Fréchet) sensitivity kernels of finite-frequency seismic observables, such as traveltime and amplitude, with respect to perturbations of the model parameters. In the scattering-integral approach, the Fréchet kernels are expressed in terms of the strain Green tensors (SGTs), and a pre-established SGT database is necessary to achieve practical efficiency for a three-dimensional reference model in which the SGTs must be calculated numerically. Methods for computing Fréchet kernels for seismic velocities have long been established. In this study, we develop algorithms based on the finite-difference method for calculating Fréchet kernels for the quality factor Qμ and seismic boundary topography. Kernels for the quality factor can be obtained in a way similar to those for seismic velocities with the help of the Hilbert transform; the effects of seismic velocities and the quality factor on either traveltime or amplitude are coupled. Kernels for boundary topography involve the spatial gradient of the SGTs and also exhibit interesting finite-frequency characteristics. Examples of quality factor and boundary topography kernels are shown for a realistic model of the Taiwan region with three-dimensional velocity variation as well as surface and Moho discontinuity topography.
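The Hilbert transform that links the velocity and attenuation (Qμ) kernels is a standard signal-processing step; a minimal sketch of that one ingredient using SciPy (not the authors' finite-difference code):

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_transform(waveform):
    """scipy.signal.hilbert returns the analytic signal u + i*H[u];
    the Hilbert transform of the real waveform is its imaginary part."""
    return np.imag(hilbert(waveform))
```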
Entanglement entropy in (3 + 1)-d free U(1) gauge theory
NASA Astrophysics Data System (ADS)
Soni, Ronak M.; Trivedi, Sandip P.
2017-02-01
We consider the entanglement entropy for a free U(1) theory in 3+1 dimensions in the extended Hilbert space definition. By taking the continuum limit carefully we obtain a replica trick path integral which calculates this entanglement entropy. The path integral is gauge invariant, with a gauge-fixing delta function accompanied by a Faddeev-Popov determinant. For a spherical region it follows that the result for the logarithmic term in the entanglement, which is universal, is given by the a-anomaly coefficient. We also consider the extractable part of the entanglement, which corresponds to the number of Bell pairs that can be obtained from entanglement distillation or dilution. For a spherical region we show that the coefficient of the logarithmic term for the extractable part is different from the extended Hilbert space result. We argue that the two results will differ in general, and this difference is accounted for by a massless scalar living on the boundary of the region of interest.
NASA Technical Reports Server (NTRS)
Hahne, G. E.
1991-01-01
A formal theory of the scattering of time-harmonic acoustic scalar waves from impenetrable, immobile obstacles is established. The time-independent formal scattering theory of nonrelativistic quantum mechanics, in particular the theory of the complete Green's function and the transition (T) operator, provides the model. The quantum-mechanical approach is modified to allow the treatment of acoustic-wave scattering with imposed boundary conditions of impedance type on the surface (delta-Omega) of an impenetrable obstacle. With k0 as the free-space wavenumber of the signal, a simplified expression is obtained for the k0-dependent T operator for a general case of homogeneous impedance boundary conditions for the acoustic wave on delta-Omega. All the nonelementary operators entering the expression for the T operator are formally simple rational algebraic functions of a certain invertible linear radiation impedance operator which maps any sufficiently well-behaved complex-valued function on delta-Omega into another such function on delta-Omega. In the subsequent study, the short-wavelength and the long-wavelength behavior of the radiation impedance operator and its inverse (the 'radiation admittance' operator) as two-point kernels on a smooth delta-Omega are studied for pairs of points that are close together.
Soft and hard classification by reproducing kernel Hilbert space methods.
Wahba, Grace
2002-12-24
Reproducing kernel Hilbert space (RKHS) methods provide a unified context for solving a wide variety of statistical modelling and function estimation problems. We consider two such problems: We are given a training set {y_i, t_i, i = 1, …, n}, where y_i is the response for the ith subject, and t_i is a vector of attributes for this subject. The value of y_i is a label that indicates which category it came from. For the first problem, we wish to build a model from the training set that assigns to each t in an attribute domain of interest an estimate of the probability p_j(t) that a (future) subject with attribute vector t is in category j. The second problem is in some sense less ambitious; it is to build a model that assigns to each t a label, which classifies a future subject with that t into one of the categories or possibly "none of the above." The approach to the first of these two problems discussed here is a special case of what is known as penalized likelihood estimation. The approach to the second problem is known as the support vector machine. We also note some alternate but closely related approaches to the second problem. These approaches are all obtained as solutions to optimization problems in RKHS. Many other problems, in particular the solution of ill-posed inverse problems, can be obtained as solutions to optimization problems in RKHS and are mentioned in passing. We caution the reader that although a large literature exists in all of these topics, in this inaugural article we are selectively highlighting work of the author, former students, and other collaborators.
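A minimal sketch of the two problems on toy data, assuming scikit-learn: hard classification via the support vector machine, and soft classification via a penalized-likelihood surrogate (kernel logistic regression approximated with an explicit Nyström feature map; the article's own estimators are exact RKHS optimizations):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0]**2 + X[:, 1]**2 > 1.0).astype(int)  # toy labels

# Hard classification: hinge loss + squared RKHS norm (SVM).
hard = SVC(kernel="rbf", C=1.0).fit(X, y)

# Soft classification: penalized likelihood, estimating p(t) rather than
# just a label; kernelized here via a Nystroem approximation.
soft = make_pipeline(
    Nystroem(kernel="rbf", n_components=100, random_state=0),
    LogisticRegression(C=1.0),
).fit(X, y)

print(hard.predict(X[:5]))               # labels only
print(soft.predict_proba(X[:5])[:, 1])   # class-1 probabilities
```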
Cherif, Aicha O; Trabelsi, Hajer; Ben Messaouda, Mhamed; Kâabi, Belhassen; Pellerin, Isabelle; Boukhchina, Sadok; Kallel, Habib; Pepe, Claude
2010-08-11
4-Desmethylsterols, the main component of the phytosterol fraction, were analyzed by gas chromatography-mass spectrometry during the development of Tunisian peanut kernels (Arachis hypogaea L.): Trabelsia (AraT) and Chounfakhi (AraC), which are monocultivar species, and Arbi (AraA), which is a wild species. Immature wild peanut (AraA) showed the highest contents of beta-sitosterol (554.8 mg/100 g of oil), campesterol (228.6 mg/100 g of oil), and Delta(5)-avenasterol (39.0 mg/100 g of oil), followed by peanut cultivar AraC, with beta-sitosterol, campesterol, and Delta(5)-avenasterol averages of 267.7, 92.1, and 28.6 mg/100 g of oil, respectively; similar values of 309.1, 108.4, and 27.4 mg/100 g of oil, respectively, were found for AraT. These results suggest that, in immature stages, phytosterol contents can be important regulatory factors for the functional quality of peanut oil in the agro-industry chain from plant to nutraceuticals.
Computer program for supersonic Kernel-function flutter analysis of thin lifting surfaces
NASA Technical Reports Server (NTRS)
Cunningham, H. J.
1974-01-01
This report describes a computer program (program D2180) that has been prepared to implement the analysis described in (N71-10866) for calculating the aerodynamic forces on a class of harmonically oscillating planar lifting surfaces in supersonic potential flow. The planforms treated are the delta and modified-delta (arrowhead) planforms with subsonic leading and supersonic trailing edges, and (essentially) pointed tips. The resulting aerodynamic forces are applied in a Galerkin modal flutter analysis. The required input data are the flow and planform parameters including deflection-mode data, modal frequencies, and generalized masses.
Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.
Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan
2016-11-01
In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat (Triticum aestivum L.) and maize (Zea mays L.) data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% superiority over the corresponding single environment for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, more complex marker main effects and marker-specific interaction effects. Copyright © 2016 Crop Science Society of America.
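A minimal sketch of the kernel-averaging (KA) idea, assuming a grid of bandwidths and median-distance scaling (the study's grid and its empirical Bayesian bandwidth estimation are not reproduced):

```python
import numpy as np

def gaussian_kernel_average(X, bandwidths=(0.25, 1.0, 4.0)):
    """Average Gaussian kernels over several bandwidths instead of
    committing to a single one; the averaged matrix then plays the role
    of the genomic relationship matrix in GBLUP-style prediction."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    d2 = d2 / np.median(d2[d2 > 0])  # scale-free squared distances
    return np.mean([np.exp(-h * d2) for h in bandwidths], axis=0)
```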
Reactive Collisions and Final State Analysis in Hypersonic Flight Regime
2016-09-13
Kelvin.[7] The gas-phase, surface reactions and energy transfer at these temperatures are essentially uncharacterized and the experimental methodologies … high temperatures (1000 to 20000 K) and compared with results from experimentally derived thermodynamic quantities from the NASA CEA (NASA Chemical … with a reproducing kernel Hilbert space (RKHS) method[13] combined with Legendre polynomials; (2) quasi-classical trajectory (QCT) calculations to study
Eshkuvatov, Z K; Zulkarnain, F S; Nik Long, N M A; Muminov, Z
2016-01-01
A modified homotopy perturbation method (HPM) was used to solve hypersingular integral equations (HSIEs) of the first kind on the interval [-1,1], with the assumption that the kernel of the hypersingular integral is constant on the diagonal of the domain. Existence of the inverse of the hypersingular integral operator leads to the convergence of HPM in certain cases. The modified HPM and its norm convergence are obtained in Hilbert space. Comparisons between the modified HPM, the standard HPM, the Bernstein polynomial approach of Mandal and Bhattacharya (Appl Math Comput 190:1707-1716, 2007), the Chebyshev expansion method of Mahiub et al. (Int J Pure Appl Math 69(3):265-274, 2011) and the reproducing kernel method of Chen and Zhou (Appl Math Lett 24:636-641, 2011) are made by solving five examples. Theoretical and practical examples revealed that the modified HPM dominates the standard HPM and the others. Finally, it is found that the modified HPM is exact if the solution of the problem is a product of weights and polynomial functions. For rational solutions the absolute error decreases very quickly as the number of collocation points increases.
Yao, Jincao; Yu, Huimin; Hu, Roland
2017-01-01
This paper introduces a new implicit-kernel-sparse-shape-representation-based object segmentation framework. Given an input object whose shape is similar to some of the elements in the training set, the proposed model can automatically find a cluster of implicit kernel sparse neighbors to approximately represent the input shape and guide the segmentation. A distance-constrained probabilistic definition together with a dualization energy term is developed to connect high-level shape representation and low-level image information. We theoretically prove that our model not only derives from two projected convex sets but is also equivalent to a sparse-reconstruction-error-based representation in the Hilbert space. Finally, a "wake-sleep"-based segmentation framework is applied to drive the evolutionary curve to recover the original shape of the object. We test our model on two public datasets. Numerical experiments on both synthetic images and real applications show the superior capabilities of the proposed framework.
Sinha, Shriprakash
2017-12-04
Ever since the accidental discovery of Wingless [Sharma R.P., Drosophila information service, 1973, 50, p 134], research in the field of the Wnt signaling pathway has taken significant strides in wet lab experiments and various cancer clinical trials, augmented by recent developments in advanced computational modeling of the pathway. Information-rich gene expression profiles reveal various aspects of the signaling pathway and help in studying different issues simultaneously. Hitherto, not many computational studies exist which incorporate the simultaneous study of these issues. This manuscript ∙ explores the strength of contributing factors in the signaling pathway, ∙ analyzes the existing causal relations among the inter/extracellular factors affecting the pathway based on prior biological knowledge and ∙ investigates the deviations in fold changes in the recently found prevalence of psychophysical laws working in the pathway. To achieve this goal, local and global sensitivity analysis is conducted on the (non)linear responses between the factors obtained from static and time series expression profiles using the density (Hilbert-Schmidt Information Criterion) and variance (Sobol) based sensitivity indices. The results show the advantage of using density based indices over variance based indices, mainly due to the former's employment of distance measures & the kernel trick via reproducing kernel Hilbert space (RKHS), which capture nonlinear relations among various intra/extracellular factors of the pathway in a higher dimensional space. In time series data, using these indices it is now possible to observe where in time and which factors get influenced & contribute to the pathway as changes in the concentration of the other factors are made. This synergy of prior biological knowledge, sensitivity analysis & representations in higher dimensional spaces can facilitate time based administration of targeted therapeutic drugs & reveal hidden biological information within colorectal cancer samples.
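The density-based index used here is commonly known as the Hilbert-Schmidt Independence Criterion; a minimal sketch of its standard biased estimator from precomputed kernel matrices (the manuscript's kernel choices and the Sobol computation are not shown):

```python
import numpy as np

def hsic(K, L):
    """Biased empirical HSIC: tr(K H L H) / (n - 1)^2, where K and L are
    kernel matrices over inputs and outputs and H is the centering
    matrix. Nonzero values indicate (possibly nonlinear) dependence
    captured in the RKHS."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```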
Local coding based matching kernel method for image classification.
Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong
2014-01-01
This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings
NASA Astrophysics Data System (ADS)
Slavakis, Konstantinos; Theodoridis, Sergios
2008-12-01
Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.
Learn the Lagrangian: A Vector-Valued RKHS Approach to Identifying Lagrangian Systems.
Cheng, Ching-An; Huang, Han-Pang
2016-12-01
We study the modeling of Lagrangian systems with multiple degrees of freedom. Based on system dynamics, canonical parametric models require ad hoc derivations and sometimes simplification for a computable solution; on the other hand, due to the lack of prior knowledge of the system's structure, modern nonparametric models in machine learning face the curse of dimensionality, especially in learning large systems. In this paper, we bridge this gap by unifying the theories of Lagrangian systems and vector-valued reproducing kernel Hilbert spaces. We reformulate Lagrangian systems with kernels that embed the governing Euler-Lagrange equation (the Lagrangian kernels) and show that these kernels span a subspace capturing the Lagrangian's projection as inverse dynamics. By this property, our model uses only inputs and outputs, as in machine learning, and inherits the structured form of system dynamics, thereby removing the need for mundane derivations for new systems as well as the generalization problem of learning from scratch. In effect, it learns the system's Lagrangian, a simpler task than directly learning the dynamics. To demonstrate, we applied the proposed kernel to identify robot inverse dynamics in simulations and experiments. Our results present a competitive novel approach to identifying Lagrangian systems, despite using only inputs and outputs.
Investigations of Reactive Processes at Temperatures Relevant to the Hypersonic Flight Regime
2014-10-31
molecule is constructed based on high-level ab initio calculations and interpolated using the reproducing kernel Hilbert space (RKHS) method and … a potential energy surface (PES) for the ground state of the NO2 molecule is constructed based on high-level ab initio calculations and interpolated … between O(3P) and NO(2Π) at higher temperatures relevant to the hypersonic flight regime of reentering spacecraft. At a more fundamental level, we
Tool Wear Feature Extraction Based on Hilbert Marginal Spectrum
NASA Astrophysics Data System (ADS)
Guan, Shan; Song, Weijie; Pang, Hongyang
2017-09-01
In the metal cutting process, the signal contains a wealth of information on the tool wear state. An analysis and feature extraction method for tool wear signals based on the Hilbert marginal spectrum is proposed. Firstly, the tool wear signal was decomposed by the empirical mode decomposition algorithm, and the intrinsic mode functions containing the main information were selected using the correlation coefficient and the variance contribution rate. Secondly, the Hilbert transform was performed on the main intrinsic mode functions, yielding the Hilbert time-frequency spectrum and the Hilbert marginal spectrum. Finally, amplitude-domain indexes were extracted on the basis of the Hilbert marginal spectrum to construct the recognition feature vector of the tool wear state. The research results show that the extracted features can effectively characterize the different wear states of the tool, which provides a basis for monitoring tool wear condition.
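A minimal sketch of the Hilbert marginal spectrum computation, assuming the EMD step has already produced the selected IMFs (the EMD code itself is not shown):

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_marginal_spectrum(imfs, fs, nbins=128):
    """Sum instantaneous amplitude over time, binned by instantaneous
    frequency and accumulated across IMFs; `imfs` has shape
    [n_imf, n_samples] and fs is the sampling rate in Hz."""
    edges = np.linspace(0.0, fs / 2.0, nbins + 1)
    spectrum = np.zeros(nbins)
    for imf in imfs:
        z = hilbert(imf)                        # analytic signal
        amp = np.abs(z)[1:]
        freq = np.diff(np.unwrap(np.angle(z))) * fs / (2.0 * np.pi)
        idx = np.clip(np.digitize(freq, edges) - 1, 0, nbins - 1)
        np.add.at(spectrum, idx, amp)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, spectrum
```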
NASA Technical Reports Server (NTRS)
Milman, Mark H.
1987-01-01
The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.
A Regression Design Approach to Optimal and Robust Spacing Selection.
1981-07-01
Hassanein (1968, 1969a, 1969b, 1971, 1972, 1977), Kulldorf (1963), Kulldorf and Vannman (1973), Rhodin (1976), Sarhan and Greenberg (1958, 1962) and … of d0 and Q0^{-1}d0 are in the reproducing kernel Hilbert space (RKHS) generated by R, the techniques developed by Parzen (1961a, 1961b) may be … Greenberg, B.G. (1958). Estimation problems in the exponential distribution using order statistics. Proceedings of the Statistical Techniques in Missile …
Tensor manifold-based extreme learning machine for 2.5-D face recognition
NASA Astrophysics Data System (ADS)
Chong, Lee Ying; Ong, Thian Song; Teoh, Andrew Beng Jin
2018-01-01
We explore the use of the Gabor regional covariance matrix (GRCM), a flexible matrix-based descriptor that embeds the Gabor features in the covariance matrix, as a 2.5-D facial descriptor and an effective means of feature fusion for 2.5-D face recognition problems. Despite its promise, matching is not a trivial problem for GRCM, since it is a special instance of a symmetric positive definite (SPD) matrix that resides in non-Euclidean space as a tensor manifold. This implies that GRCM is incompatible with existing vector-based classifiers and distance matchers. Therefore, we bridge the gap between GRCM and the extreme learning machine (ELM), a vector-based classifier, for the 2.5-D face recognition problem. We put forward a tensor manifold-compliant ELM and its two variants by embedding the SPD matrix randomly into a reproducing kernel Hilbert space (RKHS) via tensor kernel functions. To preserve the pairwise distances of the embedded data, we orthogonalize the randomly embedded SPD matrix. Hence, classification can be done using a simple ridge regressor, an integrated component of ELM, on the random orthogonal RKHS. Experimental results show that our proposed method is able to improve the recognition performance and further enhance the computational efficiency.
NASA Astrophysics Data System (ADS)
Zhang, Yunlu; Yan, Lei; Liou, Frank
2018-05-01
The quality of the initial guess of the deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level search stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. Falsely matched pairs are eliminated by the novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel-level accuracy in cases with small or large translation, deformation, or rotation.
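A minimal sketch of the feature-matching front end using OpenCV's ORB (the FG-GMM outlier rejection and RKHS deformation fit from the paper are not shown):

```python
import cv2

def orb_matches(ref_img, def_img, n_features=2000):
    """Detect ORB keypoints in the reference and deformed images and
    brute-force match their binary descriptors; the matched point pairs
    seed the DIC initial guess for nearby points of interest."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(ref_img, None)
    kp2, des2 = orb.detectAndCompute(def_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src = [kp1[m.queryIdx].pt for m in matches]
    dst = [kp2[m.trainIdx].pt for m in matches]
    return src, dst
```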
Online Distributed Learning Over Networks in RKH Spaces Using Random Fourier Features
NASA Astrophysics Data System (ADS)
Bouboulis, Pantelis; Chouvardas, Symeon; Theodoridis, Sergios
2018-04-01
We present a novel diffusion scheme for online kernel-based learning over networks. So far, a major drawback of any online learning algorithm operating in a reproducing kernel Hilbert space (RKHS) has been the need to update a growing number of parameters as time iterations evolve. Besides complexity, this leads to an increased need for communication resources in a distributed setting. In contrast, the proposed method approximates the solution as a fixed-size vector (of larger dimension than the input space) using Random Fourier Features. This paves the way to use standard linear combine-then-adapt techniques. To the best of our knowledge, this is the first time that a complete protocol for distributed online learning in RKHS is presented. Conditions for asymptotic convergence and boundedness of the networkwise regret are also provided. The simulated tests illustrate the performance of the proposed scheme.
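A minimal sketch of the Random Fourier Feature map for the Gaussian kernel (the network diffusion and combine-then-adapt updates are not shown):

```python
import numpy as np

def rff_map(X, D=256, gamma=1.0, seed=0):
    """z(x) = sqrt(2/D) * cos(W x + b) with W ~ N(0, 2*gamma*I) and
    b ~ U[0, 2*pi], so that z(x) . z(y) ~= exp(-gamma * ||x - y||^2).
    Each node then trains a fixed-size linear model on z(x) instead of
    a growing kernel expansion."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```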
Acoustical Applications of the HHT Method
NASA Technical Reports Server (NTRS)
Huang, Norden E.
2003-01-01
A document discusses applications of a method based on the Huang-Hilbert transform (HHT). The method was described, without the HHT name, in Analyzing Time Series Using EMD and Hilbert Spectra (GSC-13817), NASA Tech Briefs, Vol. 24, No. 10 (October 2000), page 63. To recapitulate: The method is especially suitable for analyzing time-series data that represent nonstationary and nonlinear physical phenomena. The method involves the empirical mode decomposition (EMD), in which a complicated signal is decomposed into a finite number of functions, called intrinsic mode functions (IMFs), that admit well-behaved Hilbert transforms. The HHT consists of the combination of EMD and Hilbert spectral analysis.
Encoding Dissimilarity Data for Statistical Model Building.
Wahba, Grace
2010-12-01
We summarize, review and comment upon three papers which discuss the use of discrete, noisy, incomplete, scattered pairwise dissimilarity data in statistical model building. Convex cone optimization codes are used to embed the objects into a Euclidean space which respects the dissimilarity information while controlling the dimension of the space. A "newbie" algorithm is provided for embedding new objects into this space. This allows the dissimilarity information to be incorporated into a Smoothing Spline ANOVA penalized likelihood model, a Support Vector Machine, or any model that will admit Reproducing Kernel Hilbert Space components, for nonparametric regression, supervised learning, or semi-supervised learning. Future work and open questions are discussed. The papers are: F. Lu, S. Keles, S. Wright and G. Wahba (2005). A framework for kernel regularization with application to protein clustering. Proceedings of the National Academy of Sciences 102, 12332-1233; G. Corrada Bravo, G. Wahba, K. Lee, B. Klein, R. Klein and S. Iyengar (2009). Examining the relative influence of familial, genetic and environmental covariate information in flexible risk models. Proceedings of the National Academy of Sciences 106, 8128-8133; F. Lu, Y. Lin and G. Wahba. Robust manifold unfolding with kernel regularization. TR 1008, Department of Statistics, University of Wisconsin-Madison.
Aeroelastic Flight Data Analysis with the Hilbert-Huang Algorithm
NASA Technical Reports Server (NTRS)
Brenner, Martin J.; Prazenica, Chad
2006-01-01
This report investigates the utility of the Hilbert Huang transform for the analysis of aeroelastic flight data. It is well known that the classical Hilbert transform can be used for time-frequency analysis of functions or signals. Unfortunately, the Hilbert transform can only be effectively applied to an extremely small class of signals, namely those that are characterized by a single frequency component at any instant in time. The recently-developed Hilbert Huang algorithm addresses the limitations of the classical Hilbert transform through a process known as empirical mode decomposition. Using this approach, the data is filtered into a series of intrinsic mode functions, each of which admits a well-behaved Hilbert transform. In this manner, the Hilbert Huang algorithm affords time-frequency analysis of a large class of signals. This powerful tool has been applied in the analysis of scientific data, structural system identification, mechanical system fault detection, and even image processing. The purpose of this report is to demonstrate the potential applications of the Hilbert Huang algorithm for the analysis of aeroelastic systems, with improvements such as localized online processing. Applications for correlations between system input and output, and amongst output sensors, are discussed to characterize the time-varying amplitude and frequency correlations present in the various components of multiple data channels. Online stability analyses and modal identification are also presented. Examples are given using aeroelastic test data from the F-18 Active Aeroelastic Wing airplane, an Aerostructures Test Wing, and pitch plunge simulation.
Characterizing resonant component in speech: A different view of tracking fundamental frequency
NASA Astrophysics Data System (ADS)
Dong, Bin
2017-05-01
Inspired by the nonlinearity, nonstationarity, and modulations in speech, the Hilbert-Huang transform and cyclostationarity analysis are employed in sequence to investigate speech resonance in vowels. Cyclostationarity analysis is performed not directly on the target vowel but on its intrinsic mode functions, one by one. Thanks to the equivalence between the fundamental frequency in speech and the cyclic frequency in cyclostationarity analysis, the modulation intensity distributions of the intrinsic mode functions provide much information for the estimation of the fundamental frequency. To highlight the relationship between frequency and time, the pseudo-Hilbert spectrum is proposed here to replace the Hilbert spectrum. Contrasting the pseudo-Hilbert spectra with the modulation intensity distributions of the intrinsic mode functions shows that there is usually one intrinsic mode function which acts as the fundamental component of the vowel. Furthermore, the fundamental frequency of the vowel can be determined by tracing the pseudo-Hilbert spectrum of its fundamental component along the time axis. The latter method is more robust for estimating the fundamental frequency in the presence of nonlinear components. Two vowels, [a] and [i], taken from the FAU Aibo Emotion Corpus speech database, are used to validate the above findings.
A tensor Banach algebra approach to abstract kinetic equations
NASA Astrophysics Data System (ADS)
Greenberg, W.; van der Mee, C. V. M.
The study deals with a concrete algebraic construction providing the existence theory for abstract kinetic equation boundary-value problems, when the collision operator A is an accretive finite-rank perturbation of the identity operator in a Hilbert space H. An algebraic generalization of the Bochner-Phillips theorem is utilized to study solvability of the abstract boundary-value problem without any regularity condition. A Banach algebra in which the convolution kernel acts is obtained explicitly, and this result is used to prove a perturbation theorem for bisemigroups, which then plays a vital role in solving the initial equations.
Cohomologie des Groupes Localement Compacts et Produits Tensoriels Continus de Representations
ERIC Educational Resources Information Center
Guichardet, A.
1976-01-01
Contains few and sometimes incomplete proofs on continuous tensor products of Hilbert spaces and of group representations, and on the irreducibility of the latter. Theory of continuous tensor products of Hilbert Spaces is closely related to that of conditionally positive definite functions; it relies on the technique of symmetric Hilbert spaces,…
Dirac’s magnetic monopole and the Kontsevich star product
NASA Astrophysics Data System (ADS)
Soloviev, M. A.
2018-03-01
We examine relationships between various quantization schemes for an electrically charged particle in the field of a magnetic monopole. Quantization maps are defined in invariant geometrical terms, appropriate to the case of nontrivial topology, and are constructed for two operator representations. In the first setting, the quantum operators act on the Hilbert space of sections of a nontrivial complex line bundle associated with the Hopf bundle, whereas the second approach uses instead a quaternionic Hilbert module of sections of a trivial quaternionic line bundle. We show that these two quantizations are naturally related by a bundle morphism and, as a consequence, induce the same phase-space star product. We obtain explicit expressions for the integral kernels of star-products corresponding to various operator orderings and calculate their asymptotic expansions up to the third order in the Planck constant \\hbar . We also show that the differential form of the magnetic Weyl product corresponding to the symmetric ordering agrees completely with the Kontsevich formula for deformation quantization of Poisson structures and can be represented by Kontsevich’s graphs.
A Kernel-based Lagrangian method for imperfectly-mixed chemical reactions
NASA Astrophysics Data System (ADS)
Schmidt, Michael J.; Pankavich, Stephen; Benson, David A.
2017-05-01
Current Lagrangian (particle-tracking) algorithms used to simulate diffusion-reaction equations must employ a certain number of particles to properly emulate the system dynamics, particularly for imperfectly-mixed systems. The number of particles is tied to the statistics of the initial concentration fields of the system at hand. Systems with shorter-range correlation and/or smaller concentration variance require more particles, potentially limiting the computational feasibility of the method. For the well-known problem of bimolecular reaction, we show that using kernel-based, rather than Dirac delta, particles can significantly reduce the required number of particles. We derive the fixed width of a Gaussian kernel for a given reduced number of particles that analytically eliminates the error between the kernel and Dirac solutions at any specified time. We also show how to solve for the fixed kernel size by minimizing the squared differences between solutions over any given time interval. Numerical results show that the width of the kernel should be kept below about 12% of the domain size, and that the analytic equations used to derive kernel width suffer significantly from the neglect of higher-order moments. The simulations with a kernel width given by least squares minimization perform better than those made to match at one specific time. A heuristic time-variable kernel size, based on the previous results, performs on par with the least squares fixed kernel size.
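The core replacement of Dirac delta particles by Gaussian kernels can be sketched in one dimension; the reaction bookkeeping around these weights is omitted, and `h` stands for the fixed kernel width discussed above:

```python
import numpy as np

def colocation_weights(xA, xB, h):
    """Gaussian co-location density between every A particle and every
    B particle; with Dirac particles this would be nonzero only at exact
    overlap, which is why far more particles would be needed to resolve
    imperfect mixing."""
    dx = xA[:, None] - xB[None, :]
    return np.exp(-dx**2 / (2.0 * h**2)) / np.sqrt(2.0 * np.pi * h**2)
```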
Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne
2012-01-01
In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882
Real-time dose computation: GPU-accelerated source modeling and superposition/convolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacques, Robert; Wong, John; Taylor, Russell
Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by ~24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation. Pinnacle³ times were 8.3 and 94 s, respectively, on an AMD (Sunnyvale, CA) Opteron 254 (two cores, 2.8 GHz). Conclusions: The authors have completed a comprehensive, GPU-accelerated dose engine in order to provide a substantial performance gain over CPU based implementations. Real-time dose computation is feasible with the accuracy levels of the superposition/convolution algorithm.
Application of the Hilbert-Huang Transform to Financial Data
NASA Technical Reports Server (NTRS)
Huang, Norden
2005-01-01
A paper discusses the application of the Hilbert-Huang transform (HHT) method to time-series financial-market data. The method was described, variously without and with the HHT name, in several prior NASA Tech Briefs articles and supporting documents. To recapitulate: The method is especially suitable for analyzing time-series data that represent nonstationary and nonlinear phenomena including physical phenomena and, in the present case, financial-market processes. The method involves the empirical mode decomposition (EMD), in which a complicated signal is decomposed into a finite number of functions, called "intrinsic mode functions" (IMFs), that admit well-behaved Hilbert transforms. The HHT consists of the combination of EMD and Hilbert spectral analysis. The local energies and the instantaneous frequencies derived from the IMFs through Hilbert transforms can be used to construct an energy-frequency-time distribution, denoted a Hilbert spectrum. The instant paper begins with a discussion of prior approaches to quantification of market volatility, summarizes the HHT method, then describes the application of the method in performing time-frequency analysis of mortgage-market data from the years 1972 through 2000. Filtering by use of the EMD is shown to be useful for quantifying market volatility.
Clifford coherent state transforms on spheres
NASA Astrophysics Data System (ADS)
Dang, Pei; Mourão, José; Nunes, João P.; Qian, Tao
2018-01-01
We introduce a one-parameter family of transforms, U_t^{(m)}, t > 0, from the Hilbert space of Clifford algebra valued square integrable functions on the m-dimensional sphere, L²(S^m, dσ_m) ⊗ C_{m+1}, to the Hilbert spaces, ML²(R^{m+1} ∖ {0}, dμ_t), of solutions of the Euclidean Dirac equation on R^{m+1} ∖ {0} which are square integrable with respect to appropriate measures, dμ_t. We prove that these transforms are unitary isomorphisms of the Hilbert spaces and are extensions of the Segal-Bargmann coherent state transform, U^{(1)}: L²(S¹, dσ₁) → HL²(C ∖ {0}, dμ), to higher dimensional spheres in the context of Clifford analysis. In Clifford analysis it is natural to replace the analytic continuation from S^m to S^m_C as in (Hall, 1994; Stenzel, 1999; Hall and Mitchell, 2002) by the Cauchy-Kowalewski extension from S^m to R^{m+1} ∖ {0}. One then obtains a unitary isomorphism from an L²-Hilbert space to a Hilbert space of solutions of the Dirac equation, that is, to a Hilbert space of monogenic functions.
Kim, Sangmin; Raphael, Patrick D; Oghalai, John S; Applegate, Brian E
2016-04-01
Swept-laser sources offer a number of advantages for Phase-sensitive Optical Coherence Tomography (PhOCT). However, inter- and intra-sweep variability leads to calibration errors that adversely affect phase sensitivity. While there are several approaches to overcoming this problem, our preferred method is to simply calibrate every sweep of the laser. This approach offers high accuracy and phase stability at the expense of a substantial processing burden. In this approach, the Hilbert phase of the interferogram from a reference interferometer provides the instantaneous wavenumber of the laser, but is computationally expensive. Fortunately, the Hilbert transform may be approximated by a Finite Impulse-Response (FIR) filter. Here we explore the use of several FIR filter based Hilbert transforms for calibration, explicitly considering the impact of filter choice on phase sensitivity and OCT image quality. Our results indicate that the complex FIR filter approach is the most robust and accurate among those considered. It provides similar image quality and slightly better phase sensitivity than the traditional FFT-IFFT based Hilbert transform while consuming fewer resources in an FPGA implementation. We also explored utilizing the Hilbert magnitude of the reference interferogram to calculate an ideal window function for spectral amplitude calibration. The ideal window function is designed to carefully control sidelobes on the axial point spread function. We found that after a simple chromatic correction, calculating the window function using the complex FIR filter and the reference interferometer gave similar results to window functions calculated using a mirror sample and the FFT-IFFT Hilbert transform. Hence, the complex FIR filter can enable accurate and high-speed calibration of the magnitude and phase of spectral interferograms.
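A minimal sketch of a real-coefficient FIR Hilbert transformer designed with SciPy's Parks-McClellan routine (the authors' complex FIR filter and FPGA pipeline are not reproduced; the band edges and tap count below are assumptions):

```python
import numpy as np
from scipy.signal import remez, freqz

fs = 2.0  # normalized sampling rate
# 65-tap FIR approximation to the Hilbert transform over 0.05-0.95 of
# the Nyquist band; outside this band the approximation degrades.
taps = remez(65, [0.05, 0.95], [1.0], type="hilbert", fs=fs)
w, H = freqz(taps, fs=fs)
# |H| is close to 1 in the design band, with phase near the ideal
# -90 degrees, so filtering yields an approximate analytic signal.
```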
NASA Astrophysics Data System (ADS)
Su, Zhi-Yuan; Wang, Chuan-Chen; Wu, Tzuyin; Wang, Yeng-Tseng; Tang, Feng-Cheng
2008-01-01
This study used the Hilbert-Huang transform, a recently developed instantaneous frequency-time analysis, to analyze radial artery pulse signals taken from women in their 36th week of pregnancy and after pregnancy. The acquired instantaneous frequency-time spectrum (Hilbert spectrum) is further compared with the Morlet wavelet spectrum. Results indicate that the Hilbert spectrum is especially suitable for analyzing the time series of non-stationary radial artery pulse signals since, in the Hilbert-Huang transform, signals are decomposed into different mode functions in accordance with the signal's local time scale. Therefore, the Hilbert spectrum contains more detailed information than the Morlet wavelet spectrum. From the Hilbert spectrum, we can see that radial artery pulse signals taken from women in their 36th week of pregnancy and after pregnancy have different patterns. This approach could be applied to facilitate non-invasive diagnosis of fetal physiological signals in the future.
An Immersed Boundary method with divergence-free velocity interpolation and force spreading
NASA Astrophysics Data System (ADS)
Bao, Yuanxun; Donev, Aleksandar; Griffith, Boyce E.; McQueen, David M.; Peskin, Charles S.
2017-10-01
The Immersed Boundary (IB) method is a mathematical framework for constructing robust numerical methods to study fluid-structure interaction in problems involving an elastic structure immersed in a viscous fluid. The IB formulation uses an Eulerian representation of the fluid and a Lagrangian representation of the structure. The Lagrangian and Eulerian frames are coupled by integral transforms with delta function kernels. The discretized IB equations use approximations to these transforms with regularized delta function kernels to interpolate the fluid velocity to the structure, and to spread structural forces to the fluid. It is well-known that the conventional IB method can suffer from poor volume conservation since the interpolated Lagrangian velocity field is not generally divergence-free, and so this can cause spurious volume changes. In practice, the lack of volume conservation is especially pronounced for cases where there are large pressure differences across thin structural boundaries. The aim of this paper is to greatly reduce the volume error of the IB method by introducing velocity-interpolation and force-spreading schemes with the properties that the interpolated velocity field in which the structure moves is at least C1 and satisfies a continuous divergence-free condition, and that the force-spreading operator is the adjoint of the velocity-interpolation operator. We confirm through numerical experiments in two and three spatial dimensions that this new IB method is able to achieve substantial improvement in volume conservation compared to other existing IB methods, at the expense of a modest increase in the computational cost. Further, the new method provides smoother Lagrangian forces (tractions) than traditional IB methods. The method presented here is restricted to periodic computational domains. Its generalization to non-periodic domains is important future work.
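For context, the conventional regularized delta function that such schemes build their transforms from is Peskin's 4-point kernel; a minimal sketch of that standard kernel follows (the divergence-free interpolation introduced in this paper is more involved):

```python
import numpy as np

def peskin_delta4(r):
    """Peskin's 4-point regularized delta function in one dimension
    (support |r| < 2 in grid units). Multidimensional kernels are tensor
    products of this, and using the same kernel for velocity
    interpolation and force spreading makes the two operators adjoint."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    near = r <= 1.0
    far = (r > 1.0) & (r < 2.0)
    out[near] = (3 - 2*r[near] + np.sqrt(1 + 4*r[near] - 4*r[near]**2)) / 8
    out[far] = (5 - 2*r[far] - np.sqrt(-7 + 12*r[far] - 4*r[far]**2)) / 8
    return out
```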
Filter distortion effects on telemetry signal-to-noise ratio
NASA Technical Reports Server (NTRS)
Sadr, R.; Hurd, W.
1987-01-01
The effect of filtering on the signal-to-noise ratio (SNR) of a coherently demodulated band-limited signal is determined in the presence of worst-case amplitude ripple. The problem is formulated mathematically as an optimization problem in the L2 Hilbert space. The form of the worst-case amplitude ripple is specified, and the degradation in the SNR is derived in a closed-form expression. It is shown that when the maximum passband amplitude ripple is 2 delta (peak to peak), the SNR is degraded by at most (1 - delta squared), even when the ripple is unknown or uncompensated. For example, an SNR loss of less than 0.01 dB due to amplitude ripple can be assured by keeping the amplitude ripple under 0.42 dB.
A multi-label learning based kernel automatic recommendation method for support vector machine.
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
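In scikit-learn terms, the recommendation pipeline sketched above might look as follows; the meta-features, applicability labels, and choice of base learner are all placeholders, since the abstract does not fix a particular multi-label method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

KERNELS = ["linear", "poly", "rbf", "sigmoid", "laplace"]

# Hypothetical meta-knowledge base: one row of data-characteristic features per
# data set, and one binary label per candidate kernel marking it "applicable"
# (i.e., performing within tolerance of the best kernel on that data set).
rng = np.random.default_rng(0)
meta_X = rng.random((132, 4))                       # 132 data sets, 4 meta-features
meta_Y = rng.integers(0, 2, (132, len(KERNELS)))    # applicable-kernel labels

# Multi-label recommendation model: one binary classifier per kernel label.
model = MultiOutputClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(meta_X, meta_Y)

# Recommend kernels for a new data set from its meta-features alone.
new_meta = rng.random((1, 4))
recommended = [k for k, flag in zip(KERNELS, model.predict(new_meta)[0]) if flag]
print("recommended kernels:", recommended)
```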
Hilbert complexes of nonlinear elasticity
NASA Astrophysics Data System (ADS)
Angoshtari, Arzhang; Yavari, Arash
2016-12-01
We introduce some Hilbert complexes involving second-order tensors on flat compact manifolds with boundary that describe the kinematics and the kinetics of motion in nonlinear elasticity. We then use the general framework of Hilbert complexes to write Hodge-type and Helmholtz-type orthogonal decompositions for second-order tensors. As some applications of these decompositions in nonlinear elasticity, we study the strain compatibility equations of linear and nonlinear elasticity in the presence of Dirichlet boundary conditions and the existence of stress functions on non-contractible bodies. As an application of these Hilbert complexes in computational mechanics, we briefly discuss the derivation of a new class of mixed finite element methods for nonlinear elasticity.
The canonical quantization of chaotic maps on the torus
NASA Astrophysics Data System (ADS)
Rubin, Ron Shai
In this thesis, a quantization method for classical maps on the torus is presented. The quantum algebra of observables is defined as the quantization of measurable functions on the torus with generators exp(2πix) and exp(2πip). The Hilbert space we use remains the infinite-dimensional L²(ℝ, dx). The dynamics is given by a unitary quantum propagator such that as ℏ → 0, the classical dynamics is recovered. We construct such a quantization for the Kronecker map, the cat map, the baker's map, the kick map, and the Harper map. For the cat map, we find for the propagator on the plane the same integral kernel conjectured in (HB) using semiclassical methods. We also define a quantum 'integral over phase space' as a trace over the quantum algebra. Using this definition, we proceed to define quantum ergodicity and mixing for maps on the torus. We prove that the quantum cat map and Kronecker map are both ergodic, but only the cat map is mixing, true to its classical origins. For Planck's constant satisfying the integrality condition h = 1/N, with N ∈ ℤ⁺, we construct an explicit isomorphism between L²(ℝ, dx) and the Hilbert space of sections of an N-dimensional vector bundle over a θ-torus T² of boundary conditions. The basis functions are distributions in L²(ℝ, dx), given by an infinite comb of Dirac δ-functions. In Bargmann space these distributions take on the form of Jacobi ϑ-functions. Transformations from position to momentum representation can be implemented via a finite N-dimensional discrete Fourier transform. With the θ-torus, we provide a connection between the finite-dimensional quantum maps given in the physics literature and the canonical quantization presented here and found in the language of pseudo-differential operators elsewhere in mathematics circles. Specifically, at a fixed point of the dynamics on the θ-torus, we recover a finite-dimensional matrix propagator. We present this connection explicitly for several examples.
The Riemann-Hilbert problem for nonsymmetric systems
NASA Astrophysics Data System (ADS)
Greenberg, W.; Zweifel, P. F.; Paveri-Fontana, S.
1991-12-01
A comparison of the Riemann-Hilbert problem and the Wiener-Hopf factorization problem arising in the solution of half-space singular integral equations is presented. Emphasis is on the factorization of functions lacking the reflection symmetry usual in transport theory.
Wigner functions defined with Laplace transform kernels.
Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George
2011-10-24
We propose a new Wigner-type phase-space function using Laplace transform kernels: the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the property of the Laplace transform, a broader range of signals can be represented in complex phase-space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals as the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America
NASA Astrophysics Data System (ADS)
Martins, Luis Gustavo Nogueira; Stefanello, Michel Baptistella; Degrazia, Gervásio Annes; Acevedo, Otávio Costa; Puhales, Franciano Scremin; Demarco, Giuliano; Mortarini, Luca; Anfossi, Domenico; Roberti, Débora Regina; Costa, Felipe Denardin; Maldaner, Silvana
2016-11-01
In this study we analyze natural complex signals employing the Hilbert-Huang spectral analysis. Specifically, low-wind meandering meteorological data are decomposed into turbulent and non-turbulent components. These non-turbulent movements, responsible for the absence of a preferential direction of the horizontal wind, provoke negative lobes in the meandering autocorrelation functions. The meandering characteristic time scales (meandering periods) are determined from the spectral peak provided by the Hilbert-Huang marginal spectrum. The magnitudes of the temperature and horizontal wind meandering periods obtained agree with the results found from the best fit of the heuristic meandering autocorrelation functions. Therefore, the method represents a new procedure for evaluating meandering periods that does not employ mathematical expressions to represent observed meandering autocorrelation functions.
An SVM model with hybrid kernels for hydrological time series
NASA Astrophysics Data System (ADS)
Wang, C.; Wang, H.; Zhao, X.; Xie, Q.
2017-12-01
Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays a key role. Conventional SVM models mostly use a single type of kernel function, e.g., the radial basis kernel function. Given that several featured kernel functions are available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to the SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of the radial basis kernel and the polynomial kernel for the forecast of monthly flowrate at two gaging stations using the SVM approach. The results indicate significant improvement in the accuracy of the predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such a hybrid kernel approach for SVM applications.
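Such a hybrid kernel is easy to prototype because scikit-learn's SVR accepts a callable that returns the Gram matrix; the sketch below mixes RBF and polynomial kernels with an illustrative weight on toy monthly-flow-like data, and none of the hyperparameter values come from the paper.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def hybrid_kernel(X, Y, w=0.7, gamma=0.5, degree=2):
    """Convex combination of an RBF kernel and a polynomial kernel."""
    return w * rbf_kernel(X, Y, gamma=gamma) + (1 - w) * polynomial_kernel(X, Y, degree=degree)

# Toy monthly-flow-like series: predict the next value from the previous three.
rng = np.random.default_rng(1)
t = np.arange(240)
flow = 50 + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, t.size)
X = np.column_stack([flow[i:i - 3] for i in range(3)])   # lagged inputs
y = flow[3:]

model = SVR(kernel=hybrid_kernel, C=10.0)
model.fit(X[:200], y[:200])
print("test R^2:", model.score(X[200:], y[200:]))
```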
A mixture model for robust registration in Kinect sensor
NASA Astrophysics Data System (ADS)
Peng, Li; Zhou, Huabing; Zhu, Shengguo
2018-03-01
The Microsoft Kinect sensor has been widely used in many applications, but it suffers from the drawback of low registration precision between the color image and the depth image. In this paper, we present a robust method to improve the registration precision by a mixture model that can handle multiple images with a nonparametric model. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). The estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model, is able to obtain good estimates. We illustrate the proposed method on a publicly available dataset. The experimental results show that our approach outperforms the baseline methods.
Zhuang, Leimeng; Khan, Muhammad Rezaul; Beeker, Willem; Leinse, Arne; Heideman, René; Roeloffzen, Chris
2012-11-19
We propose and demonstrate a novel wideband microwave photonic fractional Hilbert transformer implemented using a ring resonator-based optical all-pass filter. The full programmability of the ring resonator allows variable and arbitrary fractional order of the Hilbert transformer. The performance analysis in both the frequency and time domains validates that the proposed implementation provides a good approximation to an ideal fractional Hilbert transformer. This is also experimentally verified by an electrical S21 response characterization performed on a waveguide realization of a ring resonator. The waveguide-based structure allows the proposed Hilbert transformer to be integrated together with other building blocks on a photonic integrated circuit to create various system-level functionalities for on-chip microwave photonic signal processors. As an example, a circuit consisting of a splitter and a ring resonator has been realized which can perform on-chip phase control of microwave signals generated by means of optical heterodyning, and simultaneous generation of in-phase and quadrature microwave signals for a wide frequency range. For these functionalities, this simple on-chip solution is considered to be practical, particularly when operating together with a dual-frequency laser. To the best of our knowledge, this is the first on-chip demonstration in which ring resonators are employed to perform phase control functionalities for optical generation of microwave signals by means of optical heterodyning.
Semiclassical analysis for pseudo-relativistic Hartree equations
NASA Astrophysics Data System (ADS)
Cingolani, Silvia; Secchi, Simone
2015-06-01
In this paper we study the semiclassical limit for the pseudo-relativistic Hartree equation $\sqrt{-\varepsilon^2 \Delta + m^2}\, u + V u = (I_\alpha * |u|^{p}) |u|^{p-2}u$ in $\mathbb{R}^N$, where $m>0$, $2 \leq p < \frac{2N}{N-1}$, $V \colon \mathbb{R}^N \to \mathbb{R}$ is an external scalar potential, $I_\alpha(x) = \frac{c_{N,\alpha}}{|x|^{N-\alpha}}$ is a convolution kernel, $c_{N,\alpha}$ is a positive constant, and $(N-1)p-N<\alpha$
Sima, Chaotan; Gates, J C; Holmes, C; Mennea, P L; Zervas, M N; Smith, P G R
2013-09-01
Terahertz bandwidth photonic Hilbert transformers are proposed and experimentally demonstrated. The integrated device is fabricated via a direct UV grating writing technique in a silica-on-silicon platform. The photonic Hilbert transformer operates at bandwidths of up to 2 THz (~16 nm) in the telecom band, a 10-fold greater bandwidth than any previously reported experimental approaches. Achieving this performance requires detailed knowledge of the system transfer function of the direct UV grating writing technique; this allows improved linearity and yields terahertz bandwidth Bragg gratings with improved spectral quality. By incorporating a flat-top reflector and Hilbert grating with a waveguide coupler, an ultrawideband all-optical single-sideband filter is demonstrated.
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
Classification of Phylogenetic Profiles for Protein Function Prediction: An SVM Approach
NASA Astrophysics Data System (ADS)
Kotaru, Appala Raju; Joshi, Ramesh C.
Predicting the function of an uncharacterized protein is a major challenge in the post-genomic era due to the problem's complexity and scale. Having knowledge of protein function is a crucial link in the development of new drugs, better crops, and even the development of biochemicals such as biofuels. Recently numerous high-throughput experimental procedures have been invented to investigate the mechanisms leading to the accomplishment of a protein's function, and the phylogenetic profile is one of them. A phylogenetic profile is a way of representing a protein which encodes the evolutionary history of the protein. In this paper we propose a method for classification of phylogenetic profiles using a supervised machine learning method, support vector machine classification with a radial basis function kernel, for identifying functionally linked proteins. We experimentally evaluated the performance of the classifier with the linear kernel and the polynomial kernel, and compared the results with the existing tree kernel. In our study we used proteins of the budding yeast Saccharomyces cerevisiae genome. We generated the phylogenetic profiles of 2465 yeast genes, and for our study we used the functional annotations available in the MIPS database. Our experiments show that the performance of the radial basis kernel is similar to that of the polynomial kernel in some functional classes, both are better than the linear and tree kernels, and overall the radial basis kernel outperformed the polynomial, linear, and tree kernels. In analyzing these results we show that it is feasible to use an SVM classifier with a radial basis function kernel to predict gene functionality from phylogenetic profiles.
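The kernel comparison described above translates almost directly into scikit-learn; in the sketch below, the binary profiles and class labels are random placeholders for real presence/absence profiles and MIPS class memberships, so the printed accuracies only show the shape of the experiment.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder phylogenetic profiles: presence/absence of each gene across
# fully sequenced genomes (the paper uses 2465 yeast genes; sizes here are
# illustrative only).
rng = np.random.default_rng(0)
profiles = rng.integers(0, 2, (500, 60))   # 500 genes x 60 genomes
labels = rng.integers(0, 2, 500)           # membership in one functional class

for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel, gamma="scale")
    scores = cross_val_score(clf, profiles, labels, cv=5)
    print(f"{kernel:6s} mean CV accuracy: {scores.mean():.3f}")
```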
On the heat trace of Schroedinger operators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banuelos, R.; Sa Barreto, A.
1995-12-31
Trace formulae for heat kernels of Schroedinger operators have been widely studied in connection with spectral and scattering theory. They have been used to obtain information about a potential from its spectrum, or from its scattering data, and vice versa. Using elementary Fourier transform methods we obtain a formula for the general coefficient in the asymptotic expansion of the trace of the heat kernel of the Schroedinger operator -Δ + V, as t ↓ 0, with V ∈ S(Rⁿ), the class of functions with rapid decay at infinity. In dimension n = 1 a recurrent formula for the general coefficient in the expansion is obtained in [6]. However the KdV methods used there do not seem to generalize to higher dimension. Using the formula of [6] and the symmetry of some integrals, Y. Colin de Verdiere has computed the first four coefficients for potentials in three space dimensions. Also in [1] a different method is used to compute heat coefficients for differential operators on manifolds. 14 refs.
Development of a kernel function for clinical data.
Daemen, Anneleen; De Moor, Bart
2009-01-01
For most diseases and examinations, clinical data such as age, gender and medical history guides clinical management, despite the rise of high-throughput technologies. To fully exploit such clinical information, appropriate modeling of relevant parameters is required. As the widely used linear kernel function has several disadvantages when applied to clinical data, we propose a new kernel function specifically developed for this data. This "clinical kernel function" more accurately represents similarities between patients. Three data sets were studied, and significantly better performances were obtained with a Least Squares Support Vector Machine based on the clinical kernel function compared to the linear kernel function.
Hilbert transform evaluation for electron-phonon self-energies
NASA Astrophysics Data System (ADS)
Bevilacqua, Giuseppe; Menichetti, Guido; Pastori Parravicini, Giuseppe
2016-01-01
The electron tunneling current through nanostructures is considered in the presence of electron-phonon interactions. In the Keldysh nonequilibrium formalism, the lesser, greater, advanced and retarded self-energy components are expressed by means of appropriate Langreth rules. We discuss the key role played by the entailed Hilbert transforms, and provide an analytic way for their evaluation. Particular attention is given to the current-conserving lowest-order expansion for the treatment of the electron-phonon interaction; by means of an appropriate elaboration of the analytic properties and pole structure of the Green's functions and of the Fermi functions, we arrive at a surprisingly simple, elegant, fully analytic and easy-to-use expression for the Hilbert transforms and involved integrals in the energy domain.
Solution of a cauchy problem for a diffusion equation in a Hilbert space by a Feynman formula
NASA Astrophysics Data System (ADS)
Remizov, I. D.
2012-07-01
The Cauchy problem for a class of diffusion equations in a Hilbert space is studied. It is proved that the Cauchy problem is well posed in the class of uniform limits of infinitely smooth bounded cylindrical functions on the Hilbert space, and the solution is presented in the form of the so-called Feynman formula, i.e., a limit of multiple integrals against a Gaussian measure as the multiplicity tends to infinity. It is also proved that the solution of the Cauchy problem depends continuously on the diffusion coefficient. A process reducing an approximate solution of an infinite-dimensional diffusion equation to finding a multiple integral of a real function of finitely many real variables is indicated.
Rational Solutions of the Painlevé-II Equation Revisited
NASA Astrophysics Data System (ADS)
Miller, Peter D.; Sheng, Yue
2017-08-01
The rational solutions of the Painlevé-II equation appear in several applications and are known to have many remarkable algebraic and analytic properties. They also have several different representations, useful in different ways for establishing these properties. In particular, Riemann-Hilbert representations have proven to be useful for extracting the asymptotic behavior of the rational solutions in the limit of large degree (equivalently the large-parameter limit). We review the elementary properties of the rational Painlevé-II functions, and then we describe three different Riemann-Hilbert representations of them that have appeared in the literature: a representation by means of the isomonodromy theory of the Flaschka-Newell Lax pair, a second representation by means of the isomonodromy theory of the Jimbo-Miwa Lax pair, and a third representation found by Bertola and Bothner related to pseudo-orthogonal polynomials. We prove that the Flaschka-Newell and Bertola-Bothner Riemann-Hilbert representations of the rational Painlevé-II functions are explicitly connected to each other. Finally, we review recent results describing the asymptotic behavior of the rational Painlevé-II functions obtained from these Riemann-Hilbert representations by means of the steepest descent method.
Functional identification of spike-processing neural circuits.
Lazar, Aurel A; Slutskiy, Yevgeniy B
2014-02-01
We introduce a novel approach for a complete functional identification of biophysical spike-processing neural circuits. The circuits considered accept multidimensional spike trains as their input and comprise a multitude of temporal receptive fields and conductance-based models of action potential generation. Each temporal receptive field describes the spatiotemporal contribution of all synapses between any two neurons and incorporates the (passive) processing carried out by the dendritic tree. The aggregate dendritic current produced by a multitude of temporal receptive fields is encoded into a sequence of action potentials by a spike generator modeled as a nonlinear dynamical system. Our approach builds on the observation that during any experiment, an entire neural circuit, including its receptive fields and biophysical spike generators, is projected onto the space of stimuli used to identify the circuit. Employing the reproducing kernel Hilbert space (RKHS) of trigonometric polynomials to describe input stimuli, we quantitatively describe the relationship between underlying circuit parameters and their projections. We also derive experimental conditions under which these projections converge to the true parameters. In doing so, we achieve the mathematical tractability needed to characterize the biophysical spike generator and identify the multitude of receptive fields. The algorithms obviate the need to repeat experiments in order to compute the neurons' rate of response, rendering our methodology of interest to both experimental and theoretical neuroscientists.
Averaging of random walks and shift-invariant measures on a Hilbert space
NASA Astrophysics Data System (ADS)
Sakbaev, V. Zh.
2017-06-01
We study random walks in a Hilbert space H and the representations they provide of solutions of the Cauchy problem for differential equations whose initial conditions are numerical functions on H. We construct a finitely additive analogue of the Lebesgue measure: a nonnegative finitely additive measure λ that is defined on a minimal subset ring of an infinite-dimensional Hilbert space H containing all infinite-dimensional rectangles with absolutely converging products of the side lengths and is invariant under shifts and rotations in H. We define the Hilbert space H of equivalence classes of complex-valued functions on H that are square integrable with respect to a shift-invariant measure λ. Using averaging of the shift operator in H over random vectors in H with a distribution given by a one-parameter semigroup (with respect to convolution) of Gaussian measures on H, we define a one-parameter semigroup of contracting self-adjoint transformations on H, whose generator is called the diffusion operator. We obtain a representation of solutions of the Cauchy problem for the Schrödinger equation whose Hamiltonian is the diffusion operator.
Sedlacek, J D; Komaravalli, S R; Hanley, A M; Price, B D; Davis, P M
2001-04-01
The Indian meal moth, Plodia interpunctella (Hübner), and Angoumois grain moth, Sitotroga cerealella (Olivier), are two globally distributed stored-grain pests. Laboratory experiments were conducted to examine the impact that corn (Zea mays L.) kernels (i.e., grain) of some Bacillus thuringiensis Berliner (Bt) corn hybrids containing Cry1Ab Bt delta-endotoxin have on life history attributes of the Indian meal moth and Angoumois grain moth. Stored grain is at risk of damage from the Indian meal moth and Angoumois grain moth; therefore, Bt corn may provide a means of protecting this commodity from damage. Thus, the objective of this research was to quantify the effects of transgenic corn seed containing Cry1Ab delta-endotoxin on Indian meal moth and Angoumois grain moth survival, fecundity, and duration of development. Experiments with Bt grain, non-Bt isolines, and non-Bt grain were conducted in environmental chambers at 27 ± 1°C and ≥ 60% RH in continuous dark. Fifty eggs were placed in ventilated pint jars containing 170 g of cracked or whole corn for the Indian meal moth and Angoumois grain moth, respectively. Emergence and fecundity were observed for 5 wk. Emergence and fecundity of the Indian meal moth and emergence of the Angoumois grain moth were significantly lower for individuals reared on P33V08 and N6800Bt, MON 810 and Bt-11 transformed hybrids, respectively, than on their non-Bt transformed isolines. Longer developmental times were observed for the Indian meal moth reared on P33V08 and N6800Bt than on their non-Bt-transformed isolines. These results indicate that MON 810 and Bt-11 Cry1Ab delta-endotoxin-containing kernels reduce laboratory populations of the Indian meal moth and Angoumois grain moth. Thus, storing Bt-transformed grain is a management tactic that warrants bin-scale testing and may effectively reduce Indian meal moth and Angoumois grain moth populations in grain without application of synthetic chemicals or pesticides.
Empirical mode decomposition for analyzing acoustical signals
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2005-01-01
The present invention discloses a computer implemented signal analysis method through the Hilbert-Huang Transformation (HHT) for analyzing acoustical signals, which are assumed to be nonlinear and nonstationary. The Empirical Mode Decomposition (EMD) and the Hilbert Spectral Analysis (HSA) are used to obtain the HHT. Essentially, the acoustical signal will be decomposed into Intrinsic Mode Function components (IMFs). Once the invention decomposes the acoustic signal into its constituent components, all operations such as analyzing, identifying, and removing unwanted signals can be performed on these components. Upon transforming the IMFs into a Hilbert spectrum, the acoustical signal may be compared with other acoustical signals.
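The sifting loop at the heart of EMD fits in a few lines of Python; this bare-bones sketch uses naive extrema detection and cubic-spline envelopes, omits the end-point handling and stoppage criteria a practical implementation needs, and its fixed iteration counts are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One sifting pass: subtract the mean of the upper and lower envelopes."""
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(maxima) < 2 or len(minima) < 2:
        return None                      # too few extrema: treat as residue
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return x - (upper + lower) / 2

def emd(x, t, n_sift=10, max_imfs=6):
    """Bare-bones EMD: repeatedly sift out intrinsic mode functions."""
    imfs, residue = [], x.copy()
    for _ in range(max_imfs):
        h = residue
        for _ in range(n_sift):
            h_next = sift_once(h, t)
            if h_next is None:
                return imfs, residue
            h = h_next
        imfs.append(h)
        residue = residue - h
    return imfs, residue

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imfs, residue = emd(x, t)
print(f"{len(imfs)} IMFs extracted")
```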
The Baker-Akhiezer Function and Factorization of the Chebotarev-Khrapkov Matrix
NASA Astrophysics Data System (ADS)
Antipov, Yuri A.
2014-10-01
A new technique is proposed for the solution of the Riemann-Hilbert problem with the Chebotarev-Khrapkov matrix coefficient G(t) = α1(t)I + α2(t)Q(t), α1(t), α2(t) ∈ H(L), I = diag{1, 1}, where Q(t) is a 2×2 zero-trace polynomial matrix. This problem has numerous applications in elasticity and diffraction theory. The main feature of the method is the removal of essential singularities of the solution to the associated homogeneous scalar Riemann-Hilbert problem on the hyperelliptic surface of an algebraic function by means of the Baker-Akhiezer function. The consequent application of this function for the derivation of the general solution to the vector Riemann-Hilbert problem requires finding the ρ zeros of the Baker-Akhiezer function (ρ is the genus of the surface). These zeros are recovered through the solution to the associated Jacobi problem of inversion of abelian integrals or, equivalently, the determination of the zeros of the associated degree-ρ polynomial and the solution of a certain linear algebraic system of ρ equations.
ERIC Educational Resources Information Center
Wang, Tianyou
2009-01-01
Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…
A sparse grid based method for generative dimensionality reduction of high-dimensional data
NASA Astrophysics Data System (ADS)
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data points. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
A Grassmann graph embedding framework for gait analysis
NASA Astrophysics Data System (ADS)
Connie, Tee; Goh, Michael Kah Ong; Teoh, Andrew Beng Jin
2014-12-01
Gait recognition is important in a wide range of monitoring and surveillance applications. Gait information has often been used as evidence when other biometrics are indiscernible in the surveillance footage. Building on recent advances of the subspace-based approaches, we consider the problem of gait recognition on the Grassmann manifold. We show that by embedding the manifold into a reproducing kernel Hilbert space and applying the mechanics of graph embedding on such a manifold, significant performance improvement can be obtained. In this work, the gait recognition problem is studied in a unified way applicable for both supervised and unsupervised configurations. Sparse representation is further incorporated in the learning mechanism to adaptively harness the local structure of the data. Experiments demonstrate that the proposed method can tolerate variations in appearance for gait identification effectively.
The generalization ability of online SVM classification based on Markov sampling.
Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang
2015-03-01
In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present the numerical studies on the learning ability of online SVM classification based on Markov sampling for benchmark repository. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of training samples is larger.
Burger, Karin; Koehler, Thomas; Chabior, Michael; Allner, Sebastian; Marschner, Mathias; Fehringer, Andreas; Willner, Marian; Pfeiffer, Franz; Noël, Peter
2014-12-29
Phase-contrast x-ray computed tomography has a high potential to become clinically implemented because of its complementarity to conventional absorption contrast. In this study, we investigate noise-reducing but resolution-preserving analytical reconstruction methods to improve differential phase-contrast imaging. We apply the non-linear Perona-Malik filter on phase-contrast data prior to or after filtered backprojection reconstruction. Secondly, the Hilbert kernel is replaced by regularized iterative integration followed by ramp-filtered backprojection as used for absorption-contrast imaging. Combining the Perona-Malik filter with this integration algorithm allows relevant sample features to be successfully revealed, quantitatively confirmed by significantly increased structural similarity indices and contrast-to-noise ratios. With this concept, phase-contrast imaging can be performed at considerably lower dose.
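The Perona-Malik step is a classical anisotropic diffusion and is simple to sketch on a 2-D array; the explicit scheme, exponential edge-stopping function, and all parameter values below are generic choices, not the ones tuned in the paper.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Perona-Malik anisotropic diffusion: smooths noise while preserving
    edges by damping diffusion where local gradients are large."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conductance
    for _ in range(n_iter):
        # Nearest-neighbor differences in the four grid directions.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.random.default_rng(0).normal(0, 0.1, (64, 64))
noisy[16:48, 16:48] += 1.0                    # a sharp square "structure"
smoothed = perona_malik(noisy)                # noise is reduced, edges survive
```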
Functional brain abnormalities in major depressive disorder using the Hilbert-Huang transform.
Yu, Haibin; Li, Feng; Wu, Tong; Li, Rui; Yao, Li; Wang, Chuanyue; Wu, Xia
2018-02-09
Major depressive disorder is a common disease worldwide, which is characterized by significant and persistent depression. Non-invasive accessory diagnosis of depression can be performed by resting-state functional magnetic resonance imaging (rs-fMRI). However, the fMRI signal may not satisfy linearity and stationarity. The Hilbert-Huang transform (HHT) is an adaptive time-frequency localization analysis method suitable for nonlinear and non-stationary signals. The objective of this study was to apply the HHT to rs-fMRI to find the abnormal brain areas of patients with depression. A total of 35 patients with depression and 37 healthy controls were subjected to rs-fMRI. The HHT was performed to extract the Hilbert-weighted mean frequency of the rs-fMRI signals, and multivariate receiver operating characteristic analysis was applied to find the abnormal brain regions with high sensitivity and specificity. We observed differences in Hilbert-weighted mean frequency between the patients and healthy controls mainly in the right hippocampus, right parahippocampal gyrus, left amygdala, and left and right caudate nucleus. Subsequently, the above-mentioned regions were included in the results obtained from the compared region homogeneity and the fractional amplitude of low frequency fluctuation method. We found brain regions with differences in the Hilbert-weighted mean frequency, and examined their sensitivity and specificity, which suggested a potential neuroimaging biomarker to distinguish between patients with depression and healthy controls. We further clarified the pathophysiological abnormality of these regions for the population with major depressive disorder.
Open Component Portability Infrastructure (OPENCPI)
2013-03-01
Figure 2, "C Function vs. OpenCL Kernel," and Figure 3, "OpenCL vs. OpenCPI Layering," illustrate the difference between a simple C function and the analogous OpenCL kernel.
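To make the contrast in Figure 2 concrete, here is a sketch of a plain element-wise C-style function next to the analogous OpenCL kernel, written with the pyopencl bindings; the saxpy example, names, and sizes are invented for illustration and are not taken from the report.

```python
import numpy as np
import pyopencl as cl

def saxpy_host(a, x, y):
    """Plain sequential version: one implicit loop over all elements."""
    return a * x + y

# Analogous OpenCL kernel: the loop disappears; each work-item handles one index.
kernel_src = """
__kernel void saxpy(const float a,
                    __global const float *x,
                    __global const float *y,
                    __global float *out) {
    int i = get_global_id(0);
    out[i] = a * x[i] + y[i];
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, kernel_src).build()

x = np.random.rand(1024).astype(np.float32)
y = np.random.rand(1024).astype(np.float32)
mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=y)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, x.nbytes)

prog.saxpy(queue, x.shape, None, np.float32(2.0), x_buf, y_buf, out_buf)
out = np.empty_like(x)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, saxpy_host(np.float32(2.0), x, y))
```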
Improved modeling of clinical data with kernel methods.
Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart
2012-02-01
Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, being a mix of variable types, each with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination reached a maximum of 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems. Copyright © 2011 Elsevier B.V. All rights reserved.
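A plausible reading of the clinical kernel, with per-variable similarities normalized by each variable's range and averaged, is sketched below; the exact handling of nominal variables is our assumption from the abstract rather than the authors' reference implementation, and the patient data are invented.

```python
import numpy as np

def clinical_kernel(X, Z, ranges, nominal):
    """Average of per-variable similarities: (r - |x - z|) / r for
    continuous/ordinal variables, exact-match indicator for nominal ones."""
    n, m, p = X.shape[0], Z.shape[0], X.shape[1]
    K = np.zeros((n, m))
    for j in range(p):
        xj, zj = X[:, j][:, None], Z[:, j][None, :]
        if nominal[j]:
            K += (xj == zj).astype(float)
        else:
            K += (ranges[j] - np.abs(xj - zj)) / ranges[j]
    return K / p

# Toy patient data: [age, tumor size, gender (nominal)].
X = np.array([[45, 2.1, 0], [70, 5.0, 1], [52, 3.3, 0]], dtype=float)
ranges = np.array([100.0, 10.0, 1.0])   # assumed clinically plausible ranges
nominal = [False, False, True]
print(clinical_kernel(X, X, ranges, nominal))
```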
A framework for optimal kernel-based manifold embedding of medical image data.
Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma
2015-04-01
Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim to generate the most optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
Modeling adaptive kernels from probabilistic phylogenetic trees.
Nicotra, Luca; Micheli, Alessio
2009-01-01
Modeling phylogenetic interactions is an open issue in many computational biology problems. In the context of gene function prediction we introduce a class of kernels for structured data leveraging on a hierarchical probabilistic modeling of phylogeny among species. We derive three kernels belonging to this setting: a sufficient statistics kernel, a Fisher kernel, and a probability product kernel. The new kernels are used in the context of support vector machine learning. The kernels' adaptivity is obtained through the estimation of the parameters of a tree structured model of evolution using as observed data phylogenetic profiles encoding the presence or absence of specific genes in a set of fully sequenced genomes. We report results obtained in the prediction of the functional class of the proteins of the budding yeast Saccharomyces cerevisiae which favorably compare to a standard vector based kernel and to a non-adaptive tree kernel function. A further comparative analysis is performed in order to assess the impact of the different components of the proposed approach. We show that the key features of the proposed kernels are the adaptivity to the input domain and the ability to deal with structured data interpreted through a graphical model representation.
Kernel functions and Baecklund transformations for relativistic Calogero-Moser and Toda systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallnaes, Martin; Ruijsenaars, Simon
We obtain kernel functions associated with the quantum relativistic Toda systems, both for the periodic version and for the nonperiodic version with its dual. This involves taking limits of previously known results concerning kernel functions for the elliptic and hyperbolic relativistic Calogero-Moser systems. We show that the special kernel functions at issue admit a limit that yields generating functions of Baecklund transformations for the classical relativistic Calogero-Moser and Toda systems. We also obtain the nonrelativistic counterparts of our results, which tie in with previous results in the literature.
Uniform sparse bounds for discrete quadratic phase Hilbert transforms
NASA Astrophysics Data System (ADS)
Kesler, Robert; Arias, Darío Mena
2017-09-01
For each α \\in T consider the discrete quadratic phase Hilbert transform acting on finitely supported functions f : Z → C according to H^{α }f(n):= \\sum _{m ≠ 0} e^{iα m^2} f(n - m)/m. We prove that, uniformly in α \\in T , there is a sparse bound for the bilinear form < H^{α } f , g > for every pair of finitely supported functions f,g : Z→ C . The sparse bound implies several mapping properties such as weighted inequalities in an intersection of Muckenhoupt and reverse Hölder classes.
Improvements to the kernel function method of steady, subsonic lifting surface theory
NASA Technical Reports Server (NTRS)
Medan, R. T.
1974-01-01
The application of a kernel function lifting surface method to three dimensional, thin wing theory is discussed. A technique for determining the influence functions is presented. The technique is shown to require fewer quadrature points, while still calculating the influence functions accurately enough to guarantee convergence with an increasing number of spanwise quadrature points. The method also treats control points on the wing leading and trailing edges. The report introduces and employs an aspect of the kernel function method which apparently has never been used before and which significantly enhances the efficiency of the kernel function approach.
Adaptive kernel function using line transect sampling
NASA Astrophysics Data System (ADS)
Albadareen, Baker; Ismail, Noriszura
2018-04-01
The estimation of f(0) is crucial in the line transect method, which is used for estimating population abundance in wildlife surveys. The classical kernel estimator of f(0) has a high negative bias. Our study proposes an adaptation of the kernel function which is shown to be more efficient than the usual kernel estimator. A simulation study is adopted to compare the performance of the proposed estimators with the classical kernel estimators.
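For context, the classical kernel estimate of f(0) that the paper improves on can be simulated in a few lines; the half-normal detection model and the reflection trick at the boundary are standard textbook choices, not the adaptation proposed here.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Simulated perpendicular detection distances from a half-normal model.
rng = np.random.default_rng(0)
sigma = 20.0
distances = np.abs(rng.normal(0, sigma, 200))

# Kernel estimate of f(0), reflecting the data about zero to reduce the
# boundary bias that drives the negative bias mentioned in the abstract.
kde = gaussian_kde(np.concatenate([distances, -distances]))
f0_hat = 2 * kde(0.0)[0]
f0_true = 2 / (sigma * np.sqrt(2 * np.pi))   # half-normal density at zero
print(f"f(0) estimate: {f0_hat:.4f}  (true: {f0_true:.4f})")
```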
Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction
NASA Astrophysics Data System (ADS)
Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc
2018-02-01
Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even in the case of highly uncertain or incomplete datasets. Predictions from a GP depend on the covariance kernel function selected to explain the data variance. To overcome this limitation, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly inherited Alzheimer's disease, and use it to predict the time to clinical onset of subjects carrying a genetic mutation.
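The flavor of compositional kernel scoring is easy to reproduce with scikit-learn's Gaussian-process kernels; the sketch below ranks a few base kernels with a plain BIC-style criterion on synthetic data, whereas the paper's energy function also folds in the explained variance score.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (RBF, RationalQuadratic,
                                              ExpSineSquared, WhiteKernel)

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 10, 60))[:, None]
y = 0.3 * X.ravel() + np.sin(X.ravel()) + rng.normal(0, 0.1, 60)

def bic(gp, n):
    k = gp.kernel_.theta.size            # number of kernel hyperparameters
    return -2 * gp.log_marginal_likelihood_value_ + k * np.log(n)

candidates = {"RBF": RBF(), "RQ": RationalQuadratic(), "Periodic": ExpSineSquared()}
for name, base in candidates.items():
    gp = GaussianProcessRegressor(kernel=base + WhiteKernel(), normalize_y=True)
    gp.fit(X, y)
    print(f"{name:9s} BIC = {bic(gp, len(y)):.1f}")
# Compositions (sums/products of the winners) would be scored the same way.
```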
Generation of dark hollow beams by using a fractional radial Hilbert transform system
NASA Astrophysics Data System (ADS)
Xie, Qiansen; Zhao, Daomu
2007-07-01
The radial Hilbert transform has been extended to the fractional domain, which could be called the fractional radial Hilbert transform (FRHT). Using the edge-enhancement characteristics of this transform, we convert a Gaussian light beam into a variety of dark hollow beams (DHBs). Based on the fact that a hard-edged aperture can be expanded approximately as a finite sum of complex Gaussian functions, the analytical expression for a Gaussian beam passing through a FRHT system has been derived. As a numerical example, the properties of the DHBs with different fractional orders are illustrated graphically. The calculation results obtained by use of the analytical method and the integral method are also compared.
Computer implemented empirical mode decomposition method, apparatus and article of manufacture
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
1999-01-01
A computer implemented physical signal analysis method is invented. This method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.
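Once the IMFs are extracted, the Hilbert spectrum step reduces to an instantaneous frequency and amplitude per IMF; the minimal sketch below uses scipy's analytic-signal helper, with a synthetic chirp standing in for a real IMF.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_freq_amp(imf, fs):
    """Instantaneous frequency (Hz) and amplitude of one IMF via the
    analytic signal (scipy's hilbert returns imf + i*H[imf])."""
    analytic = hilbert(imf)
    amp = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    freq = np.gradient(phase) * fs / (2 * np.pi)
    return freq, amp

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
imf = np.cos(2 * np.pi * (10 * t + 15 * t ** 2))   # chirp-like mode, 10 to 40 Hz
freq, amp = instantaneous_freq_amp(imf, fs)
# The Hilbert spectrum is the amplitude (or energy) laid out over the
# (time, instantaneous frequency) plane, accumulated over all IMFs.
```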
Mixed kernel function support vector regression for global sensitivity analysis
NASA Astrophysics Data System (ADS)
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analysis techniques in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomial kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. The performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
NASA Technical Reports Server (NTRS)
Bland, S. R.
1982-01-01
Finite difference methods for unsteady transonic flow frequently use simplified equations in which certain of the time dependent terms are omitted from the governing equations. Kernel functions are derived for two dimensional subsonic flow, and provide accurate solutions of the linearized potential equation with the same time dependent terms omitted. These solutions make possible a direct evaluation of the finite difference codes for the linear problem. Calculations with two of these low frequency kernel functions verify the accuracy of the LTRAN2 and HYTRAN2 finite difference codes. Comparisons of the low frequency kernel function results with the Possio kernel function solution of the complete linear equations indicate the adequacy of the HYTRAN approximation for frequencies in the range of interest for flutter calculations.
Graph wavelet alignment kernels for drug virtual screening.
Smalter, Aaron; Huan, Jun; Lushington, Gerald
2009-06-01
In this paper, we introduce a novel statistical modeling technique for target property prediction, with applications to virtual screening and drug design. In our method, we use graphs to model chemical structures and apply a wavelet analysis of graphs to summarize features capturing graph local topology. We design a novel graph kernel function to utilize the topology features to build predictive models for chemicals via a Support Vector Machine classifier. We call the new graph kernel a graph wavelet-alignment kernel. We have evaluated the efficacy of the wavelet-alignment kernel using a set of chemical structure-activity prediction benchmarks. Our results indicate that the use of the kernel function yields performance profiles comparable to, and sometimes exceeding that of, the existing state-of-the-art chemical classification approaches. In addition, our results also show that the use of wavelet functions significantly decreases the computational costs for graph kernel computation, with a more than tenfold speedup.
Lagrangian single-particle turbulent statistics through the Hilbert-Huang transform.
Huang, Yongxiang; Biferale, Luca; Calzavarini, Enrico; Sun, Chao; Toschi, Federico
2013-04-01
The Hilbert-Huang transform is applied to analyze single-particle Lagrangian velocity data from numerical simulations of hydrodynamic turbulence. The velocity trajectory is described in terms of a set of intrinsic mode functions C_i(t) and of their instantaneous frequency ω_i(t). On the basis of this decomposition we define the ω-conditioned statistical moments of the C_i modes, named q-order Hilbert spectra (HS). We show that such quantities have enhanced scaling properties as compared to traditional Fourier transform- or correlation-based (structure functions) statistical indicators, thus providing better insights into the turbulent energy transfer process. We present clear empirical evidence that the energylike quantity, i.e., the second-order HS, displays a linear scaling in time in the inertial range, as expected from a dimensional analysis. We also measure high-order moment scaling exponents in a direct way, without resorting to the extended self-similarity procedure. This leads to an estimate of the Lagrangian structure function exponents which are consistent with the multifractal prediction in the Lagrangian frame as proposed by Biferale et al. [Phys. Rev. Lett. 93, 064502 (2004)].
Towards a second law for Lovelock theories
NASA Astrophysics Data System (ADS)
Bhattacharyya, Sayantani; Haehl, Felix M.; Kundu, Nilay; Loganayagam, R.; Rangamani, Mukund
2017-03-01
In classical general relativity described by Einstein-Hilbert gravity, black holes behave as thermodynamic objects. In particular, the laws of black hole mechanics can be interpreted as laws of thermodynamics. The first law of black hole mechanics extends to higher derivative theories via the Noether charge construction of Wald. One also expects the statement of the second law, which in Einstein-Hilbert theory owes to Hawking's area theorem, to extend to higher derivative theories. To argue for this, however, one needs a notion of entropy for dynamical black holes, which the Noether charge construction does not provide. We propose such an entropy function for the family of Lovelock theories, treating the higher derivative terms as perturbations to the Einstein-Hilbert theory. Working around a dynamical black hole solution, and making no assumptions about the amplitude of departure from equilibrium, we construct a candidate entropy functional valid to all orders in the low energy effective field theory. This entropy functional satisfies a second law, modulo a certain subtle boundary term, which deserves further investigation in non-spherically symmetric situations.
Thermodynamic limit of random partitions and dispersionless Toda hierarchy
NASA Astrophysics Data System (ADS)
Takasaki, Kanehisa; Nakatsu, Toshio
2012-01-01
We study the thermodynamic limit of random partition models for the instanton sum of 4D and 5D supersymmetric U(1) gauge theories deformed by some physical observables. The physical observables correspond to external potentials in the statistical model. The partition function is reformulated in terms of the density function of Maya diagrams. The thermodynamic limit is governed by a limit shape of Young diagrams associated with dominant terms in the partition function. The limit shape is characterized by a variational problem, which is further converted to a scalar-valued Riemann-Hilbert problem. This Riemann-Hilbert problem is solved with the aid of a complex curve, which may be thought of as the Seiberg-Witten curve of the deformed U(1) gauge theory. This solution of the Riemann-Hilbert problem is identified with a special solution of the dispersionless Toda hierarchy that satisfies a pair of generalized string equations. The generalized string equations for the 5D gauge theory are shown to be related to hidden symmetries of the statistical model. The prepotential and the Seiberg-Witten differential are also considered.
Rotational relaxation of AlO+(1Σ+) in collision with He
NASA Astrophysics Data System (ADS)
Denis-Alpizar, O.; Trabelsi, T.; Hochlaf, M.; Stoecklin, T.
2018-03-01
The rate coefficients for the rotational de-excitation of AlO+ by collisions with He are determined. The possible production mechanisms of the AlO+ ion in both diffuse and dense molecular clouds are first discussed. A set of ab initio interaction energies is computed at the CCSD(T)-F12 level of theory, and a three-dimensional analytical model of the potential energy surface is obtained using a linear combination of reproducing kernel Hilbert space polynomials together with an analytical long-range potential. The nuclear-spin-free close-coupling equations are solved and the de-excitation rotational rate coefficients for the lowest 15 rotational states of AlO+ are reported. A propensity rule favouring Δj = -1 transitions is obtained, and the hyperfine-resolved state-to-state rate coefficients are also discussed.
A fast numerical method for ideal fluid flow in domains with multiple stirrers
NASA Astrophysics Data System (ADS)
Nasser, Mohamed M. S.; Green, Christopher C.
2018-03-01
A collection of arbitrarily-shaped solid objects, each moving at a constant speed, can be used to mix or stir ideal fluid, and can give rise to interesting flow patterns. Assuming these systems of fluid stirrers are two-dimensional, the mathematical problem of resolving the flow field—given a particular distribution of any finite number of stirrers of specified shape and speed—can be formulated as a Riemann-Hilbert (R-H) problem. We show that this R-H problem can be solved numerically using a fast and accurate algorithm for any finite number of stirrers based around a boundary integral equation with the generalized Neumann kernel. Various systems of fluid stirrers are considered, and our numerical scheme is shown to handle highly multiply connected domains (i.e. systems of many fluid stirrers) with minimal computational expense.
Hilbert and Blaschke phases in the temporal coherence function of stationary broadband light.
Fernández-Pousa, Carlos R; Maestre, Haroldo; Torregrosa, Adrián J; Capmany, Juan
2008-10-27
We show that the minimal phase of the temporal coherence function γ(τ) of stationary light having a partially-coherent symmetric spectral peak can be computed as a relative logarithmic Hilbert transform of its amplitude with respect to its asymptotic behavior. The procedure is applied to experimental data from amplified spontaneous emission broadband sources in the 1.55 μm band with subpicosecond coherence times, providing examples of degrees of coherence with both minimal and non-minimal phase. In the latter case, the Blaschke phase is retrieved and the positions of the Blaschke zeros are determined.
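A minimal sketch of the underlying log-Hilbert relation for a minimal-phase coherence function, using scipy. This is the plain textbook form; the paper's relative version, taken with respect to the asymptotic behavior of the amplitude, is not reproduced here.

```python
import numpy as np
from scipy.signal import hilbert

def minimal_phase(amplitude, eps=1e-12):
    """Textbook minimal-phase estimate: phi = -H[ log|gamma| ].

    The paper's *relative* logarithmic Hilbert transform, which first
    subtracts the asymptotic behaviour, is not reproduced in this sketch.
    """
    log_amp = np.log(np.maximum(amplitude, eps))
    # The imaginary part of the analytic signal is the Hilbert transform.
    return -np.imag(hilbert(log_amp))

# Toy check on a symmetric coherence peak |gamma(tau)| = exp(-tau^2).
tau = np.linspace(-5, 5, 1001)
phase = minimal_phase(np.exp(-tau**2))
```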
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertola, Marco, E-mail: Marco.Bertola@concordia.ca; Centre de Recherches Mathématiques, Université de Montréal, Montréal, Québec H3C 3J7; SISSA/ISAS, via Bonomea 265, Trieste
2015-06-15
Two-phase solutions of the focusing NLS equation are classically constructed out of an appropriate Riemann surface of genus two and expressed in terms of the corresponding theta-function. We show here that in a certain limiting regime, such solutions reduce to some elementary ones called "solitons on unstable condensate." This degeneration turns out to be conveniently studied by means of basic tools from the theory of Riemann-Hilbert problems. In particular, no acquaintance with Riemann surfaces and theta-functions is required for such analysis.
Improved specimen reconstruction by Hilbert phase contrast tomography.
Barton, Bastian; Joos, Friederike; Schröder, Rasmus R
2008-11-01
The low signal-to-noise ratio (SNR) in images of unstained specimens recorded with conventional defocus phase contrast makes it difficult to interpret 3D volumes obtained by electron tomography (ET). The high defocus applied for conventional tilt series generates some phase contrast but leads to an incomplete transfer of object information. For tomography of biological weak-phase objects, optimal image contrast and subsequently an optimized SNR are essential for the reconstruction of details such as macromolecular assemblies at molecular resolution. The problem of low contrast can be partially solved by applying a Hilbert phase plate positioned in the back focal plane (BFP) of the objective lens while recording images in Gaussian focus. Images recorded with the Hilbert phase plate provide optimized positive phase contrast at low spatial frequencies, and the contrast transfer in principle extends to the information limit of the microscope. The antisymmetric Hilbert phase contrast (HPC) can be numerically converted into isotropic contrast, which is equivalent to the contrast obtained by a Zernike phase plate. Thus, in-focus HPC provides optimal structure factor information without limiting effects of the transfer function. In this article, we present the first electron tomograms of biological specimens reconstructed from Hilbert phase plate image series. We outline the technical implementation of the phase plate and demonstrate that the technique is routinely applicable for tomography. A comparison between conventional defocus tomograms and in-focus HPC volumes shows an enhanced SNR and an improved specimen visibility for in-focus Hilbert tomography.
ERIC Educational Resources Information Center
Zheng, Yinggan; Gierl, Mark J.; Cui, Ying
2010-01-01
This study combined the kernel smoothing procedure and a nonparametric differential item functioning statistic--Cochran's Z--to statistically test the difference between the kernel-smoothed item response functions for reference and focal groups. Simulation studies were conducted to investigate the Type I error and power of the proposed…
Inverse Problems and Imaging (Pitman Research Notes in Mathematics Series Number 245)
1991-01-01
[Garbled back-matter listing of other titles in the Pitman Research Notes in Mathematics series; recoverable entries include Multiparameter spectral theory in Hilbert space (B. D. Sleeman), Functional differential equations (F. Kappel and W. Schappacher), Hamilton-Jacobi equations in Hilbert spaces (V. Barbu and G. Da Prato), and Rates of convergence in the central limit theorem (Peter Hall).]
2008-07-01
operators in Hilbert spaces. The homogenization procedure through successive multi-resolution projections is presented, followed by a numerical example of... is intended to be essentially self-contained. The mathematical (Greenberg 1978; Gilbert 2006) and signal processing (Strang and Nguyen 1995)... literature listed in the references. The ideas behind multi-resolution analysis unfold from the theory of linear operators in Hilbert spaces (Davis 1975)...
Spherical harmonics and rigged Hilbert spaces
NASA Astrophysics Data System (ADS)
Celeghini, E.; Gadella, M.; del Olmo, M. A.
2018-05-01
This paper is devoted to the study of discrete and continuous bases for spaces supporting representations of SO(3) and SO(3, 2) in which the spherical harmonics are involved. We show how discrete and continuous bases coexist on appropriate choices of rigged Hilbert spaces. We prove the continuity of the relevant operators, and of the operators in the algebras they span, using appropriate topologies on our spaces. Finally, we discuss the properties of the functionals that form the continuous basis.
A survey of kernel-type estimators for copula and their applications
NASA Astrophysics Data System (ADS)
Sumarjaya, I. W.
2017-10-01
Copulas have been widely used to model nonlinear dependence structure. Main applications of copulas include areas such as finance, insurance, hydrology, and rainfall modelling, to name but a few. The flexibility of copulas allows researchers to model dependence structures beyond the Gaussian distribution. Basically, a copula is a function that couples a multivariate distribution function to its one-dimensional marginal distribution functions. In general, there are three methods to estimate a copula: parametric, nonparametric, and semiparametric. In this article we survey kernel-type estimators for copulas, such as the mirror reflection kernel, the beta kernel, the transformation method, and the local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, despite variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
Experimental validation of a structural damage detection method based on marginal Hilbert spectrum
NASA Astrophysics Data System (ADS)
Banerji, Srishti; Roy, Timir B.; Sabamehr, Ardalan; Bagchi, Ashutosh
2017-04-01
Structural Health Monitoring (SHM) using the dynamic characteristics of structures is crucial for early damage detection. Damage detection can be performed by capturing and assessing structural responses. Instrumented structures are monitored by analyzing the responses recorded by deployed sensors in the form of signals. Signal processing is an important tool for processing the collected data to diagnose anomalies in structural behavior. The vibration signature of the structure varies with damage. To attain effective damage detection, it is important to preserve the non-linear and non-stationary features of real structural responses. Decomposition of the signals into Intrinsic Mode Functions (IMF) by Empirical Mode Decomposition (EMD) and application of the Hilbert-Huang Transform (HHT) address the time-varying instantaneous properties of the structural response. The energy distribution among different vibration modes of the intact and damaged structure, depicted by the Marginal Hilbert Spectrum (MHS), reveals the location and severity of the damage. The present work investigates damage detection analytically and experimentally by employing MHS. Testing this methodology on different damage scenarios of a frame structure resulted in accurate damage identification. The sensitivity of Hilbert Spectral Analysis (HSA) is assessed for varying frequencies and damage locations by calculating Damage Indices (DI) from the Hilbert spectrum curves of the undamaged and damaged structures.
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2004-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
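A minimal sketch of the first two steps described above (EMD followed by Hilbert transforms of the IMFs), assuming the third-party PyEMD package (installed as EMD-signal) as the decomposition engine; the patent itself prescribes no particular implementation.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD  # third-party package, installed as "EMD-signal"

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# Nonstationary toy signal: a chirp plus a slow oscillation.
signal = np.sin(2 * np.pi * (5 + 20 * t) * t) + 0.5 * np.sin(2 * np.pi * 2 * t)

# Step 1: Empirical Mode Decomposition into Intrinsic Mode Functions.
imfs = EMD().emd(signal, t)

# Step 2: the Hilbert transform of each IMF gives instantaneous amplitude and
# frequency, whose aggregation over time/frequency is the Hilbert Spectrum.
for i, imf in enumerate(imfs):
    z = hilbert(imf)
    amp = np.abs(z)
    inst_freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
    print(f"IMF {i}: mean instantaneous frequency = {inst_freq.mean():.2f} Hz")
```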
NASA Technical Reports Server (NTRS)
Shen, Zheng (Inventor); Huang, Norden Eh (Inventor)
2003-01-01
A computer implemented physical signal analysis method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals based on local extrema and curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.
Generalization of the subsonic kernel function in the s-plane, with applications to flutter analysis
NASA Technical Reports Server (NTRS)
Cunningham, H. J.; Desmarais, R. N.
1984-01-01
A generalized subsonic unsteady aerodynamic kernel function, valid for both growing and decaying oscillatory motions, is developed and applied in a modified flutter analysis computer program to solve for the boundaries of constant damping ratio as well as the flutter boundary. Rates of change of damping ratios with respect to dynamic pressure near flutter are substantially lower in the generalized-kernel-function calculations than in the conventional velocity-damping (V-g) calculation. A rational function approximation for aerodynamic forces used in control theory for s-plane analysis gave rather good agreement with kernel-function results, except for strongly damped motion at combinations of high (subsonic) Mach number and reduced frequency.
Deep neural mapping support vector machines.
Li, Yujian; Zhang, Ting
2017-09-01
The choice of kernel has an important effect on the performance of a support vector machine (SVM). The effect can be reduced by NEUROSVM, an architecture using a multilayer perceptron for feature extraction and an SVM for classification. In binary classification, a general linear-kernel NEUROSVM can be theoretically simplified as an input layer, many hidden layers, and an SVM output layer. As a feature extractor, the sub-network composed of the input and hidden layers is first trained together with a virtual ordinary output layer by backpropagation; the output of its last hidden layer is then taken as input to the SVM classifier for further, separate training. By taking the sub-network as a kernel mapping from the original input space into a feature space, we present a novel model, called the deep neural mapping support vector machine (DNMSVM), from the viewpoint of deep learning. This model is also a new and general kernel learning method, where the kernel mapping is an explicit function expressed as a sub-network, unlike the implicit function traditionally induced by a kernel function. Moreover, we exploit a two-stage procedure of contrastive divergence learning and gradient descent for DNMSVM to jointly train an adaptive kernel mapping instead of a kernel function, without requiring kernel tricks. Treating the sub-network and the SVM classifier as a whole, the joint training of DNMSVM uses gradient descent to optimize the objective function, with the sub-network layer-wise pre-trained via contrastive divergence learning of restricted Boltzmann machines. Compared to the separate training of NEUROSVM, the joint training gives DNMSVM advantages over NEUROSVM. Experimental results show that DNMSVM can outperform NEUROSVM and RBFSVM (i.e., SVM with a radial basis function kernel), demonstrating its effectiveness. Copyright © 2017 Elsevier Ltd. All rights reserved.
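The joint contrastive-divergence/gradient-descent training of DNMSVM is beyond a short sketch, but the separate-training NEUROSVM baseline it improves on can be illustrated with scikit-learn: train an MLP, treat its hidden layers as an explicit kernel mapping, and fit an SVM on the mapped features. The layer sizes and dataset are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)

# Stage 1: train an MLP end-to-end; its hidden layers act as the kernel mapping.
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), activation="relu",
                    max_iter=2000, random_state=0).fit(X, y)

def hidden_features(mlp, X):
    """Forward pass through the hidden layers only (explicit kernel mapping)."""
    h = X
    for W, b in zip(mlp.coefs_[:-1], mlp.intercepts_[:-1]):
        h = np.maximum(h @ W + b, 0.0)  # ReLU, matching activation="relu"
    return h

# Stage 2: train a linear SVM on the last hidden layer's output, separately.
svm = SVC(kernel="linear").fit(hidden_features(mlp, X), y)
print("train accuracy:", svm.score(hidden_features(mlp, X), y))
```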
Kernel machines for epilepsy diagnosis via EEG signal classification: a comparative study.
Lima, Clodoaldo A M; Coelho, André L V
2011-10-01
We carry out a systematic assessment of a suite of kernel-based learning machines for the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. We first quantitatively assess the impact of the choice of wavelet basis on the quality of the extracted features; four wavelet basis functions were considered in this study. Then, we provide the average cross-validation accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, from which one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems less relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged across all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Watkins, Charles E; Berman, Julian H
1956-01-01
This report treats the Kernel function of the integral equation that relates a known or prescribed downwash distribution to an unknown lift distribution for harmonically oscillating wings in supersonic flow. The treatment is essentially an extension to supersonic flow of the treatment given in NACA report 1234 for subsonic flow. For the supersonic case the Kernel function is derived by use of a suitable form of acoustic doublet potential which employs a cutoff or Heaviside unit function. The Kernel functions are reduced to forms that can be accurately evaluated by considering the functions in two parts: a part in which the singularities are isolated and analytically expressed, and a nonsingular part which can be tabulated.
Spinor Structure and Internal Symmetries
NASA Astrophysics Data System (ADS)
Varlamov, V. V.
2015-10-01
Spinor structure and internal symmetries are considered within one theoretical framework based on the generalized spin and an abstract Hilbert space. Complex momentum is understood as a generating kernel of the underlying spinor structure. It is shown that tensor products of biquaternion algebras are associated with each irreducible representation of the Lorentz group. Space-time discrete symmetries P, T and their combination PT are generated by the fundamental automorphisms of this algebraic background (Clifford algebras). Charge conjugation C is represented by a pseudoautomorphism of the complex Clifford algebra. This description of the operation C allows one to distinguish charged and neutral particles, including particle-antiparticle interchange and truly neutral particles. Spin and charge multiplets, based on the interlocking representations of the Lorentz group, are introduced. A central point of the work is a correspondence between the Wigner definition of an elementary particle as an irreducible representation of the Poincaré group and the SU(3) description (quark scheme) of the particle as a vector of the supermultiplet (an irreducible representation of SU(3)). This correspondence is realized on the ground of a spin-charge Hilbert space. Basic hadron supermultiplets of SU(3) theory (the baryon octet and two meson octets) are studied in this framework. It is shown that quark phenomenologies are naturally incorporated into the presented scheme. The relationship between mass and spin is established. The introduced spin-mass formula, combined with the Gell-Mann-Okubo mass formula, allows one to take a new look at the problem of the mass spectrum of elementary particles.
NASA Technical Reports Server (NTRS)
Huang, Norden E.
1999-01-01
A new method for analyzing nonlinear and nonstationary data has been developed. The key part of the method is the Empirical Mode Decomposition method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMF). An IMF is defined as any function having the same numbers of zero-crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima, respectively. The IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time that give sharp identifications of embedded structures. The final presentation of the results is an energy-frequency-time distribution, designated as the Hilbert Spectrum. An example of the application of this method to earthquake and building response is given. The results indicate that low-frequency components, totally missed by Fourier analysis, are clearly identified by the new method. Comparisons with wavelet and windowed Fourier analysis show that the new method offers much better temporal and frequency resolution.
On Replacing "Quantum Thinking" with Counterfactual Reasoning
NASA Astrophysics Data System (ADS)
Narens, Louis
The probability theory used in quantum mechanics is currently being employed by psychologists to model the impact of context on decisions. Its event space consists of closed subspaces of a Hilbert space, and its probability function sometimes violates the law of finite additivity of probabilities. Results from the quantum mechanics literature indicate that such a "Hilbert space probability theory" cannot be extended in a useful way to standard, finitely additive, probability theory by the addition of new events with specific probabilities. This chapter presents a new kind of probability theory that shares many fundamental algebraic characteristics with Hilbert space probability theory but does extend to standard probability theory by adjoining new events with specific probabilities. The new probability theory arises from considerations about how psychological experiments are related through counterfactual reasoning.
Frequency hopping signal detection based on wavelet decomposition and Hilbert-Huang transform
NASA Astrophysics Data System (ADS)
Zheng, Yang; Chen, Xihao; Zhu, Rui
2017-07-01
Frequency hopping (FH) signals are widely adopted in military communications as a kind of low-probability-of-interception signal. It is therefore important to research FH signal detection algorithms. Existing detection algorithms for FH signals based on time-frequency analysis cannot satisfy the time and frequency resolution requirements at the same time, owing to the influence of the window function. To solve this problem, an algorithm based on wavelet decomposition and the Hilbert-Huang transform (HHT) is proposed. The proposed algorithm removes the noise from the received signals by wavelet decomposition and detects the FH signals by the Hilbert-Huang transform. Simulation results show that the proposed algorithm takes into account both the time resolution and the frequency resolution. Correspondingly, the accuracy of FH signal detection can be improved.
NASA Technical Reports Server (NTRS)
Bennett, Robert M.; Batina, John T.
1989-01-01
The application and assessment of a computer program called CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) for flutter predictions are described. Flutter calculations are presented for two thin swept-and-tapered wing planforms with well-defined modal properties. One planform is a series of 45-degree swept wings and the other is a clipped delta wing. Comparisons are made between the results of CAP-TSD, using the linear equation with no airfoil thickness, and the results obtained from a subsonic kernel function analysis. The calculations cover a Mach number range from low subsonic to low supersonic values, including the transonic range, and are compared with subsonic linear theory and experimental data. Since both wings have very thin airfoil sections, the effects of thickness are minimal.
A Riemann-Hilbert Approach to Complex Sharma-Tasso-Olver Equation on Half Line*
NASA Astrophysics Data System (ADS)
Zhang, Ning; Xia, Tie-Cheng; Hu, Bei-Bei
2017-11-01
In this paper, the Fokas unified method is used to analyze the initial-boundary value problem for a complex Sharma-Tasso-Olver (cSTO) equation on the half line. We show that the solution can be expressed in terms of the solution of a Riemann-Hilbert problem. The relevant jump matrices are explicitly given in terms of the matrix-valued spectral functions {a(λ), b(λ)} and {A(λ), B(λ)}, which depend on the initial data u_0(x) = u(x, 0) and the boundary data g_0(y) = u(0, y), g_1(y) = u_x(0, y), g_2(y) = u_xx(0, y). These spectral functions are not independent; they satisfy a global relation.
Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.
Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe
2018-02-19
Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Several computational methods have therefore been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well for predicting complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is however an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. Then, we combine the Min kernel or its normalized form with one of the pairwise kernels by plugging the former into the latter. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers. We evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers using a machine-learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs. non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles, and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state of the art.
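The kernels named above have compact standard forms. The sketch below assumes the Min kernel and the two pairwise kernels follow their usual definitions from the pairwise-kernel literature; the paper's exact normalization of the Min kernel may differ.

```python
import numpy as np

def min_kernel(x, y):
    """Min kernel for nonnegative feature vectors."""
    return float(np.minimum(x, y).sum())

def normalized_min(x, y):
    """One common normalization of the Min kernel (an assumption here)."""
    return min_kernel(x, y) / np.sqrt(min_kernel(x, x) * min_kernel(y, y))

def tppk(k, a, b, c, d):
    """Tensor Product Pairwise Kernel between pairs (a, b) and (c, d)."""
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

def mlpk(k, a, b, c, d):
    """Metric Learning Pairwise Kernel between pairs (a, b) and (c, d)."""
    return (k(a, c) - k(a, d) - k(b, c) + k(b, d)) ** 2

# Toy protein feature vectors (e.g., phylogenetic-profile counts).
p1, p2, p3, p4 = (np.array(v, dtype=float) for v in
                  ([1, 0, 2], [0, 1, 1], [2, 1, 0], [1, 1, 1]))
print(tppk(normalized_min, p1, p2, p3, p4),
      mlpk(normalized_min, p1, p2, p3, p4))
```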
NASA Astrophysics Data System (ADS)
Hellgren, Maria; Gross, E. K. U.
2013-11-01
We present a detailed study of the exact-exchange (EXX) kernel of time-dependent density-functional theory with an emphasis on its discontinuity at integer particle numbers. It was recently found that this exact property leads to sharp peaks and step features in the kernel that diverge in the dissociation limit of diatomic systems [Hellgren and Gross, Phys. Rev. A 85, 022514 (2012)]. To further analyze the discontinuity of the kernel, we here make use of two different approximations to the EXX kernel: the Petersilka-Gossmann-Gross (PGG) approximation and a common energy denominator approximation (CEDA). It is demonstrated that whereas the PGG approximation neglects the discontinuity, the CEDA includes it explicitly. By studying model molecular systems it is shown that the so-called field-counteracting effect in the density-functional description of molecular chains can be viewed in terms of the discontinuity of the static kernel. The role of the frequency dependence is also investigated, highlighting its importance for long-range charge-transfer excitations as well as inner-shell excitations.
NASA Astrophysics Data System (ADS)
Man'ko, V. I.; Markovich, L. A.
2018-02-01
Quantum correlations in the state of a four-level atom are investigated by using generic unitary transforms of the classical (diagonal) density matrix. Particular cases of the pure state, the X-state, and the Werner state are studied in detail. The geometrical meaning of unitary Hilbert reference-frame rotations generating entanglement in the initially separable state is discussed. Characteristics of the entanglement, in terms of concurrence, entropy, and negativity, are obtained as functions of the unitary matrix rotating the reference frame.
New gravitational solutions via a Riemann-Hilbert approach
NASA Astrophysics Data System (ADS)
Cardoso, G. L.; Serra, J. C.
2018-03-01
We consider the Riemann-Hilbert factorization approach to solving the field equations of dimensionally reduced gravity theories. First we prove that functions belonging to a certain class possess a canonical factorization due to properties of the underlying spectral curve. Then we use this result, together with appropriate matricial decompositions, to study the canonical factorization of non-meromorphic monodromy matrices that describe deformations of seed monodromy matrices associated with known solutions. This results in new solutions, with unusual features, to the field equations.
A new discriminative kernel from probabilistic models.
Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert
2002-10-01
Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; von Davier, Alina A.
2008-01-01
The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
Gabor-based kernel PCA with fractional power polynomial models for face recognition.
Liu, Chengjun
2004-05-01
This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
Kernel Machine SNP-set Testing under Multiple Candidate Kernels
Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.
2013-01-01
Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
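A composite kernel in the sense described, a nonnegative combination of candidate Gram matrices, is simple to form; the equal-weight choice below is an illustrative default, not the paper's calibrated procedure.

```python
import numpy as np

def composite_kernel(kernels, weights=None):
    """Convex combination of candidate kernel (Gram) matrices.

    A composite kernel hedges against a poor single choice: if each K_k is
    positive semidefinite, any nonnegative combination is a valid kernel.
    """
    kernels = np.asarray(kernels, dtype=float)        # shape (m, n, n)
    if weights is None:
        weights = np.full(len(kernels), 1.0 / len(kernels))
    return np.tensordot(weights, kernels, axes=1)

# Toy example: average a linear kernel and an RBF kernel over the same genotypes.
G = np.random.default_rng(1).integers(0, 3, size=(10, 50)).astype(float)
K_lin = G @ G.T
sq = ((G[:, None, :] - G[None, :, :]) ** 2).sum(-1)
K_rbf = np.exp(-sq / G.shape[1])
K = composite_kernel([K_lin / K_lin.max(), K_rbf])
```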
The Hilbert-Huang Transform-Based Denoising Method for the TEM Response of a PRBS Source Signal
NASA Astrophysics Data System (ADS)
Hai, Li; Guo-qiang, Xue; Pan, Zhao; Hua-sen, Zhong; Khan, Muhammad Younis
2016-08-01
The denoising process is critical in processing transient electromagnetic (TEM) sounding data. For the full-waveform pseudo-random binary sequence (PRBS) response, an inadequate noise estimation may result in an erroneous interpretation. We consider the Hilbert-Huang transform (HHT) and its application to suppress the noise in the PRBS response. The focus is on the thresholding scheme to suppress the noise and on the analysis of the signal based on its Hilbert time-frequency representation. The method first decomposes the signal into intrinsic mode functions and then, inspired by the thresholding scheme in wavelet analysis, applies an adaptive interval thresholding that sets to zero all components of the intrinsic mode functions that fall below a threshold related to the noise level. The algorithm is based on the characteristics of the PRBS response. The HHT-based denoising scheme is tested on synthetic and field data with different noise levels. The results show that the proposed method has a good capability for denoising and detail preservation.
A New Scheme for the Design of Hilbert Transform Pairs of Biorthogonal Wavelet Bases
NASA Astrophysics Data System (ADS)
Shi, Hongli; Luo, Shuqian
2010-12-01
In designing Hilbert transform pairs of biorthogonal wavelet bases, it has been shown that the requirements of equal-magnitude responses and a half-sample phase offset on the lowpass filters constitute the necessary and sufficient condition. In this paper, the relationship between the phase offset and the vanishing-moment difference of biorthogonal scaling filters is derived, which implies a simple way to choose the vanishing moments so that the phase response requirement is satisfied structurally. The magnitude response requirement is approximately achieved by a constrained optimization procedure, where the objective function and constraints are all expressed in terms of the auxiliary filters of the scaling filters rather than the scaling filters directly. Generally, the computational burden of the design will be less than that of current schemes. The integral of the magnitude response difference between the primal and dual scaling filters is chosen as the objective function, which expresses the magnitude response requirement over the whole frequency range. Two design examples illustrate that the biorthogonal wavelet bases designed by the proposed scheme are very close to Hilbert transform pairs.
A dynamic kernel modifier for linux
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minnich, R. G.
2002-09-03
Dynamic Kernel Modifier, or DKM, is a kernel module for Linux that allows user-mode programs to modify the execution of functions in the kernel without recompiling or modifying the kernel source in any way. Functions may be traced, either function entry only or function entry and exit; nullified; or replaced with some other function. For the tracing case, function execution results in the activation of a watchpoint. When the watchpoint is activated, the address of the function is logged in a FIFO buffer that is readable by external applications. The watchpoints are time-stamped with the resolution of the processor's high-resolution timers, which on most modern processors are accurate to a single processor tick. DKM is very similar to earlier systems such as the SunOS trace device or Linux TT. Unlike these two systems, and other similar systems, DKM requires no kernel modifications. DKM allows users to do initial probing of the kernel to look for performance problems, or even to resolve potential problems by turning functions off or replacing them. DKM watchpoints are not without cost: it takes about 200 nanoseconds to make a log entry on an 800 MHz Pentium III. The overhead numbers are actually competitive with other hardware-based trace systems, although DKM has less accuracy than an In-Circuit Emulator such as the American Arium. Once the user has zeroed in on a problem, other mechanisms with a higher degree of accuracy can be used.
Connes distance function on fuzzy sphere and the connection between geometry and statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devi, Yendrembam Chaoba, E-mail: chaoba@bose.res.in; Chakraborty, Biswajit, E-mail: biswajit@bose.res.in; Prajapat, Shivraj, E-mail: shraprajapat@gmail.com
An algorithm to compute the Connes spectral distance, adaptable to the Hilbert-Schmidt operatorial formulation of non-commutative quantum mechanics, was developed earlier by introducing the appropriate spectral triple, and was used to compute infinitesimal distances in the Moyal plane, revealing a deep connection between geometry and statistics. In this paper, using the same algorithm, the Connes spectral distance is calculated in the Hilbert-Schmidt operatorial formulation for the fuzzy sphere, whose spatial coordinates satisfy the su(2) algebra. This has been computed for both the discrete states and Perelomov's SU(2) coherent states. Here also, we find a connection between geometry and statistics, shown by computing the infinitesimal distance between mixed states on the quantum Hilbert space of a particular fuzzy sphere, indexed by n ∈ ℤ/2.
Semiclassical propagation: Hilbert space vs. Wigner representation
NASA Astrophysics Data System (ADS)
Gottwald, Fabian; Ivanov, Sergei D.
2018-03-01
A unified viewpoint on the van Vleck and Herman-Kluk propagators in Hilbert space and their recently developed counterparts in Wigner representation is presented. Based on this viewpoint, the Wigner Herman-Kluk propagator is conceptually the most general one. Nonetheless, the respective semiclassical expressions for expectation values in terms of the density matrix and the Wigner function are mathematically proven here to coincide. The only remaining difference is a mere technical flexibility of the Wigner version in choosing the Gaussians' width for the underlying coherent states beyond minimal uncertainty. This flexibility is investigated numerically on prototypical potentials and it turns out to provide neither qualitative nor quantitative improvements. Given the aforementioned generality, utilizing the Wigner representation for semiclassical propagation thus leads to the same performance as employing the respective most-developed (Hilbert-space) methods for the density matrix.
NASA Technical Reports Server (NTRS)
Cunningham, A. M., Jr.
1973-01-01
The method presented uses a collocation technique with the nonplanar kernel function to solve supersonic lifting surface problems with and without interference. A set of pressure functions are developed based on conical flow theory solutions which account for discontinuities in the supersonic pressure distributions. These functions permit faster solution convergence than is possible with conventional supersonic pressure functions. An improper integral of a 3/2 power singularity along the Mach hyperbola of the nonplanar supersonic kernel function is described and treated. The method is compared with other theories and experiment for a variety of cases.
2013-01-01
Background: Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information, capable of capturing complex genetic network architectures, is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial-distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel.
Results: We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial-distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible.
Conclusions: The ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black-box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
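For intuition, a diffusion kernel can be formed for any graph over discrete inputs as the matrix exponential of the negated graph Laplacian. This generic construction is a sketch; the paper's specific embedding of SNP genotype configurations may differ.

```python
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(adjacency, beta=0.5):
    """Diffusion kernel K = exp(-beta * L) over a graph of discrete inputs.

    L is the graph Laplacian; beta controls how far similarity diffuses.
    """
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return expm(-beta * L)

# Toy example: a path graph over 5 genotype configurations.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
K = diffusion_kernel(A, beta=0.8)
print(np.round(K, 3))
```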
Classification With Truncated Distance Kernel.
Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas
2018-05-01
This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data, and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems that require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pre-given parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross-validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
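A sketch of the TL1 kernel as a callable Gram-matrix builder for scikit-learn's SVC. The form K(x, y) = max(rho - ||x - y||_1, 0) follows the brief's description; the default rho here is arbitrary rather than the paper's recommended setting.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def tl1_kernel(X, Y, rho=2.0):
    """Truncated L1-distance kernel: K(x, y) = max(rho - ||x - y||_1, 0).

    rho is the truncation radius; its best setting is problem-dependent.
    The kernel is not positive semidefinite, but SVC still accepts it as a
    callable that returns the Gram matrix between X and Y.
    """
    d1 = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=2)
    return np.maximum(rho - d1, 0.0)

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)
clf = SVC(kernel=tl1_kernel)
print(cross_val_score(clf, X, y, cv=5).mean())
```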
Online selective kernel-based temporal difference learning.
Chen, Xingguo; Gao, Yang; Wang, Ruili
2013-12-01
In this paper, an online selective kernel-based temporal difference (OSKTD) learning algorithm is proposed to deal with large-scale and/or continuous reinforcement learning problems. OSKTD includes two online procedures: online sparsification and parameter updating for the selective kernel-based value function. A new sparsification method (i.e., a kernel-distance-based online sparsification method) is proposed based on selective ensemble learning, which is computationally less complex than other sparsification methods. With the proposed sparsification method, the sparsified dictionary of samples is constructed online by checking whether a sample needs to be added to the sparsified dictionary. In addition, based on local validity, a selective kernel-based value function is proposed to select the best samples from the sample dictionary for the selective kernel-based value function approximator. The parameters of the selective kernel-based value function are iteratively updated by using the temporal difference (TD) learning algorithm combined with the gradient descent technique. The complexity of the online sparsification procedure in the OSKTD algorithm is O(n). In addition, two typical experiments (Maze and Mountain Car) are used to compare with both traditional and up-to-date O(n) algorithms (GTD, GTD2, and TDC using the kernel-based value function), and the results demonstrate the effectiveness of our proposed algorithm. In the Maze problem, OSKTD converges to an optimal policy and converges faster than both traditional and up-to-date algorithms. In the Mountain Car problem, OSKTD converges, requires less computation time than other sparsification methods, reaches a better local optimum than the traditional algorithms, and converges much faster than the up-to-date algorithms. In addition, OSKTD can reach a competitive ultimate optimum compared with the up-to-date algorithms.
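The kernel-distance-based online sparsification step admits a compact sketch: a sample joins the dictionary only if its feature-space distance to every stored sample exceeds a threshold. The rule below is one plausible reading of the procedure, with mu a free parameter.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def sparsify_online(samples, mu=0.1, kernel=rbf):
    """Kernel-distance-based online sparsification (a sketch).

    A new sample joins the dictionary only if its squared feature-space
    distance to every stored sample exceeds mu.
    """
    dictionary = []
    for x in samples:
        if not dictionary:
            dictionary.append(x)
            continue
        # squared feature-space distance: k(x,x) - 2 k(x,d) + k(d,d)
        d2 = min(kernel(x, x) - 2 * kernel(x, d) + kernel(d, d)
                 for d in dictionary)
        if d2 > mu:
            dictionary.append(x)
    return dictionary

states = np.random.default_rng(2).uniform(-1, 1, size=(500, 2))
print(len(sparsify_online(states, mu=0.3)))
```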
Noise kernels of stochastic gravity in conformally-flat spacetimes
NASA Astrophysics Data System (ADS)
Cho, H. T.; Hu, B. L.
2015-03-01
The central object in the theory of semiclassical stochastic gravity is the noise kernel, which is the symmetric two point correlation function of the stress-energy tensor. Using the corresponding Wightman functions in Minkowski, Einstein and open Einstein spaces, we construct the noise kernels of a conformally coupled scalar field in these spacetimes. From them we show that the noise kernels in conformally-flat spacetimes, including the Friedmann-Robertson-Walker universes, can be obtained in closed analytic forms by using a combination of conformal and coordinate transformations.
Gene function prediction with gene interaction networks: a context graph kernel approach.
Li, Xin; Chen, Hsinchun; Li, Jiexun; Zhang, Zhu
2010-01-01
Predicting gene functions is a challenge for biologists in the postgenomic era. Interactions among genes and their products compose networks that can be used to infer gene functions. Most previous studies adopt a linkage assumption, i.e., they assume that gene interactions indicate functional similarities between connected genes. In this study, we propose to use a gene's context graph, i.e., the gene interaction network associated with the focal gene, to infer its functions. In a kernel-based machine-learning framework, we design a context graph kernel to capture the information in context graphs. Our experimental study on a testbed of p53-related genes demonstrates the advantage of using indirect gene interactions and shows the empirical superiority of the proposed approach over linkage-assumption-based methods, such as the algorithm to minimize inconsistent connected genes and diffusion kernels.
Applications of Hilbert Spectral Analysis for Speech and Sound Signals
NASA Technical Reports Server (NTRS)
Huang, Norden E.
2003-01-01
A new method for analyzing nonlinear and nonstationary data has been developed, and its natural applications are to speech and sound signals. The key part of the method is the Empirical Mode Decomposition method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMF). An IMF is defined as any function having the same numbers of zero-crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima, respectively. The IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time, which give sharp identifications of embedded structures. This method can be used to process all acoustic signals. Specifically, it can process speech signals for speech synthesis, speaker identification and verification, speech recognition, and sound-signal enhancement and filtering. Additionally, the acoustical signals from machinery are essentially the way the machines talk to us: the acoustic signals from a machine, whether sound through air or vibration on the machine itself, can tell us its operating condition. Thus, acoustic signals can be used to diagnose problems in machines.
Hydrothermal treatment of maize: Changes in physical, chemical, and functional properties.
Rocha-Villarreal, Verónica; Hoffmann, Jessica Fernanda; Vanier, Nathan Levien; Serna-Saldivar, Sergio O; García-Lara, Silverio
2018-10-15
The objective of this work was to assess the effects of a traditional parboiling treatment on the physical, chemical, and functional properties of yellow maize kernels. For this, maize kernels were subjected to the three main stages of a traditional parboiling process (soaking, steaming, and drying) at different moisture contents (15%, 25%, or 35%) and different pressure steaming times (0, 15, or 30 min). Kernels were evaluated for physical and chemical changes, while manually generated endosperm fractions were further evaluated for nutritional and functional changes. The parboiling process negatively altered the properties of the maize kernels by increasing the number of kernels with burst pericarp and decreasing the total carotenoid content in the endosperm by 42%. However, the most intense conditions (35% moisture and 30 min steaming) lowered the number of broken kernels by 41% and the number of stress cracks by 36%. Results also demonstrated that soaking enhanced the nutritional value of soaked yellow maize by increasing the thiamine content and the bound phenolic content in the endosperm fraction by up to 102%. Proper implementation of this hydrothermal treatment could lead to significant enhancements in the nutritional value and functionality of maize products. Copyright © 2018 Elsevier Ltd. All rights reserved.
Application of kernel method in fluorescence molecular tomography
NASA Astrophysics Data System (ADS)
Zhao, Yue; Baikejiang, Reheman; Li, Changqing
2017-02-01
Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem, and anatomical guidance can improve FMT reconstruction efficiently. We have developed a kernel method to introduce anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For finite-element-based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. The fluorophore concentration at each node is then represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix; the kernel coefficient vector becomes the unknown to be reconstructed by a standard iterative reconstruction process. We thus convert the FMT reconstruction problem into a kernel coefficient reconstruction problem, from which the desired fluorophore concentration at each node can be calculated. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance is obtained directly from the anatomical image and is included in the forward modeling; one advantage is that we do not need to segment the anatomical image into targets and background.
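A minimal numerical sketch of the kernelized forward model described above might look as follows; the matrix shapes, random stand-in data, and the damped LSQR solver (in place of the paper's iterative reconstruction scheme) are all assumptions:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

m, n = 200, 500                         # hypothetical: m measurements, n FE nodes
rng = np.random.default_rng(0)
A = rng.random((m, n))                  # sensitivity matrix of the forward model
K = np.exp(-rng.random((n, n)))         # kernel matrix from anatomical features

AK = A @ K                              # new system matrix: y = (A K) alpha
y = rng.random(m)                       # synthetic measured fluorescence

alpha = lsqr(AK, y, damp=0.1)[0]        # damped least squares as a stand-in solver
x = K @ alpha                           # fluorophore concentration at each node
```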
Geographically weighted regression model on poverty indicator
NASA Astrophysics Data System (ADS)
Slamet, I.; Nugroho, N. F. T. A.; Muslich
2017-12-01
In this research, we applied geographically weighted regression (GWR) to analyze poverty in Central Java, considering a Gaussian kernel as the weighting function. GWR uses the diagonal matrix resulting from the Gaussian kernel calculation as the weight matrix in the regression model. The kernel weights are used to handle spatial effects in the data so that a model can be obtained for each location. The purpose of this paper is to model the poverty percentage data in Central Java province using GWR with a Gaussian kernel weighting function and to determine the influencing factors in each regency/city of the province. Based on this research, we obtained a geographically weighted regression model with a Gaussian kernel weighting function for the poverty percentage data in Central Java province. We found that the percentage of the population working as farmers, the population growth rate, the percentage of households with regular sanitation, and the number of BPJS beneficiaries are the variables that affect the percentage of poverty in Central Java province. The coefficient of determination R2 is 68.64%. The districts fall into two categories, influenced by different significant factors.
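A minimal sketch of GWR with Gaussian kernel weights, solving one weighted least-squares problem per location (the toy coordinates, bandwidth, and data below are assumptions):

```python
import numpy as np

def gwr_coefficients(X, y, coords, bandwidth):
    # One weighted least-squares fit per location, with Gaussian kernel
    # weights w_i = exp(-0.5 * (d_i / bandwidth)^2) on inter-point distances.
    betas = np.empty((len(y), X.shape[1]))
    for i in range(len(y)):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)
        W = np.diag(w)
        betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return betas

# toy data: intercept plus one predictor for 50 locations
rng = np.random.default_rng(1)
coords = rng.random((50, 2))
X = np.column_stack([np.ones(50), rng.random(50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.1, 50)
print(gwr_coefficients(X, y, coords, bandwidth=0.3)[0])   # local coefficients at site 0
```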
Chen, Lili; Hao, Yaru
2017-01-01
Preterm birth (PTB) is the leading cause of perinatal mortality and long-term morbidity, which results in significant health and economic problems. The early detection of PTB has great significance for its prevention. The electrohysterogram (EHG), related to uterine contraction, is a noninvasive, real-time, and automatic novel technology which can be used to detect, diagnose, or predict PTB. This paper presents a method for feature extraction and classification of EHG between the pregnancy and labour groups, based on the Hilbert-Huang transform (HHT) and the extreme learning machine (ELM). For each sample, each channel was decomposed into a set of intrinsic mode functions (IMFs) using empirical mode decomposition (EMD). The Hilbert transform was then applied to each IMF to obtain its analytic function, and the maximum amplitude of the analytic function was extracted as a feature. The identification model was constructed based on ELM. Experimental results reveal that the best classification performance of the proposed method reaches an accuracy of 88.00%, a sensitivity of 91.30%, and a specificity of 85.19%; the area under the receiver operating characteristic (ROC) curve is 0.88. These results indicate that the method developed in this work could be effective in classifying EHG between the pregnancy and labour groups.
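The HHT feature extraction aside, the ELM classifier itself admits a compact sketch: a random hidden layer followed by a closed-form least-squares solve for the output weights. The layer sizes, activation, and synthetic features below are illustrative assumptions:

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    # Random hidden layer; output weights from a single least-squares solve.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy usage: rows are HHT-derived feature vectors (random stand-ins here)
X = np.random.default_rng(2).random((30, 4))   # e.g., max analytic amplitude per IMF
y = (X.sum(axis=1) > 2).astype(float)          # synthetic pregnancy/labour labels
W, b, beta = elm_train(X, y)
pred = (elm_predict(X, W, b, beta) > 0.5).astype(float)
```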
Guo, Qi; Shen, Shu-Ting
2016-04-29
There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with no-flux boundary conditions. The reproducing kernel method has significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivity to singularities; therefore, studying the application of the reproducing kernel is advantageous. The objective is to apply new mathematical theory to the numerical solution of the ventricular muscle model so as to improve its precision in comparison with current methods. A two-dimensional reproducing kernel function is constructed in the appropriate function space and applied to computing the solution of the two-dimensional cardiac tissue model, using the difference method in time and the reproducing kernel method in space. Compared with other methods, this method holds several advantages, such as high accuracy of the computed solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and the location and density of the solution nodes can be changed arbitrarily on different time layers. The reproducing kernel method thus offers higher accuracy and stability in solving the two-dimensional cardiac tissue model.
Oscillatory supersonic kernel function method for interfering surfaces
NASA Technical Reports Server (NTRS)
Cunningham, A. M., Jr.
1974-01-01
In the method presented in this paper, a collocation technique is used with the nonplanar supersonic kernel function to solve multiple lifting surface problems with interference in steady or oscillatory flow. The pressure functions used are based on conical flow theory solutions and provide faster solution convergence than is possible with conventional functions. In the application of the nonplanar supersonic kernel function, an improper integral of a 3/2 power singularity along the Mach hyperbola is described and treated. The method is compared with other theories and experiment for two wing-tail configurations in steady and oscillatory flow.
Power Spectral Density and Hilbert Transform
2016-12-01
Keywords: Fourier transform, Hilbert transform, digital filter, SDR. A very good approximation to the ideal Hilbert transform is a low-pass finite impulse response (FIR) filter. In Fig. 7, we show a real signal ... (220), converted to an analytic signal using a 255-tap Hilbert transform low-pass filter. For an ideal Hilbert ...
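As a sketch of the idea, a 255-tap FIR Hilbert transformer can be designed with the Parks-McClellan (remez) algorithm; the band edges, test signal, and sampling rate below are illustrative assumptions, not values from the report:

```python
import numpy as np
from scipy.signal import remez, lfilter

# 255-tap FIR approximation to the ideal Hilbert transformer; passband
# 0.05-0.45 of the sampling rate (band edges are illustrative).
taps = remez(255, [0.05, 0.45], [1.0], type='hilbert')

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 50 * t)

xq = lfilter(taps, 1.0, x)              # quadrature (Hilbert-transformed) component
delay = (len(taps) - 1) // 2            # group delay of the linear-phase FIR
analytic = x[:-delay] + 1j * xq[delay:] # align and form the analytic signal
envelope = np.abs(analytic)
```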
Bayne, Michael G; Scher, Jeremy A; Ellis, Benjamin H; Chakraborty, Arindam
2018-05-21
Electron-hole or quasiparticle representation plays a central role in describing electronic excitations in many-electron systems. For charge-neutral excitation, the electron-hole interaction kernel is the quantity of interest for calculating important excitation properties such as optical gap, optical spectra, electron-hole recombination and electron-hole binding energies. The electron-hole interaction kernel can be formally derived from the density-density correlation function using both Green's function and TDDFT formalism. The accurate determination of the electron-hole interaction kernel remains a significant challenge for precise calculations of optical properties in the GW+BSE formalism. From the TDDFT perspective, the electron-hole interaction kernel has been viewed as a path to systematic development of frequency-dependent exchange-correlation functionals. Traditional approaches, such as MBPT formalism, use unoccupied states (which are defined with respect to Fermi vacuum) to construct the electron-hole interaction kernel. However, the inclusion of unoccupied states has long been recognized as the leading computational bottleneck that limits the application of this approach for larger finite systems. In this work, an alternative derivation that avoids using unoccupied states to construct the electron-hole interaction kernel is presented. The central idea of this approach is to use explicitly correlated geminal functions for treating electron-electron correlation for both ground and excited state wave functions. Using this ansatz, it is derived using both diagrammatic and algebraic techniques that the electron-hole interaction kernel can be expressed only in terms of linked closed-loop diagrams. It is proved that the cancellation of unlinked diagrams is a consequence of linked-cluster theorem in real-space representation. The electron-hole interaction kernel derived in this work was used to calculate excitation energies in many-electron systems and results were found to be in good agreement with the EOM-CCSD and GW+BSE methods. The numerical results highlight the effectiveness of the developed method for overcoming the computational barrier of accurately determining the electron-hole interaction kernel to applications of large finite systems such as quantum dots and nanorods.
Yi, Cai; Lin, Jianhui; Zhang, Weihua; Ding, Jianming
2015-01-01
As train loads and travel speeds have increased over time, railway axle bearings have become critical elements requiring more efficient non-destructive inspection and fault diagnostics methods. This paper presents a novel adaptive procedure based on ensemble empirical mode decomposition (EEMD) and the Hilbert marginal spectrum for multi-fault diagnostics of axle bearings. EEMD overcomes the restrictive assumptions about the data and the computational effort that limit the application of conventional signal processing techniques. The outputs of this adaptive approach are the intrinsic mode functions, which are treated with the Hilbert transform in order to obtain the Hilbert instantaneous frequency spectrum and marginal spectrum. However, not all the IMFs obtained by the decomposition should be included in the Hilbert marginal spectrum. The IMF confidence-index algorithm proposed in this paper is fully autonomous, overcoming the major limitation of selection by an experienced user, and allows the development of on-line tools. The effectiveness of the improvement is proven by the successful diagnosis of an axle bearing with a single fault or multiple composite faults, e.g., outer ring fault, cage fault and pin roller fault. PMID:25970256
Hilbert-Huang transform analysis of dynamic and earthquake motion recordings
Zhang, R.R.; Ma, S.; Safak, E.; Hartzell, S.
2003-01-01
This study examines the rationale of the Hilbert-Huang transform (HHT) for analyzing dynamic and earthquake motion recordings in studies of seismology and engineering. In particular, this paper first provides the fundamentals of the HHT method, which consist of the empirical mode decomposition (EMD) and the Hilbert spectral analysis. It then uses the HHT to analyze recordings of hypothetical and real wave motion, and the results are compared with those obtained by the Fourier data processing technique. The analysis of the two recordings indicates that the HHT method is able to extract some motion characteristics useful in studies of seismology and engineering which might not be exposed effectively and efficiently by the Fourier technique. Specifically, the study indicates that the decomposed components in the EMD of HHT, namely the intrinsic mode function (IMF) components, contain observable, physical information inherent to the original data. It also shows that the grouped IMF components, namely the EMD-based low- and high-frequency components, can faithfully capture low-frequency pulse-like as well as high-frequency wave signals. Finally, the study illustrates that the HHT-based Hilbert spectra are able to reveal the temporal-frequency energy distribution of motion recordings precisely and clearly.
Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen
2014-09-01
For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, current kernel learning approaches are based on local optimization techniques and struggle to achieve good time performance, especially for large datasets, so existing algorithms cannot easily be extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method that solves a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter, and the objective programming problem can then be converted to an SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which does not need to repeat the search procedure with different starting points to locate the best local minimum. The proposed method can also be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time efficiency and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
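The kernel target alignment criterion mentioned above has a simple closed form; the sketch below evaluates it over a crude grid of Gaussian widths (the grid search is a stand-in for the paper's SSGO/outer-approximation procedure, and the data are synthetic):

```python
import numpy as np

def kernel_target_alignment(K, y):
    # A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F)
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

def gaussian_gram(X, sigma):
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.random((40, 3))
y = np.sign(rng.random(40) - 0.5)       # synthetic +/-1 labels
for sigma in (0.1, 0.5, 1.0, 2.0):      # crude grid in place of global optimization
    print(sigma, kernel_target_alignment(gaussian_gram(X, sigma), y))
```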
Numerical optimization in Hilbert space using inexact function and gradient evaluations
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high-accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since their function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as to the finite-dimensional problems normally seen in the trust region literature. The conditions on the allowable errors are remarkably relaxed; in particular, the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating the gradient error and improving the approximation is also presented.
Interference in the classical probabilistic model and its representation in complex Hilbert space
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei Yu.
2005-10-01
The notion of a context (a complex of physical conditions, that is to say, a specification of the measurement setup) is basic in this paper. We show that the main structures of quantum theory (interference of probabilities, Born's rule, complex probabilistic amplitudes, Hilbert state space, representation of observables by operators) are already present in a latent form in the classical Kolmogorov probability model. However, this model should be considered as a calculus of contextual probabilities. In our approach it is forbidden to consider abstract context-independent probabilities: "first context and only then probability". We construct the representation of the general contextual probabilistic dynamics in the complex Hilbert space. Thus the dynamics of the wave function (in particular, Schrödinger's dynamics) can be considered as a Hilbert space projection of a realistic dynamics in a "prespace". The basic condition for representing the prespace dynamics is the law of statistical conservation of energy (conservation of probabilities). In general the Hilbert space projection of the prespace dynamics can be nonlinear and even irreversible (but it is always unitary). Methods developed in this paper can be applied not only to quantum mechanics, but also to classical statistical mechanics. The main quantum-like structures (e.g., interference of probabilities) might be found in some models of classical statistical mechanics. Quantum-like probabilistic behavior can be demonstrated by biological systems; in particular, it was recently found in some psychological experiments.
Cosmic transit and anisotropic models in f(R,T) gravity
NASA Astrophysics Data System (ADS)
Sahu, S. K.; Tripathy, S. K.; Sahoo, P. K.; Nath, A.
2017-06-01
Accelerating cosmological models are constructed in a modified gravity theory dubbed $f(R,T)$ gravity, at the backdrop of an anisotropic Bianchi type-III universe. $f(R,T)$ is a function of the Ricci scalar $R$ and the trace $T$ of the energy-momentum tensor, and it replaces the Ricci scalar in the Einstein-Hilbert action of General Relativity. The models are constructed for two different ways of modifying the Einstein-Hilbert action. Exact solutions of the field equations are obtained by a novel method of integration. We have explored the behaviour of the cosmic transit from a decelerated phase of expansion to an accelerated phase to get the dynamical features of the universe. Within the formalism of the present work, it is found that the modification of the Einstein-Hilbert action does not affect the scale factor; however, the dynamics of the effective dark energy equation of state is significantly affected.
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2001-01-01
A computer implemented method of processing two-dimensional physical signals includes five basic components and the associated presentation techniques of the results. The first component decomposes the two-dimensional signal into one-dimensional profiles. The second component is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF's) from each profile based on local extrema and/or curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the profiles. In the third component, the IMF's of each profile are then subjected to a Hilbert Transform. The fourth component collates the Hilbert transformed IMF's of the profiles to form a two-dimensional Hilbert Spectrum. A fifth component manipulates the IMF's by, for example, filtering the two-dimensional signal by reconstructing the two-dimensional signal from selected IMF(s).
Liquid identification by Hilbert spectroscopy
NASA Astrophysics Data System (ADS)
Lyatti, M.; Divin, Y.; Poppe, U.; Urban, K.
2009-11-01
Fast and reliable identification of liquids is of great importance in, for example, security, biology and the beverage industry. An unambiguous identification of liquids can be made by electromagnetic measurements of their dielectric functions in the frequency range of their main dispersions, but this frequency range, from a few GHz to a few THz, is not covered by any conventional spectroscopy. We have developed a concept of liquid identification based on our new Hilbert spectroscopy and high-Tc Josephson junctions, which can operate at the intermediate range from microwaves to THz frequencies. A demonstration setup has been developed consisting of a polychromatic radiation source and a compact Hilbert spectrometer integrated in a Stirling cryocooler. Reflection polychromatic spectra of various bottled liquids have been measured over the spectral range of 15-300 GHz with a total scanning time down to 0.2 s, and identification of the liquids has been demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y M; Han, B; Xing, L
2016-06-15
Purpose: EPID-based patient-specific quality assurance provides verification of the planning setup and delivery process that phantomless QA and log-file based virtual dosimetry methods cannot achieve. We present a method for EPID-based QA utilizing spatially-variant EPID response kernels that allows for direct calculation of the entrance fluence and 3D phantom dose. Methods: An EPID dosimetry system was utilized for 3D dose reconstruction in a cylindrical phantom for the purposes of end-to-end QA. Monte Carlo (MC) methods were used to generate pixel-specific point-spread functions (PSFs) characterizing the spatially non-uniform EPID portal response in the presence of phantom scatter. The spatially-variant PSFs were decomposed into spatially-invariant basis PSFs, with the symmetric central-axis kernel as the primary basis kernel and off-axis kernels representing orthogonal perturbations in pixel-space. This compact and accurate characterization enables the use of a modified Richardson-Lucy deconvolution algorithm to directly reconstruct entrance fluence from EPID images without iterative scatter subtraction. High-resolution phantom dose kernels were cogenerated in MC with the PSFs, enabling direct recalculation of the resulting phantom dose by rapid forward convolution once the entrance fluence was calculated. A Delta4 QA phantom was used to validate the dose reconstructed in this approach. Results: The spatially-invariant representation of the EPID response accurately reproduced the entrance fluence with >99.5% fidelity with a simultaneous reduction of >60% in computational overhead. 3D dose for 10^6 voxels was reconstructed for the entire phantom geometry. A 3D global gamma analysis demonstrated a >95% pass rate at 3%/3mm. Conclusion: Our approach demonstrates the capabilities of an EPID-based end-to-end QA methodology that is more efficient than traditional EPID dosimetry methods. Displacing the point of measurement external to the QA phantom reduces the necessary complexity of the phantom itself while offering a method that is highly scalable and inherently generalizable to rotational and trajectory based deliveries. This research was partially supported by Varian.
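As a hedged illustration of the deconvolution step (plain Richardson-Lucy with a single spatially-invariant PSF, not the paper's modified, basis-decomposed variant), a sketch might look like:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(measured, psf, n_iter=50):
    # Plain Richardson-Lucy: iteratively re-weight the estimate by the
    # back-projected ratio of measured to re-blurred data.
    psf_flip = psf[::-1, ::-1]
    est = np.full_like(measured, measured.mean())
    for _ in range(n_iter):
        reblurred = fftconvolve(est, psf, mode='same')
        ratio = measured / np.maximum(reblurred, 1e-12)
        est = est * fftconvolve(ratio, psf_flip, mode='same')
    return est

# synthetic usage: blur a random "fluence" with a known PSF, then restore it
psf = np.outer(np.hanning(9), np.hanning(9))
psf /= psf.sum()
fluence = np.random.default_rng(6).random((64, 64))
portal = fftconvolve(fluence, psf, mode='same')
restored = richardson_lucy(portal, psf)
```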
A Classification of Remote Sensing Image Based on Improved Compound Kernels of Svm
NASA Astrophysics Data System (ADS)
Zhao, Jianing; Gao, Wanlin; Liu, Zili; Mou, Guifen; Lu, Lin; Yu, Lina
SVM, which is developed from statistical learning theory, achieves high accuracy in remote sensing (RS) classification even with a small number of training samples, which makes SVM-based RS classification attractive. The traditional RS classification method combines visual interpretation with computer classification; the SVM-based method improves accuracy considerably while saving much of the labor and time spent interpreting images and collecting training samples. Kernel functions play an important part in the SVM algorithm. The method presented here uses an improved compound kernel function and therefore achieves higher classification accuracy on RS images. Moreover, the compound kernel improves the generalization and learning ability of the kernel.
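A hedged sketch of a compound kernel for SVM classification, formed as a convex combination of RBF and polynomial kernels and passed to a precomputed-kernel SVM (the weight, base kernels, and synthetic 8-band data are assumptions, not the paper's exact compound form):

```python
import numpy as np
from sklearn.svm import SVC

def rbf(X, Z, gamma=1.0):
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def poly(X, Z, degree=2, c0=1.0):
    return (X @ Z.T + c0) ** degree

def compound_kernel(X, Z, w=0.7):
    # a convex combination of two positive semi-definite kernels is itself
    # a valid kernel; w balances local (RBF) and global (polynomial) behavior
    return w * rbf(X, Z) + (1 - w) * poly(X, Z)

rng = np.random.default_rng(0)
X = rng.random((60, 8))                 # e.g., 8-band pixel spectra
y = rng.integers(0, 4, 60)              # 4 synthetic land-cover classes
clf = SVC(kernel='precomputed').fit(compound_kernel(X, X), y)
pred = clf.predict(compound_kernel(X, X))   # Gram matrix vs. training samples
```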
Using the Intel Math Kernel Library on Peregrine | High-Performance Computing | NREL
Learn how to use the Intel Math Kernel Library (MKL) with Peregrine system software. Core math functions in MKL include BLAS, LAPACK, ScaLAPACK, sparse solvers, and fast Fourier transforms.
Different kernel functions due to rainfall response from borehole strainmeter in Taiwan
NASA Astrophysics Data System (ADS)
Yen Chen, Chih; Hu, Jyr Ching; LIu, Chi Ching
2014-05-01
In order to understand the mechanisms that induce earthquakes, a project for monitoring fault activity using three-component Gladwin Tensor Strainmeters (GTSM) has been under way since 2003 in Taiwan, one of the most seismically active regions in the world. The observed strain contains several superposed effects, including barometric, tidal, groundwater, precipitation, tectonic, seismic and other irregular noise contributions. After removing the tidal and air pressure responses from the strain, we still find anomalies highly correlated with rainfall on time scales of days. The strain response induced by rainfall can be separated into two parts, as observed in groundwater: a slow response and a quick response. The quick response reflects the strain responding to the load of falling water on the ground surface. A kernel function describes the continuous time-domain response induced by a unit of precipitation. We isolate the quick response from data with the tidal and barometric responses removed, and then calculate the kernel function by a deconvolution method; an averaged kernel function is computed to reduce the noise level. Five of the sites installed by CGS Taiwan were selected to calculate kernel functions for the individual sites. The results show that the rainfall response may differ between environmental settings. For stations sited on gentle terrain, the kernel function of each site shows a similar trend: it rises quickly to a maximum within 1 to 2 hours and then decays gently toward zero over a period of 2-3 days. For sites located beside rivers, however, the kernel function shows a second peak when water collected in the catchment flows past the site, consistent with the hydrograph of the creeks. Moreover, at sites exposed to landslide hazards, such as DARB in ChiaYi, landslides will occur when more rainfall is stored, and the shape of the kernel function is then controlled by landslides and debris flows.
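A minimal sketch of estimating such a rainfall kernel by regularized frequency-domain (Wiener-style) deconvolution; the synthetic hourly series, exponential test kernel, and regularization constant are assumptions, and circular-convolution edge effects are ignored:

```python
import numpy as np

def estimate_kernel(rain, strain, eps=1e-2):
    # Regularized frequency-domain division: H ~ S R* / (|R|^2 + eps),
    # a Wiener-style estimate of the rainfall-to-strain impulse response.
    R = np.fft.rfft(rain)
    S = np.fft.rfft(strain)
    H = S * np.conj(R) / (np.abs(R) ** 2 + eps)
    return np.fft.irfft(H, n=len(rain))

# synthetic test: hourly samples, a known 2-day exponential kernel
t = np.arange(24 * 10)
true_k = np.exp(-t / 48.0)
rain = (np.random.default_rng(3).random(t.size) > 0.9).astype(float)
strain = np.convolve(rain, true_k)[:t.size]
k_est = estimate_kernel(rain, strain)    # should approximate true_k
```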
Lu, Zhao; Sun, Jing; Butts, Kenneth
2014-05-01
Support vector regression for approximating nonlinear dynamic systems is more delicate than the approximation of indicator functions in support vector classification, particularly for systems that involve multitudes of time scales in their sampled data. The kernel used for support vector learning determines the class of functions from which a support vector machine can draw its solution, and the choice of kernel significantly influences the performance of a support vector machine. In this paper, to bridge the gap between wavelet multiresolution analysis and kernel learning, the closed-form orthogonal wavelet is exploited to construct new multiscale asymmetric orthogonal wavelet kernels for linear programming support vector learning. The closed-form multiscale orthogonal wavelet kernel provides a systematic framework to implement multiscale kernel learning via dyadic dilations and also enables us to represent complex nonlinear dynamics effectively. To demonstrate the superiority of the proposed multiscale wavelet kernel in identifying complex nonlinear dynamic systems, two case studies are presented that aim at building parallel models on benchmark datasets. The development of parallel models that address the long-term/mid-term prediction issue is more intricate and challenging than the identification of series-parallel models where only one-step ahead prediction is required. Simulation results illustrate the effectiveness of the proposed multiscale kernel learning.
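A hedged sketch of a translation-invariant multiscale wavelet kernel summed over dyadic dilations; the Mexican-hat mother wavelet and the scale set are assumptions (the paper constructs closed-form orthogonal wavelet kernels instead):

```python
import numpy as np

def mexican_hat(u):
    return (1.0 - u ** 2) * np.exp(-u ** 2 / 2.0)

def multiscale_wavelet_kernel(x, z, scales=(1.0, 2.0, 4.0)):
    # k(x, z) = sum over dyadic scales a of prod_i h((x_i - z_i) / a),
    # a generic multiscale wavelet kernel for support vector learning.
    k = 0.0
    for a in scales:
        k += np.prod(mexican_hat((x - z) / a))
    return k

x = np.array([0.1, 0.4])
z = np.array([0.3, 0.2])
print(multiscale_wavelet_kernel(x, z))
```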
Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F
2012-01-01
Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
Wavelet-based study of valence-arousal model of emotions on EEG signals with LabVIEW.
Guzel Aydin, Seda; Kaya, Turgay; Guler, Hasan
2016-06-01
This paper illustrates wavelet-based feature extraction for emotion assessment using electroencephalogram (EEG) signals through a graphical coding design. A two-dimensional (valence-arousal) emotion model was studied, and different emotions (happy, joy, melancholy, and disgust), stimulated by video clips, were assessed. EEG signals obtained from four subjects were decomposed into five frequency bands (gamma, beta, alpha, theta, and delta) using the "db5" wavelet function, and relative features were calculated to obtain further information. The impact of the emotions according to valence was observed most clearly in the power spectral density of the gamma band. The main objective of this work is not only to investigate the influence of the emotions on different frequency bands but also to overcome the difficulties of text-based programming. This work offers an alternative approach for emotion evaluation through EEG processing. There are a number of methods for emotion recognition, such as wavelet transform-based, Fourier transform-based, and Hilbert-Huang transform-based methods; however, the majority of these methods have been implemented in text-based programming languages. In this study, we proposed and implemented an experimental feature extraction with a graphics-based language, which provides great convenience in bioelectrical signal processing.
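A minimal sketch of the five-level 'db5' decomposition and relative band energies (assuming the pywt package and a 128 Hz sampling rate, for which the dyadic detail bands only roughly match the clinical EEG bands):

```python
import numpy as np
import pywt

fs = 128                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(4)
eeg = rng.normal(size=10 * fs)            # stand-in for a 10 s EEG channel

# 5-level 'db5' decomposition; with fs = 128 Hz the detail levels roughly
# map to gamma (cD1), beta (cD2), alpha (cD3), theta (cD4), delta (cD5 + cA5)
coeffs = pywt.wavedec(eeg, 'db5', level=5)
energies = np.array([np.sum(c ** 2) for c in coeffs])
relative = energies / energies.sum()      # relative wavelet energy per band
```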
Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment
NASA Astrophysics Data System (ADS)
Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty
2017-12-01
Support vector machine (SVM) is a popular classification method known to have strong generalization capabilities. SVM can solve classification and regression problems with either linear or nonlinear kernels. However, SVM also has a weakness: it is difficult to determine the optimal parameter values. SVM calculates the best linear separator in the input feature space according to the training data. To classify data which are not linearly separable, SVM uses the kernel trick to transform the data into a linearly separable representation in a higher-dimensional feature space. The kernel trick uses various kinds of kernel functions, such as the linear, polynomial, radial basis function (RBF) and sigmoid kernels. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, genetic algorithms are proposed as the search algorithm for the optimal parameter values, thus increasing the best classification accuracy of SVM. Data were taken from the UCI machine learning repository: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selected kernel parameters. The best accuracy was upgraded from the baselines of the linear kernel: 85.12%, polynomial: 81.76%, RBF: 77.22%, and sigmoid: 78.70%. However, for larger data sizes this method is not practical because it takes a lot of time.
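A toy sketch of evolving SVM kernel parameters with a genetic-algorithm-style loop (truncation selection plus Gaussian mutation, no crossover), using synthetic data in place of the Australian Credit Approval set; all population settings are assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
rng = np.random.default_rng(0)

def fitness(ind):
    C, gamma = 10.0 ** ind                       # individuals encode log10(C), log10(gamma)
    return cross_val_score(SVC(C=C, gamma=gamma, kernel='rbf'), X, y, cv=3).mean()

pop = rng.uniform(-3, 3, size=(12, 2))           # random initial population
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]       # keep the best half
    children = parents + rng.normal(0, 0.3, parents.shape)  # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print('best log10(C), log10(gamma):', best)
```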
NASA Astrophysics Data System (ADS)
Chu, Weiqi; Li, Xiantao
2018-01-01
We present some estimates for the memory kernel function in the generalized Langevin equation, derived using the Mori-Zwanzig formalism from a one-dimensional lattice model in which the particles interact through nearest and second-nearest neighbors. The kernel function can be explicitly expressed in matrix form. The analysis focuses on the decay properties, both spatial and temporal, revealing a power-law behavior in both cases. The dependence on the level of coarse-graining is also studied.
Computing Instantaneous Frequency by normalizing Hilbert Transform
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2005-01-01
This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. The method is designed specifically to circumvent the limitations set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert transform do not agree. The motivation for this method is that the straightforward application of the Hilbert transform, followed by taking the derivative of the phase angle as the Instantaneous Frequency (IF), leads to a common mistake made up to this date. In order to make the Hilbert transform method work, the data have to obey certain restrictions.
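A crude sketch of the idea: normalize the amplitude by the envelope before taking the Hilbert transform, then differentiate the unwrapped phase (the envelope division below is a simple stand-in for the patent's spline-based normalization):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    # Divide out the envelope so the carrier has unit amplitude, then
    # differentiate the unwrapped phase of the analytic signal.
    env = np.abs(hilbert(x))
    x_norm = x / np.maximum(env, 1e-12)
    phase = np.unwrap(np.angle(hilbert(x_norm)))
    return np.diff(phase) * fs / (2 * np.pi)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = (1 + 0.5 * np.sin(2 * np.pi * 2 * t)) * np.cos(2 * np.pi * 50 * t)
f_inst = instantaneous_frequency(x, fs)   # should hover near 50 Hz
```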
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Z; MD Anderson Cancer Center, Houston, TX; Ho, A
Purpose: To develop and validate a prediction model using radiomics features extracted from MR images to distinguish radiation necrosis from tumor progression for brain metastases treated with Gamma knife radiosurgery. Methods: The images used to develop the model were T1 post-contrast MR scans from 71 patients who had had pathologic confirmation of necrosis or progression; 1 lesion was identified per patient (17 necrosis and 54 progression). Radiomics features were extracted from 2 images at 2 time points per patient, both obtained prior to resection. Each lesion was manually contoured on each image, and 282 radiomics features were calculated for each lesion. The correlation for each radiomics feature between two time points was calculated within each group to identify a subset of features with distinct values between two groups. The delta of this subset of radiomics features, characterizing changes from the earlier time to the later one, was included as a covariate to build a prediction model using support vector machines with a cubic polynomial kernel function. The model was evaluated with a 10-fold cross-validation. Results: Forty radiomics features were selected based on consistent correlation values of approximately 0 for the necrosis group and >0.2 for the progression group. In performing the 10-fold cross-validation, we narrowed this number down to 11 delta radiomics features for the model. This 11-delta-feature model showed an overall prediction accuracy of 83.1%, with a true positive rate of 58.8% in predicting necrosis and 90.7% for predicting tumor progression. The area under the curve for the prediction model was 0.79. Conclusion: These delta radiomics features extracted from MR scans showed potential for distinguishing radiation necrosis from tumor progression. This tool may be a useful, noninvasive means of determining the status of an enlarging lesion after radiosurgery, aiding decision-making regarding surgical resection versus conservative medical management.
Nixtamalized flour from quality protein maize (Zea mays L). optimization of alkaline processing.
Milán-Carrillo, J; Gutiérrez-Dorado, R; Cuevas-Rodríguez, E O; Garzón-Tiznado, J A; Reyes-Moreno, C
2004-01-01
The quality of maize proteins is poor: they are deficient in the essential amino acids lysine and tryptophan. Recently, 26 new nutritionally improved hybrids and cultivars, called quality protein maize (QPM), which contain greater amounts of lysine and tryptophan, were successfully developed in Mexico. Alkaline cooking of maize with lime (nixtamalization) is the first step in producing several maize products (masa, tortillas, flours, snacks), and processors adjust nixtamalization variables based on experience. The objective of this work was to determine the best combination of nixtamalization process variables for producing nixtamalized maize flour (NMF) from the QPM V-537 variety. Nixtamalization conditions were selected from factorial combinations of the process variables: nixtamalization time (NT, 20-85 min), lime concentration (LC, 3.3-6.7 g Ca(OH)2/l, in distilled water), and steep time (ST, 8-16 hours). The nixtamalization temperature and the ratio of grain to cooking medium were 85 degrees C and 1:3 (w/v), respectively. At the end of each cooking treatment, steeping started for the required time and was finished by draining the cooking liquor (nejayote). The nixtamal (alkaline-cooked maize kernels) was washed with running tap water; the wet nixtamal was dried (24 hours, 55 degrees C) and milled to pass through an 80-US mesh screen to obtain NMF. Response surface methodology (RSM) was applied as the optimization technique over four response variables: in vitro protein digestibility (PD), total color difference (deltaE), water absorption index (WAI), and pH. Predictive models for the response variables were developed as functions of the process variables. A conventional graphical method was applied to obtain maximum PD and WAI and minimum deltaE and pH. Contour plots of each of the response variables were utilized, applying superposition methodology, to obtain three contour plots for observing and selecting the best combination of NT (31 min), LC (5.4 g Ca(OH)2/l), and ST (8.1 hours) for producing optimized NMF from QPM.
Diamond High Assurance Security Program: Trusted Computing Exemplar
2002-09-01
computing component, the Embedded MicroKernel Prototype. A third-party evaluation of the component will be initiated during development (e.g., once ...). ... target technologies and larger projects is a topic for future research. Trusted Computing Reference Component - The Embedded MicroKernel Prototype. ... The primary security function of the Embedded MicroKernel will be to enforce process and data-domain separation, while providing primitive ...
Weighted Feature Gaussian Kernel SVM for Emotion Recognition
Jia, Qingxuan
2016-01-01
Emotion recognition with weighted features based on facial expression is a challenging research topic that has attracted great attention in the past few years. This paper presents a novel method utilizing the subregion recognition rate to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight for each. Then, we form a weighted-feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel function performs well in terms of recognition accuracy. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to the state-of-the-art methods. PMID:27807443
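The weighted-feature Gaussian kernel has a direct closed form; a minimal sketch follows (the subregion weights and feature vectors are illustrative, and the weights should be nonnegative for the kernel to remain positive definite):

```python
import numpy as np

def weighted_gaussian_kernel(x, z, w, sigma=1.0):
    # k(x, z) = exp(-sum_i w_i (x_i - z_i)^2 / (2 sigma^2))
    return np.exp(-np.sum(w * (x - z) ** 2) / (2.0 * sigma ** 2))

w = np.array([0.9, 0.5, 0.7])    # e.g., recognition rates of three face subregions
x = np.array([0.2, 0.1, 0.4])
z = np.array([0.3, 0.2, 0.1])
print(weighted_gaussian_kernel(x, z, w))
```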
Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen
2016-07-07
Sparse representation based classification (SRC) has been developed and shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with a highly nonlinear distribution. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC and can remedy this drawback, but KSRC requires a predetermined kernel function, and the selection of the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC only considers the within-class reconstruction residual while ignoring the between-class relationship when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and we then use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that in the low-dimensional subspace the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss by performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be efficiently found based on the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with the state-of-the-art methods.
Properties of highly frustrated magnetic molecules studied by the finite-temperature Lanczos method
NASA Astrophysics Data System (ADS)
Schnack, J.; Wendland, O.
2010-12-01
The very interesting magnetic properties of frustrated magnetic molecules are often hardly accessible due to the prohibitive size of the related Hilbert spaces. The finite-temperature Lanczos method is able to treat spin systems with Hilbert space sizes up to 10^9. Here we first demonstrate for exactly solvable systems that the method is indeed accurate. Then we discuss the thermal properties of one of the biggest magnetic molecules synthesized to date, the icosidodecahedron with antiferromagnetically coupled spins of s = 1/2. We show how genuine quantum features such as the magnetization plateau behave as a function of temperature.
NASA Astrophysics Data System (ADS)
Plymen, Roger; Robinson, Paul
1995-01-01
Infinite-dimensional Clifford algebras and their Fock representations originated in the quantum mechanical study of electrons. In this book, the authors give a definitive account of the various Clifford algebras over a real Hilbert space and of their Fock representations. A careful consideration of the latter's transformation properties under Bogoliubov automorphisms leads to the restricted orthogonal group. From there, a study of inner Bogoliubov automorphisms enables the authors to construct infinite-dimensional spin groups. Apart from assuming a basic background in functional analysis and operator algebras, the presentation is self-contained with complete proofs, many of which offer a fresh perspective on the subject.
Hentschinski, M; Kusina, A; Kutak, K; Serino, M
2018-01-01
We calculate the transverse momentum dependent gluon-to-gluon splitting function within k_T-factorization, generalizing the framework employed in the calculation of the quark splitting functions in Hautmann et al. (Nucl Phys B 865:54-66, arXiv:1205.1759, 2012), Gituliar et al. (JHEP 01:181, arXiv:1511.08439, 2016) and Hentschinski et al. (Phys Rev D 94(11):114013, arXiv:1607.01507, 2016), and demonstrate at the same time the consistency of the extended formalism with previous results. While existing versions of k_T-factorized evolution equations already contain a gluon-to-gluon splitting function, i.e., the leading order Balitsky-Fadin-Kuraev-Lipatov (BFKL) kernel or the Ciafaloni-Catani-Fiorani-Marchesini (CCFM) kernel, the obtained splitting function has the important property that it reduces to the leading order BFKL kernel in the high energy limit, to the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) gluon-to-gluon splitting function in the collinear limit, and to the CCFM kernel in the soft limit. At the same time we demonstrate that this splitting kernel can be obtained from a direct calculation of the QCD Feynman diagrams, based on a combined implementation of the Curci-Furmanski-Petronzio formalism for the calculation of the collinear splitting functions and the framework of high energy factorization.
Proteome analysis of the almond kernel (Prunus dulcis).
Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu
2016-08-01
Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the importance of almond kernel proteins in nutrition and human health requires further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, most of them experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that they are mainly involved in primary biological processes including metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%); the main molecular functions of almond kernel proteins are catalytic activity (48.0%), binding (45.4%) and structural molecule activity (11.9%); and the proteins are primarily distributed in the cell (59.9%), organelle (44.9%), and membrane (22.8%). The almond kernel is thus a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.
Detection of maize kernels breakage rate based on K-means clustering
NASA Astrophysics Data System (ADS)
Yang, Liang; Wang, Zhuo; Gao, Lei; Bai, Xiaoping
2017-04-01
In order to optimize the recognition accuracy and improve the efficiency of maize kernel breakage detection, this paper uses computer vision technology to detect maize kernel breakage based on the K-means clustering algorithm. First, the collected RGB images are converted into Lab images, and the clarity of the original images is evaluated by the energy function of the Sobel 8-direction gradient. Then, maize kernel breakage is detected using different pixel acquisition devices and different shooting angles. In this paper, broken maize kernels are identified by the color difference between intact kernels and broken kernels. The image clarity evaluation and the different shooting angles are used to verify that the clarity and shooting angles of the images have a direct influence on feature extraction. The results show that the K-means clustering algorithm can distinguish broken maize kernels effectively.
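A minimal sketch of the pixel-clustering step (Lab conversion plus K-means); the cluster count, random stand-in image, and use of scikit-image/scikit-learn are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.color import rgb2lab

def segment_kernels(rgb_image, n_clusters=3, seed=0):
    # Cluster pixels in Lab space; with 3 clusters one hopes to separate
    # background, intact kernel tissue, and exposed (broken) endosperm.
    pixels = rgb2lab(rgb_image).reshape(-1, 3)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(pixels)
    return labels.reshape(rgb_image.shape[:2])

# toy usage on a random image standing in for a photographed kernel sample
img = np.random.default_rng(5).random((64, 64, 3))
segmentation = segment_kernels(img)
```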
Invariance of Topological Indices Under Hilbert Space Truncation
Huang, Zhoushen; Zhu, Wei; Arovas, Daniel P.; ...
2018-01-05
Here, we show that the topological index of a wave function, computed in the space of twisted boundary phases, is preserved under Hilbert space truncation, provided the truncated state remains normalizable. If truncation affects the boundary condition of the resulting state, the invariant index may acquire a different physical interpretation. If the index is symmetry protected, the truncation should preserve the protecting symmetry. We discuss implications of this invariance using paradigmatic integer and fractional Chern insulators, Z₂ topological insulators, and spin-1 Affleck-Kennedy-Lieb-Tasaki and Heisenberg chains, as well as its relation with the notion of bulk entanglement. As a possible application, we propose a partial quantum tomography scheme from which the topological index of a generic multicomponent wave function can be extracted by measuring only a small subset of wave function components, equivalent to the measurement of a bulk entanglement topological index.
Using Adjoint Methods to Improve 3-D Velocity Models of Southern California
NASA Astrophysics Data System (ADS)
Liu, Q.; Tape, C.; Maggi, A.; Tromp, J.
2006-12-01
We use adjoint methods popular in climate and ocean dynamics to calculate Fréchet derivatives for tomographic inversions in southern California. The Fréchet derivative of an objective function χ(m), where m denotes the Earth model, may be written in the generic form δχ = ∫ K_m(x) δln m(x) d³x, where δln m = δm/m denotes the relative model perturbation. For illustrative purposes, we construct the 3-D finite-frequency banana-doughnut kernel K_m, corresponding to the misfit of a single traveltime measurement, by simultaneously computing the 'adjoint' wave field s† forward in time and reconstructing the regular wave field s backward in time. The adjoint wave field is produced by using the time-reversed velocity at the receiver as a fictitious source, while the regular wave field is reconstructed on the fly by propagating the last frame of the wave field, saved by a previous forward simulation, backward in time. The approach is based upon the spectral-element method, and only two simulations are needed to produce density, shear-wave, and compressional-wave sensitivity kernels. This method is applied to the SCEC southern California velocity model. Various density, shear-wave, and compressional-wave sensitivity kernels are presented for different phases in the seismograms. We also generate 'event' kernels for Pnl, S and surface waves, which are the Fréchet kernels of misfit functions that measure the P, S or surface wave traveltime residuals at all receivers simultaneously for one particular event. Effectively, an event kernel is a sum of weighted Fréchet kernels, with weights determined by the associated traveltime anomalies. By the nature of the 3-D simulation, every event kernel is also computed from just two simulations, i.e., its construction costs the same amount of computation time as an individual banana-doughnut kernel. One can think of the sum of the event kernels for all available earthquakes, called the 'misfit' kernel, as a graphical representation of the gradient of the misfit function. With the capability of computing both the value of the misfit function and its gradient, which assimilates the traveltime anomalies, we are ready to use a nonlinear conjugate gradient algorithm to iteratively improve velocity models of southern California.
Reactive collisions for NO(2Π) + N(4S) at temperatures relevant to the hypersonic flight regime.
Denis-Alpizar, Otoniel; Bemish, Raymond J; Meuwly, Markus
2017-01-18
The NO(X²Π) + N(⁴S) reaction, which occurs entirely in the triplet manifold of N₂O, is investigated using quasiclassical trajectories and quantum simulations. Fully-dimensional potential energy surfaces for the ³A' and ³A'' states are computed at the MRCI+Q level of theory and are represented using a reproducing kernel Hilbert space. The N-exchange and N₂-formation channels are followed by using the multi-state adiabatic reactive molecular dynamics method. Up to 5000 K these reactions occur predominantly on the N₂O ³A'' surface; for higher temperatures, however, the contributions of the ³A' and ³A'' states are comparable and the final state distributions are far from thermal equilibrium. From the trajectory simulations a new set of thermal rate coefficients, valid up to 20 000 K, is determined. Comparison of the quasiclassical trajectory and quantum simulations shows that a classical description is a good approximation, as determined from the final state analysis.
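A generic sketch of RKHS interpolation of potential-energy points: solve a regularized linear system for the kernel coefficients and evaluate the expansion (the Gaussian kernel and Lennard-Jones stand-in data are assumptions; PES work of this kind typically uses reciprocal-power reproducing kernels instead):

```python
import numpy as np

def rkhs_interpolate(x_train, y_train, x_query, kernel, lam=1e-8):
    # Solve (K + lam I) c = y for the expansion coefficients, then
    # evaluate V(x) = sum_i c_i k(x, x_i) at the query points.
    K = kernel(x_train[:, None], x_train[None, :])
    c = np.linalg.solve(K + lam * np.eye(x_train.size), y_train)
    return kernel(x_query[:, None], x_train[None, :]) @ c

gauss = lambda a, b: np.exp(-(a - b) ** 2 / 0.5)   # stand-in kernel
r = np.linspace(1.0, 4.0, 15)                      # coarse ab initio grid (illustrative)
V = 1.0 / r ** 12 - 1.0 / r ** 6                   # Lennard-Jones stand-in energies
r_fine = np.linspace(1.0, 4.0, 200)
V_fine = rkhs_interpolate(r, V, r_fine, gauss)     # smooth PES on the fine grid
```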
Assessing Predictive Properties of Genome-Wide Selection in Soybeans
Xavier, Alencar; Muir, William M.; Rainey, Katy Martin
2016-01-01
Many economically important traits in plant breeding have low heritability or are difficult to measure. For these traits, genomic selection has attractive features and may boost genetic gains. Our goal was to evaluate alternative scenarios for implementing genomic selection for yield components in soybean (Glycine max L. Merr.). We used a nested association panel with cross-validation to evaluate the impacts of training population size, genotyping density, and prediction model on the accuracy of genomic prediction. Our results indicate that training population size was the factor most relevant to improvement in genome-wide prediction, with the greatest improvement observed in training sets of up to 2000 individuals. We discuss assumptions that influence the choice of the prediction model. Although alternative models had minor impacts on prediction accuracy, the most robust prediction model was the combination of reproducing kernel Hilbert space regression and BayesB. Higher genotyping density improved accuracy only marginally. Our study finds that breeding programs seeking efficient genomic selection in soybeans would best allocate resources by investing in a representative training set. PMID:27317786
NASA Astrophysics Data System (ADS)
Nemes, Csaba; Barcza, Gergely; Nagy, Zoltán; Legeza, Örs; Szolgay, Péter
2014-06-01
In the numerical analysis of strongly correlated quantum lattice models one of the leading algorithms developed to balance the size of the effective Hilbert space and the accuracy of the simulation is the density matrix renormalization group (DMRG) algorithm, in which the run-time is dominated by the iterative diagonalization of the Hamilton operator. As the most time-dominant step of the diagonalization can be expressed as a list of dense matrix operations, the DMRG is an appealing candidate to fully utilize the computing power residing in novel kilo-processor architectures. In the paper a smart hybrid CPU-GPU implementation is presented, which exploits the power of both CPU and GPU and tolerates problems exceeding the GPU memory size. Furthermore, a new CUDA kernel has been designed for asymmetric matrix-vector multiplication to accelerate the rest of the diagonalization. Besides the evaluation of the GPU implementation, the practical limits of an FPGA implementation are also discussed.
PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation
NASA Astrophysics Data System (ADS)
Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long
2018-06-01
We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high performance computer (HPC) systems and threads oriented programming. PHoToNs adopts a hybrid scheme to compute gravitational force, with the conventional Particle-Mesh (PM) algorithm to compute the long-range force, the Tree algorithm to compute the short-range force, and the direct-summation Particle-Particle (PP) algorithm to compute gravity from very close particles. A self-similar space-filling Peano-Hilbert curve is used to decompose the computing domain. Threads programming is advantageously used to manage the domain communication, PM calculation and synchronization more flexibly, as well as the Dual Tree Traversal on the CPU+MIC platform. PHoToNs scales well, and the efficiency of the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also test the accuracy of the code against the widely used Gadget-2 code and find excellent agreement.
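Of the three force components in the hybrid scheme, the particle-particle (PP) part is the simplest to sketch. The following is a minimal direct-summation version with Plummer softening; units, particle count, and softening length are illustrative assumptions, and the production kernel is of course heavily vectorized for MIC/CPU.

```python
# A minimal sketch of direct-summation PP gravity with Plummer softening.
import numpy as np

def pp_accel(pos, mass, G=1.0, eps=1e-2):
    """O(N^2) pairwise gravitational acceleration; eps is the softening."""
    d = pos[None, :, :] - pos[:, None, :]            # (N, N, 3) separations
    r2 = (d ** 2).sum(-1) + eps ** 2                 # softened squared distance
    np.fill_diagonal(r2, np.inf)                     # exclude self-force
    inv_r3 = r2 ** -1.5
    # a_i = G * sum_j m_j (r_j - r_i) / |r_j - r_i|^3
    return G * (d * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

pos = np.random.rand(256, 3)
mass = np.full(256, 1.0 / 256)
print(pp_accel(pos, mass).shape)                     # (256, 3)
```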
Nonlinear Deep Kernel Learning for Image Annotation.
Jiu, Mingyuan; Sahbi, Hichem
2017-02-08
Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show a clear gain compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
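A minimal sketch of the recursive construction described above: elementary Gram matrices are combined with weights and passed through a nonlinear activation at each layer. The element-wise exponential is used here because it preserves positive semi-definiteness (by the Schur product theorem); the weights are fixed by hand rather than learned by the paper's four frameworks.

```python
# A minimal sketch of a two-layer deep multiple kernel; weights are arbitrary.
import numpy as np

def rbf(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

X = np.random.rand(100, 8)
base = [rbf(X, g) for g in (0.1, 1.0, 10.0)] + [X @ X.T]  # elementary kernels

# Layer 1: element-wise exp of a convex combination of elementary kernels
# (element-wise exp of a PSD matrix is PSD, so the result is a valid kernel).
w1 = np.array([0.4, 0.3, 0.2, 0.1])
K1 = np.exp(sum(w * K for w, K in zip(w1, base)))

# Layer 2: combine the intermediate kernel with an elementary one.
K_deep = np.exp(0.7 * K1 + 0.3 * base[0])
print(K_deep.shape)  # (100, 100) deep kernel, usable inside an SVM
```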
NASA Astrophysics Data System (ADS)
Diallo, S. O.; Lin, J. Y. Y.; Abernathy, D. L.; Azuah, R. T.
2016-11-01
Inelastic neutron scattering at high momentum transfers (i.e. Q ≥ 20 Å⁻¹), commonly known as deep inelastic neutron scattering (DINS), provides direct observation of the momentum distribution of light atoms, making it a powerful probe for studying single-particle motions in liquids and solids. The quantitative analysis of DINS data requires an accurate knowledge of the instrument resolution function Ri(Q, E) at each momentum Q and energy transfer E, where the label i indicates whether the resolution was experimentally observed (i = obs) or simulated (i = sim). Here, we describe two independent methods for determining the total resolution function Ri(Q, E) of the ARCS neutron instrument at the Spallation Neutron Source, Oak Ridge National Laboratory. The first method uses experimental data from an archetypical system (liquid ⁴He) studied with DINS, which are then numerically deconvoluted using its previously determined intrinsic scattering function to yield Robs(Q, E). The second approach uses accurate Monte Carlo simulations of the ARCS spectrometer, which account for all instrument contributions, coupled to a representative scattering kernel to reproduce the experimentally observed response S(Q, E). Using a delta function as scattering kernel, the simulation yields a resolution function Rsim(Q, E) with comparable lineshape and features as Robs(Q, E), but somewhat narrower due to the ideal nature of the model. Using each of these two Ri(Q, E) separately, we extract characteristic parameters of liquid ⁴He such as the intrinsic linewidth α₂ (which sets the atomic kinetic energy ⟨K⟩ ∼ α₂) in the normal liquid and the Bose-Einstein condensate parameter n₀ in the superfluid phase. The extracted α₂ values agree well with previous measurements at saturated vapor pressure (SVP) as well as at elevated pressure (24 bars) within experimental precision, independent of which Ri(Q, y) is used to analyze the data. The actual observed n₀ values at each Q vary little with the model Ri(Q, E), and the effective Q-averaged n₀ values are consistent with each other, and with previously reported values.
Improving the Bandwidth Selection in Kernel Equating
ERIC Educational Resources Information Center
Andersson, Björn; von Davier, Alina A.
2014-01-01
We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
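For reference, Silverman's rule of thumb, on which the proposed selection method is based, is straightforward to state and compute; the score sample below is a synthetic stand-in for equating data.

```python
# A minimal sketch of Silverman's rule of thumb for bandwidth selection.
import numpy as np

def silverman_bandwidth(x):
    """h = 0.9 * min(sd, IQR / 1.34) * n**(-1/5)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    sd = x.std(ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))   # 75th minus 25th percentile
    return 0.9 * min(sd, iqr / 1.34) * n ** (-0.2)

scores = np.random.default_rng(1).normal(50, 10, size=2000)  # synthetic scores
print(silverman_bandwidth(scores))
```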
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir
2016-10-14
A piezo-resistive pressure sensor is made of silicon, the nature of which is considerably influenced by ambient temperature. The effect of temperature should be eliminated during the working period if linear output is expected. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (a Radial Basis Function (RBF) kernel) and a global kernel (a polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention on algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as maximum absolute relative error, minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
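A minimal sketch of the hybrid kernel itself: a convex mix of a local RBF kernel and a global polynomial kernel. The mixing weight and hyper-parameters below are placeholders for the values that the chaotic ions motion algorithm would select.

```python
# A minimal sketch of the hybrid (local RBF + global polynomial) kernel.
import numpy as np

def hybrid_kernel(X, Y, rho=0.5, gamma=1.0, degree=2, c0=1.0):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    k_rbf = np.exp(-gamma * sq)               # local kernel
    k_poly = (X @ Y.T + c0) ** degree         # global kernel
    return rho * k_rbf + (1.0 - rho) * k_poly # convex combination stays PSD

T = np.linspace(-20, 80, 50)[:, None]         # hypothetical temperatures (C)
print(hybrid_kernel(T, T).shape)              # (50, 50) Gram matrix
```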
Unconventional Signal Processing Using the Cone Kernel Time-Frequency Representation.
1992-10-30
Wigner-Ville distribution (WVD), the Choi-Williams distribution, and the cone kernel distribution were compared with the spectrograms. Results were...ambiguity function. Figures A-18(c) and (d) are the Wigner-Ville Distribution (WVD) and CK-TFR Doppler maps. In this noiseless case all three exhibit...kernel is the basis for the well known Wigner-Ville distribution. In A-9(2), the cone kernel defined by Zhao, Atlas and Marks [21] is described
Kernel-Correlated Levy Field Driven Forward Rate and Application to Derivative Pricing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bo Lijun; Wang Yongjin; Yang Xuewei, E-mail: xwyangnk@yahoo.com.cn
2013-08-01
We propose a term structure of forward rates driven by a kernel-correlated Levy random field under the HJM framework. The kernel-correlated Levy random field is composed of a kernel-correlated Gaussian random field and a centered Poisson random measure. We shall give a criterion to preclude arbitrage under the risk-neutral pricing measure. As applications, an interest rate derivative with general payoff functional is priced under this pricing measure.
NASA Astrophysics Data System (ADS)
Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin
2015-10-01
The performance of kernel based techniques depends on the selection of kernel parameters. Therefore, suitable parameter selection is an important problem for many kernel based techniques. This article presents a novel technique to learn the kernel parameters in a kernel Fukunaga-Koontz Transform (KFKT) based classifier. The proposed approach determines the appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of the KFKT. For this purpose we have utilized the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as high time consumption, and it can be applied to any type of data. The experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
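A minimal sketch of the optimization loop under stated substitutions: SciPy's differential evolution tunes an RBF kernel width, with kernel-target alignment standing in for the paper's KFKT-based discrimination objective, which is not reproduced here.

```python
# A minimal sketch: differential evolution selecting an RBF kernel width by
# maximizing kernel-target alignment (a stand-in for the KFKT objective).
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
y = np.r_[np.ones(50), -np.ones(50)]

def neg_alignment(params):
    gamma = params[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    T = np.outer(y, y)                 # ideal target kernel from the labels
    return -(K * T).sum() / (np.linalg.norm(K) * np.linalg.norm(T))

res = differential_evolution(neg_alignment, bounds=[(1e-3, 10.0)], seed=0)
print("selected gamma:", res.x[0])
```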
NASA Astrophysics Data System (ADS)
Tape, Carl; Liu, Qinya; Tromp, Jeroen
2007-03-01
We employ adjoint methods in a series of synthetic seismic tomography experiments to recover surface wave phase-speed models of southern California. Our approach involves computing the Fréchet derivative for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a 2-D spectral-element method (SEM) and a phase-speed model for southern California. A `target' phase-speed model is used to generate the `data' at the receivers. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the remaining differences between data and synthetics are time-reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. An event kernel may be thought of as a weighted sum of phase-specific (e.g. P) banana-doughnut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, that is, the Fréchet derivative. A non-linear conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. We illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions and joint source-structure inversions. Finally, we draw connections between classical Hessian-based tomography and gradient-based adjoint tomography.
Mapping quantitative trait loci for a unique 'super soft' kernel trait in soft white wheat
USDA-ARS?s Scientific Manuscript database
Wheat (Triticum sp.) kernel texture is an important factor affecting milling, flour functionality, and end-use quality. Kernel texture is normally characterized as either hard or soft, the two major classes of texture. However, further variation is typically encountered in each class. Soft wheat var...
Protein Analysis Meets Visual Word Recognition: A Case for String Kernels in the Brain
ERIC Educational Resources Information Center
Hannagan, Thomas; Grainger, Jonathan
2012-01-01
It has been recently argued that some machine learning techniques known as Kernel methods could be relevant for capturing cognitive and neural mechanisms (Jakel, Scholkopf, & Wichmann, 2009). We point out that "String kernels," initially designed for protein function prediction and spam detection, are virtually identical to one contending proposal…
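A minimal sketch of the simplest string kernel, the spectrum kernel: two strings are compared through the inner product of their k-mer count vectors. This is the construction behind the protein-function and spam-detection kernels mentioned above (k = 3 is an arbitrary choice).

```python
# A minimal sketch of the spectrum string kernel on two toy strings.
from collections import Counter

def spectrum_kernel(s, t, k=3):
    """Inner product of the k-mer count vectors of strings s and t."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[m] * ct[m] for m in cs.keys() & ct.keys())

print(spectrum_kernel("kernel", "colonel"))   # shared 3-mer "nel" -> 1
```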
Combinatorial quantisation of the Euclidean torus universe
NASA Astrophysics Data System (ADS)
Meusburger, C.; Noui, K.
2010-12-01
We quantise the Euclidean torus universe via a combinatorial quantisation formalism based on its formulation as a Chern-Simons gauge theory and on the representation theory of the Drinfel'd double DSU(2). The resulting quantum algebra of observables is given by two commuting copies of the Heisenberg algebra, and the associated Hilbert space can be identified with the space of square integrable functions on the torus. We show that this Hilbert space carries a unitary representation of the modular group and discuss the role of modular invariance in the theory. We derive the classical limit of the theory and relate the quantum observables to the geometry of the torus universe.
A Functional Central Limit Theorem for the Becker-Döring Model
NASA Astrophysics Data System (ADS)
Sun, Wen
2018-04-01
We investigate the fluctuations of the stochastic Becker-Döring model of polymerization when the initial size of the system converges to infinity. A functional central limit theorem is proved for the vector of the number of polymers of a given size. It is shown that the stochastic process associated to the fluctuations converges to the strong solution of an infinite dimensional stochastic differential equation (SDE) in a Hilbert space. We also prove that, at equilibrium, the solution of this SDE is a Gaussian process. The proofs are based on a specific representation of the evolution equations, the introduction of a convenient Hilbert space and several technical estimates to control the fluctuations, especially of the first coordinate, which interacts with all components of the infinite dimensional vector representing the state of the process.
Analysis of turbine-grid interaction of grid-connected wind turbine using HHT
NASA Astrophysics Data System (ADS)
Chen, A.; Wu, W.; Miao, J.; Xie, D.
2018-05-01
This paper processes the output power of a grid-connected wind turbine with a denoising and extraction method based on the Hilbert-Huang transform (HHT) to investigate the turbine-grid interaction. First, the details of Empirical Mode Decomposition (EMD) and the Hilbert Transform (HT) are introduced. Then, after decomposing the output power of the grid-connected wind turbine into a series of Intrinsic Mode Functions (IMFs), the energy ratio and power volatility are calculated to detect the unessential components. Meanwhile, combined with the vibration function of turbine-grid interaction, data fitting of the instantaneous amplitude and phase of each IMF is carried out to extract the characteristic parameters of the different interactions. Finally, utilizing measured data from actual parallel-operated wind turbines in China, this work accurately obtains the characteristic parameters of the turbine-grid interaction of a grid-connected wind turbine.
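A minimal sketch of the Hilbert transform step applied to a single IMF: SciPy's analytic signal gives the instantaneous amplitude and frequency that the fitting stage works with. The IMF here is a synthetic amplitude-modulated tone, not wind turbine data, and the EMD step is not reproduced.

```python
# A minimal sketch of the HT step of HHT on one synthetic IMF.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                   # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
imf = (1.0 + 0.3 * np.sin(2 * np.pi * 1 * t)) * np.sin(2 * np.pi * 15 * t)

analytic = hilbert(imf)                       # x(t) + i * H[x](t)
amplitude = np.abs(analytic)                  # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
freq = np.gradient(phase, t) / (2 * np.pi)    # instantaneous frequency, Hz

print(amplitude.max(), freq.mean())           # roughly 1.3 and 15 Hz
```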
Terahertz bandwidth all-optical Hilbert transformers based on long-period gratings.
Ashrafi, Reza; Azaña, José
2012-07-01
A novel, all-optical design for implementing terahertz (THz) bandwidth real-time Hilbert transformers is proposed and numerically demonstrated. An all-optical Hilbert transformer can be implemented using a uniform-period long-period grating (LPG) with a properly designed amplitude-only grating apodization profile, incorporating a single π-phase shift in the middle of the grating length. The designed LPG-based Hilbert transformers can be practically implemented using either fiber-optic or integrated-waveguide technologies. As a generalization, photonic fractional Hilbert transformers are also designed based on the same optical platform. In this general case, the resulting LPGs have multiple π-phase shifts along the grating length. Our numerical simulations confirm that all-optical Hilbert transformers capable of processing arbitrary optical signals with bandwidths well in the THz range can be implemented using feasible fiber/waveguide LPG designs.
Hilbert's sixth problem and the failure of the Boltzmann to Euler limit
NASA Astrophysics Data System (ADS)
Slemrod, Marshall
2018-04-01
This paper addresses the main issue of Hilbert's sixth problem, namely the rigorous passage of solutions to the mesoscopic Boltzmann equation to macroscopic solutions of the Euler equations of compressible gas dynamics. The results of the paper are that (i) in general Hilbert's program will fail because of the appearance of van der Waals-Korteweg capillarity terms in a macroscopic description of motion of a gas, and (ii) the van der Waals-Korteweg theory itself might satisfy Hilbert's quest for a map from the `atomistic view' to the laws of motion of continua. This article is part of the theme issue `Hilbert's sixth problem'.
Kim, Sungjin; Jinich, Adrián; Aspuru-Guzik, Alán
2017-04-24
We propose a multiple descriptor multiple kernel (MultiDK) method for efficient molecular discovery using machine learning. We show that the MultiDK method improves both the speed and accuracy of molecular property prediction. We apply the method to the discovery of electrolyte molecules for aqueous redox flow batteries. Using multiple-type, as opposed to single-type, descriptors, we obtain more relevant features for machine learning. Following the principle of the "wisdom of the crowds", the combination of multiple-type descriptors significantly boosts prediction performance. Moreover, by employing multiple kernels (more than one kernel function for a set of the input descriptors), MultiDK exploits nonlinear relations between molecular structure and properties better than a linear regression approach. The multiple kernels consist of a Tanimoto similarity kernel and a linear kernel for a set of binary descriptors and a set of nonbinary descriptors, respectively. Using MultiDK, we achieve an average performance of r² = 0.92 with a test set of molecules for solubility prediction. We also extend MultiDK to predict pH-dependent solubility and apply it to a set of quinone molecules with different ionizable functional groups to assess their performance as flow battery electrolytes.
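A minimal sketch of the MultiDK kernel combination under synthetic descriptor data: a Tanimoto kernel on binary fingerprints plus a linear kernel on non-binary descriptors, fed to a kernel ridge regressor via a precomputed Gram matrix (the equal mixing weights are an assumption, not the paper's values).

```python
# A minimal sketch of a Tanimoto + linear multiple kernel on synthetic data.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)
Xb = rng.integers(0, 2, (200, 512)).astype(float)  # binary fingerprints
Xc = rng.normal(size=(200, 10))                    # non-binary descriptors
y = rng.normal(size=200)                           # e.g. a solubility proxy

def tanimoto(A, B):
    dot = A @ B.T
    na = (A * A).sum(1)[:, None]
    nb = (B * B).sum(1)[None, :]
    return dot / (na + nb - dot)

K = 0.5 * tanimoto(Xb, Xb) + 0.5 * (Xc @ Xc.T)     # multiple-kernel Gram
model = KernelRidge(kernel="precomputed", alpha=1.0).fit(K, y)
print(model.predict(K)[:3])                        # fitted values on train set
```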
On one solution of Volterra integral equations of second kind
NASA Astrophysics Data System (ADS)
Myrhorod, V.; Hvozdeva, I.
2016-10-01
A solution of Volterra integral equations of the second kind with separable and difference kernels, based on solutions of the corresponding equations linking the kernel and the resolvent, is suggested. On the basis of a class of discrete functions, the equations linking the kernel and the resolvent are obtained and methods for their analytical solution are proposed. A mathematical model of the gas-turbine engine state modification processes, in the form of a Volterra integral equation of the second kind with a separable kernel, is offered.
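For reference, the second-kind Volterra equation and the standard kernel-resolvent relations underlying such methods read (notation assumed):

```latex
% Second-kind Volterra equation and its resolvent representation
y(t) = f(t) + \lambda \int_{a}^{t} K(t,s)\, y(s)\, ds,
\qquad
y(t) = f(t) + \lambda \int_{a}^{t} R(t,s;\lambda)\, f(s)\, ds,
% where the resolvent R is linked to the kernel K by
R(t,s;\lambda) = K(t,s) + \lambda \int_{s}^{t} K(t,u)\, R(u,s;\lambda)\, du .
```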
Alternative Derivations for the Poisson Integral Formula
ERIC Educational Resources Information Center
Chen, J. T.; Wu, C. S.
2006-01-01
The Poisson integral formula is revisited. The kernel in the Poisson integral formula can be derived in a series form through the direct BEM, free of the concept of an image point, by using the null-field integral equation in conjunction with the degenerate kernels. The degenerate kernels for the closed-form Green's function and the series form of Poisson…
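For reference, the Poisson integral formula on the unit disk and its kernel, whose geometric series expansion is the kind of series form the abstract refers to:

```latex
u(r,\theta) = \frac{1}{2\pi}\int_{0}^{2\pi} P_r(\theta-\phi)\, f(\phi)\, d\phi ,
\qquad
P_r(\psi) = \frac{1-r^{2}}{1-2r\cos\psi + r^{2}}
          = 1 + 2\sum_{n=1}^{\infty} r^{n}\cos n\psi ,
\qquad 0 \le r < 1 .
```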
Application of kernel functions for accurate similarity search in large chemical databases.
Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H
2010-04-29
Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening, among others. It is widely believed that structure based methods provide an efficient way to do the query. Recently, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases due to the high computational complexity and the difficulties in indexing similarity search for large databases. To bridge graph kernel functions and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed by our team, to measure the similarity of graph represented chemicals. In our method, we utilize a hash table to support a new graph kernel function definition, efficient storage and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure are scalable to large chemical databases, with smaller index size and faster query processing time compared to state-of-the-art indexing methods such as Daylight fingerprints, C-tree and GraphGrep. Efficient similarity query processing for large chemical databases is challenging, since running-time efficiency must be balanced against similarity search accuracy. Our previous similarity search method, G-hash, provides a new way to perform similarity search in chemical databases. Experimental study validates the utility of G-hash in chemical databases.
SVM and SVM Ensembles in Breast Cancer Prediction.
Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong
2017-01-01
Breast cancer is an all too common disease in women, making how to effectively predict it an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct the SVM classifier, it is first necessary to decide the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies focused on examining the prediction performances of SVM based on different kernel functions. Moreover, it is unknown whether SVM classifier ensembles which have been proposed to improve the performance of single classifiers can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small and large scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear kernel based SVM ensembles based on the bagging method and RBF kernel based SVM ensembles with the boosting method can be the better choices for a small scale dataset, where feature selection should be performed in the data pre-processing stage. For a large scale dataset, RBF kernel based SVM ensembles based on boosting perform better than the other classifiers.
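A minimal sketch of the two ensemble configurations recommended above, using scikit-learn (parameter names follow scikit-learn ≥ 1.2, where `estimator=` replaced `base_estimator=`; ensemble sizes are arbitrary):

```python
# A minimal sketch: bagged linear-kernel SVMs (small-dataset choice) and
# boosted RBF-kernel SVMs (large-dataset choice), as compared in the study.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

bag = BaggingClassifier(estimator=SVC(kernel="linear"), n_estimators=10)

# SAMME works with plain SVC because it only needs hard predictions and
# sample-weighted fitting, both of which SVC supports.
boost = AdaBoostClassifier(estimator=SVC(kernel="rbf"),
                           n_estimators=10, algorithm="SAMME")

for name, clf in [("bagged linear SVM", bag), ("boosted RBF SVM", boost)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```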
Chiral Bosonization of Superconformal Ghosts
NASA Technical Reports Server (NTRS)
Shi, Deheng; Shen, Yang; Liu, Jinling; Xiong, Yongjian
1996-01-01
We explain the difference of the Hilbert space of the superconformal ghosts (β, γ) system from that of its bosonized fields φ and χ. We calculate the chiral correlation functions of the φ, χ fields by inserting appropriate projectors.
Embedded real-time operating system micro kernel design
NASA Astrophysics Data System (ADS)
Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng
2005-12-01
Embedded systems usually require real-time behavior. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section processing, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and has quick response while operating in an application system.
Graph Kernels for Molecular Similarity.
Rupp, Matthias; Schneider, Gisbert
2010-04-12
Molecular similarity measures are important for many cheminformatics applications like ligand-based virtual screening and quantitative structure-property relationships. Graph kernels are formal similarity measures defined directly on graphs, such as the (annotated) molecular structure graph. Graph kernels are positive semi-definite functions, i.e., they correspond to inner products. This property makes them suitable for use with kernel-based machine learning algorithms such as support vector machines and Gaussian processes. We review the major types of kernels between graphs (based on random walks, subgraphs, and optimal assignments, respectively), and discuss their advantages, limitations, and successful applications in cheminformatics. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
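A minimal sketch of one of the reviewed families, the geometric random-walk kernel: common walks in two graphs are counted through the direct-product graph (node labels are ignored, and the decay λ must be small enough for the geometric series to converge).

```python
# A minimal sketch of the geometric random-walk graph kernel on toy graphs.
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    """K(G1, G2) = sum of entries of (I - lam * Ax)^{-1}, Ax the direct
    product of the adjacency matrices (counts weighted common walks)."""
    Ax = np.kron(A1, A2)                      # direct-product graph
    n = Ax.shape[0]
    return np.linalg.inv(np.eye(n) - lam * Ax).sum()

# Two toy molecular graphs: a triangle and a 4-cycle.
A_tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
A_sq = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
                 [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
print(random_walk_kernel(A_tri, A_sq))
```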
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Dejun, E-mail: dejun.lin@gmail.com
2015-09-21
Accurate representation of intermolecular forces has been the central task of classical atomic simulations, known as molecular mechanics. Recent advancements in molecular mechanics models have put forward the explicit representation of permanent and/or induced electric multipole (EMP) moments. The formulas developed so far to calculate EMP interactions tend to have complicated expressions, especially in Cartesian coordinates, which can only be applied to a specific kernel potential function. For example, one needs to develop a new formula each time a new kernel function is encountered. The complication of these formalisms arises from an intriguing and yet obscured mathematical relation between the kernel functions and the gradient operators. Here, I uncover this relation via rigorous derivation and find that the formula to calculate EMP interactions is basically invariant to the potential kernel functions as long as they are of the form f(r), i.e., any Green's function that depends on inter-particle distance. I provide an algorithm for efficient evaluation of EMP interaction energies, forces, and torques for any kernel f(r) up to any arbitrary rank of EMP moments in Cartesian coordinates. The working equations of this algorithm are essentially the same for any kernel f(r). Recently, a few recursive algorithms were proposed to calculate EMP interactions. Depending on the kernel functions, the algorithm here is about 4–16 times faster than these algorithms in terms of the required number of floating point operations and is much more memory efficient. I show that it is even faster than a theoretically ideal recursion scheme, i.e., one that requires 1 floating point multiplication and 1 addition per recursion step. This algorithm has a compact vector-based expression that is optimal for computer programming. The Cartesian nature of this algorithm makes it fit easily into modern molecular simulation packages as compared with spherical coordinate-based algorithms. A software library based on this algorithm has been implemented in C++11 and has been released.
Ideal regularization for learning kernels from labels.
Pan, Binbin; Lai, Jianhuang; Shen, Lixin
2014-08-01
In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently. Copyright © 2014 Elsevier Ltd. All rights reserved.
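A minimal sketch of the ideal-kernel ingredient behind this kind of regularization: labels define a kernel that is 1 for same-class pairs and 0 otherwise, and mixing it into a standard kernel yields a label-informed Gram matrix (the mixing weight is an arbitrary assumption, not the paper's learned solution).

```python
# A minimal sketch: mixing an ideal (label) kernel into an RBF kernel.
import numpy as np

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(3, 1, (30, 4))])
y = np.r_[np.zeros(30), np.ones(30)]

sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)                          # standard RBF kernel

# Ideal kernel from the labels: 1 for same-class pairs, 0 otherwise (PSD).
K_ideal = (y[:, None] == y[None, :]).astype(float)

K_mixed = K + 0.2 * K_ideal                    # label-informed kernel;
print(K_mixed.shape)                           # sum of PSD matrices is PSD
```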
Integrating the Gradient of the Thin Wire Kernel
NASA Technical Reports Server (NTRS)
Champagne, Nathan J.; Wilton, Donald R.
2008-01-01
A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different than those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integration of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form
Ranking Support Vector Machine with Kernel Approximation.
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problem. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. Primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.
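A minimal sketch of one of the two approximations explored, random Fourier features for the RBF kernel: the explicit feature map z makes z(x)·z(y) an unbiased estimate of the kernel, so a fast linear ranker can replace the kernel machine (dimensions and γ are illustrative).

```python
# A minimal sketch of random Fourier features approximating an RBF kernel.
import numpy as np

def rff(X, D=200, gamma=1.0, seed=0):
    """Map X to D features so that z(x) . z(y) ~ exp(-gamma * ||x-y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = np.random.rand(5, 3)
Z = rff(X)
K_approx = Z @ Z.T                       # approximates the exact RBF Gram
K_exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))
print(np.abs(K_approx - K_exact).max())  # small approximation error
```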
Small convolution kernels for high-fidelity image restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1991-01-01
An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
An algorithm of improving speech emotional perception for hearing aid
NASA Astrophysics Data System (ADS)
Xi, Ji; Liang, Ruiyu; Fei, Xianju
2017-07-01
In this paper, a speech emotion recognition (SER) algorithm is proposed to improve the emotional perception of hearing-impaired people. The algorithm utilizes multiple kernel technology to overcome a drawback of the SVM: slow training speed. Firstly, in order to improve the adaptive performance of the Gaussian Radial Basis Function (RBF), the parameter determining the nonlinear mapping is optimized on the basis of kernel target alignment. Then, the obtained kernel function is used as the basis kernel of Multiple Kernel Learning (MKL) with a slack variable that can mitigate the over-fitting problem. However, the slack variable also introduces error into the result. Therefore, a soft-margin MKL is proposed to balance the margin against the error. Moreover, an iterative algorithm is used to solve for the combination coefficients and hyper-plane equations. Experimental results show that the proposed algorithm achieves an accuracy of 90% for five kinds of emotions, including happiness, sadness, anger, fear and neutral. Compared with KPCA+CCA and PIM-FSVM, the proposed algorithm has the highest accuracy.
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
Towards Seismic Tomography Based Upon Adjoint Methods
NASA Astrophysics Data System (ADS)
Tromp, J.; Liu, Q.; Tape, C.; Maggi, A.
2006-12-01
We outline the theory behind tomographic inversions based on 3D reference models, fully numerical 3D wave propagation, and adjoint methods. Our approach involves computing the Fréchet derivatives for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a spectral-element method (SEM) and a heterogeneous wave-speed model, and stored as synthetic seismograms at particular receivers for which there is data. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the differences between the data and the synthetics are time reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. These kernels may be thought of as weighted sums of measurement-specific banana-donut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, i.e., the Fréchet derivatives. A conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. Using 2D examples for Rayleigh wave phase-speed maps of southern California, we illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions, and joint source-structure inversions. We also illustrate the characteristics of these 3D finite-frequency kernels based upon adjoint simulations for a variety of global arrivals, e.g., Pdiff, P'P', and SKS, and we illustrate how the approach may be used to investigate body- and surface-wave anisotropy. In adjoint tomography any time segment in which the data and synthetics match reasonably well is suitable for measurement, and this implies a much greater number of phases per seismogram can be used compared to classical tomography, in which the sensitivity of the measurements is determined analytically for specific arrivals, e.g., P. We use an automated picking algorithm based upon short-term/long-term averages and strict phase and amplitude anomaly criteria to determine arrivals and time windows suitable for measurement. For shallow global events the algorithm typically identifies of the order of 1000 windows suitable for measurement, whereas for a deep event the number can reach 4000. For southern California earthquakes the number of phases is of the order of 100 for a magnitude 4.0 event and up to 450 for a magnitude 5.0 event. We will show examples of event kernels for both global and regional earthquakes. These event kernels form the basis of adjoint tomography.
Quantum probability and Hilbert's sixth problem
NASA Astrophysics Data System (ADS)
Accardi, Luigi
2018-04-01
With the birth of quantum mechanics, the two disciplines that Hilbert proposed to axiomatize, probability and mechanics, became entangled and a new probabilistic model arose in addition to the classical one. Thus, to meet Hilbert's challenge, an axiomatization should account deductively for the basic features of all three disciplines. This goal was achieved within the framework of quantum probability. The present paper surveys the quantum probabilistic axiomatization. This article is part of the themed issue `Hilbert's sixth problem'.
NASA Astrophysics Data System (ADS)
Fernandes, Adji Achmad Rinaldo; Solimun, Arisoesilaningsih, Endang
2017-12-01
The aim of this research is to estimate the spline in Path Analysis based on Nonparametric Regression using the Penalized Weighted Least Squares (PWLS) approach. The approach used is a Reproducing Kernel Hilbert Space on a Sobolev space. The nonparametric path analysis model is given by the equations $y_{1i} = f_{1.1}(x_{1i}) + \varepsilon_{1i}$; $y_{2i} = f_{1.2}(x_{1i}) + f_{2.2}(y_{1i}) + \varepsilon_{2i}$, $i = 1, 2, \ldots, n$. The nonparametric path analysis estimator that minimizes the PWLS criterion
$$\min_{f_{w.k} \in W_2^m[a_{w.k}, b_{w.k}],\; k=1,2} \left\{ (2n)^{-1} (\tilde{y} - \tilde{f})^{T} \Sigma^{-1} (\tilde{y} - \tilde{f}) + \sum_{k=1}^{2} \sum_{w=1}^{2} \lambda_{w.k} \int_{a_{w.k}}^{b_{w.k}} \left[ f_{w.k}^{(m)}(x_i) \right]^{2} dx_i \right\}$$
is $\hat{\tilde{f}} = A\tilde{y}$ with
$$A = T_1 (T_1^{T} U_1^{-1} \Sigma^{-1} T_1)^{-1} T_1^{T} U_1^{-1} \Sigma^{-1} + V_1 U_1^{-1} \Sigma^{-1} \left[ I - T_1 (T_1^{T} U_1^{-1} \Sigma^{-1} T_1)^{-1} T_1^{T} U_1^{-1} \Sigma^{-1} \right] + T_2 (T_2^{T} U_2^{-1} \Sigma^{-1} T_2)^{-1} T_2^{T} U_2^{-1} \Sigma^{-1} + V_2 U_2^{-1} \Sigma^{-1} \left[ I - T_2 (T_2^{T} U_2^{-1} \Sigma^{-1} T_2)^{-1} T_2^{T} U_2^{-1} \Sigma^{-1} \right]$$
Rebouças, Marina Cabral; Rodrigues, Maria do Carmo Passos; Afonso, Marcos Rodrigues Amorim
2014-07-01
The aim of this research was to develop a prebiotic beverage from a hydrosoluble extract of broken cashew nut kernels and passion fruit juice using response surface methodology in order to optimize acceptance of its sensory attributes. A 2² central composite rotatable design was used, which produced 9 formulations, which were then evaluated using different concentrations of hydrosoluble cashew nut kernel, passion fruit juice, oligofructose, and 3% sugar. The use of response surface methodology to interpret the sensory data made it possible to obtain a formulation with satisfactory acceptance which met the criteria of bifidogenic action and use of hydrosoluble cashew nut kernels by using 14% oligofructose and 33% passion fruit juice. As a result of this study, it was possible to obtain a new functional prebiotic product, which combined the nutritional and functional properties of cashew nut kernels and oligofructose with the sensory properties of passion fruit juice in a beverage with satisfactory sensory acceptance. This new product emerges as a new alternative for the industrial processing of broken cashew nut kernels, which have very low market value, enabling this sector to increase its profits. © 2014 Institute of Food Technologists®
Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.
Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao
2017-06-21
In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space where a structured kernel dictionary, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class, is established. In the discriminative term, by analyzing the classification mechanism, the correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes and restricting the coefficient vectors to be transformed into a feature space where the features are highly correlated within classes and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature need not be known, since the inner product of the discriminative feature with the kernel matrix embedded is available and is suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene and synthetic aperture radar (SAR) vehicle target recognition.
Simultaneous multiple non-crossing quantile regression estimation using kernel constraints
Liu, Yufeng; Wu, Yichao
2011-01-01
Quantile regression (QR) is a very useful statistical tool for learning the relationship between the response variable and covariates. For many applications, one often needs to estimate multiple conditional quantile functions of the response variable given covariates. Although one can estimate multiple quantiles separately, it is of great interest to estimate them simultaneously. One advantage of simultaneous estimation is that multiple quantiles can share strength among them to gain better estimation accuracy than individually estimated quantile functions. Another important advantage of joint estimation is the feasibility of incorporating simultaneous non-crossing constraints of QR functions. In this paper, we propose a new kernel-based multiple QR estimation technique, namely simultaneous non-crossing quantile regression (SNQR). We use kernel representations for QR functions and apply constraints on the kernel coefficients to avoid crossing. Both unregularised and regularised SNQR techniques are considered. Asymptotic properties such as asymptotic normality of linear SNQR and oracle properties of the sparse linear SNQR are developed. Our numerical results demonstrate the competitive performance of our SNQR over the original individual QR estimation. PMID:22190842
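For reference, the building blocks of such estimators are the check (pinball) loss for a quantile level τ and the non-crossing constraint imposed across levels (notation assumed):

```latex
\rho_\tau(r) = r\,\bigl(\tau - \mathbb{1}\{r < 0\}\bigr),
\qquad
\hat f_{\tau_j} = \arg\min_{f}\; \sum_{i=1}^{n} \rho_{\tau_j}\!\bigl(y_i - f(x_i)\bigr) + \lambda \lVert f \rVert_{\mathcal{H}}^{2},
\qquad
f_{\tau_1}(x) \le f_{\tau_2}(x) \;\; \text{for } \tau_1 < \tau_2 .
```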
NASA Astrophysics Data System (ADS)
Constantin, Lucian A.; Fabiano, Eduardo; Della Sala, Fabio
2018-05-01
Orbital-free density functional theory (OF-DFT) promises to describe the electronic structure of very large quantum systems, since its computational cost is linear in the system size. However, the OF-DFT accuracy strongly depends on the approximation made for the kinetic energy (KE) functional. To date, the most accurate KE functionals are nonlocal functionals based on the linear-response kernel of the homogeneous electron gas, i.e., the jellium model. Here, we use the linear-response kernel of the jellium-with-gap model to construct a simple nonlocal KE functional (named KGAP) which depends on the band-gap energy. In the limit of vanishing energy gap (i.e., in the case of metals), the KGAP is equivalent to the Smargiassi-Madden (SM) functional, which is accurate for metals. For a series of semiconductors (with different energy gaps), the KGAP performs much better than SM, and results are close to the state-of-the-art functionals with sophisticated density-dependent kernels.
Arbitrary-order Hilbert Spectral Analysis and Intermittency in Solar Wind Density Fluctuations
NASA Astrophysics Data System (ADS)
Carbone, Francesco; Sorriso-Valvo, Luca; Alberti, Tommaso; Lepreti, Fabio; Chen, Christopher H. K.; Němeček, Zdenek; Šafránková, Jana
2018-05-01
The properties of inertial- and kinetic-range solar wind turbulence have been investigated with the arbitrary-order Hilbert spectral analysis method, applied to high-resolution density measurements. Due to the small sample size and to the presence of strong nonstationary behavior and large-scale structures, the classical analysis in terms of structure functions may prove to be unsuccessful in detecting the power-law behavior in the inertial range, and may underestimate the scaling exponents. However, the Hilbert spectral method provides an optimal estimation of the scaling exponents, which have been found to be close to those for velocity fluctuations in fully developed hydrodynamic turbulence. At smaller scales, below the proton gyroscale, the system loses its intermittent multiscaling properties and converges to a monofractal process. The resulting scaling exponents, obtained at small scales, are in good agreement with those of classical fractional Brownian motion, indicating a long-term memory in the process, and the absence of correlations around the spectral-break scale. These results provide important constraints on models of kinetic-range turbulence in the solar wind.
Hilbert-Schmidt quantum coherence in multi-qudit systems
NASA Astrophysics Data System (ADS)
Maziero, Jonas
2017-11-01
Using Bloch's parametrization for qudits (d-level quantum systems), we write the Hilbert-Schmidt distance (HSD) between two generic n-qudit states as a Euclidean distance between two vectors of observable mean values in $\mathbb{R}^{\prod_{s=1}^{n} d_s^{2}-1}$, where $d_s$ is the dimension of qudit s. Then, applying the generalized Gell-Mann matrices to generate $SU(d_s)$, we use that result to obtain the Hilbert-Schmidt quantum coherence (HSC) of n-qudit systems. As examples, we consider in detail one-qubit, one-qutrit, two-qubit, and two copies of one-qubit states. In this last case, the possibility of controlling local and non-local coherences by tuning local populations is studied, and the contrasting behaviors of the HSC, the l₁-norm coherence, and the relative entropy of coherence in this regard are noted. We also investigate the decoherent dynamics of these coherence functions under the action of qutrit dephasing and dissipation channels. At last, we analyze the non-monotonicity of the HSD under tensor products and report the first instance of a consequence (for coherence quantification) of this kind of property of a quantum distance measure.
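For reference, the Hilbert-Schmidt distance in question and the associated coherence quantifier (taken here, as is common, as the minimal squared distance to the set I of incoherent states) are:

```latex
d_{HS}(\rho,\sigma) = \lVert \rho - \sigma \rVert_{2}
                    = \sqrt{\operatorname{Tr}\!\bigl[(\rho-\sigma)^{\dagger}(\rho-\sigma)\bigr]},
\qquad
C_{HS}(\rho) = \min_{\iota \in I}\, d_{HS}(\rho,\iota)^{2} .
```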
ADHM and the 4d quantum Hall effect
NASA Astrophysics Data System (ADS)
Barns-Graham, Alec; Dorey, Nick; Lohitsiri, Nakarin; Tong, David; Turner, Carl
2018-04-01
Yang-Mills instantons are solitonic particles in d = 4 + 1 dimensional gauge theories. We construct and analyse the quantum Hall states that arise when these particles are restricted to the lowest Landau level. We describe the ground state wavefunctions for both Abelian and non-Abelian quantum Hall states. Although our model is purely bosonic, we show that the excitations of this 4d quantum Hall state are governed by the Nekrasov partition function of a certain five dimensional supersymmetric gauge theory with Chern-Simons term. The partition function can also be interpreted as a variant of the Hilbert series of the instanton moduli space, counting holomorphic sections rather than holomorphic functions. It is known that the Hilbert series of the instanton moduli space can be rewritten using mirror symmetry of 3d gauge theories in terms of Coulomb branch variables. We generalise this approach to include the effect of a five dimensional Chern-Simons term. We demonstrate that the resulting Coulomb branch formula coincides with the corresponding Higgs branch Molien integral which, in turn, reproduces the standard formula for the Nekrasov partition function.
Construction of CASCI-type wave functions for very large active spaces.
Boguslawski, Katharina; Marti, Konrad H; Reiher, Markus
2011-06-14
We present a procedure to construct a configuration-interaction expansion containing arbitrary excitations from an underlying full-configuration-interaction-type wave function defined for a very large active space. Our procedure is based on the density-matrix renormalization group (DMRG) algorithm that provides the necessary information in terms of the eigenstates of the reduced density matrices to calculate the coefficient of any basis state in the many-particle Hilbert space. Since the dimension of the Hilbert space scales binomially with the size of the active space, a sophisticated Monte Carlo sampling routine is employed. This sampling algorithm can also construct such configuration-interaction-type wave functions from any other type of tensor network states. The configuration-interaction information obtained serves several purposes. It yields a qualitatively correct description of the molecule's electronic structure, it allows us to analyze DMRG wave functions converged for the same molecular system but with different parameter sets (e.g., different numbers of active-system (block) states), and it can be considered a balanced reference for the application of a subsequent standard multi-reference configuration-interaction method.
Baker-Akhiezer Spinor Kernel and Tau-functions on Moduli Spaces of Meromorphic Differentials
NASA Astrophysics Data System (ADS)
Kalla, C.; Korotkin, D.
2014-11-01
In this paper we study the Baker-Akhiezer spinor kernel on moduli spaces of meromorphic differentials on Riemann surfaces. We introduce the Baker-Akhiezer tau-function which is related to both the Bergman tau-function (which was studied before in the context of Hurwitz spaces and spaces of holomorphic Abelian and quadratic differentials) and the KP tau-function on such spaces. In particular, we derive variational formulas of Rauch-Ahlfors type on moduli spaces of meromorphic differentials with prescribed singularities: we use the system of homological coordinates, consisting of absolute and relative periods of the meromorphic differential, and show how to vary the fundamental objects associated to a Riemann surface (the matrix of b-periods, normalized Abelian differentials, the Bergman bidifferential, the Szegö kernel and the Baker-Akhiezer spinor kernel) with respect to these coordinates. The variational formulas encode dependence both on the moduli of the Riemann surface and on the choice of meromorphic differential (variation of the meromorphic differential while keeping the Riemann surface fixed corresponds to flows of KP type). Analyzing the global properties of the Bergman and Baker-Akhiezer tau-functions, we establish relationships between various divisor classes on the moduli spaces.
Ice cream and orbifold Riemann-Roch
NASA Astrophysics Data System (ADS)
Buckley, Anita; Reid, Miles; Zhou, Shengtian
2013-06-01
We give an orbifold Riemann-Roch formula in closed form for the Hilbert series of a quasismooth polarized n-fold (X,D), under the assumption that X is projectively Gorenstein with only isolated orbifold points. Our formula is a sum of parts each of which is integral and Gorenstein symmetric of the same canonical weight; the orbifold parts are called ice cream functions. This form of the Hilbert series is particularly useful for computer algebra, and we illustrate it on examples of K3 surfaces and Calabi-Yau 3-folds. These results apply also with higher dimensional orbifold strata (see [1] and [2]), although the precise statements are considerably trickier. We expect to return to this in future publications.
New Treatment of Strongly Anisotropic Scattering Phase Functions: The Delta-M+ Method
NASA Astrophysics Data System (ADS)
Stamnes, K. H.; Lin, Z.; Chen, N.; Fan, Y.; Li, W.; Stamnes, S.
2017-12-01
The treatment of strongly anisotropic scattering phase functions is still a challenge for accurate radiance computations. The new Delta-M+ method resolves this problem by introducing a reliable, fast, accurate, and easy-to-use Legendre expansion of the scattering phase function with modified moments. Delta-M+ is an upgrade of the widely-used Delta-M method that truncates the forward scattering cone into a Dirac-delta-function (a direct beam), where the + symbol indicates that it essentially matches moments above the first 2M terms. Compared with the original Delta-M method, Delta-M+ has the same computational efficiency, but the accuracy has been increased dramatically. Tests show that the errors for strongly forward-peaked scattering phase functions are greatly reduced. Furthermore, the accuracy and stability of radiance computations are also significantly improved by applying the new Delta-M+ method.
Gradient-based adaptation of general Gaussian kernels.
Glasmachers, Tobias; Igel, Christian
2005-10-01
Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard-margin support vector machines on toy data.
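As a rough illustration of the approach, the sketch below parameterizes a general Gaussian kernel through the matrix exponential and projects a gradient onto the trace-free subspace; the objective J is a stand-in (a real application would minimize a radius-margin bound), and the finite-difference gradient replaces an analytic one.

```python
import numpy as np
from scipy.linalg import expm

def general_gauss_kernel(X, Y, S):
    """k(x, y) = exp(-(x - y)^T A (x - y)) with A = expm(S), S symmetric.
    The exponential map keeps A symmetric positive definite for any symmetric S."""
    A = expm(S)
    D = X[:, None, :] - Y[None, :, :]           # pairwise differences
    return np.exp(-np.einsum('ijk,kl,ijl->ij', D, A, D))

def project_trace_free(G):
    """Restrict an update on S to the constant-trace subspace: removing the trace
    component fixes det(A) = exp(tr(S)) and hence the overall kernel size."""
    n = G.shape[0]
    G = 0.5 * (G + G.T)                         # keep the update symmetric
    return G - np.trace(G) / n * np.eye(n)

# Toy constrained gradient step on a scalar objective J(S) via finite differences.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
S = np.zeros((2, 2))
J = lambda S: general_gauss_kernel(X, X, S).mean()   # stand-in objective
eps, G = 1e-5, np.zeros_like(S)
for i in range(2):
    for j in range(2):
        E = np.zeros_like(S); E[i, j] = eps
        G[i, j] = (J(S + E) - J(S - E)) / (2 * eps)
S -= 0.1 * project_trace_free(G)                # one step on the trace-free manifold
```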
On the large eddy simulation of turbulent flows in complex geometry
NASA Technical Reports Server (NTRS)
Ghosal, Sandip
1993-01-01
Application of the method of Large Eddy Simulation (LES) to a turbulent flow consists of three separate steps. First, a filtering operation is performed on the Navier-Stokes equations to remove the small spatial scales. The resulting equations that describe the space time evolution of the 'large eddies' contain the subgrid-scale (sgs) stress tensor that describes the effect of the unresolved small scales on the resolved scales. The second step is the replacement of the sgs stress tensor by some expression involving the large scales - this is the problem of 'subgrid-scale modeling'. The final step is the numerical simulation of the resulting 'closed' equations for the large scale fields on a grid small enough to resolve the smallest of the large eddies, but still much larger than the fine scale structures at the Kolmogorov length. In dividing a turbulent flow field into 'large' and 'small' eddies, one presumes that a cut-off length delta can be sensibly chosen such that all fluctuations on a scale larger than delta are 'large eddies' and the remainder constitute the 'small scale' fluctuations. Typically, delta would be a length scale characterizing the smallest structures of interest in the flow. In an inhomogeneous flow, the 'sensible choice' for delta may vary significantly over the flow domain. For example, in a wall bounded turbulent flow, most statistical averages of interest vary much more rapidly with position near the wall than far away from it. Further, there are dynamically important organized structures near the wall on a scale much smaller than the boundary layer thickness. Therefore, the minimum size of eddies that need to be resolved is smaller near the wall. In general, for the LES of inhomogeneous flows, the width of the filtering kernel delta must be considered to be a function of position. If a filtering operation with a nonuniform filter width is performed on the Navier-Stokes equations, one does not in general get the standard large eddy equations. The complication is caused by the fact that a filtering operation with a nonuniform filter width in general does not commute with the operation of differentiation. This is one of the issues that we have looked at in detail as it is basic to any attempt at applying LES to complex geometry flows. Our principal findings are summarized.
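The non-commutation of nonuniform filtering and differentiation is easy to verify numerically; the following sketch (with an arbitrary filter-width profile chosen purely for illustration) compares the derivative of the filtered field with the filtered derivative.

```python
import numpy as np

# Demonstrate that filtering with a nonuniform width does not commute with d/dx.
x = np.linspace(0.0, 2 * np.pi, 800)
dx = x[1] - x[0]
u = np.sin(x) + 0.2 * np.sin(25 * x)          # "turbulent" field: large + small scales
delta = 0.05 + 0.20 * x / x[-1]               # filter width grows with x (e.g. away from a wall)

def gauss_filter(f, x, delta):
    """Gaussian filter with position-dependent width delta(x) on a uniform grid."""
    out = np.empty_like(f)
    for i, (xi, di) in enumerate(zip(x, delta)):
        w = np.exp(-((x - xi) / di) ** 2)
        out[i] = (w * f).sum() / w.sum()      # normalized discrete convolution
    return out

ddx = lambda f: np.gradient(f, dx)
commutation_error = ddx(gauss_filter(u, x, delta)) - gauss_filter(ddx(u), x, delta)
print(f"max commutation error: {np.abs(commutation_error)[50:-50].max():.3e}")  # nonzero
```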
An Internal Data Non-hiding Type Real-time Kernel and its Application to the Mechatronics Controller
NASA Astrophysics Data System (ADS)
Yoshida, Toshio
For mechatronics equipment controllers that control robots and machine tools, high-speed motion control processing is essential. Like other embedded systems, the controller software is composed of three layers on dedicated hardware: a real-time kernel layer, a middleware layer, and an application software layer. The application layer at the top consists of many tasks, and the application functions of the system are realized through cooperation among these tasks. In this paper we propose an internal data non-hiding type real-time kernel in which the task control can be customized solely by changing the program code on the task side, without any changes to the program code of the real-time kernel itself. Reducing the overhead caused by the real-time kernel's task control is necessary to speed up the motion control of mechatronics equipment, and this requires customizing the task control function. We developed the internal data non-hiding type real-time kernel ZRK to evaluate this method and applied it to the control of a multi-system automatic lathe. The speed-up of task cooperation processing was confirmed by combined task control processing in the task-side program code using ZRK.
KITTEN Lightweight Kernel 0.1 Beta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pedretti, Kevin; Levenhagen, Michael; Kelly, Suzanne
2007-12-12
The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS, and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general-purpose OS kernels.
Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Sarukkai, Sekhar R.; Mehra, Pankaj; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper presents a methodology for debugging the performance of message-passing programs on both tightly coupled and loosely coupled distributed-memory machines. The AIMS (Automated Instrumentation and Monitoring System) toolkit, a suite of software tools for measurement and analysis of performance, is introduced and its application illustrated using several benchmark programs drawn from the field of computational fluid dynamics. AIMS includes (i) Xinstrument, a powerful source-code instrumentor, which supports both Fortran77 and C as well as a number of different message-passing libraries including Intel's NX, Thinking Machines' CMMD, and PVM; (ii) Monitor, a library of timestamping and trace-collection routines that run on supercomputers (such as Intel's iPSC/860, Delta, and Paragon and Thinking Machines' CM5) as well as on networks of workstations (including Convex Cluster and SparcStations connected by a LAN); (iii) Visualization Kernel, a trace-animation facility that supports source-code clickback, simultaneous visualization of computation and communication patterns, as well as analysis of data movements; (iv) Statistics Kernel, an advanced profiling facility that associates a variety of performance data with various syntactic components of a parallel program; (v) Index Kernel, a diagnostic tool that helps pinpoint performance bottlenecks through the use of abstract indices; (vi) Modeling Kernel, a facility for automated modeling of message-passing programs that supports both simulation-based and analytical approaches to performance prediction and scalability analysis; (vii) Intrusion Compensator, a utility for recovering true performance from observed performance by removing the overheads of monitoring and their effects on the communication pattern of the program; and (viii) Compatibility Tools, which convert AIMS-generated traces into formats used by other performance-visualization tools, such as ParaGraph, Pablo, and certain AVS/Explorer modules.
Direct Images, Fields of Hilbert Spaces, and Geometric Quantization
NASA Astrophysics Data System (ADS)
Lempert, László; Szőke, Róbert
2014-04-01
Geometric quantization often produces not one Hilbert space to represent the quantum states of a classical system but a whole family H_s of Hilbert spaces, and the question arises if the spaces H_s are canonically isomorphic. Axelrod et al. (J. Diff. Geo. 33:787-902, 1991) and Hitchin (Commun. Math. Phys. 131:347-380, 1990) suggest viewing H_s as fibers of a Hilbert bundle H, introduce a connection on H, and use parallel transport to identify different fibers. Here we explore to what extent this can be done. First we introduce the notion of smooth and analytic fields of Hilbert spaces, and prove that if an analytic field over a simply connected base is flat, then it corresponds to a Hermitian Hilbert bundle with a flat connection and path independent parallel transport. Second we address a general direct image problem in complex geometry: pushing forward a Hermitian holomorphic vector bundle along a non-proper map. We give criteria for the direct image to be a smooth field of Hilbert spaces. Third we consider quantizing an analytic Riemannian manifold M by endowing TM with the family of adapted Kähler structures from Lempert and Szőke (Bull. Lond. Math. Soc. 44:367-374, 2012). This leads to a direct image problem. When M is homogeneous, we prove the direct image is an analytic field of Hilbert spaces. For certain such M, but not all, the direct image is even flat, which means that in those cases quantization is unique.
Cui, Fa; Fan, Xiaoli; Chen, Mei; Zhang, Na; Zhao, Chunhua; Zhang, Wei; Han, Jie; Ji, Jun; Zhao, Xueqiang; Yang, Lijuan; Zhao, Zongwu; Tong, Yiping; Wang, Tao; Li, Junming
2016-03-01
QTLs for kernel characteristics and tolerance to N stress were identified, and the functions of ten known genes with regard to these traits were specified. Kernel size and quality characteristics in wheat (Triticum aestivum L.) ultimately determine the end use of the grain and affect its commodity price, both of which are influenced by the application of nitrogen (N) fertilizer. This study characterized quantitative trait loci (QTLs) for kernel size and quality and examined the responses of these traits to low-N stress using a recombinant inbred line population derived from Kenong 9204 × Jing 411. Phenotypic analyses were conducted in five trials that each included low- and high-N treatments. We identified 109 putative additive QTLs for 11 kernel size and quality characteristics and 49 QTLs for tolerance to N stress, 27 and 14 of which were stable across the tested environments, respectively. These QTLs were distributed across all wheat chromosomes except for chromosomes 3A, 4D, 6D, and 7B. Eleven QTL clusters that simultaneously affected kernel size- and quality-related traits were identified. At nine locations, 25 of the 49 QTLs for N deficiency tolerance coincided with the QTLs for kernel characteristics, indicating their genetic independence. The feasibility of indirect selection of a superior genotype for kernel size and quality under high-N conditions in breeding programs designed for a lower input management system is discussed. In addition, we specified the functions of Glu-A1, Glu-B1, Glu-A3, Glu-B3, TaCwi-A1, TaSus2, TaGS2-D1, PPO-D1, Rht-B1, and Ha with regard to kernel characteristics and the sensitivities of these characteristics to N stress. This study provides useful information for the genetic improvement of wheat kernel size, quality, and resistance to N stress.
Stable Local Volatility Calibration Using Kernel Splines
NASA Astrophysics Data System (ADS)
Coleman, Thomas F.; Li, Yuying; Wang, Cheng
2010-09-01
We propose an optimization formulation using L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by the support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of the support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
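A stripped-down analogue of the 1-norm idea can be written in a few lines: represent the unknown function in a kernel basis and penalize the coefficients' 1-norm so that few "support vectors" survive. The sketch below uses scikit-learn's Lasso on synthetic data; the target "smile", the spline-generating kernel, and all names are simplifications rather than the paper's formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy analogue: expand an unknown function in a kernel basis and keep the model
# simple by penalizing the 1-norm of the coefficients (few "support vectors").
rng = np.random.default_rng(1)
K_strike = np.linspace(0.5, 1.5, 40)                   # observation sites (e.g. strikes)
centers = np.linspace(0.5, 1.5, 40)                    # kernel centers
sigma_true = 0.2 + 0.1 * np.exp(-8 * (K_strike - 1.0) ** 2)   # hypothetical target curve
obs = sigma_true + 0.002 * rng.normal(size=K_strike.size)     # noisy observations

Phi = np.exp(-((K_strike[:, None] - centers[None, :]) ** 2) / (2 * 0.1 ** 2))  # RBF basis
model = Lasso(alpha=1e-4, max_iter=50_000).fit(Phi, obs)
print("nonzero kernel coefficients:", np.sum(model.coef_ != 0), "of", centers.size)
```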
On the Hilbert-Huang Transform Theoretical Developments
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Blank, Karin; Flatley, Thomas; Huang, Norden E.; Patrick, David; Hestnes, Phyllis
2005-01-01
One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). Both carry strong a-priori assumptions about the source data, such as linearity, stationarity, and satisfaction of the Dirichlet conditions. A recent development at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), known as the Hilbert-Huang Transform (HHT), proposes a novel approach to the solution for the nonlinear class of spectrum analysis problems. Using a-posteriori data processing based on the Empirical Mode Decomposition (EMD) sifting process (algorithm), followed by the normalized Hilbert Transform of the decomposition data, the HHT allows spectrum analysis of nonlinear and nonstationary data. The EMD sifting process results in a non-constrained decomposition of a source real value data vector into a finite set of Intrinsic Mode Functions (IMF). These functions form a near orthogonal adaptive basis, a basis that is derived from the data. The IMFs can be further analyzed for spectrum interpretation by the classical Hilbert Transform. A new engineering spectrum analysis tool using HHT has been developed at NASA GSFC, the HHT Data Processing System (HHT-DPS). As the HHT-DPS has been successfully used and commercialized, new applications pose additional questions about the theoretical basis behind the HHT and EMD algorithms. Why is the fastest changing component of a composite signal being sifted out first in the EMD sifting process? Why does the EMD sifting process seemingly converge, and why does it converge rapidly? Does an IMF have a distinctive structure? Why are the IMFs near orthogonal? We address these questions and develop the initial theoretical background for the HHT. This will contribute to the development of new HHT processing options, such as real-time and 2-D processing using Field Programmable Gate Array (FPGA) computational resources, enhanced HHT synthesis, and a broadened scope of HHT applications for signal processing.
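For readers unfamiliar with the sifting step discussed above, a single EMD sift can be sketched in a few lines: interpolate envelopes through the local extrema and subtract their mean. The fixed sift count and the neglect of end effects below are simplifications of the full algorithm.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    """One EMD sifting pass: subtract the mean of the extrema envelopes.
    Repeated sifting drives the local mean toward zero, extracting the fastest
    oscillation first as an Intrinsic Mode Function (IMF)."""
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    upper = CubicSpline(t[imax], x[imax])(t)     # envelope through the maxima
    lower = CubicSpline(t[imin], x[imin])(t)     # envelope through the minima
    return x - 0.5 * (upper + lower)

t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
h = x.copy()
for _ in range(10):                               # a fixed number of sifts, for brevity
    h = sift_once(t, h)
# h now approximates the fastest IMF (the 40 Hz component is sifted out first).
```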
On Certain Theoretical Developments Underlying the Hilbert-Huang Transform
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Blank, Karin; Flatley, Thomas; Huang, Norden E.; Petrick, David; Hestness, Phyllis
2006-01-01
One of the main traditional tools used in scientific and engineering data spectral analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). Both carry strong a-priori assumptions about the source data, such as being linear and stationary and satisfying the Dirichlet conditions. A recent development at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), known as the Hilbert-Huang Transform (HHT), proposes a novel approach to the solution for the nonlinear class of spectral analysis problems. Using a-posteriori data processing based on the Empirical Mode Decomposition (EMD) sifting process (algorithm), followed by the normalized Hilbert Transform of the decomposed data, the HHT allows spectral analysis of nonlinear and nonstationary data. The EMD sifting process results in a non-constrained decomposition of a source real-value data vector into a finite set of Intrinsic Mode Functions (IMF). These functions form a nearly orthogonal, adaptive basis derived from the data. The IMFs can be further analyzed for spectrum content by using the classical Hilbert Transform. A new engineering spectral analysis tool using HHT has been developed at NASA GSFC, the HHT Data Processing System (HHT-DPS). As the HHT-DPS has been successfully used and commercialized, new applications pose additional questions about the theoretical basis behind the HHT and EMD algorithms. Why is the fastest changing component of a composite signal being sifted out first in the EMD sifting process? Why does the EMD sifting process seemingly converge, and why does it converge rapidly? Does an IMF have a distinctive structure? Why are the IMFs nearly orthogonal? We address these questions and develop the initial theoretical background for the HHT. This will contribute to the development of new HHT processing options, such as real-time and 2-D processing using Field Programmable Gate Array (FPGA) computational resources, enhanced HHT synthesis, and a broadened scope of HHT applications for signal processing.
Lévy processes on a generalized fractal comb
NASA Astrophysics Data System (ADS)
Sandev, Trifce; Iomin, Alexander; Méndez, Vicenç
2016-09-01
Comb geometry, constituted of a backbone and fingers, is one of the simplest paradigms of a two-dimensional structure where anomalous diffusion can be realized in the framework of Markov processes. However, the intrinsic properties of the structure can destroy this Markovian transport. These effects can be described by memory and spatial kernels. In particular, the fractal structure of the fingers, which is controlled by the spatial kernel in both the real and the Fourier spaces, leads to Lévy processes (Lévy flights) and superdiffusion. This generalization of fractional diffusion is described by the Riesz space fractional derivative. In the framework of this generalized fractal comb model, Lévy processes are considered, exact solutions for the probability distribution functions are obtained in terms of the Fox H-function for a variety of memory kernels, and the rate of superdiffusive spreading is studied by calculating the fractional moments. For a special form of the memory kernels, we also observe a competition between long rests and long jumps. Finally, we consider the fractal structure of the fingers controlled by a Weierstrass function, which leads to a power-law kernel in the Fourier space. This is a special case in which the second moment exists for superdiffusion in this competition between long rests and long jumps.
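The superdiffusive scaling of fractional moments described above can be checked with a quick simulation. The sketch below draws alpha-stable jump lengths for free Lévy flights (without the comb geometry) and recovers the ⟨|x|^q⟩ ~ t^(q/α) scaling; it is a plausibility check on the moment diagnostics, not a solution of the comb model.

```python
import numpy as np
from scipy.stats import levy_stable

# Lévy flights: jump lengths from an alpha-stable law; the second moment diverges,
# so spreading is quantified by fractional moments <|x|^q> with q < alpha.
alpha, n_steps, n_walkers = 1.5, 500, 1000
rng = np.random.default_rng(2)
steps = levy_stable.rvs(alpha, beta=0, size=(n_walkers, n_steps), random_state=rng)
x = np.cumsum(steps, axis=1)                       # walker trajectories

q = 0.5                                            # fractional order, q < alpha
t = np.arange(1, n_steps + 1)
mq = np.mean(np.abs(x) ** q, axis=0)               # <|x(t)|^q>
# For Lévy flights <|x|^q> ~ t^(q/alpha); check the scaling exponent by a log-log fit:
slope = np.polyfit(np.log(t[100:]), np.log(mq[100:]), 1)[0]
print(f"measured exponent {slope:.2f} vs q/alpha = {q / alpha:.2f}")
```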
Classification of Microarray Data Using Kernel Fuzzy Inference System
Kumar Rath, Santanu
2014-01-01
The DNA microarray classification technique has gained popularity in both research and practice. In real data analysis, such as microarray data, the dataset contains a huge number of insignificant and irrelevant features that tend to obscure the useful information. Feature selection retains the features with high significance and high relevance to the classes, since these determine the classification of samples into their respective classes. In this paper, the kernel fuzzy inference system (K-FIS) algorithm is applied to classify microarray data (leukemia) using the t-test as a feature selection method. Kernel functions are used to map the original data points into a higher-dimensional (possibly infinite-dimensional) feature space defined by a (usually nonlinear) function ϕ through a mathematical process called the kernel trick. This paper also presents a comparative study of classification using K-FIS along with the support vector machine (SVM) for different sets of features (genes). Performance parameters available in the literature such as precision, recall, specificity, F-measure, ROC curve, and accuracy are considered to analyze the efficiency of the classification model. The results show that the K-FIS model obtains results similar to those of the SVM model, an indication that the performance of the proposed approach depends chiefly on the kernel function. PMID:27433543
Differentiable representations of finite dimensional Lie groups in rigged Hilbert spaces
NASA Astrophysics Data System (ADS)
Wickramasekara, Sujeewa
The inceptive motivation for introducing rigged Hilbert spaces (RHS) in quantum physics in the mid-1960s was to provide the already well established Dirac formalism with a proper mathematical context. It has since become clear, however, that this mathematical framework is lissome enough to accommodate a class of solutions to the dynamical equations of quantum physics that includes some which are not possible in the normative Hilbert space theory. Among the additional solutions, in particular, are those which describe aspects of scattering and decay phenomena that have eluded orthodox quantum physics. In this light, the RHS formulation seems to provide a mathematical rubric under which various phenomenological observations and calculational techniques, commonly known in the study of resonance scattering and decay as "effective theories" (e.g., the Wigner-Weisskopf method), receive a unified theoretical foundation. These observations lead to the inference that a theory founded upon the RHS mathematics may prove to be of better utility and value in understanding quantum physical phenomena. This dissertation primarily aims to contribute to the general formalism of the RHS theory of quantum mechanics by undertaking a study of differentiable representations of finite dimensional Lie groups. In particular, it is shown that a finite dimensional operator Lie algebra G in a rigged Hilbert space can always be integrated, provided one-parameter integrability holds true for the elements of any basis for G. This result differs from and extends the well known integration theorem of E. Nelson and the subsequent works of others on unitary representations in that it does not require any assumptions on the existence of analytic vectors. Also presented here is a construction of a particular rigged Hilbert space of Hardy class functions that appears useful in formulating a relativistic version of the RHS theory of resonances and decay. As a contexture for the construction, a synopsis of the new relativistic theory is presented.
Control Transfer in Operating System Kernels
1994-05-13
…microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the… …review how I modified the Mach 3.0 kernel to use continuations. Because of Mach's message-passing microkernel structure, interprocess communication was… …critical control transfer paths; deeply-nested call chains are undesirable in any case because of the function call overhead.
Semi-supervised learning for ordinal Kernel Discriminant Analysis.
Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C
2016-12-01
Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.
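The feature-space distances used for the neighbourhood information follow directly from the kernel trick; a minimal sketch:

```python
import numpy as np

def kernel_distance_sq(k, x, y):
    """Squared distance in the feature space induced by kernel k:
    ||phi(x) - phi(y)||^2 = k(x, x) - 2 k(x, y) + k(y, y)."""
    return k(x, x) - 2 * k(x, y) + k(y, y)

rbf = lambda x, y, gamma=0.5: np.exp(-gamma * np.sum((x - y) ** 2))
x, y = np.array([0.0, 1.0]), np.array([2.0, 0.5])
print(kernel_distance_sq(rbf, x, y))   # neighbourhoods of unlabelled points can be
                                       # built from these distances instead of input-space ones
```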
Center-of-Mass Tomography and Wigner Function for Multimode Photon States
NASA Astrophysics Data System (ADS)
Dudinets, Ivan V.; Man'ko, Vladimir I.
2018-06-01
Tomographic probability representation of multimode electromagnetic field states in the scheme of center-of-mass tomography is reviewed. We obtain both the connection of the field-state Wigner function and observable Weyl symbols with the center-of-mass tomograms, and the connection of the Grönewold kernel with the center-of-mass tomographic kernel that determines the noncommutative product of the tomograms. The dual center-of-mass tomogram of the photon states is constructed and the dual tomographic kernel is obtained. Models of other generalized center-of-mass tomographies are discussed. The example of two-mode even and odd Schrödinger cat states is presented in detail.
ERIC Educational Resources Information Center
Gary, Ronald K.
2004-01-01
The concentration dependence of the ΔS term in the Gibbs free energy function is described in relation to its application to reversible reactions in biochemistry. An intuitive and non-mathematical argument for the concentration dependence of the ΔS term in the Gibbs free energy equation is derived and the applicability of the equation to…
P- and S-wave Receiver Function Imaging with Scattering Kernels
NASA Astrophysics Data System (ADS)
Hansen, S. M.; Schmandt, B.
2017-12-01
Full waveform inversion provides a flexible approach to the seismic parameter estimation problem and can account for the full physics of wave propagation using numeric simulations. However, this approach requires significant computational resources due to the demanding nature of solving the forward and adjoint problems. This issue is particularly acute for temporary passive-source seismic experiments (e.g. PASSCAL) that have traditionally relied on teleseismic earthquakes as sources resulting in a global scale forward problem. Various approximation strategies have been proposed to reduce the computational burden such as hybrid methods that embed a heterogeneous regional scale model in a 1D global model. In this study, we focus specifically on the problem of scattered wave imaging (migration) using both P- and S-wave receiver function data. The proposed method relies on body-wave scattering kernels that are derived from the adjoint data sensitivity kernels which are typically used for full waveform inversion. The forward problem is approximated using ray theory yielding a computationally efficient imaging algorithm that can resolve dipping and discontinuous velocity interfaces in 3D. From the imaging perspective, this approach is closely related to elastic reverse time migration. An energy stable finite-difference method is used to simulate elastic wave propagation in a 2D hypothetical subduction zone model. The resulting synthetic P- and S-wave receiver function datasets are used to validate the imaging method. The kernel images are compared with those generated by the Generalized Radon Transform (GRT) and Common Conversion Point stacking (CCP) methods. These results demonstrate the potential of the kernel imaging approach to constrain lithospheric structure in complex geologic environments with sufficiently dense recordings of teleseismic data. This is demonstrated using a receiver function dataset from the Central California Seismic Experiment which shows several dipping interfaces related to the tectonic assembly of this region. Figure 1. Scattering kernel examples for three receiver function phases. A) direct P-to-s (Ps), B) direct S-to-p and C) free-surface PP-to-s (PPs).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Yongbin; White, R. D.
In the calculation of the linearized Boltzmann collision operator for an inverse-square force law interaction (Coulomb interaction) F(r) = κ/r², we find that the widely used scattering-angle cutoff θ ≥ θ_min is a flawed practice, since the divergence persists after the cutoff has been made. When the correct velocity-change cutoff |v′ − v| ≥ δ_min is employed, the scattering angle can be integrated. A unified linearized Boltzmann collision operator for both inverse-square force law and rigid-sphere interactions is obtained. Like many other unified quantities such as transition moments, Fokker-Planck expansion coefficients, and energy exchange rates obtained recently [Y. B. Chang and L. A. Viehland, AIP Adv. 1, 032128 (2011)], the difference between the two kinds of interactions is characterized by a parameter γ, which is 1 for rigid-sphere interactions and −3 for inverse-square force law interactions. When the cutoff is removed by setting δ_min = 0, Hilbert's well-known kernel for rigid-sphere interactions is recovered.
NASA Astrophysics Data System (ADS)
Rachmatia, H.; Kusuma, W. A.; Hasibuan, L. S.
2017-05-01
Selection in plant breeding could be more effective and more efficient if it were based on genomic data. Genomic selection (GS) is a new approach to plant-breeding selection that exploits genomic data through a mechanism called genomic prediction (GP). Most GP models use linear methods that ignore the effects of interactions among genes and of higher-order nonlinearities. The deep belief network (DBN), one of the architectures used in deep learning, is able to model data at a high level of abstraction that captures nonlinear effects in the data. This study implemented a DBN to develop a GP model utilizing whole-genome Single Nucleotide Polymorphisms (SNPs) as data for training and testing. The case study was a set of traits in maize. The maize dataset was acquired from CIMMYT's (International Maize and Wheat Improvement Center) Global Maize program. Based on Pearson correlation, the DBN outperforms the other methods, namely reproducing kernel Hilbert space (RKHS) regression, Bayesian LASSO (BL), and best linear unbiased predictor (BLUP), in the case of allegedly non-additive traits. The DBN achieves a correlation of 0.579 on the -1 to 1 scale.
Genome-wide regression and prediction with the BGLR statistical package.
Pérez, Paulino; de los Campos, Gustavo
2014-10-01
Many modern genomic data analyses require implementing regressions where the number of parameters (p, e.g., the number of marker effects) exceeds sample size (n). Implementing these large-p-with-small-n regressions poses several statistical and computational challenges, some of which can be confronted using Bayesian methods. This approach allows integrating various parametric and nonparametric shrinkage and variable selection procedures in a unified and consistent manner. The BGLR R-package implements a large collection of Bayesian regression models, including parametric variable selection and shrinkage methods and semiparametric procedures (Bayesian reproducing kernel Hilbert spaces regressions, RKHS). The software was originally developed for genomic applications; however, the methods implemented are useful for many nongenomic applications as well. The response can be continuous (censored or not) or categorical (either binary or ordinal). The algorithm is based on a Gibbs sampler with scalar updates and the implementation takes advantage of efficient compiled C and Fortran routines. In this article we describe the methods implemented in BGLR, present examples of the use of the package, and discuss practical issues emerging in real-data analysis. Copyright © 2014 by the Genetics Society of America.
Multitask SVM learning for remote sensing data classification
NASA Astrophysics Data System (ADS)
Leiva-Murillo, Jose M.; Gómez-Chova, Luis; Camps-Valls, Gustavo
2010-10-01
Many remote sensing data processing problems are inherently constituted by several tasks that can be solved either individually or jointly. For instance, each image in a multitemporal classification setting could be taken as an individual task, but its relation to previous acquisitions should be properly considered. In such problems, different modalities of the data (temporal, spatial, angular) give rise to changes between the training and test distributions, which constitutes a difficult learning problem known as covariate shift. Multitask learning methods aim at jointly solving a set of prediction problems in an efficient way by sharing information across tasks. This paper presents a novel kernel method for multitask learning in remote sensing data classification. The proposed method alleviates the dataset shift problem by imposing cross-information in the classifiers through matrix regularization. We consider the support vector machine (SVM) as the core learner, and two regularization schemes are introduced: 1) the Euclidean distance of the predictors in the Hilbert space; and 2) the inclusion of relational operators between tasks. Experiments are conducted on the challenging remote sensing problems of cloud screening from multispectral MERIS images and landmine detection.
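One standard way to share information across tasks through the kernel itself (in the spirit of, though not identical to, the matrix regularization described above) is a task-coupled kernel of the Evgeniou-Pontil type. A toy sketch with scikit-learn's SVC follows, where the task index travels as an extra data column and mu controls how much the tasks share; the data and parameters are illustrative only.

```python
import numpy as np
from sklearn.svm import SVC

def multitask_rbf(X, T, Xp, Tp, gamma=0.5, mu=0.5):
    """K((x,t),(x',t')) = k_rbf(x,x') * (mu + (1-mu)*[t == t']).
    mu = 1 pools all tasks into one problem; mu = 0 trains tasks independently."""
    d2 = ((X[:, None, :] - Xp[None, :, :]) ** 2).sum(-1)
    task = mu + (1 - mu) * (T[:, None] == Tp[None, :])
    return np.exp(-gamma * d2) * task

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2)); T = rng.integers(0, 2, 100)   # two related tasks
y = (X[:, 0] + 0.3 * T > 0).astype(int)                      # tasks share structure
XT = np.column_stack([X, T])                                 # task index as last column
clf = SVC(kernel=lambda A, B: multitask_rbf(A[:, :-1], A[:, -1], B[:, :-1], B[:, -1]))
clf.fit(XT, y)
print("training accuracy:", clf.score(XT, y))
```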
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, D; Danielewicz, P
2002-03-15
This is the manual for a collection of programs that can be used to invert angle-averaged (i.e., one-dimensional) two-particle correlation functions. The package consists of several programs that generate kernel matrices (basically the relative wavefunction of the pair, squared), programs that generate test correlation functions from test sources of various types, and the program that actually inverts the data using the kernel matrix.
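With the kernel matrix discretized on radius and momentum grids, the basic flow of such an inversion reduces to regularized linear least squares. The sketch below uses a purely hypothetical toy kernel and source; only the shape of the computation, not the package's actual kernels or file formats, is represented.

```python
import numpy as np

# Discretized imaging equation: C(q_i) - 1 = sum_j K[i, j] S(r_j) dr.
# The kernel matrix is ill-conditioned, so a Tikhonov-regularized solve is used.
def invert_correlation(K, C, dr, lam=1e-3):
    A = K * dr
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ (C - 1.0))

# Hypothetical toy kernel and source, purely to exercise the computation:
r = np.linspace(0.5, 20, 60); q = np.linspace(5, 100, 40)
dr = r[1] - r[0]
K = np.sinc(np.outer(q, r) / 50.0)            # stand-in for |relative wavefunction|^2 - 1
S_true = np.exp(-((r - 5.0) ** 2) / 4.0)
S_true /= S_true.sum() * dr                   # normalize the source
C = 1.0 + (K * dr) @ S_true                   # forward model: synthetic correlation data
S_rec = invert_correlation(K, C, dr)          # recovered source
```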
Fission Product Release and Survivability of UN-Kernel LWR TRISO Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besmann, Theodore M; Ferber, Mattison K; Lin, Hua-Tay
2014-01-01
A thermomechanical assessment of the LWR application of TRISO fuel with UN kernels was performed. Fission product release under operational and transient temperature conditions was determined by extrapolation from range calculations and limited data from irradiated UN pellets. Both fission recoil and diffusive release were considered, and internal particle pressures were computed for both 650 and 800 μm diameter kernels as a function of buffer layer thickness. These pressures were used in conjunction with a finite element program to compute the radial and tangential stresses generated within a TRISO particle as a function of fluence. Creep and swelling of the inner and outer pyrolytic carbon layers were included in the analyses. A measure of reliability of the TRISO particle was obtained by measuring the probability of survival of the SiC barrier layer and the maximum tensile stress generated in the pyrolytic carbon layers as a function of fluence. These reliability estimates were obtained as functions of the kernel diameter, buffer layer thickness, and pyrolytic carbon layer thickness. The value of the probability of survival at the end of irradiation was inversely proportional to the maximum pressure.
NASA Astrophysics Data System (ADS)
Lee, Chung-Shuo; Chen, Yan-Yu; Yu, Chi-Hua; Hsu, Yu-Chuan; Chen, Chuin-Shan
2017-07-01
We present a semi-analytical solution of a time-history kernel for the generalized absorbing boundary condition in molecular dynamics (MD) simulations. To facilitate the kernel derivation, the concept of virtual atoms in real space that can conform with an arbitrary boundary in an arbitrary lattice is adopted. The generalized Langevin equation is regularized using eigenvalue decomposition and, consequently, an analytical expression of an inverse Laplace transform is obtained. With construction of dynamical matrices in the virtual domain, a semi-analytical form of the time-history kernel functions for an arbitrary boundary in an arbitrary lattice can be found. The time-history kernel functions for different crystal lattices are derived to show the generality of the proposed method. Non-equilibrium MD simulations in a triangular lattice with and without the absorbing boundary condition are conducted to demonstrate the validity of the solution.
Approaches to defining deltaic sustainability in the 21st century
NASA Astrophysics Data System (ADS)
Day, John W.; Agboola, Julius; Chen, Zhongyuan; D'Elia, Christopher; Forbes, Donald L.; Giosan, Liviu; Kemp, Paul; Kuenzer, Claudia; Lane, Robert R.; Ramachandran, Ramesh; Syvitski, James; Yañez-Arancibia, Alejandro
2016-12-01
Deltas are among the most productive and economically important of global ecosystems but unfortunately they are also among the most threatened by human activities. Here we discuss deltas and human impact, several approaches to defining deltaic sustainability and present a ranking of sustainability. Delta sustainability must be considered within the context of global biophysical and socioeconomic constraints that include thermodynamic limitations, scale and embeddedness, and constraints at the level of the biosphere/geosphere. The development, functioning, and sustainability of deltas are the result of external and internal inputs of energy and materials, such as sediments and nutrients, that include delta lobe development, channel switching, crevasse formation, river floods, storms and associated waves and storm surges, and tides and other ocean currents. Modern deltas developed over the past several thousand years with relatively stable global mean sea level, predictable material inputs from drainage basins and the sea, and as extremely open systems. Human activity has changed these conditions to make deltas less sustainable, in that they are unable to persist through time structurally or functionally. Deltaic sustainability can be considered from geomorphic, ecological, and economic perspectives, with functional processes at these three levels being highly interactive. Changes in this functioning can lead to either enhanced or diminished sustainability, but most changes have been detrimental. There is a growing understanding that the trajectories of global environmental change and cost of energy will make achieving delta sustainability more challenging and limit options for management. Several delta types are identified in terms of sustainability including those in arid regions, those with high and low energy-intensive management systems, deltas below sea level, tropical deltas, and Arctic deltas. Representative deltas are ranked on a sustainability range. Success in sustainable delta management will depend on utilizing natural delta functioning and an ecological engineering approach.
Seismic Imaging of VTI, HTI and TTI based on Adjoint Methods
NASA Astrophysics Data System (ADS)
Rusmanugroho, H.; Tromp, J.
2014-12-01
Recent studies show that isotropic seismic imaging based on the adjoint method reduces the low-frequency artifacts caused by diving waves, which commonly occur in two-way wave-equation migration, such as Reverse Time Migration (RTM). Here, we derive new expressions of sensitivity kernels for Vertical Transverse Isotropy (VTI) using the Thomsen parameters (ɛ, δ, γ) plus the P- and S-wave speeds (α, β), as well as via the Chen & Tromp (GJI 2005) parameters (A, C, N, L, F). For Horizontal Transverse Isotropy (HTI), these parameters depend on an azimuthal angle φ, where the tilt angle θ is equivalent to 90°, and for Tilted Transverse Isotropy (TTI), these parameters depend on both the azimuth and tilt angles. We calculate sensitivity kernels for each of these two approaches. Individual kernels ("images") are numerically constructed based on the interaction between the regular and adjoint wavefields in smoothed models, which are in practice estimated through Full-Waveform Inversion (FWI). The final image is obtained by summing all shots, which are well distributed to sample the target model properly. The impedance kernel, which is a sum of the sensitivity kernels of density and the Thomsen or Chen & Tromp parameters, looks crisp and promising for seismic imaging. The other kernels suffer from low-frequency artifacts, similar to traditional seismic imaging conditions. However, all sensitivity kernels are important for estimating the gradient of the misfit function, which, in combination with a standard gradient-based inversion algorithm, is used to minimize the objective function in FWI.
NASA Astrophysics Data System (ADS)
Vanfleteren, Diederik; Van Neck, Dimitri; Bultinck, Patrick; Ayers, Paul W.; Waroquier, Michel
2010-12-01
A double-atom partitioning of the molecular one-electron density matrix is used to describe atoms and bonds. All calculations are performed in Hilbert space. The concept of atomic weight functions (familiar from Hirshfeld analysis of the electron density) is extended to atomic weight matrices. These are constructed to be orthogonal projection operators on atomic subspaces, which has significant advantages in the interpretation of the bond contributions. In close analogy to the iterative Hirshfeld procedure, self-consistency is built in at the level of atomic charges and occupancies. The method is applied to a test set of about 67 molecules, representing various types of chemical binding. A close correlation is observed between the atomic charges and the Hirshfeld-I atomic charges.
Heavy and Heavy-Light Mesons in the Covariant Spectator Theory
NASA Astrophysics Data System (ADS)
Stadler, Alfred; Leitão, Sofia; Peña, M. T.; Biernat, Elmar P.
2018-05-01
The masses and vertex functions of heavy and heavy-light mesons, described as quark-antiquark bound states, are calculated with the Covariant Spectator Theory (CST). We use a kernel with an adjustable mixture of Lorentz scalar, pseudoscalar, and vector linear confining interaction, together with a one-gluon-exchange kernel. A series of fits to the heavy and heavy-light meson spectrum were calculated, and we discuss what conclusions can be drawn from it, especially about the Lorentz structure of the kernel. We also apply the Brodsky-Huang-Lepage prescription to express the CST wave functions for heavy quarkonia in terms of light-front variables. They agree remarkably well with light-front wave functions obtained in the Hamiltonian basis light-front quantization approach, even in excited states.
NASA Astrophysics Data System (ADS)
Cho, Jeonghyun; Han, Cheolheui; Cho, Leesang; Cho, Jinsoo
2003-08-01
This paper treats the kernel function of an integral equation that relates a known or prescribed upwash distribution to an unknown lift distribution for a finite wing. The pressure kernel functions of the singular integral equation are summarized for all speed ranges in the Laplace transform domain. The sonic kernel function has been reduced to a form that can be conveniently evaluated as a finite limit from both the subsonic and supersonic sides as the Mach number tends to one. Several examples are solved, including rectangular wings, swept wings, a supersonic transport wing, and a harmonically oscillating wing. Present results are given together with other numerical data, showing continuous behavior through the unit Mach number. Computed results are in good agreement with other numerical results.
Ghorai, Santanu; Mukherjee, Anirban; Dutta, Pranab K
2010-06-01
In this brief we have proposed the multiclass data classification by computationally inexpensive discriminant analysis through vector-valued regularized kernel function approximation (VVRKFA). VVRKFA being an extension of fast regularized kernel function approximation (FRKFA), provides the vector-valued response at single step. The VVRKFA finds a linear operator and a bias vector by using a reduced kernel that maps a pattern from feature space into the low dimensional label space. The classification of patterns is carried out in this low dimensional label subspace. A test pattern is classified depending on its proximity to class centroids. The effectiveness of the proposed method is experimentally verified and compared with multiclass support vector machine (SVM) on several benchmark data sets as well as on gene microarray data for multi-category cancer classification. The results indicate the significant improvement in both training and testing time compared to that of multiclass SVM with comparable testing accuracy principally in large data sets. Experiments in this brief also serve as comparison of performance of VVRKFA with stratified random sampling and sub-sampling.
[Study on application of SVM in prediction of coronary heart disease].
Zhu, Yue; Wu, Jianghua; Fang, Ying
2013-12-01
Based on blood pressure, plasma lipid, Glu, and UA data from physical examinations, a Support Vector Machine (SVM) was applied to distinguish coronary heart disease (CHD) patients from non-CHD individuals in a south China population, to guide further prevention and treatment of the disease. Firstly, SVM classifiers were built using a radial basis kernel function, a linear kernel function, and a polynomial kernel function, respectively. Secondly, the SVM penalty factor C and kernel parameter sigma were optimized by particle swarm optimization (PSO) and then employed to diagnose and predict CHD. By comparison with an artificial neural network with the back propagation (BP) model, linear discriminant analysis, logistic regression, and non-optimized SVM, the overall results demonstrated that the classification performance of the optimized RBF-SVM model was superior to the other classifier algorithms, with higher accuracy, sensitivity, and specificity of 94.51%, 92.31%, and 96.67%, respectively. It is concluded that SVM can be used as a valid method for assisting in the diagnosis of CHD.
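As a simple stand-in for the PSO search over (C, sigma), an exhaustive grid search over the same two RBF-SVM parameters shows what the tuning step amounts to; synthetic data replaces the physical-examination measurements.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_classification

# Grid search over the RBF-SVM penalty C and kernel width gamma (~ 1/sigma^2),
# the same two parameters the study tunes with particle swarm optimization.
X, y = make_classification(n_samples=200, n_features=6, random_state=0)  # placeholder data
grid = {"C": 10.0 ** np.arange(-2, 4), "gamma": 10.0 ** np.arange(-3, 2)}
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5, scoring="accuracy").fit(X, y)
print(search.best_params_, f"cv accuracy = {search.best_score_:.3f}")
```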
New KF-PP-SVM classification method for EEG in brain-computer interfaces.
Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian
2014-01-01
Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, based on which the within-class scatter is calculated. The scatter is then added into the kernel function of a radial basis function to construct a new kernel function. This new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory are processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme as opposed to the KF-SVM, PP-SVM, and SVM schemes are 2.49%, 5.83%, and 6.49%, respectively.
An orthogonal oriented quadrature hexagonal image pyramid
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J., Jr.
1987-01-01
An image pyramid has been developed with basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The pyramid operates on a hexagonal sample lattice. The set of seven basis functions consist of three even high-pass kernels, three odd high-pass kernels, and one low-pass kernel. The three even kernels are identified when rotated by 60 or 120 deg, and likewise for the odd. The seven basis functions occupy a point and a hexagon of six nearest neighbors on a hexagonal sample lattice. At the lowest level of the pyramid, the input lattice is the image sample lattice. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing sq rt 7 larger than the previous level, so that the number of coefficients is reduced by a factor of 7 at each level. The relationship between this image code and the processing architecture of the primate visual cortex is discussed.
Hybrid approach of selecting hyperparameters of support vector machine for regression.
Jeng, Jin-Tsong
2006-06-01
To select the hyperparameters of the support vector machine for regression (SVR), a hybrid approach is proposed to determine the kernel parameter of the Gaussian kernel function and the epsilon value of Vapnik's epsilon-insensitive loss function. The proposed hybrid approach combines a competitive agglomeration (CA) clustering algorithm and a repeated SVR (RSVR) approach. Since the CA clustering algorithm finds the nearly "optimal" number of clusters and the centers of clusters in the clustering process, it is applied to select the Gaussian kernel parameter. Additionally, an RSVR approach that relies on the standard deviation of the training error is proposed to obtain the epsilon in the loss function. Finally, two functions, one real data set (i.e., a time series of the quarterly unemployment rate for West Germany), and a nonlinear plant identification problem are used to verify the usefulness of the hybrid approach.
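The repeated-SVR half of the approach can be sketched directly: fit, read epsilon off the spread of the training residuals, and refit until it stabilizes. The CA clustering step for the kernel parameter is omitted below (gamma is simply fixed), and all constants are illustrative.

```python
import numpy as np
from sklearn.svm import SVR

# Repeated-SVR idea: tie epsilon to the spread of the training error and iterate.
rng = np.random.default_rng(5)
X = np.sort(rng.uniform(-3, 3, 120))[:, None]
y = np.sinc(X.ravel()) + 0.05 * rng.normal(size=120)

eps = 0.5                                     # deliberately poor starting value
for _ in range(5):
    svr = SVR(kernel="rbf", gamma=1.0, C=10.0, epsilon=eps).fit(X, y)
    residuals = y - svr.predict(X)
    eps_new = np.std(residuals)               # epsilon from the training-error spread
    if abs(eps_new - eps) < 1e-3:             # stop once epsilon has stabilized
        break
    eps = eps_new
print(f"selected epsilon ~ {eps:.3f}")
```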
Equation for the Nakanishi Weight Function Using the Inverse Stieltjes Transform
NASA Astrophysics Data System (ADS)
Karmanov, V. A.; Carbonell, J.; Frederico, T.
2018-05-01
The bound state Bethe-Salpeter amplitude was expressed by Nakanishi in terms of a smooth weight function g. By using the generalized Stieltjes transform, we derive an integral equation for the Nakanishi function g for the bound state case. It has the standard form g = V̂g, where V̂ is a two-dimensional integral operator. The prescription for obtaining the kernel V̂ starting with the kernel K of the Bethe-Salpeter equation is given.
New Fukui, dual and hyper-dual kernels as bond reactivity descriptors.
Franco-Pérez, Marco; Polanco-Ramírez, Carlos-A; Ayers, Paul W; Gázquez, José L; Vela, Alberto
2017-06-21
We define three new linear response indices with promising applications for bond reactivity using the mathematical framework of τ-CRT (finite temperature chemical reactivity theory). The τ-Fukui kernel is defined as the ratio between the fluctuations of the average electron density at two different points in space and the fluctuations in the average electron number, and is designed to integrate to the finite-temperature definition of the electronic Fukui function. When this kernel is condensed, it can be interpreted as a site-reactivity descriptor of the boundary region between two atoms. The τ-dual kernel corresponds to the first-order response of the Fukui kernel and is designed to integrate to the finite-temperature definition of the dual descriptor; it indicates the ambiphilic reactivity of a specific bond and enriches the traditional dual descriptor by allowing one to distinguish between the electron-accepting and electron-donating processes. Finally, the τ-hyper-dual kernel is defined as the second-order derivative of the Fukui kernel and is proposed as a measure of the strength of ambiphilic bonding interactions. Although these quantities have not been proposed previously, our results for the τ-Fukui kernel and the τ-dual kernel can be derived in the zero-temperature formulation of chemical reactivity theory with, among other things, the widely used parabolic interpolation model.
Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C
Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both choices depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses some power when compared to the optimal kernel for a particular scenario but has much greater power than poor choices.
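The kernel view of the test is easy to make concrete: the SKAT-type score statistic compares trait similarity with genotype similarity through Q = r^T K r, and MK-SKAT's contribution is to evaluate it across several candidate kernels. The sketch below computes the statistic for three common kernel choices on simulated data; the perturbation procedure that turns these into a single p-value is not shown, and the kernels are textbook examples rather than the paper's exact set.

```python
import numpy as np
from scipy.stats import beta

def skat_q(y, K):
    """SKAT-type score statistic: similarity in trait vs similarity in genotype.
    Q = r^T K r with r the residuals from the null model (here: intercept only)."""
    r = y - y.mean()
    return r @ K @ r

rng = np.random.default_rng(6)
G = rng.binomial(2, 0.05, size=(500, 30)).astype(float)   # rare-variant genotypes
y = G[:, :5].sum(1) * 0.3 + rng.normal(size=500)          # a few causal variants

maf = G.mean(0) / 2
w = beta.pdf(maf, 1, 25)                                   # up-weight rarer variants
kernels = {
    "linear":   G @ G.T,
    "weighted": (G * w) @ (G * w).T,
    "IBS-ish":  2 * G.shape[1] - np.abs(G[:, None, :] - G[None, :, :]).sum(-1),
}
# MK-SKAT's idea: evaluate the statistic across several candidate kernels.
for name, K in kernels.items():
    print(name, f"Q = {skat_q(y, K):.1f}")   # p-values would come from perturbation
```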
Cid, Jaime A; von Davier, Alina A
2015-05-01
Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
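The continuization step itself is a one-liner per kernel. The sketch below smooths a discrete score distribution (with an artificial spike at the low end) using Gaussian and Epanechnikov kernels, the latter's bounded support being one motivation for it as a boundary-bias alternative; the score range and probabilities are placeholders.

```python
import numpy as np

# Continuize a discrete score distribution by kernel smoothing at bandwidth h.
scores = np.arange(0, 41)                                    # possible test scores
probs = np.random.default_rng(7).dirichlet(np.ones(41))      # placeholder probabilities
probs[0] += 0.05; probs /= probs.sum()                       # a "spike" at the low end

gauss = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
epan = lambda u: np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)  # bounded support

def continuize(x, kernel, h):
    u = (x[:, None] - scores[None, :]) / h
    return (probs[None, :] * kernel(u)).sum(1) / h           # mixture of kernels

x = np.linspace(-3, 43, 500)
f_gauss, f_epan = continuize(x, gauss, 1.5), continuize(x, epan, 1.5)
# The Epanechnikov kernel's bounded support limits how much probability mass
# leaks past the score-range boundaries, unlike the unbounded Gaussian.
```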
Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies
Manitz, Juliane; Burger, Patricia; Amos, Christopher I.; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike
2017-01-01
The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility. PMID:28785300
Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies.
Friedrichs, Stefanie; Manitz, Juliane; Burger, Patricia; Amos, Christopher I; Risch, Angela; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike; Hofner, Benjamin
2017-01-01
The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility.
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
NASA Astrophysics Data System (ADS)
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
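A highly simplified, single-frame illustration of the underlying idea (maximum-likelihood coefficients for a linear, nonnegative basis model under a Poisson likelihood, found with EM-type multiplicative updates) might look as follows in Python; the matrices are hypothetical stand-ins, and the paper's full spatio-temporal factorization is not reproduced.

    import numpy as np

    def mlem_basis(P, B, y, n_iter=50):
        # P: (n_bins, n_vox) system matrix; B: (n_vox, n_basis) nonnegative
        # spatial basis (e.g. kernel columns); y: measured counts.
        # Model: y ~ Poisson(P @ B @ theta), theta >= 0.
        A = P @ B                                # composite forward model
        sens = A.sum(axis=0)                     # sensitivity image (A^T 1)
        theta = np.ones(A.shape[1])
        for _ in range(n_iter):
            proj = A @ theta
            ratio = y / np.maximum(proj, 1e-12)  # avoid division by zero
            theta *= (A.T @ ratio) / np.maximum(sens, 1e-12)
        return B @ theta                         # reconstructed image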
On a canonical quantization of 3D Anti de Sitter pure gravity
NASA Astrophysics Data System (ADS)
Kim, Jihun; Porrati, Massimo
2015-10-01
We perform a canonical quantization of pure gravity on AdS3 using as a technical tool its equivalence at the classical level with a Chern-Simons theory with gauge group SL(2,R) × SL(2,R). We first quantize the theory canonically on an asymptotically AdS space, which is topologically the real line times a Riemann surface with one connected boundary. Using the "constrain first" approach we reduce canonical quantization to quantization of orbits of the Virasoro group and Kähler quantization of Teichmüller space. After explicitly computing the Kähler form for the torus with one boundary component and after extending that result to higher genus, we recover known results, such as that wave functions of SL(2,R) Chern-Simons theory are conformal blocks. We find new restrictions on the Hilbert space of pure gravity by imposing invariance under large diffeomorphisms and normalizability of the wave function. The Hilbert space of pure gravity is shown to be the target space of Conformal Field Theories with continuous spectrum and a lower bound on operator dimensions. A projection defined by topology-changing amplitudes in Euclidean gravity is proposed. It defines an invariant subspace that allows for a dual interpretation in terms of a Liouville CFT. Problems and features of the CFT dual are assessed and a new definition of the Hilbert space, exempt from those problems, is proposed in the case of highly-curved AdS3.
Source imaging of potential fields through a matrix space-domain algorithm
NASA Astrophysics Data System (ADS)
Baniamerian, Jamaledin; Oskooi, Behrooz; Fedi, Maurizio
2017-01-01
Imaging of potential fields yields a fast 3D representation of the source distribution of potential fields. Imaging methods are all based on multiscale methods allowing the source parameters of potential fields to be estimated from a simultaneous analysis of the field at various scales or, in other words, at many altitudes. Accuracy in performing upward continuation and differentiation of the field therefore has a key role for this class of methods. We here describe an accurate method for performing upward continuation and vertical differentiation in the space domain. We perform a direct discretization of the integral equations for upward continuation and the Hilbert transform; from these equations we then define matrix operators performing the transformation, which are symmetric (upward continuation) or anti-symmetric (differentiation), respectively. Thanks to these properties, just the first row of each matrix needs to be computed, which decreases the computational cost dramatically. Our approach allows a simple procedure, with the advantage of not involving large data extension or tapering, as is instead required for Fourier-domain computation. It also allows level-to-drape upward continuation and a stable differentiation at high frequencies; finally, the upward continuation and differentiation kernels may be merged into a single kernel. The accuracy of our approach is shown to be important for multiscale algorithms, such as the continuous wavelet transform or the DEXP (depth from extreme points) method, because border errors, which tend to propagate strongly at the largest scales, are radically reduced. The application of our algorithm to synthetic and real-case gravity and magnetic data sets confirms the accuracy of our space-domain strategy over FFT algorithms and standard convolution procedures.
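As a sketch of the matrix space-domain idea for the profile (2D) case, the upward-continuation operator can be built from its first row alone because it is symmetric Toeplitz; the Poisson-kernel discretization below is an illustrative assumption, not the authors' exact quadrature.

    import numpy as np
    from scipy.linalg import toeplitz

    def upward_continuation_matrix(n, dx, h):
        # Discretizes the profile-case Poisson kernel h / (pi * (x^2 + h^2)).
        # Only the first row is computed; symmetry gives the full operator.
        x = np.arange(n) * dx
        row = (h / np.pi) / (x**2 + h**2) * dx   # first row of the operator
        return toeplitz(row)                     # symmetric Toeplitz matrix

    # illustrative use: continue a 256-sample profile up by one station spacing
    field = np.random.randn(256)
    U = upward_continuation_matrix(256, 1.0, 1.0)
    field_up = U @ field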
A robust background regression based score estimation algorithm for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei
2016-12-01
Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may obscure the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression-based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is a label-assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to form the foundation of the regression. Furthermore, a manifold regularization term, which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly added to the RBR procedure. After this, a paired-dataset based k-nn score estimation method is applied to the robust background and potential anomaly datasets to produce the detection output. The experimental results show that RBRSE achieves better ROC curves, AUC values, and background-anomaly separation than some of the other state-of-the-art anomaly detection methods, and is easy to implement in practice.
Hilbert's 'Foundations of Physics': Gravitation and electromagnetism within the axiomatic method
NASA Astrophysics Data System (ADS)
Brading, K. A.; Ryckman, T. A.
2008-01-01
In November and December 1915, Hilbert presented two communications to the Göttingen Academy of Sciences under the common title 'The Foundations of Physics'. Versions of each eventually appeared in the Nachrichten of the Academy. Hilbert's first communication has received significant reconsideration in recent years, following the discovery of printer's proofs of this paper, dated 6 December 1915. The focus has been primarily on the 'priority dispute' over the Einstein field equations. Our contention, in contrast, is that the discovery of the December proofs makes it possible to see the thematic linkage between the material that Hilbert cut from the published version of the first communication and the content of the second, as published in 1917. The latter has been largely either disregarded or misinterpreted, and our aim is to show that (a) Hilbert's two communications should be regarded as part of a wider research program within the overarching framework of 'the axiomatic method' (as Hilbert expressly stated was the case), and (b) the second communication is a fine and coherent piece of work within this framework, whose principal aim is to address an apparent tension between general invariance and causality (in the precise sense of Cauchy determination), pinpointed in Theorem I of the first communication. This is not the same problem as that found in Einstein's 'hole argument'-something that, we argue, never confused Hilbert.
Chemical components of cold pressed kernel oils from different Torreya grandis cultivars.
He, Zhiyong; Zhu, Haidong; Li, Wangling; Zeng, Maomao; Wu, Shengfang; Chen, Shangwei; Qin, Fang; Chen, Jie
2016-10-15
The chemical compositions of cold pressed kernel oils of seven Torreya grandis cultivars from China were analyzed in this study. The contents of the chemical components of T. grandis kernels and kernel oils varied to different extents with the cultivar. The T. grandis kernels contained relatively high oil and protein content (45.80-53.16% and 10.34-14.29%, respectively). The kernel oils were rich in unsaturated fatty acids including linoleic (39.39-47.77%), oleic (30.47-37.54%) and eicosatrienoic acid (6.78-8.37%). The kernel oils contained abundant bioactive substances such as tocopherols (0.64-1.77 mg/g) consisting of α-, β-, γ- and δ-isomers; sterols including β-sitosterol (0.90-1.29 mg/g), campesterol (0.06-0.32 mg/g) and stigmasterol (0.04-0.18 mg/g); and polyphenols (9.22-22.16 μg GAE/g). The results revealed that T. grandis kernel oils possess potentially important nutritional and health benefits and could be used as edible oils or as functional ingredients in the food industry. Copyright © 2016 Elsevier Ltd. All rights reserved.
An Alternative to the Gauge Theoretic Setting
NASA Astrophysics Data System (ADS)
Schroer, Bert
2011-10-01
The standard formulation of quantum gauge theories results from the Lagrangian (functional integral) quantization of classical gauge theories. A more intrinsic quantum-theoretical approach in the spirit of Wigner's representation theory shows that there is a fundamental clash between the pointlike localization of zero-mass (vector, tensor) potentials and the Hilbert space (positivity, unitarity) structure of QT. The quantization approach has no other way than to stay with pointlike localization and sacrifice the Hilbert space, whereas the approach built on the intrinsic quantum concept of modular localization keeps the Hilbert space and trades the conflict-creating pointlike generation for the tightest consistent localization: semiinfinite spacelike string localization. Whereas these potentials in the presence of interactions stay quite close to the associated pointlike field strengths, the interacting matter fields to which they are coupled bear the brunt of the nonlocal aspect, in that they are string-generated in a way which cannot be undone by any differentiation. The new stringlike approach to gauge theory also revives the idea of a Schwinger-Higgs screening mechanism as a deeper and less metaphoric description of the Higgs spontaneous symmetry breaking and its accompanying tale about "God's particle" and its mass generation for all the other particles.
NASA Astrophysics Data System (ADS)
Barnhart, B. L.; Eichinger, W. E.; Prueger, J. H.
2010-12-01
Hilbert-Huang transform (HHT) is a relatively new data analysis tool which is used to analyze nonstationary and nonlinear time series data. It consists of an algorithm, called empirical mode decomposition (EMD), which extracts the cyclic components embedded within time series data, as well as Hilbert spectral analysis (HSA) which displays the time and frequency dependent energy contributions from each component in the form of a spectrogram. The method can be considered a generalized form of Fourier analysis which can describe the intrinsic cycles of data with basis functions whose amplitudes and phases may vary with time. The HHT will be introduced and compared to current spectral analysis tools such as Fourier analysis, short-time Fourier analysis, wavelet analysis and Wigner-Ville distributions. A number of applications are also presented which demonstrate the strengths and limitations of the tool, including analyzing sunspot number variability and total solar irradiance proxies as well as global averaged temperature and carbon dioxide concentration. Also, near-surface atmospheric quantities such as temperature and wind velocity are analyzed to demonstrate the nonstationarity of the atmosphere.
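The HSA step can be illustrated in a few lines of Python: given an intrinsic mode function already extracted by EMD (the decomposition itself is not shown), the analytic signal yields instantaneous amplitude and frequency. The test signal below is a hypothetical chirp, not one of the datasets discussed above.

    import numpy as np
    from scipy.signal import hilbert

    def hilbert_spectrum(imf, fs):
        # Instantaneous amplitude/frequency of one IMF via the Hilbert transform
        analytic = hilbert(imf)
        amplitude = np.abs(analytic)
        phase = np.unwrap(np.angle(analytic))
        freq = np.diff(phase) / (2.0 * np.pi) * fs   # instantaneous frequency (Hz)
        return amplitude[:-1], freq

    fs = 100.0
    t = np.arange(0, 10, 1 / fs)
    chirp = np.cos(2 * np.pi * (1 + 0.2 * t) * t)    # nonstationary test signal
    amp, freq = hilbert_spectrum(chirp, fs)

Unlike a Fourier spectrum, the returned amplitude and frequency are both functions of time, which is what makes the spectrogram representation described above possible.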
The resolvent of singular integral equations. [of kernel functions in mixed boundary value problems
NASA Technical Reports Server (NTRS)
Williams, M. H.
1977-01-01
The investigation reported is concerned with the construction of the resolvent for any given kernel function. In problems with ill-behaved inhomogeneous terms as, for instance, in the aerodynamic problem of flow over a flapped airfoil, direct numerical methods become very difficult. A description is presented of a solution method by resolvent which can be employed in such problems.
Kernel Extended Real-Valued Negative Selection Algorithm (KERNSA)
2013-06-01
are discarded, which is similar to how T-cells function in the BIS. An unlabeled, future sample is considered non-self if any detectors match it.
Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila
2018-05-07
Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric and positive, always provides 1.0 for self-similarity, and can directly be used with Support Vector Machines (SVMs) in classification problems, contrary to the normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly well suited to big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
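A toy Python sketch of the idea follows: parse a sequence with LZW to collect its code words, then compare two sequences through their code-word counts. The cosine similarity used here is only a stand-in for the paper's kernel; it merely shares the properties of symmetry and unit self-similarity.

    from collections import Counter
    import math

    def lzw_phrases(seq):
        # Phrases added to the LZW dictionary while parsing seq (one pass)
        dictionary = set(seq)          # start from the observed alphabet
        phrases, w = Counter(), ""
        for c in seq:
            if w + c in dictionary:
                w += c
            else:
                dictionary.add(w + c)
                phrases[w] += 1
                w = c
        if w:
            phrases[w] += 1
        return phrases

    def lzw_similarity(a, b):
        # Cosine similarity over LZW phrase counts; 1.0 for self-similarity
        pa, pb = lzw_phrases(a), lzw_phrases(b)
        dot = sum(pa[k] * pb[k] for k in pa)
        return dot / math.sqrt(sum(v * v for v in pa.values()) *
                               sum(v * v for v in pb.values()))

    print(lzw_similarity("MKVLILACLVALALA", "MKVLILACLVALALA"))  # 1.0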
NASA Astrophysics Data System (ADS)
Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko
2018-04-01
Brain Computer Interface (BCI) development can be a challenge for robotic, prosthetic, and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) based algorithm to detect event-related desynchronization patterns. Building on well-known previous work in this area, features are extracted by the filter bank with common spatial pattern (FBCSP) method and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, application of the radial basis function (RBF) as a mapping kernel of the linear discriminant analysis (KLDA) method on the weighted features allows the transfer of data into a higher dimension for more discriminated data scattering by the RBF kernel. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. The BCI Competition III, IVa data set is used to evaluate the algorithm for detecting right hand and foot imagery movement patterns. Results show that the combination of KLDA with the SVM-GRBF classifier yields 8.9% and 14.19% improvements in accuracy and robustness, respectively. For all the subjects, it is concluded that mapping the CSP features into a higher dimension by RBF and utilizing GRBF as a kernel of SVM improve the accuracy and reliability of the proposed method.
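For concreteness, an SVM with a generalized RBF kernel can be assembled with a custom kernel callable; the GRBF form exp(-gamma * d^beta) used below is one common variant, and the feature matrix is a stand-in for the weighted CSP features, so both are assumptions rather than the paper's exact setup.

    import numpy as np
    from sklearn.svm import SVC

    def grbf_kernel(X, Y, gamma=0.5, beta=1.5):
        # Generalized RBF: exp(-gamma * ||x - y||^beta); beta = 2 recovers
        # the ordinary Gaussian RBF kernel.
        d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
        return np.exp(-gamma * d**beta)

    X = np.random.randn(40, 6)      # stand-in for weighted CSP features
    y = np.repeat([0, 1], 20)       # right-hand vs foot imagery labels
    clf = SVC(kernel=grbf_kernel).fit(X, y)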
Characteristics and Mechanisms of Low-Level Jets in the Yangtze River Delta of China
NASA Astrophysics Data System (ADS)
Wei, W.; Wu, B. G.; Ye, X. X.; Wang, H. X.; Zhang, H. S.
2013-12-01
A dataset obtained using a wind-profile radar located at the Yangtze River Delta in China (N, E) in 2009 was used to investigate the characteristics and evolution of low-level jets (LLJs) along the east China coast. The study investigated the daily and seasonal structures of LLJs as well as several possible causes. A total of 1,407 1-h LLJ periods were detected based on an adaptive definition that enabled determination of four LLJ categories. The majority (77%) of LLJs were found to have speeds below 14.0 m s^-1 (maximum of 34.6 m s^-1) and to occur at an average altitude below 600 m (76% of the observed LLJs). The dominant direction of the LLJs was from the south-south-west, which accounted for nearly 32%, with the second most common wind direction ranging from to , albeit with a number of stronger LLJs from the west-south-west. A comparison of LLJs and South-west Jets revealed that the frequencies of occurrence in summer are totally different. Results also revealed that in spring and summer, most LLJs originate from the south-south-west, whereas in autumn and winter, north-east is the dominant direction of origin. The peak heights of LLJs tended to be higher in winter than in other seasons. The horizontal wind speed and peak height of the LLJs displayed pronounced diurnal cycles. The Hilbert-Huang transform technique was applied to demonstrate that the intrinsic mode functions have a cycle of nearly 23 h at levels below 800 m, and the instantaneous amplitudes of inertial events (0.0417-0.0476 h^-1 frequencies) have large values at 300-600 m. The variations in the occurrences of LLJs suggested connections between the formation mechanisms of LLJs and the South-west Jet stream, the steady occupation of a synoptic-scale pressure system, and land-sea temperature contrasts.
NASA Astrophysics Data System (ADS)
Chandran, A.; Schulz, Marc D.; Burnell, F. J.
2016-12-01
Many phases of matter, including superconductors, fractional quantum Hall fluids, and spin liquids, are described by gauge theories with constrained Hilbert spaces. However, thermalization and the applicability of quantum statistical mechanics has primarily been studied in unconstrained Hilbert spaces. In this paper, we investigate whether constrained Hilbert spaces permit local thermalization. Specifically, we explore whether the eigenstate thermalization hypothesis (ETH) holds in a pinned Fibonacci anyon chain, which serves as a representative case study. We first establish that the constrained Hilbert space admits a notion of locality by showing that the influence of a measurement decays exponentially in space. This suggests that the constraints are no impediment to thermalization. We then provide numerical evidence that ETH holds for the diagonal and off-diagonal matrix elements of various local observables in a generic disorder-free nonintegrable model. We also find that certain nonlocal observables obey ETH.
Inference of Spatio-Temporal Functions Over Graphs via Multikernel Kriged Kalman Filtering
NASA Astrophysics Data System (ADS)
Ioannidis, Vassilis N.; Romero, Daniel; Giannakis, Georgios B.
2018-06-01
Inference of space-time varying signals on graphs emerges naturally in a plethora of network science related applications. A frequently encountered challenge pertains to reconstructing such dynamic processes, given their values over a subset of vertices and time instants. The present paper develops a graph-aware kernel-based kriged Kalman filter that accounts for the spatio-temporal variations, and offers efficient online reconstruction, even for dynamically evolving network topologies. The kernel-based learning framework bypasses the need for statistical information by capitalizing on the smoothness that graph signals exhibit with respect to the underlying graph. To address the challenge of selecting the appropriate kernel, the proposed filter is combined with a multi-kernel selection module. Such a data-driven method selects a kernel attuned to the signal dynamics on-the-fly within the linear span of a pre-selected dictionary. The novel multi-kernel learning algorithm exploits the eigenstructure of Laplacian kernel matrices to reduce computational complexity. Numerical tests with synthetic and real data demonstrate the superior reconstruction performance of the novel approach relative to state-of-the-art alternatives.
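As a sketch of the kind of Laplacian kernel whose eigenstructure such a method can reuse, a graph diffusion kernel can be computed once from the Laplacian eigendecomposition; the adjacency matrix and bandwidth below are hypothetical, and this is not the paper's multi-kernel dictionary.

    import numpy as np

    def laplacian_kernel(A, sigma=1.0):
        # Diffusion kernel on a graph: K = U exp(-sigma^2 * Lambda / 2) U^T.
        # The eigendecomposition is done once and can be reused across kernels.
        L = np.diag(A.sum(axis=1)) - A           # combinatorial Laplacian
        lam, U = np.linalg.eigh(L)
        return U @ np.diag(np.exp(-sigma**2 * lam / 2)) @ U.T

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], float)
    K = laplacian_kernel(A)                      # kernel for graph-signal kriging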
Incorporation of memory effects in coarse-grained modeling via the Mori-Zwanzig formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Zhen; Bian, Xin; Karniadakis, George Em, E-mail: george-karniadakis@brown.edu
2015-12-28
The Mori-Zwanzig formalism for coarse-graining a complex dynamical system typically introduces memory effects. The Markovian assumption of delta-correlated fluctuating forces is often employed to simplify the formulation of coarse-grained (CG) models and numerical implementations. However, when the time scales of a system are not clearly separated, the memory effects become strong and the Markovian assumption becomes inaccurate. To this end, we incorporate memory effects into CG modeling by preserving non-Markovian interactions between CG variables, and the memory kernel is evaluated directly from microscopic dynamics. For a specific example, molecular dynamics (MD) simulations of star polymer melts are performed while the corresponding CG system is defined by grouping many bonded atoms into single clusters. Then, the effective interactions between CG clusters as well as the memory kernel are obtained from the MD simulations. The constructed CG force field with a memory kernel leads to a non-Markovian dissipative particle dynamics (NM-DPD). Quantitative comparisons between the CG models with Markovian and non-Markovian approximations indicate that including the memory effects using NM-DPD yields similar results as the Markovian-based DPD if the system has clear time scale separation. However, for systems with small separation of time scales, NM-DPD can reproduce correct short-time properties that are related to how the system responds to high-frequency disturbances, which cannot be captured by the Markovian-based DPD model.
Asymptotic expansions of the kernel functions for line formation with continuous absorption
NASA Technical Reports Server (NTRS)
Hummer, D. G.
1991-01-01
Asymptotic expressions are obtained for the kernel functions M2(tau, a, beta) and K2(tau, a, beta) appearing in the theory of line formation with complete redistribution over a Voigt profile with damping parameter a, in the presence of a source of continuous opacity parameterized by beta. For a greater than 0, each coefficient in the asymptotic series is expressed as the product of analytic functions of a and beta. For Doppler broadening, only the leading term can be evaluated analytically.
Miao, Jun; Wong, Wilbur C K; Narayan, Sreenath; Wilson, David L
2011-11-01
Partially parallel imaging (PPI) greatly accelerates MR imaging by using surface coil arrays and under-sampling k-space. However, the reduction factor (R) in PPI is theoretically constrained by the number of coils (NC). A symmetrically shaped kernel is typically used, but this often prevents even the theoretically possible R from being achieved. Here, the authors propose a kernel design method to accelerate PPI faster than R = NC. K-space data demonstrates an anisotropic pattern that is correlated with the object itself and with the asymmetry of the coil sensitivity profile, which is caused by coil placement and B1 inhomogeneity. From spatial analysis theory, reconstruction of such a pattern is best achieved by a signal-dependent anisotropic shape kernel. As a result, the authors propose the use of asymmetric kernels to improve k-space reconstruction. The authors fit a bivariate Gaussian function to the local signal magnitude of each coil, then threshold this function to extract the kernel elements. A perceptual difference model (Case-PDM) was employed to quantitatively evaluate image quality. A MR phantom experiment showed that k-space anisotropy increased as a function of magnetic field strength. The authors tested a K-spAce Reconstruction with AnisOtropic KErnel support ("KARAOKE") algorithm with both MR phantom and in vivo data sets, and compared the reconstructions to those produced by GRAPPA, a popular PPI reconstruction method. By exploiting k-space anisotropy, KARAOKE was able to better preserve edges, which is particularly useful for cardiac imaging and motion correction, while GRAPPA failed at a high R near or exceeding NC. KARAOKE performed comparably to GRAPPA at low Rs. As a rule of thumb, KARAOKE reconstruction should always be used for higher quality k-space reconstruction, particularly when PPI data is acquired at high Rs and/or high field strength.
Schwinger-Keldysh superspace in quantum mechanics
NASA Astrophysics Data System (ADS)
Geracie, Michael; Haehl, Felix M.; Loganayagam, R.; Narayan, Prithvi; Ramirez, David M.; Rangamani, Mukund
2018-05-01
We examine, in a quantum mechanical setting, the Hilbert space representation of the Becchi, Rouet, Stora, and Tyutin (BRST) symmetry associated with Schwinger-Keldysh path integrals. This structure had been postulated to encode important constraints on influence functionals in coarse-grained systems with dissipation, or in open quantum systems. Operationally, this entails uplifting the standard Schwinger-Keldysh two-copy formalism into superspace by appending BRST ghost degrees of freedom. These statements were previously argued at the level of the correlation functions. We provide herein a complementary perspective by working out the Hilbert space structure explicitly. Our analysis clarifies two crucial issues not evident in earlier works: first, certain background ghost insertions necessary to reproduce the correct Schwinger-Keldysh correlators arise naturally, and, second, the Schwinger-Keldysh difference operators are systematically dressed by the ghost bilinears, which turn out to be necessary to give rise to a consistent operator algebra. We also elaborate on the structure of the final state (which is BRST closed) and the future boundary condition of the ghost fields.
A Precise Drunk Driving Detection Using Weighted Kernel Based on Electrocardiogram.
Wu, Chung Kit; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei
2016-05-09
Globally, 1.2 million people die and 50 million people are injured annually due to traffic accidents. These traffic accidents cost $500 billion. Drunk drivers are found in 40% of traffic crashes. Existing drunk driving detection (DDD) systems do not provide accurate detection and pre-warning concurrently. The electrocardiogram (ECG) is a proven biosignal that accurately and simultaneously reflects a human's biological status. In this letter, a classifier for DDD based on the ECG is investigated in an attempt to reduce traffic accidents caused by drunk drivers. At this point, there appears to be no known research or literature on an ECG classifier for DDD. To identify drunk syndromes, the ECG signals from drunk drivers are studied and analyzed. As such, a precise ECG-based DDD (ECG-DDD) using a weighted kernel is developed. From the measurements, 10 key features of ECG signals were identified. To incorporate the important features, the feature vectors are weighted in the customization of the kernel functions. Four commonly adopted kernel functions are studied. Results reveal that weighted feature vectors improve the accuracy by 11% compared to the computation using the prime kernel. Evaluation shows that ECG-DDD improved the accuracy by 8% to 18% compared to prevailing methods.
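The feature-weighting idea can be sketched as an RBF kernel with per-dimension weights; the weights and features below are placeholders, since the letter's actual weighting scheme and kernel customizations are not reproduced here.

    import numpy as np

    def weighted_rbf(x, z, w, gamma=1.0):
        # RBF kernel with per-feature weights w applied inside the distance
        d2 = np.sum(w * (x - z) ** 2)
        return np.exp(-gamma * d2)

    w = np.full(10, 0.1)                      # hypothetical weights, 10 ECG features
    x, z = np.random.randn(10), np.random.randn(10)
    print(weighted_rbf(x, z, w))

Uniform weights recover the ordinary RBF kernel, so the weighting only helps to the extent that the learned weights emphasize the discriminative ECG features.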
NASA Astrophysics Data System (ADS)
Wu, Jianping; Geng, Xianguo
2017-12-01
The inverse scattering transform of the coupled modified Korteweg-de Vries equation is studied by the Riemann-Hilbert approach. In the direct scattering process, the spectral analysis of the Lax pair is performed, from which a Riemann-Hilbert problem is established for the equation. In the inverse scattering process, by solving Riemann-Hilbert problems corresponding to the reflectionless cases, three types of multi-soliton solutions are obtained. The multi-soliton classification is based on the zero structures of the Riemann-Hilbert problem. In addition, some figures are given to illustrate the soliton characteristics of the coupled modified Korteweg-de Vries equation.
ERIC Educational Resources Information Center
Ferrando, Pere J.
2004-01-01
This study used kernel-smoothing procedures to estimate the item characteristic functions (ICFs) of a set of continuous personality items. The nonparametric ICFs were compared with the ICFs estimated (a) by the linear model and (b) by Samejima's continuous-response model. The study was based on a conditioned approach and used an error-in-variables…
Insights from Classifying Visual Concepts with Multiple Kernel Learning
Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki
2012-01-01
Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted-sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970
A novel analysis method for near infrared spectroscopy based on Hilbert-Huang transform
NASA Astrophysics Data System (ADS)
Zhou, Zhenyu; Yang, Hongyu; Liu, Yun; Ruan, Zongcai; Luo, Qingming; Gong, Hui; Lu, Zuhong
2007-05-01
Near Infrared Imager (NIRI) has been widely used to assess brain functional activity non-invasively. We use a portable, multi-channel and continuous-wave NIR topography instrument to measure the concentration changes of each hemoglobin species and map cerebral cortex functional activation. By extracting some essential features from the BOLD signals, optical tomography is able to offer a new way of conducting neuropsychological studies. Fourier spectral analysis provides a common framework for examining the distribution of global energy in the frequency domain. However, this method assumes that the signal is stationary, which limits its application to non-stationary systems; the hemoglobin species concentration changes are of this non-stationary kind. In this work we develop a new signal processing method using the Hilbert-Huang transform to perform spectral analysis of functional NIRI signals. Compared with wavelet-based multi-resolution analysis (MRA), we demonstrated the extraction of the task-related signal for observation of activation in the prefrontal cortex (PFC) in a visual stimulation experiment. This method provides a new analysis tool for functional NIRI signals. Our experimental results show that the proposed approach provides a unique method for reconstructing the target signal without losing original information and enables us to understand the episodes of functional NIRI more precisely.
NASA Astrophysics Data System (ADS)
Gangeh, Mehrdad J.; Fung, Brandon; Tadayyon, Hadi; Tran, William T.; Czarnota, Gregory J.
2016-03-01
A non-invasive computer-aided-theragnosis (CAT) system was developed for the early assessment of responses to neoadjuvant chemotherapy in patients with locally advanced breast cancer. The CAT system was based on quantitative ultrasound spectroscopy methods comprising several modules including feature extraction, a metric to measure the dissimilarity between "pre-" and "mid-treatment" scans, and a supervised learning algorithm for the classification of patients to responders/non-responders. One major requirement for the successful design of a high-performance CAT system is to accurately measure the changes in parametric maps before treatment onset and during the course of treatment. To this end, a unified framework based on Hilbert-Schmidt independence criterion (HSIC) was used for the design of feature extraction from parametric maps and the dissimilarity measure between the "pre-" and "mid-treatment" scans. For the feature extraction, HSIC was used to design a supervised dictionary learning (SDL) method by maximizing the dependency between the scans taken from "pre-" and "mid-treatment" with "dummy labels" given to the scans. For the dissimilarity measure, an HSIC-based metric was employed to effectively measure the changes in parametric maps as an indication of treatment effectiveness. The HSIC-based feature extraction and dissimilarity measure used a kernel function to nonlinearly transform input vectors into a higher dimensional feature space and computed the population means in the new space, where enhanced group separability was ideally obtained. The results of the classification using the developed CAT system indicated an improvement of performance compared to a CAT system with basic features using histogram of intensity.
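The empirical HSIC between two kernel (Gram) matrices has a compact closed form, tr(KHLH)/(n-1)^2 with centering matrix H; a minimal Python sketch follows, with random stand-ins for the "pre-" and "mid-treatment" feature sets rather than the study's quantitative ultrasound data.

    import numpy as np

    def hsic(K, L):
        # Empirical (biased) HSIC between two kernel matrices K and L
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
        return np.trace(K @ H @ L @ H) / (n - 1) ** 2

    # stand-in Gram matrices for "pre-" and "mid-treatment" feature sets
    X, Y = np.random.randn(30, 5), np.random.randn(30, 5)
    rbf = lambda A: np.exp(-0.5 * np.sum((A[:, None] - A[None]) ** 2, -1))
    print(hsic(rbf(X), rbf(Y)))

Larger HSIC values indicate stronger statistical dependence between the two views, which is the quantity being maximized in the dictionary learning and exploited in the dissimilarity measure described above.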
Signal Processing for Determining Water Height in Steam Pipes with Dynamic Surface Conditions
NASA Technical Reports Server (NTRS)
Lih, Shyh-Shiuh; Lee, Hyeong Jae; Bar-Cohen, Yoseph
2015-01-01
An enhanced signal processing method based on the filtered Hilbert envelope of the auto-correlation function of the wave signal has been developed to monitor the height of condensed water through the steel wall of steam pipes with dynamic surface conditions. The developed signal processing algorithm can also be used to estimate the thickness of the pipe to determine the cut-off frequency for the low pass filter frequency of the Hilbert Envelope. Testing and analysis results by using the developed technique for dynamic surface conditions are presented. A multiple array of transducers setup and methodology are proposed for both the pulse-echo and pitch-catch signals to monitor the fluctuation of the water height due to disturbance, water flow, and other anomaly conditions.
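A minimal Python sketch of the signal chain (autocorrelation, Hilbert envelope, low-pass filtering) follows; the sampling rate, cut-off, and test signal are placeholders, and the thickness-based cut-off estimation is not reproduced.

    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt, correlate

    def envelope_of_autocorr(sig, fs, cutoff):
        # Low-pass filtered Hilbert envelope of the signal's autocorrelation;
        # the cut-off would be set from the estimated pipe-wall thickness.
        ac = correlate(sig, sig, mode="full")[len(sig) - 1:]  # autocorrelation
        env = np.abs(hilbert(ac))                             # Hilbert envelope
        b, a = butter(4, cutoff / (fs / 2))                   # low-pass filter
        return filtfilt(b, a, env)

    fs = 1e6
    sig = np.random.randn(4096)          # stand-in ultrasonic echo record
    env = envelope_of_autocorr(sig, fs, cutoff=50e3)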
NASA Astrophysics Data System (ADS)
Cai, Jianhua
2017-05-01
The time-frequency analysis method represents a signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows for imaging the response parameter content as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure, which are used to estimate the response function based on the HHT time-frequency spectrum, are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases calculated with the HHT time-frequency method are generally more stable and reliable than those determined from simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting estimates minimise the bias caused by the non-stationary characteristics of the MT data.
Research in Parallel Computing: 1987-1990
1994-08-05
emulation, we layered UNIX BSD 4.3 functionality above the kernel primitives, but packaged both as a monolithic unit running in privileged state. This can be taken further, so that only a "pure kernel" or "microkernel" runs in privileged mode, while the other components of the environment execute as one or more client processes.
Hybrid Techniques for Quantum Circuit Simulation
2014-02-01
Detailed theorems and proofs describing these results are included in our published manuscript [10]. We also describe how the discrete embedding of stabilizer geometry in Hilbert space complicates several natural geometric tasks. Stabilizer states span the Hilbert space in which they are embedded, and they are arranged in a fairly uniform pattern. These factors suggest that, if one seeks a…
Testing the Dimension of Hilbert Spaces
NASA Astrophysics Data System (ADS)
Brunner, Nicolas; Pironio, Stefano; Acin, Antonio; Gisin, Nicolas; Méthot, André Allan; Scarani, Valerio
2008-05-01
Given a set of correlations originating from measurements on a quantum state of unknown Hilbert space dimension, what is the minimal dimension d necessary to describe such correlations? We introduce the concept of dimension witness to put lower bounds on d. This work represents a first step in a broader research program aiming to characterize Hilbert space dimension in various contexts related to fundamental questions and quantum information applications.
Boisdenghien, Zino; Fias, Stijn; Van Alsenoy, Christian; De Proft, Frank; Geerlings, Paul
2014-07-28
Most of the work done on the linear response kernel χ(r,r') has focussed on its atom-atom condensed form χAB. Our previous work [Boisdenghien et al., J. Chem. Theory Comput., 2013, 9, 1007] was the first effort to truly focus on the non-condensed form of this function for closed (sub)shell atoms in a systematic fashion. In this work, we extend our method to the open shell case. To simplify plotting, we average our results into a symmetrized quantity χ(r,r'). This allows us to plot the linear response kernel for all elements up to and including argon and to investigate the periodicity throughout the first three rows of the periodic table and in the different representations of χ(r,r'). Within the context of Spin Polarized Conceptual Density Functional Theory, the first two-dimensional plots of spin polarized linear response functions are presented and commented on for some selected cases on the basis of the atomic ground state electronic configurations. Using the relation between the linear response kernel and the polarizability, we compare the values of the polarizability tensor calculated using our method to high-level values.
Tricoli, Ugo; Macdonald, Callum M; Durduran, Turgut; Da Silva, Anabela; Markel, Vadim A
2018-02-01
Diffuse correlation tomography (DCT) uses the electric-field temporal autocorrelation function to measure the mean-square displacement of light-scattering particles in a turbid medium over a given exposure time. The movement of blood particles is here estimated through a Brownian-motion-like model in contrast to ordered motion as in blood flow. The sensitivity kernel relating the measurable field correlation function to the mean-square displacement of the particles can be derived by applying a perturbative analysis to the correlation transport equation (CTE). We derive an analytical expression for the CTE sensitivity kernel in terms of the Green's function of the radiative transport equation, which describes the propagation of the intensity. We then evaluate the kernel numerically. The simulations demonstrate that, in the transport regime, the sensitivity kernel provides sharper spatial information about the medium as compared with the correlation diffusion approximation. Also, the use of the CTE allows one to explore some additional degrees of freedom in the data such as the collimation direction of sources and detectors. Our results can be used to improve the spatial resolution of DCT, in particular, with applications to blood flow imaging in regions where the Brownian motion is dominant.
Oversampling the Minority Class in the Feature Space.
Perez-Ortiz, Maria; Gutierrez, Pedro Antonio; Tino, Peter; Hervas-Martinez, Cesar
2016-09-01
The imbalanced nature of some real-world data is one of the current challenges for machine learning researchers. One common approach oversamples the minority class through convex combination of its patterns. We explore the general idea of synthetic oversampling in the feature space induced by a kernel function (as opposed to input space). If the kernel function matches the underlying problem, the classes will be linearly separable and synthetically generated patterns will lie on the minority class region. Since the feature space is not directly accessible, we use the empirical feature space (EFS) (a Euclidean space isomorphic to the feature space) for oversampling purposes. The proposed method is framed in the context of support vector machines, where the imbalanced data sets can pose a serious hindrance. The idea is investigated in three scenarios: 1) oversampling in the full and reduced-rank EFSs; 2) a kernel learning technique maximizing the data class separation to study the influence of the feature space structure (implicitly defined by the kernel function); and 3) a unified framework for preferential oversampling that spans some of the previous approaches in the literature. We support our investigation with extensive experiments over 50 imbalanced data sets.
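A minimal sketch of oversampling in the (reduced-rank) empirical feature space: embed all patterns via the eigendecomposition of the Gram matrix, then form convex combinations of random minority pairs. The function name, rank threshold, and random pairing below are illustrative assumptions, not the authors' exact procedure.

    import numpy as np

    def efs_oversample(K, minority_idx, n_new, rng=np.random.default_rng(0)):
        # K: full kernel (Gram) matrix; minority_idx: indices of minority patterns
        lam, U = np.linalg.eigh(K)
        keep = lam > 1e-10                       # reduced-rank EFS
        phi = U[:, keep] * np.sqrt(lam[keep])    # EFS embedding of all patterns
        out = []
        for _ in range(n_new):
            i, j = rng.choice(minority_idx, 2, replace=False)
            t = rng.random()                     # convex combination weight
            out.append(t * phi[i] + (1 - t) * phi[j])
        return phi, np.array(out)

By construction, the inner products of the embedded patterns reproduce K, so the synthetic points live in a Euclidean space isomorphic to the kernel-induced feature space, which is the EFS idea described above.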
H-SLAM: Rao-Blackwellized Particle Filter SLAM Using Hilbert Maps.
Vallicrosa, Guillem; Ridao, Pere
2018-05-01
Occupancy Grid maps provide a probabilistic representation of space which is important for a variety of robotic applications like path planning and autonomous manipulation. In this paper, a SLAM (Simultaneous Localization and Mapping) framework capable of obtaining this representation online is presented. The H-SLAM (Hilbert Maps SLAM) is based on the Hilbert Map representation and uses a Particle Filter to represent the robot state. Hilbert Maps offer a continuous probabilistic representation with a small memory footprint. We present a series of experimental results carried out both in simulation and with real AUVs (Autonomous Underwater Vehicles). These results demonstrate that our approach is able to represent the environment more consistently while remaining capable of running online.
NASA Technical Reports Server (NTRS)
Lan, C. E.; Lamar, J. E.
1977-01-01
A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.
Improving the accuracy of electronic moisture meters for runner-type peanuts
USDA-ARS?s Scientific Manuscript database
Runner-type peanut kernel moisture content (MC) is measured periodically during curing and post harvest processing with electronic moisture meters for marketing and quality control. MC is predicted for 250 g samples of kernels with a mathematical function from measurements of various physical prope...
On the solution of integral equations with a generalized cauchy kernel
NASA Technical Reports Server (NTRS)
Kaya, A. C.; Erdogan, F.
1986-01-01
In this paper a certain class of singular integral equations that may arise from the mixed boundary value problems in nonhomogeneous materials is considered. The distinguishing feature of these equations is that in addition to the Cauchy singularity, the kernels contain terms that are singular only at the end points. In the form of the singular integral equations adopted, the density function is a potential or a displacement and consequently the kernel has strong singularities of the form (t-x)^-2, x^(n-2)(t+x)^-n (n >= 2; 0 < x,t < b). The complex function theory is used to determine the fundamental function of the problem for the general case, and a simple numerical technique is described to solve the integral equation. Two examples from the theory of elasticity are then considered to show the application of the technique.
NASA Astrophysics Data System (ADS)
Nugroho, N. F. T. A.; Slamet, I.
2018-05-01
Poverty is a socio-economic condition of a person or group of people who cannot fulfil their basic needs to maintain and develop a dignified life. This problem has still not been solved completely in Central Java Province. Currently, the percentage of poverty in Central Java is 13.32%, which is higher than the national poverty rate of 11.13%. In this research, data on the percentage of poor people in Central Java Province have been analyzed through geographically weighted regression (GWR). The aim of this research is therefore to model the poverty percentage data in Central Java Province using GWR with the bisquare and tricube kernel weighting functions. As the result, we obtained GWR models with bisquare and tricube kernel weighting functions for the poverty percentage data in Central Java Province. From the GWR models, there are three categories of region, each influenced by a different set of significant factors.
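For reference, the bisquare and tricube weighting functions, and a local weighted least-squares fit at one focal point, can be sketched as follows; the code is an illustrative outline under standard GWR definitions, not the study's implementation.

    import numpy as np

    def gwr_weights(d, h, kind="bisquare"):
        # Kernel weights as a function of distance d and bandwidth h;
        # both kernels give zero weight beyond the bandwidth.
        u = np.clip(d / h, 0, 1)
        if kind == "bisquare":
            return (1 - u**2) ** 2 * (d < h)
        if kind == "tricube":
            return (1 - u**3) ** 3 * (d < h)
        raise ValueError(kind)

    def gwr_fit_at(x0, X, y, coords, h):
        # Weighted least-squares fit at focal location x0 (one GWR point)
        d = np.linalg.norm(coords - x0, axis=1)
        W = np.diag(gwr_weights(d, h))
        Xd = np.column_stack([np.ones(len(y)), X])
        return np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)  # local coefficients

Repeating gwr_fit_at over all observation locations yields the spatially varying coefficient surfaces from which locally significant factors are identified.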
Hilbert's axiomatic method and Carnap's general axiomatics.
Stöltzner, Michael
2015-10-01
This paper compares the axiomatic method of David Hilbert and his school with Rudolf Carnap's general axiomatics that was developed in the late 1920s, and that influenced his understanding of logic of science throughout the 1930s, when his logical pluralism developed. The distinct perspectives become visible most clearly in how Richard Baldus, along the lines of Hilbert, and Carnap and Friedrich Bachmann analyzed the axiom system of Hilbert's Foundations of Geometry—the paradigmatic example for the axiomatization of science. Whereas Hilbert's axiomatic method started from a local analysis of individual axiom systems in which the foundations of mathematics as a whole entered only when establishing the system's consistency, Carnap and his Vienna Circle colleague Hans Hahn instead advocated a global analysis of axiom systems in general. A primary goal was to evade, or formalize ex post, mathematicians' 'material' talk about axiom systems for such talk was held to be error-prone and susceptible to metaphysics. Copyright © 2015 Elsevier Ltd. All rights reserved.
The place of probability in Hilbert's axiomatization of physics, ca. 1900-1928
NASA Astrophysics Data System (ADS)
Verburgt, Lukas M.
2016-02-01
Although it has become a common place to refer to the 'sixth problem' of Hilbert's (1900) Paris lecture as the starting point for modern axiomatized probability theory, his own views on probability have received comparatively little explicit attention. The central aim of this paper is to provide a detailed account of this topic in light of the central observation that the development of Hilbert's project of the axiomatization of physics went hand-in-hand with a redefinition of the status of probability theory and the meaning of probability. Where Hilbert first regarded the theory as a mathematizable physical discipline and later approached it as a 'vague' mathematical application in physics, he eventually understood probability, first, as a feature of human thought and, then, as an implicitly defined concept without a fixed physical interpretation. It thus becomes possible to suggest that Hilbert came to question, from the early 1920s on, the very possibility of achieving the goal of the axiomatization of probability as described in the 'sixth problem' of 1900.
NASA Astrophysics Data System (ADS)
Kumar, Keshav; Shukla, Sumitra; Singh, Sachin Kumar
2018-04-01
Periodic impulses arise due to localised defects in rolling element bearings. At the early stage of a defect, the weak impulses are immersed in strong machinery vibration. This paper proposes a combined approach based upon the Hilbert envelope and a zero frequency resonator for the detection of the weak periodic impulses. In the first step, the strength of the impulses is increased by taking the normalised Hilbert envelope of the signal. It also helps in better localization of these impulses on the time axis. In the second step, the Hilbert envelope of the signal is passed through the zero frequency resonator for the exact localization of the periodic impulses. The spectrum of the resonator output gives a peak at the fault frequency. A simulated noisy signal with periodic impulses is used to explain the working of the algorithm. The proposed technique is also verified with experimental data. A comparison of the proposed method with a Hilbert-Huang transform (HHT) based method is presented to establish the effectiveness of the proposed method.
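The two-step scheme can be illustrated compactly. Below is a hedged Python sketch using scipy.signal.hilbert; the sampling rate, the toy signal, and the use of a plain FFT of the envelope in place of the paper's zero frequency resonator stage are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

fs = 12000                         # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# toy signal: weak impulses repeating at ~100 Hz buried in noise
x = np.random.randn(t.size)
x[::fs // 100] += 5.0

# Step 1: normalised Hilbert envelope strengthens and localises the impulses
env = np.abs(hilbert(x))
env = (env - env.mean()) / env.std()

# Step 2 (stand-in): the spectrum of the envelope peaks at the fault
# frequency; the paper instead passes the envelope through a resonator.
spec = np.abs(np.fft.rfft(env))
freqs = np.fft.rfftfreq(env.size, 1 / fs)
print("peak near", freqs[np.argmax(spec[1:]) + 1], "Hz")
```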
Takashima, Ryoichi; Takiguchi, Tetsuya; Ariki, Yasuo
2013-02-01
This paper presents a method for discriminating the location of a sound source (talker) using only a single microphone. In a previous work, the single-channel approach for discriminating the location of the sound source was discussed, where the acoustic transfer function from a user's position is estimated by using a hidden Markov model of clean speech in the cepstral domain. In this paper, each cepstral dimension of the acoustic transfer function is newly weighted, in order to obtain the cepstral dimensions having information that is useful for classifying the user's position. This paper then proposes a feature-weighting method for the cepstral parameters using multiple kernel learning, defining a base kernel for each cepstral dimension of the acoustic transfer function. The user's position is trained and classified by a support vector machine. The effectiveness of this method has been confirmed by sound source (talker) localization experiments performed in different room environments.
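A minimal sketch of the kernel construction implied here: one RBF base kernel per cepstral dimension, combined as a weighted sum. The weights below are fixed placeholders, whereas the paper learns them with multiple kernel learning; all names are illustrative.

```python
import numpy as np

def base_kernel(X, Y, dim, gamma=1.0):
    """RBF base kernel computed on a single cepstral dimension."""
    d = X[:, dim:dim + 1] - Y[:, dim:dim + 1].T
    return np.exp(-gamma * d ** 2)

def combined_kernel(X, Y, weights, gamma=1.0):
    """Weighted sum of per-dimension base kernels (MKL-style)."""
    K = np.zeros((X.shape[0], Y.shape[0]))
    for dim, w in enumerate(weights):
        K += w * base_kernel(X, Y, dim, gamma)
    return K

# usage with an SVM that accepts precomputed kernels (e.g. scikit-learn):
# svm = SVC(kernel="precomputed"); svm.fit(combined_kernel(Xtr, Xtr, w), ytr)
```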
Upper-Division Student Difficulties with the Dirac Delta Function
ERIC Educational Resources Information Center
Wilcox, Bethany R.; Pollock, Steven J.
2015-01-01
The Dirac delta function is a standard mathematical tool that appears repeatedly in the undergraduate physics curriculum in multiple topical areas including electrostatics and quantum mechanics. While Dirac delta functions are often introduced in order to simplify a problem mathematically, students still struggle to manipulate and interpret them.…
Amin, Furheen; Masoodi, F A; Baba, Waqas N; Khan, Asma Ashraf; Ganie, Bashir Ahmad
2017-11-01
Packing tissue between and around the kernel halves just turning brown (PTB) is a phenological indicator of kernel ripening at harvest in walnuts. The effect of three ripening stages (Pre-PTB, PTB and Post-PTB) on kernel quality characteristics, mineral composition, lipid characterization, sensory analysis, antioxidant and antibacterial activity was investigated in fresh kernels of the indigenous numbered walnut selection of Kashmir valley "SKAU-02". Proximate composition, physical properties and sensory analysis of walnut kernels showed better results for Pre-PTB and PTB, while higher mineral content was seen for kernels at the Post-PTB stage in comparison to the other stages of ripening. Kernels showed significantly higher levels of omega-3 PUFA (C18:3 n3) and a low n6/n3 ratio when harvested at the Pre-PTB and PTB stages. The highest phenolic content and antioxidant activity was observed at the first stage of ripening and a steady decrease was observed at later stages. TBARS values increased as ripening advanced but did not show any significant difference in malonaldehyde formation during early ripening stages, whereas a marked increase was seen in walnut kernels at the Post-PTB stage. Walnut extracts inhibited growth of Gram-positive bacteria (B. cereus, B. subtilis, and S. aureus) with respective MICs of 1, 1 and 5 mg/mL and Gram-negative bacteria (E. coli, P. and K. pneumoniae) with an MIC of 100 mg/mL. The zone of inhibition obtained against all the bacterial strains from walnut kernel extracts increased with advancing ripening stage. It is concluded that the Pre-PTB harvest stage, with higher antioxidant activities, a better fatty acid profile and consumer acceptability, could be the preferred harvesting stage for obtaining functionally superior walnut kernels.
Multidimensional NMR inversion without Kronecker products: Multilinear inversion
NASA Astrophysics Data System (ADS)
Medellín, David; Ravi, Vivek R.; Torres-Verdín, Carlos
2016-08-01
Multidimensional NMR inversion using Kronecker products poses several challenges. First, kernel compression is only possible when the kernel matrices are separable, and in recent years, there has been an increasing interest in NMR sequences with non-separable kernels. Second, in three or more dimensions, the singular value decomposition is not unique; therefore kernel compression is not well-defined for higher dimensions. Without kernel compression, the Kronecker product yields matrices that require large amounts of memory, making the inversion intractable for personal computers. Finally, incorporating arbitrary regularization terms is not possible using the Lawson-Hanson (LH) or the Butler-Reeds-Dawson (BRD) algorithms. We develop a minimization-based inversion method that circumvents the above problems by using multilinear forms to perform multidimensional NMR inversion without using kernel compression or Kronecker products. The new method is memory efficient, requiring less than 0.1% of the memory required by the LH or BRD methods. It can also be extended to arbitrary dimensions and adapted to include non-separable kernels, linear constraints, and arbitrary regularization terms. Additionally, it is easy to implement because only a cost function and its first derivative are required to perform the inversion.
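The core of the approach can be sketched for a separable 2D case, where the forward model K1 F K2^T is evaluated as a multilinear form without ever building the Kronecker product. This is a hedged illustration, not the authors' code; sizes, data, and the Tikhonov regularization term are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# forward model: data D (m1 x m2) = K1 @ F @ K2.T, amplitudes F >= 0
m1, m2, n1, n2 = 50, 40, 30, 25          # assumed problem sizes
K1 = np.random.rand(m1, n1)              # kernel matrix, dimension 1
K2 = np.random.rand(m2, n2)              # kernel matrix, dimension 2
D = np.random.rand(m1, m2)               # measured data (placeholder)
lam = 1e-2                               # regularization weight

def cost_and_grad(f):
    F = f.reshape(n1, n2)
    R = K1 @ F @ K2.T - D                # residual; no Kronecker product formed
    c = 0.5 * np.sum(R ** 2) + 0.5 * lam * np.sum(F ** 2)
    g = K1.T @ R @ K2 + lam * F          # analytic gradient w.r.t. F
    return c, g.ravel()

res = minimize(cost_and_grad, np.zeros(n1 * n2), jac=True,
               method="L-BFGS-B", bounds=[(0, None)] * (n1 * n2))
F_hat = res.x.reshape(n1, n2)            # non-negative amplitude map
```

Only the cost and its first derivative are supplied, mirroring the paper's statement; extra dimensions or non-separable kernels would replace the matrix products with general multilinear contractions.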
On supervised graph Laplacian embedding CA model & kernel construction and its application
NASA Astrophysics Data System (ADS)
Zeng, Junwei; Qian, Yongsheng; Wang, Min; Yang, Yongzhong
2017-01-01
There are many methods to construct a kernel from given data attribute information. The Gaussian radial basis function (RBF) kernel is one of the most popular ways to construct a kernel. The key observation is that in real-world data, besides the data attribute information, data label information also exists, which indicates the data class. In order to make use of both data attribute information and data label information, in this work we propose a supervised kernel construction method. Supervised information from training data is integrated into the standard kernel construction process to improve the discriminative property of the resulting kernel; a simple illustration of this idea is sketched below. As a further application, a supervised Laplacian embedding cellular automaton model is developed for two-lane heterogeneous traffic flow with a safe-distance rule and large-scale trucks. Based on the properties of traffic flow in China, we re-calibrate the cell length, velocity, random slowing mechanism and lane-change conditions and use simulation tests to study the relationships among the speed, density and flux. The numerical results show that large-scale trucks have great effects on the traffic flow, which depend on the proportion of large-scale trucks, the random slowing rate and the frequency of lane changes.
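One simple way to realize such a supervised construction, offered here as an assumption rather than the paper's exact recipe, is to blend the attribute-based RBF kernel with an "ideal" kernel built from the training labels; a convex combination of positive semidefinite kernels remains positive semidefinite.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Standard Gaussian RBF kernel on the data attributes."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def supervised_kernel(X, y, beta=0.3, gamma=1.0):
    """Blend attribute kernel with an ideal label kernel (same class -> 1)."""
    K_attr = rbf_kernel(X, gamma)
    K_label = (y[:, None] == y[None, :]).astype(float)
    return (1 - beta) * K_attr + beta * K_label   # still symmetric PSD
```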
On the Hilbert-Huang Transform Data Processing System Development
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Flatley, Thomas P.; Huang, Norden E.; Cornwell, Evette; Smith, Darell
2003-01-01
One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). The Fourier view of nonlinear mechanics that had existed for a long time, and the associated FFT (a fairly recent development), carry strong a-priori assumptions about the source data, such as linearity and stationarity. Natural phenomena measurements are essentially nonlinear and nonstationary. A very recent development at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), known as the Hilbert-Huang Transform (HHT), proposes a novel approach to the solution for the nonlinear class of spectrum analysis problems. Using the Empirical Mode Decomposition (EMD) followed by the Hilbert Transform of the empirical decomposition data (HT), the HHT allows spectrum analysis of nonlinear and nonstationary data by using an engineering a-posteriori data processing approach based on the EMD algorithm. This results in a non-constrained decomposition of a source real-valued data vector into a finite set of Intrinsic Mode Functions (IMF) that can be further analyzed for spectrum interpretation by the classical Hilbert Transform. This paper describes phase one of the development of a new engineering tool, the HHT Data Processing System (HHTDPS). The HHTDPS allows applying the HHT to a data vector in a fashion similar to the heritage FFT. It is a generic, low cost, high performance personal computer (PC) based system that implements the HHT computational algorithms in a user-friendly, file-driven environment. This paper also presents a quantitative analysis for a complex waveform data sample, a summary of technology commercialization efforts and the lessons learned from this new technology development.
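The HHT pipeline described here (EMD, then the Hilbert transform of each IMF) can be sketched with off-the-shelf tools. The third-party PyEMD package is assumed to be available; the signal and rates are toy values, not the paper's data.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD            # third-party package, assumed installed

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t ** 2)

imfs = EMD().emd(x)              # Empirical Mode Decomposition -> IMFs
for k, imf in enumerate(imfs):
    analytic = hilbert(imf)      # classical Hilbert transform of each IMF
    amp = np.abs(analytic)       # instantaneous amplitude
    # instantaneous frequency from the unwrapped phase derivative
    inst_f = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
    print(f"IMF {k}: mean instantaneous frequency {inst_f.mean():.1f} Hz")
```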
Grating-based phase contrast tomosynthesis imaging: Proof-of-concept experimental studies
Li, Ke; Ge, Yongshuai; Garrett, John; Bevins, Nicholas; Zambelli, Joseph; Chen, Guang-Hong
2014-01-01
Purpose: This paper concerns the feasibility of x-ray differential phase contrast (DPC) tomosynthesis imaging using a grating-based DPC benchtop experimental system, which is equipped with a commercial digital flat-panel detector and a medical-grade rotating-anode x-ray tube. An extensive system characterization was performed to quantify its imaging performance. Methods: The major components of the benchtop system include a diagnostic x-ray tube with a 1.0 mm nominal focal spot size, a flat-panel detector with 96 μm pixel pitch, a sample stage that rotates within a limited angular span of ±30°, and a Talbot-Lau interferometer with three x-ray gratings. A total of 21 projection views acquired with 3° increments were used to reconstruct three sets of tomosynthetic image volumes, including the conventional absorption contrast tomosynthesis image volume (AC-tomo) reconstructed using the filtered-backprojection (FBP) algorithm with the ramp kernel, the phase contrast tomosynthesis image volume (PC-tomo) reconstructed using FBP with a Hilbert kernel, and the differential phase contrast tomosynthesis image volume (DPC-tomo) reconstructed using the shift-and-add algorithm. Three in-house physical phantoms containing tissue-surrogate materials were used to characterize the signal linearity, the signal difference-to-noise ratio (SDNR), the three-dimensional noise power spectrum (3D NPS), and the through-plane artifact spread function (ASF). Results: While DPC-tomo highlights edges and interfaces in the image object, PC-tomo removes the differential nature of the DPC projection data and its pixel values are linearly related to the decrement of the real part of the x-ray refractive index. The SDNR values of polyoxymethylene in water and polystyrene in oil are 1.5 and 1.0, respectively, in AC-tomo, and improved to 3.0 and 2.0, respectively, in PC-tomo. PC-tomo and AC-tomo demonstrate equivalent ASF, but their noise characteristics quantified by the 3D NPS were found to be different due to the difference in the tomosynthesis image reconstruction algorithms. Conclusions: It is feasible to simultaneously generate x-ray differential phase contrast, phase contrast, and absorption contrast tomosynthesis images using a grating-based data acquisition setup. The method shows promise in improving the visibility of several low-density materials and therefore merits further investigation. PMID:24387511
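The two FBP filter kernels contrasted above differ only in the frequency-domain weighting applied to each projection row. The following one-dimensional sketch is illustrative only; the scale and sign conventions for the Hilbert kernel vary across implementations and are an assumption here.

```python
import numpy as np

def filter_projection(p, kind="ramp"):
    """Filter one detector row in the frequency domain (scaling omitted)."""
    n = p.size
    f = np.fft.fftfreq(n)                  # frequency in cycles per sample
    if kind == "ramp":                     # absorption contrast (AC-tomo)
        H = np.abs(f)
    elif kind == "hilbert":                # differential phase -> phase (PC-tomo)
        H = -1j * np.sign(f) / (2 * np.pi) # convention-dependent sign/scale
    return np.real(np.fft.ifft(np.fft.fft(p) * H))

row = np.random.randn(512)                 # one projection row (placeholder)
ac = filter_projection(row, "ramp")
pc = filter_projection(row, "hilbert")
```

The Hilbert kernel integrates the differential phase data during filtering, which is why PC-tomo pixel values become linearly related to the refractive index decrement.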
Characteristics and composition of watermelon, pumpkin, and paprika seed oils and flours.
El-Adawy, T A; Taha, K M
2001-03-01
The nutritional quality and functional properties of paprika seed flour and seed kernel flours of pumpkin and watermelon were studied, as were the characteristics and structure of their seed oils. Paprika seed and seed kernels of pumpkin and watermelon were rich in oil and protein. All flour samples contained considerable amounts of P, K, Mg, Mn, and Ca. Paprika seed flour was superior to watermelon and pumpkin seed kernel flours in content of lysine and total essential amino acids. Oil samples had high amounts of unsaturated fatty acids with linoleic and oleic acids as the major acids. All oil samples fractionated into seven classes including triglycerides as a major lipid class. Data obtained for the oils' characteristics compare well with those of other edible oils. Antinutritional compounds such as stachyose, raffinose, verbascose, trypsin inhibitor, phytic acid, and tannins were detected in all flours. Pumpkin seed kernel flour had higher values of chemical score, essential amino acid index, and in vitro protein digestibility than the other flours examined. The first limiting amino acid was lysine for both watermelon and pumpkin seed kernel flours, but it was leucine in paprika seed flour. Protein solubility index, water and fat absorption capacities, emulsification properties, and foam stability were excellent in watermelon and pumpkin seed kernel flours and fairly good in paprika seed flour. Flour samples could be potentially added to food systems such as bakery products and ground meat formulations not only as a nutrient supplement but also as a functional agent in these formulations.
Adaptive Wiener image restoration kernel
Yuan, Ding [Henderson, NV]
2007-06-05
A method and device for the restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system's Optical Transfer Function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image with a Wiener restoration kernel.
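A minimal frequency-domain sketch of this restoration step, under the usual assumptions (known point spread function registered at the origin, scalar noise-to-signal power ratio as a stand-in for the full noise and image spectra):

```python
import numpy as np

def wiener_restore(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener restoration; nsr = noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=blurred.shape)      # optical transfer function
    G = np.fft.fft2(blurred)                   # Fourier transform of the image
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener restoration kernel
    return np.real(np.fft.ifft2(W * G))        # restored spatial image
```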
Discrete element method as an approach to model the wheat milling process
USDA-ARS?s Scientific Manuscript database
It is a well-known phenomenon that break-release, particle size, and size distribution of wheat milling are functions of machine operational parameters and grain properties. Due to the non-uniformity of characteristics and properties of wheat kernels, the kernel physical and mechanical properties af...
A Comparison of Methods for Nonparametric Estimation of Item Characteristic Curves for Binary Items
ERIC Educational Resources Information Center
Lee, Young-Sun
2007-01-01
This study compares the performance of three nonparametric item characteristic curve (ICC) estimation procedures: isotonic regression, smoothed isotonic regression, and kernel smoothing. Smoothed isotonic regression, employed along with an appropriate kernel function, provides better estimates and also satisfies the assumption of strict…
Ali, Ferhana Y; Hall, Matthew G; Desvergne, Béatrice; Warner, Timothy D; Mitchell, Jane A
2009-11-01
Peroxisome proliferator-activated receptor beta/delta (PPARbeta/delta) is a nuclear receptor found in platelets. PPARbeta/delta agonists acutely inhibit platelet function within a few minutes of addition. As platelets are anucleate, the effects of PPARbeta/delta agonists on platelets must be nongenomic. Currently, the particular role of PPARbeta/delta receptors and their intracellular signaling pathways in platelets is not known. We have used mice lacking PPARbeta/delta (PPARbeta/delta(-/-)) to show that the effects of the PPARbeta/delta agonist GW501516 on platelet adhesion and cAMP levels are mediated specifically by PPARbeta/delta; however, GW501516 had no PPARbeta/delta-specific effect on platelet aggregation. Studies in human platelets showed that PKCalpha, which can mediate platelet activation, was bound and repressed by PPARbeta/delta after platelets were treated with GW501516. These data provide evidence of a novel mechanism by which PPAR receptors influence platelet activity and thereby thrombotic risk.
Prioritizing individual genetic variants after kernel machine testing using variable selection.
He, Qianchuan; Cai, Tianxi; Liu, Yang; Zhao, Ni; Harmon, Quaker E; Almli, Lynn M; Binder, Elisabeth B; Engel, Stephanie M; Ressler, Kerry J; Conneely, Karen N; Lin, Xihong; Wu, Michael C
2016-12-01
Kernel machine learning methods, such as the SNP-set kernel association test (SKAT), have been widely used to test associations between traits and genetic polymorphisms. In contrast to traditional single-SNP analysis methods, these methods are designed to examine the joint effect of a set of related SNPs (such as a group of SNPs within a gene or a pathway) and are able to identify sets of SNPs that are associated with the trait of interest. However, as with many multi-SNP testing approaches, kernel machine testing can draw conclusions only at the SNP-set level and does not directly indicate which SNP(s) within an identified set actually drive the association. A recently proposed procedure, KerNel Iterative Feature Extraction (KNIFE), provides a general framework for incorporating variable selection into kernel machine methods. In this article, we focus on quantitative traits and relatively common SNPs, adapt the KNIFE procedure to genetic association studies, and propose an approach to identify driver SNPs after the application of SKAT to gene set analysis. Our approach accommodates several kernels that are widely used in SNP analysis, such as the linear kernel and the Identity by State (IBS) kernel. The proposed approach provides practically useful utilities to prioritize SNPs and fills the gap between SNP set analysis and biological functional studies. Both simulation studies and a real data application are used to demonstrate the proposed approach. © 2016 WILEY PERIODICALS, INC.
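The two kernels named can be computed directly from a genotype matrix G coded as 0/1/2 minor-allele counts. A hedged NumPy sketch (names and coding convention are assumptions):

```python
import numpy as np

def linear_kernel(G):
    """Linear kernel on an n x m genotype matrix (0/1/2 coding)."""
    return G @ G.T

def ibs_kernel(G):
    """Identity-by-State kernel: average allele sharing between subjects.
    For SNPs coded 0/1/2, shared alleles = 2 - |g_i - g_j| per SNP."""
    n, m = G.shape
    K = np.zeros((n, n))
    for i in range(n):
        K[i, :] = np.sum(2.0 - np.abs(G[i] - G), axis=1) / (2.0 * m)
    return K
```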
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loubenets, Elena R.
We prove the existence for each Hilbert space of the two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures and all the quantum product expectations—via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations, but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dim H ≥ 3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper point also to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].
NASA Technical Reports Server (NTRS)
Ma, Q.; Tipping, R. H.; Lavrentieva, N. N.
2012-01-01
By adopting a concept from signal processing, instead of starting from the correlation functions, which are even, one considers the causal correlation functions, whose Fourier transforms become complex. Their real and imaginary parts, multiplied by 2, are the Fourier transforms of the original correlations and the subsequent Hilbert transforms, respectively. Thus, by taking this step one can complete the two previously needed transforms. The greatest advantage, however, is that one obviates the Cauchy principal-value integrations required in the Hilbert transforms. Meanwhile, because the causal correlations are well bounded within the time domain and band limited in the frequency domain, one can replace their Fourier transforms by discrete Fourier transforms, and the latter can be carried out with the FFT algorithm. This replacement is justified by sampling theory because the Fourier transforms can be derived from the discrete Fourier transforms at the Nyquist rate without any distortions. We apply this method in calculating pressure-induced shifts of H2O lines and obtain more reliable values. By comparing the calculated shifts with those in HITRAN 2008 and by screening both of them with the pair identity and the smooth variation rules, one can conclude that many of the shift values in HITRAN are not correct.
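The stated identity is easy to verify numerically. In the sketch below, an even correlation is made causal (half weight at t = 0); a single FFT then delivers both transforms at once. The normalization and sign conventions are assumptions that depend on the FFT definition used.

```python
import numpy as np

n, dt = 4096, 0.01
t = np.arange(n) * dt
C = np.exp(-t ** 2)              # one-sided samples of an even correlation C(|t|)

causal = C.copy()                # causal correlation: zero for t < 0 ...
causal[0] *= 0.5                 # ... and half weight at t = 0

F = np.fft.rfft(causal) * dt     # one complex FFT replaces two transforms
spectrum = 2.0 * F.real          # ~ Fourier transform of the even correlation
hilbert_part = 2.0 * F.imag      # ~ its Hilbert transform (sign convention-
                                 #   dependent); no principal-value integral
```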
Elliptic complexes over C∗-algebras of compact operators
NASA Astrophysics Data System (ADS)
Krýsl, Svatopluk
2016-03-01
For a C∗-algebra A of compact operators and a compact manifold M, we prove that the Hodge theory holds for A-elliptic complexes of pseudodifferential operators acting on smooth sections of finitely generated projective A-Hilbert bundles over M. For these C∗-algebras and manifolds, we get a topological isomorphism between the cohomology groups of an A-elliptic complex and the space of harmonic elements of the complex. Consequently, the cohomology groups appear to be finitely generated projective C∗-Hilbert modules and, especially, Banach spaces. We also prove that, in the category of Hilbert A-modules and continuous adjointable Hilbert A-module homomorphisms, the property of a complex of being 'self-adjoint parametrix possessing' characterizes the complexes of Hodge type.
Learning a peptide-protein binding affinity predictor with kernel ridge regression
2013-01-01
Background The cellular function of a vast majority of proteins is performed through physical interactions with other biomolecules, which, most of the time, are other proteins. Peptides represent templates of choice for mimicking a secondary structure in order to modulate protein-protein interactions. They are thus an interesting class of therapeutics since they also display strong activity, high selectivity, low toxicity and few drug-drug interactions. Furthermore, predicting peptides that would bind to specific MHC alleles would be of tremendous benefit to improve vaccine-based therapy and possibly generate antibodies with greater affinity. Modern computational methods have the potential to accelerate and lower the cost of drug and vaccine discovery by selecting potential compounds for testing in silico prior to biological validation. Results We propose a specialized string kernel for small bio-molecules, peptides and pseudo-sequences of binding interfaces. The kernel incorporates physico-chemical properties of amino acids and elegantly generalizes eight kernels, including the Oligo, the Weighted Degree, the Blended Spectrum, and the Radial Basis Function. We provide a low-complexity dynamic programming algorithm for the exact computation of the kernel and a linear-time algorithm for its approximation. Combined with kernel ridge regression and SupCK, a novel binding pocket kernel, the proposed kernel yields biologically relevant and good prediction accuracy on the PepX database. For the first time, a machine learning predictor is capable of predicting the binding affinity of any peptide to any protein with reasonable accuracy. The method was also applied to both single-target and pan-specific Major Histocompatibility Complex class II benchmark datasets and three Quantitative Structure Affinity Model benchmark datasets. Conclusion On all benchmarks, our method significantly (p-value ≤ 0.057) outperforms the current state-of-the-art methods at predicting peptide-protein binding affinities. The proposed approach is flexible and can be applied to predict any quantitative biological activity. Moreover, generating reliable peptide-protein binding affinities will also improve systems biology modelling of interaction pathways. Lastly, the method should be of value to a large segment of the research community with the potential to accelerate the discovery of peptide-based drugs and facilitate vaccine development. The proposed kernel is freely available at http://graal.ift.ulaval.ca/downloads/gs-kernel/. PMID:23497081
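The kernel ridge regression step at the heart of such a predictor is standard. In this sketch a generic RBF kernel on fixed-length feature vectors stands in for the specialized GS string kernel of the paper; all data and names are placeholders.

```python
import numpy as np

def rbf(X, Y, gamma=0.1):
    """Generic RBF kernel; a stand-in for the paper's string kernel."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def krr_fit(K, y, lam=1.0):
    """Kernel ridge regression: solve (K + lam*I) alpha = y."""
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

def krr_predict(K_test_train, alpha):
    return K_test_train @ alpha

# Xtr: encoded peptide features, ytr: binding affinities (placeholders)
Xtr, ytr = np.random.randn(100, 20), np.random.randn(100)
alpha = krr_fit(rbf(Xtr, Xtr), ytr)
yhat = krr_predict(rbf(np.random.randn(5, 20), Xtr), alpha)
```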
Derivation of aerodynamic kernel functions
NASA Technical Reports Server (NTRS)
Dowell, E. H.; Ventres, C. S.
1973-01-01
The method of Fourier transforms is used to determine the kernel function which relates the pressure on a lifting surface to the prescribed downwash within the framework of Dowell's (1971) shear flow model. This model is intended to improve upon the potential flow aerodynamic model by allowing for the aerodynamic boundary layer effects neglected in the potential flow model. For simplicity, incompressible, steady flow is considered. The proposed method is illustrated by deriving known results from potential flow theory.
Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.
Echinaka, Yuki; Ozeki, Yukiyasu
2016-10-01
The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Applying this method, the bootstrap method is introduced and a numerical discrimination for the transition type is proposed.
NASA Technical Reports Server (NTRS)
Bykhovskiy, E. B.; Smirnov, N. V.
1983-01-01
The Hilbert space L2(omega) of vector functions is studied. A breakdown of L2(omega) into orthogonal subspaces is discussed and the properties of the operators for projection onto these subspaces are investigated from the standpoint of preserving the differential properties of the vectors being projected. Finally, the properties of the operators are examined.
Functional Renormalization Group Flows on Friedmann-Lemaître-Robertson-Walker backgrounds
NASA Astrophysics Data System (ADS)
Platania, Alessia; Saueressig, Frank
2018-06-01
We revisit the construction of the gravitational functional renormalization group equation tailored to the Arnowitt-Deser-Misner formulation emphasizing its connection to the covariant formulation. The results obtained from projecting the renormalization group flow onto the Einstein-Hilbert action are reviewed in detail and we provide a novel example illustrating how the formalism may be connected to the causal dynamical triangulations approach to quantum gravity.
NASA Astrophysics Data System (ADS)
Hu, Yan-Yan; Li, Dong-Sheng
2016-01-01
Hyperspectral images (HSI) consist of many closely spaced bands carrying most of the object information. Due to their high dimensionality and high data volume, however, it is hard to obtain satisfactory classification performance. In order to reduce the HSI data dimensionality in preparation for high classification accuracy, it is proposed to combine a band selection method based on artificial immune systems (AIS) with a hybrid-kernel support vector machine (SVM-HK) algorithm. After comparing different kernels for hyperspectral analysis, the approach mixes the radial basis function kernel (RBF-K) with the sigmoid kernel (Sig-K) and applies the optimized hybrid kernels in SVM classifiers. The SVM-HK algorithm is then used to guide the band selection of an improved version of the AIS. The AIS is composed of clonal selection and elite antibody mutation, including an evaluation process with an optional index factor (OIF). Experimental classification was performed on a San Diego Naval Base scene acquired by AVIRIS; the results on this HRS dataset show that the method is able to efficiently achieve band redundancy removal while outperforming the traditional SVM classifier.
CW-SSIM kernel based random forest for image classification
NASA Astrophysics Data System (ADS)
Fan, Guangzhe; Wang, Zhou; Wang, Jiheng
2010-07-01
Complex wavelet structural similarity (CW-SSIM) index has been proposed as a powerful image similarity metric that is robust to translation, scaling and rotation of images, but how to employ it in image classification applications has not been deeply investigated. In this paper, we incorporate CW-SSIM as a kernel function into a random forest learning algorithm. This leads to a novel image classification approach that does not require a feature extraction or dimension reduction stage at the front end. We use hand-written digit recognition as an example to demonstrate our algorithm. We compare the performance of the proposed approach with random forest learning based on other kernels, including the widely adopted Gaussian and inner product kernels. Empirical evidence shows that the proposed method is superior in its classification power. We also compared our proposed approach with the direct random forest method without a kernel and with the popular kernel-learning method, the support vector machine. Our test results based on both simulated and real-world data suggest that the proposed approach works better than traditional methods without the feature selection procedure.
NASA Astrophysics Data System (ADS)
Guo, Feng; Wang, Xue-Yuan; Zhu, Cheng-Yin; Cheng, Xiao-Feng; Zhang, Zheng-Yu; Huang, Xu-Hui
2017-12-01
The stochastic resonance for a fractional oscillator with a time-delayed kernel and quadratic trichotomous noise is investigated. Applying linear system theory and the Laplace transform, the system output amplitude (SPA) for the fractional oscillator is obtained. It is found that the SPA is a periodic function of the kernel delay time. Stochastic multiplicative phenomena appear in the SPA versus the driving frequency, versus the noise amplitude, and versus the fractional exponent. The non-monotonic dependence of the SPA on the system parameters is also discussed.
Frequency Domain Analysis of NARX Neural Networks
NASA Astrophysics Data System (ADS)
Chance, J. E.; Worden, K.; Tomlinson, G. R.
1998-06-01
A method is proposed for interpreting the behaviour of NARX neural networks. The correspondence between time-delay neural networks and Volterra series is extended to the NARX class of networks. The Volterra kernels, or rather, their Fourier transforms, are obtained via harmonic probing. In the same way that the Volterra kernels generalize the impulse response to non-linear systems, the Volterra kernel transforms can be viewed as higher-order analogues of the Frequency Response Functions commonly used in Engineering dynamics; they can be interpreted in much the same way.
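For reference, the discrete-time Volterra expansion referred to here has the form (one common convention; truncation and normalization vary):

```latex
y(n) = h_0 + \sum_{\tau_1} h_1(\tau_1)\, x(n-\tau_1)
     + \sum_{\tau_1}\sum_{\tau_2} h_2(\tau_1,\tau_2)\, x(n-\tau_1)\, x(n-\tau_2) + \cdots
```

The kernel transforms $H_1(\omega_1), H_2(\omega_1,\omega_2), \ldots$ are the multidimensional Fourier transforms of the kernels $h_1, h_2, \ldots$; $H_1$ reduces to the ordinary Frequency Response Function for a linear system, which is the sense in which the transforms are higher-order analogues.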
A Riemann-Hilbert formulation for the finite temperature Hubbard model
NASA Astrophysics Data System (ADS)
Cavaglià, Andrea; Cornagliotto, Martina; Mattelliano, Massimo; Tateo, Roberto
2015-06-01
Inspired by recent results in the context of AdS/CFT integrability, we reconsider the Thermodynamic Bethe Ansatz equations describing the 1D fermionic Hubbard model at finite temperature. We prove that the infinite set of TBA equations is equivalent to a simple nonlinear Riemann-Hilbert problem for a finite number of unknown functions. The latter can be transformed into a set of three coupled nonlinear integral equations defined over a finite support, which can be easily solved numerically. We discuss the emergence of an exact Bethe Ansatz and the link between the TBA approach and the results by Jüttner, Klümper and Suzuki based on the Quantum Transfer Matrix method. We also comment on the analytic continuation mechanism leading to excited states and on the mirror equations describing the finite-size Hubbard model with twisted boundary conditions.
Bath-induced correlations in an infinite-dimensional Hilbert space
NASA Astrophysics Data System (ADS)
Nizama, Marco; Cáceres, Manuel O.
2017-09-01
Quantum correlations between two free spinless dissipative distinguishable particles (interacting with a thermal bath) are studied analytically using the quantum master equation and tools of quantum information. Bath-induced coherence and correlations in an infinite-dimensional Hilbert space are shown. We show that for temperature T> 0 the time-evolution of the reduced density matrix cannot be written as the direct product of two independent particles. We have found a time-scale that characterizes the time when the bath-induced coherence is maximum before being wiped out by dissipation (purity, relative entropy, spatial dispersion, and mirror correlations are studied). The Wigner function associated to the Wannier lattice (where the dissipative quantum walks move) is studied as an indirect measure of the induced correlations among particles. We have supported the quantum character of the correlations by analyzing the geometric quantum discord.
Model representations for systems of selfadjoint operators satisfying commutation relations
NASA Astrophysics Data System (ADS)
Zolotarev, Vladimir A.
2010-12-01
Model representations are constructed for a system $\{B_k\}_1^n$ of bounded linear selfadjoint operators in a Hilbert space $H$ such that $$[B_k, B_s] = \frac{i}{2}\,\varphi^* R_{k,s}^{-}\varphi, \qquad \sigma_k \varphi B_s - \sigma_s \varphi B_k = R_{k,s}^{+}\varphi,$$ $$\sigma_k \varphi\varphi^* \sigma_s - \sigma_s \varphi\varphi^* \sigma_k = 2i R_{k,s}^{-}, \qquad 1 \le k, s \le n,$$ where $\varphi$ is a linear operator from $H$ into a Hilbert space $E$ and $\{\sigma_k, R_{k,s}^{\pm}\}_1^n$ are some selfadjoint operators in $E$. A realization of these models in function spaces on a Riemann surface is found and a full set of invariants for $\{B_k\}_1^n$ is described. Bibliography: 11 titles.
Considering causal genes in the genetic dissection of kernel traits in common wheat.
Mohler, Volker; Albrecht, Theresa; Castell, Adelheid; Diethelm, Manuela; Schweizer, Günther; Hartl, Lorenz
2016-11-01
Genetic factors controlling thousand-kernel weight (TKW) were characterized for their association with other seed traits, including kernel width, kernel length, ratio of kernel width to kernel length (KW/KL), kernel area, and spike number per m² (SN). For this purpose, a genetic map was established utilizing a doubled haploid population derived from a cross between German winter wheat cultivars Pamier and Format. Association studies in a diversity panel of elite cultivars supplemented genetic analysis of kernel traits. In both populations, genomic signatures of 13 candidate genes for TKW and kernel size were analyzed. Major quantitative trait loci (QTL) for TKW were identified on chromosomes 1B, 2A, 2D, and 4D, and their locations coincided with major QTL for kernel size traits, supporting the common belief that TKW is a function of other kernel traits. The QTL on chromosome 2A was associated with TKW candidate gene TaCwi-A1 and the QTL on chromosome 4D was associated with dwarfing gene Rht-D1. A minor QTL for TKW on chromosome 6B coincided with TaGW2-6B. The QTL for kernel dimensions that did not affect TKW were detected on eight chromosomes. A major QTL for KW/KL located at the distal tip of chromosome arm 5AS is being reported for the first time. TaSus1-7A and TaSAP-A1, closely linked to each other on chromosome 7A, could be related to a minor QTL for KW/KL. Genetic analysis of SN confirmed its negative correlation with TKW in this cross. In the diversity panel, TaSus1-7A was associated with TKW. Compared to the Pamier/Format bi-parental population where TaCwi-A1a was associated with higher TKW, the same allele reduced grain yield in the diversity panel, suggesting opposite effects of TaCwi-A1 on these two traits.
Korekar, Girish; Stobdan, Tsering; Arora, Richa; Yadav, Ashish; Singh, Shashi Bala
2011-11-01
Fourteen apricot genotypes grown under similar cultural practices in the Trans-Himalayan Ladakh region were studied to find out the influence of genotype on the antioxidant capacity and total phenolic content (TPC) of apricot kernels. The kernels were found to be rich in TPC, ranging from 92.2 to 162.1 mg gallic acid equivalent/100 g. The free radical-scavenging activity in terms of inhibitory concentration (IC50) ranged from 43.8 to 123.4 mg/ml and the ferric reducing antioxidant potential (FRAP) from 154.1 to 243.6 μg FeSO4·7H2O/ml. A variation of 1-1.7-fold in total phenolic content, 1-2.8-fold in IC50 by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay and 1-1.6-fold in ferric reducing antioxidant potential among the examined kernels underlines the important role played by genetic background in determining the phenolic content and antioxidant potential of apricot kernels. A significant positive correlation between TPC and FRAP (r=0.671) was found. No significant correlation was found between TPC and IC50, FRAP and IC50, or TPC and the physical properties of the kernel. Principal component analysis demonstrated that the genotypic effect is more pronounced on the TPC and total antioxidant capacity (TAC) of apricot kernels, while the contribution of seed and kernel physical properties is not highly significant.
Putting Priors in Mixture Density Mercer Kernels
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model; a sketch of one such construction is given below. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
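One standard construction matching this description, offered here as a hedged illustration rather than the authors' exact formulation, averages inner products of posterior class-membership vectors over an ensemble of mixture density models; scikit-learn's GaussianMixture serves as a stand-in for the mixture models.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_density_kernel(X, n_models=5, n_components=3, seed=0):
    """K(x, y) = average over models of sum_c P(c|x) P(c|y).
    Each term is a Gram matrix of posteriors, so K is PSD by construction."""
    n = X.shape[0]
    K = np.zeros((n, n))
    for m in range(n_models):
        gmm = GaussianMixture(n_components=n_components,
                              random_state=seed + m).fit(X)
        Z = gmm.predict_proba(X)          # n x n_components posterior matrix
        K += Z @ Z.T                      # inner products of posteriors
    return K / n_models

X = np.random.randn(200, 4)               # placeholder data
K = mixture_density_kernel(X)
```

Priors enter through the mixture models themselves: whatever structure the Bayesian fitting favors is inherited by the kernel.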
Experimental demonstration of an efficient hybrid equalizer for short-reach optical SSB systems
NASA Astrophysics Data System (ADS)
Zhu, Mingyue; Ying, Hao; Zhang, Jing; Yi, Xingwen; Qiu, Kun
2018-02-01
We propose an efficient enhanced hybrid equalizer combining feed-forward equalization (FFE) with a modified Volterra filter to mitigate the linear and nonlinear interference in short-reach optical single-sideband (SSB) systems. The optical SSB signal is generated by a relatively low-cost dual-drive Mach-Zehnder modulator (DDMZM). The two driving signals are a pair of Hilbert signals with Nyquist pulse-shaped four-level pulse amplitude modulation (NPAM-4). After fiber transmission, neighboring received symbols are strongly correlated due to the pulse spreading in the time domain caused by chromatic dispersion (CD). At the receiver equalization stage, the FFE followed by the higher-order terms of the modified Volterra filter, which utilizes the forward and backward neighboring symbols to construct the kernels with strong correlation, is used as an enhanced hybrid equalizer to mitigate the inter-symbol interference (ISI) and the nonlinear distortion due to the interaction of CD and square-law detection. We experimentally demonstrate transmission of a 40 Gb/s optical SSB NPAM-4 signal over 80 km of standard single-mode fiber (SSMF) with a bit-error-rate (BER) of 7.59 × 10⁻⁴.
Rotational relaxation of CF+(X1Σ) in collision with He(1S)
NASA Astrophysics Data System (ADS)
Denis-Alpizar, O.; Inostroza, N.; Castro Palacio, J. C.
2018-01-01
The carbon monofluoride cation (CF+) has been detected recently in Galactic and extragalactic regions. Therefore, excitation rate coefficients of this molecule in collision with He and H2 are necessary for a correct interpretation of the astronomical observations. The main goal of this work is to study the collision of CF+ with He in full dimensionality at the close-coupling level and to report a large set of rotational rate coefficients. New ab initio interaction energies at the CCSD(T)/aug-cc-pV5Z level of theory were computed, and a three-dimensional potential energy surface was represented using a reproducing kernel Hilbert space. Close-coupling scattering calculations were performed at collisional energies up to 1600 cm-1 in the ground vibrational state. The vibrational quenching cross-sections were found to be at least three orders of magnitude lower than the pure rotational cross-sections. Also, the collisional rate coefficients were reported for the lowest 20 rotational states of CF+ and an even propensity rule was found to be in action only for j > 4. Finally, the hyperfine rate coefficients were explored. These data can be useful for the determination of the interstellar conditions where this molecule has been detected.
Study of the formation of interstellar CF+ from the HF + C+ → CF+ + H reaction
NASA Astrophysics Data System (ADS)
Denis-Alpizar, Otoniel; Guzmán, Viviana V.; Inostroza, Natalia
2018-06-01
The detection of the carbon monofluoride cation CF+ was considered as support for the theories of fluorine chemistry in the interstellar medium (ISM). This molecule is formed by the reaction of HF with C+. The rates of this reaction have been estimated previously by two different groups; however, these two estimates led to different results. The main goal of the present work is to study the HF + C+ reaction and determine new reactive rate coefficients. A large set of ab initio energies at the MRCI-F12/cc-pVQZ-F12 level was computed. The first reactive potential energy surface (PES) for the HF + C+ → CF+ + H reaction was developed using a reproducing kernel Hilbert space (RKHS) based method. The dynamics of the reaction was followed from quasiclassical trajectories (QCT). The results of such calculations showed that CF+ is produced in excited vibrational states. The rate coefficients for the HF + C+ → CF+ + H reaction from 50 K up to 2000 K are reported. The impact of these new data on the astrophysical models for the determination of the interstellar conditions is also explored.
QTL Mapping of Kernel Number-Related Traits and Validation of One Major QTL for Ear Length in Maize.
Huo, Dongao; Ning, Qiang; Shen, Xiaomeng; Liu, Lei; Zhang, Zuxin
2016-01-01
The kernel number is a grain yield component and an important maize breeding goal. Ear length, kernel number per row and ear row number are highly correlated with the kernel number per ear, which eventually determines the ear weight and grain yield. In this study, two sets of F2:3 families developed from two bi-parental crosses sharing one inbred line were used to identify quantitative trait loci (QTL) for four kernel number-related traits: ear length, kernel number per row, ear row number and ear weight. A total of 39 QTLs for the four traits were identified in the two populations. The phenotypic variance explained by a single QTL ranged from 0.4% to 29.5%. Additionally, 14 overlapping QTLs formed 5 QTL clusters on chromosomes 1, 4, 5, 7, and 10. Intriguingly, six QTLs for ear length and kernel number per row overlapped in a region on chromosome 1. This region was designated qEL1.10 and was validated as being simultaneously responsible for ear length, kernel number per row and ear weight in a near isogenic line-derived population, suggesting that qEL1.10 was a pleiotropic QTL with large effects. Furthermore, the performance of hybrids generated by crossing 6 elite inbred lines with two near isogenic lines at qEL1.10 showed the breeding value of qEL1.10 for the improvement of the kernel number and grain yield of maize hybrids. This study provides a basis for further fine mapping, molecular marker-aided breeding and functional studies of kernel number-related traits in maize.
The Conserved and Unique Genetic Architecture of Kernel Size and Weight in Maize and Rice.
Liu, Jie; Huang, Juan; Guo, Huan; Lan, Liu; Wang, Hongze; Xu, Yuancheng; Yang, Xiaohong; Li, Wenqiang; Tong, Hao; Xiao, Yingjie; Pan, Qingchun; Qiao, Feng; Raihan, Mohammad Sharif; Liu, Haijun; Zhang, Xuehai; Yang, Ning; Wang, Xiaqing; Deng, Min; Jin, Minliang; Zhao, Lijun; Luo, Xin; Zhou, Yang; Li, Xiang; Zhan, Wei; Liu, Nannan; Wang, Hong; Chen, Gengshen; Li, Qing; Yan, Jianbing
2017-10-01
Maize (Zea mays) is a major staple crop. Maize kernel size and weight are important contributors to its yield. Here, we measured kernel length, kernel width, kernel thickness, hundred kernel weight, and kernel test weight in 10 recombinant inbred line populations and dissected their genetic architecture using three statistical models. In total, 729 quantitative trait loci (QTLs) were identified, many of which were identified in all three models, including 22 major QTLs that each can explain more than 10% of phenotypic variation. To provide candidate genes for these QTLs, we identified 30 maize genes that are orthologs of 18 rice (Oryza sativa) genes reported to affect rice seed size or weight. Interestingly, 24 of these 30 genes are located in the identified QTLs or within 1 Mb of the significant single-nucleotide polymorphisms. We further confirmed the effects of five genes on maize kernel size/weight in an independent association mapping panel with 540 lines by candidate gene association analysis. Lastly, the function of ZmINCW1, a homolog of rice GRAIN INCOMPLETE FILLING1 that affects seed size and weight, was characterized in detail. ZmINCW1 is close to QTL peaks for kernel size/weight (less than 1 Mb) and contains significant single-nucleotide polymorphisms affecting kernel size/weight in the association panel. Overexpression of this gene can rescue the reduced weight of the Arabidopsis (Arabidopsis thaliana) homozygous mutant line in the AtcwINV2 gene (Arabidopsis ortholog of ZmINCW1). These results indicate that the molecular mechanisms affecting seed development are conserved in maize, rice, and possibly Arabidopsis. © 2017 American Society of Plant Biologists. All Rights Reserved.
USDA-ARS?s Scientific Manuscript database
Fusarium verticillioides (Fv) is a prevalent seed-borne maize endophyte capable of causing severe kernel rot and fumonisin mycotoxin contamination. Within maize kernels, Fv is primarily confined to the pedicel, while another seed-borne fungal endophyte, Sarocladium zeae (Sz), is observed in embryos....
USDA-ARS?s Scientific Manuscript database
Pre-harvest sprouting of wheat kernels within the grain head presents serious problems as it can greatly affect end use quality. Functional properties of wheat flour made from sprouted wheat result in poor dough and bread-making quality. This research examined the ability of two instruments to estim...
Maximum of the modulus of kernels in Gauss-Turan quadratures
NASA Astrophysics Data System (ADS)
Milovanovic, Gradimir V.; Spalevic, Miodrag M.; Pranic, Miroslav S.
2008-06-01
We study the kernels $K_{n,s}(z)$ in the remainder terms $R_{n,s}(f)$ of the Gauss-Turan quadrature formulae for analytic functions on elliptical contours with foci at $\pm 1$, when the weight $\omega$ is a generalized Chebyshev weight function. For the generalized Chebyshev weight of the first (third) kind, it is shown that the modulus of the kernel $|K_{n,s}(z)|$ attains its maximum on the real axis (positive real semi-axis) for each $n \ge n_0$, $n_0 = n_0(\rho, s)$. This was stated as a conjecture in [Mathematics of Computation 72 (2003), 1855-1872]. For the generalized Chebyshev weight of the second kind, in the case when the number of nodes $n$ in the corresponding Gauss-Turan quadrature formula is even, it is shown that the modulus of the kernel attains its maximum on the imaginary axis for each $n \ge n_0$, $n_0 = n_0(\rho, s)$. Numerical examples are included.
Quantum decimation in Hilbert space: Coarse graining without structure
NASA Astrophysics Data System (ADS)
Singh, Ashmeet; Carroll, Sean M.
2018-03-01
We present a technique to coarse grain quantum states in a finite-dimensional Hilbert space. Our method is distinguished from other approaches by not relying on structures such as a preferred factorization of Hilbert space or a preferred set of operators (local or otherwise) in an associated algebra. Rather, we use the data corresponding to a given set of states, either specified independently or constructed from a single state evolving in time. Our technique is based on principal component analysis (PCA), and the resulting coarse-grained quantum states live in a lower-dimensional Hilbert space whose basis is defined using the underlying (isometric embedding) transformation of the set of fine-grained states we wish to coarse grain. Physically, the transformation can be interpreted to be an "entanglement coarse-graining" scheme that retains most of the global, useful entanglement structure of each state, while needing fewer degrees of freedom for its reconstruction. This scheme could be useful for efficiently describing collections of states whose number is much smaller than the dimension of Hilbert space, or a single state evolving over time.
Hilbert's sixth problem: between the foundations of geometry and the axiomatization of physics.
Corry, Leo
2018-04-28
The sixth of Hilbert's famous 1900 list of 23 problems was a programmatic call for the axiomatization of the physical sciences. It was naturally and organically rooted at the core of Hilbert's conception of what axiomatization is all about. In fact, the axiomatic method which he applied at the turn of the twentieth century in his famous work on the foundations of geometry originated in a preoccupation with foundational questions related with empirical science in general. Indeed, far from a purely formal conception, Hilbert counted geometry among the sciences with strong empirical content, closely related to other branches of physics and deserving a treatment similar to that reserved for the latter. In this treatment, the axiomatization project was meant to play, in his view, a crucial role. Curiously, and contrary to a once-prevalent view, from all the problems in the list, the sixth is the only one that continually engaged Hilbert's efforts over a very long period of time, at least between 1894 and 1932. This article is part of the theme issue 'Hilbert's sixth problem'. © 2018 The Author(s).
Transient and asymptotic behaviour of the binary breakage problem
NASA Astrophysics Data System (ADS)
Mantzaris, Nikos V.
2005-06-01
The general binary breakage problem with power-law breakage functions and two families of symmetric and asymmetric breakage kernels is studied in this work; the governing equation is given below. A useful transformation leads to an equation that predicts self-similar solutions in its asymptotic limit and offers explicit knowledge of the mean size and particle density at each point in dimensionless time. A novel moving-boundary algorithm in the transformed coordinate system is developed, allowing the accurate prediction of the full transient behaviour of the system from the initial condition up to the point where self-similarity is achieved, and beyond if necessary. The numerical algorithm is very rapid and its results are in excellent agreement with known analytical solutions. In the case of the symmetric breakage kernels, only unimodal, self-similar number density functions are obtained asymptotically for all parameter values and independent of the initial conditions, while in the case of asymmetric breakage kernels, bimodality appears for high degrees of asymmetry and sharp breakage functions. For symmetric and discrete breakage kernels, self-similarity is not achieved. The solution exhibits sustained oscillations with an amplitude that depends on the initial condition and the sharpness of the breakage mechanism, while the period is always fixed and equal to ln 2 with respect to dimensionless time.
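For orientation, a common form of the binary breakage population balance solved in such studies is (the notation here is an assumption; conventions vary):

```latex
\frac{\partial n(x,t)}{\partial t}
  = -\,\Gamma(x)\, n(x,t)
  + 2 \int_{x}^{\infty} \Gamma(y)\, b(x \mid y)\, n(y,t)\, \mathrm{d}y,
\qquad \int_{0}^{y} b(x \mid y)\, \mathrm{d}x = 1,
```

where $\Gamma$ is the breakage rate (kernel), $b$ is the daughter size distribution, and the factor 2 reflects the two fragments per binary breakage event; power-law breakage functions correspond to $\Gamma(x) \propto x^{\alpha}$.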
Kim, Jongin; Park, Hyeong-jun
2016-01-01
The purpose of this study is to classify EEG data on imagined speech in a single trial. We recorded EEG data while five subjects imagined different vowels, /a/, /e/, /i/, /o/, and /u/. We divided each single-trial dataset into thirty segments and extracted features (mean, variance, standard deviation, and skewness) from all segments. To reduce the dimension of the feature vector, we applied a feature selection algorithm based on the sparse regression model. These features were classified using a support vector machine with a radial basis function kernel, an extreme learning machine, and two variants of an extreme learning machine with different kernels. Because each single trial consisted of thirty segments, our algorithm decided the label of the single trial by selecting the most frequent output among the outputs of the thirty segments. As a result, we observed that the extreme learning machine and its variants achieved better classification rates than the support vector machine with a radial basis function kernel and linear discriminant analysis. Thus, our results suggest that EEG responses to imagined speech can be successfully classified in a single trial using an extreme learning machine with radial basis function and linear kernels; a minimal sketch of such a classifier is given below. This study on the classification of imagined speech might contribute to the development of silent speech BCI systems. PMID:28097128
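A minimal extreme learning machine of the kind evaluated here: a random, fixed hidden layer followed by a closed-form least-squares readout. This is a generic sketch, not the study's implementation; the layer size and tanh activation are assumptions.

```python
import numpy as np

def elm_train(X, y, n_hidden=100, seed=0):
    """Extreme learning machine: random hidden layer + least-squares readout."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                  # fixed random feature map
    beta = np.linalg.pinv(H) @ y            # output weights in closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Per-trial labels would then be obtained by majority vote over the thirty segment-level predictions, as the abstract describes.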
Basis adaptation in homogeneous chaos spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, Ramakrishna; Ghanem, Roger
2014-02-01
We present a new method for the characterization of subspaces associated with low-dimensional quantities of interest (QoI). The probability density function of these QoI is found to be concentrated around one-dimensional subspaces for which we develop projection operators. Our approach builds on the properties of Gaussian Hilbert spaces and associated tensor product spaces.
NASA Astrophysics Data System (ADS)
Tubman, Norm; Whaley, Birgitta
The development of exponential scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently, and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground-state and excited-state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.
Decomposition of the polynomial kernel of arbitrary higher spin Dirac operators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eelbode, D., E-mail: David.Eelbode@ua.ac.be; Raeymaekers, T., E-mail: Tim.Raeymaekers@UGent.be; Van der Jeugt, J., E-mail: Joris.VanderJeugt@UGent.be
2015-10-15
In a series of recent papers, we have introduced higher spin Dirac operators, which are generalisations of the classical Dirac operator. Whereas the latter acts on spinor-valued functions, the former acts on functions taking values in arbitrary irreducible half-integer highest weight representations for the spin group. In this paper, we describe how the polynomial kernel spaces of such operators decompose in irreducible representations of the spin group. We will hereby make use of results from representation theory.
G-Hash: Towards Fast Kernel-based Similarity Search in Large Graph Databases.
Wang, Xiaohong; Smalter, Aaron; Huan, Jun; Lushington, Gerald H
2009-01-01
Structured data, including sets, sequences, trees and graphs, pose significant challenges to fundamental aspects of data management such as efficient storage, indexing, and similarity search. With the fast accumulation of graph databases, similarity search in graph databases has emerged as an important research topic. Graph similarity search has applications in a wide range of domains including cheminformatics, bioinformatics, sensor network management, social network management, and XML documents, among others. Most of the current graph indexing methods focus on subgraph query processing, i.e. determining the set of database graphs that contain the query graph, and hence do not directly support similarity search. In data mining and machine learning, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models for supervised learning, graph kernel functions have (i) high computational complexity and (ii) non-trivial difficulty to be indexed in a graph database. Our objective is to bridge graph kernel functions and similarity search in graph databases by proposing (i) a novel kernel-based similarity measurement and (ii) an efficient indexing structure for graph data management. Our method of similarity measurement builds upon local features extracted from each node and their neighboring nodes in graphs. A hash table is utilized to support efficient storage and fast search of the extracted local features. Using the hash table, a graph kernel function is defined to capture the intrinsic similarity of graphs and for fast similarity query processing. We have implemented our method, which we have named G-hash, and have demonstrated its utility on large chemical graph databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Most importantly, the new similarity measurement and the index structure are scalable to large databases, with smaller indexing size, faster index construction time, and faster query processing time as compared to state-of-the-art indexing methods such as C-tree, gIndex, and GraphGrep.
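An illustrative sketch in the spirit of the hashed local-feature kernel described above (not the published G-hash implementation; networkx and the "label" node attribute are assumptions): each node is mapped to a key built from its own label and its sorted neighbour labels, the keys are counted in a hash table, and the kernel is the inner product of the two count vectors.

```python
from collections import Counter
import networkx as nx  # assumed dependency for graph handling

def local_features(g: nx.Graph) -> Counter:
    """Hash table of (node label, sorted neighbour labels) occurrences."""
    table = Counter()
    for node, data in g.nodes(data=True):
        nbr_labels = tuple(sorted(g.nodes[n]["label"] for n in g.neighbors(node)))
        table[(data["label"], nbr_labels)] += 1
    return table

def graph_kernel(g1: nx.Graph, g2: nx.Graph) -> int:
    """Inner product of local-feature counts; symmetric and PSD."""
    f1, f2 = local_features(g1), local_features(g2)
    return sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())
```

Because the kernel reduces to a sparse inner product over hashed keys, nearest-neighbour queries only need the stored count tables, which is what makes the index scalable.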
Transition probabilities for non self-adjoint Hamiltonians in infinite dimensional Hilbert spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagarello, F., E-mail: fabio.bagarello@unipa.it
In a recent paper we have introduced several possible inequivalent descriptions of the dynamics and of the transition probabilities of a quantum system when its Hamiltonian is not self-adjoint. Our analysis was carried out in finite dimensional Hilbert spaces. This is useful, but quite restrictive, since many physically relevant quantum systems live in infinite dimensional Hilbert spaces. In this paper we consider this situation, and we discuss some applications to well known models introduced in the literature in recent years: the extended harmonic oscillator, the Swanson model and a generalized version of the Landau levels Hamiltonian. Not surprisingly, we find new interesting features not previously found in finite dimensional Hilbert spaces, useful for a deeper comprehension of this kind of physical system.
Singular value decomposition for the truncated Hilbert transform
NASA Astrophysics Data System (ADS)
Katsevich, A.
2010-11-01
Starting from a breakthrough result by Gelfand and Graev, inversion of the Hilbert transform became a very important tool for image reconstruction in tomography. In particular, their result is useful when the tomographic data are truncated and one deals with an interior problem. As was established recently, the interior problem admits a stable and unique solution when some a priori information about the object being scanned is available. The most common approach to solving the interior problem is based on converting it to the Hilbert transform and performing analytic continuation. Depending on what type of tomographic data are available, one gets different Hilbert inversion problems. In this paper, we consider two such problems and establish singular value decomposition for the operators involved. We also propose algorithms for performing analytic continuation.
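A crude numerical companion to the abstract (an illustration, not the paper's analysis; the grid sizes and interval endpoints are arbitrary assumptions): discretize a truncated Hilbert transform in which the function's support and the measurement interval differ, and inspect its singular values.

```python
# Midpoint-rule discretization of (Hf)(x) = (1/pi) * PV ∫ f(y)/(x - y) dy,
# with f supported on `src` and data measured on the disjoint interval `dst`,
# so the principal-value singularity x == y never occurs.
import numpy as np

def hilbert_matrix(src, dst):
    h = src[1] - src[0]                       # quadrature weight
    X, Y = np.meshgrid(dst, src, indexing="ij")
    return h / (np.pi * (X - Y))

n = 200
src = np.linspace(0.0, 1.0, n)                # support of f
dst = np.linspace(1.1, 2.1, n)                # where the transform is sampled
s = np.linalg.svd(hilbert_matrix(src, dst), compute_uv=False)
print(s[:5], s[-5:])  # rapidly decaying singular values signal ill-posedness
```

The fast decay of the computed singular values is the numerical face of the severe ill-posedness that makes analytic continuation in the interior problem delicate.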
An Ensemble Approach to Building Mercer Kernels with Prior Information
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2005-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms, like different versions of EM, or numeric optimization methods such as conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
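A hedged sketch of a mixture-density kernel (following the idea in the abstract, not the AUTOBAYES-generated code; the ensemble size, component count, and bootstrap scheme are assumptions): fit an ensemble of Gaussian mixture models and define the kernel between two points as the averaged inner product of their cluster-posterior vectors, which is symmetric positive semidefinite by construction.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_ensemble(X, n_models=10, n_components=5, seed=0):
    """Fit GMMs to bootstrap replicates of the data."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))   # bootstrap resample
        models.append(GaussianMixture(n_components, random_state=0).fit(X[idx]))
    return models

def mdm_kernel(models, X, Y):
    """K[i, j] = average over models of <P(c | x_i), P(c | y_j)>."""
    K = np.zeros((len(X), len(Y)))
    for m in models:
        K += m.predict_proba(X) @ m.predict_proba(Y).T
    return K / len(models)
```

Prior knowledge enters through the choice of mixture model and its Bayesian priors; two points get a large kernel value when the ensemble consistently assigns them to the same clusters.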
Executing application function calls in response to an interrupt
Almasi, Gheorghe; Archer, Charles J.; Giampapa, Mark E.; Gooding, Thomas M.; Heidelberger, Philip; Parker, Jeffrey J.
2010-05-11
Executing application function calls in response to an interrupt including creating a thread; receiving an interrupt having an interrupt type; determining whether a value of a semaphore represents that interrupts are disabled; if the value of the semaphore represents that interrupts are not disabled: calling, by the thread, one or more preconfigured functions in dependence upon the interrupt type of the interrupt; yielding the thread; and if the value of the semaphore represents that interrupts are disabled: setting the value of the semaphore to represent to a kernel that interrupts are hard-disabled; and hard-disabling interrupts at the kernel.
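A schematic rendering of the claimed control flow (the patent targets a compute-node kernel, not Python, so every name here is illustrative; the semaphore value stands in for the interrupts-disabled flag):

```python
import threading

interrupts_enabled = threading.Semaphore(1)  # value 1 => interrupts enabled
handlers = {}  # interrupt type -> list of preconfigured functions

def on_interrupt(interrupt_type):
    if interrupts_enabled.acquire(blocking=False):
        try:
            # Interrupts are not disabled: the thread calls the functions
            # preconfigured for this interrupt type, then yields.
            for fn in handlers.get(interrupt_type, []):
                fn()
        finally:
            interrupts_enabled.release()
    else:
        # Interrupts are disabled: signal the kernel that interrupts are
        # hard-disabled and hard-disable them at the kernel.
        hard_disable_interrupts()

def hard_disable_interrupts():
    pass  # placeholder for the kernel-level hard-disable
```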
Efficient protein structure search using indexing methods.
Kim, Sungchul; Sael, Lee; Yu, Hwanjo
2013-01-01
Understanding the functions of proteins is one of the most important challenges in many studies of biological processes. The function of a protein can be predicted by analyzing the functions of structurally similar proteins, thus finding structurally similar proteins accurately and efficiently from a large set of proteins is crucial. A protein structure can be represented as a vector by the 3D-Zernike Descriptor (3DZD), which compactly represents the surface shape of the protein tertiary structure. This simplified representation accelerates the search process. However, computing the similarity of two protein structures is still computationally expensive, thus it is hard to efficiently process many simultaneous requests for structurally similar protein search. This paper proposes indexing techniques which substantially reduce the search time to find structurally similar proteins. In particular, we first exploit two indexing techniques, i.e., iDistance and iKernel, on the 3DZDs. After that, we extend the techniques to further improve the search speed for protein structures. The extended indexing techniques build and utilize a reduced index constructed from the first few attributes of the 3DZDs of protein structures. To retrieve the top-k similar structures, the top-10 × k similar structures are first found using the reduced index, and the top-k structures are selected among them. We also modify the indexing techniques to support θ-based nearest neighbor search, which returns data points within distance θ of the query point. The results show that both iDistance and iKernel significantly enhance the search speed. In top-k nearest neighbor search, the search time is reduced by 69.6%, 77%, 77.4% and 87.9%, respectively, using iDistance, iKernel, the extended iDistance, and the extended iKernel. In θ-based nearest neighbor search, the search time is reduced by 80%, 81%, 95.6% and 95.6% using iDistance, iKernel, the extended iDistance, and the extended iKernel, respectively. PMID:23691543
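A sketch of the two-stage retrieval described above (illustrative, not the authors' implementation; array names and the Euclidean distance are assumptions): a reduced index over the first few 3DZD attributes retrieves 10·k candidates, which are then re-ranked with the full descriptors.

```python
import numpy as np

def two_stage_topk(db, query, k, reduced_dims=10):
    """db: (n_proteins, d) matrix of 3DZD vectors; query: (d,) vector."""
    d_reduced = np.linalg.norm(db[:, :reduced_dims] - query[:reduced_dims],
                               axis=1)
    candidates = np.argsort(d_reduced)[: 10 * k]     # stage 1: reduced index
    d_full = np.linalg.norm(db[candidates] - query, axis=1)
    return candidates[np.argsort(d_full)[:k]]        # stage 2: re-rank
```

The over-fetch factor of 10 trades a small amount of extra re-ranking work for a much cheaper first pass over low-dimensional prefixes.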
Resonant nonlinear ultrasound spectroscopy
Johnson, Paul A.; TenCate, James A.; Guyer, Robert A.; Van Den Abeele, Koen E. A.
2001-01-01
Components with defects are identified from the response to strains applied at acoustic and ultrasound frequencies. The relative resonance frequency shift |Δf/f₀| is determined as a function of applied strain amplitude for an acceptable component, where f₀ is the frequency of the resonance peak at the lowest amplitude of applied strain and Δf is the frequency shift of the resonance peak of a selected mode, to determine a reference relationship. Then, the relative resonance frequency shift |Δf/f₀| is determined as a function of applied strain for a component under test, where f₀ is the frequency of the resonance peak at the lowest amplitude of applied strain and Δf is the frequency shift of the resonance peak, to determine a quality test relationship. The reference relationship is compared with the quality test relationship to determine the presence of defects in the component under test.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vostokov, S V
A new method for calculating an explicit form of the Hilbert pairing is proposed. It is used to calculate the Hilbert pairing in a classical local field and in a complete higher-dimensional field. Bibliography: 25 titles.
The spectral function of a singular differential operator of order 2m
NASA Astrophysics Data System (ADS)
Kozko, Artem I.; Pechentsov, Alexander S.
2010-12-01
We study the spectral function of a self-adjoint, semibounded-below differential operator on the Hilbert space L_2[0,\infty) and obtain formulae for the spectral function of the operator (-1)^m y^{(2m)}(x) with general boundary conditions at zero. In particular, for the boundary conditions y(0)=y'(0)=\dots=y^{(m-1)}(0)=0 we find the explicit form of the spectral function \Theta_{mB'}(x,x,\lambda) on the diagonal x=y for \lambda \ge 0.
On the physical Hilbert space of loop quantum cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noui, Karim; Perez, Alejandro; Vandersloot, Kevin
2005-02-15
In this paper we present a model of Riemannian loop quantum cosmology with a self-adjoint quantum scalar constraint. The physical Hilbert space is constructed using refined algebraic quantization. When matter is included in the form of a cosmological constant, the model is exactly solvable and we show explicitly that the physical Hilbert space is separable, consisting of a single physical state. We extend the model to the Lorentzian sector and discuss important implications for standard loop quantum cosmology.
Experimental Issues in Coherent Quantum-State Manipulation of Trapped Atomic Ions
1998-05-01
… in Hilbert space and almost always precludes the existence of "large" Schrödinger-cat-like states except on extremely short time scales. … Hamiltonian H_ideal operates on the Hilbert space formed by the |↓⟩_l and |↑⟩_l states of the L qubits. In practice, for the case of trapped ions, the … auxiliary state (Sec. 3.3). If decoherence mechanisms cause other states to be populated, the Hilbert space must be expanded. Although more streamlined …
NASA Astrophysics Data System (ADS)
Gao, Gan; Wang, Li-Ping
2010-11-01
We propose a quantum secret sharing protocol in which Bell states in a high-dimensional Hilbert space are employed. The biggest advantage of our protocol is its high source capacity. Compared with the previous secret sharing protocol, ours has higher controlling efficiency. In addition, as decoy states in the high-dimensional Hilbert space are used, we need not destroy quantum entanglement to check the channel security.
The Laplace method for probability measures in Banach spaces
NASA Astrophysics Data System (ADS)
Piterbarg, V. I.; Fatalov, V. R.
1995-12-01
Contents
§1. Introduction
Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
§2. The large deviation principle and logarithmic asymptotics of continual integrals
§3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
3.4. Exact asymptotics of large deviations of Gaussian norms
§4. The Laplace method for distributions of sums of independent random elements with values in Banach space
4.1. The case of a non-degenerate minimum point ([137], I)
4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
§5. Further examples
5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
§6. Pickands' method of double sums
6.1. General situations
6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
§7. Probabilities of large deviations of trajectories of Gaussian fields
7.1. Homogeneous fields and fields with constant dispersion
7.2. Finitely many maximum points of dispersion
7.3. Manifold of maximum points of dispersion
7.4. Asymptotics of distributions of maxima of Wiener fields
§8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type \chi^2
8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes
8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process
Bibliography
Modern Electromagnetic Scattering
2013-08-10
… Kramers–Kronig relations and is therefore a complex-valued function of angular frequency. The same is true for permeability. Thus, in general, we have … Kramers–Kronig relations, then ε(ω) and µ(ω) are analytic functions in the upper-half ω-plane. Furthermore, it can be shown that ε(ω) and µ(ω) are never … Kramers–Kronig (KK) relations (the Hilbert transform pair) in the Fourier domain, namely, … For our purposes, it is more convenient to work with (3.3 …
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption may be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.
Electron beam lithographic modeling assisted by artificial intelligence technology
NASA Astrophysics Data System (ADS)
Nakayamada, Noriaki; Nishimura, Rieko; Miura, Satoru; Nomura, Haruyuki; Kamikubo, Takashi
2017-07-01
We propose a new concept for tuning a point-spread function (a "kernel" function) in the modeling of electron beam lithography using the machine learning scheme. Normally in work on artificial intelligence, researchers focus on the output results of a neural network, such as the success ratio in image recognition or improved production yield, etc. In this work, we put more focus on the weights connecting the nodes in a convolutional neural network, which are naturally the fractions of a point-spread function, and take out those weighted fractions after learning to be utilized as a tuned kernel. Proof-of-concept of the kernel tuning has been demonstrated using the examples of proximity effect correction with a 2-layer network, and charging effect correction with a 3-layer network. This new type of tuning method can give researchers more insight for building a better model, yet it might be too early to deploy it to production in the expectation of almost instant gains in critical dimension (CD) and positional accuracy.
Optimized data fusion for K-means Laplacian clustering
Yu, Shi; Liu, Xinhai; Tranchevent, Léon-Charles; Glänzel, Wolfgang; Suykens, Johan A. K.; De Moor, Bart; Moreau, Yves
2011-01-01
Motivation: We propose a novel algorithm to combine multiple kernels and Laplacians for clustering analysis. The new algorithm is formulated on a Rayleigh quotient objective function and is solved as a bi-level alternating minimization procedure. Using the proposed algorithm, the coefficients of kernels and Laplacians can be optimized automatically. Results: Three variants of the algorithm are proposed. The performance is systematically validated on two real-life data fusion applications. The proposed Optimized Kernel Laplacian Clustering (OKLC) algorithms perform significantly better than other methods. Moreover, the coefficients of kernels and Laplacians optimized by OKLC show some correlation with the performance rank of the individual data sources. Though in our evaluation the K values are predefined, in practical studies the optimal cluster number can be consistently estimated from the eigenspectrum of the combined kernel Laplacian matrix. Availability: The MATLAB code of the algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/oklc.html. Contact: shiyu@uchicago.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20980271
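A simplified fixed-coefficient variant of the idea (OKLC optimizes the coefficients automatically; here they are user-supplied, and the combination rule is a plausible sketch rather than the paper's objective): combine kernels and graph Laplacians into one matrix and cluster its leading eigenvectors.

```python
import numpy as np
from sklearn.cluster import KMeans

def kl_cluster(kernels, laplacians, theta, phi, n_clusters):
    """kernels, laplacians: lists of (n, n) matrices; theta, phi: weights."""
    M = sum(t * K for t, K in zip(theta, kernels)) \
      - sum(p * L for p, L in zip(phi, laplacians))  # kernel minus Laplacian
    w, V = np.linalg.eigh(M)
    U = V[:, -n_clusters:]                 # leading eigenvectors of M
    return KMeans(n_clusters, n_init=10).fit_predict(U)
```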
Half-blind remote sensing image restoration with partly unknown degradation
NASA Astrophysics Data System (ADS)
Xie, Meihua; Yan, Fengxia
2017-01-01
The problem of image restoration has been extensively studied for its practical importance and theoretical interest. This paper mainly discusses the problem of image restoration with a partly unknown kernel: the form of the degradation kernel is known, but its parameters are not. With this model, we must estimate the parameters of the Gaussian kernel and the real image simultaneously. For this new problem, a total variation restoration model is proposed and an alternating-direction iterative algorithm is designed. Peak Signal to Noise Ratio (PSNR) and the Structural Similarity Index Measurement (SSIM) are used to measure the performance of the method. Numerical results show that we can estimate the parameters of the kernel accurately, and that the new method attains both much higher PSNR and much higher SSIM than the expectation maximization (EM) method in many cases. In addition, the accuracy of the estimation is not sensitive to noise. Furthermore, even when the support of the kernel is unknown, we can still use this method to obtain an accurate estimate.
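A much-simplified sketch of the half-blind idea (not the paper's total-variation algorithm; Wiener deconvolution and a grid search stand in for the paper's model, and all parameter values are assumptions): alternate between restoring the image for the current Gaussian-kernel width and re-estimating the width that best re-explains the degraded image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def _psf_ft(shape, sigma):
    """Fourier transform of a periodized Gaussian PSF of width sigma."""
    delta = np.zeros(shape)
    delta[0, 0] = 1.0
    return np.fft.fft2(gaussian_filter(delta, sigma, mode="wrap"))

def wiener(degraded, sigma, balance=1e-2):
    """Crude Fourier-domain Wiener deconvolution (image step)."""
    H = _psf_ft(degraded.shape, sigma)
    G = np.fft.fft2(degraded)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + balance)))

def restore(degraded, sigmas=np.linspace(0.5, 5.0, 10), n_iter=5):
    sigma = sigmas[len(sigmas) // 2]
    for _ in range(n_iter):
        image = wiener(degraded, sigma)                       # image step
        resid = [np.linalg.norm(gaussian_filter(image, s) - degraded)
                 for s in sigmas]                             # kernel step
        sigma = sigmas[int(np.argmin(resid))]
    return image, sigma
```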
On the deep structure of the blowing-up of curve singularities
NASA Astrophysics Data System (ADS)
Elias, Juan
2001-09-01
Let C be a germ of a curve singularity embedded in (k^n, 0). It is well known that the blowing-up of C centred at its closed point, Bl(C), is a finite union of curve singularities. If C is reduced we can iterate this process and, after a finite number of steps, we find only non-singular curves. This is the desingularization process. The main idea of this paper is to linearize the blowing-up of curve singularities Bl(C) → C. We perform this by studying the structure of \mathcal{O}_{Bl(C)}/\mathcal{O}_C as a W-module, where W is a discrete valuation ring contained in \mathcal{O}_C. Since \mathcal{O}_{Bl(C)}/\mathcal{O}_C is a torsion W-module, its structure is determined by the invariant factors of \mathcal{O}_C in \mathcal{O}_{Bl(C)}. The set of invariant factors is called in this paper the set of micro-invariants of C (see Definition 1·2). In the first section we relate the micro-invariants of C to the Hilbert function of C (Proposition 1·3), and we show how to compute them from the Hilbert function of some quotient of \mathcal{O}_C (see Proposition 1·4). The main result of this paper is Theorem 3·3, where we give upper bounds for the micro-invariants in terms of the regularity, multiplicity and embedding dimension. As a corollary we improve and recover some results of [6]. These bounds can be established as a consequence of the study of the Hilbert function of a filtration of ideals g = \{g[r,i+1]\}_{i \ge 0} of the tangent cone of \mathcal{O}_C (see Section 2). The main property of g is that the ideals g[r,i+1] have initial degree bigger than the Castelnuovo-Mumford regularity of the tangent cone of \mathcal{O}_C. Section 4 is devoted to the computation of the micro-invariants of branches; we show how to compute them from the semigroups of values of C and Bl(C) (Proposition 4·3). The case of monomial curve singularities is especially studied; we end Section 4 with some explicit computations. In the last section we study some geometric properties of C that can be deduced from special values of the micro-invariants, and we specially study the relationship of the micro-invariants with the Hilbert function of \mathcal{O}_{Bl(C)}. We end the paper studying the natural equisingularity criteria that can be defined from the micro-invariants and their relationship with some of the known equisingularity criteria.
Optimal projection method determination by Logdet Divergence and perturbed von-Neumann Divergence.
Jiang, Hao; Ching, Wai-Ki; Qiu, Yushan; Cheng, Xiao-Qing
2017-12-14
Positive semi-definiteness is a critical property in kernel methods for Support Vector Machines (SVMs), by which efficient solutions can be guaranteed through convex quadratic programming. However, many similarity functions in applications do not produce positive semi-definite kernels. We propose a projection method that constructs a projection matrix on indefinite kernels. As a generalization of the spectrum methods (the denoising method and the flipping method), the projection method shows better or comparable performance compared to the corresponding indefinite kernel methods on a number of real-world data sets. Under Bregman matrix divergence theory, we can find a suggested optimal λ for the projection method using unconstrained optimization in kernel learning. In this paper we focus on optimal λ determination, in pursuit of a precise optimal-λ determination method within the unconstrained optimization framework. We developed a perturbed von-Neumann divergence to measure kernel relationships. We compared optimal λ determination with the Logdet divergence and the perturbed von-Neumann divergence, aiming at finding a better λ for the projection method. Results on a number of real-world data sets show that the projection method with optimal λ by Logdet divergence demonstrates near-optimal performance, and the perturbed von-Neumann divergence can help determine a relatively better optimal projection method. The projection method is easy to use for dealing with indefinite kernels, and the parameter embedded in the method can be determined through unconstrained optimization under Bregman matrix divergence theory. This may provide a new way forward in kernel SVMs for varied objectives.
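For context, a sketch of the spectrum-modification family that the projection method generalizes (illustrative numpy only; the paper's projection matrix construction and the λ selected by the divergence criteria are not reproduced here):

```python
import numpy as np

def fix_indefinite(K, method="clip"):
    """Make a symmetric indefinite kernel matrix positive semidefinite."""
    w, V = np.linalg.eigh((K + K.T) / 2)   # symmetrize, then decompose
    if method == "clip":                   # denoising: zero out negatives
        w_new = np.maximum(w, 0.0)
    elif method == "flip":                 # flipping: take absolute values
        w_new = np.abs(w)
    elif method == "project":              # keep only the positive part
        keep = w > 0
        V, w_new = V[:, keep], w[keep]
    return (V * w_new) @ V.T
```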
Stochastic subset selection for learning with kernel machines.
Rhinelander, Jason; Liu, Xiaoping P
2012-06-01
Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales superlinearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs for computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.
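An illustrative sketch of the core idea (not the authors' algorithm; the RBF kernel, the uniform sampling, and the rescaling are assumptions): evaluate the kernel expansion over a stochastically indexed subset of the stored support vectors, so per-sample cost scales with the subset size rather than with the full SV set.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.linalg.norm(x - y) ** 2)

def stochastic_expansion(x, sv, alpha, subset_size, rng):
    """Approximate f(x) = sum_i alpha_i k(x, sv_i) from a random subset."""
    idx = rng.choice(len(sv), size=subset_size, replace=False)
    scale = len(sv) / subset_size            # unbiased rescaling
    return scale * sum(alpha[i] * rbf(x, sv[i]) for i in idx)
```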
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-08-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision-making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps, i.e. the spatial probability of a future vent opening given the past eruptive activity of a volcano. This challenging issue is generally tackled using probabilistic methods that calculate a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source Geographic Information System Quantum GIS, that is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the user to select an appropriate method for evaluating the bandwidth for the kernel function, on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input datasets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is shown here through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
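A minimal sketch of the susceptibility workflow described above (QVAST itself is a Quantum GIS plugin; this plain-Python stand-in uses scipy's Gaussian KDE, whose default Scott's-rule bandwidth substitutes for QVAST's bandwidth-selection methods):

```python
import numpy as np
from scipy.stats import gaussian_kde

def susceptibility(datasets, weights, grid_xy):
    """datasets: list of (2, n_i) past-vent coordinates; grid_xy: (2, m)."""
    pdfs = [gaussian_kde(d)(grid_xy) for d in datasets]  # one PDF per dataset
    total = sum(w * p for w, p in zip(weights, pdfs))    # weighted summation
    return total / total.sum()   # normalize over the evaluation grid
```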
Projective flatness in the quantisation of bosons and fermions
NASA Astrophysics Data System (ADS)
Wu, Siye
2015-07-01
We compare the quantisation of linear systems of bosons and fermions. We recall the appearance of projectively flat connection and results on parallel transport in the quantisation of bosons. We then discuss pre-quantisation and quantisation of fermions using the calculus of fermionic variables. We define a natural connection on the bundle of Hilbert spaces and show that it is projectively flat. This identifies, up to a phase, equivalent spinor representations constructed by various polarisations. We introduce the concept of metaplectic correction for fermions and show that the bundle of corrected Hilbert spaces is naturally flat. We then show that the parallel transport in the bundle of Hilbert spaces along a geodesic is a rescaled projection provided that the geodesic lies within the complement of a cut locus. Finally, we study the bundle of Hilbert spaces when there is a symmetry.
Effective Numerical Methods for Solving Elliptical Problems in Strengthened Sobolev Spaces
NASA Technical Reports Server (NTRS)
D'yakonov, Eugene G.
1996-01-01
Fourth-order elliptic boundary value problems in the plane can be reduced to operator equations in Hilbert spaces G that are certain subspaces of the Sobolev space W_2^2(\Omega) \equiv G^{(2)}. The appearance of asymptotically optimal algorithms for Stokes-type problems made it natural to focus on an approach that considers \operatorname{rot} w \equiv (D_2 w, -D_1 w) \equiv \vec{u} as a new unknown vector-function, which automatically satisfies the condition \operatorname{div} \vec{u} = 0. In this work, we show that this approach can also be developed for an important class of problems from the theory of plates and shells with stiffeners. The main mathematical problem was to show that the well-known inf-sup condition (normal solvability of the divergence operator) holds for special Hilbert spaces. This result is also essential for certain hydrodynamics problems.
Coherence-generating power of quantum dephasing processes
NASA Astrophysics Data System (ADS)
Styliaris, Georgios; Campos Venuti, Lorenzo; Zanardi, Paolo
2018-03-01
We provide a quantification of the capability of various quantum dephasing processes to generate coherence out of incoherent states. The measures defined, admitting computable expressions for any finite Hilbert-space dimension, are based on probabilistic averages and arise naturally from the viewpoint of coherence as a resource. We investigate how the capability of a dephasing process (e.g., a nonselective orthogonal measurement) to generate coherence depends on the relevant bases of the Hilbert space over which coherence is quantified and the dephasing process occurs, respectively. We extend our analysis to include those Lindblad time evolutions which, in the infinite-time limit, dephase the system under consideration and calculate their coherence-generating power as a function of time. We further identify specific families of such time evolutions that, although dephasing, have optimal (over all quantum processes) coherence-generating power for some intermediate time. Finally, we investigate the coherence-generating capability of random dephasing channels.
Qi, Xin; Li, Shixue; Zhu, Yaxi; Zhao, Qian; Zhu, Dengyun; Yu, Jingjuan
2017-01-01
To explore the function of Dof transcription factors during kernel development in maize, we first identified Dof genes in the maize genome. We found that ZmDof3 was exclusively expressed in the endosperm of the maize kernel and had the features of a Dof transcription factor. Suppression of ZmDof3 resulted in a defective kernel phenotype with reduced starch content and a partially patchy aleurone layer. The expression levels of starch synthesis-related genes and aleurone differentiation-associated genes were down-regulated in ZmDof3 knockdown kernels, indicating that ZmDof3 plays an important role in maize endosperm development. The maize endosperm, occupying a large proportion of the kernel, plays an important role in seed development and germination. Current knowledge regarding the regulation of endosperm development is limited. Dof proteins, a family of plant-specific transcription factors, play critical roles in diverse biological processes. In this study, an endosperm-specific Dof protein gene, ZmDof3, was identified in maize through genome-wide screening. Suppression of ZmDof3 resulted in a defective kernel phenotype. The endosperm of ZmDof3 knockdown kernels was loosely packed with irregular starch granules, as observed by electron microscopy. Through genome-wide expression profiling, we found that down-regulated genes were enriched in GO terms related to carbohydrate metabolism. Moreover, ZmDof3 could bind to the Dof core element in the promoters of the starch biosynthesis genes Du1 and Su2 in vitro and in vivo. In addition, the aleurone at local positions in mature ZmDof3 knockdown kernels varied from one to three layers, which consisted of smaller and irregular cells. Further analyses showed that knockdown of ZmDof3 reduced the expression of Nkd1, which is involved in aleurone cell differentiation, and that ZmDof3 could bind to the Dof core element in the Nkd1 promoter. Our study reveals that ZmDof3 functions in maize endosperm development as a positive regulator in the signaling system controlling starch accumulation and aleurone development.
Bivariate discrete beta Kernel graduation of mortality data.
Mazza, Angelo; Punzo, Antonio
2015-07-01
Various parametric/nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, such as kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidth selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make the simulations realistic, a bivariate dataset based on probabilities of dying recorded for US males is used. The simulations confirm the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.
Schaid, Daniel J
2010-01-01
Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
Castro-Palacio, Juan Carlos; Nagy, Tibor; Bemish, Raymond J; Meuwly, Markus
2014-10-28
Reactions involving N and O atoms dominate the energetics of the reactive air flow around spacecraft when reentering the atmosphere in the hypersonic flight regime. For this reason, the thermal rate coefficients for reactive processes involving O(³P) and NO(²Π) are relevant over a wide range of temperatures. For this purpose, a potential energy surface (PES) for the ground state of the NO2 molecule is constructed based on high-level ab initio calculations. These ab initio energies are represented using the reproducing kernel Hilbert space (RKHS) method and Legendre polynomials. The global PES of NO2 in the ground state is constructed by smoothly connecting the surfaces of the grids of various channels around the equilibrium NO2 geometry by a distance-dependent weighting function. The rate coefficients were calculated using Monte Carlo integration. The results indicate that at high temperatures only the lowest A-symmetry PES is relevant. At the highest temperature investigated (20,000 K), the rate coefficient for the "O1O2+N" channel becomes comparable (to within a factor of around three) to the rate coefficient of the oxygen exchange reaction. A state-resolved analysis shows that the smaller the vibrational quantum number of NO in the reactants, the higher the relative translational energy required to open the channel; conversely, with a higher vibrational quantum number, less translational energy is required. This is in accordance with Polanyi's rules. However, the oxygen exchange channel (NO2+O1) is accessible at any collision energy. Finally, this work introduces an efficient computational protocol for the investigation of three-atom collisions in general.
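A hedged sketch of the representation step: interpolating ab initio grid energies with kernel ridge regression, a standard RKHS method (the paper uses a specific reproducing kernel plus Legendre polynomials for the angular coordinate; an RBF kernel and the coordinate layout are stand-in assumptions here).

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# geometries: (n, 3) internal coordinates, e.g. (R, r, cos(theta));
# energies: (n,) ab initio energies on the grid.
def fit_pes(geometries, energies):
    """RKHS-style interpolant of the grid energies."""
    return KernelRidge(kernel="rbf", alpha=1e-8, gamma=1.0).fit(
        geometries, energies)

# Usage: pes = fit_pes(grid_geoms, grid_energies)
#        E_new = pes.predict(new_geoms)   # smooth PES evaluation off-grid
```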
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calixto, M., E-mail: calixto@ugr.es; Pérez-Romero, E.
We revise the unitary irreducible representations of U(2, 2) describing conformal particles with continuous mass spectrum from a many-body perspective, which shows massive conformal particles as compounds of two correlated massless particles. The statistics of the compound (boson/fermion) depends on the helicity h of the massless components (integer/half-integer). Coherent states (CS) of particle-hole pairs ("excitons") are also explicitly constructed as the exponential action of exciton (non-canonical) creation operators on the ground state of unpaired particles. These CS are labeled by points Z (2×2 complex matrices) on the Cartan-Bergman domain D₄ = U(2,2)/U(2)², and constitute a generalized (matrix) version of Perelomov U(1,1) coherent states labeled by points z on the unit disk D₁ = U(1,1)/U(1)². First, we follow a geometric approach to the construction of CS, orthonormal bases, U(2,2) generators and their matrix elements and symbols in the reproducing kernel Hilbert space H_λ(D₄) of analytic square-integrable holomorphic functions on D₄, which carries a unitary irreducible representation of U(2,2) with index λ ∈ ℕ (the conformal or scale dimension). Then we introduce a many-body representation of the previous construction through an oscillator realization of the U(2,2) Lie algebra generators in terms of eight boson operators with constraints. This particle picture allows for a physical interpretation of our abstract mathematical construction in the many-body jargon. In particular, the index λ is related to the number 2(λ − 2) of unpaired quanta and to the helicity h = (λ − 2)/2 of each massless particle forming the massive compound.
L1-norm locally linear representation regularization multi-source adaptation learning.
Tao, Jianwen; Wen, Shiting; Hu, Wenjun
2015-09-01
In most supervised domain adaptation learning (DAL) tasks, one has access to only a small number of labeled examples from the target domain. Therefore the success of supervised DAL in this "small sample" regime requires the effective utilization of large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, considering robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets such as face, visual video and object. Copyright © 2015 Elsevier Ltd. All rights reserved.
Quantitative comparison of noise texture across CT scanners from different manufacturers.
Solomon, Justin B; Christianson, Olav; Samei, Ehsan
2012-10-01
To quantitatively compare noise texture across computed tomography (CT) scanners from different manufacturers using the noise power spectrum (NPS). The American College of Radiology CT accreditation phantom (Gammex 464, Gammex, Inc., Middleton, WI) was imaged on two scanners: Discovery CT 750HD (GE Healthcare, Waukesha, WI), and SOMATOM Definition Flash (Siemens Healthcare, Germany), using a consistent acquisition protocol (120 kVp, 0.625∕0.6 mm slice thickness, 250 mAs, and 22 cm field of view). Images were reconstructed using filtered backprojection and a wide selection of reconstruction kernels. For each image set, the 2D NPS were estimated from the uniform section of the phantom. The 2D spectra were normalized by their integral value, radially averaged, and filtered by the human visual response function. A systematic kernel-by-kernel comparison across manufacturers was performed by computing the root mean square difference (RMSD) and the peak frequency difference (PFD) between the NPS from different kernels. GE and Siemens kernels were compared and kernel pairs that minimized the RMSD and |PFD| were identified. The RMSD (|PFD|) values between the NPS of GE and Siemens kernels varied from 0.01 mm(2) (0.002 mm(-1)) to 0.29 mm(2) (0.74 mm(-1)). The GE kernels "Soft," "Standard," "Chest," and "Lung" closely matched the Siemens kernels "B35f," "B43f," "B41f," and "B80f" (RMSD < 0.05 mm(2), |PFD| < 0.02 mm(-1), respectively). The GE "Bone," "Bone+," and "Edge" kernels all matched most closely with Siemens "B75f" kernel but with sizeable RMSD and |PFD| values up to 0.18 mm(2) and 0.41 mm(-1), respectively. These sizeable RMSD and |PFD| values corresponded to visually perceivable differences in the noise texture of the images. It is possible to use the NPS to quantitatively compare noise texture across CT systems. The degree to which similar texture across scanners could be achieved varies and is limited by the kernels available on each scanner.
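A small sketch of the two comparison metrics named above, for radially averaged NPS curves sampled on a shared frequency axis (the array names are assumptions):

```python
import numpy as np

def rmsd(nps_a, nps_b):
    """Root mean square difference between two NPS curves."""
    return np.sqrt(np.mean((nps_a - nps_b) ** 2))

def pfd(f, nps_a, nps_b):
    """Peak frequency difference: separation of the two NPS maxima."""
    return f[np.argmax(nps_a)] - f[np.argmax(nps_b)]
```

Small RMSD and |PFD| values between two kernels' NPS curves correspond to visually similar noise texture, which is how the kernel pairs above were matched.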
International Roughness Index (IRI) measurement using Hilbert-Huang transform
NASA Astrophysics Data System (ADS)
Zhang, Wenjin; Wang, Ming L.
2018-03-01
The International Roughness Index (IRI) is an important metric for measuring the condition of roadways. This index is usually used to justify maintenance priority and scheduling for roadways. Various inspection methods and algorithms are used to assess this index through the use of road profiles. This study proposes to calculate IRI values using the Hilbert-Huang Transform (HHT) algorithm. In particular, road profile data are provided using surface radar attached to a vehicle driving at highway speed. The Hilbert-Huang transform is used in this study because of its superior properties for nonstationary and nonlinear data. Empirical mode decomposition (EMD) processes the raw data into a set of intrinsic mode functions (IMFs) representing the various dominating frequencies. These frequencies represent noise from the body of the vehicle, the sensor location, the excitation induced by the natural frequency of the vehicle, etc. The IRI calculation can be achieved by eliminating noise that is not associated with the road profile, including the vehicle inertia effect. The resulting IRI values compare favorably to the field IRI values: the filtered IMFs capture the main characteristics of the road profile while eliminating noise from the vehicle and the vehicle inertia effect. Therefore, HHT is an effective method for road profile analysis and for IRI measurement. Furthermore, the application of the HHT method has the potential to eliminate the use of accelerometers attached to the vehicle as part of the displacement measurement used to offset the inertia effect.
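A sketch of the EMD preprocessing step only (the IRI itself is computed afterwards with the standard quarter-car filter, omitted here; the PyEMD package and the choice of which IMFs to drop are assumptions, not the authors' code):

```python
import numpy as np
from PyEMD import EMD  # assumed dependency: pip install EMD-signal

def denoised_profile(raw_profile, drop_first=2):
    """Rebuild the road profile from IMFs after dropping the highest-
    frequency modes, which are presumed to carry vehicle/sensor noise."""
    imfs = EMD()(raw_profile)   # rows: IMFs from high to low frequency
    return np.sum(imfs[drop_first:], axis=0)
```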
LORETA EEG phase reset of the default mode network.
Thatcher, Robert W; North, Duane M; Biver, Carl J
2014-01-01
The purpose of this study was to explore phase reset of 3-dimensional current sources in Brodmann areas located in the human default mode network (DMN) using Low Resolution Electromagnetic Tomography (LORETA) of the human electroencephalogram (EEG). The EEG was recorded from 19 scalp locations in 70 healthy normal subjects ranging in age from 13 to 20 years. LORETA current sources were computed time point by time point for 14 Brodmann areas comprising the DMN in the delta frequency band. The Hilbert transform of the LORETA time series was used to compute the instantaneous phase differences between all pairs of Brodmann areas. Phase shift and lock durations were calculated based on the 1st and 2nd derivatives of the time series of phase differences. Phase shift duration exhibited three discrete modes at approximately (1) 25 ms, (2) 50 ms, and (3) 65 ms. Phase lock durations were present primarily at (1) 300-350 ms and (2) 350-450 ms. Phase shift and lock durations were inversely related and exhibited an exponential change with distance between Brodmann areas. The results are explained by the local neural packing density of network hubs and an exponential decrease in connections with distance from a hub. The results are consistent with a discrete temporal model of brain function where anatomical hubs behave like a "shutter" that opens and closes at specific durations as nodes of a network, giving rise to temporarily phase-locked clusters of neurons for specific durations.
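An illustrative sketch of the phase computation described above (not the study's code; the threshold and the run-length convention are assumptions): the Hilbert transform yields instantaneous phases, and the derivative of the pairwise phase difference separates shift periods from lock periods.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase_diff(x, y):
    """Instantaneous phase difference (radians) between two time series."""
    return np.unwrap(np.angle(hilbert(x))) - np.unwrap(np.angle(hilbert(y)))

def shift_lock_durations(phase_diff, fs, eps=1e-2):
    """Segment time into phase-shift vs phase-lock periods from the 1st
    derivative of the phase difference (the 2nd derivative marks onsets)."""
    d1 = np.gradient(phase_diff) * fs           # rad/s
    shifting = np.abs(d1) > eps                 # True while phase is shifting
    edges = np.flatnonzero(np.diff(shifting.astype(int))) + 1
    runs = np.split(shifting, edges)            # run-length encode
    return [(bool(r[0]), len(r) / fs) for r in runs]  # (is_shift, seconds)
```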
Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.
Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K
2016-03-01
Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically.
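Since KENReg amounts to an elastic net fit over a kernelized dictionary, one atom k(x_i, ·) per training point with no Mercer requirement, a minimal sketch is possible with standard tools; the Gaussian dictionary and all hyperparameters here are illustrative assumptions, not the letter's settings.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def kenreg_fit(X, y, gamma=1.0, alpha=0.1, l1_ratio=0.5):
    """Elastic net over a kernelized dictionary: column j of K is the atom
    k(., x_j) evaluated at the training inputs."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)                        # Gaussian dictionary
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio).fit(K, y)
    return model          # many entries of model.coef_ are exactly zero

# prediction at new points: model.predict(K_new), where
# K_new[i, j] = exp(-gamma * ||x_new_i - X_j||^2)
```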
Quantum Computation of Fluid Dynamics
1998-02-16
state of the quantum computer's "memory". With N qubits, the quantum state |Ψ⟩ resides in an exponentially large Hilbert space with 2^N dimensions. A new…size of the Hilbert space in which the entanglement occurs. And to make matters worse, even if a quantum computer was constructed with a large number of…
- …number of qubits
- 2^N is the size of the full Hilbert space
- 2^B is the size of the on-site submanifold
- B is the size of the…
Heat kernel for the elliptic system of linear elasticity with boundary conditions
NASA Astrophysics Data System (ADS)
Taylor, Justin; Kim, Seick; Brown, Russell
2014-10-01
We consider the elliptic system of linear elasticity with bounded measurable coefficients in a domain where the second Korn inequality holds. We construct the heat kernel of the system subject to Dirichlet, Neumann, or mixed boundary conditions under the assumption that weak solutions of the elliptic system are Hölder continuous in the interior. Moreover, we show that if weak solutions of the mixed problem are Hölder continuous up to the boundary, then the corresponding heat kernel has a Gaussian bound. In particular, if the domain is a two-dimensional Lipschitz domain satisfying a corkscrew or non-tangential accessibility condition on the set where we specify the Dirichlet boundary condition, then we show that the heat kernel has a Gaussian bound. As an application, we construct the Green's function for the elliptic mixed problem in such a domain.
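For reference, a Gaussian bound for a heat kernel conventionally takes the following standard form; this is the generic shape of such estimates, with unspecified constants, not the paper's precise statement.

```latex
% Generic Gaussian upper bound for a heat kernel (constants unspecified)
\[
  |K(t,x,y)| \;\le\; \frac{C}{t^{d/2}}\,
  \exp\!\left(-\kappa\,\frac{|x-y|^{2}}{t}\right),
  \qquad t>0,\;\; x,y\in\Omega,
\]
% with positive constants C and kappa depending on the coefficients
% and on the domain
```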
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Shiri, Ron S.; Vootukuru, Meg; Coletti, Alessandro
2015-01-01
Norden E. Huang et al. proposed and published the Hilbert-Huang Transform (HHT) concept in 1996 and 1998, respectively. The HHT is a novel method for adaptive spectral analysis of non-linear and non-stationary signals. The HHT comprises two components: the Huang Empirical Mode Decomposition (EMD), resulting in an adaptive data-derived basis of Intrinsic Mode Functions (IMFs), and the Hilbert Spectral Analysis (HSA1), based on the Hilbert Transform for 1-dimension (1D), applied to the EMD IMF outcome. Although the paper describes the HHT concept in great depth, it does not contain all the methodology needed to implement the HHT computer code. In 2004, Semion Kizhner and Karin Blank implemented the reference digital HHT real-time data processing system for 1D (HHT-DPS Version 1.4). The case for 2-dimensions (2D) (HHT2) proved to be difficult due to the computational complexity of EMD for 2D (EMD2) and the absence of a suitable Hilbert Transform for 2D spectral analysis (HSA2). The real-time EMD2 and HSA2 comprise the real-time HHT2. Kizhner completed the real-time EMD2 and HSA2 reference digital implementations in 2013 and 2014, respectively. Still, the HHT2 outcome synthesis remains an active research area. This paper presents the initial concepts and preliminary results of HHT2-based synthesis and its application to the processing of signals contaminated by Radio-Frequency Interference (RFI), as well as to optical systems' fringe detection and mitigation at the design stage. The Soil Moisture Active Passive (SMAP) mission carries a radiometer instrument that measures Earth soil moisture at the L1 frequency (1.4 GHz polarimetric - H, V, 3rd and 4th Stokes parameters). There is abundant RFI at L1, and because soil moisture is a strategic parameter, it is important to be able to recover the RFI-contaminated measurement samples (15% of telemetry). The state of the art only allows RFI detection and removes RFI-contaminated measurements. The HHT-based analysis and synthesis facilitates recovery of measurements contaminated by all kinds of RFI, including jamming [7-8]. Fringes are inherent in optical systems, and multi-layer, complex-contour, expensive coatings are employed to remove the unwanted fringes. HHT2-based analysis allows test image decomposition to analyze and detect fringes, and HHT2-based synthesis of the useful image.
Hériché, Jean-Karim; Lees, Jon G.; Morilla, Ian; Walter, Thomas; Petrova, Boryana; Roberti, M. Julia; Hossain, M. Julius; Adler, Priit; Fernández, José M.; Krallinger, Martin; Haering, Christian H.; Vilo, Jaak; Valencia, Alfonso; Ranea, Juan A.; Orengo, Christine; Ellenberg, Jan
2014-01-01
The advent of genome-wide RNA interference (RNAi)–based screens puts us in the position to identify genes for all functions human cells carry out. However, for many functions, assay complexity and cost make genome-scale knockdown experiments impossible. Methods to predict genes required for cell functions are therefore needed to focus RNAi screens from the whole genome on the most likely candidates. Although different bioinformatics tools for gene function prediction exist, they lack experimental validation and are therefore rarely used by experimentalists. To address this, we developed an effective computational gene selection strategy that represents public data about genes as graphs and then analyzes these graphs using kernels on graph nodes to predict functional relationships. To demonstrate its performance, we predicted human genes required for a poorly understood cellular function—mitotic chromosome condensation—and experimentally validated the top 100 candidates with a focused RNAi screen by automated microscopy. Quantitative analysis of the images demonstrated that the candidates were indeed strongly enriched in condensation genes, including the discovery of several new factors. By combining bioinformatics prediction with experimental validation, our study shows that kernels on graph nodes are powerful tools to integrate public biological data and predict genes involved in cellular functions of interest. PMID:24943848
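One standard instance of a kernel on graph nodes is the heat-diffusion kernel K = exp(-βL), where L is the graph Laplacian; the sketch below is an illustration of the general idea rather than the paper's exact kernel choice, and ranks candidate genes by kernel affinity to known positives.

```python
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(adj, beta=1.0):
    """Heat-diffusion kernel on graph nodes: K = expm(-beta * L).
    adj is a symmetric adjacency matrix; K[i, j] scores closeness of
    nodes i and j under diffusion on the graph."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return expm(-beta * lap)

def rank_candidates(adj, seed_idx, beta=1.0):
    """Rank all nodes by summed kernel affinity to known positive genes
    (indices in seed_idx); returns node indices, best first."""
    K = diffusion_kernel(adj, beta)
    return np.argsort(-K[:, seed_idx].sum(axis=1))
```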
Time Asymmetric Quantum Mechanics
NASA Astrophysics Data System (ADS)
Bohm, Arno R.; Gadella, Manuel; Kielanowski, Piotr
2011-09-01
The meaning of time asymmetry in quantum physics is discussed. On the basis of a mathematical theorem, the Stone-von Neumann theorem, the solutions of the dynamical equations, the Schrödinger equation (1) for states or the Heisenberg equation (6a) for observables, are given by a unitary group. Dirac kets require the concept of a RHS (rigged Hilbert space) of Schwartz functions; for this kind of RHS a mathematical theorem also leads to time-symmetric group evolution. Scattering theory suggests distinguishing mathematically between states (defined by a preparation apparatus) and observables (defined by a registration apparatus (detector)). If one requires that scattering resonances of width Γ and exponentially decaying states of lifetime τ=ħ/Γ should be the same physical entities (for which there is sufficient evidence), one is led to a pair of RHS's of Hardy functions and, connected with it, to a semigroup time evolution t0≤t<∞, with the puzzling result that there is a quantum mechanical beginning of time, just like the big bang time for the universe, when it was a quantum system. The decay of quasi-stable particles is used to illustrate this quantum mechanical time asymmetry. From the analysis of these processes, we show that the properties of rigged Hilbert spaces of Hardy functions are suitable for a formulation of time asymmetry in quantum mechanics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Huan; Baker, Nathan A.; Li, Xiantao
We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grain variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
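In the simplest rational approximation, a single-pole kernel K(t) = c·exp(-t/τ), the memoryless embedding needs one auxiliary variable driven by white noise whose amplitude is fixed by the fluctuation-dissipation theorem. The Euler-Maruyama sketch below is a hedged illustration of that embedding, not the paper's parameterization; all parameter values are placeholders.

```python
import numpy as np

def gle_exponential(steps, dt, c=1.0, tau=1.0, kT=1.0, m=1.0, rng=None):
    """Markovian embedding of a GLE with kernel K(t) = c*exp(-t/tau):
        m dv = -z dt
          dz = (-z/tau + c*v) dt + sqrt(2*kT*c/tau) dW
    The auxiliary variable z carries the memory; the noise amplitude
    sqrt(2*kT*c/tau) enforces the fluctuation-dissipation theorem."""
    rng = rng or np.random.default_rng(0)
    v = np.zeros(steps)
    z = 0.0
    for i in range(1, steps):
        z += (-z / tau + c * v[i - 1]) * dt \
             + np.sqrt(2.0 * kT * c / tau * dt) * rng.standard_normal()
        v[i] = v[i - 1] - z * dt / m
    return v
```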
A GSA-SVM Hybrid System for Classification of Binary Problems
NASA Astrophysics Data System (ADS)
Sarafrazi, Soroor; Nezamabadi-pour, Hossein; Barahman, Mojgan
2011-06-01
This paper hybridizes the gravitational search algorithm (GSA) with the support vector machine (SVM) to create a novel GSA-SVM hybrid system that improves classification accuracy in binary problems. GSA is an optimization heuristic tool used to optimize the value of the SVM kernel parameter (in this paper, the radial basis function (RBF) is chosen as the kernel function). The experimental results show that this new approach can achieve high classification accuracy and is comparable to or better than the particle swarm optimization (PSO)-SVM and genetic algorithm (GA)-SVM, which are two hybrid systems for classification.
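The optimizer is interchangeable in this architecture: any population-based search can tune the kernel parameter. The sketch below tunes (C, γ) of an RBF SVM by cross-validated accuracy, with scipy's differential evolution standing in for GSA; the bounds and iteration counts are arbitrary placeholders.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def tune_rbf_svm(X, y):
    """Maximize 5-fold CV accuracy over (log C, log gamma) of an RBF SVM;
    differential_evolution is a stand-in for the gravitational search
    algorithm used in the paper."""
    def neg_cv_acc(theta):
        C, gamma = np.exp(theta)
        clf = SVC(kernel="rbf", C=C, gamma=gamma)
        return -cross_val_score(clf, X, y, cv=5).mean()
    res = differential_evolution(neg_cv_acc, bounds=[(-3, 6), (-6, 3)],
                                 maxiter=20, seed=0)
    return np.exp(res.x)          # (C*, gamma*)
```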
A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia
NASA Astrophysics Data System (ADS)
Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.
2017-08-01
In this study, a wavelet support vector machine model (WSVM) is proposed and applied to monthly Singapore tourist time series prediction. The WSVM model is a combination of wavelet analysis and the support vector machine (SVM). The study has two parts: in the first part we compare kernel functions, and in the second part we compare the developed model with the single SVM model. The results show that the linear kernel function performs better than the RBF kernel, while the WSVM outperforms the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.
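A common way to realize such a wavelet-SVM combination is to denoise the series with a discrete wavelet transform and fit an SVR on lagged values; the sketch below, using pywt and scikit-learn, is one plausible reading rather than the paper's exact pipeline, and the wavelet, decomposition level, and lag length are assumptions.

```python
import numpy as np
import pywt
from sklearn.svm import SVR

def wsvm_forecast(series, lags=12, wavelet="db4", level=2):
    """Denoise a monthly series with a DWT (zero the finest detail level),
    then fit a linear-kernel SVR on lagged values; returns a one-step
    forecast for the next month."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    coeffs[-1] = np.zeros_like(coeffs[-1])          # drop finest details
    smooth = pywt.waverec(coeffs, wavelet)[: len(series)]
    X = np.array([smooth[i - lags:i] for i in range(lags, len(smooth))])
    y = smooth[lags:]
    model = SVR(kernel="linear").fit(X, y)
    return model.predict(smooth[-lags:].reshape(1, -1))
```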
Cannabinoid Modulation of Functional Connectivity within Regions Processing Attentional Salience
Bhattacharyya, Sagnik; Falkenberg, Irina; Martin-Santos, Rocio; Atakan, Zerrin; Crippa, Jose A; Giampietro, Vincent; Brammer, Mick; McGuire, Philip
2015-01-01
There is now considerable evidence to support the hypothesis that psychotic symptoms are the result of abnormal salience attribution, and that the attribution of salience is largely mediated through the prefrontal cortex, the striatum, and the hippocampus. Although these areas show differential activation under the influence of delta-9-tetrahydrocannabinol (delta-9-THC) and cannabidiol (CBD), the two major derivatives of cannabis sativa, little is known about the effects of these cannabinoids on the functional connectivity between these regions. We investigated this in healthy occasional cannabis users by employing event-related functional magnetic resonance imaging (fMRI) following oral administration of delta-9-THC, CBD, or a placebo capsule. Employing a seed cluster-based functional connectivity analysis that involved using the average time series from each seed cluster for a whole-brain correlational analysis, we investigated the effect of drug condition on functional connectivity between the seed clusters and the rest of the brain during an oddball salience processing task. Relative to the placebo condition, delta-9-THC and CBD had opposite effects on the functional connectivity between the dorsal striatum, the prefrontal cortex, and the hippocampus. Delta-9-THC reduced fronto-striatal connectivity, which was related to its effect on task performance, whereas this connection was enhanced by CBD. Conversely, mediotemporal-prefrontal connectivity was enhanced by delta-9-THC and reduced by CBD. Our results suggest that the functional integration of brain regions involved in salience processing is differentially modulated by single doses of delta-9-THC and CBD and that this relates to the processing of salient stimuli. PMID:25249057
Fixed and Data Adaptive Kernels in Cohen’s Class of Time-Frequency Distributions
1992-09-01
translated into its associated analytic signal by using the techniques discussed in Chapter Four. 1. Wigner-Ville Distribution function: PS = wvd(data, winlen, step, begin, theend); wvd.m returns the Wigner-Ville time-frequency distribution for the input data…
USDA-ARS?s Scientific Manuscript database
The dek18 mutant of maize has decreased auxin content in kernels. Molecular and functional characterization of this mutant line offers the possibility to better understand auxin biology in maize seed development. Seeds of the dek18 mutants are smaller compared to wild type seeds and the vegetative d...
Predicting activity approach based on new atoms similarity kernel function.
Abu El-Atta, Ahmed H; Moussa, M I; Hassanien, Aboul Ella
2015-07-01
Drug design is a high-cost and long-term process. To reduce the time and cost of drug discovery, new techniques are needed. The field of chemoinformatics applies informational and computer science techniques, such as machine learning and graph theory, to discover the properties of chemical compounds, such as toxicity or biological activity, by analyzing their molecular structure (molecular graph). There is therefore an increasing need for algorithms that analyze and classify graph data to predict the activity of molecules. Kernel methods provide a powerful framework that combines machine learning with graph theory techniques, and they have led to impressive performance in several chemoinformatics problems such as biological activity prediction. This paper presents a new approach based on kernel functions to solve the activity prediction problem for chemical compounds. First, we encode each atom based on its neighbors; we then use these codes to find relationships among the atoms, and use the relationships between different atoms to compute a similarity between chemical compounds. The proposed approach was compared with many other classification methods, and the results show accuracy competitive with these methods. Copyright © 2015 Elsevier Inc. All rights reserved.
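A toy version of the idea, encoding each atom by its element plus the sorted elements of its neighbors and then comparing compounds by the overlap of their atom-code multisets, can be sketched as follows; the encoding and the Tanimoto-style similarity are illustrative stand-ins for the paper's kernel, and the molecules shown are hypothetical inputs.

```python
from collections import Counter

def atom_codes(atoms, bonds):
    """atoms: list of element symbols; bonds: list of (i, j) index pairs.
    Each atom's code is its element plus the sorted elements of its
    neighbors, a crude stand-in for the paper's neighbor encoding."""
    neigh = {i: [] for i in range(len(atoms))}
    for i, j in bonds:
        neigh[i].append(atoms[j])
        neigh[j].append(atoms[i])
    return Counter(atoms[i] + "(" + "".join(sorted(neigh[i])) + ")"
                   for i in range(len(atoms)))

def similarity(mol_a, mol_b):
    """Tanimoto-style overlap between two atom-code multisets."""
    ca, cb = atom_codes(*mol_a), atom_codes(*mol_b)
    inter = sum((ca & cb).values())
    union = sum((ca | cb).values())
    return inter / union if union else 0.0

# ethanol vs. dimethyl ether (heavy atoms only), as hypothetical inputs:
ethanol = (["C", "C", "O"], [(0, 1), (1, 2)])
ether   = (["C", "O", "C"], [(0, 1), (1, 2)])
print(similarity(ethanol, ether))
```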
Fission product release and survivability of UN-kernel LWR TRISO fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
T. M. Besmann; M. K. Ferber; H.-T. Lin
2014-05-01
A thermomechanical assessment of the LWR application of TRISO fuel with UN kernels was performed. Fission product release under operational and transient temperature conditions was determined by extrapolation from fission product recoil calculations and limited data from irradiated UN pellets. Both fission recoil and diffusive release were considered, and internal particle pressures were computed for both 650 and 800 µm diameter kernels as a function of buffer layer thickness. These pressures were used in conjunction with a finite element program to compute the radial and tangential stresses generated within a TRISO particle undergoing burnup. Creep and swelling of the inner and outer pyrolytic carbon layers were included in the analyses. A measure of reliability of the TRISO particle was obtained by computing the probability of survival of the SiC barrier layer and the maximum tensile stress generated in the pyrolytic carbon layers from internal pressure and the thermomechanics of the layers. These reliability estimates were obtained as functions of the kernel diameter, buffer layer thickness, and pyrolytic carbon layer thickness. The value of the probability of survival at the end of irradiation was inversely proportional to the maximum pressure.
Ruan, Peiying; Hayashida, Morihiro; Maruyama, Osamu; Akutsu, Tatsuya
2013-01-01
Since many proteins express their functional activity by interacting with other proteins and forming protein complexes, it is very useful to identify sets of proteins that form complexes. For that purpose, many methods for predicting protein complexes from protein-protein interactions have been developed, such as MCL, MCODE, RNSC, PCP, RRW, and NWE. These methods have dealt only with complexes of size greater than three because they are often based on some density measure of subgraphs. However, heterodimeric protein complexes, which consist of two distinct proteins, account for a large fraction of known complexes according to several comprehensive databases. In this paper, we propose several feature space mappings from protein-protein interaction data, in which each interaction is weighted based on reliability. Furthermore, we make use of prior knowledge on protein domains to develop feature space mappings, a domain composition kernel, and its combination kernel with our proposed features. We perform ten-fold cross-validation computational experiments. These results suggest that our proposed kernel considerably outperforms the naive Bayes-based method, which is the best existing method for predicting heterodimeric protein complexes. PMID:23776458
The Swift-Hohenberg equation with a nonlocal nonlinearity
NASA Astrophysics Data System (ADS)
Morgan, David; Dawes, Jonathan H. P.
2014-03-01
It is well known that aspects of the formation of localised states in a one-dimensional Swift-Hohenberg equation can be described by Ginzburg-Landau-type envelope equations. This paper extends these multiple scales analyses to cases where an additional nonlinear integral term, in the form of a convolution, is present. The presence of a kernel function introduces a new lengthscale into the problem, and this results in additional complexity in both the derivation of envelope equations and in the bifurcation structure. When the kernel is short-range, weakly nonlinear analysis results in envelope equations of standard type but whose coefficients are modified in complicated ways by the nonlinear nonlocal term. Nevertheless, these computations can be formulated quite generally in terms of properties of the Fourier transform of the kernel function. When the lengthscale associated with the kernel is longer, our method leads naturally to the derivation of two different, novel, envelope equations that describe aspects of the dynamics in these new regimes. The first of these contains additional bifurcations, and unexpected loops in the bifurcation diagram. The second of these captures the stretched-out nature of the homoclinic snaking curves that arises due to the nonlocal term.
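For concreteness, one representative member of the equation class studied has the form below; the placement of the convolution term and the quadratic-cubic nonlinearity are assumptions about the general shape, not the paper's exact equation.

```latex
% One representative nonlocal Swift-Hohenberg form (assumed shape):
\[
  \partial_t u \;=\; r\,u \;-\; \bigl(1+\partial_x^{2}\bigr)^{2} u
  \;+\; s\,u^{2} \;-\; u^{3} \;+\; \gamma\, u\,(\phi * u),
  \qquad
  (\phi * u)(x) \;=\; \int_{\mathbb{R}} \phi(x-y)\,u(y)\,\mathrm{d}y,
\]
% where the kernel phi introduces the additional lengthscale, and the
% envelope-equation coefficients depend on the Fourier transform of phi
```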
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
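The core of QKLMS fits in a few lines: if a new input lies within the quantization radius ε of an existing center, its scaled error updates that center's coefficient; otherwise a new center is allocated. A sketch with a Gaussian kernel follows; the step size, radius, and bandwidth values are placeholders.

```python
import numpy as np

def qklms(X, d, eta=0.2, eps=0.5, gamma=1.0):
    """Quantized kernel LMS with a Gaussian kernel. The network (centers,
    alphas) grows only when a new input is farther than eps from every
    existing center; otherwise the nearest coefficient absorbs the update."""
    centers, alphas, errs = [X[0]], [eta * d[0]], []
    for x, dn in zip(X[1:], d[1:]):
        C = np.array(centers)
        d2 = np.sum((C - x) ** 2, axis=1)       # squared distances
        k = np.exp(-gamma * d2)
        e = dn - np.dot(alphas, k)              # prediction error
        j = int(np.argmin(d2))
        if d2[j] <= eps ** 2:
            alphas[j] += eta * e                # quantize: reuse center
        else:
            centers.append(x)
            alphas.append(eta * e)
        errs.append(e)
    return centers, alphas, errs
```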
Grey Language Hesitant Fuzzy Group Decision Making Method Based on Kernel and Grey Scale.
Li, Qingsheng; Diao, Yuzhu; Gong, Zaiwu; Hu, Aqin
2018-03-02
Based on grey language multi-attribute group decision making, a kernel and grey scale scoring function is put forward according to the definition of grey language and the meaning of the kernel and grey scale. The function introduces grey scale into the decision-making method to avoid information distortion. This method is applied to the grey language hesitant fuzzy group decision making, and the grey correlation degree is used to sort the schemes. The effectiveness and practicability of the decision-making method are further verified by the industry chain sustainable development ability evaluation example of a circular economy. Moreover, its simplicity and feasibility are verified by comparing it with the traditional grey language decision-making method and the grey language hesitant fuzzy weighted arithmetic averaging (GLHWAA) operator integration method after determining the index weight based on the grey correlation.
Ben Salem, Samira; Bacha, Khmais; Chaari, Abdelkader
2012-09-01
In this work we suggest an original fault signature based on an improved combination of the Hilbert and Park transforms. From this combination we create two fault signatures: the Hilbert modulus current space vector (HMCSV) and the Hilbert phase current space vector (HPCSV). These two fault signatures are subsequently analysed using the classical fast Fourier transform (FFT). The effects of mechanical faults on the HMCSV and HPCSV spectra are described, and the related frequencies are determined. The magnitudes of spectral components relative to the studied faults (air-gap eccentricity and outer raceway ball bearing defect) are extracted in order to develop the input vector necessary for learning and testing the support vector machine, with the aim of automatically classifying the various states of the induction motor. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki
2016-03-01
The main purpose of this study was to present the results of beam modeling and show how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for the spot scanning technique. The accuracy of the calculations is important for treatment planning software (TPS) because the energy, spot position, and absolute dose have to be determined by the TPS for the spot scanning technique. The dose distribution was calculated by convolving the in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam, consisting of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region is important for the spot scanning technique because the dose distribution is formed by accumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the lateral kernel model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table, from which the parameters for each energy and depth in water were acquired when incorporated into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature; these were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations, and examined the difference between the double and triple Gaussian kernel models. The authors found that the difference between the two studied kernel models appeared at mid-depths, and that the accuracy of the double Gaussian model deteriorated at the low-dose bump that appears at mid-depths. When the authors employed the double Gaussian kernel model, the accuracy of the calculated absolute dose at the center of the SOBP varied with the irradiation conditions, and the maximum difference was 3.4%. In contrast, the results obtained from calculations with the triple Gaussian kernel model indicated good agreement with the measurements, within ±1.1%, regardless of the irradiation conditions. The difference between the results obtained with the two types of studied kernel models was distinct in the high-energy region. The accuracy of calculations with the double Gaussian kernel model varied with the field size and SOBP width because the prediction accuracy of the double Gaussian model was insufficient at the low-dose bump. The evaluation was only qualitative under limited volumetric irradiation conditions; further accumulation of measured data would be needed to quantitatively comprehend the influence of the double and triple Gaussian kernel models on the accuracy of dose calculations.
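The kernel-parameter derivation described above, fitting a sum of Gaussians to a simulated lateral profile at a given depth, can be sketched with a standard least-squares fit; the number of Gaussians, initial guesses, and units below are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.optimize import curve_fit

def sum_of_gaussians(r, *p):
    """Radial sum of Gaussians; p interleaves (weight, sigma) pairs."""
    return sum(w * np.exp(-r**2 / (2.0 * s**2))
               for w, s in zip(p[0::2], p[1::2]))

def fit_lateral_kernel(r, dose, n_gauss=3):
    """Fit an n-Gaussian lateral kernel model to a simulated (MC) lateral
    dose profile at one depth; repeating per depth builds the look-up
    table. Initial widths are spread over a decade so that the broad,
    low-amplitude component can capture the low-dose bump."""
    p0 = []
    for k in range(n_gauss):
        p0 += [dose.max() / n_gauss, 0.3 * 3.0**k]   # weight, sigma (mm)
    popt, _ = curve_fit(sum_of_gaussians, r, dose, p0=p0, maxfev=20000)
    return popt
```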
Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods
NASA Astrophysics Data System (ADS)
Liu, Qinya; Tromp, Jeroen
2008-07-01
We determine adjoint equations and Fréchet kernels for global seismic wave propagation based upon a Lagrange multiplier method. We start from the equations of motion for a rotating, self-gravitating earth model initially in hydrostatic equilibrium, and derive the corresponding adjoint equations that involve motions on an earth model that rotates in the opposite direction. Variations in the misfit function χ may then be expressed as δχ = ∫_V K_m δln m d³x + ∫_Σ K_d δln d d²x + ∫_{Σ_FS} K_∇d ⋅ ∇_Σ δln d d²x, where δln m = δm/m denotes relative model perturbations in the volume V, δln d denotes relative topographic variations on solid-solid or fluid-solid boundaries Σ, and ∇_Σ δln d denotes surface gradients in relative topographic variations on fluid-solid boundaries Σ_FS. The 3-D Fréchet kernel K_m determines the sensitivity to model perturbations δln m, and the 2-D kernels K_d and K_∇d determine the sensitivity to topographic variations δln d. We demonstrate also how anelasticity may be incorporated within the framework of adjoint methods. Finite-frequency sensitivity kernels are calculated by simultaneously computing the adjoint wavefield forward in time and reconstructing the regular wavefield backward in time. Both the forward and adjoint simulations are based upon a spectral-element method. We apply the adjoint technique to generate finite-frequency traveltime kernels for global seismic phases (P, Pdiff, PKP, S, SKS, depth phases, surface-reflected phases, surface waves, etc.) in both 1-D and 3-D earth models. For 1-D models these adjoint-generated kernels generally agree well with results obtained from ray-based methods. However, adjoint methods do not have the same theoretical limitations as ray-based methods, and can produce sensitivity kernels for any given phase in any 3-D earth model. The Fréchet kernels presented in this paper illustrate the sensitivity of seismic observations to structural parameters and topography on internal discontinuities. These kernels form the basis of future 3-D tomographic inversions.
Jabbar, Ahmed Najah
2018-04-13
This letter suggests two new types of asymmetrical higher-order kernels (HOK) that are generated using the orthogonal polynomials Laguerre (positive or right skew) and Bessel (negative or left skew). These skewed HOK are implemented in the blind source separation/independent component analysis (BSS/ICA) algorithm. The tests for these proposed HOK are accomplished using three scenarios to simulate a real environment using actual sound sources, an environment of mixtures of multimodal fast-changing probability density function (pdf) sources that represent a challenge to the symmetrical HOK, and an environment of an adverse case (near gaussian). The separation is performed by minimizing the mutual information (MI) among the mixed sources. The performance of the skewed kernels is compared to the performance of the standard kernels such as Epanechnikov, bisquare, trisquare, and gaussian and the performance of the symmetrical HOK generated using the polynomials Chebyshev1, Chebyshev2, Gegenbauer, Jacobi, and Legendre to the tenth order. The gaussian HOK are generated using the Hermite polynomial and the Wand and Schucany procedure. The comparison among the 96 kernels is based on the average intersymbol interference ratio (AISIR) and the time needed to complete the separation. In terms of AISIR, the skewed kernels' performance is better than that of the standard kernels and rivals most of the symmetrical kernels' performance. The importance of these new skewed HOK is manifested in the environment of the multimodal pdf mixtures. In such an environment, the skewed HOK come in first place compared with the symmetrical HOK. These new families can substitute for symmetrical HOKs in such applications.
NASA Astrophysics Data System (ADS)
Tehrany, Mahyat Shafapour; Pradhan, Biswajeet; Jebur, Mustafa Neamah
2014-05-01
Flooding is one of the most devastating natural disasters and occurs frequently in Terengganu, Malaysia. Recently, ensemble-based techniques have become extremely popular in flood modeling. In this paper, the weights-of-evidence (WoE) model was used first to assess the impact of the classes of each conditioning factor on flooding through bivariate statistical analysis (BSA). These factors were then reclassified using the acquired weights and entered into the support vector machine (SVM) model to evaluate the correlation between flood occurrence and each conditioning factor. Through this integration, the weak point of WoE can be overcome and the performance of the SVM enhanced. The spatial database included flood inventory, slope, stream power index (SPI), topographic wetness index (TWI), altitude, curvature, distance from the river, geology, rainfall, land use/cover (LULC), and soil type. Four SVM kernel types (linear (LN), polynomial (PL), radial basis function (RBF), and sigmoid (SIG)) were used to investigate the performance of each kernel type. The efficiency of the new ensemble WoE-SVM method was tested using the area under the curve (AUC), which measured the prediction and success rates. The validation results proved the strength and efficiency of the ensemble method over the individual methods. The best results were obtained with the RBF kernel compared with the other kernel types. The success rate and prediction rate for the ensemble WoE and RBF-SVM method were 96.48% and 95.67%, respectively. The proposed ensemble flood susceptibility mapping method could assist researchers and local governments in flood mitigation strategies.
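The four kernel types map directly onto standard SVM implementations. Below is a sketch of the kernel comparison by cross-validated AUC, assuming the WoE-reclassified conditioning factors are already assembled into a feature matrix; the CV setup is an illustrative choice, not the paper's validation protocol.

```python
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def compare_kernels(X_woe, y):
    """X_woe: conditioning factors reclassified by WoE weights;
    y: 1 = flood cell, 0 = non-flood. Returns mean 5-fold CV AUC
    for the four kernel types (LN, PL, RBF, SIG)."""
    scores = {}
    for kern in ("linear", "poly", "rbf", "sigmoid"):
        clf = SVC(kernel=kern)          # AUC uses the decision function
        scores[kern] = cross_val_score(clf, X_woe, y, cv=5,
                                       scoring="roc_auc").mean()
    return scores
```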
A Probabilistic Framework for the Validation and Certification of Computer Simulations
NASA Technical Reports Server (NTRS)
Ghanem, Roger; Knio, Omar
2000-01-01
The paper presents a methodology for quantifying, propagating, and managing the uncertainty in the data required to initialize computer simulations of complex phenomena. The purpose of the methodology is to permit the quantitative assessment of a certification level to be associated with the predictions from the simulations, as well as the design of a data acquisition strategy to achieve a target level of certification. The value of a methodology that can address the above issues is obvious, especially in light of the trend in the availability of computational resources, as well as the trend in sensor technology. These two trends make it possible to probe physical phenomena both with physical sensors and with complex models, at previously inconceivable levels. With these new abilities arises the need to develop the knowledge to integrate the information from sensors and computer simulations. This is achieved in the present work by tracing both activities back to a level of abstraction that highlights their commonalities, thus allowing them to be manipulated in a mathematically consistent fashion. In particular, the mathematical theory underlying computer simulations has long been associated with partial differential equations and functional analysis concepts such as Hilbert spaces and orthogonal projections. By relying on a probabilistic framework for the modeling of data, a Hilbert space framework emerges that permits the modeling of coefficients in the governing equations as random variables, or equivalently, as elements in a Hilbert space. This permits the development of an approximation theory for probabilistic problems that parallels that of deterministic approximation theory. According to this formalism, the solution of the problem is identified by its projection on a basis in the Hilbert space of random variables, as opposed to more traditional techniques where the solution is approximated by its first or second-order statistics. The present representation, in addition to capturing significantly more information than the traditional approach, facilitates the linkage between different interacting stochastic systems as is typically observed in real-life situations.
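The projection described above is, in modern terms, a polynomial chaos expansion; its generic form is shown below, with Ψ_i orthogonal polynomials in the random inputs ξ (e.g., Hermite polynomials for Gaussian inputs). This is the standard shape of such expansions, not a formula quoted from the paper.

```latex
% Polynomial chaos expansion: projection onto an orthogonal basis of the
% Hilbert space of second-order random variables
\[
  u(\boldsymbol{\xi}) \;\approx\; \sum_{i=0}^{P} u_i\,\Psi_i(\boldsymbol{\xi}),
  \qquad
  u_i \;=\; \frac{\langle u\,\Psi_i \rangle}{\langle \Psi_i^{2} \rangle},
  \qquad
  \langle \Psi_i \Psi_j \rangle \;=\; \delta_{ij}\,\langle \Psi_i^{2} \rangle,
\]
% where the brackets denote expectation with respect to the law of xi
```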
[EMD Time-Frequency Analysis of Raman Spectrum and NIR].
Zhao, Xiao-yu; Fang, Yi-ming; Tan, Feng; Tong, Liang; Zhai, Zhe
2016-02-01
This paper analyzes Raman spectra and near-infrared (NIR) spectra with time-frequency methods. Empirical mode decomposition (EMD) decomposes a spectrum into intrinsic mode functions (IMFs); the proportion calculation reveals that the Raman spectral energy is uniformly distributed across the components, while only the low-order IMFs of the NIR carry the primary effective spectroscopic information. Both real spectra and numerical experiments show that EMD treats the Raman spectrum as an amplitude-modulated signal with high-frequency adsorption properties, and treats the NIR as a frequency-modulated signal for which high-frequency narrow-band demodulation can be preferably realized in the first-order IMF. The Hilbert transform of the first-order IMFs reveals that modal aliasing occurs when EMD decomposes the Raman spectrum. In a further time-frequency analysis of a corn leaf's NIR, the low-energy first- and second-order components are cut off after EMD and the spectral signal is reconstructed from the remaining IMFs; the root-mean-square error is 1.0011 and the correlation coefficient is 0.9813, both indicating high reconstruction accuracy. The decomposition trend term indicates that the absorbency increases with decreasing wavelength in the near-infrared band, and the Hilbert transform of the characteristic modal component shows that 657 cm⁻¹ is the specific frequency of the corn leaf stress spectrum, which can be regarded as a characteristic frequency for identification.
NASA Astrophysics Data System (ADS)
Yusop, Hanafi M.; Ghazali, M. F.; Yusof, M. F. M.; PiRemli, M. A.; Karollah, B.; Rusman
2017-10-01
Pressure transients occur due to sudden changes in the propagation of the fluid filling a pipeline system, caused by rapid pressure and flow fluctuations such as the rapid closing and opening of valves. The application of the Hilbert-Huang Transform (HHT) as the method to analyse the pressure transient signal is utilised in this research. However, this method has difficulty in selecting the suitable IMF for further post-processing, which is the Hilbert Transform (HT). This paper proposes the implementation of the Integrated Kurtosis-based Algorithm for z-filter Technique (I-kaz) to kurtosis ratio (I-kaz-kurtosis) to allow automatic selection of the intrinsic mode function (IMF) that should be used. This work demonstrates a synthetic pressure transient signal generated using transmission line modelling (TLM) in order to test the effectiveness of I-kaz for autonomous selection of the IMF. A straight fluid network was designed using TLM, with a higher resistance fixed at one point to act as a leak, connected to a pipe feature (junction, pipe fitting, or blockage). The analysis results using the I-kaz-kurtosis ratio revealed that the method can be utilised for automatic selection of the IMF even when the noise level ratio of the signal is low. The I-kaz-kurtosis ratio is recommended for implementation as an automatic IMF selection criterion in HHT analysis.
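A simplified stand-in for the selection criterion, using plain kurtosis rather than the full I-kaz coefficient (which is not reproduced here), illustrates the mechanics of automatic IMF selection: score each IMF and pick the maximizer.

```python
import numpy as np
from scipy.stats import kurtosis

def select_imf(imfs):
    """Pick the IMF with the largest share of total kurtosis, a simplified
    stand-in for the I-kaz-to-kurtosis ratio criterion; high kurtosis
    favours the component carrying transient (impulsive) energy."""
    k = np.array([kurtosis(imf, fisher=False) for imf in imfs])
    ratio = k / k.sum()
    return int(np.argmax(ratio)), ratio
```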
NASA Technical Reports Server (NTRS)
Desmarais, R. N.; Rowe, W. S.
1984-01-01
For the design of active controls to stabilize flight vehicles, which requires the use of unsteady aerodynamics that are valid for arbitrary complex frequencies, algorithms are derived for evaluating the nonelementary part of the kernel of the integral equation that relates unsteady pressure to downwash. This part of the kernel is separated into an infinite limit integral that is evaluated using Bessel and Struve functions and into a finite limit integral that is expanded in series and integrated termwise in closed form. The developed series expansions gave reliable answers for all complex reduced frequencies and executed faster than exponential approximations for many pressure stations.
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-11-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps (i.e., the spatial probability of a future vent opening given the past eruptive activity of a volcano). This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source geographic information system Quantum GIS, which is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the selection of an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input data sets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is here shown through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
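The PDF-estimation step amounts to a two-dimensional Gaussian kernel density estimate over past vent locations. A sketch with scipy follows; note that the bandwidth choice, which QVAST treats with several dedicated estimators, here simply defaults to scipy's Scott's rule.

```python
import numpy as np
from scipy.stats import gaussian_kde

def susceptibility_map(vent_xy, grid_x, grid_y, bandwidth=None):
    """vent_xy: (2, N) array of past vent coordinates. Returns the spatial
    PDF of future vent opening on a grid; bandwidth=None lets scipy pick
    the bandwidth (Scott's rule) in place of QVAST's estimators."""
    kde = gaussian_kde(vent_xy, bw_method=bandwidth)
    gx, gy = np.meshgrid(grid_x, grid_y)
    pdf = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    return pdf / pdf.sum()          # normalize to a probability map
```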
Pirooznia, Mehdi; Deng, Youping
2006-12-12
Graphical user interface (GUI) software promotes novelty by allowing users to extend its functionality. SVM Classifier is a cross-platform graphical application that handles very large datasets well. The purpose of this study is to create a GUI application that allows SVM users to perform SVM training, classification, and prediction. The GUI provides user-friendly access to state-of-the-art SVM methods embodied in the LIBSVM implementation of the Support Vector Machine. We implemented the Java interface using standard Swing libraries. We used sample data from a breast cancer study to test classification accuracy, and achieved 100% accuracy in classification among the BRCA1-BRCA2 samples with the RBF kernel of the SVM. We have developed a Java GUI application that allows SVM users to perform SVM training, classification and prediction. We have demonstrated that support vector machines can accurately classify genes into functional categories based upon expression data from DNA microarray hybridization experiments. Among the different kernel functions that we examined, the SVM that uses a radial basis kernel function provides the best performance. The SVM Classifier is available at http://mfgn.usm.edu/ebl/svm/.
Estimating Mixture of Gaussian Processes by Kernel Smoothing
Huang, Mian; Li, Runze; Wang, Hansheng; Yao, Weixin
2014-01-01
When the functional data are not homogeneous, e.g., there exist multiple classes of functional curves in the dataset, traditional estimation methods may fail. In this paper, we propose a new estimation procedure for the Mixture of Gaussian Processes, to incorporate both functional and inhomogeneous properties of the data. Our method can be viewed as a natural extension of high-dimensional normal mixtures. However, the key difference is that smoothed structures are imposed for both the mean and covariance functions. The model is shown to be identifiable, and can be estimated efficiently by a combination of the ideas from EM algorithm, kernel regression, and functional principal component analysis. Our methodology is empirically justified by Monte Carlo simulations and illustrated by an analysis of a supermarket dataset. PMID:24976675
Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko
2017-07-10
This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transform of the kernel function involved in this convolution integral is performed analytically using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and the memory usage at no extra cost, compared with the numerical method that uses the fast Fourier transform to transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of the Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.
NASA Astrophysics Data System (ADS)
Evstatiev, Evstati; Svidzinski, Vladimir; Spencer, Andy; Galkin, Sergei
2014-10-01
Full-wave 3-D modeling of RF fields in hot magnetized nonuniform plasma requires calculation of the nonlocal conductivity kernel describing the dielectric response of such plasma to the RF field. In many cases, the conductivity kernel is a localized function near the test point, which significantly simplifies the numerical solution of the full-wave 3-D problem. Preliminary results of a feasibility analysis of the numerical calculation of the conductivity kernel in a 3-D hot nonuniform magnetized plasma in the electron cyclotron frequency range will be reported. This case is relevant to modeling of ECRH in ITER. The kernel is calculated by integrating the linearized Vlasov equation along the unperturbed particle orbits. Particle orbits in the nonuniform equilibrium magnetic field are calculated numerically by one of the Runge-Kutta methods. The RF electric field is interpolated on a specified grid on which the conductivity kernel is discretized. The resulting integrals over the particle's initial velocity and time are then calculated numerically. Different optimization approaches to the integration are tested in this feasibility analysis. Work is supported by the U.S. DOE SBIR program.
Delta opioid receptor analgesia: recent contributions from pharmacology and molecular approaches
Gavériaux-Ruff, Claire; Kieffer, Brigitte Lina
2012-01-01
Delta opioid receptors represent a promising target for the development of novel analgesics. A number of tools have been developed recently that have significantly improved our knowledge of delta receptor function in pain control. These include several novel delta agonists with potent analgesic properties, as well as genetic mouse models with targeted mutations in the delta opioid receptor gene. Also, recent findings have further documented the regulation of delta receptor function at cellular level, which impacts on the pain-reducing activity of the receptor. These regulatory mechanisms occur at transcriptional and post-translational levels, along agonist-induced receptor activation, signaling and trafficking, or in interaction with other receptors and neuromodulatory systems. All these tools for in vivo research, as well as proposed mechanisms at molecular level, have tremendously increased our understanding of delta receptor physiology, and contribute to designing innovative strategies for the treatment of chronic pain and other diseases such as mood disorders. PMID:21836459
Richardson-Lucy deblurring for the star scene under a thinning motion path
NASA Astrophysics Data System (ADS)
Su, Laili; Shao, Xiaopeng; Wang, Lin; Wang, Haixin; Huang, Yining
2015-05-01
This paper puts emphasis on how to model and correct the image blur that arises from a camera's ego motion while observing a distant star scene. Given the significance of accurate estimation of the point spread function (PSF), a new method is employed to obtain the blur kernel by thinning the star motion path. In particular, we present how the blurred star image can be corrected to reconstruct the clear scene with a thinned-motion blur model that describes the camera's path. Building the blur kernel from the thinned motion path is more effective at modeling the motion blur introduced by the camera's ego motion than conventional blind estimation of a kernel-based PSF parameterization. To obtain the reconstructed image, an improved thinning algorithm is first used to extract the star point trajectory, and from it the blur kernel of the motion-blurred star image. We then detail how the motion blur model can be incorporated into the Richardson-Lucy (RL) deblurring algorithm, which reveals its overall effectiveness. In addition, compared with a conventionally estimated blur kernel, experimental results show that the proposed method of using a thinning algorithm to obtain the motion blur kernel has lower complexity, higher efficiency, and better accuracy, which contributes to better restoration of motion-blurred star images.
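Given a path-derived PSF, the RL update itself is compact. The sketch below assumes spatially invariant blur; the iteration count and the small regularizer eps are placeholders rather than the paper's settings.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iters=30, eps=1e-12):
    """Classic RL deconvolution using a PSF extracted from the thinned
    star-motion path. Iteratively rescales the estimate by the blurred
    image divided by the re-blurred estimate, correlated with the PSF."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    est = np.full(blurred.shape, blurred.mean(), dtype=float)
    for _ in range(iters):
        ratio = blurred / (fftconvolve(est, psf, mode="same") + eps)
        est *= fftconvolve(ratio, psf_mirror, mode="same")
    return est
```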
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patrick, Christopher E., E-mail: chripa@fysik.dtu.dk; Thygesen, Kristian S., E-mail: thygesen@fysik.dtu.dk
2015-09-14
We present calculations of the correlation energies of crystalline solids and isolated systems within the adiabatic-connection fluctuation-dissipation formulation of density-functional theory. We perform a quantitative comparison of a set of model exchange-correlation kernels originally derived for the homogeneous electron gas (HEG), including the recently introduced renormalized adiabatic local-density approximation (rALDA) and also kernels which (a) satisfy known exact limits of the HEG, (b) carry a frequency dependence, or (c) display a 1/k² divergence for small wavevectors. After generalizing the kernels to inhomogeneous systems through a reciprocal-space averaging procedure, we calculate the lattice constants and bulk moduli of a test set of 10 solids consisting of tetrahedrally bonded semiconductors (C, Si, SiC), ionic compounds (MgO, LiCl, LiF), and metals (Al, Na, Cu, Pd). We also consider the atomization energy of the H₂ molecule. We compare the results calculated with different kernels to those obtained from the random-phase approximation (RPA) and to experimental measurements. We demonstrate that the model kernels correct the RPA's tendency to overestimate the magnitude of the correlation energy whilst maintaining a high-accuracy description of structural properties.
Valentini, Giorgio; Paccanaro, Alberto; Caniza, Horacio; Romero, Alfonso E; Re, Matteo
2014-06-01
In the context of "network medicine", gene prioritization methods represent one of the main tools to discover candidate disease genes by exploiting the large amount of data covering different types of functional relationships between genes. Several works proposed to integrate multiple sources of data to improve disease gene prioritization, but to our knowledge no systematic studies focused on the quantitative evaluation of the impact of network integration on gene prioritization. In this paper, we aim at providing an extensive analysis of gene-disease associations not limited to genetic disorders, and a systematic comparison of different network integration methods for gene prioritization. We collected nine different functional networks representing different functional relationships between genes, and we combined them through both unweighted and weighted network integration methods. We then prioritized genes with respect to each of the considered 708 medical subject headings (MeSH) diseases by applying classical guilt-by-association, random walk and random walk with restart algorithms, and the recently proposed kernelized score functions. The results obtained with classical random walk algorithms and the best single network achieved an average area under the curve (AUC) across the 708 MeSH diseases of about 0.82, while kernelized score functions and network integration boosted the average AUC to about 0.89. Weighted integration, by exploiting the different "informativeness" embedded in different functional networks, outperforms unweighted integration at 0.01 significance level, according to the Wilcoxon signed rank sum test. For each MeSH disease we provide the top-ranked unannotated candidate genes, available for further bio-medical investigation. Network integration is necessary to boost the performances of gene prioritization methods. Moreover the methods based on kernelized score functions can further enhance disease gene ranking results, by adopting both local and global learning strategies, able to exploit the overall topology of the network. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
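Of the algorithms compared, random walk with restart is the most compact to sketch: genes are scored by the steady-state visiting probability of a walk that teleports back to the known disease genes. In the sketch below, the restart probability is a placeholder and zero-degree nodes are assumed absent.

```python
import numpy as np

def random_walk_restart(adj, seeds, restart=0.7, iters=1000, tol=1e-8):
    """Score genes by the steady-state probability of a walk over the
    functional network that restarts at known disease genes.
    adj: symmetric adjacency matrix with no isolated nodes (assumed)."""
    W = adj / adj.sum(axis=0, keepdims=True)       # column-normalized
    p0 = np.zeros(adj.shape[0])
    p0[seeds] = 1.0 / len(seeds)
    p = p0.copy()
    for _ in range(iters):
        p_next = (1.0 - restart) * (W @ p) + restart * p0
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p_next
```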
Optimization of fixture layouts of glass laser optics using multiple kernel regression.
Su, Jianhua; Cao, Enhua; Qiao, Hong
2014-05-10
We aim to build an integrated fixturing model to describe the structural and thermal properties of the support frame of glass laser optics. With the proposed model, (a) a near globally optimal set of clamps can be computed to minimize the surface shape error of the glass laser optic, and (b) a desired surface shape error can be obtained by adjusting the clamping forces under various environmental temperatures. To construct the model, we develop a new multiple kernel learning method, which we call multiple kernel support vector functional regression. The proposed method uses two layers of regression to group and order the data sources by the weights of the kernels and the factors of the layers. In this way, the influences of the clamps and the temperature can be evaluated by grouping them into different layers.
Nonparametric entropy estimation using kernel densities.
Lake, Douglas E
2009-01-01
The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
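The quadratic (Renyi) entropy mentioned above has a closed form under a Gaussian kernel density: the double sum below is the "information potential" that underlies the Friedman-Tukey index. A one-dimensional sketch; the bandwidth sigma is left to the user, as its selection is the subject of the paper's asymptotic results.

```python
import numpy as np

def quadratic_entropy(x, sigma):
    """Renyi quadratic entropy of a sample via a Gaussian kernel density:
        H2 = -log( (1/N^2) * sum_ij G(x_i - x_j; sqrt(2)*sigma) ),
    using the fact that the integral of the squared KDE reduces to
    pairwise Gaussian evaluations (two kernels convolve into one)."""
    x = np.asarray(x, dtype=float)
    diff = x[:, None] - x[None, :]
    s2 = 2.0 * sigma**2                     # variance of convolved kernel
    G = np.exp(-diff**2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    return -np.log(G.mean())
```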
RTOS kernel in portable electrocardiograph
NASA Astrophysics Data System (ADS)
Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.
2011-12-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All digital functions of the medical device are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which a uC/OS-II RTOS can be embedded. The decision to use the kernel is based on its benefits: the license for educational use and its built-in time control and peripheral management. The feasibility of its use on the electrocardiograph is evaluated against the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used to estimate the time and the resources consumed by each process. After this feasibility analysis, the code was migrated from a cyclic structure to separate processes or tasks able to synchronize on events, resulting in an electrocardiograph running on a single Central Processing Unit (CPU) under an RTOS.
A Hilbert Space Representation of Generalized Observables and Measurement Processes in the ESR Model
NASA Astrophysics Data System (ADS)
Sozzo, Sandro; Garola, Claudio
2010-12-01
The extended semantic realism (ESR) model recently worked out by one of the authors embodies the mathematical formalism of standard (Hilbert space) quantum mechanics in a noncontextual framework, reinterpreting quantum probabilities as conditional instead of absolute. We provide here a Hilbert space representation of the generalized observables introduced by the ESR model that satisfy a simple physical condition, propose a generalization of the projection postulate, and suggest a possible mathematical description of the measurement process in terms of evolution of the compound system made up of the measured system and the measuring apparatus.
NASA Astrophysics Data System (ADS)
Passalacqua, P.; Hiatt, M. R.; Sendrowski, A.
2016-12-01
Deltas host approximately half a billion people and are rich in ecosystem diversity and economic resources. However, human activities and climatic shifts are significantly impacting deltas around the world; anthropogenic disturbance, natural subsidence, and eustatic sea-level rise are major threats and in many cases have compromised delta safety and sustainability, putting at risk the people who live on them. In this presentation, I will introduce a framework called Delta Connectome for studying connectivity in river deltas based on different representations of a delta as a network. Here connectivity indicates both physical connectivity (how different portions of the system interact with each other) and conceptual connectivity (pathways of process coupling). I will explore several network representations and show how quantifying connectivity can advance our understanding of system functioning and can be used to inform coastal management and restoration. From connectivity considerations, the delta emerges as a leaky network that evolves over time and is characterized by continuous exchanges of fluxes of matter, energy, and information. I will discuss the implications of connectivity for delta functioning, land growth, and the potential for nutrient removal.
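As a toy illustration of treating a delta as a directed network (the graph and fluxes are invented for illustration; this is not the Delta Connectome code), the Python sketch below propagates a unit flux from the apex through bifurcations and counts apex-to-outlet pathways.

    import networkx as nx

    # Toy delta channel network: apex 'A' bifurcates toward two outlets.
    G = nx.DiGraph()
    G.add_edges_from([("A", "B"), ("A", "C"), ("B", "D"),
                      ("B", "E"), ("C", "E"), ("E", "F")])

    # Propagate a unit flux downstream, splitting it equally at each bifurcation.
    flux = {n: 0.0 for n in G}
    flux["A"] = 1.0
    for node in nx.topological_sort(G):
        for nxt in G.successors(node):
            flux[nxt] += flux[node] / G.out_degree(node)

    outlets = [n for n in G if G.out_degree(n) == 0]
    print({n: round(flux[n], 3) for n in outlets})   # flux delivered to each outlet
    # Simple structural connectivity: number of distinct apex-to-outlet pathways.
    print({n: len(list(nx.all_simple_paths(G, "A", n))) for n in outlets})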
ERIC Educational Resources Information Center
Lazarte, Alejandro A.; Barry, Sue
2008-01-01
In Experiment 1, monolingual native Spanish speakers (NSSs) had better kernel recall and longer end-of-clause (EOC) pauses than native English speakers (NESs) when reading texts that varied in syntactic complexity as a function of the number of nonessential clauses added to the kernel text. NSS familiarity with embedded clauses in Spanish seems to…
Generalized Langevin equation with tempered memory kernel
NASA Astrophysics Data System (ADS)
Liemert, André; Sandev, Trifce; Kantz, Holger
2017-01-01
We study a generalized Langevin equation for a free particle in the presence of truncated power-law and Mittag-Leffler memory kernels. It is shown that, in the presence of truncation, the particle crosses over from subdiffusive behavior in the short-time limit to normal diffusion in the long-time limit. The case of a harmonic oscillator is considered as well, and the relaxation functions and the normalized displacement correlation function are given in exact form. By applying an external time-dependent periodic force we obtain resonant behavior even in the case of a free particle, due to the influence of the environment on the particle's motion. Additionally, a double-peak phenomenon in the imaginary part of the complex susceptibility is observed. The truncation parameter is found to have a strong influence on the behavior of these quantities, and it is shown how it shifts the critical frequencies. The normalized displacement correlation function for a fractional generalized Langevin equation is investigated as well. All the results are exact and given in terms of the three-parameter Mittag-Leffler function and the Prabhakar generalized integral operator, whose kernel contains a three-parameter Mittag-Leffler function. Such truncated Langevin dynamics may be highly relevant for describing the lateral diffusion of lipids and proteins in cell membranes.
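Schematically, with notation assumed here rather than taken verbatim from the paper, the free-particle equation studied has the form

\[
\dot{v}(t) = -\int_0^t \gamma(t - t')\, v(t')\, dt' + \xi(t),
\qquad
\langle \xi(t)\, \xi(t') \rangle \propto \gamma(|t - t'|),
\]

with a truncated (tempered) power-law memory kernel of the type

\[
\gamma(t) \propto e^{-bt}\, \frac{t^{-\alpha}}{\Gamma(1 - \alpha)},
\qquad 0 < \alpha < 1, \; b \ge 0,
\]

so that for times $t \ll 1/b$ the power-law memory dominates (subdiffusion), while for $t \gg 1/b$ the exponential truncation cuts the memory off and normal diffusion is recovered.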
NASA Astrophysics Data System (ADS)
Xin, Ni; Gu, Xiao-Feng; Wu, Hao; Hu, Yu-Zhu; Yang, Zhong-Lin
2012-04-01
Many herbal medicines are processed to meet different therapeutic requirements. The purpose of this study was to discriminate between raw and processed Dipsacus asperoides, a common traditional Chinese medicine, based on their near infrared (NIR) spectra. Least squares-support vector machine (LS-SVM) and random forests (RF) were employed for full-spectrum classification. Three kernel types (linear, polynomial and radial basis function, RBF) were examined to optimize the LS-SVM model. For comparison, a linear discriminant analysis (LDA) model was built for classification, with the successive projections algorithm (SPA) executed beforehand to choose an appropriate subset of wavelengths. The three methods were applied to a dataset containing 40 raw herbs and 40 corresponding processed herbs. We performed 50 runs of 10-fold cross-validation to evaluate model performance. The LS-SVM performed better with the RBF kernel (RBF LS-SVM) than with the other two kernels. RF, RBF LS-SVM and SPA-LDA all successfully classified every test sample. The mean error rates over the 50 runs of 10-fold cross-validation were 1.35% for RBF LS-SVM, 2.87% for RF, and 2.50% for SPA-LDA. The best classification results were obtained with the RBF-kernel LS-SVM, while RF was fastest in training and prediction.
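LS-SVM is not part of scikit-learn, so the evaluation protocol is sketched below in Python with a standard RBF-kernel SVC as a stand-in classifier; the spectra are simulated and the hyperparameters are illustrative.

    import numpy as np
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Toy stand-ins for NIR spectra of raw vs. processed herbs (80 samples).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 200))          # 200 hypothetical wavelengths
    y = np.repeat([0, 1], 40)               # 0 = raw, 1 = processed
    X[y == 1] += 0.3                        # inject a weak class difference

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))

    # 50 runs of 10-fold cross-validation, as in the evaluation protocol.
    error_rates = []
    for run in range(50):
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=run)
        acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
        error_rates.append(1.0 - acc.mean())
    print(f"mean CV error over 50 runs: {np.mean(error_rates):.4f}")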
Applications of Dirac's Delta Function in Statistics
ERIC Educational Resources Information Center
Khuri, Andre
2004-01-01
The Dirac delta function has been used successfully in mathematical physics for many years. The purpose of this article is to bring attention to several useful applications of this function in mathematical statistics. Some of these applications include a unified representation of the distribution of a function (or functions) of one or several…
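One such unified representation, a standard identity stated here for a single variable, expresses the density of $Y = g(X)$ directly through the delta function:

\[
f_Y(y) = \int_{-\infty}^{\infty} \delta\bigl(y - g(x)\bigr)\, f_X(x)\, dx,
\]

which, for monotone $g$, reduces via the sifting property to the familiar change-of-variables formula $f_Y(y) = f_X\bigl(g^{-1}(y)\bigr)\,\bigl|\tfrac{d}{dy} g^{-1}(y)\bigr|$.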
Vertical integration from the large Hilbert space
NASA Astrophysics Data System (ADS)
Erler, Theodore; Konopka, Sebastian
2017-12-01
We develop an alternative description of the procedure of vertical integration based on the observation that amplitudes can be written in BRST exact form in the large Hilbert space. We relate this approach to the description of vertical integration given by Sen and Witten.
Scalable and Power Efficient Data Analytics for Hybrid Exascale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choudhary, Alok; Samatova, Nagiza; Wu, Kesheng
This project developed a generic and optimized set of core data analytics functions. These functions consolidate a broad constellation of high-performance analytical pipelines. As the architectures of emerging HPC systems become inherently heterogeneous, there is a need to design algorithms for data analysis kernels accelerated on hybrid multi-node, multi-core HPC architectures comprised of a mix of CPUs, GPUs, and SSDs. Furthermore, the power-aware trend drives advances in our performance-energy tradeoff analysis framework, which enables our data analysis kernel algorithms and software to be parameterized so that users can choose the right power-performance optimizations.
Multi-pose facial correction based on Gaussian process with combined kernel function
NASA Astrophysics Data System (ADS)
Shi, Shuyan; Ji, Ruirui; Zhang, Fan
2018-04-01
To improve the recognition rate across various poses, this paper proposes a facial correction method based on a Gaussian process, which builds a nonlinear regression model between front and side faces using a combined kernel function. Face images with horizontal pose angles from -45° to +45° can be properly corrected to frontal faces. Finally, a Support Vector Machine is employed for face recognition. Experiments on the CAS-PEAL-R1 face database show that the Gaussian process can weaken the influence of pose changes and improve face recognition accuracy to a certain extent.
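A minimal Python sketch of regression with a combined (sum) kernel, using scikit-learn's Gaussian process tools on invented feature vectors rather than actual face images:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, DotProduct, WhiteKernel

    # Toy stand-in: map a "side-pose" feature vector to a "frontal" feature.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(100, 5))          # hypothetical pose features
    y = X @ np.array([0.5, -0.2, 0.1, 0.0, 0.3]) + np.sin(2 * X[:, 0])

    # Combined kernel: a nonlinear RBF term plus a linear DotProduct term,
    # with a WhiteKernel absorbing observation noise.
    kernel = 1.0 * RBF(length_scale=1.0) + DotProduct(sigma_0=1.0) + WhiteKernel(1e-2)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:80], y[:80])
    print("held-out R^2:", round(gp.score(X[80:], y[80:]), 3))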
Data-driven parameterization of the generalized Langevin equation
Lei, Huan; Baker, Nathan A.; Li, Xiantao
2016-11-29
We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grained variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
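As a minimal instance of such an embedding (a single-mode sketch with notation assumed here, not the paper's general construction), take a kernel with the lowest-order rational Laplace transform, $K(t) = c\, e^{-\lambda t}$, $\hat{K}(s) = c/(s + \lambda)$. The generalized Langevin equation

\[
\dot{v}(t) = -\int_0^t K(t - s)\, v(s)\, ds + f(t)
\]

is then equivalent to the memoryless extended system

\[
\dot{v} = z, \qquad \dot{z} = -c\, v - \lambda\, z + \eta(t),
\]

where $f$ is taken to be an Ornstein-Uhlenbeck process and $\eta$ is white noise whose strength is fixed by requiring $\langle f(t) f(s) \rangle \propto K(|t - s|)$, i.e., the fluctuation-dissipation theorem; higher-order rational approximations add further auxiliary variables in the same fashion.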
NASA Astrophysics Data System (ADS)
Dougherty, Andrew W.
Metal oxides are a staple of the sensor industry. The combination of their sensitivity to a number of gases and the electrical nature of their sensing mechanism makes them particularly attractive in solid-state devices. The high-temperature stability of the ceramic material also makes them ideal for detecting combustion byproducts where exhaust temperatures can be high. However, metal oxide sensors have known problems. They are not very selective, as they all tend to be sensitive to a number of reduction and oxidation reactions on the oxide's surface; this makes arrays with large numbers of sensors interesting to study as a method for introducing orthogonality into the system. The sensors also tend to suffer from long-term drift for a number of reasons. In this thesis I will develop a system for intelligently modeling metal oxide sensors and determining their suitability for use in large arrays designed to analyze exhaust gas streams. It will introduce prior knowledge of the metal oxide sensors' response mechanisms in order to produce a response function for each sensor from sparse training data. The system will use the same technique to model and remove any long-term drift from the sensor response. It will also provide an efficient means of determining each sensor's orthogonality, to decide whether it is useful in gas sensing arrays. The system is based on least squares support vector regression using the reciprocal kernel. The reciprocal kernel is introduced along with a method of optimizing the free parameters of the reciprocal-kernel support vector machine. The reciprocal kernel is shown to be simpler than, and to outperform, an earlier kernel, the modified reciprocal kernel. Least squares support vector regression is chosen because it uses all of the training points, and an emphasis was placed throughout this research on extracting the maximum information from very sparse data. The reciprocal kernel is shown to be effective in modeling the sensor responses in the time, gas and temperature domains, and the dual representation of the support vector regression solution is shown to provide insight into a sensor's sensitivity and potential orthogonality. Finally, the dual weights of the support vector regression solution to the sensor's response are suggested as a fitness function for a genetic algorithm, or some other method of efficiently searching large parameter spaces.
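The reciprocal kernel's exact form belongs to the thesis; the Python sketch below therefore implements generic least squares support vector regression (a single linear solve for the dual variables) with an RBF kernel standing in, and shows how the dual weights proposed above as a fitness signal are read off from the solution.

    import numpy as np

    def lssvr_fit(X, y, gamma_reg=10.0, rbf_gamma=1.0):
        """Least squares SVR: solve the (n+1) x (n+1) KKT linear system.

        Returns bias b and dual weights alpha; the RBF kernel here is a
        stand-in for the thesis's reciprocal kernel.
        """
        n = len(X)
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-rbf_gamma * d2)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma_reg
        rhs = np.concatenate(([0.0], y))
        sol = np.linalg.solve(A, rhs)
        return sol[0], sol[1:]          # bias, dual weights (one per training point)

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(40, 3))      # sparse, invented sensor training data
    y = np.sin(4 * X[:, 0]) + 0.2 * X[:, 1]
    b, alpha = lssvr_fit(X, y)
    # Large |alpha| flags the training conditions the model leans on most,
    # the kind of dual-weight information proposed as a fitness function.
    print(np.argsort(-np.abs(alpha))[:5])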
Protein fold recognition using geometric kernel data fusion.
Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves
2014-07-01
Various approaches, based on features extracted from protein sequences and often on machine learning methods, have been used to predict protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation to linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features, including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
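The simplest geometry-inspired combination of two kernel matrices is the matrix geometric mean $A \# B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}$; the Python sketch below computes it for two toy Gram matrices (the textbook two-matrix mean, not the paper's full multi-kernel framework).

    import numpy as np
    from scipy.linalg import sqrtm, inv

    def geometric_mean(A, B):
        """Geometric mean A # B of two symmetric positive definite matrices."""
        A_half = sqrtm(A)
        A_half_inv = inv(A_half)
        M = sqrtm(A_half_inv @ B @ A_half_inv)
        G = A_half @ M @ A_half
        return np.real((G + G.T) / 2)   # symmetrize away numerical noise

    # Toy Gram matrices from two feature representations of the same proteins.
    rng = np.random.default_rng(0)
    X1, X2 = rng.normal(size=(30, 10)), rng.normal(size=(30, 15))
    K1 = X1 @ X1.T + 1e-3 * np.eye(30)
    K2 = X2 @ X2.T + 1e-3 * np.eye(30)
    K_fused = geometric_mean(K1, K2)    # fused kernel, usable in any kernel method
    print(np.all(np.linalg.eigvalsh(K_fused) > 0))   # still positive definite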
NASA Astrophysics Data System (ADS)
Slater, Paul B.
2018-04-01
We begin by investigating relationships between two forms of Hilbert-Schmidt two-rebit and two-qubit "separability functions": those recently advanced by Lovas and Andai (J Phys A Math Theor 50(29):295303, 2017), and those earlier presented by Slater (J Phys A 40(47):14279, 2007). In the Lovas-Andai framework, the independent variable $\varepsilon \in [0,1]$ is the ratio $\sigma(V)$ of the singular values of the $2 \times 2$ matrix $V = D_2^{1/2} D_1^{-1/2}$ formed from the two $2 \times 2$ diagonal blocks $(D_1, D_2)$ of a $4 \times 4$ density matrix $D = \|\rho_{ij}\|$. In the Slater setting, the independent variable $\mu$ is the diagonal-entry ratio $\sqrt{\rho_{11}\rho_{44}/(\rho_{22}\rho_{33})}$, with, of central importance, $\mu = \varepsilon$ or $\mu = 1/\varepsilon$ when both $D_1$ and $D_2$ are themselves diagonal. Lovas and Andai established that their two-rebit "separability function" $\tilde{\chi}_1(\varepsilon)$ $(\approx \varepsilon)$ yields the previously conjectured Hilbert-Schmidt separability probability of 29/64. We are able, in the Slater framework (using cylindrical algebraic decompositions [CAD] to enforce positivity constraints), to reproduce this result. Further, we newly find its two-qubit, two-quater[nionic]-bit and "two-octo[nionic]-bit" counterparts, $\tilde{\chi}_2(\varepsilon) = \frac{1}{3}\varepsilon^2(4 - \varepsilon^2)$, $\tilde{\chi}_4(\varepsilon) = \frac{1}{35}\varepsilon^4(15\varepsilon^4 - 64\varepsilon^2 + 84)$ and $\tilde{\chi}_8(\varepsilon) = \frac{1}{1287}\varepsilon^8(1155\varepsilon^8 - 7680\varepsilon^6 + 20160\varepsilon^4 - 25088\varepsilon^2 + 12740)$. These immediately lead to predictions of Hilbert-Schmidt separability/PPT-probabilities of 8/33, 26/323 and 44482/4091349, in full agreement with those of the "concise formula" (Slater in J Phys A 46:445302, 2013), and, additionally, of a "specialized induced measure" formula. Then, we find a Lovas-Andai "master formula," $\tilde{\chi}_d(\varepsilon) = \varepsilon^d\, \Gamma(d+1)^3\, {}_3\tilde{F}_2\!\left(-\tfrac{d}{2}, \tfrac{d}{2}, d; \tfrac{d}{2}+1, \tfrac{3d}{2}+1; \varepsilon^2\right) / \Gamma\!\left(\tfrac{d}{2}+1\right)^2$, encompassing both even and odd values of $d$. Remarkably, we are able to obtain the $\tilde{\chi}_d(\varepsilon)$ formulas, $d = 1, 2, 4$, applicable to full (9-, 15-, 27-)dimensional sets of density matrices, by analyzing (6-, 9-, 15-)dimensional sets, with not only diagonal $D_1$ and $D_2$, but also an additional pair of nullified entries. Nullification of a further pair still leads to $X$-matrices, for which a distinctly different, simple Dyson-index phenomenon is noted. C. Koutschan, then, using his HolonomicFunctions program, develops an order-4 recurrence satisfied by the predictions of the several formulas, establishing their equivalence. A two-qubit separability probability of $1 - 256/(27\pi^2)$ is obtained based on the operator monotone function $\sqrt{x}$, with the use of $\tilde{\chi}_2(\varepsilon)$.
Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Banchariya, Anjali; Rao, Atmakuri Ramakrishna
2017-03-24
Insecticide resistance is a major challenge for insect pest control programs in crop protection, human and animal health, and related fields. Resistance to different insecticides is conferred by proteins encoded by certain classes of insect genes. No computational tool has been available to date to distinguish insecticide-resistant proteins from non-resistant proteins; developing such a tool will help in predicting insecticide-resistant proteins, which can then be targeted for developing appropriate insecticides. Five different feature sets, viz. amino acid composition (AAC), di-peptide composition (DPC), pseudo amino acid composition (PAAC), composition-transition-distribution (CTD) and auto-correlation function (ACF), were used to map the protein sequences into numeric feature vectors. The encoded numeric vectors were then used as input to a support vector machine (SVM) for classification of insecticide-resistant and non-resistant proteins. Higher accuracies were obtained with the RBF kernel than with other kernels. Further, accuracies were higher for the DPC feature set than for the others. The proposed approach achieved an overall accuracy of >90% in discriminating resistant from non-resistant proteins. Further, the two classes of resistant proteins, i.e., detoxification-based and target-based, were discriminated from non-resistant proteins with >95% accuracy, and >95% accuracy was also observed when discriminating proteins involved in detoxification-based from those involved in target-based resistance mechanisms. The proposed approach not only outperformed the Blastp, PSI-Blast and Delta-Blast algorithms, but also achieved >92% accuracy when assessed on an independent dataset of 75 insecticide-resistant proteins. This paper presents the first computational approach for discriminating insecticide-resistant proteins from non-resistant proteins. Based on the proposed approach, an online prediction server, DIRProt, has also been developed for computational prediction of insecticide-resistant proteins, accessible at http://cabgrid.res.in:8080/dirprot/ . The proposed approach is believed to supplement the wet-lab efforts needed to develop dynamic insecticides by targeting insecticide-resistant proteins.
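As an illustration of the feature-mapping step, the Python sketch below computes the 400-dimensional di-peptide composition (DPC) of a protein sequence and feeds it to an RBF-kernel SVM; the two sequences and labels are invented toys, and this is not the DIRProt implementation.

    import numpy as np
    from itertools import product
    from sklearn.svm import SVC

    AMINO = "ACDEFGHIKLMNPQRSTVWY"
    DIPEPTIDES = ["".join(p) for p in product(AMINO, repeat=2)]  # 400 pairs
    INDEX = {dp: i for i, dp in enumerate(DIPEPTIDES)}

    def dpc(seq):
        """400-dim di-peptide composition: normalized counts of adjacent pairs."""
        v = np.zeros(len(DIPEPTIDES))
        for a, b in zip(seq, seq[1:]):
            if a in AMINO and b in AMINO:
                v[INDEX[a + b]] += 1
        return v / max(len(seq) - 1, 1)

    # Invented toy sequences: label 1 = "resistant", 0 = "non-resistant".
    seqs = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "MSLLTEVETPIRNEWGCRCNDSSDPLVVAASII"]
    labels = [1, 0]
    X = np.vstack([dpc(s) for s in seqs])
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, labels)
    print(clf.predict(X))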
The Polyanalytic Ginibre Ensembles
NASA Astrophysics Data System (ADS)
Haimi, Antti; Hedenmalm, Haakan
2013-10-01
For integers $n, q = 1, 2, 3, \ldots$, let $\mathrm{Pol}_{n,q}$ denote the $\mathbb{C}$-linear space of polynomials in $z$ and $\bar{z}$, of degree $\le n-1$ in $z$ and of degree $\le q-1$ in $\bar{z}$. We supply $\mathrm{Pol}_{n,q}$ with the inner product structure of a weighted $L^2$ space with Gaussian weight $e^{-m|z|^2}$; the resulting Hilbert space is denoted by $\mathrm{Pol}_{m,n,q}$. Here, it is assumed that $m$ is a positive real. We let $K_{m,n,q}$ denote the reproducing kernel of $\mathrm{Pol}_{m,n,q}$, and study the associated determinantal process, in the limit as $m, n \to +\infty$ while $n = m + \mathrm{O}(1)$; the number $q$, the degree of polyanalyticity, is kept fixed. We call these processes polyanalytic Ginibre ensembles, because they generalize the Ginibre ensemble: the eigenvalue process of random (normal) matrices with Gaussian weight. There is a physical interpretation in terms of a system of free fermions in a uniform magnetic field so that a fixed number of the first Landau levels have been filled. We consider local blow-ups of the polyanalytic Ginibre ensembles around points in the spectral droplet, which is here the closed unit disk $\bar{\mathbb{D}}$. We obtain asymptotics for the blow-up process, using a blow-up to characteristic distance $m^{-1/2}$; the typical distance is the same both for interior and for boundary points of $\bar{\mathbb{D}}$. This amounts to obtaining the asymptotic behavior of the generating kernel $K_{m,n,q}$. Following Ameur et al. (Commun. Pure Appl. Math. 63(12):1533-1584, 2010), the asymptotics of the $K_{m,n,q}$ are rather conveniently expressed in terms of the Berezin measure and density [Equation not available: see fulltext.] For interior points $|z| < 1$, we obtain that the Berezin measure converges to $\delta_z$ in the weak-star sense, where $\delta_z$ denotes the unit point mass at $z$. Moreover, if we blow up to the scale of $m^{-1/2}$ around $z$, we get convergence to a measure which is Gaussian for $q = 1$, but exhibits more complicated Fresnel zone behavior for $q > 1$. In contrast, for exterior points $|z| > 1$, the Berezin measure converges instead to the harmonic measure at $z$ with respect to the exterior disk. For boundary points, $|z| = 1$, the Berezin measure converges to the unit point mass at $z$, as with interior points, but the blow-up to the scale $m^{-1/2}$ exhibits quite different behavior at boundary points compared with interior points. We also obtain the asymptotic boundary behavior of the 1-point function at the coarser local scale $q^{1/2} m^{-1/2}$.