Sample records for reproducing kernel Hilbert

  1. On the Computation of Optimal Designs for Certain Time Series Models with Applications to Optimal Quantile Selection for Location or Scale Parameter Estimation.

    DTIC Science & Technology

    1981-07-01

    process is observed over all of (0,1], the reproducing kernel Hilbert space (RKHS) techniques developed by Parzen (1961a, 1961b) may be used to construct... covariance kernel, R, for the process (1.1) is the reproducing kernel for a reproducing kernel Hilbert space (RKHS) which will be denoted as H(R) (c.f. (2.6)), it is known that (c.f. Eubank, Smith and Smith (1981a, 1981b)), i) H(R) is a Hilbert function space consisting of functions which satisfy, for f ∈ H(R)

  2. Wavelet SVM in Reproducing Kernel Hilbert Space for hyperspectral remote sensing image classification

    NASA Astrophysics Data System (ADS)

    Du, Peijun; Tan, Kun; Xing, Xiaoshi

    2010-12-01

    Combining Support Vector Machines (SVMs) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in a Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces a bottleneck in kernel parameter selection, which leads to time-consuming training and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing imagery with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to evaluate the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. Compared with some traditional classifiers, including Spectral Angle Mapping (SAM) and Minimum Distance Classification (MDC), as well as an SVM classifier using the Radial Basis Function kernel, the proposed wavelet SVM classifier using the wavelet kernel function in a Reproducing Kernel Hilbert Space noticeably improves classification accuracy.
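
    To make the wavelet-kernel idea concrete, here is a minimal sketch, assuming a Morlet-style wavelet kernel (a standard admissible SVM wavelet kernel) rather than the authors' Coiflet construction, plugged into an SVM via a precomputed Gram matrix; the synthetic data and dilation parameter a are stand-ins.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def wavelet_kernel(X, Y, a=1.0):
        """Wavelet kernel k(x, y) = prod_j h((x_j - y_j) / a) with the
        mother wavelet h(t) = cos(1.75 t) * exp(-t^2 / 2)."""
        D = (X[:, None, :] - Y[None, :, :]) / a   # pairwise differences
        return (np.cos(1.75 * D) * np.exp(-0.5 * D**2)).prod(axis=2)

    rng = np.random.default_rng(0)
    X_tr, y_tr = rng.normal(size=(100, 8)), rng.integers(0, 2, 100)
    X_te = rng.normal(size=(20, 8))
    clf = SVC(kernel="precomputed").fit(wavelet_kernel(X_tr, X_tr), y_tr)
    pred = clf.predict(wavelet_kernel(X_te, X_tr))  # (n_test, n_train) Gram
    ```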

  3. An iterative kernel based method for fourth order nonlinear equation with nonlinear boundary condition

    NASA Astrophysics Data System (ADS)

    Azarnavid, Babak; Parand, Kourosh; Abbasbandy, Saeid

    2018-06-01

    This article discusses an iterative reproducing kernel method with respect to its effectiveness and capability for solving a fourth-order boundary value problem with nonlinear boundary conditions, modeling beams on elastic foundations. Since there is no method for obtaining a reproducing kernel that satisfies nonlinear boundary conditions, standard reproducing kernel methods cannot be applied directly to boundary value problems with nonlinear boundary conditions, as there is no knowledge about the existence and uniqueness of the solution. The aim of this paper is therefore to construct an iterative method that combines the reproducing kernel Hilbert space method with a shooting-like technique to solve such problems. Error estimation for reproducing kernel Hilbert space methods applied to nonlinear boundary value problems has yet to be discussed in the literature. In this paper, we present error estimation for the reproducing kernel method for nonlinear boundary value problems, possibly for the first time. Numerical results are given to demonstrate the applicability of the method.
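
    The shooting-like idea can be sketched independently of the reproducing kernel machinery. The toy below applies it to a hypothetical second-order analogue (not the paper's fourth-order beam problem): the unknown initial slope s is adjusted until a nonlinear boundary condition at the right endpoint holds; the equation, boundary condition, and root bracket are invented for illustration.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    # Toy problem: u'' = -sin(u), u(0) = 0, nonlinear BC u(1) + u'(1)^3 = 1.
    def bc_residual(s):
        sol = solve_ivp(lambda t, z: [z[1], -np.sin(z[0])], (0.0, 1.0), [0.0, s])
        u1, du1 = sol.y[0, -1], sol.y[1, -1]
        return u1 + du1**3 - 1.0            # residual of the nonlinear BC

    s_star = brentq(bc_residual, 0.0, 2.0)  # slope satisfying the BC
    ```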

  4. Towards the Geometry of Reproducing Kernels

    NASA Astrophysics Data System (ADS)

    Galé, J. E.

    2010-11-01

    It is shown here how one is naturally led to consider a category whose objects are reproducing kernels of Hilbert spaces, and how in this way a differential geometry for such kernels may be developed.

  5. Aveiro method in reproducing kernel Hilbert spaces under complete dictionary

    NASA Astrophysics Data System (ADS)

    Mai, Weixiong; Qian, Tao

    2017-12-01

    The Aveiro Method is a sparse representation method in reproducing kernel Hilbert spaces (RKHS) that gives orthogonal projections onto linear combinations of reproducing kernels over uniqueness sets. It suffers, however, from the need to determine uniqueness sets in the underlying RKHS: in general spaces, uniqueness sets are not easy to identify, let alone the question of the convergence speed of the Aveiro Method. To avoid these difficulties we propose a new Aveiro Method based on a dictionary and the matching pursuit idea, and in fact we do more: the new method builds on the recently proposed Pre-Orthogonal Greedy Algorithm (P-OGA), which involves completion of a given dictionary. The new method is called the Aveiro Method Under Complete Dictionary (AMUCD). The complete dictionary consists of all directional derivatives of the underlying reproducing kernels. We show that, under a boundary vanishing condition, which holds for the classical Hardy and Paley-Wiener spaces, the complete dictionary enables an efficient expansion of any given element of the Hilbert space. The proposed method reveals new and advanced aspects of both the Aveiro Method and the greedy algorithm.
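
    For intuition about greedy expansions over kernel dictionaries, the sketch below runs plain orthogonal matching pursuit on the columns of a Gram matrix (each column being the evaluations of one reproducing kernel); it is not P-OGA and does not include the derivative-completed dictionary of AMUCD.

    ```python
    import numpy as np

    def kernel_matching_pursuit(K, y, n_atoms):
        """Greedily pick kernel 'atoms' (columns of K) and orthogonally
        project y onto the span of the selected atoms at each step."""
        idx, coef, r = [], None, y.astype(float).copy()
        norms = np.sqrt(np.diag(K @ K.T))     # column norms of K
        for _ in range(n_atoms):
            scores = np.abs(K.T @ r) / norms  # correlation with the residual
            scores[idx] = -np.inf             # never reselect an atom
            idx.append(int(np.argmax(scores)))
            A = K[:, idx]                     # evaluations of chosen kernels
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            r = y - A @ coef                  # projection residual
        return idx, coef
    ```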

  6. Limitations of shallow nets approximation.

    PubMed

    Lin, Shao-Bo

    2017-10-01

    In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets is realized for all functions in balls of a reproducing kernel Hilbert space with high probability, which differs from the classical minimax approximation error estimates. This result, together with existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. A kernel adaptive algorithm for quaternion-valued inputs.

    PubMed

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefits in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations.
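
    For the flavor of the update rule, here is a minimal real-valued kernel LMS with a Gaussian kernel; the quaternion version additionally needs quaternion algebra and the modified HR calculus, which are not reproduced here, and the step size, kernel width, and data are assumptions.

    ```python
    import numpy as np

    def klms(X, d, eta=0.5, sigma=1.0):
        """Kernel LMS: f_t = f_{t-1} + eta * e_t * k(x_t, .), stored as a
        growing list of centers and coefficients."""
        centers, alphas, errors = [], [], []
        for x, target in zip(X, d):
            if centers:
                k = np.exp(-np.sum((np.array(centers) - x) ** 2, axis=1)
                           / (2 * sigma**2))
                y = float(np.dot(alphas, k))  # prediction f_{t-1}(x_t)
            else:
                y = 0.0
            e = target - y                    # a priori error
            centers.append(x)                 # grow the kernel expansion
            alphas.append(eta * e)
            errors.append(e)
        return np.array(centers), np.array(alphas), np.array(errors)

    rng = np.random.default_rng(1)            # hypothetical nonlinear system
    X = rng.uniform(-1.0, 1.0, size=(500, 1))
    d = np.sin(3.0 * X[:, 0]) + 0.05 * rng.normal(size=500)
    centers, alphas, errors = klms(X, d)
    ```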

  8. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate the usefulness of the method.
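
    A sketch of the component-extraction step of kernel PLS, in the NIPALS style usually associated with this line of work: scores are extracted from a centered Gram matrix and both the kernel and the responses are deflated. Prediction and regularization details are omitted; Y is assumed to be an n x m response matrix.

    ```python
    import numpy as np

    def center_gram(K):
        n = K.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        return J @ K @ J

    def kernel_pls_scores(K, Y, n_components, tol=1e-10):
        """Extract latent score vectors t from a centered Gram matrix K
        and response matrix Y, deflating both after each component."""
        K, Y = center_gram(K), Y.astype(float).copy()
        scores = []
        for _ in range(n_components):
            u = Y[:, 0].copy()
            for _ in range(500):                 # inner power-type iteration
                t = K @ u
                t /= np.linalg.norm(t)
                u_new = Y @ (Y.T @ t)
                u_new /= np.linalg.norm(u_new)
                if np.linalg.norm(u_new - u) < tol:
                    break
                u = u_new
            P = np.eye(K.shape[0]) - np.outer(t, t)   # deflation projector
            K, Y = P @ K @ P, Y - np.outer(t, t @ Y)
            scores.append(t)
        return np.column_stack(scores)               # n x n_components
    ```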

  9. Structured functional additive regression in reproducing kernel Hilbert spaces.

    PubMed

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2014-06-01

    Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of a data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting the nonlinear additive components has been less studied. In this work, we propose a new regularization framework for structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of functional principal components, which greatly facilitates implementation and theoretical analysis. Selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application.

  10. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint

    PubMed Central

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2015-01-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in an RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can achieve competitive prediction performance in certain situations, and performance comparable to that of the traditional squared norm penalty in other cases. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
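
    The following sketch combines the pinball (check) loss with a soft-thresholding step on the kernel coefficients, in the spirit of the data sparsity constraint described above; the proximal subgradient scheme and all tuning constants are assumptions, not the authors' algorithm.

    ```python
    import numpy as np

    def check_loss_subgrad(r, tau):
        # subgradient of rho_tau(r) = r * (tau - 1{r < 0}) in r
        return np.where(r > 0, tau, tau - 1.0)

    def sparse_kernel_quantile(K, y, tau=0.5, mu=0.1, step=1e-3, iters=5000):
        """Kernel quantile regression f(x) = sum_i alpha_i k(x_i, x); the
        soft-threshold drives many alpha_i to zero, giving a sparse
        kernel-function representation."""
        alpha = np.zeros(len(y))
        for _ in range(iters):
            r = y - K @ alpha
            alpha += step * K.T @ check_loss_subgrad(r, tau) / len(y)
            alpha = np.sign(alpha) * np.maximum(np.abs(alpha) - step * mu, 0.0)
        return alpha
    ```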

  11. Hermite polynomials and quasi-classical asymptotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, S. Twareque, E-mail: twareque.ali@concordia.ca; Engliš, Miroslav, E-mail: englis@math.cas.cz

    2014-04-15

    We study an unorthodox variant of the Berezin-Toeplitz type of quantization scheme, on a reproducing kernel Hilbert space generated by the real Hermite polynomials and work out the associated quasi-classical asymptotics.

  12. Structured functional additive regression in reproducing kernel Hilbert spaces

    PubMed Central

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2013-01-01

    Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of a data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting the nonlinear additive components has been less studied. In this work, we propose a new regularization framework for structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of functional principal components, which greatly facilitates implementation and theoretical analysis. Selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application. PMID:25013362

  13. Adaptive learning in complex reproducing kernel Hilbert spaces employing Wirtinger's subgradients.

    PubMed

    Bouboulis, Pantelis; Slavakis, Konstantinos; Theodoridis, Sergios

    2012-03-01

    This paper presents a wide framework for non-linear online supervised learning tasks in the context of complex-valued signal processing. The (complex) input data are mapped into a complex reproducing kernel Hilbert space (RKHS), where the learning phase takes place. Both pure complex kernels and real kernels (via the complexification trick) can be employed. Moreover, any convex, continuous and not necessarily differentiable function can be used to measure the loss between the output of the specific system and the desired response. The only requirement is that the subgradient of the adopted loss function be available in an analytic form. In order to derive the subgradients analytically, the principles of the (recently developed) Wirtinger's calculus in complex RKHS are exploited. Furthermore, both linear and widely linear (in RKHS) estimation filters are considered. To cope with the problem of increasing memory requirements, which is present in almost all online schemes in RKHS, a sparsification scheme based on projection onto closed balls has been adopted. We demonstrate the effectiveness of the proposed framework in a non-linear channel identification task, a non-linear channel equalization problem and a quadrature phase shift keying equalization scheme, using both circular and non-circular synthetic signal sources.

  14. Single image super-resolution via an iterative reproducing kernel Hilbert space method.

    PubMed

    Deng, Liang-Jian; Guo, Weihong; Huang, Ting-Zhu

    2016-11-01

    Image super-resolution, a process to enhance image resolution, has important applications in satellite imaging, high definition television, medical imaging, etc. Many existing approaches use multiple low-resolution images to recover one high-resolution image. In this paper, we present an iterative scheme to solve single image super-resolution problems. It recovers a high quality high-resolution image from a single low-resolution image without using a training data set. We solve the problem from the image intensity function estimation perspective and assume the image contains smooth and edge components. We model the smooth components of an image using a thin-plate reproducing kernel Hilbert space (RKHS) and the edges using approximated Heaviside functions. The proposed method is applied to image patches, aiming to reduce computation and storage. Visual and quantitative comparisons with some competitive approaches show the effectiveness of the proposed method.

  15. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  16. Transactions of the Army Conference on Applied Mathematics and Computing (2nd) Held at Washington, DC on 22-25 May 1984

    DTIC Science & Technology

    1985-02-01

    Here Q denotes the midplane of the plate (assumed to be Lipschitzian) with a smooth boundary, and H(Q) and H(Q) are the Hilbert spaces of... using a reproducing kernel Hilbert space approach, Weinert et al. [8,9] developed a structural correspondence between spline interpolation and linear... A Mesh Moving Technique for Time Dependent Partial Differential Equations in Two Space Dimensions, David C. Arney and Joseph

  17. Numerical method for solving the nonlinear four-point boundary value problems

    NASA Astrophysics Data System (ADS)

    Lin, Yingzhen; Lin, Jinnan

    2010-12-01

    In this paper, a new reproducing kernel space is skillfully constructed in order to solve a class of nonlinear four-point boundary value problems. The exact solution of the linear problem can be expressed in the form of a series, and the approximate solution of the nonlinear problem is given by an iterative formula. Compared with known investigations, the advantages of our method are that the representation of the exact solution is obtained in a new reproducing kernel Hilbert space and that the accuracy of the numerical computation is higher. Meanwhile, we present a convergence theorem, complexity analysis and error estimation. The performance of the new method is illustrated with several numerical examples.

  18. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting. It is based on kernel partial least squares (PLS) regression in a reproducing kernel Hilbert space. Our concern is to apply the methodology to smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived from the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.

  19. Biologically-Inspired Spike-Based Automatic Speech Recognition of Isolated Digits Over a Reproducing Kernel Hilbert Space

    PubMed Central

    Li, Kan; Príncipe, José C.

    2018-01-01

    This paper presents a novel real-time dynamic framework for quantifying time-series structure in spoken words using spikes. Audio signals are converted into multi-channel spike trains using a biologically-inspired leaky integrate-and-fire (LIF) spike generator. These spike trains are mapped into a function space of infinite dimension, i.e., a Reproducing Kernel Hilbert Space (RKHS) using point-process kernels, where a state-space model learns the dynamics of the multidimensional spike input using gradient descent learning. This kernelized recurrent system is very parsimonious and achieves the necessary memory depth via feedback of its internal states when trained discriminatively, utilizing the full context of the phoneme sequence. A main advantage of modeling nonlinear dynamics using state-space trajectories in the RKHS is that it imposes no restriction on the relationship between the exogenous input and its internal state. We are free to choose the input representation with an appropriate kernel, and changing the kernel does not impact the system nor the learning algorithm. Moreover, we show that this novel framework can outperform both traditional hidden Markov model (HMM) speech processing as well as neuromorphic implementations based on spiking neural networks (SNNs), yielding accurate and ultra-low power word spotters. As a proof of concept, we demonstrate its capabilities using the benchmark TI-46 digit corpus for isolated-word automatic speech recognition (ASR) or keyword spotting. Compared to HMM using a Mel-frequency cepstral coefficient (MFCC) front-end without time-derivatives, our MFCC-KAARMA offered improved performance. For the spike-train front-end, spike-KAARMA also outperformed state-of-the-art SNN solutions. Furthermore, compared to MFCCs, spike trains provided enhanced noise robustness in certain low signal-to-noise ratio (SNR) regimes. PMID:29666568

  20. Biologically-Inspired Spike-Based Automatic Speech Recognition of Isolated Digits Over a Reproducing Kernel Hilbert Space.

    PubMed

    Li, Kan; Príncipe, José C

    2018-01-01

    This paper presents a novel real-time dynamic framework for quantifying time-series structure in spoken words using spikes. Audio signals are converted into multi-channel spike trains using a biologically-inspired leaky integrate-and-fire (LIF) spike generator. These spike trains are mapped into a function space of infinite dimension, i.e., a Reproducing Kernel Hilbert Space (RKHS) using point-process kernels, where a state-space model learns the dynamics of the multidimensional spike input using gradient descent learning. This kernelized recurrent system is very parsimonious and achieves the necessary memory depth via feedback of its internal states when trained discriminatively, utilizing the full context of the phoneme sequence. A main advantage of modeling nonlinear dynamics using state-space trajectories in the RKHS is that it imposes no restriction on the relationship between the exogenous input and its internal state. We are free to choose the input representation with an appropriate kernel, and changing the kernel does not impact the system nor the learning algorithm. Moreover, we show that this novel framework can outperform both traditional hidden Markov model (HMM) speech processing as well as neuromorphic implementations based on spiking neural networks (SNNs), yielding accurate and ultra-low power word spotters. As a proof of concept, we demonstrate its capabilities using the benchmark TI-46 digit corpus for isolated-word automatic speech recognition (ASR) or keyword spotting. Compared to HMM using a Mel-frequency cepstral coefficient (MFCC) front-end without time-derivatives, our MFCC-KAARMA offered improved performance. For the spike-train front-end, spike-KAARMA also outperformed state-of-the-art SNN solutions. Furthermore, compared to MFCCs, spike trains provided enhanced noise robustness in certain low signal-to-noise ratio (SNR) regimes.

  21. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centering step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement its kernel version. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
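
    A minimal sketch of the construction underlying NPT, under the simplifying assumption that centering is skipped (the observation INPT builds on): explicit coordinates X with X^T X = K come from an eigendecomposition of the Gram matrix, and a new sample is placed by a least-squares solve against its kernel evaluations, leaving old coordinates unchanged.

    ```python
    import numpy as np

    def npt_coordinates(K, tol=1e-10):
        """Coordinates X (r x n) with X.T @ X = K, via eigendecomposition
        of the positive semidefinite Gram matrix K."""
        w, U = np.linalg.eigh(K)
        keep = w > tol
        return (U[:, keep] * np.sqrt(w[keep])).T

    def npt_extend(X, k_new):
        """Coordinates of a new sample from its kernel evaluations k_new
        against the training set: solve X.T @ x = k_new in least squares,
        without touching the existing coordinates."""
        x_new, *_ = np.linalg.lstsq(X.T, k_new, rcond=None)
        return x_new
    ```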

  22. Reactive Collisions and Final State Analysis in Hypersonic Flight Regime

    DTIC Science & Technology

    2016-09-13

    Kelvin [7]. The gas-phase, surface reactions and energy transfer at these temperatures are essentially uncharacterized and the experimental methodologies... high temperatures (1000 to 20000 K) and compared with results from experimentally derived thermodynamic quantities from the NASA CEA (NASA Chemical... with a reproducing kernel Hilbert space (RKHS) method [13] combined with Legendre polynomials; (2) quasi-classical trajectory (QCT) calculations to study

  23. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.

  24. Parametric output-only identification of time-varying structures using a kernel recursive extended least squares TARMA approach

    NASA Astrophysics Data System (ADS)

    Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim

    2018-01-01

    The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.

  25. CLAss-Specific Subspace Kernel Representations and Adaptive Margin Slack Minimization for Large Scale Classification.

    PubMed

    Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan

    2018-02-01

    In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization, which iteratively improves the classification accuracy by adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing, and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed techniques.

  26. Online Pairwise Learning Algorithms.

    PubMed

    Ying, Yiming; Zhou, Ding-Xuan

    2016-04-01

    Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples, among which the most notable ones are bipartite ranking, metric learning, and AUC maximization. In this letter we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS) that we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works (Kar, Sriperumbudur, Jain, & Karnick, 2013; Wang, Khardon, Pechyony, & Jones, 2012), which require that the iterates are restricted to a bounded domain or the loss function is strongly convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem that guarantees the almost sure convergence for the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely used kernels in the setting of pairwise learning and illustrate the convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs using its associated integral operators and probability inequalities for random variables with values in a Hilbert space.

  27. Investigations of Reactive Processes at Temperatures Relevant to the Hypersonic Flight Regime

    DTIC Science & Technology

    2014-10-31

    molecule is constructed based on high-level ab initio calculations and interpolated using the reproducing kernel Hilbert space (RKHS) method and... a potential energy surface (PES) for the ground state of the NO2 molecule is constructed based on high-level ab initio calculations and interpolated... between O(3P) and NO(2Π) at higher temperatures relevant to the hypersonic flight regime of reentering spacecraft. At a more fundamental level, we

  28. A Regression Design Approach to Optimal and Robust Spacing Selection.

    DTIC Science & Technology

    1981-07-01

    Hassanein (1968, 1969a, 1969b, 1971, 1972, 1977), Kulldorf (1963), Kulldorf and Vannman (1973), Rhodin (1976), Sarhan and Greenberg (1958, 1962) and... of d0 and Q0^(-1) d0 are in the reproducing kernel Hilbert space (RKHS) generated by R, the techniques developed by Parzen (1961a, 1961b) may be... Greenberg, B.G. (1958). Estimation problems in the exponential distribution using order statistics. Proceedings of the Statistical Techniques in Missile

  29. Multiple kernel learning using single stage function approximation for binary classification problems

    NASA Astrophysics Data System (ADS)

    Shiju, S.; Sumitra, S.

    2017-12-01

    In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, and hence the data modelling problem involves the computation of two decision boundaries, one related to kernel learning and the other to the input data. In our approach, both are found with the aid of a single cost function, by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data, and searching for a function in the global RKHS that can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance in comparison with the existing two-stage function approximation formulation of MKL, where the decision functions of kernel learning and input data are found separately using two different cost functions. This is because the single-stage representation enables knowledge transfer between the computation procedures for finding the decision boundaries of kernel learning and input data, which in turn boosts the generalisation capacity of the model.

  30. Least square regularized regression in sum space.

    PubMed

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and regularization error and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
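
    Because the reproducing kernel of a sum of RKHSs is the sum of the individual kernels, regularized least squares in the sum space reduces to ordinary kernel ridge regression with a summed Gram matrix, as in this sketch; the two bandwidths and the regularization constant are illustrative assumptions.

    ```python
    import numpy as np

    def gaussian_gram(X, Z, sigma):
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2 * sigma**2))

    def fit_sum_space(X, y, sigmas=(5.0, 0.5), lam=1e-2):
        """Solve (K + lam n I) alpha = y with K = K_large + K_small, the
        reproducing kernel of the sum space."""
        K = sum(gaussian_gram(X, X, s) for s in sigmas)
        return np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)

    def predict_sum_space(X_train, alpha, X_new, sigmas=(5.0, 0.5)):
        return sum(gaussian_gram(X_new, X_train, s) for s in sigmas) @ alpha
    ```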

  31. Nonlinear association criterion, nonlinear Granger causality and related issues with applications to neuroimage studies.

    PubMed

    Tao, Chenyang; Feng, Jianfeng

    2016-03-15

    Quantifying associations in neuroscience (and many other scientific disciplines) is often challenged by high dimensionality, nonlinearity and noisy observations. Many classic methods have either poor power or poor scalability on data sets of the same or different scales, such as genetic, physiological and image data. Based on the framework of reproducing kernel Hilbert spaces, we propose a new nonlinear association criterion (NAC) with an efficient numerical algorithm and a p-value approximation scheme. We also present mathematical justification that links the proposed method to related methods such as kernel generalized variance, kernel canonical correlation analysis and the Hilbert-Schmidt independence criterion. NAC allows the detection of association between arbitrary input domains as long as a characteristic kernel is defined. A MATLAB package is provided to facilitate applications. Extensive simulation examples and four real-world neuroscience examples, including functional MRI causality, calcium imaging and imaging genetics studies on autism [Brain, 138(5):1382-1393 (2015)] and alcohol addiction [PNAS, 112(30):E4085-E4093 (2015)], are used to benchmark NAC. It demonstrates superior performance over the existing procedures we tested and also yields biologically significant results for the real-world examples. NAC beats its linear counterparts when nonlinearity is present in the data. It also shows more robustness against different experimental setups than its nonlinear counterparts. In this work we presented a new and robust statistical approach, NAC, for measuring associations. It could serve as an interesting alternative to existing methods for datasets where nonlinearity and other confounding factors are present. Copyright © 2016 Elsevier B.V. All rights reserved.
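
    For reference, one of the related criteria named above, the (biased) Hilbert-Schmidt independence criterion, takes only a few lines given Gram matrices on the two samples; note this is HSIC, not the authors' NAC statistic.

    ```python
    import numpy as np

    def hsic_biased(K, L):
        """Biased HSIC estimate tr(K H L H) / (n - 1)^2, where K and L are
        Gram matrices on the X and Y samples and H is the centering matrix."""
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n
        return np.trace(K @ H @ L @ H) / (n - 1) ** 2
    ```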

  32. Adaptive multiregression in reproducing kernel Hilbert spaces: the multiaccess MIMO channel case.

    PubMed

    Slavakis, Konstantinos; Bouboulis, Pantelis; Theodoridis, Sergios

    2012-02-01

    This paper introduces a wide framework for online, i.e., time-adaptive, supervised multiregression tasks. The problem is formulated in a general infinite-dimensional reproducing kernel Hilbert space (RKHS). In this context, a fairly large number of nonlinear multiregression models follow as special cases, including the linear case. Any convex, continuous, and not necessarily differentiable function can be used as a loss function in order to quantify the disagreement between the output of the system and the desired response. The only requirement is that the subgradient of the adopted loss function be available in an analytic form. To this end, we demonstrate a way to calculate the subgradients of robust loss functions suitable for the multiregression task. As is by now well documented, when dealing with online schemes in RKHS, the memory keeps increasing with each iteration step. To attack this problem, a simple sparsification strategy is utilized, which leads to an algorithmic scheme of linear complexity with respect to the number of unknown parameters. A convergence analysis of the technique, based on arguments of convex analysis, is also provided. To demonstrate the capacity of the proposed method, the multiregressor is applied to the multiaccess multiple-input multiple-output channel equalization task for a setting with poor resources and unavailable channel information. Numerical results verify the potential of the method, when its performance is compared with those of the state-of-the-art linear techniques, which, in contrast, use space-time coding, more antenna elements, as well as full channel information.

  33. Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.

    PubMed

    Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli

    2016-05-01

    Epileptic seizure detection plays an important role in the diagnosis of epilepsy and in reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures in long-term EEG recordings employing log-Euclidean Gaussian kernel-based sparse representation (SR). Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which is faster and more efficient than traditional seizure detection methods.
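
    A sketch of the kernel itself, assuming SPD covariance descriptors: the matrix logarithm maps the SPD manifold into a linear matrix space, where a Gaussian kernel on Frobenius distances is applied; the dictionary and sparse-coding stages of the detector are not shown.

    ```python
    import numpy as np
    from scipy.linalg import logm

    def log_euclidean_gaussian_gram(covs, sigma=1.0):
        """k(X, Y) = exp(-||logm(X) - logm(Y)||_F^2 / (2 sigma^2)) for a
        list of SPD matrices (e.g., EEG epoch covariance descriptors)."""
        logs = [logm(C).real for C in covs]   # real-valued for SPD inputs
        n = len(logs)
        K = np.empty((n, n))
        for i in range(n):
            for j in range(i, n):
                d2 = np.linalg.norm(logs[i] - logs[j], "fro") ** 2
                K[i, j] = K[j, i] = np.exp(-d2 / (2 * sigma**2))
        return K
    ```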

  34. Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.

    PubMed

    Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan

    2016-11-01

    In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat (Triticum aestivum L.) and maize (Zea mays L.) data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% higher accuracy than the corresponding single-environment models for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, more complex marker main effects and marker-specific interaction effects. Copyright © 2016 Crop Science Society of America.
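
    A minimal sketch of the kernel-averaging idea behind RKHS KA: average Gaussian kernels over a grid of bandwidths, with squared marker distances scaled by their median; the grid and scaling here are illustrative assumptions, not the study's exact settings.

    ```python
    import numpy as np

    def averaged_gaussian_gram(M, bandwidths=(0.25, 1.0, 4.0)):
        """Mean of Gaussian kernels over a bandwidth grid, computed from
        the marker matrix M (individuals x markers)."""
        d2 = ((M[:, None, :] - M[None, :, :]) ** 2).sum(axis=2)
        q = np.median(d2[d2 > 0])             # median squared distance
        return np.mean([np.exp(-h * d2 / q) for h in bandwidths], axis=0)
    ```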

  35. Learn the Lagrangian: A Vector-Valued RKHS Approach to Identifying Lagrangian Systems.

    PubMed

    Cheng, Ching-An; Huang, Han-Pang

    2016-12-01

    We study the modeling of Lagrangian systems with multiple degrees of freedom. Based on system dynamics, canonical parametric models require ad hoc derivations and sometimes simplification for a computable solution; on the other hand, due to the lack of prior knowledge of the system's structure, modern nonparametric models in machine learning face the curse of dimensionality, especially in learning large systems. In this paper, we bridge this gap by unifying the theories of Lagrangian systems and vector-valued reproducing kernel Hilbert spaces. We reformulate Lagrangian systems with kernels that embed the governing Euler-Lagrange equation (the Lagrangian kernels) and show that these kernels span a subspace capturing the Lagrangian's projection as inverse dynamics. By this property, our model uses only inputs and outputs, as in machine learning, and inherits the structured form of system dynamics, thereby removing the need for mundane derivations for new systems as well as the generalization problem of learning from scratch. In effect, it learns the system's Lagrangian, a simpler task than directly learning the dynamics. To demonstrate, we applied the proposed kernel to identify robot inverse dynamics in simulations and experiments. Our results present a competitive novel approach to identifying Lagrangian systems, despite using only inputs and outputs.

  36. Online Distributed Learning Over Networks in RKH Spaces Using Random Fourier Features

    NASA Astrophysics Data System (ADS)

    Bouboulis, Pantelis; Chouvardas, Symeon; Theodoridis, Sergios

    2018-04-01

    We present a novel diffusion scheme for online kernel-based learning over networks. So far, a major drawback of any online learning algorithm operating in a reproducing kernel Hilbert space (RKHS) has been the need to update a growing number of parameters as time iterations evolve. Besides complexity, this leads to an increased need for communication resources in a distributed setting. In contrast, the proposed method approximates the solution as a fixed-size vector (of larger dimension than the input space) using Random Fourier Features. This paves the way to use standard linear combine-then-adapt techniques. To the best of our knowledge, this is the first time that a complete protocol for distributed online learning in RKHS is presented. Conditions for asymptotic convergence and boundedness of the networkwise regret are also provided. Simulated tests illustrate the performance of the proposed scheme.
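
    The fixed-size approximation at the heart of the scheme can be sketched as follows: random Fourier features z(x) satisfy z(x) . z(y) ≈ exp(-||x - y||^2 / (2 sigma^2)), so each node can run a standard linear adaptive filter on z(x) and combine parameter vectors with its neighbors; the diffusion/combine step itself is omitted here.

    ```python
    import numpy as np

    def rff_map(X, n_features=200, sigma=1.0, seed=0):
        """Random Fourier features for the Gaussian kernel; a shared seed
        gives all nodes the same feature map, so their fixed-size weight
        vectors can be averaged across the network."""
        rng = np.random.default_rng(seed)
        W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], n_features))
        b = rng.uniform(0.0, 2.0 * np.pi, n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
    ```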

  37. Soft and hard classification by reproducing kernel Hilbert space methods.

    PubMed

    Wahba, Grace

    2002-12-24

    Reproducing kernel Hilbert space (RKHS) methods provide a unified context for solving a wide variety of statistical modelling and function estimation problems. We consider two such problems: We are given a training set {yi, ti, i = 1, ..., n}, where yi is the response for the ith subject, and ti is a vector of attributes for this subject. The value of yi is a label that indicates which category it came from. For the first problem, we wish to build a model from the training set that assigns to each t in an attribute domain of interest an estimate of the probability pj(t) that a (future) subject with attribute vector t is in category j. The second problem is in some sense less ambitious; it is to build a model that assigns to each t a label, which classifies a future subject with that t into one of the categories or possibly "none of the above." The approach to the first of these two problems discussed here is a special case of what is known as penalized likelihood estimation. The approach to the second problem is known as the support vector machine. We also note some alternate but closely related approaches to the second problem. These approaches are all obtained as solutions to optimization problems in RKHS. Many other problems, in particular the solution of ill-posed inverse problems, can be obtained as solutions to optimization problems in RKHS and are mentioned in passing. We caution the reader that although a large literature exists in all of these topics, in this inaugural article we are selectively highlighting work of the author, former students, and other collaborators.

  38. Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Theodoridis, Sergios

    2008-12-01

    Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.

  39. Hilbert-Schmidt and Sobol sensitivity indices for static and time series Wnt signaling measurements in colorectal cancer - part A.

    PubMed

    Sinha, Shriprakash

    2017-12-04

    Ever since the accidental discovery of Wingless [Sharma R.P., Drosophila information service, 1973, 50, p 134], research in the field of the Wnt signaling pathway has taken significant strides in wet lab experiments and various cancer clinical trials, augmented by recent developments in advanced computational modeling of the pathway. Information-rich gene expression profiles reveal various aspects of the signaling pathway and help in studying different issues simultaneously. Hitherto, not many computational studies exist which incorporate the simultaneous study of these issues. This manuscript ∙ explores the strength of contributing factors in the signaling pathway, ∙ analyzes the existing causal relations among the inter/extracellular factors affecting the pathway based on prior biological knowledge and ∙ investigates the deviations in fold changes in the recently found prevalence of psychophysical laws working in the pathway. To achieve this goal, local and global sensitivity analysis is conducted on the (non)linear responses between the factors obtained from static and time series expression profiles using the density (Hilbert-Schmidt Information Criterion) and variance (Sobol) based sensitivity indices. The results show the advantage of using density-based indices over variance-based indices, mainly due to the former's employment of distance measures & the kernel trick via the reproducing kernel Hilbert space (RKHS), which capture nonlinear relations among various intra/extracellular factors of the pathway in a higher-dimensional space. In time series data, using these indices it is now possible to observe where in time, which factors get influenced & contribute to the pathway, as changes in concentration of the other factors are made. This synergy of prior biological knowledge, sensitivity analysis & representations in higher-dimensional spaces can facilitate time-based administration of targeted therapeutic drugs & reveal hidden biological information within colorectal cancer samples.

  40. Comparing fixed and variable-width Gaussian networks.

    PubMed

    Kůrková, Věra; Kainen, Paul C

    2014-09-01

    The role of width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-functions (RBFs) where both widths and centers vary and Gaussian kernel networks which have fixed widths but varying centers. The effect of width on functional equivalence, universal approximation property, and form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described and growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described. Copyright © 2014 Elsevier Ltd. All rights reserved.

  41. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    PubMed

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression are reviewed. Ridge regression is then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over the parametric frameworks used by conventional methods. Several parametric and kernel methods, namely the least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression, are then compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression with a Gaussian, Laplacian, polynomial or ANOVA kernel in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
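
    To make the GBLUP/RKHS connection concrete: GBLUP is kernel ridge regression with the linear genomic-relationship kernel, while "RKHS regression" typically swaps in a Gaussian kernel on the markers. The sketch below (in Python rather than the paper's R package KRMM) shows the shared closed form; the marker data, phenotypes and regularization are stand-ins.

    ```python
    import numpy as np

    def kernel_ridge_alpha(K, y, lam=1.0):
        """Shared closed form alpha = (K + lam I)^{-1} y; predictions for
        new individuals are K_new @ alpha."""
        return np.linalg.solve(K + lam * np.eye(len(y)), y)

    rng = np.random.default_rng(0)
    M = rng.normal(size=(50, 200))            # hypothetical centered markers
    y = rng.normal(size=50)                   # hypothetical phenotypes
    G = M @ M.T / M.shape[1]                  # linear kernel -> GBLUP
    d2 = ((M[:, None, :] - M[None, :, :]) ** 2).sum(axis=2)
    K_rkhs = np.exp(-d2 / np.median(d2[d2 > 0]))  # Gaussian kernel -> RKHS
    alpha_gblup = kernel_ridge_alpha(G, y)
    alpha_rkhs = kernel_ridge_alpha(K_rkhs, y)
    ```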

  42. Reconstruction of Sensory Stimuli Encoded with Integrate-and-Fire Neurons with Random Thresholds

    PubMed Central

    Lazar, Aurel A.; Pnevmatikakis, Eftychios A.

    2013-01-01

    We present a general approach to the reconstruction of sensory stimuli encoded with leaky integrate-and-fire neurons with random thresholds. The stimuli are modeled as elements of a Reproducing Kernel Hilbert Space. The reconstruction is based on finding a stimulus that minimizes a regularized quadratic optimality criterion. We discuss in detail the reconstruction of sensory stimuli modeled as absolutely continuous functions as well as stimuli with absolutely continuous first-order derivatives. Reconstruction results are presented for stimuli encoded with single as well as a population of neurons. Examples are given that demonstrate the performance of the reconstruction algorithms as a function of threshold variability. PMID:24077610

  43. A mixture model for robust registration in Kinect sensor

    NASA Astrophysics Data System (ADS)

    Peng, Li; Zhou, Huabing; Zhu, Shengguo

    2018-03-01

    The Microsoft Kinect sensor has been widely used in many applications, but it suffers from low registration precision between the color image and the depth image. In this paper, we present a robust method to improve the registration precision using a mixture model that can handle multiple images with a nonparametric model. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). The estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model, is able to obtain good estimates. We illustrate the proposed method on a publicly available dataset. The experimental results show that our approach outperforms the baseline methods.

  44. Encoding Dissimilarity Data for Statistical Model Building.

    PubMed

    Wahba, Grace

    2010-12-01

    We summarize, review and comment upon three papers which discuss the use of discrete, noisy, incomplete, scattered pairwise dissimilarity data in statistical model building. Convex cone optimization codes are used to embed the objects into a Euclidean space which respects the dissimilarity information while controlling the dimension of the space. A "newbie" algorithm is provided for embedding new objects into this space. This allows the dissimilarity information to be incorporated into a Smoothing Spline ANOVA penalized likelihood model, a Support Vector Machine, or any model that will admit Reproducing Kernel Hilbert Space components, for nonparametric regression, supervised learning, or semi-supervised learning. Future work and open questions are discussed. The papers are: F. Lu, S. Keles, S. Wright and G. Wahba (2005). A framework for kernel regularization with application to protein clustering. Proceedings of the National Academy of Sciences 102, 12332-12337. G. Corrada Bravo, G. Wahba, K. Lee, B. Klein, R. Klein and S. Iyengar (2009). Examining the relative influence of familial, genetic and environmental covariate information in flexible risk models. Proceedings of the National Academy of Sciences 106, 8128-8133. F. Lu, Y. Lin and G. Wahba. Robust manifold unfolding with kernel regularization. TR 1008, Department of Statistics, University of Wisconsin-Madison.

  45. On Hilbert-Schmidt norm convergence of Galerkin approximation for operator Riccati equations

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1988-01-01

    An abstract approximation framework for the solution of operator algebraic Riccati equations is developed. The approach taken is based on a formulation of the Riccati equation as an abstract nonlinear operator equation on the space of Hilbert-Schmidt operators. Hilbert-Schmidt norm convergence of solutions to generic finite dimensional Galerkin approximations to the Riccati equation to the solution of the original infinite dimensional problem is argued. The application of the general theory is illustrated via an operator Riccati equation arising in the linear-quadratic design of an optimal feedback control law for a 1-D heat/diffusion equation. Numerical results demonstrating the convergence of the associated Hilbert-Schmidt kernels are included.

  6. Direct discriminant locality preserving projection with Hammerstein polynomial expansion.

    PubMed

    Chen, Xi; Zhang, Jiashu; Li, Defang

    2012-12-01

    Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, one can optimize the objective function of DLPP in a reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) a large computational burden; 2) no explicit mapping functions, which results in further computational burden when projecting a new sample into the low-dimensional subspace; and 3) an inability to obtain the optimal discriminant vectors that best optimize the objective of DLPP. To overcome these weaknesses, this paper proposes a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP). The proposed HPDDLPP directly implements the objective of DLPP in a high-dimensional second-order Hammerstein polynomial space without matrix inversion, which extracts the optimal discriminant vectors for DLPP without a large computational burden. Compared with some other related classical methods, experimental results for face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.

  7. Modified homotopy perturbation method for solving hypersingular integral equations of the first kind.

    PubMed

    Eshkuvatov, Z K; Zulkarnain, F S; Nik Long, N M A; Muminov, Z

    2016-01-01

    A modified homotopy perturbation method (HPM) was used to solve hypersingular integral equations (HSIEs) of the first kind on the interval [-1,1], with the assumption that the kernel of the hypersingular integral is constant on the diagonal of the domain. Existence of the inverse of the hypersingular integral operator leads to the convergence of HPM in certain cases. The modified HPM and its norm convergence are obtained in Hilbert space. Comparisons between the modified HPM, the standard HPM, the Bernstein polynomial approach of Mandal and Bhattacharya (Appl Math Comput 190:1707-1716, 2007), the Chebyshev expansion method of Mahiub et al. (Int J Pure Appl Math 69(3):265-274, 2011), and the reproducing kernel method of Chen and Zhou (Appl Math Lett 24:636-641, 2011) are made by solving five examples. Theoretical and practical examples revealed that the modified HPM outperforms the standard HPM and the other methods. Finally, it is found that the modified HPM is exact if the solution of the problem is a product of weight and polynomial functions. For rational solutions the absolute error decreases very fast as the number of collocation points increases.

  8. A Grassmann graph embedding framework for gait analysis

    NASA Astrophysics Data System (ADS)

    Connie, Tee; Goh, Michael Kah Ong; Teoh, Andrew Beng Jin

    2014-12-01

    Gait recognition is important in a wide range of monitoring and surveillance applications. Gait information has often been used as evidence when other biometric traits are indiscernible in the surveillance footage. Building on recent advances in subspace-based approaches, we consider the problem of gait recognition on the Grassmann manifold. We show that by embedding the manifold into a reproducing kernel Hilbert space and applying the mechanics of graph embedding on that manifold, a significant performance improvement can be obtained. In this work, the gait recognition problem is studied in a unified way applicable to both supervised and unsupervised configurations. Sparse representation is further incorporated in the learning mechanism to adaptively harness the local structure of the data. Experiments demonstrate that the proposed method can effectively tolerate variations in appearance for gait identification.
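    The embedding into an RKHS is done through a positive-definite kernel between subspaces. The abstract does not name the kernel, so the Python sketch below uses the widely known projection kernel on the Grassmann manifold purely as an assumed illustration.

```python
import numpy as np

def orth(A):
    """Orthonormal basis for the column span of A."""
    Q, _ = np.linalg.qr(A)
    return Q

def grassmann_projection_kernel(X, Y):
    """Projection kernel between subspaces with orthonormal bases X and Y:
    k(X, Y) = ||X^T Y||_F^2, a positive-definite kernel on the Grassmannian."""
    return np.linalg.norm(X.T @ Y, 'fro')**2

rng = np.random.default_rng(1)
X = orth(rng.standard_normal((20, 3)))  # e.g. a subspace summarizing one gait sequence
Y = orth(rng.standard_normal((20, 3)))
print(grassmann_projection_kernel(X, X))  # maximal self-similarity (= subspace dim)
print(grassmann_projection_kernel(X, Y))
```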

  9. The generalization ability of online SVM classification based on Markov sampling.

    PubMed

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish a bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies of the learning ability of online SVM classification based on Markov sampling on benchmark repositories. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of the training sample set becomes larger.
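    To make the sampling scheme concrete, the Python sketch below runs a Pegasos-style stochastic gradient update for the hinge loss in an RKHS, with training indices drawn from a simple random-walk Markov chain instead of i.i.d. sampling. The chain, kernel, and step-size schedule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def gaussian_k(a, b, s=1.0):
    return np.exp(-np.sum((a - b)**2) / (2 * s**2))

def markov_indices(n, steps, rng):
    """Random walk on sample indices: a crude stand-in for the uniformly
    ergodic Markov chain (u.e.M.c.) sampling studied in the paper."""
    i = int(rng.integers(n))
    for _ in range(steps):
        i = (i + int(rng.integers(-3, 4))) % n  # local moves -> dependent samples
        yield i

def online_kernel_svm(X, y, lam=0.1, T=1500, seed=0):
    """Pegasos-style SGD for hinge loss + (lam/2)||f||^2 in an RKHS."""
    rng = np.random.default_rng(seed)
    c = np.zeros(len(X))                       # f = sum_j c_j k(x_j, .)
    for t, i in enumerate(markov_indices(len(X), T, rng), start=1):
        eta = 1.0 / (lam * t)
        fx = sum(c[j] * gaussian_k(X[j], X[i]) for j in np.nonzero(c)[0])
        c *= 1.0 - eta * lam                   # shrinkage from the RKHS penalty
        if y[i] * fx < 1:                      # margin violation
            c[i] += eta * y[i]
    return c

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = np.sign(X[:, 0]**2 + X[:, 1]**2 - 1.0)     # nonlinearly separable toy labels
coeffs = online_kernel_svm(X, y)
```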

  10. Study of the convergence behavior of the complex kernel least mean square algorithm.

    PubMed

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2013-09-01

    The complex kernel least mean square (CKLMS) algorithm was recently derived and allows for online kernel adaptive learning for complex data. Kernel adaptive methods can be used to find solutions for neural network and machine learning applications. The derivation of CKLMS involved the development of a modified Wirtinger calculus for Hilbert spaces to obtain the cost function gradient. We analyze the convergence of the CKLMS with different kernel forms for complex data. The expressions obtained enable us to generate theory-predicted mean-square error curves that account for the circularity of the complex input signals and its effect on nonlinear learning. Simulations are used to verify the analysis results.
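    The CKLMS proper relies on Wirtinger calculus, which is beyond a short snippet, but the basic kernel-LMS recursion it extends is easy to show. Below is a naive Python sketch of kernel LMS on complex-valued data using a real Gaussian kernel as a simplifying assumption (no sparsification, one center per sample).

```python
import numpy as np

def c_gauss(x, y, sigma=1.0):
    """Real Gaussian kernel on complex vectors: exp(-|x - y|^2 / 2 sigma^2)."""
    d = x - y
    return np.exp(-np.real(np.vdot(d, d)) / (2 * sigma**2))

def klms(X, d, eta=0.5, sigma=1.0):
    """Naive kernel LMS: each a-priori error adds one weighted center."""
    centers, weights, sq_errs = [], [], []
    for x, dn in zip(X, d):
        yhat = sum(w * c_gauss(x, ctr, sigma) for w, ctr in zip(weights, centers))
        e = dn - yhat                       # a-priori error
        centers.append(x)
        weights.append(eta * e)
        sq_errs.append(abs(e)**2)
    return centers, weights, sq_errs

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2)) + 1j * rng.standard_normal((300, 2))
d = X[:, 0] * np.conj(X[:, 1]) + 0.05 * rng.standard_normal(300)  # toy target
_, _, sq_errs = klms(X, d)
print(np.mean(sq_errs[:50]), np.mean(sq_errs[-50:]))  # learning curve decreases
```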

  11. Rotational relaxation of AlO+(1Σ+) in collision with He

    NASA Astrophysics Data System (ADS)

    Denis-Alpizar, O.; Trabelsi, T.; Hochlaf, M.; Stoecklin, T.

    2018-03-01

    The rate coefficients for the rotational de-excitation of AlO+ by collisions with He are determined. The possible production mechanisms of the AlO+ ion in both diffuse and dense molecular clouds are first discussed. A set of ab initio interaction energies is computed at the CCSD(T)-F12 level of theory, and a three-dimensional analytical model of the potential energy surface is obtained using a linear combination of reproducing kernel Hilbert space polynomials together with an analytical long-range potential. The nuclear-spin-free close-coupling equations are solved and the rotational de-excitation rate coefficients for the lowest 15 rotational states of AlO+ are reported. A propensity rule favouring Δj = -1 transitions is found, and the hyperfine-resolved state-to-state rate coefficients are also discussed.
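    Fitting a potential energy surface with reproducing kernels amounts to solving one linear system for expansion coefficients at the ab initio geometries. The one-dimensional Python sketch below shows the pattern; it substitutes a Gaussian kernel for the reciprocal-power reproducing kernels typically used for distance-like coordinates, and the Morse-like "data" points are synthetic.

```python
import numpy as np

def kernel(r, rp, width=0.6):
    # Gaussian stand-in for the reciprocal-power RKHS kernels usually
    # chosen for distance-like coordinates in PES fitting
    return np.exp(-(r - rp)**2 / (2 * width**2))

# synthetic "ab initio" energies on a 1D cut of a potential (Morse-like)
R = np.linspace(1.5, 8.0, 25)
V = (1.0 - np.exp(-(R - 3.0)))**2 - 1.0

K = kernel(R[:, None], R[None, :])
c = np.linalg.solve(K + 1e-10 * np.eye(len(R)), V)  # interpolation coefficients

def V_rkhs(r):
    """PES as a linear combination of kernels centred at the data points."""
    return kernel(np.atleast_1d(r)[:, None], R[None, :]) @ c

print(V_rkhs(np.array([2.0, 3.0, 5.0])))  # close to the underlying curve
```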

  12. Gradient descent for robust kernel-based regression

    NASA Astrophysics Data System (ADS)

    Guo, Zheng-Chu; Hu, Ting; Shi, Lei

    2018-06-01

    In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, and covers a wide range of commonly used robust losses for regression. There is a gap between the theoretical analysis of empirical risk minimization based on such losses and the optimization process: the estimator must be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. We aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, gradient updates with early stopping rules can approximate the regression function. Our error analysis leads to convergence in both the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set support our theoretical results.
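    The windowed loss and early-stopped gradient flow are straightforward to picture in code. The Python sketch below uses the Welsch window G(u) = 1 - exp(-u), one loss of the kind the framework covers, with a fixed iteration budget as the early-stopping rule; the kernel, bandwidth, and step size are illustrative assumptions.

```python
import numpy as np

def gram(x, xp, width=0.3):
    return np.exp(-(x[:, None] - xp[None, :])**2 / (2 * width**2))

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)
y[::10] += 3.0                      # gross outliers

K = gram(x, x)
sigma = 1.0                          # scale parameter of the robust loss
alpha = np.zeros(n)                  # f = sum_i alpha_i k(x_i, .)
eta, T = 0.5, 200                    # step size and early-stopping iteration

for _ in range(T):
    r = K @ alpha - y                # residuals f(x_i) - y_i
    w = np.exp(-r**2 / sigma**2)     # G'(r^2/sigma^2): outliers get tiny weight
    alpha -= eta * (w * r) / n       # gradient step in the RKHS

# stopping at T (instead of running to convergence) acts as regularization;
# alpha now defines f(x) = sum_i alpha_i k(x_i, x) that largely ignores outliers
```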

  13. Fredholm-Volterra Integral Equation with a Generalized Singular Kernel and its Numerical Solutions

    NASA Astrophysics Data System (ADS)

    El-Kalla, I. L.; Al-Bugami, A. M.

    2010-11-01

    In this paper, the existence and uniqueness of the solution of the Fredholm-Volterra integral equation (F-VIE) with a generalized singular kernel are discussed and proved in the space L2(Ω)×C(0,T). The Fredholm integral term (FIT) is considered in position while the Volterra integral term (VIT) is considered in time. Using a numerical technique we obtain a system of Fredholm integral equations (SFIEs). This system of integral equations can be reduced to a linear algebraic system (LAS) of equations by two different methods: the Toeplitz matrix method and the product Nyström method. Numerical examples are considered in which the generalized kernel takes the following forms: the Carleman function, the logarithmic form, the Cauchy kernel, and the Hilbert kernel.

  14. Improved initial guess with semi-subpixel level accuracy in digital image correlation by feature-based method

    NASA Astrophysics Data System (ADS)

    Zhang, Yunlu; Yan, Lei; Liou, Frank

    2018-05-01

    The quality of the initial guess of the deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level search stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by the novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel-level accuracy in cases with small or large translation, deformation, or rotation.

  15. Tensor manifold-based extreme learning machine for 2.5-D face recognition

    NASA Astrophysics Data System (ADS)

    Chong, Lee Ying; Ong, Thian Song; Teoh, Andrew Beng Jin

    2018-01-01

    We explore the use of the Gabor regional covariance matrix (GRCM), a flexible matrix-based descriptor that embeds the Gabor features in the covariance matrix, as a 2.5-D facial descriptor and an effective means of feature fusion for 2.5-D face recognition problems. Despite its promise, matching is not a trivial problem for GRCM since it is a special instance of a symmetric positive definite (SPD) matrix that resides in non-Euclidean space as a tensor manifold. This implies that GRCM is incompatible with existing vector-based classifiers and distance matchers. Therefore, we bridge the gap between the GRCM and the extreme learning machine (ELM), a vector-based classifier, for the 2.5-D face recognition problem. We put forward a tensor-manifold-compliant ELM and its two variants by embedding the SPD matrix randomly into a reproducing kernel Hilbert space (RKHS) via tensor kernel functions. To preserve the pairwise distances of the embedded data, we orthogonalize the random-embedded SPD matrix. Hence, classification can be done using a simple ridge regressor, an integrated component of ELM, on the random orthogonal RKHS. Experimental results show that our proposed method is able to improve the recognition performance and further enhance the computational efficiency.
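    Since the abstract leaves the tensor kernel functions unspecified, the Python sketch below shows one common way to embed SPD matrices into an RKHS, a Gaussian kernel in the log-Euclidean metric; treat it as an assumed illustration rather than the paper's kernel.

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_kernel(A, B, sigma=1.0):
    """Gaussian kernel on SPD matrices via the matrix logarithm."""
    d = np.linalg.norm(np.real(logm(A)) - np.real(logm(B)), 'fro')
    return np.exp(-d**2 / (2 * sigma**2))

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)   # toy SPD "covariance descriptors"
N = rng.standard_normal((5, 5))
B = N @ N.T + 5 * np.eye(5)
print(log_euclidean_kernel(A, A))  # 1.0: identical descriptors
print(log_euclidean_kernel(A, B))
```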

  16. The construction of a two-dimensional reproducing kernel function and its application in a biomedical model.

    PubMed

    Guo, Qi; Shen, Shu-Ting

    2016-04-29

    There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with no-flux boundary conditions. The reproducing kernel method has significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivity to singularities; study of the application of the reproducing kernel would therefore be advantageous. The aim is to apply new mathematical theory to the numerical solution of the ventricular muscle model so as to improve its precision in comparison with other current methods. A two-dimensional reproducing kernel function in space is constructed and applied to computing the solution of a two-dimensional cardiac tissue model by means of the difference method through time and the reproducing kernel method through space. Compared with other methods, this method holds several advantages such as high accuracy in computing solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and can arbitrarily change the location and density of the solution on different time layers. The reproducing kernel method has higher solution accuracy and stability in the solutions of the two-dimensional cardiac tissue model.

  17. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    NASA Astrophysics Data System (ADS)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include the linear, polynomial, and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% compared to the linear kernel function, and by up to 1% compared to a 3rd-degree polynomial kernel function.
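    A comparison of this kind is easy to reproduce with any SVM library. The Python sketch below cross-validates linear, polynomial, and RBF kernels on synthetic features standing in for the multi-temporal polarimetric ones; the dataset and hyperparameters are placeholders, not the study's.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# synthetic stand-in for multi-temporal polarimetric features (3 crop classes)
X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

for kernel, params in [('linear', {}), ('poly', {'degree': 3}),
                       ('rbf', {'gamma': 'scale'})]:
    clf = SVC(kernel=kernel, C=10.0, **params)
    oa = cross_val_score(clf, X, y, cv=5).mean()   # overall accuracy estimate
    print(f'{kernel:6s} OA ~ {oa:.3f}')
```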

  18. Reactive collisions for NO(2Π) + N(4S) at temperatures relevant to the hypersonic flight regime.

    PubMed

    Denis-Alpizar, Otoniel; Bemish, Raymond J; Meuwly, Markus

    2017-01-18

    The NO(X 2 Π) + N( 4 S) reaction, which occurs entirely in the triplet manifold of N 2 O, is investigated using quasiclassical trajectories and quantum simulations. Fully-dimensional potential energy surfaces for the 3 A' and 3 A'' states are computed at the MRCI+Q level of theory and are represented using a reproducing kernel Hilbert space. The N-exchange and N 2 -formation channels are followed using the multi-state adiabatic reactive molecular dynamics method. Up to 5000 K these reactions occur predominantly on the N 2 O 3 A'' surface. However, at higher temperatures the contributions of the 3 A' and 3 A'' states are comparable and the final state distributions are far from thermal equilibrium. From the trajectory simulations a new set of thermal rate coefficients for temperatures up to 20 000 K is determined. Comparison of the quasiclassical trajectory and quantum simulations shows that a classical description is a good approximation, as determined from the final state analysis.

  19. Assessing Predictive Properties of Genome-Wide Selection in Soybeans

    PubMed Central

    Xavier, Alencar; Muir, William M.; Rainey, Katy Martin

    2016-01-01

    Many economically important traits in plant breeding have low heritability or are difficult to measure. For these traits, genomic selection has attractive features and may boost genetic gains. Our goal was to evaluate alternative scenarios to implement genomic selection for yield components in soybean (Glycine max L. Merr.). We used a nested association panel with cross validation to evaluate the impacts of training population size, genotyping density, and prediction model on the accuracy of genomic prediction. Our results indicate that training population size was the factor most relevant to improvement in genome-wide prediction, with the greatest improvement observed in training sets of up to 2000 individuals. We discuss assumptions that influence the choice of the prediction model. Although alternative models had minor impacts on prediction accuracy, the most robust prediction model was the combination of reproducing kernel Hilbert space regression and BayesB. Higher genotyping density marginally improved accuracy. Our study finds that breeding programs seeking efficient genomic selection in soybeans would best allocate resources by investing in a representative training set. PMID:27317786
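    In practice, RKHS regression for genomic prediction builds a kernel from the marker genotypes and solves a ridge-type linear system. The Python sketch below is a generic Gaussian-kernel version under assumed hyperparameters, not the exact model fitted in the study.

```python
import numpy as np

def genomic_rkhs_predict(M_tr, y_tr, M_te, h=1.0, lam=1.0):
    """Kernel ridge ('RKHS') regression on marker matrices M (rows = lines)."""
    def gram(A, B):
        d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (h * A.shape[1]))    # bandwidth scaled by marker count
    K = gram(M_tr, M_tr)
    alpha = np.linalg.solve(K + lam * np.eye(len(y_tr)), y_tr - y_tr.mean())
    return y_tr.mean() + gram(M_te, M_tr) @ alpha

rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(200, 500)).astype(float)       # 0/1/2 genotype codes
beta = rng.standard_normal(500) * (rng.random(500) < 0.05)  # sparse true effects
y = M @ beta + rng.standard_normal(200)
print(genomic_rkhs_predict(M[:150], y[:150], M[150:])[:5])
```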

  20. Comparison Between Linear and Non-parametric Regression Models for Genome-Enabled Prediction in Wheat

    PubMed Central

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-01-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882

  1. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    PubMed

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.

  2. Exact calculation of the time convolutionless master equation generator: Application to the nonequilibrium resonant level model

    NASA Astrophysics Data System (ADS)

    Kidon, Lyran; Wilner, Eli Y.; Rabani, Eran

    2015-12-01

    The generalized quantum master equation provides a powerful tool to describe the dynamics in quantum impurity models driven away from equilibrium. Two complementary approaches, one based on the Nakajima-Zwanzig-Mori time-convolution (TC) and the other on the Tokuyama-Mori time-convolutionless (TCL) formulations, provide a starting point to describe the time-evolution of the reduced density matrix. A key in both approaches is to obtain the so-called "memory kernel" or "generator," going beyond second or fourth order perturbation techniques. While numerically converged techniques are available for the TC memory kernel, the canonical approach to obtain the TCL generator is based on inverting a super-operator in the full Hilbert space, which is difficult to perform and thus, nearly all applications of the TCL approach rely on a perturbative scheme of some sort. Here, the TCL generator is expressed using a reduced system propagator which can be obtained from system observables alone and requires the calculation of super-operators and their inverse in the reduced Hilbert space rather than the full one. This makes the formulation amenable to quantum impurity solvers or to diagrammatic techniques, such as the nonequilibrium Green's function. We implement the TCL approach for the resonant level model driven away from equilibrium and compare the time scales for the decay of the generator with that of the memory kernel in the TC approach. Furthermore, the effects of temperature, source-drain bias, and gate potential on the TCL/TC generators are discussed.

  3. Basilar-membrane responses to broadband noise modeled using linear filters with rational transfer functions.

    PubMed

    Recio-Spinoso, Alberto; Fan, Yun-Hui; Ruggero, Mario A

    2011-05-01

    Basilar-membrane responses to white Gaussian noise were recorded using laser velocimetry at basal sites of the chinchilla cochlea with characteristic frequencies near 10 kHz, and first-order Wiener kernels were computed by cross-correlation of the stimuli and the responses. The presence or absence of minimum-phase behavior was explored by fitting the kernels with discrete linear filters with rational transfer functions. Excellent fits to the kernels were obtained with filters whose transfer functions include zeros located outside the unit circle, implying nonminimum-phase behavior. These filters accurately predicted basilar-membrane responses to other noise stimuli presented at the same level as the stimulus used for the kernel computation. Fits with all-pole and other minimum-phase discrete filters were inferior to fits with nonminimum-phase filters. Minimum-phase functions predicted from the amplitude functions of the Wiener kernels by Hilbert transforms differed from the measured phase curves. These results, which suggest that basilar-membrane responses do not have the minimum-phase property, challenge the validity of models of cochlear processing that incorporate minimum-phase behavior. © 2011 IEEE
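    The first-order Wiener kernel referred to above is simply the input-output cross-correlation when the stimulus is white Gaussian noise. A minimal Python sketch on a synthetic filter follows; the filter shape and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 200_000, 64
x = rng.standard_normal(n)                    # white Gaussian noise stimulus
h_true = np.exp(-np.arange(L) / 8.0) * np.sin(2 * np.pi * np.arange(L) / 16.0)
y = np.convolve(x, h_true)[:n] + 0.1 * rng.standard_normal(n)

# first-order Wiener kernel: cross-correlation of stimulus and response,
# normalized by the stimulus power (unit variance here)
k1 = np.array([np.dot(y[m:], x[:n - m]) for m in range(L)]) / n
print(np.max(np.abs(k1 - h_true)))            # small: kernel ~ impulse response
```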

  4. A Primer on Vibrational Ball Bearing Feature Generation for Prognostics and Diagnostics Algorithms

    DTIC Science & Technology

    2015-03-01

    [Only fragments of this report survive extraction: table-of-contents entries for the Atlas-Marks (cone-shaped kernel) distribution and the Hilbert-Huang transform, and glossary definitions of surface fatigue (subsurface damage that progresses to the bearing surface where material separates, also known as pitting, spalling, or flaking) and wear (normal degradation caused by dirt and foreign particles abrading the contact surfaces over time, altering the raceway).]

  5. Sensitivities Kernels of Seismic Traveltimes and Amplitudes for Quality Factor and Boundary Topography

    NASA Astrophysics Data System (ADS)

    Hsieh, M.; Zhao, L.; Ma, K.

    2010-12-01

    The finite-frequency approach enables seismic tomography to fully utilize the spatial and temporal distributions of the seismic wavefield to improve resolution. In achieving this goal, one of the most important tasks is to compute, efficiently and accurately, the (Fréchet) sensitivity kernels of finite-frequency seismic observables such as traveltime and amplitude with respect to perturbations of the model parameters. In the scattering-integral approach, the Fréchet kernels are expressed in terms of the strain Green tensors (SGTs), and a pre-established SGT database is necessary to achieve practical efficiency for a three-dimensional reference model in which the SGTs must be calculated numerically. Methods for computing Fréchet kernels for seismic velocities have long been established. In this study, we develop algorithms based on the finite-difference method for calculating Fréchet kernels for the quality factor Qμ and seismic boundary topography. Kernels for the quality factor can be obtained in a way similar to those for seismic velocities with the help of the Hilbert transform. The effects of seismic velocities and the quality factor on either traveltime or amplitude are coupled. Kernels for boundary topography involve the spatial gradient of the SGTs and also exhibit interesting finite-frequency characteristics. Examples of quality factor and boundary topography kernels are shown for a realistic model of the Taiwan region with three-dimensional velocity variations as well as surface and Moho discontinuity topography.

  6. Genome-wide regression and prediction with the BGLR statistical package.

    PubMed

    Pérez, Paulino; de los Campos, Gustavo

    2014-10-01

    Many modern genomic data analyses require implementing regressions where the number of parameters (p, e.g., the number of marker effects) exceeds sample size (n). Implementing these large-p-with-small-n regressions poses several statistical and computational challenges, some of which can be confronted using Bayesian methods. This approach allows integrating various parametric and nonparametric shrinkage and variable selection procedures in a unified and consistent manner. The BGLR R-package implements a large collection of Bayesian regression models, including parametric variable selection and shrinkage methods and semiparametric procedures (Bayesian reproducing kernel Hilbert spaces regressions, RKHS). The software was originally developed for genomic applications; however, the methods implemented are useful for many nongenomic applications as well. The response can be continuous (censored or not) or categorical (either binary or ordinal). The algorithm is based on a Gibbs sampler with scalar updates and the implementation takes advantage of efficient compiled C and Fortran routines. In this article we describe the methods implemented in BGLR, present examples of the use of the package, and discuss practical issues emerging in real-data analysis. Copyright © 2014 by the Genetics Society of America.

  7. FAST TRACK COMMUNICATION: General approach to SU(n) quasi-distribution functions

    NASA Astrophysics Data System (ADS)

    Klimov, Andrei B.; de Guise, Hubert

    2010-10-01

    We propose an operational form for the kernel of a mapping between an operator acting in a Hilbert space of a quantum system with an SU(n) symmetry group and its symbol in the corresponding classical phase space. For symmetric irreps of SU(n), this mapping is bijective. We briefly discuss complications that will occur in the general case.

  8. Implicit kernel sparse shape representation: a sparse-neighbors-based object segmentation framework.

    PubMed

    Yao, Jincao; Yu, Huimin; Hu, Roland

    2017-01-01

    This paper introduces a new implicit-kernel-sparse-shape-representation-based object segmentation framework. Given an input object whose shape is similar to some of the elements in the training set, the proposed model can automatically find a cluster of implicit kernel sparse neighbors to approximately represent the input shape and guide the segmentation. A distance-constrained probabilistic definition together with a dualization energy term is developed to connect high-level shape representation and low-level image information. We theoretically prove that our model not only derives from two projected convex sets but is also equivalent to a sparse-reconstruction-error-based representation in the Hilbert space. Finally, a "wake-sleep"-based segmentation framework is applied to drive the evolutionary curve to recover the original shape of the object. We test our model on two public datasets. Numerical experiments on both synthetic images and real applications show the superior capabilities of the proposed framework.

  9. Local coding based matching kernel method for image classification.

    PubMed

    Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  10. A new numerical approach for uniquely solvable exterior Riemann-Hilbert problem on region with corners

    NASA Astrophysics Data System (ADS)

    Zamzamir, Zamzana; Murid, Ali H. M.; Ismail, Munira

    2014-06-01

    The numerical solution of the uniquely solvable exterior Riemann-Hilbert problem on a region with corners, evaluated at off-corner points, has been explored by discretizing the related integral equation using the Picard iteration method without any modifications to the left-hand side (LHS) and right-hand side (RHS) of the integral equation. The numerical solutions converge to the required solution for all iterations. However, for certain problems this approach gives lower accuracy. Hence, this paper presents a new numerical approach for the problem by treating the generalized Neumann kernel on the LHS and the function on the RHS of the integral equation. Due to the existence of the corner points, Gaussian quadrature is employed, which avoids the corner points during numerical integration. A numerical example on a test region is presented to demonstrate the effectiveness of this formulation.

  11. Rotational relaxation of CF+(X1Σ) in collision with He(1S)

    NASA Astrophysics Data System (ADS)

    Denis-Alpizar, O.; Inostroza, N.; Castro Palacio, J. C.

    2018-01-01

    The carbon monofluoride cation (CF+) has been detected recently in Galactic and extragalactic regions. Therefore, excitation rate coefficients of this molecule in collision with He and H2 are necessary for a correct interpretation of the astronomical observations. The main goal of this work is to study the collision of CF+ with He in full dimensionality at the close-coupling level and to report a large set of rotational rate coefficients. New ab initio interaction energies at the CCSD(T)/aug-cc-pV5Z level of theory were computed, and a three-dimensional potential energy surface was represented using a reproducing kernel Hilbert space. Close-coupling scattering calculations were performed at collisional energies up to 1600 cm-1 in the ground vibrational state. The vibrational quenching cross-sections were found to be at least three orders of magnitude lower than the pure rotational cross-sections. The collisional rate coefficients are reported for the lowest 20 rotational states of CF+, and an even propensity rule was found to be in action only for j > 4. Finally, the hyperfine rate coefficients are explored. These data can be useful for the determination of the interstellar conditions where this molecule has been detected.

  12. Study of the formation of interstellar CF+ from the HF + C + →CF+ + H reaction

    NASA Astrophysics Data System (ADS)

    Denis-Alpizar, Otoniel; Guzmán, Viviana V.; Inostroza, Natalia

    2018-06-01

    The detection of the carbon monofluoride cation CF+ has been taken as support for theories of fluorine chemistry in the interstellar medium (ISM). This molecule is formed by the reaction of HF with C+. The rate of this reaction has been estimated previously by two different groups; however, the two estimates disagree. The main goal of the present work is to study the HF + C+ reaction and determine new reactive rate coefficients. A large set of ab initio energies at the MRCI-F12/cc-pVQZ-F12 level was computed. The first reactive potential energy surface (PES) for the HF + C+ → CF+ + H reaction was developed using a reproducing kernel Hilbert space (RKHS) based method. The dynamics of the reaction was followed using quasiclassical trajectories (QCT). The results of these calculations showed that CF+ is produced in excited vibrational states. The rate coefficients for the HF + C+ → CF+ + H reaction from 50 K up to 2000 K are reported. The impact of these new data on astrophysical models for the determination of interstellar conditions is also explored.

  13. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    NASA Astrophysics Data System (ADS)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data points. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  14. A new approach to approximating the linear quadratic optimal control law for hereditary systems with control delays

    NASA Technical Reports Server (NTRS)

    Milman, M. H.

    1985-01-01

    A factorization approach is presented for deriving approximations to the optimal feedback gain for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the feedback kernels.

  15. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to form the foundation of the regression. Furthermore, a manifold regularization term, which exploits the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. A paired-dataset-based k-nn score estimation method is then applied to the robust background and potential anomaly datasets to produce the detection output. The experimental results show that RBRSE achieves better ROC curves, AUC values, and background-anomaly separation than some other state-of-the-art anomaly detection methods, and is easy to implement in practice.

  16. Context-invariant quasi hidden variable (qHV) modelling of all joint von Neumann measurements for an arbitrary Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loubenets, Elena R.

    We prove the existence, for each Hilbert space, of two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures and all the quantum product expectations via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations, but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dim H ≥ 3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper point also to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].

  17. Enriched reproducing kernel particle method for fractional advection-diffusion equation

    NASA Astrophysics Data System (ADS)

    Ying, Yuping; Lian, Yanping; Tang, Shaoqiang; Liu, Wing Kam

    2018-06-01

    The reproducing kernel particle method (RKPM) has been efficiently applied to problems with large deformations, high gradients and high modal density. In this paper, it is extended to solve a nonlocal problem modeled by a fractional advection-diffusion equation (FADE), which exhibits a boundary layer with low regularity. We formulate this method on a moving least-square approach. Via the enrichment of fractional-order power functions to the traditional integer-order basis for RKPM, leading terms of the solution to the FADE can be exactly reproduced, which guarantees a good approximation to the boundary layer. Numerical tests are performed to verify the proposed approach.

  18. Functional identification of spike-processing neural circuits.

    PubMed

    Lazar, Aurel A; Slutskiy, Yevgeniy B

    2014-02-01

    We introduce a novel approach for a complete functional identification of biophysical spike-processing neural circuits. The circuits considered accept multidimensional spike trains as their input and comprise a multitude of temporal receptive fields and conductance-based models of action potential generation. Each temporal receptive field describes the spatiotemporal contribution of all synapses between any two neurons and incorporates the (passive) processing carried out by the dendritic tree. The aggregate dendritic current produced by a multitude of temporal receptive fields is encoded into a sequence of action potentials by a spike generator modeled as a nonlinear dynamical system. Our approach builds on the observation that during any experiment, an entire neural circuit, including its receptive fields and biophysical spike generators, is projected onto the space of stimuli used to identify the circuit. Employing the reproducing kernel Hilbert space (RKHS) of trigonometric polynomials to describe input stimuli, we quantitatively describe the relationship between underlying circuit parameters and their projections. We also derive experimental conditions under which these projections converge to the true parameters. In doing so, we achieve the mathematical tractability needed to characterize the biophysical spike generator and identify the multitude of receptive fields. The algorithms obviate the need to repeat experiments in order to compute the neurons' rate of response, rendering our methodology of interest to both experimental and theoretical neuroscientists.

  19. Exact calculation of the time convolutionless master equation generator: Application to the nonequilibrium resonant level model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kidon, Lyran; The Sackler Center for Computational Molecular and Materials Science, Tel Aviv University, Tel Aviv 69978; Wilner, Eli Y.

    2015-12-21

    The generalized quantum master equation provides a powerful tool to describe the dynamics in quantum impurity models driven away from equilibrium. Two complementary approaches, one based on the Nakajima-Zwanzig-Mori time-convolution (TC) and the other on the Tokuyama-Mori time-convolutionless (TCL) formulations, provide a starting point to describe the time-evolution of the reduced density matrix. A key in both approaches is to obtain the so-called "memory kernel" or "generator," going beyond second or fourth order perturbation techniques. While numerically converged techniques are available for the TC memory kernel, the canonical approach to obtain the TCL generator is based on inverting a super-operator in the full Hilbert space, which is difficult to perform and thus, nearly all applications of the TCL approach rely on a perturbative scheme of some sort. Here, the TCL generator is expressed using a reduced system propagator which can be obtained from system observables alone and requires the calculation of super-operators and their inverse in the reduced Hilbert space rather than the full one. This makes the formulation amenable to quantum impurity solvers or to diagrammatic techniques, such as the nonequilibrium Green's function. We implement the TCL approach for the resonant level model driven away from equilibrium and compare the time scales for the decay of the generator with that of the memory kernel in the TC approach. Furthermore, the effects of temperature, source-drain bias, and gate potential on the TCL/TC generators are discussed.

  20. A tensor Banach algebra approach to abstract kinetic equations

    NASA Astrophysics Data System (ADS)

    Greenberg, W.; van der Mee, C. V. M.

    The study deals with a concrete algebraic construction providing the existence theory for abstract kinetic equation boundary-value problems, when the collision operator A is an accretive finite-rank perturbation of the identity operator in a Hilbert space H. An algebraic generalization of the Bochner-Phillips theorem is utilized to study solvability of the abstract boundary-value problem without any regularity condition. A Banach algebra in which the convolution kernel acts is obtained explicitly, and this result is used to prove a perturbation theorem for bisemigroups, which then plays a vital role in solving the original equations.

  1. Brain tumor image segmentation using kernel dictionary learning.

    PubMed

    Jeon Lee; Seung-Jun Kim; Rong Chen; Herskovits, Edward H

    2015-08-01

    Automated brain tumor image segmentation with high accuracy and reproducibility holds great potential to enhance current clinical practice. Dictionary learning (DL) techniques have recently been applied successfully to various image processing tasks. In this work, kernel extensions of the DL approach are adopted. Both reconstructive and discriminative versions of the kernel DL technique are considered, which can efficiently incorporate multi-modal nonlinear feature mappings based on the kernel trick. Our novel discriminative kernel DL formulation allows joint learning of a task-driven kernel-based dictionary and a linear classifier using a K-SVD-type algorithm. The proposed approaches were tested using real brain magnetic resonance (MR) images of patients with high-grade glioma. The preliminary performances obtained are competitive with the state of the art. The discriminative kernel DL approach is seen to reduce the computational burden without much sacrifice in performance.

  2. Dirac’s magnetic monopole and the Kontsevich star product

    NASA Astrophysics Data System (ADS)

    Soloviev, M. A.

    2018-03-01

    We examine relationships between various quantization schemes for an electrically charged particle in the field of a magnetic monopole. Quantization maps are defined in invariant geometrical terms, appropriate to the case of nontrivial topology, and are constructed for two operator representations. In the first setting, the quantum operators act on the Hilbert space of sections of a nontrivial complex line bundle associated with the Hopf bundle, whereas the second approach uses instead a quaternionic Hilbert module of sections of a trivial quaternionic line bundle. We show that these two quantizations are naturally related by a bundle morphism and, as a consequence, induce the same phase-space star product. We obtain explicit expressions for the integral kernels of star products corresponding to various operator orderings and calculate their asymptotic expansions up to the third order in the Planck constant ℏ. We also show that the differential form of the magnetic Weyl product corresponding to the symmetric ordering agrees completely with the Kontsevich formula for deformation quantization of Poisson structures and can be represented by Kontsevich's graphs.

  3. Pixel-based meshfree modelling of skeletal muscles.

    PubMed

    Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu

    2016-01-01

    This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows the construction of a simulation model based on pixel data obtained from medical images. The material properties and muscle fiber directions obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transitions. A multiphase multichannel level-set-based segmentation framework is adopted for individual muscle segmentation using Magnetic Resonance Images (MRI) and DTI. The application of the proposed methods to modeling the human lower leg is demonstrated.
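    The core of the RK approximation is a window function multiplied by a polynomial correction that enforces the reproducing (completeness) conditions. A one-dimensional Python sketch with linear completeness is given below; the node layout and support size are arbitrary illustration choices.

```python
import numpy as np

def cubic_spline_window(z):
    """Compactly supported kernel (cubic B-spline), support |z| <= 2."""
    z = np.abs(z)
    return np.where(z < 1, 2/3 - z**2 + z**3/2,
           np.where(z < 2, (2 - z)**3 / 6, 0.0))

def rk_shape_functions(x, nodes, a):
    """1D RK shape functions with linear completeness:
    psi_I(x) = H(0)^T M(x)^{-1} H(x - x_I) w((x - x_I)/a)."""
    w = cubic_spline_window((x - nodes) / a)
    H = np.vstack([np.ones_like(nodes), x - nodes])  # basis [1, x - x_I]
    M = (H * w) @ H.T                                # moment matrix
    b = np.linalg.solve(M, np.array([1.0, 0.0]))     # H(0) = [1, 0]^T
    return (b @ H) * w

nodes = np.linspace(0.0, 1.0, 11)
psi = rk_shape_functions(0.37, nodes, a=0.25)
print(psi.sum(), psi @ nodes)   # reproduces constants and x: 1.0 and 0.37
```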

  4. Numerical study of the ignition behavior of a post-discharge kernel injected into a turbulent stratified cross-flow

    NASA Astrophysics Data System (ADS)

    Jaravel, Thomas; Labahn, Jeffrey; Ihme, Matthias

    2017-11-01

    The reliable initiation of flame ignition by high-energy spark kernels is critical for the operability of aviation gas turbines. The evolution of a spark kernel ejected by an igniter into a turbulent stratified environment is investigated using detailed numerical simulations with complex chemistry. At early times post ejection, comparisons of simulation results with high-speed Schlieren data show that the initial trajectory of the kernel is well reproduced, with a significant amount of air entrainment from the surrounding flow induced by the kernel ejection. After transiting through a non-flammable mixture, the kernel reaches a second stream of flammable methane-air mixture, where the success of kernel ignition was found to depend on the local flow state and operating conditions. By performing parametric studies, the probability of kernel ignition was identified and compared with experimental observations. The ignition behavior is characterized by analyzing the local chemical structure, and its stochastic variability is also investigated.

  5. Device-Independent Tests of Classical and Quantum Dimensions

    NASA Astrophysics Data System (ADS)

    Gallego, Rodrigo; Brunner, Nicolas; Hadley, Christopher; Acín, Antonio

    2010-12-01

    We address the problem of testing the dimensionality of classical and quantum systems in a “black-box” scenario. We develop a general formalism for tackling this problem. This allows us to derive lower bounds on the classical dimension necessary to reproduce given measurement data. Furthermore, we generalize the concept of quantum dimension witnesses to arbitrary quantum systems, allowing one to place a lower bound on the Hilbert space dimension necessary to reproduce certain data. Illustrating these ideas, we provide simple examples of classical and quantum dimension witnesses.

  6. Minimum Dimension of a Hilbert Space Needed to Generate a Quantum Correlation.

    PubMed

    Sikora, Jamie; Varvitsiotis, Antonios; Wei, Zhaohui

    2016-08-05

    Consider a two-party correlation that can be generated by performing local measurements on a bipartite quantum system. A question of fundamental importance is to understand how many resources, which we quantify by the dimension of the underlying quantum system, are needed to reproduce this correlation. In this Letter, we identify an easy-to-compute lower bound on the smallest Hilbert space dimension needed to generate a given two-party quantum correlation. We show that our bound is tight on many well-known correlations and discuss how it can rule out the possibility that certain correlations have a finite-dimensional quantum representation. We show that our bound is multiplicative under product correlations and that it can also witness the nonconvexity of certain restricted-dimensional quantum correlations.

  7. An experimental validation of genomic selection in octoploid strawberry

    PubMed Central

    Gezan, Salvador A; Osorio, Luis F; Verma, Sujeet; Whitaker, Vance M

    2017-01-01

    The primary goal of genomic selection is to increase genetic gains for complex traits by predicting the performance of individuals for which phenotypic data are not available. The objective of this study was to experimentally evaluate the potential of genomic selection in strawberry breeding and to define a strategy for its implementation. Four clonally replicated field trials, two in each of two years and comprising a total of 1628 individuals, were established in 2013–2014 and 2014–2015. Five complex yield and fruit quality traits with moderate to low heritability were assessed in each trial. High-density genotyping was performed with the Affymetrix Axiom IStraw90 single-nucleotide polymorphism array, and 17 479 polymorphic markers were chosen for analysis. Several methods were compared, including Genomic BLUP, Bayes B, Bayes C, Bayesian LASSO Regression, Bayesian Ridge Regression and Reproducing Kernel Hilbert Spaces. Cross-validation within training populations resulted in higher values than true validations across trials. For true validations, Bayes B gave the highest predictive abilities on average and also the highest selection efficiencies, particularly for the yield traits, which had the lowest heritabilities. Selection efficiencies using Bayes B for parent selection ranged from 74% for average fruit weight to 34% for early marketable yield. A breeding strategy is proposed in which advanced selection trials are utilized as training populations and in which genomic selection can reduce the breeding cycle from 3 to 2 years for a subset of untested parents based on their predicted genomic breeding values. PMID:28090334

  8. Nonparametric method for genomics-based prediction of performance of quantitative traits involving epistasis in plant breeding.

    PubMed

    Sun, Xiaochun; Ma, Ping; Mumm, Rita H

    2012-01-01

    Genomic selection (GS) procedures have proven useful in estimating breeding value and predicting phenotype with genome-wide molecular marker information. However, issues of high dimensionality, multicollinearity, and the inability to deal effectively with epistasis can jeopardize accuracy and predictive ability. We, therefore, propose a new nonparametric method, pRKHS, which combines the features of supervised principal component analysis (SPCA) and reproducing kernel Hilbert spaces (RKHS) regression, with versions for traits with no/low epistasis, pRKHS-NE, to high epistasis, pRKHS-E. Instead of assigning a specific relationship to represent the underlying epistasis, the method maps genotype to phenotype in a nonparametric way, thus requiring fewer genetic assumptions. SPCA decreases the number of markers needed for prediction by filtering out low-signal markers with the optimal marker set determined by cross-validation. Principal components are computed from reduced marker matrix (called supervised principal components, SPC) and included in the smoothing spline ANOVA model as independent variables to fit the data. The new method was evaluated in comparison with current popular methods for practicing GS, specifically RR-BLUP, BayesA, BayesB, as well as a newer method by Crossa et al., RKHS-M, using both simulated and real data. Results demonstrate that pRKHS generally delivers greater predictive ability, particularly when epistasis impacts trait expression. Beyond prediction, the new method also facilitates inferences about the extent to which epistasis influences trait expression.

  9. Nonparametric Method for Genomics-Based Prediction of Performance of Quantitative Traits Involving Epistasis in Plant Breeding

    PubMed Central

    Sun, Xiaochun; Ma, Ping; Mumm, Rita H.

    2012-01-01

    Genomic selection (GS) procedures have proven useful in estimating breeding value and predicting phenotype with genome-wide molecular marker information. However, issues of high dimensionality, multicollinearity, and the inability to deal effectively with epistasis can jeopardize accuracy and predictive ability. We, therefore, propose a new nonparametric method, pRKHS, which combines the features of supervised principal component analysis (SPCA) and reproducing kernel Hilbert spaces (RKHS) regression, with versions for traits with no/low epistasis, pRKHS-NE, to high epistasis, pRKHS-E. Instead of assigning a specific relationship to represent the underlying epistasis, the method maps genotype to phenotype in a nonparametric way, thus requiring fewer genetic assumptions. SPCA decreases the number of markers needed for prediction by filtering out low-signal markers with the optimal marker set determined by cross-validation. Principal components are computed from reduced marker matrix (called supervised principal components, SPC) and included in the smoothing spline ANOVA model as independent variables to fit the data. The new method was evaluated in comparison with current popular methods for practicing GS, specifically RR-BLUP, BayesA, BayesB, as well as a newer method by Crossa et al., RKHS-M, using both simulated and real data. Results demonstrate that pRKHS generally delivers greater predictive ability, particularly when epistasis impacts trait expression. Beyond prediction, the new method also facilitates inferences about the extent to which epistasis influences trait expression. PMID:23226325

  10. Estimation of spline function in nonparametric path analysis based on penalized weighted least square (PWLS)

    NASA Astrophysics Data System (ADS)

    Fernandes, Adji Achmad Rinaldo; Solimun, Arisoesilaningsih, Endang

    2017-12-01

    The aim of this research is to estimate the spline in nonparametric-regression-based path analysis using the penalized weighted least squares (PWLS) approach. The approach used is a reproducing kernel Hilbert space on a Sobolev space. The nonparametric path analysis model is
    $$y_{1i} = f_{1.1}(x_{1i}) + \varepsilon_{1i}, \qquad y_{2i} = f_{1.2}(x_{1i}) + f_{2.2}(y_{1i}) + \varepsilon_{2i}, \qquad i = 1, 2, \ldots, n.$$
    The nonparametric path analysis estimator minimizing the PWLS criterion
    $$\min_{f_{w.k} \in W_2^m[a_{w.k}, b_{w.k}],\, k=1,2} \left\{ (2n)^{-1} (\tilde{y} - \tilde{f})^{T} \Sigma^{-1} (\tilde{y} - \tilde{f}) + \sum_{k=1}^{2} \sum_{w=1}^{2} \lambda_{w.k} \int_{a_{w.k}}^{b_{w.k}} \left[ f_{w.k}^{(m)}(x) \right]^{2} dx \right\}$$
    is $\hat{\tilde{f}} = A\tilde{y}$, with
    $$A = T_1 (T_1^{T} U_1^{-1} \Sigma^{-1} T_1)^{-1} T_1^{T} U_1^{-1} \Sigma^{-1} + V_1 U_1^{-1} \Sigma^{-1} \left[ I - T_1 (T_1^{T} U_1^{-1} \Sigma^{-1} T_1)^{-1} T_1^{T} U_1^{-1} \Sigma^{-1} \right] + T_2 (T_2^{T} U_2^{-1} \Sigma^{-1} T_2)^{-1} T_2^{T} U_2^{-1} \Sigma^{-1} + V_2 U_2^{-1} \Sigma^{-1} \left[ I - T_2 (T_2^{T} U_2^{-1} \Sigma^{-1} T_2)^{-1} T_2^{T} U_2^{-1} \Sigma^{-1} \right].$$
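
    A minimal one-dimensional analogue of the PWLS estimator above, assuming a truncated-power spline basis and a generic second-difference roughness penalty: it illustrates the ridge-type closed form f-hat = A y, not the paper's bivariate path-analysis construction.

        import numpy as np

        rng = np.random.default_rng(1)
        x = np.sort(rng.uniform(0.0, 1.0, 100))
        y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, 100)

        # Truncated-power spline basis and a second-difference roughness
        # penalty; W plays the role of Sigma^{-1} (identity here).
        knots = np.linspace(0.0, 1.0, 20)[1:-1]
        B = np.column_stack([np.ones_like(x), x] +
                            [np.maximum(x - k, 0.0) for k in knots])
        W = np.eye(x.size)
        D = np.diff(np.eye(B.shape[1]), 2, axis=0)
        lam = 1e-3

        # PWLS closed form: fhat = B (B'WB + lam D'D)^{-1} B'W y = A y
        A = B @ np.linalg.solve(B.T @ W @ B + lam * D.T @ D, B.T @ W)
        f_hat = A @ y
        print(np.abs(f_hat - np.sin(2 * np.pi * x)).mean())   # small smoothing error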

  11. Parametric and Nonparametric Statistical Methods for Genomic Selection of Traits with Additive and Epistatic Genetic Architectures

    PubMed Central

    Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.

    2014-01-01

    Parametric and nonparametric methods have been developed for purposes of predicting phenotypes. These methods are based on retrospective analyses of empirical data consisting of genotypic and phenotypic scores. Recent reports have indicated that parametric methods are unable to predict phenotypes of traits with known epistatic genetic architectures. Herein, we review parametric methods including least squares regression, ridge regression, Bayesian ridge regression, least absolute shrinkage and selection operator (LASSO), Bayesian LASSO, best linear unbiased prediction (BLUP), Bayes A, Bayes B, Bayes C, and Bayes Cπ. We also review nonparametric methods including Nadaraya-Watson estimator, reproducing kernel Hilbert space, support vector machine regression, and neural networks. We assess the relative merits of these 14 methods in terms of accuracy and mean squared error (MSE) using simulated genetic architectures consisting of completely additive or two-way epistatic interactions in an F2 population derived from crosses of inbred lines. Each simulated genetic architecture explained either 30% or 70% of the phenotypic variability. The greatest impact on estimates of accuracy and MSE was due to genetic architecture. Parametric methods were unable to predict phenotypic values when the underlying genetic architecture was based entirely on epistasis. Parametric methods were slightly better than nonparametric methods for additive genetic architectures. Distinctions among parametric methods for additive genetic architectures were incremental. Heritability, i.e., proportion of phenotypic variability, had the second greatest impact on estimates of accuracy and MSE. PMID:24727289

  12. CRKSPH: A new meshfree hydrodynamics method with applications to astrophysics

    NASA Astrophysics Data System (ADS)

    Owen, John Michael; Raskin, Cody; Frontiere, Nicholas

    2018-01-01

    The study of astrophysical phenomena such as supernovae, accretion disks, galaxy formation, and large-scale structure formation requires computational modeling of, at a minimum, hydrodynamics and gravity. Developing numerical methods appropriate for these kinds of problems requires a number of properties: shock-capturing hydrodynamics benefits from rigorous conservation of invariants such as total energy, linear momentum, and mass; the lack of obvious symmetries or of a simplified spatial geometry to exploit necessitates 3D methods that are ideally Galilean invariant; and the dynamic range of mass and spatial scales that need to be resolved can span many orders of magnitude, requiring methods that are highly adaptable in their space and time resolution. We have developed a new Lagrangian meshfree hydrodynamics method called Conservative Reproducing Kernel Smoothed Particle Hydrodynamics, or CRKSPH, in order to meet these goals. CRKSPH is a conservative generalization of the meshfree reproducing kernel method, combining the high-order accuracy of reproducing kernels with the explicit conservation of mass, linear momentum, and energy necessary to study shock-driven hydrodynamics in compressible fluids. CRKSPH's Lagrangian, particle-like nature makes it simple to combine with well-known N-body methods for modeling gravitation, similar to the older Smoothed Particle Hydrodynamics (SPH) method. Indeed, CRKSPH can be substituted for SPH in existing SPH codes due to these similarities. In comparison to SPH, CRKSPH is able to achieve substantially higher accuracy for a given number of points due to the explicitly consistent (and higher-order) interpolation theory of reproducing kernels, while maintaining the same conservation principles (and therefore applicability) as SPH. There are currently two coded implementations of CRKSPH available: one in the open-source research code Spheral, and the other in the high-performance cosmological code HACC. Using these codes we have applied CRKSPH to a number of astrophysical scenarios, such as rotating gaseous disks, supernova remnants, and large-scale cosmological structure formation. In this poster we present an overview of CRKSPH and show examples of these astrophysical applications.
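
    The heart of the reproducing kernel correction can be shown in one dimension: correction coefficients chosen from the discrete moments make interpolation exact for constant and linear fields. The sketch below uses a Gaussian base kernel for brevity, whereas CRKSPH itself employs compactly supported SPH kernels; positions, volumes, and the smoothing scale are hypothetical.

        import numpy as np

        rng = np.random.default_rng(2)
        xj = np.sort(rng.uniform(0.0, 1.0, 50))    # particle positions
        vol = np.gradient(xj)                      # per-particle volumes
        f = 2.0 + 3.0 * xj                         # linear field to reproduce

        def interpolate(x, h=0.05):
            dx = xj - x
            W = np.exp(-0.5 * (dx / h) ** 2)       # base kernel (Gaussian here)
            m0 = np.sum(vol * W)                   # discrete moments
            m1 = np.sum(vol * dx * W)
            m2 = np.sum(vol * dx ** 2 * W)
            B = -m1 / m2                           # linear-consistency conditions:
            A = 1.0 / (m0 + B * m1)                # sum(WR) = 1 and sum(dx*WR) = 0
            WR = A * (1.0 + B * dx) * W            # corrected reproducing kernel
            return np.sum(vol * f * WR)

        print(interpolate(0.5))                    # exactly 3.5 for a linear field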

  13. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    PubMed

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared between simulated and measured kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weights of plant organs well. The source-sink dynamics and the remobilization of carbohydrates for kernel growth were then quantified, showing that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. Remarks on entanglement entropy in string theory

    NASA Astrophysics Data System (ADS)

    Balasubramanian, Vijay; Parrikar, Onkar

    2018-03-01

    Entanglement entropy for spatial subregions is difficult to define in string theory because of the extended nature of strings. Here we propose a definition for bosonic open strings using the framework of string field theory. The key difference (compared to ordinary quantum field theory) is that the subregion is chosen inside a Cauchy surface in the "space of open string configurations." We first present a simple calculation of this entanglement entropy in free light-cone string field theory, ignoring subtleties related to the factorization of the Hilbert space. We reproduce the answer expected from an effective field theory point of view, namely a sum over the one-loop entanglement entropies corresponding to all the particle-excitations of the string, and further show that the full string theory regulates ultraviolet divergences in the entanglement entropy. We then revisit the question of factorization of the Hilbert space by analyzing the covariant phase-space associated with a subregion in Witten's covariant string field theory. We show that the pure gauge (i.e., BRST exact) modes in the string field become dynamical at the entanglement cut. Thus, a proper definition of the entropy must involve an extended Hilbert space, with new stringy edge modes localized at the entanglement cut.

  15. Pure endmember extraction using robust kernel archetypoid analysis for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Yang, Gang; Wu, Ke; Li, Weiyue; Zhang, Dianfa

    2017-09-01

    A robust kernel archetypoid analysis (RKADA) method is proposed to extract pure endmembers from hyperspectral imagery (HSI). RKADA assumes that each pixel is a sparse linear mixture of all endmembers and that each endmember corresponds to a real pixel in the image scene. First, it improves regular archetypal analysis with a new binary sparse constraint, and the adoption of a kernel function constructs the principal convex hull in an infinite Hilbert space and enlarges the divergences between pairwise pixels. Second, RKADA transfers the pure endmember extraction problem into an optimization problem by minimizing residual errors with the Huber loss function. The Huber loss function reduces the effects of large noise and outliers in the convergence procedure of RKADA and enhances the robustness of the optimization function. Third, random kernel sinks for fast kernel matrix approximation and a two-stage algorithm for optimizing the initial pure endmembers are utilized to improve the computational efficiency of RKADA in realistic implementations. The optimization equation of RKADA is solved using the block coordinate descent scheme and the desired pure endmembers are finally obtained. Six state-of-the-art pure endmember extraction methods are employed for comparison with RKADA on both synthetic and real Cuprite HSI datasets, including three geometrical algorithms, vertex component analysis (VCA), alternative volume maximization (AVMAX) and orthogonal subspace projection (OSP), and three matrix factorization algorithms, the preconditioning for successive projection algorithm (PreSPA), hierarchical clustering based on rank-two nonnegative matrix factorization (H2NMF) and self-dictionary multiple measurement vector (SDMMV). Experimental results show that RKADA outperforms all six methods in terms of spectral angle distance (SAD) and root-mean-square error (RMSE). Moreover, RKADA has short computational times in offline operations and shows significant improvement in identifying pure endmembers for ground objects with smaller spectral differences. Therefore, RKADA could be an alternative for pure endmember extraction from hyperspectral images.

  16. Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.

    PubMed

    Echinaka, Yuki; Ozeki, Yukiyasu

    2016-10-01

    The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Building on this approach, the bootstrap method is introduced and a numerical criterion for discriminating the transition type is proposed.
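
    A sketch of the idea, assuming Gaussian process regression as the nonparametric kernel fit: trial scaling exponents are scored by how well the collapsed data are described by a single smooth curve, with the GP log marginal likelihood as the figure of merit. The toy relaxation data and this particular scoring choice are illustrative, not the paper's Bayesian procedure.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        def collapse_score(t, obs, tau, lam):
            """Log marginal likelihood of a GP fit to the collapsed data."""
            X = np.log(t / tau).reshape(-1, 1)
            y = np.log(obs * t ** lam)
            gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(1e-2),
                                          normalize_y=True).fit(X, y)
            return gp.log_marginal_likelihood_value_

        rng = np.random.default_rng(3)
        t = np.tile(np.logspace(0, 3, 40), 3)
        tau = np.repeat([10.0, 30.0, 90.0], 40)       # three relaxation curves
        obs = t ** -0.25 * np.exp(-t / tau) * rng.lognormal(0.0, 0.05, t.size)

        print(collapse_score(t, obs, tau, 0.25))      # correct exponent: good collapse
        print(collapse_score(t, obs, tau, 0.60))      # wrong exponent: lower score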

  17. The Influence of Reconstruction Kernel on Bone Mineral and Strength Estimates Using Quantitative Computed Tomography and Finite Element Analysis.

    PubMed

    Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K

    2017-10-17

    Quantitative computed tomography has been posed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways fall on the hip loading configuration. Significant differences for all outcome measures, except integral bone volume, were observed between the 2 reconstruction kernels. Volumetric bone mineral density measured using images reconstructed by the standard kernel was significantly lower (6.7%, p < 0.001) when compared with images reconstructed using the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured in images reconstructed by the standard kernel were significantly lower (16.5%, p < 0.001, and 18.2%, p < 0.001, respectively) when compared with the image reconstructed by the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  18. Domain adaptation via transfer component analysis.

    PubMed

    Pan, Sinno Jialin; Tsang, Ivor W; Kwok, James T; Yang, Qiang

    2011-02-01

    Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean discrepancy (MMD). In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.
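
    The distance TCA minimizes across domains is the maximum mean discrepancy; a minimal biased estimator of the squared MMD with an RBF kernel looks as follows (hypothetical data, arbitrary bandwidth gamma).

        import numpy as np

        def rbf(A, B, gamma=1.0):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def mmd2(X, Y, gamma=1.0):
            """Biased estimator of the squared MMD."""
            return (rbf(X, X, gamma).mean() + rbf(Y, Y, gamma).mean()
                    - 2.0 * rbf(X, Y, gamma).mean())

        rng = np.random.default_rng(4)
        src = rng.normal(0.0, 1.0, (200, 3))          # source domain sample
        tgt = rng.normal(0.5, 1.2, (200, 3))          # shifted target domain
        same = rng.normal(0.0, 1.0, (200, 3))
        print(mmd2(src, tgt), mmd2(src, same))        # large vs. near-zero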

  19. Reproducing kernel potential energy surfaces in biomolecular simulations: Nitric oxide binding to myoglobin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soloviov, Maksym; Meuwly, Markus, E-mail: m.meuwly@unibas.ch

    2015-09-14

    Multidimensional potential energy surfaces based on reproducing kernel-interpolation are employed to explore the energetics and dynamics of free and bound nitric oxide in myoglobin (Mb). Combining a force field description for the majority of degrees of freedom and the higher-accuracy representation for the NO ligand and the Fe out-of-plane motion allows for a simulation approach akin to a mixed quantum mechanics/molecular mechanics treatment. However, the kernel-representation can be evaluated at conventional force-field speed. With the explicit inclusion of the Fe-out-of-plane (Fe-oop) coordinate, the dynamics and structural equilibrium after photodissociation of the ligand are correctly described compared to experiment. Experimentally, the Fe-oop coordinate plays an important role for the ligand dynamics. This is also found here, where the isomerization dynamics between the Fe–ON and Fe–NO states is significantly affected by whether or not this coordinate is explicitly included. Although the Fe–ON conformation is metastable when considering only the bound ²A state, it may disappear once the ⁴A state is included. This explains the absence of the Fe–ON state in previous experimental investigations of MbNO.
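
    The interpolation step itself is compact: given energies at sampled geometries, one solves a linear system in the kernel matrix and evaluates the resulting expansion anywhere. The sketch below uses a generic Gaussian kernel on a one-dimensional Morse-like curve; PES work of this kind typically uses reproducing kernels of the Ho-Rabitz type on multidimensional grids, so treat this as an illustration only.

        import numpy as np

        def k(x, xp, ell=0.3):
            return np.exp(-0.5 * ((x[:, None] - xp[None, :]) / ell) ** 2)

        r = np.linspace(1.0, 4.0, 25)                 # sampled bond lengths
        E = (1.0 - np.exp(-1.5 * (r - 2.0))) ** 2     # Morse-like "ab initio" energies

        alpha = np.linalg.solve(k(r, r) + 1e-8 * np.eye(r.size), E)
        r_fine = np.linspace(1.0, 4.0, 400)
        V = k(r_fine, r) @ alpha                      # interpolated potential curve
        print(np.abs(V - (1.0 - np.exp(-1.5 * (r_fine - 2.0))) ** 2).max())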

  20. Regularized iterative integration combined with non-linear diffusion filtering for phase-contrast x-ray computed tomography.

    PubMed

    Burger, Karin; Koehler, Thomas; Chabior, Michael; Allner, Sebastian; Marschner, Mathias; Fehringer, Andreas; Willner, Marian; Pfeiffer, Franz; Noël, Peter

    2014-12-29

    Phase-contrast x-ray computed tomography has a high potential to become clinically implemented because of its complementarity to conventional absorption contrast. In this study, we investigate noise-reducing but resolution-preserving analytical reconstruction methods to improve differential phase-contrast imaging. We apply the non-linear Perona-Malik filter on phase-contrast data prior to or after filtered-backprojection reconstruction. Second, the Hilbert kernel is replaced by regularized iterative integration followed by ramp-filtered backprojection, as used for absorption-contrast imaging. Combining the Perona-Malik filter with this integration algorithm makes it possible to reveal relevant sample features, quantitatively confirmed by significantly increased structural similarity indices and contrast-to-noise ratios. With this concept, phase-contrast imaging can be performed at considerably lower dose.
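
    A minimal Perona-Malik diffusion step of the kind applied here, on a toy image with hypothetical parameters: the conductance shrinks where gradients are large, so noise is smoothed while edges survive.

        import numpy as np

        def perona_malik(img, n_iter=20, kappa=0.5, dt=0.2):
            u = img.astype(float).copy()
            g = lambda d: np.exp(-(d / kappa) ** 2)    # edge-stopping conductance
            for _ in range(n_iter):
                dN = np.roll(u, -1, 0) - u             # differences to 4 neighbours
                dS = np.roll(u, 1, 0) - u
                dE = np.roll(u, -1, 1) - u
                dW = np.roll(u, 1, 1) - u
                u += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
            return u

        rng = np.random.default_rng(5)
        phantom = np.zeros((64, 64))
        phantom[16:48, 16:48] = 1.0
        noisy = phantom + rng.normal(0.0, 0.2, phantom.shape)
        print(np.abs(perona_malik(noisy) - phantom).mean())   # below the noise level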

  1. Meshfree truncated hierarchical refinement for isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Atri, H. R.; Shojaee, S.

    2018-05-01

    In this paper the truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can be easily defined, which provides an authentic meshfree approach to refining the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method can provide efficient approximation schemes for numerical simulations and shows promising performance in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach for adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.

  2. Schwinger-Keldysh superspace in quantum mechanics

    NASA Astrophysics Data System (ADS)

    Geracie, Michael; Haehl, Felix M.; Loganayagam, R.; Narayan, Prithvi; Ramirez, David M.; Rangamani, Mukund

    2018-05-01

    We examine, in a quantum mechanical setting, the Hilbert space representation of the Becchi, Rouet, Stora, and Tyutin (BRST) symmetry associated with Schwinger-Keldysh path integrals. This structure had been postulated to encode important constraints on influence functionals in coarse-grained systems with dissipation, or in open quantum systems. Operationally, this entails uplifting the standard Schwinger-Keldysh two-copy formalism into superspace by appending BRST ghost degrees of freedom. These statements were previously argued at the level of the correlation functions. We provide herein a complementary perspective by working out the Hilbert space structure explicitly. Our analysis clarifies two crucial issues not evident in earlier works: first, certain background ghost insertions necessary to reproduce the correct Schwinger-Keldysh correlators arise naturally, and, second, the Schwinger-Keldysh difference operators are systematically dressed by the ghost bilinears, which turn out to be necessary to give rise to a consistent operator algebra. We also elaborate on the structure of the final state (which is BRST closed) and the future boundary condition of the ghost fields.

  3. Computational study of collisions between O({sup 3}P) and NO({sup 2}Π) at temperatures relevant to the hypersonic flight regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castro-Palacio, Juan Carlos; Nagy, Tibor; Meuwly, Markus, E-mail: m.meuwly@unibas.ch

    2014-10-28

    Reactions involving N and O atoms dominate the energetics of the reactive air flow around spacecraft when reentering the atmosphere in the hypersonic flight regime. For this reason, the thermal rate coefficients for reactive processes involving O(³P) and NO(²Π) are relevant over a wide range of temperatures. For this purpose, a potential energy surface (PES) for the ground state of the NO₂ molecule is constructed based on high-level ab initio calculations. These ab initio energies are represented using the reproducing kernel Hilbert space method and Legendre polynomials. The global PES of NO₂ in the ground state is constructed by smoothly connecting the surfaces of the grids of various channels around the equilibrium NO₂ geometry by a distance-dependent weighting function. The rate coefficients were calculated using Monte Carlo integration. The results indicate that at high temperatures only the lowest A-symmetry PES is relevant. At the highest temperatures investigated (20 000 K), the rate coefficient for the "O1O2+N" channel becomes comparable (to within a factor of around three) to the rate coefficient of the oxygen exchange reaction. A state-resolved analysis shows that the smaller the vibrational quantum number of NO in the reactants, the higher the relative translational energy required to open the channel; conversely, with a higher vibrational quantum number, less translational energy is required. This is in accordance with Polanyi's rules. However, the oxygen exchange channel (NO2+O1) is accessible at any collision energy. Finally, this work introduces an efficient computational protocol for the investigation of three-atom collisions in general.

  4. A theoretical formulation of the electrophysiological inverse problem on the sphere

    NASA Astrophysics Data System (ADS)

    Riera, Jorge J.; Valdés, Pedro A.; Tanabe, Kunio; Kawashima, Ryuta

    2006-04-01

    The construction of three-dimensional images of the primary current density (PCD) produced by neuronal activity is a problem of great current interest in the neuroimaging community, though it was initially formulated in the 1970s. Even now there are enthusiastic debates about the authenticity of most of the inverse solutions proposed in the literature, in which low resolution electrical tomography (LORETA) is a focus of attention. However, in our opinion, the capabilities and limitations of the electro- and magnetoencephalographic techniques to determine PCD configurations have not been extensively explored from a theoretical framework, even for simple volume conductor models of the head. In this paper, the electrophysiological inverse problem for the spherical head model is cast in terms of the reproducing kernel Hilbert space (RKHS) formalism, which allows us to identify the null spaces of the implicated linear integral operators and also to define their representers. The PCD is described in terms of a continuous basis for the RKHS, which explicitly separates the harmonic and non-harmonic components. The RKHS concept permits us to bring LORETA into the scope of the general smoothing splines theory. A particular way of calculating the general smoothing splines is illustrated, avoiding a premature brute-force discretization. The Bayes information criterion is used to handle dissimilarities in the signal/noise ratios and physical dimensions of the measurement modalities, which could affect the estimation of the amount of smoothness required for that class of inverse solution to be well specified. In order to validate the proposed method, we have estimated the 3D spherical smoothing splines from two data sets: electric potentials obtained from a skull phantom and magnetic fields recorded from subjects performing an experiment on human face recognition.

  5. Computational study of collisions between O(3P) and NO(2Π) at temperatures relevant to the hypersonic flight regime.

    PubMed

    Castro-Palacio, Juan Carlos; Nagy, Tibor; Bemish, Raymond J; Meuwly, Markus

    2014-10-28

    Reactions involving N and O atoms dominate the energetics of the reactive air flow around spacecraft when reentering the atmosphere in the hypersonic flight regime. For this reason, the thermal rate coefficients for reactive processes involving O(³P) and NO(²Π) are relevant over a wide range of temperatures. For this purpose, a potential energy surface (PES) for the ground state of the NO₂ molecule is constructed based on high-level ab initio calculations. These ab initio energies are represented using the reproducing kernel Hilbert space method and Legendre polynomials. The global PES of NO₂ in the ground state is constructed by smoothly connecting the surfaces of the grids of various channels around the equilibrium NO₂ geometry by a distance-dependent weighting function. The rate coefficients were calculated using Monte Carlo integration. The results indicate that at high temperatures only the lowest A-symmetry PES is relevant. At the highest temperatures investigated (20,000 K), the rate coefficient for the "O1O2+N" channel becomes comparable (to within a factor of around three) to the rate coefficient of the oxygen exchange reaction. A state-resolved analysis shows that the smaller the vibrational quantum number of NO in the reactants, the higher the relative translational energy required to open the channel; conversely, with a higher vibrational quantum number, less translational energy is required. This is in accordance with Polanyi's rules. However, the oxygen exchange channel (NO2+O1) is accessible at any collision energy. Finally, this work introduces an efficient computational protocol for the investigation of three-atom collisions in general.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calixto, M., E-mail: calixto@ugr.es; Pérez-Romero, E.

    We revise the unitary irreducible representations (unireps) of U(2,2) describing conformal particles with a continuous mass spectrum from a many-body perspective, which shows massive conformal particles as compounds of two correlated massless particles. The statistics of the compound (boson/fermion) depends on the helicity h of the massless components (integer/half-integer). Coherent states (CS) of particle-hole pairs ("excitons") are also explicitly constructed as the exponential action of exciton (non-canonical) creation operators on the ground state of unpaired particles. These CS are labeled by points Z (2×2 complex matrices) on the Cartan-Bergman domain D₄ = U(2,2)/U(2)², and constitute a generalized (matrix) version of Perelomov U(1,1) coherent states labeled by points z on the unit disk D₁ = U(1,1)/U(1)². First, we follow a geometric approach to the construction of CS, orthonormal basis, U(2,2) generators and their matrix elements and symbols in the reproducing kernel Hilbert space H_λ(D₄) of analytic square-integrable holomorphic functions on D₄, which carries a unitary irreducible representation of U(2,2) with index λ ∈ ℕ (the conformal or scale dimension). Then we introduce a many-body representation of the previous construction through an oscillator realization of the U(2,2) Lie algebra generators in terms of eight boson operators with constraints. This particle picture allows for a physical interpretation of our abstract mathematical construction in the many-body jargon. In particular, the index λ is related to the number 2(λ - 2) of unpaired quanta and to the helicity h = (λ - 2)/2 of each massless particle forming the massive compound.

  7. L1-norm locally linear representation regularization multi-source adaptation learning.

    PubMed

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. Therefore, the success of supervised DAL in this "small sample" regime requires the effective utilization of large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, considering robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and therefore construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets such as face, visual video and object data. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Comparison of Models and Whole-Genome Profiling Approaches for Genomic-Enabled Prediction of Septoria Tritici Blotch, Stagonospora Nodorum Blotch, and Tan Spot Resistance in Wheat.

    PubMed

    Juliana, Philomin; Singh, Ravi P; Singh, Pawan K; Crossa, Jose; Rutkoski, Jessica E; Poland, Jesse A; Bergstrom, Gary C; Sorrells, Mark E

    2017-07-01

    The leaf spotting diseases in wheat that include Septoria tritici blotch (STB) caused by Zymoseptoria tritici, Stagonospora nodorum blotch (SNB) caused by Parastagonospora nodorum, and tan spot (TS) caused by Pyrenophora tritici-repentis pose challenges to breeding programs in selecting for resistance. A promising approach that could enable selection prior to phenotyping is genomic selection, which uses genome-wide markers to estimate breeding values (BVs) for quantitative traits. To evaluate this approach for seedling and/or adult plant resistance (APR) to STB, SNB, and TS, we compared the predictive ability of the least-squares (LS) approach with genomic-enabled prediction models including genomic best linear unbiased predictor (GBLUP), Bayesian ridge regression (BRR), Bayes A (BA), Bayes B (BB), Bayes Cπ (BC), Bayesian least absolute shrinkage and selection operator (BL), and reproducing kernel Hilbert spaces markers (RKHS-M), a pedigree-based model (RKHS-P), and RKHS markers and pedigree (RKHS-MP). We observed that LS gave the lowest prediction accuracies and RKHS-MP the highest. The genomic-enabled prediction models and RKHS-P gave similar accuracies. The increase in accuracy using genomic prediction models over LS was 48%. The mean genomic prediction accuracies were 0.45 for STB (APR), 0.55 for SNB (seedling), 0.66 for TS (seedling) and 0.48 for TS (APR). We also compared markers from two whole-genome profiling approaches, genotyping by sequencing (GBS) and diversity arrays technology sequencing (DArTseq), for prediction. While GBS markers performed slightly better than DArTseq markers, combining markers from the two approaches did not improve accuracies. We conclude that implementing GS in breeding for these diseases would help to achieve higher accuracies and rapid gains from selection. Copyright © 2017 Crop Science Society of America.

  9. A fast numerical method for ideal fluid flow in domains with multiple stirrers

    NASA Astrophysics Data System (ADS)

    Nasser, Mohamed M. S.; Green, Christopher C.

    2018-03-01

    A collection of arbitrarily-shaped solid objects, each moving at a constant speed, can be used to mix or stir ideal fluid, and can give rise to interesting flow patterns. Assuming these systems of fluid stirrers are two-dimensional, the mathematical problem of resolving the flow field—given a particular distribution of any finite number of stirrers of specified shape and speed—can be formulated as a Riemann-Hilbert (R-H) problem. We show that this R-H problem can be solved numerically using a fast and accurate algorithm for any finite number of stirrers based around a boundary integral equation with the generalized Neumann kernel. Various systems of fluid stirrers are considered, and our numerical scheme is shown to handle highly multiply connected domains (i.e. systems of many fluid stirrers) with minimal computational expense.

  10. Approximating the linear quadratic optimal control law for hereditary systems with delays in the control

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.

    1987-01-01

    The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.

  11. Approximating the linear quadratic optimal control law for hereditary systems with delays in the control

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.

    1988-01-01

    The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.

  12. Efficient nonparametric n-body force fields from machine learning

    NASA Astrophysics Data System (ADS)

    Glielmo, Aldo; Zeni, Claudio; De Vita, Alessandro

    2018-05-01

    We provide a definition and explicit expressions for n-body Gaussian process (GP) kernels, which can learn any interatomic interaction occurring in a physical system, up to n-body contributions, for any value of n. The series is complete, as it can be shown that the "universal approximator" squared exponential kernel can be written as a sum of n-body kernels. These recipes enable the choice of optimally efficient force models for each target system, as confirmed by extensive testing on various materials. We furthermore describe how the n-body kernels can be "mapped" on equivalent representations that provide database-size-independent predictions and are thus crucially more efficient. We explicitly carry out this mapping procedure for the first nontrivial (three-body) kernel of the series, and we show that this reproduces the GP-predicted forces with meV/Å accuracy while being orders of magnitude faster. These results pave the way to using novel force models (here named "M-FFs") that are computationally as fast as their corresponding standard parametrized n-body force fields, while retaining the nonparametric character, the ease of training and validation, and the accuracy of the best recently proposed machine-learning potentials.
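
    A minimal two-body kernel of this family compares every interatomic distance in one configuration with every distance in another through a squared exponential; Gaussian process regression on such a kernel then learns pairwise energetics. The toy clusters, the exponential pair energy, and the hyperparameters below are assumptions for illustration, not the paper's kernels.

        import numpy as np
        from itertools import combinations

        def distances(conf):
            return np.array([np.linalg.norm(a - b) for a, b in combinations(conf, 2)])

        def k2(conf_a, conf_b, ell=0.5):
            """2-body kernel: compare all pair distances across structures."""
            da, db = distances(conf_a), distances(conf_b)
            return np.exp(-0.5 * ((da[:, None] - db[None, :]) / ell) ** 2).sum()

        rng = np.random.default_rng(6)
        confs = [rng.uniform(0.0, 2.0, (4, 3)) for _ in range(30)]        # 4-atom clusters
        E = np.array([np.exp(-2.0 * distances(c)).sum() for c in confs])  # pairwise energy

        K = np.array([[k2(a, b) for b in confs] for a in confs])
        alpha = np.linalg.solve(K + 1e-6 * np.eye(len(confs)), E)
        test = rng.uniform(0.0, 2.0, (4, 3))
        print(np.array([k2(test, c) for c in confs]) @ alpha)             # predicted energy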

  13. Quantum entanglement in photoactive prebiotic systems.

    PubMed

    Tamulis, Arvydas; Grigalavicius, Mantas

    2014-06-01

    This paper reviews quantum entanglement investigations in living systems and in quantum mechanically modelled photoactive prebiotic kernel systems. We define our modelled self-assembled supramolecular photoactive centres, composed of one or more sensitizer molecules, precursors of fatty acids and a number of water molecules, as photoactive prebiotic kernel systems. We propose that life first emerged in the form of such minimal photoactive prebiotic kernel systems and that, later in the process of evolution, these photoactive prebiotic kernel systems would have produced fatty acids and covered themselves with fatty acid envelopes to become the minimal cells of the Fatty Acid World. Specifically, we model the self-assembly of photoactive prebiotic systems with observed quantum entanglement phenomena. We address the idea that quantum entanglement was important in the first stages of the origins of life and the evolution of the biospheres because the simultaneous excitation of two prebiotic kernels in the system, through the appearance of two additional quantum entangled excited states, would lead to faster growth and self-replication of minimal living cells. The quantum mechanically modelled possibility of synthesizing artificial self-reproducing quantum entangled prebiotic kernel systems and minimal cells also bears on the most probable path for the emergence of protocells on the Earth or elsewhere. We also examine the quantum entangled logic gates discovered in the modelled systems composed of two prebiotic kernels. Such logic gates may have applications in the destruction of cancer cells or become building blocks of new forms of artificial cells, including magnetically active ones.

  14. Evaluation of Statistical Downscaling Skill at Reproducing Extreme Events

    NASA Astrophysics Data System (ADS)

    McGinnis, S. A.; Tye, M. R.; Nychka, D. W.; Mearns, L. O.

    2015-12-01

    Climate model outputs usually have much coarser spatial resolution than is needed by impacts models. Although higher resolution can be achieved using regional climate models for dynamical downscaling, further downscaling is often required. The final resolution gap is often closed with a combination of spatial interpolation and bias correction, which constitutes a form of statistical downscaling. We use this technique to downscale regional climate model data and evaluate its skill in reproducing extreme events. We downscale output from the North American Regional Climate Change Assessment Program (NARCCAP) dataset from its native 50-km spatial resolution to the 4-km resolution of the University of Idaho's METDATA gridded surface meteorological dataset, which derives from the PRISM and NLDAS-2 observational datasets. We operate on the major variables used in impacts analysis at a daily timescale: daily minimum and maximum temperature, precipitation, humidity, pressure, solar radiation, and winds. To interpolate the data, we use the patch recovery method from the Earth System Modeling Framework (ESMF) regridding package. We then bias correct the data using Kernel Density Distribution Mapping (KDDM), which has been shown to exhibit superior overall performance across multiple metrics. Finally, we evaluate the skill of this technique in reproducing extreme events by comparing raw and downscaled output with meteorological station data in different bioclimatic regions according to the skill scores defined by Perkins et al. in 2013 for the evaluation of AR4 climate models. We also investigate techniques for improving bias correction of values in the tails of the distributions. These techniques include binned kernel density estimation, logspline kernel density estimation, and transfer functions constructed by fitting the tails with a generalized Pareto distribution.
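
    The core of KDDM, as we read it, is quantile mapping through KDE-smoothed distribution functions: each model value x is sent to F_obs^{-1}(F_mod(x)). A sketch with hypothetical gamma-distributed "model" and "observed" samples:

        import numpy as np
        from scipy.stats import gaussian_kde

        def kddm(model, obs, x):
            grid = np.linspace(min(model.min(), obs.min()) - 1.0,
                               max(model.max(), obs.max()) + 1.0, 512)
            F_mod = np.cumsum(gaussian_kde(model)(grid)); F_mod /= F_mod[-1]
            F_obs = np.cumsum(gaussian_kde(obs)(grid));   F_obs /= F_obs[-1]
            p = np.interp(x, grid, F_mod)          # F_mod(x)
            return np.interp(p, F_obs, grid)       # F_obs^{-1}(p)

        rng = np.random.default_rng(7)
        obs = rng.gamma(2.0, 2.0, 2000)            # "observed" climatology
        mod = rng.gamma(2.5, 1.2, 2000)            # biased "model" output
        corrected = kddm(mod, obs, mod)
        print(mod.mean(), corrected.mean(), obs.mean())   # bias largely removed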

  15. The complex variable reproducing kernel particle method for bending problems of thin plates on elastic foundations

    NASA Astrophysics Data System (ADS)

    Chen, L.; Cheng, Y. M.

    2018-07-01

    In this paper, the complex variable reproducing kernel particle method (CVRKPM) for solving the bending problems of isotropic thin plates on elastic foundations is presented. In CVRKPM, a one-dimensional basis function is used to obtain the shape function of a two-dimensional problem. CVRKPM is used to form the approximation function of the deflection of thin plates resting on an elastic foundation; the Galerkin weak form of thin plates on an elastic foundation is employed to obtain the discretized system equations; the penalty method is used to apply the essential boundary conditions; and the Winkler and Pasternak foundation models are used to describe the interface pressure between the plate and the foundation. The corresponding formulae of CVRKPM for thin plates on elastic foundations are then presented in detail. Several numerical examples are given to discuss the efficiency and accuracy of CVRKPM, and the corresponding advantages of the present method are shown.

  16. Spinor Structure and Internal Symmetries

    NASA Astrophysics Data System (ADS)

    Varlamov, V. V.

    2015-10-01

    Spinor structure and internal symmetries are considered within one theoretical framework based on the generalized spin and abstract Hilbert space. Complex momentum is understood as a generating kernel of the underlying spinor structure. It is shown that tensor products of biquaternion algebras are associated with each irreducible representation of the Lorentz group. Space-time discrete symmetries P, T and their combination PT are generated by the fundamental automorphisms of this algebraic background (Clifford algebras). Charge conjugation C is presented by a pseudoautomorphism of the complex Clifford algebra. This description of the operation C allows one to distinguish charged and neutral particles, including particle-antiparticle interchange and truly neutral particles. Spin and charge multiplets, based on the interlocking representations of the Lorentz group, are introduced. A central point of the work is a correspondence between the Wigner definition of an elementary particle as an irreducible representation of the Poincaré group and the SU(3) description (quark scheme) of the particle as a vector of the supermultiplet (irreducible representation of SU(3)). This correspondence is realized on the ground of a spin-charge Hilbert space. Basic hadron supermultiplets of SU(3) theory (the baryon octet and two meson octets) are studied in this framework. It is shown that quark phenomenologies are naturally incorporated into the presented scheme. The relationship between mass and spin is established. The introduced spin-mass formula and its combination with the Gell-Mann-Okubo mass formula allows one to take a new look at the problem of the mass spectrum of elementary particles.

  17. ADHM and the 4d quantum Hall effect

    NASA Astrophysics Data System (ADS)

    Barns-Graham, Alec; Dorey, Nick; Lohitsiri, Nakarin; Tong, David; Turner, Carl

    2018-04-01

    Yang-Mills instantons are solitonic particles in d = 4 + 1 dimensional gauge theories. We construct and analyse the quantum Hall states that arise when these particles are restricted to the lowest Landau level. We describe the ground state wavefunctions for both Abelian and non-Abelian quantum Hall states. Although our model is purely bosonic, we show that the excitations of this 4d quantum Hall state are governed by the Nekrasov partition function of a certain five dimensional supersymmetric gauge theory with Chern-Simons term. The partition function can also be interpreted as a variant of the Hilbert series of the instanton moduli space, counting holomorphic sections rather than holomorphic functions. It is known that the Hilbert series of the instanton moduli space can be rewritten using mirror symmetry of 3d gauge theories in terms of Coulomb branch variables. We generalise this approach to include the effect of a five dimensional Chern-Simons term. We demonstrate that the resulting Coulomb branch formula coincides with the corresponding Higgs branch Molien integral which, in turn, reproduces the standard formula for the Nekrasov partition function.

  18. A fast object-oriented Matlab implementation of the Reproducing Kernel Particle Method

    NASA Astrophysics Data System (ADS)

    Barbieri, Ettore; Meo, Michele

    2012-05-01

    Novel numerical methods, known as Meshless Methods or Meshfree Methods and, in a wider perspective, Partition of Unity Methods, promise to overcome most of the disadvantages of traditional finite element techniques. The absence of a mesh makes meshfree methods very attractive for problems involving large deformations, moving boundaries and crack propagation. However, meshfree methods still have significant limitations that prevent their acceptance among researchers and engineers, namely their computational cost. This paper presents an in-depth analysis of computational techniques to speed up the computation of the shape functions in the Reproducing Kernel Particle Method and Moving Least Squares, with particular focus on their bottlenecks: the neighbour search, the inversion of the moment matrix, and the assembly of the stiffness matrix. The paper presents numerous computational solutions aimed at a considerable reduction of the computational times: the use of kd-trees for the neighbour search, sparse indexing of the node-point connectivity and, most importantly, the explicit and vectorized inversion of the moment matrix without using loops and numerical routines.
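
    Two of the bottlenecks named above translate directly into short Python (rather than Matlab) code: a kd-tree for the neighbour search and a vectorized moment-matrix solve for MLS/RKPM shape functions with a shifted linear basis. The node cloud and support radius below are hypothetical.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(8)
        nodes = rng.uniform(0.0, 1.0, (500, 2))    # hypothetical particle cloud
        tree = cKDTree(nodes)                      # kd-tree for neighbour queries
        support = 0.15

        def shape_functions(x):
            idx = tree.query_ball_point(x, support)        # neighbour search
            d = nodes[idx] - x
            w = np.maximum(1.0 - np.linalg.norm(d, axis=1) / support, 0.0) ** 2
            P = np.column_stack([np.ones(len(idx)), d])    # shifted linear basis
            M = (P * w[:, None]).T @ P                     # moment matrix
            c = np.linalg.solve(M, np.array([1.0, 0.0, 0.0]))
            return idx, w * (P @ c)                        # shape function values

        idx, phi = shape_functions(np.array([0.5, 0.5]))
        print(phi.sum())                                   # partition of unity: 1.0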

  19. Power Spectral Density and Hilbert Transform

    DTIC Science & Technology

    2016-12-01

    Keywords: Fourier transform, Hilbert transform, digital filter, SDR. A very good approximation to the ideal Hilbert transform is a low-pass finite impulse response (FIR) filter. In Fig. 7, we show a real signal (…220), converted to an analytic signal using a 255-tap Hilbert transform low-pass filter.
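
    Such an FIR approximation can be designed with the Parks-McClellan algorithm; the sketch below (assuming SciPy's remez with type='hilbert', and a 255-tap length echoing the report) builds the filter and forms an analytic signal. The band edges are illustrative.

        import numpy as np
        from scipy.signal import remez, freqz

        # 255-tap (type III) equiripple Hilbert transformer covering
        # 2%..48% of the sampling rate.
        taps = remez(255, [0.02, 0.48], [1.0], type='hilbert', fs=1.0)
        w, h = freqz(taps, worN=1024)
        print(np.abs(h[200]), np.abs(h[800]))     # ~1 across the passband

        # Analytic signal: real part is the input, imaginary part its
        # FIR Hilbert transform ('same' keeps the two aligned).
        t = np.arange(2048)
        x = np.cos(2 * np.pi * 0.05 * t)
        analytic = x + 1j * np.convolve(x, taps, mode='same')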

  20. Minimal Cohomological Model of a Scalar Field on a Riemannian Manifold

    NASA Astrophysics Data System (ADS)

    Arkhipov, V. V.

    2018-04-01

    Lagrangians of the field-theory model of a scalar field are considered as 4-forms on a Riemannian manifold. The model is constructed on the basis of the Hodge inner product, the latter being an analog of the scalar product of two functions. Including in the action terms with tetrads built from the basis fields makes it possible to reproduce the Klein-Gordon equation and the Maxwell equations, and also the Einstein-Hilbert action. We conjecture that the principle of constructing Lagrangians as 4-forms can give a criterion restricting the possible forms of field-theory models.

  1. The Polyanalytic Ginibre Ensembles

    NASA Astrophysics Data System (ADS)

    Haimi, Antti; Hedenmalm, Haakan

    2013-10-01

    For integers n, q = 1, 2, 3, …, let Pol_{n,q} denote the ℂ-linear space of polynomials in z and z̄, of degree ≤ n−1 in z and of degree ≤ q−1 in z̄. We supply Pol_{n,q} with an inner product structure; the resulting Hilbert space is denoted by Pol_{m,n,q}. Here, it is assumed that m is a positive real. We let K_{m,n,q} denote the reproducing kernel of Pol_{m,n,q}, and study the associated determinantal process, in the limit as m, n → +∞ while n = m + O(1); the number q, the degree of polyanalyticity, is kept fixed. We call these processes polyanalytic Ginibre ensembles, because they generalize the Ginibre ensemble, the eigenvalue process of random (normal) matrices with Gaussian weight. There is a physical interpretation in terms of a system of free fermions in a uniform magnetic field so that a fixed number of the first Landau levels have been filled. We consider local blow-ups of the polyanalytic Ginibre ensembles around points in the spectral droplet, which is here the closed unit disk. We obtain asymptotics for the blow-up process, using a blow-up to characteristic distance m^{−1/2}; the typical distance is the same both for interior and for boundary points. This amounts to obtaining the asymptotic behavior of the generating kernel K_{m,n,q}. Following Ameur et al. (Commun. Pure Appl. Math. 63(12):1533–1584, 2010), the asymptotics of K_{m,n,q} are rather conveniently expressed in terms of the Berezin measure (and density) [Equation not available: see fulltext.] For interior points |z| < 1, we obtain that the Berezin measure converges to δ_z in the weak-star sense, where δ_z denotes the unit point mass at z. Moreover, if we blow up to the scale of m^{−1/2} around z, we get convergence to a measure which is Gaussian for q = 1, but exhibits more complicated Fresnel zone behavior for q > 1. In contrast, for exterior points |z| > 1, the Berezin measure converges instead to the harmonic measure at z with respect to the exterior disk. For boundary points, |z| = 1, the Berezin measure converges to the unit point mass at z, as with interior points, but the blow-up to the scale m^{−1/2} exhibits quite different behavior at boundary points compared with interior points. We also obtain the asymptotic boundary behavior of the 1-point function at the coarser local scale q^{1/2} m^{−1/2}.

  2. Computing Instantaneous Frequency by normalizing Hilbert Transform

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2005-01-01

    This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing instantaneous frequency. The method is designed specifically to circumvent the limitations set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert transform do not agree. The motivation for this method is that straightforward application of the Hilbert transform, followed by taking the derivative of the phase angle as the instantaneous frequency (IF), leads to a mistake that is still commonly made. In order to make the Hilbert transform method work, the data must obey certain restrictions.

  3. Computing Instantaneous Frequency by normalizing Hilbert Transform

    DOEpatents

    Huang, Norden E.

    2005-05-31

    This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing instantaneous frequency. The method is designed specifically to circumvent the limitations set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert transform do not agree. The motivation for this method is that straightforward application of the Hilbert transform, followed by taking the derivative of the phase angle as the instantaneous frequency (IF), leads to a mistake that is still commonly made. In order to make the Hilbert transform method work, the data must obey certain restrictions.
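
    For contrast with the patented normalized variants, the baseline pipeline is short: analytic signal via the Hilbert transform, then the phase derivative. The sketch below adds a crude amplitude normalization (a plain envelope division, not the patent's spline-envelope iteration) in the spirit of NHT; the test signal is hypothetical.

        import numpy as np
        from scipy.signal import hilbert

        fs = 1000.0
        t = np.arange(0.0, 1.0, 1.0 / fs)
        x = (1.0 + 0.3 * np.sin(2 * np.pi * 2 * t)) * np.cos(2 * np.pi * 50 * t)

        env = np.abs(hilbert(x))                  # crude empirical AM envelope
        xn = x / env                              # normalization step (NHT idea)
        phase = np.unwrap(np.angle(hilbert(xn)))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)
        print(inst_freq[100:105].round(1))        # ~50 Hz away from the edges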

  4. An analytical dose-averaged LET calculation algorithm considering the off-axis LET enhancement by secondary protons for spot-scanning proton therapy.

    PubMed

    Hirayama, Shusuke; Matsuura, Taeko; Ueda, Hideaki; Fujii, Yusuke; Fujii, Takaaki; Takao, Seishin; Miyamoto, Naoki; Shimizu, Shinichi; Fujimoto, Rintaro; Umegaki, Kikuo; Shirato, Hiroki

    2018-05-22

    To evaluate the biological effects of proton beams as part of daily clinical routine, fast and accurate calculation of dose-averaged linear energy transfer (LET_d) is required. In this study, we have developed an analytical LET_d calculation method based on the pencil-beam algorithm (PBA), considering the off-axis enhancement by secondary protons. This algorithm (PBA-dLET) was then validated using Monte Carlo simulation (MCS) results. In PBA-dLET, LET values were assigned separately for each individual dose kernel based on the PBA. For the dose kernel, we employed a triple Gaussian model which consists of the primary component (protons that undergo multiple Coulomb scattering) and the halo component (protons that undergo inelastic, nonelastic, and elastic nuclear reactions); the primary and halo components were represented by a single Gaussian and the sum of two Gaussian distributions, respectively. Although previous analytical approaches assumed a constant LET_d value for the lateral distribution of a pencil beam, the actual LET_d increases away from the beam axis, because there are more scattered, and therefore lower-energy, protons with higher stopping powers. To reflect this behavior, we assumed that the LETs of the primary and halo components can take different values (LET_p and LET_halo), which vary only along the depth direction. The values of the dual-LET kernels were determined such that PBA-dLET reproduced the MCS-generated LET_d distribution in both small and large fields. These values were generated at intervals of 1 mm in depth for 96 energies from 70.2 to 220 MeV and collected in a look-up table. Finally, we compared the LET_d distributions and mean LET_d (LET_d,mean) values of targets and organs at risk between PBA-dLET and MCS. Both homogeneous phantom and patient geometries (prostate, liver, and lung cases) were used to validate the method. In the homogeneous phantom, the LET_d profiles obtained by the dual-LET kernels agree well with the MCS results except for the low-dose region in the lateral penumbra, where the actual dose was below 10% of the maximum dose. In the patient geometries, the LET_d profiles calculated with the developed method reproduce the MCS with similar accuracy as in the homogeneous phantom. The maximum differences in LET_d,mean for each structure between PBA-dLET and MCS were 0.06 keV/μm in homogeneous phantoms and 0.08 keV/μm in patient geometries under all tested conditions. We confirmed that the dual-LET-kernel model reproduces the MCS well, not only in the homogeneous phantom but also in complex patient geometries. The accuracy of LET_d was largely improved over the single-LET-kernel model, especially at the lateral penumbra. The model is expected to be useful, especially for proper recognition of the risk of side effects when the target is next to critical organs. © 2018 American Association of Physicists in Medicine.
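
    The bookkeeping behind the off-axis enhancement is just a dose-weighted mean of the two kernel components at each point. With hypothetical Gaussian lateral dose profiles and component LETs (all numbers invented for illustration):

        import numpy as np

        r = np.linspace(0.0, 20.0, 5)                  # lateral distance [mm]
        D_p = np.exp(-0.5 * (r / 4.0) ** 2)            # narrow primary dose
        D_h = 0.1 * np.exp(-0.5 * (r / 10.0) ** 2)     # broad, high-LET halo dose
        LET_p, LET_h = 3.2, 8.5                        # component LETs [keV/um]

        LET_d = (LET_p * D_p + LET_h * D_h) / (D_p + D_h)
        print(LET_d.round(2))                          # rises away from the axis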

  5. Tool Wear Feature Extraction Based on Hilbert Marginal Spectrum

    NASA Astrophysics Data System (ADS)

    Guan, Shan; Song, Weijie; Pang, Hongyang

    2017-09-01

    In the metal cutting process, the signal contains a wealth of information on the tool wear state. A tool wear signal analysis and feature extraction method based on the Hilbert marginal spectrum is proposed. Firstly, the tool wear signal was decomposed by the empirical mode decomposition algorithm, and the intrinsic mode functions carrying the main information were screened out by the correlation coefficient and the variance contribution rate. Secondly, the Hilbert transform was performed on the main intrinsic mode functions, yielding the Hilbert time-frequency spectrum and the Hilbert marginal spectrum. Finally, amplitude-domain indexes were extracted from the Hilbert marginal spectrum and used to construct the recognition feature vector of the tool wear state. The research results show that the extracted features can effectively characterize the different wear states of the tool, which provides a basis for monitoring tool wear condition.
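
    A sketch of the pipeline, assuming the third-party PyEMD package for the empirical mode decomposition (an assumption; the paper does not name an implementation): decompose, Hilbert-transform the IMFs, and accumulate amplitude over time in frequency bins to form a marginal spectrum. The test signal is synthetic.

        import numpy as np
        from scipy.signal import hilbert
        from PyEMD import EMD          # third-party package (pip install EMD-signal)

        fs = 1000.0
        t = np.arange(0.0, 1.0, 1.0 / fs)
        x = np.sin(2 * np.pi * 35 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

        imfs = EMD().emd(x)
        bins = np.linspace(0.0, 250.0, 126)
        marginal = np.zeros(bins.size - 1)
        for imf in imfs:
            z = hilbert(imf)
            amp = np.abs(z)[1:]                        # instantaneous amplitude
            freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
            hist, _ = np.histogram(freq, bins=bins, weights=amp)
            marginal += hist                           # amplitude summed over time
        print(bins[np.argmax(marginal)])               # dominant component, ~35 Hz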

  6. Efficient approach to obtain free energy gradient using QM/MM MD simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asada, Toshio; Koseki, Shiro; The Research Institute for Molecular Electronic Devices

    2015-12-31

    An efficient computational approach, denoted the charge and atom dipole response kernel (CDRK) model, is described for considering polarization effects of the quantum mechanical (QM) region, using the charge response and atom dipole response kernels in free energy gradient (FEG) calculations within the quantum mechanical/molecular mechanical (QM/MM) method. The CDRK model can reasonably reproduce the energies and energy gradients of QM and MM atoms obtained by expensive QM/MM calculations in a drastically reduced computational time. The model is applied to the acylation reaction in the hydrated trypsin-BPTI complex to optimize the reaction path on the free energy surface by means of FEG and the nudged elastic band (NEB) method.

  7. Robust point matching via vector field consensus.

    PubMed

    Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu

    2014-04-01

    In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint) we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.
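
    The following is a compact EM sketch of the vector-field-consensus idea under stated assumptions: a Gaussian RKHS kernel, a fixed inlier proportion gamma (the paper also estimates it), and an illustrative outlier-volume constant a. It is not the authors' released code.

        import numpy as np

        def gaussian_kernel(A, B, beta=0.1):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-beta * d2)

        def vfc_like(X, Y, lam=3.0, gamma=0.9, a=10.0, iters=30):
            """EM sketch: f interpolates the displacement field Y - X in an RKHS;
            p_i is the latent inlier probability of putative match i."""
            V = Y - X
            n = len(X)
            K = gaussian_kernel(X, X)
            p = np.full(n, gamma)
            sigma2 = (V ** 2).sum() / (2.0 * n)   # large initial variance
            for _ in range(iters):
                # M-step: weighted Tikhonov-regularized solve, f = K C.
                P = np.diag(p)
                C = np.linalg.solve(P @ K + lam * sigma2 * np.eye(n), P @ V)
                F = K @ C
                r2 = ((V - F) ** 2).sum(axis=1)
                sigma2 = (p * r2).sum() / (2.0 * p.sum() + 1e-12)
                # E-step: posterior inlier probability, Gaussian vs. uniform outliers.
                num = gamma * np.exp(-r2 / (2 * sigma2)) / (2 * np.pi * sigma2)
                p = num / (num + (1 - gamma) / a)
            return p > 0.5

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, (80, 2))
        Y = X + 0.1                                # inlier motion: a simple translation
        out = rng.choice(80, 30, replace=False)
        Y[out] = rng.uniform(0, 1, (30, 2))        # corrupt 30 matches with outliers
        inlier = vfc_like(X, Y)
        print(inlier[out].sum(), inlier.sum())     # few outliers kept, most inliers kept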

  8. Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space

    PubMed Central

    Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred

    2016-01-01

    Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and a validation set. A random sampling protocol of genotypes from the calibration set will lead to low-quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling, and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method. For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
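
    One simple way to realize uniform coverage of the genetic space is sketched below, under the assumption that greedy farthest-point sampling on marker PCA scores is an acceptable stand-in for the paper's uniform sampling protocol.

        import numpy as np

        def uniform_training_set(markers, n_train, n_pc=5, seed=0):
            """Greedy farthest-point sampling in PC space so the training set
            spreads evenly over the genetic space (a sketch of the idea; the
            paper also considers stratified and CD-based alternatives)."""
            X = markers - markers.mean(axis=0)
            # PCA scores via SVD of the centered marker matrix.
            U, s, _ = np.linalg.svd(X, full_matrices=False)
            scores = U[:, :n_pc] * s[:n_pc]
            rng = np.random.default_rng(seed)
            chosen = [int(rng.integers(len(scores)))]
            d = np.linalg.norm(scores - scores[chosen[0]], axis=1)
            while len(chosen) < n_train:
                nxt = int(np.argmax(d))          # farthest genotype from current set
                chosen.append(nxt)
                d = np.minimum(d, np.linalg.norm(scores - scores[nxt], axis=1))
            return np.array(chosen)

        G = np.random.default_rng(1).integers(0, 3, size=(200, 500)).astype(float)  # toy SNPs
        print(uniform_training_set(G, n_train=50)[:10])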

  9. A novel manifold-manifold distance index applied to looseness state assessment of viscoelastic sandwich structures

    NASA Astrophysics Data System (ADS)

    Sun, Chuang; Zhang, Zhousuo; Guo, Ting; Luo, Xue; Qu, Jinxiu; Zhang, Chenxuan; Cheng, Wei; Li, Bing

    2014-06-01

    Viscoelastic sandwich structures (VSS) are widely used in mechanical equipment; their state assessment is necessary to detect structural states and to keep equipment running with high reliability. This paper proposes a novel manifold-manifold distance-based assessment (M2DBA) method for assessing the looseness state in VSSs. In the M2DBA method, a manifold-manifold distance is viewed as a health index. To design the index, response signals from the structure are first acquired by condition monitoring technology, and a Hankel matrix is constructed from the response signals to describe state patterns of the VSS. Thereafter, a subspace analysis method, namely principal component analysis (PCA), is performed to extract the condition subspace hidden in the Hankel matrix. From the subspace, pattern changes in dynamic structural properties are characterized. Further, a Grassmann manifold (GM) is formed by organizing a set of subspaces. The manifold is mapped to a reproducing kernel Hilbert space (RKHS), where support vector data description (SVDD) is used to model the manifold as a hypersphere. Finally, a health index is defined as the cosine of the angle between the hypersphere centers corresponding to the structural baseline state and the looseness state. The defined health index contains similarity information between the two structural states, so structural looseness states can be effectively identified. Moreover, the health index is derived from an analysis of the global properties of subspace sets, which differs from traditional subspace analysis methods. The effectiveness of the health index for state assessment is validated by test data collected from a VSS subjected to different degrees of looseness. The results show that the health index is a very effective metric for detecting the occurrence and extent of structural looseness. Comparison results indicate that the defined index outperforms some existing state-of-the-art ones.
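
    A sketch of the front end of this construction: a Hankel matrix built from a response signal, PCA for the condition subspace, and a projection-kernel distance between two subspaces as a simple stand-in for the full Grassmann/SVDD health index. Signal models and dimensions are illustrative.

        import numpy as np

        def hankel_matrix(x, rows):
            cols = len(x) - rows + 1
            return np.array([x[i:i + cols] for i in range(rows)])

        def state_subspace(x, rows=20, k=3):
            """Principal k-dimensional subspace of the Hankel matrix, used as a
            state pattern of the structure (columns are orthonormal)."""
            H = hankel_matrix(x, rows)
            U, _, _ = np.linalg.svd(H - H.mean(axis=1, keepdims=True), full_matrices=False)
            return U[:, :k]

        def projection_distance(U1, U2):
            """Distance between two subspaces on the Grassmann manifold based on
            the projection kernel; 0 for identical subspaces."""
            k = U1.shape[1]
            return np.sqrt(max(k - np.linalg.norm(U1.T @ U2, 'fro') ** 2, 0.0))

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 500)
        baseline = np.sin(2 * np.pi * 12 * t) + 0.1 * rng.standard_normal(500)
        loose = (np.sin(2 * np.pi * 9 * t) + 0.3 * np.sin(2 * np.pi * 27 * t)
                 + 0.1 * rng.standard_normal(500))
        print(projection_distance(state_subspace(baseline), state_subspace(loose)))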

  10. Genomic and pedigree-based prediction for leaf, stem, and stripe rust resistance in wheat.

    PubMed

    Juliana, Philomin; Singh, Ravi P; Singh, Pawan K; Crossa, Jose; Huerta-Espino, Julio; Lan, Caixia; Bhavani, Sridhar; Rutkoski, Jessica E; Poland, Jesse A; Bergstrom, Gary C; Sorrells, Mark E

    2017-07-01

    Genomic prediction for seedling and adult plant resistance to wheat rusts was compared to prediction using a few markers as fixed effects in a least-squares approach and to pedigree-based prediction. The unceasing plant-pathogen arms race and the ephemeral nature of some rust resistance genes have been challenging for wheat (Triticum aestivum L.) breeding programs and farmers. Hence, it is important to devise strategies for effective evaluation and exploitation of quantitative rust resistance. One promising approach that could accelerate gain from selection for rust resistance is 'genomic selection', which utilizes dense genome-wide markers to estimate the breeding values (BVs) for quantitative traits. Our objective was to compare three genomic prediction models, namely genomic best linear unbiased prediction (GBLUP), GBLUP A (GBLUP with selected loci as fixed effects), and reproducing kernel Hilbert spaces-markers (RKHS-M), with a least-squares (LS) approach, RKHS-pedigree (RKHS-P), and RKHS markers and pedigree (RKHS-MP), to determine the BVs for seedling and/or adult plant resistance (APR) to leaf rust (LR), stem rust (SR), and stripe rust (YR). The 333 lines in the 45th IBWSN and the 313 lines in the 46th IBWSN were genotyped using genotyping-by-sequencing and phenotyped in replicated trials. The mean prediction accuracies ranged from 0.31 to 0.74 for LR seedling, 0.12 to 0.56 for LR APR, 0.31 to 0.65 for SR APR, 0.70 to 0.78 for YR seedling, and 0.34 to 0.71 for YR APR. For most datasets, the RKHS-MP model gave the highest accuracies, while LS gave the lowest. The GBLUP, GBLUP A, RKHS-M, and RKHS-P models gave similar accuracies. Using genome-wide marker-based models resulted in an average 42% increase in accuracy over LS. We conclude that GS is a promising approach for the improvement of quantitative rust resistance and can be implemented in the breeding pipeline.
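
    For readers unfamiliar with the RKHS family of models, the sketch below shows the core computation: kernel ridge regression with a Gaussian kernel on a marker-based distance. The bandwidth, ridge parameter, and toy data are assumptions for illustration only.

        import numpy as np

        def rkhs_predict(M_train, y_train, M_test, h=1.0, lam=1.0):
            """RKHS regression sketch for genomic prediction: Gaussian kernel on
            the mean squared allele difference (bandwidth h and ridge lam are
            illustrative; in practice both are tuned or given priors)."""
            def K(A, B):
                d2 = ((A[:, None, :] - B[None, :, :]) ** 2).mean(-1)
                return np.exp(-d2 / h)
            Ktt = K(M_train, M_train)
            alpha = np.linalg.solve(Ktt + lam * np.eye(len(M_train)),
                                    y_train - y_train.mean())
            return y_train.mean() + K(M_test, M_train) @ alpha

        rng = np.random.default_rng(0)
        M = rng.integers(0, 3, size=(300, 1000)).astype(float)   # toy marker matrix
        beta = rng.normal(0, 0.1, 1000)
        y = M @ beta + rng.normal(0, 1.0, 300)                   # toy trait values
        yhat = rkhs_predict(M[:250], y[:250], M[250:])
        print(np.corrcoef(yhat, y[250:])[0, 1])                  # prediction accuracy proxy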

  11. A general method for constructing multidimensional molecular potential energy surfaces from {ital ab} {ital initio} calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, T.; Rabitz, H.

    1996-02-01

    A general interpolation method for constructing smooth molecular potential energy surfaces (PES's) from ab initio data is proposed within the framework of the reproducing kernel Hilbert space and inverse problem theory. The general expression for an a posteriori error bound of the constructed PES is derived. It is shown that the method yields globally smooth potential energy surfaces that are continuous and possess derivatives up to second order or higher. Moreover, the method is amenable to correct symmetry properties and asymptotic behavior of the molecular system. Finally, the method is generic and can be easily extended from low-dimensional problems involving two and three atoms to high-dimensional problems involving four or more atoms. Basic properties of the method are illustrated by the construction of a one-dimensional potential energy curve of the He-He van der Waals dimer using the exact quantum Monte Carlo calculations of Anderson et al. [J. Chem. Phys. 99, 345 (1993)], a two-dimensional potential energy surface of the HeCO van der Waals molecule using recent ab initio calculations by Tao et al. [J. Chem. Phys. 101, 8680 (1994)], and a three-dimensional potential energy surface of the H3+ molecular ion using highly accurate ab initio calculations of Röhse et al. [J. Chem. Phys. 101, 2231 (1994)]. In the first two cases the constructed potentials clearly exhibit the correct asymptotic forms, while in the last case the constructed potential energy surface is in excellent agreement with that constructed by Röhse et al. using a low-order polynomial fitting procedure. © 1996 American Institute of Physics.
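
    The basic reproducing-kernel interpolation step can be sketched as follows; note that a generic Gaussian kernel is used here as a stand-in, whereas the paper constructs reproducing kernels that enforce smoothness and the correct long-range asymptotic behavior, which this simple choice does not reproduce.

        import numpy as np

        def rkhs_fit(x_ab, v_ab, kernel, reg=1e-10):
            """Solve K a = V so that V(x) = sum_i a_i k(x, x_i) passes through
            the ab initio points; a tiny ridge term aids numerical stability."""
            K = kernel(x_ab[:, None], x_ab[None, :])
            a = np.linalg.solve(K + reg * np.eye(len(x_ab)), v_ab)
            return lambda x: kernel(x[:, None], x_ab[None, :]) @ a

        # Stand-in kernel (Gaussian); the paper's reproducing kernels differ.
        kernel = lambda x, y: np.exp(-(x - y) ** 2 / 0.5)

        x_ab = np.linspace(0.8, 3.0, 12)                   # grid of "ab initio" geometries
        v_ab = 4.0 * ((1 / x_ab) ** 12 - (1 / x_ab) ** 6)  # toy Lennard-Jones data
        V = rkhs_fit(x_ab, v_ab, kernel)
        print(V(np.array([1.0, 1.5, 2.2])))                # smooth interpolated values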

  12. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metric of the solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.

  13. The density matrix renormalization group algorithm on kilo-processor architectures: Implementation and trade-offs

    NASA Astrophysics Data System (ADS)

    Nemes, Csaba; Barcza, Gergely; Nagy, Zoltán; Legeza, Örs; Szolgay, Péter

    2014-06-01

    In the numerical analysis of strongly correlated quantum lattice models one of the leading algorithms developed to balance the size of the effective Hilbert space and the accuracy of the simulation is the density matrix renormalization group (DMRG) algorithm, in which the run-time is dominated by the iterative diagonalization of the Hamilton operator. As the most time-dominant step of the diagonalization can be expressed as a list of dense matrix operations, the DMRG is an appealing candidate to fully utilize the computing power residing in novel kilo-processor architectures. In the paper a smart hybrid CPU-GPU implementation is presented, which exploits the power of both CPU and GPU and tolerates problems exceeding the GPU memory size. Furthermore, a new CUDA kernel has been designed for asymmetric matrix-vector multiplication to accelerate the rest of the diagonalization. Besides the evaluation of the GPU implementation, the practical limits of an FPGA implementation are also discussed.

  14. PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation

    NASA Astrophysics Data System (ADS)

    Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long

    2018-06-01

    We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high performance computer (HPC) systems and threads-oriented programming. PHoToNs adopts a hybrid scheme to compute gravitational force, with the conventional Particle-Mesh (PM) algorithm to compute the long-range force, the Tree algorithm to compute the short-range force, and the direct summation Particle-Particle (PP) algorithm to compute gravity from very close particles. A self-similar space-filling Peano-Hilbert curve is used to decompose the computing domain. Threads programming is advantageously used to flexibly manage the domain communication, PM calculation and synchronization, as well as Dual Tree Traversal on the CPU+MIC platform. PHoToNs scales well, and the efficiency of the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also tested the accuracy of the code against the widely used Gadget-2 and found excellent agreement.
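
    The Peano-Hilbert ordering that underlies this domain decomposition can be illustrated with the classic 2D curve index (the production code works in 3D):

        def xy2d(n, x, y):
            """Map cell (x, y) on an n-by-n grid (n a power of two) to its distance
            along the Hilbert curve; sorting cells by this key yields the
            locality-preserving ordering used to split the domain across ranks."""
            d = 0
            s = n // 2
            while s > 0:
                rx = 1 if (x & s) > 0 else 0
                ry = 1 if (y & s) > 0 else 0
                d += s * s * ((3 * rx) ^ ry)
                if ry == 0:                  # rotate quadrant so segments connect
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                s //= 2
            return d

        cells = [(x, y) for x in range(4) for y in range(4)]
        cells.sort(key=lambda c: xy2d(4, *c))
        print(cells)  # traversal order: neighboring cells stay close in the list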

  15. Two-stage autoignition and edge flames in a high pressure turbulent jet

    DOE PAGES

    Krisman, Alex; Hawkes, Evatt R.; Chen, Jacqueline H.

    2017-07-04

    A three-dimensional direct numerical simulation is conducted for a temporally evolving planar jet of n-heptane at a pressure of 40 atmospheres and in a coflow of air at 1100 K. At these conditions, n-heptane exhibits a two-stage ignition due to low- and high-temperature chemistry, which is reproduced by the global chemical model used in this study. The results show that ignition occurs in several overlapping stages and multiple modes of combustion are present. Low-temperature chemistry precedes the formation of multiple spatially localised high-temperature chemistry autoignition events, referred to as 'kernels'. These kernels form within the shear layer and core of the jet at compositions with short homogeneous ignition delay times and in locations experiencing low scalar dissipation rates. An analysis of the kernel histories shows that the ignition delay time is correlated with the mixing rate history and that the ignition kernels tend to form in vortically dominated regions of the domain, as corroborated by an analysis of the topology of the velocity gradient tensor. Once ignited, the kernels grow rapidly and establish edge flames where they envelop the stoichiometric isosurface. A combination of kernel formation (autoignition) and the growth of existing burning surface (via edge-flame propagation) contributes to the overall ignition process. In conclusion, an analysis of propagation speeds evaluated on the burning surface suggests that although the edge-flame speed is promoted by the autoignitive conditions due to an increase in the local laminar flame speed, edge-flame propagation of existing burning surfaces (triggered initially by isolated autoignition kernels) is the dominant ignition mode in the present configuration.

  16. Terahertz bandwidth all-optical Hilbert transformers based on long-period gratings.

    PubMed

    Ashrafi, Reza; Azaña, José

    2012-07-01

    A novel, all-optical design for implementing terahertz (THz) bandwidth real-time Hilbert transformers is proposed and numerically demonstrated. An all-optical Hilbert transformer can be implemented using a uniform-period long-period grating (LPG) with a properly designed amplitude-only grating apodization profile, incorporating a single π-phase shift in the middle of the grating length. The designed LPG-based Hilbert transformers can be practically implemented using either fiber-optic or integrated-waveguide technologies. As a generalization, photonic fractional Hilbert transformers are also designed based on the same optical platform. In this general case, the resulting LPGs have multiple π-phase shifts along the grating length. Our numerical simulations confirm that all-optical Hilbert transformers capable of processing arbitrary optical signals with bandwidths well in the THz range can be implemented using feasible fiber/waveguide LPG designs.
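
    In signal-processing terms, the fractional Hilbert transformer realized by these gratings applies the frequency response H_P(w) = exp(-i P pi/2 sign(w)); the sketch below demonstrates that response digitally on a toy signal (the paper implements it photonically, which this does not model).

        import numpy as np

        def fractional_hilbert(x, P=1.0):
            """Apply H_P(w) = exp(-1j * P * pi/2 * sign(w)) in the frequency domain.
            P = 1 is the conventional Hilbert transform (the single pi phase jump
            realized by a pi-phase-shifted LPG); fractional P gives intermediate
            phase steps (multiple pi-phase shifts along the grating)."""
            X = np.fft.fft(x)
            w = np.fft.fftfreq(len(x))
            H = np.exp(-1j * (P * np.pi / 2) * np.sign(w))
            return np.fft.ifft(X * H)

        t = np.linspace(0, 1, 512, endpoint=False)
        x = np.cos(2 * np.pi * 10 * t)
        y = fractional_hilbert(x, P=1.0)
        print(np.allclose(y.real, np.sin(2 * np.pi * 10 * t), atol=1e-10))  # cos -> sin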

  17. Aeroelastic Flight Data Analysis with the Hilbert-Huang Algorithm

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Prazenica, Chad

    2006-01-01

    This report investigates the utility of the Hilbert Huang transform for the analysis of aeroelastic flight data. It is well known that the classical Hilbert transform can be used for time-frequency analysis of functions or signals. Unfortunately, the Hilbert transform can only be effectively applied to an extremely small class of signals, namely those that are characterized by a single frequency component at any instant in time. The recently-developed Hilbert Huang algorithm addresses the limitations of the classical Hilbert transform through a process known as empirical mode decomposition. Using this approach, the data is filtered into a series of intrinsic mode functions, each of which admits a well-behaved Hilbert transform. In this manner, the Hilbert Huang algorithm affords time-frequency analysis of a large class of signals. This powerful tool has been applied in the analysis of scientific data, structural system identification, mechanical system fault detection, and even image processing. The purpose of this report is to demonstrate the potential applications of the Hilbert Huang algorithm for the analysis of aeroelastic systems, with improvements such as localized online processing. Applications for correlations between system input and output, and amongst output sensors, are discussed to characterize the time-varying amplitude and frequency correlations present in the various components of multiple data channels. Online stability analyses and modal identification are also presented. Examples are given using aeroelastic test data from the F-18 Active Aeroelastic Wing airplane, an Aerostructures Test Wing, and pitch plunge simulation.

  18. Aeroelastic Flight Data Analysis with the Hilbert-Huang Algorithm

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Prazenica, Chad

    2005-01-01

    This paper investigates the utility of the Hilbert-Huang transform for the analysis of aeroelastic flight data. It is well known that the classical Hilbert transform can be used for time-frequency analysis of functions or signals. Unfortunately, the Hilbert transform can only be effectively applied to an extremely small class of signals, namely those that are characterized by a single frequency component at any instant in time. The recently-developed Hilbert-Huang algorithm addresses the limitations of the classical Hilbert transform through a process known as empirical mode decomposition. Using this approach, the data is filtered into a series of intrinsic mode functions, each of which admits a well-behaved Hilbert transform. In this manner, the Hilbert-Huang algorithm affords time-frequency analysis of a large class of signals. This powerful tool has been applied in the analysis of scientific data, structural system identification, mechanical system fault detection, and even image processing. The purpose of this paper is to demonstrate the potential applications of the Hilbert-Huang algorithm for the analysis of aeroelastic systems, with improvements such as localized/online processing. Applications for correlations between system input and output, and amongst output sensors, are discussed to characterize the time-varying amplitude and frequency correlations present in the various components of multiple data channels. Online stability analyses and modal identification are also presented. Examples are given using aeroelastic test data from the F/A-18 Active Aeroelastic Wing aircraft, an Aerostructures Test Wing, and pitch-plunge simulation.

  19. Hilbert's sixth problem and the failure of the Boltzmann to Euler limit

    NASA Astrophysics Data System (ADS)

    Slemrod, Marshall

    2018-04-01

    This paper addresses the main issue of Hilbert's sixth problem, namely the rigorous passage of solutions to the mesoscopic Boltzmann equation to macroscopic solutions of the Euler equations of compressible gas dynamics. The results of the paper are that (i) in general Hilbert's program will fail because of the appearance of van der Waals-Korteweg capillarity terms in a macroscopic description of motion of a gas, and (ii) the van der Waals-Korteweg theory itself might satisfy Hilbert's quest for a map from the `atomistic view' to the laws of motion of continua. This article is part of the theme issue `Hilbert's sixth problem'.

  20. Instantaneous frequency time analysis of physiology signals: The application of pregnant women’s radial artery pulse signals

    NASA Astrophysics Data System (ADS)

    Su, Zhi-Yuan; Wang, Chuan-Chen; Wu, Tzuyin; Wang, Yeng-Tseng; Tang, Feng-Cheng

    2008-01-01

    This study used the Hilbert-Huang transform, a recently developed instantaneous frequency-time analysis, to analyze radial artery pulse signals taken from women in their 36th week of pregnancy and after pregnancy. The acquired instantaneous frequency-time spectrum (Hilbert spectrum) is further compared with the Morlet wavelet spectrum. Results indicate that the Hilbert spectrum is especially suitable for analyzing the time series of non-stationary radial artery pulse signals since, in the Hilbert-Huang transform, signals are decomposed into different mode functions in accordance with the signal’s local time scale. Therefore, the Hilbert spectrum contains more detailed information than the Morlet wavelet spectrum. From the Hilbert spectrum, we can see that radial artery pulse signals taken from women in their 36th week of pregnancy and after pregnancy have different patterns. This approach could be applied to facilitate non-invasive diagnosis of the fetus’ physiological signals in the future.

  1. Dynamic Experiment Design Regularization Approach to Adaptive Imaging with Array Radar/SAR Sensor Systems

    PubMed Central

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metric of the solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859

  2. Quantum probability and Hilbert's sixth problem

    NASA Astrophysics Data System (ADS)

    Accardi, Luigi

    2018-04-01

    With the birth of quantum mechanics, the two disciplines that Hilbert proposed to axiomatize, probability and mechanics, became entangled and a new probabilistic model arose in addition to the classical one. Thus, to meet Hilbert's challenge, an axiomatization should account deductively for the basic features of all three disciplines. This goal was achieved within the framework of quantum probability. The present paper surveys the quantum probabilistic axiomatization. This article is part of the themed issue `Hilbert's sixth problem'.

  3. Reproducing Kernel Particle Method in Plasticity of Pressure-Sensitive Material with Reference to Powder Forming Process

    NASA Astrophysics Data System (ADS)

    Khoei, A. R.; Samimi, M.; Azami, A. R.

    2007-02-01

    In this paper, an application of the reproducing kernel particle method (RKPM) is presented for the plasticity behavior of pressure-sensitive materials. The RKPM technique is implemented in large deformation analysis of the powder compaction process. The RKPM shape function and its derivatives are constructed by imposing the consistency conditions. The essential boundary conditions are enforced by the use of the penalty approach. The support of the RKPM shape function covers the same set of particles during powder compaction, hence no instability is encountered in the large deformation computation. A double-surface plasticity model is developed for the numerical simulation of pressure-sensitive material. The plasticity model includes a failure surface and an elliptical cap, which closes the open space between the failure surface and the hydrostatic axis. The moving cap expands in the stress space according to a specified hardening rule. The cap model is presented within the framework of large deformation RKPM analysis in order to predict the non-uniform relative density distribution during powder die pressing. Numerical computations are performed to demonstrate the applicability of the algorithm in modeling powder forming processes, and the results are compared to those obtained from finite element simulation to demonstrate the accuracy of the proposed model.
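
    The construction of RKPM shape functions by imposing consistency conditions can be sketched in 1D as follows; the Gaussian window and its support size are illustrative choices, not those of the paper. The prints verify the two consistency conditions for a linear basis.

        import numpy as np

        def rkpm_shape(x, nodes, a=0.35):
            """1D RKPM shape functions with a linear basis H = [1, x - xI].
            The correction C(x) = H(0)^T M(x)^{-1} H(x - xI) enforces the
            reproducing (consistency) conditions on the window function."""
            window = lambda r: np.exp(-(r / a) ** 2) * (np.abs(r) < 2 * a)
            phi = window(x - nodes)                            # window at each node
            H = np.vstack([np.ones_like(nodes), x - nodes])    # basis at x - xI
            M = (H * phi) @ H.T                                # 2x2 moment matrix
            c = np.linalg.solve(M, np.array([1.0, 0.0]))       # H(0) = [1, 0]
            return (c @ H) * phi                               # corrected shape functions

        nodes = np.linspace(0.0, 1.0, 11)
        psi = rkpm_shape(0.37, nodes)
        print(psi.sum())     # ~1.0: partition of unity (0th-order consistency)
        print(psi @ nodes)   # ~0.37: exact reproduction of linear fields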

  4. Direct Images, Fields of Hilbert Spaces, and Geometric Quantization

    NASA Astrophysics Data System (ADS)

    Lempert, László; Szőke, Róbert

    2014-04-01

    Geometric quantization often produces not one Hilbert space to represent the quantum states of a classical system but a whole family H_s of Hilbert spaces, and the question arises if the spaces H_s are canonically isomorphic. Axelrod et al. (J. Diff. Geo. 33:787-902, 1991) and Hitchin (Commun. Math. Phys. 131:347-380, 1990) suggest viewing H_s as fibers of a Hilbert bundle H, introduce a connection on H, and use parallel transport to identify different fibers. Here we explore to what extent this can be done. First we introduce the notion of smooth and analytic fields of Hilbert spaces, and prove that if an analytic field over a simply connected base is flat, then it corresponds to a Hermitian Hilbert bundle with a flat connection and path independent parallel transport. Second we address a general direct image problem in complex geometry: pushing forward a Hermitian holomorphic vector bundle along a non-proper map. We give criteria for the direct image to be a smooth field of Hilbert spaces. Third we consider quantizing an analytic Riemannian manifold M by endowing TM with the family of adapted Kähler structures from Lempert and Szőke (Bull. Lond. Math. Soc. 44:367-374, 2012). This leads to a direct image problem. When M is homogeneous, we prove the direct image is an analytic field of Hilbert spaces. For certain such M—but not all—the direct image is even flat; which means that in those cases quantization is unique.

  5. Experimental pencil beam kernels derivation for 3D dose calculation in flattening filter free modulated fields

    NASA Astrophysics Data System (ADS)

    Diego Azcona, Juan; Barbés, Benigno; Wang, Lilie; Burguete, Javier

    2016-01-01

    This paper presents a method to obtain the pencil-beam kernels that characterize a megavoltage photon beam generated in a flattening filter free (FFF) linear accelerator (linac) by deconvolution from experimental measurements at different depths. The formalism is applied to perform independent dose calculations in modulated fields. In our previous work a formalism was developed for ideal flat fluences exiting the linac's head. That framework could not deal with spatially varying energy fluences, so any deviation from the ideal flat fluence was treated as a perturbation. The present work addresses the necessity of implementing an exact analysis where any spatially varying fluence can be used, such as those encountered in FFF beams. A major improvement introduced here is to handle the actual fluence in the deconvolution procedure. We studied the uncertainties associated with the kernel derivation with this method. Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from two linacs from different vendors, at depths of 5, 10, 15, and 20 cm in polystyrene (RW3 water-equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The 3D kernel for an FFF beam was obtained by deconvolution using the Hankel transform. A correction on the low-dose part of the kernel was performed to reproduce the experimental output factors accurately. Error uncertainty in the kernel derivation procedure was estimated to be within 0.2%. Eighteen modulated fields used clinically in different treatment localizations were irradiated at four measurement depths (a total of fifty-four film measurements). Comparison through the gamma-index to their corresponding calculated absolute dose distributions showed a number of passing points (3%, 3 mm) mostly above 99%. This new procedure is more reliable and robust than the previous one. Its ability to perform accurate independent dose calculations was demonstrated.
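
    A simplified sketch of the deconvolution step is given below. It substitutes a 2D FFT with Wiener regularization for the Hankel-transform formulation actually used in the paper (both exploit dose = fluence convolved with kernel), and uses a synthetic Gaussian kernel to verify the recovery; all values are illustrative.

        import numpy as np

        def derive_kernel_fft(dose_map, fluence_map, eps=1e-3):
            """Recover a pencil-beam kernel from a measured dose plane D and a
            (non-flat, e.g. FFF) fluence F, using D = F (*) k, i.e.
            k = IFFT( FFT(D) conj(FFT(F)) / (|FFT(F)|^2 + eps) ).
            The Wiener term eps suppresses noise amplification; the paper instead
            works with Hankel transforms plus a low-dose-tail correction."""
            Dm, Fm = np.fft.fft2(dose_map), np.fft.fft2(fluence_map)
            K = Dm * np.conj(Fm) / (np.abs(Fm) ** 2 + eps)
            return np.real(np.fft.fftshift(np.fft.ifft2(K)))

        # Toy check: convolve a known Gaussian kernel with a circular field, recover it.
        n = 128
        y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
        fluence = (x ** 2 + y ** 2 < 25 ** 2).astype(float)    # circular field
        true_k = np.exp(-(x ** 2 + y ** 2) / (2 * 3.0 ** 2))
        dose = np.real(np.fft.ifft2(np.fft.fft2(fluence)
                                    * np.fft.fft2(np.fft.ifftshift(true_k))))
        k_est = derive_kernel_fft(dose, fluence)
        print(np.unravel_index(np.argmax(k_est), k_est.shape))  # peak near the center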

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hornung, Richard D.; Hones, Holger E.

    The RAJA Performance Suite is designed to evaluate performance of the RAJA performance portability library on a wide variety of important high performance computing (HPC) algorithmic kernels. These kernels assess compiler optimizations and various parallel programming model backends accessible through RAJA, such as OpenMP, CUDA, etc. The initial version of the suite contains 25 computational kernels, each of which appears in 6 variants: Baseline Sequential, RAJA Sequential, Baseline OpenMP, RAJA OpenMP, Baseline CUDA, RAJA CUDA. All variants of each kernel perform essentially the same mathematical operations and the loop body code for each kernel is identical across all variants. There are a few kernels, such as those that contain reduction operations, that require CUDA-specific coding for their CUDA variants. The actual computer instructions executed, and how they run in parallel, differ depending on the parallel programming model backend used and which optimizations are performed by the compiler used to build the Performance Suite executable. The Suite will be used primarily by RAJA developers to perform regular assessments of RAJA performance across a range of hardware platforms and compilers as RAJA features are being developed. It will also be used by LLNL hardware and software vendor partners in defining requirements for future computing platform procurements and acceptance testing. In particular, the RAJA Performance Suite will be used for compiler acceptance testing of the upcoming CORAL Sierra machine (initial LLNL delivery expected in late 2017/early 2018) and the CORAL-2 procurement. The Suite will also be used to generate concise source code reproducers of compiler and runtime issues we uncover so that we may provide them to relevant vendors to be fixed.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Yongbin; White, R. D.

    In the calculation of the linearized Boltzmann collision operator for an inverse-square force law interaction (Coulomb interaction), F(r) = κ/r², we found that the widely used scattering angle cutoff θ ≥ θ_min is an incorrect practice, since the divergence still exists after the cutoff has been made. When the correct velocity change cutoff |v′ − v| ≥ δ_min is employed, the scattering angle can be integrated. A unified linearized Boltzmann collision operator for both inverse-square force law and rigid-sphere interactions is obtained. Like many other unified quantities such as transition moments, Fokker-Planck expansion coefficients and energy exchange rates obtained recently [Y. B. Chang and L. A. Viehland, AIP Adv. 1, 032128 (2011)], the difference between the two kinds of interactions is characterized by a parameter, γ, which is 1 for rigid-sphere interactions and −3 for inverse-square force law interactions. When the cutoff is removed by setting δ_min = 0, Hilbert's well-known kernel for rigid-sphere interactions is recovered for γ = 1.

  8. Prediction of maize phenotype based on whole-genome single nucleotide polymorphisms using deep belief networks

    NASA Astrophysics Data System (ADS)

    Rachmatia, H.; Kusuma, W. A.; Hasibuan, L. S.

    2017-05-01

    Selection in plant breeding could be more effective and more efficient if it were based on genomic data. Genomic selection (GS) is a new approach for plant-breeding selection that exploits genomic data through a mechanism called genomic prediction (GP). Most GP models use linear methods that ignore the effects of interaction among genes and higher-order nonlinearities. The deep belief network (DBN), one of the architectures in deep learning, is able to model data at a high level of abstraction that involves nonlinear effects of the data. This study implemented a DBN for developing a GP model utilizing whole-genome Single Nucleotide Polymorphisms (SNPs) as data for training and testing. The case study was a set of traits in maize. The maize dataset was acquired from CIMMYT’s (International Maize and Wheat Improvement Center) Global Maize program. Based on Pearson correlation, the DBN outperformed the other methods, reproducing kernel Hilbert space (RKHS) regression, Bayesian LASSO (BL), and best linear unbiased prediction (BLUP), in the case of allegedly non-additive traits. The DBN achieved a correlation of 0.579 within the -1 to 1 range.

  9. Multitask SVM learning for remote sensing data classification

    NASA Astrophysics Data System (ADS)

    Leiva-Murillo, Jose M.; Gómez-Chova, Luis; Camps-Valls, Gustavo

    2010-10-01

    Many remote sensing data processing problems are inherently constituted by several tasks that can be solved either individually or jointly. For instance, each image in a multitemporal classification setting could be taken as an individual task, but its relation to previous acquisitions should be properly considered. In such problems, different modalities of the data (temporal, spatial, angular) give rise to changes between the training and test distributions, which constitutes a difficult learning problem known as covariate shift. Multitask learning methods aim at jointly solving a set of prediction problems in an efficient way by sharing information across tasks. This paper presents a novel kernel method for multitask learning in remote sensing data classification. The proposed method alleviates the dataset shift problem by imposing cross-information in the classifiers through matrix regularization. We consider the support vector machine (SVM) as the core learner, and two regularization schemes are introduced: 1) the Euclidean distance of the predictors in the Hilbert space; and 2) the inclusion of relational operators between tasks. Experiments are conducted on the challenging remote sensing problems of cloud screening from multispectral MERIS images and landmine detection.
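
    One standard way to impose cross-information between tasks through the kernel matrix is the construction sketched below (a common multitask kernel, not necessarily the exact regularization schemes of the paper), used here with a precomputed-kernel SVM on toy shifted acquisitions.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics.pairwise import rbf_kernel

        def multitask_kernel(X1, t1, X2, t2, mu=0.5, gamma=0.1):
            """K((x,t),(x',t')) = k(x,x') * (mu + [t == t']): tasks share a common
            component and keep a task-specific one; mu controls coupling strength."""
            same = (t1[:, None] == t2[None, :]).astype(float)
            return rbf_kernel(X1, X2, gamma=gamma) * (mu + same)

        rng = np.random.default_rng(0)
        # Two toy "acquisitions" (tasks) with a shifted class-conditional distribution.
        Xa = rng.normal(0.0, 1, (100, 5)); ya = (Xa[:, 0] > 0.0).astype(int)
        Xb = rng.normal(0.3, 1, (100, 5)); yb = (Xb[:, 0] > 0.3).astype(int)
        X = np.vstack([Xa, Xb]); y = np.hstack([ya, yb])
        t = np.hstack([np.zeros(100, int), np.ones(100, int)])

        clf = SVC(kernel="precomputed")
        clf.fit(multitask_kernel(X, t, X, t), y)
        print(clf.score(multitask_kernel(X, t, X, t), y))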

  10. Cohomologie des Groupes Localement Compacts et Produits Tensoriels Continus de Representations

    ERIC Educational Resources Information Center

    Guichardet, A.

    1976-01-01

    Contains few and sometimes incomplete proofs on continuous tensor products of Hilbert spaces and of group representations, and on the irreducibility of the latter. Theory of continuous tensor products of Hilbert Spaces is closely related to that of conditionally positive definite functions; it relies on the technique of symmetric Hilbert spaces,…

  11. Dynamical Correspondence in a Generalized Quantum Theory

    NASA Astrophysics Data System (ADS)

    Niestegge, Gerd

    2015-05-01

    In order to figure out why quantum physics needs the complex Hilbert space, many attempts have been made to distinguish the C*-algebras and von Neumann algebras in more general classes of abstractly defined Jordan algebras (JB- and JBW-algebras). One particularly important distinguishing property was identified by Alfsen and Shultz and is the existence of a dynamical correspondence. It reproduces the dual role of the selfadjoint operators as observables and generators of dynamical groups in quantum mechanics. In the paper, this concept is extended to another class of nonassociative algebras, arising from recent studies of the quantum logics with a conditional probability calculus and particularly of those that rule out third-order interference. The conditional probability calculus is a mathematical model of the Lüders-von Neumann quantum measurement process, and third-order interference is a property of the conditional probabilities which was discovered by Sorkin (Mod Phys Lett A 9:3119-3127, 1994) and which is ruled out by quantum mechanics. It is shown then that the postulates that a dynamical correspondence exists and that the square of any algebra element is positive still characterize, in the class considered, those algebras that emerge from the selfadjoint parts of C*-algebras equipped with the Jordan product. Within this class, the two postulates thus result in ordinary quantum mechanics using the complex Hilbert space or, vice versa, a genuine generalization of quantum theory must omit at least one of them.

  12. Quantitative evaluation of first-order retardation corrections to the quarkonium spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brambilla, N.; Prosperi, G.M.

    1992-08-01

    We evaluate numerically first-order retardation corrections for some charmonium and bottomonium masses under the usual assumption of a Bethe-Salpeter purely scalar confinement kernel. The result depends strictly on the use of an additional effective potential to express the corrections (rather than to resort to Kato perturbation theory) and on an appropriate regularization prescription. The kernel has been chosen in order to reproduce in the instantaneous approximation a semirelativistic potential suggested by the Wilson loop method. The calculations are performed for two sets of parameters determined by fits in potential theory. The corrections turn out to be typically of the order of a few hundred MeV and depend on an additional scale parameter introduced in the regularization. A conjecture existing in the literature on the origin of the constant term in the potential is also discussed.

  13. Efficient similarity-based data clustering by optimal object to cluster reallocation.

    PubMed

    Rossignol, Mathias; Lagrange, Mathieu; Cont, Arshia

    2018-01-01

    We present an iterative flat hard clustering algorithm designed to operate on arbitrary similarity matrices, with the only constraint that these matrices be symmetric. Although functionally very close to kernel k-means, our proposal performs a maximization of average intra-class similarity, instead of a squared-distance minimization, in order to remain closer to the semantics of similarities. We show that this approach permits relaxing some conditions on usable affinity matrices, such as positive semi-definiteness, as well as opening possibilities for the computational optimization required for large datasets. Systematic evaluation on a variety of data sets shows that, compared with kernel k-means and spectral clustering methods, the proposed approach gives equivalent or better performance while running much faster. Most notably, it significantly reduces memory access, which makes it a good choice for large data collections. Material enabling the reproducibility of the results is made available online.
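
    The reallocation idea can be sketched as follows, with batch reassignment of each object to the cluster of highest average similarity used as a simple stand-in for the paper's optimal-reallocation rule; the data and similarity function are illustrative.

        import numpy as np

        def similarity_clustering(S, k, iters=50, seed=0):
            """Iterative flat clustering on a symmetric similarity matrix S:
            reassign each object to the cluster with the highest average
            similarity to it, until assignments stop changing."""
            rng = np.random.default_rng(seed)
            labels = rng.integers(0, k, len(S))
            for _ in range(iters):
                # Mean similarity of every object to every current cluster.
                means = np.stack([S[:, labels == c].mean(axis=1) if (labels == c).any()
                                  else np.full(len(S), -np.inf) for c in range(k)], axis=1)
                new = means.argmax(axis=1)
                if (new == labels).all():
                    break
                labels = new
            return labels

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, .3, (30, 2)), rng.normal(3, .3, (30, 2))])
        S = -np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # any symmetric similarity
        print(similarity_clustering(S, 2))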

  14. SH_c realization of minimal model CFT: triality, poset and Burge condition

    NASA Astrophysics Data System (ADS)

    Fukuda, M.; Nakamura, S.; Matsuo, Y.; Zhu, R.-D.

    2015-11-01

    Recently an orthogonal basis of the W_N-algebra (AFLT basis) labeled by N-tuples of Young diagrams was found in the context of 4D/2D duality. Recursion relations among the basis elements are summarized in the form of an algebra SH_c which is universal for any N. We show that it has an S_3 automorphism which is referred to as triality. We study the level-rank duality between minimal models, which is a special example of the automorphism. It is shown that the nonvanishing states in both systems are described by N or M Young diagrams with the rows of boxes appropriately shuffled. The reshuffling of rows implies there exists a partial ordering of the set which labels them. For the simplest example, one can compute the partition functions for the partially ordered set (poset) explicitly, which reproduces the Rogers-Ramanujan identities. We also study the description of minimal models by SH_c. Simple analysis reproduces some known properties of minimal models, the structure of singular vectors and the N-Burge condition in the Hilbert space.

  15. Source imaging of potential fields through a matrix space-domain algorithm

    NASA Astrophysics Data System (ADS)

    Baniamerian, Jamaledin; Oskooi, Behrooz; Fedi, Maurizio

    2017-01-01

    Imaging of potential fields yields a fast 3D representation of the source distribution of the fields. Imaging methods are all based on multiscale methods, which estimate the source parameters of potential fields from a simultaneous analysis of the field at various scales or, in other words, at many altitudes. Accuracy in performing upward continuation and differentiation of the field therefore plays a key role for this class of methods. We here describe an accurate method for performing upward continuation and vertical differentiation in the space domain. We perform a direct discretization of the integral equations for upward continuation and the Hilbert transform; from these equations we then define matrix operators performing the transformation, which are symmetric (upward continuation) or anti-symmetric (differentiation), respectively. Thanks to these properties, just the first row of each matrix needs to be computed, which decreases the computational cost dramatically. Our approach allows a simple procedure, with the advantage of not involving large data extension or tapering, as would instead be required for Fourier-domain computation. It also allows level-to-drape upward continuation and stable differentiation at high frequencies; finally, the upward continuation and differentiation kernels may be merged into a single kernel. The accuracy of our approach is shown to be important for multiscale algorithms, such as the continuous wavelet transform or the DEXP (depth from extreme point method), because border errors, which tend to propagate largely at the largest scales, are radically reduced. The application of our algorithm to synthetic and real-case gravity and magnetic data sets confirms the accuracy of our space-domain strategy over FFT algorithms and standard convolution procedures.
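
    A 1D sketch of this space-domain strategy for upward continuation, assuming profile (2D source) geometry so the continuation kernel is h/(pi*((x-x')^2 + h^2)); only the first row of the symmetric Toeplitz operator is formed explicitly, as described above.

        import numpy as np
        from scipy.linalg import toeplitz

        def upward_continue(profile, dx, h):
            """Space-domain upward continuation of a profile of a 2D potential field:
            U(x, h) = (h/pi) * integral f(x') / ((x - x')^2 + h^2) dx'.
            The discretized operator is symmetric Toeplitz, so only its first
            row is computed and the full matrix follows by symmetry."""
            n = len(profile)
            offsets = np.arange(n) * dx
            row = (h / np.pi) * dx / (offsets ** 2 + h ** 2)  # first row of the operator
            A = toeplitz(row)                                  # symmetric by construction
            return A @ profile

        x = np.linspace(-50, 50, 201)
        field = 1.0 / (x ** 2 + 4.0)     # toy anomaly measured at the original level
        print(upward_continue(field, dx=x[1] - x[0], h=5.0)[95:106])  # smoother, broader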

  16. Hilbert's 'Foundations of Physics': Gravitation and electromagnetism within the axiomatic method

    NASA Astrophysics Data System (ADS)

    Brading, K. A.; Ryckman, T. A.

    2008-01-01

    In November and December 1915, Hilbert presented two communications to the Göttingen Academy of Sciences under the common title 'The Foundations of Physics'. Versions of each eventually appeared in the Nachrichten of the Academy. Hilbert's first communication has received significant reconsideration in recent years, following the discovery of printer's proofs of this paper, dated 6 December 1915. The focus has been primarily on the 'priority dispute' over the Einstein field equations. Our contention, in contrast, is that the discovery of the December proofs makes it possible to see the thematic linkage between the material that Hilbert cut from the published version of the first communication and the content of the second, as published in 1917. The latter has been largely either disregarded or misinterpreted, and our aim is to show that (a) Hilbert's two communications should be regarded as part of a wider research program within the overarching framework of 'the axiomatic method' (as Hilbert expressly stated was the case), and (b) the second communication is a fine and coherent piece of work within this framework, whose principal aim is to address an apparent tension between general invariance and causality (in the precise sense of Cauchy determination), pinpointed in Theorem I of the first communication. This is not the same problem as that found in Einstein's 'hole argument'-something that, we argue, never confused Hilbert.

  17. The eigenstate thermalization hypothesis in constrained Hilbert spaces: A case study in non-Abelian anyon chains

    NASA Astrophysics Data System (ADS)

    Chandran, A.; Schulz, Marc D.; Burnell, F. J.

    2016-12-01

    Many phases of matter, including superconductors, fractional quantum Hall fluids, and spin liquids, are described by gauge theories with constrained Hilbert spaces. However, thermalization and the applicability of quantum statistical mechanics has primarily been studied in unconstrained Hilbert spaces. In this paper, we investigate whether constrained Hilbert spaces permit local thermalization. Specifically, we explore whether the eigenstate thermalization hypothesis (ETH) holds in a pinned Fibonacci anyon chain, which serves as a representative case study. We first establish that the constrained Hilbert space admits a notion of locality by showing that the influence of a measurement decays exponentially in space. This suggests that the constraints are no impediment to thermalization. We then provide numerical evidence that ETH holds for the diagonal and off-diagonal matrix elements of various local observables in a generic disorder-free nonintegrable model. We also find that certain nonlocal observables obey ETH.

  18. Acoustical Applications of the HHT Method

    NASA Technical Reports Server (NTRS)

    Huang, Norden E.

    2003-01-01

    A document discusses applications of a method based on the Hilbert-Huang transform (HHT). The method was described, without the HHT name, in Analyzing Time Series Using EMD and Hilbert Spectra (GSC-13817), NASA Tech Briefs, Vol. 24, No. 10 (October 2000), page 63. To recapitulate: The method is especially suitable for analyzing time-series data that represent nonstationary and nonlinear physical phenomena. The method involves the empirical mode decomposition (EMD), in which a complicated signal is decomposed into a finite number of functions, called intrinsic mode functions (IMFs), that admit well-behaved Hilbert transforms. The HHT consists of the combination of EMD and Hilbert spectral analysis.

  19. Hilbert complexes of nonlinear elasticity

    NASA Astrophysics Data System (ADS)

    Angoshtari, Arzhang; Yavari, Arash

    2016-12-01

    We introduce some Hilbert complexes involving second-order tensors on flat compact manifolds with boundary that describe the kinematics and the kinetics of motion in nonlinear elasticity. We then use the general framework of Hilbert complexes to write Hodge-type and Helmholtz-type orthogonal decompositions for second-order tensors. As some applications of these decompositions in nonlinear elasticity, we study the strain compatibility equations of linear and nonlinear elasticity in the presence of Dirichlet boundary conditions and the existence of stress functions on non-contractible bodies. As an application of these Hilbert complexes in computational mechanics, we briefly discuss the derivation of a new class of mixed finite element methods for nonlinear elasticity.

  20. Inverse scattering transform and soliton classification of the coupled modified Korteweg-de Vries equation

    NASA Astrophysics Data System (ADS)

    Wu, Jianping; Geng, Xianguo

    2017-12-01

    The inverse scattering transform of the coupled modified Korteweg-de Vries equation is studied by the Riemann-Hilbert approach. In the direct scattering process, the spectral analysis of the Lax pair is performed, from which a Riemann-Hilbert problem is established for the equation. In the inverse scattering process, by solving Riemann-Hilbert problems corresponding to the reflectionless cases, three types of multi-soliton solutions are obtained. The multi-soliton classification is based on the zero structures of the Riemann-Hilbert problem. In addition, some figures are given to illustrate the soliton characteristics of the coupled modified Korteweg-de Vries equation.

  1. Application of the Hilbert-Huang Transform to Financial Data

    NASA Technical Reports Server (NTRS)

    Huang, Norden

    2005-01-01

    A paper discusses the application of the Hilbert-Huang transform (HHT) method to time-series financial-market data. The method was described, variously without and with the HHT name, in several prior NASA Tech Briefs articles and supporting documents. To recapitulate: The method is especially suitable for analyzing time-series data that represent nonstationary and nonlinear phenomena including physical phenomena and, in the present case, financial-market processes. The method involves the empirical mode decomposition (EMD), in which a complicated signal is decomposed into a finite number of functions, called "intrinsic mode functions" (IMFs), that admit well-behaved Hilbert transforms. The HHT consists of the combination of EMD and Hilbert spectral analysis. The local energies and the instantaneous frequencies derived from the IMFs through Hilbert transforms can be used to construct an energy-frequency-time distribution, denoted a Hilbert spectrum. The instant paper begins with a discussion of prior approaches to quantification of market volatility, summarizes the HHT method, then describes the application of the method in performing time-frequency analysis of mortgage-market data from the years 1972 through 2000. Filtering by use of the EMD is shown to be useful for quantifying market volatility.

  2. Clifford coherent state transforms on spheres

    NASA Astrophysics Data System (ADS)

    Dang, Pei; Mourão, José; Nunes, João P.; Qian, Tao

    2018-01-01

    We introduce a one-parameter family of transforms, U_t^(m), t > 0, from the Hilbert space of Clifford algebra valued square integrable functions on the m-dimensional sphere, L^2(S^m, dσ_m) ⊗ C_{m+1}, to the Hilbert spaces, ML^2(R^{m+1} ∖ {0}, dμ_t), of solutions of the Euclidean Dirac equation on R^{m+1} ∖ {0} which are square integrable with respect to appropriate measures, dμ_t. We prove that these transforms are unitary isomorphisms of the Hilbert spaces and are extensions of the Segal-Bargmann coherent state transform, U^(1) : L^2(S^1, dσ_1) ⟶ HL^2(C ∖ {0}, dμ), to higher dimensional spheres in the context of Clifford analysis. In Clifford analysis it is natural to replace the analytic continuation from S^m to S_C^m as in (Hall, 1994; Stenzel, 1999; Hall and Mitchell, 2002) by the Cauchy-Kowalewski extension from S^m to R^{m+1} ∖ {0}. One then obtains a unitary isomorphism from an L^2-Hilbert space to a Hilbert space of solutions of the Dirac equation, that is, to a Hilbert space of monogenic functions.

  3. Hybrid Techniques for Quantum Circuit Simulation

    DTIC Science & Technology

    2014-02-01

    Detailed theorems and proofs describing these results are included in our published manuscript [10]. Embedding of stabilizer geometry in the Hilbert space: we also describe how the discrete embedding of stabilizer geometry in Hilbert space complicates several natural geometric tasks. As described ... the Hilbert space in which they are embedded, and that they are arranged in a fairly uniform pattern. These factors suggest that, if one seeks a

  4. Testing the Dimension of Hilbert Spaces

    NASA Astrophysics Data System (ADS)

    Brunner, Nicolas; Pironio, Stefano; Acin, Antonio; Gisin, Nicolas; Méthot, André Allan; Scarani, Valerio

    2008-05-01

    Given a set of correlations originating from measurements on a quantum state of unknown Hilbert space dimension, what is the minimal dimension d necessary to describe such correlations? We introduce the concept of dimension witness to put lower bounds on d. This work represents a first step in a broader research program aiming to characterize Hilbert space dimension in various contexts related to fundamental questions and quantum information applications.

  5. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels

    PubMed Central

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J.

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively “hiding” its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research. PMID:25505378

  6. Appraisal of ALM predictions of turbulent wake features

    NASA Astrophysics Data System (ADS)

    Rocchio, Benedetto; Cilurzo, Lorenzo; Ciri, Umberto; Salvetti, Maria Vittoria; Leonardi, Stefano

    2017-11-01

    Wind turbine blades create a turbulent wake that may persist far downstream, with significant implications on wind farm design and on its power production. The numerical representation of the real blade geometry would lead to simulations beyond the present computational resources. We focus our attention on the Actuator Line Model (ALM), in which the blade is replaced by a rotating line divided into finite segments with representative aerodynamic coefficients. The total aerodynamic force is projected along the computational axis and, to avoid numerical instabilities, it is distributed among the nearest grid points by using a Gaussian regularization kernel. The standard deviation of this kernel is a fundamental parameter that strongly affects the characteristics of the wake. We compare here the wake features obtained in direct numerical simulations of the flow around 2D bodies (a flat plate and an airfoil) modeled using the Immersed Boundary Method with the results of simulations in which the body is modeled by ALM. In particular, we investigate whether the ALM is able to reproduce the mean velocity field and the turbulent kinetic energy in the wake for the considered bodies at low and high angles of attack and how this depends on the choice of the ALM kernel. S. Leonardi was supported by the National Science Foundation, Grant No. 1243482 (the WINDINSPIRE project).
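
    As a toy illustration of the regularization kernel just described (a minimal sketch: the 1D grid, the actuator position, and the choice eps = 2*dx are illustrative assumptions, not settings from the study), the following snippet spreads a point force over grid nodes with a Gaussian kernel and checks that the total force is conserved:

        import numpy as np

        def gaussian_kernel(d, eps):
            # 1D regularization kernel; integrates to one over the real line
            return np.exp(-(d / eps) ** 2) / (eps * np.sqrt(np.pi))

        x = np.linspace(0.0, 1.0, 201)                      # computational grid
        dx = x[1] - x[0]
        x_blade, F = 0.5, 1.0                               # actuator point, total force
        f = F * gaussian_kernel(x - x_blade, eps=2.0 * dx)  # force density on the grid
        print(f.sum() * dx)                                 # ~= F: total force is conserved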

  7. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels.

    PubMed

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.

  8. H-SLAM: Rao-Blackwellized Particle Filter SLAM Using Hilbert Maps.

    PubMed

    Vallicrosa, Guillem; Ridao, Pere

    2018-05-01

    Occupancy Grid maps provide a probabilistic representation of space which is important for a variety of robotic applications like path planning and autonomous manipulation. In this paper, a SLAM (Simultaneous Localization and Mapping) framework capable of obtaining this representation online is presented. The H-SLAM (Hilbert Maps SLAM) is based on the Hilbert Map representation and uses a Particle Filter to represent the robot state. Hilbert Maps offer a continuous probabilistic representation with a small memory footprint. We present a series of experimental results carried out both in simulation and with real AUVs (Autonomous Underwater Vehicles). These results demonstrate that our approach represents the environment more consistently while remaining capable of running online.
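
    The core of the Hilbert map idea can be sketched in a few lines: occupancy is a logistic classifier acting on random Fourier features that approximate an RBF kernel. The sketch below is a hedged illustration with synthetic points; the kernel width gamma and the feature count D are assumptions, and this is not the authors' implementation.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 10, size=(500, 2))            # 2D sample locations
        y = (X[:, 0] > 5).astype(int)                    # toy occupancy labels

        gamma, D = 0.5, 200                              # kernel width, feature count
        W = rng.normal(0, np.sqrt(2 * gamma), (2, D))    # random projection directions
        b = rng.uniform(0, 2 * np.pi, D)

        def features(P):
            # random Fourier features approximating exp(-gamma * ||x - y||^2)
            return np.sqrt(2.0 / D) * np.cos(P @ W + b)

        clf = LogisticRegression(max_iter=1000).fit(features(X), y)
        p_occ = clf.predict_proba(features(np.array([[7.0, 3.0]])))[:, 1]
        print(p_occ)   # continuous occupancy probability at a query point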

  9. Terahertz bandwidth photonic Hilbert transformers based on synthesized planar Bragg grating fabrication.

    PubMed

    Sima, Chaotan; Gates, J C; Holmes, C; Mennea, P L; Zervas, M N; Smith, P G R

    2013-09-01

    Terahertz bandwidth photonic Hilbert transformers are proposed and experimentally demonstrated. The integrated device is fabricated via a direct UV grating writing technique in a silica-on-silicon platform. The photonic Hilbert transformer operates at bandwidths of up to 2 THz (~16 nm) in the telecom band, a 10-fold greater bandwidth than any previously reported experimental approaches. Achieving this performance requires detailed knowledge of the system transfer function of the direct UV grating writing technique; this allows improved linearity and yields terahertz bandwidth Bragg gratings with improved spectral quality. By incorporating a flat-top reflector and Hilbert grating with a waveguide coupler, an ultrawideband all-optical single-sideband filter is demonstrated.

  10. Hilbert's axiomatic method and Carnap's general axiomatics.

    PubMed

    Stöltzner, Michael

    2015-10-01

    This paper compares the axiomatic method of David Hilbert and his school with Rudolf Carnap's general axiomatics that was developed in the late 1920s, and that influenced his understanding of logic of science throughout the 1930s, when his logical pluralism developed. The distinct perspectives become visible most clearly in how Richard Baldus, along the lines of Hilbert, and Carnap and Friedrich Bachmann analyzed the axiom system of Hilbert's Foundations of Geometry—the paradigmatic example for the axiomatization of science. Whereas Hilbert's axiomatic method started from a local analysis of individual axiom systems in which the foundations of mathematics as a whole entered only when establishing the system's consistency, Carnap and his Vienna Circle colleague Hans Hahn instead advocated a global analysis of axiom systems in general. A primary goal was to evade, or formalize ex post, mathematicians' 'material' talk about axiom systems for such talk was held to be error-prone and susceptible to metaphysics. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. The place of probability in Hilbert's axiomatization of physics, ca. 1900-1928

    NASA Astrophysics Data System (ADS)

    Verburgt, Lukas M.

    2016-02-01

    Although it has become commonplace to refer to the 'sixth problem' of Hilbert's (1900) Paris lecture as the starting point for modern axiomatized probability theory, his own views on probability have received comparatively little explicit attention. The central aim of this paper is to provide a detailed account of this topic in light of the central observation that the development of Hilbert's project of the axiomatization of physics went hand-in-hand with a redefinition of the status of probability theory and the meaning of probability. Where Hilbert first regarded the theory as a mathematizable physical discipline and later approached it as a 'vague' mathematical application in physics, he eventually understood probability, first, as a feature of human thought and, then, as an implicitly defined concept without a fixed physical interpretation. It thus becomes possible to suggest that Hilbert came to question, from the early 1920s on, the very possibility of achieving the goal of the axiomatization of probability as described in the 'sixth problem' of 1900.

  12. A combined approach for weak fault signature extraction of rolling element bearing using Hilbert envelop and zero frequency resonator

    NASA Astrophysics Data System (ADS)

    Kumar, Keshav; Shukla, Sumitra; Singh, Sachin Kumar

    2018-04-01

    Periodic impulses arise due to localised defects in rolling element bearings. At the early stage of a defect, the weak impulses are immersed in strong machinery vibration. This paper proposes a combined approach based upon the Hilbert envelope and a zero frequency resonator for the detection of the weak periodic impulses. In the first step, the strength of the impulses is increased by taking the normalised Hilbert envelope of the signal. This also helps in better localization of these impulses on the time axis. In the second step, the Hilbert envelope of the signal is passed through the zero frequency resonator for the exact localization of the periodic impulses. The spectrum of the resonator output gives a peak at the fault frequency. A simulated noisy signal with periodic impulses is used to explain the working of the algorithm. The proposed technique is also verified with experimental data. A comparison of the proposed method with a Hilbert-Huang transform (HHT) based method is presented to establish the effectiveness of the proposed method.
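
    A minimal numerical sketch of the two steps (the sampling rate, the trend-removal window, and the synthetic impulse train below are assumptions for illustration, not the paper's data):

        import numpy as np
        from scipy.signal import hilbert, lfilter

        def zero_frequency_resonator(x, fs, win_ms=20):
            y = x.copy()
            for _ in range(2):                               # two cascaded 0 Hz resonators
                y = lfilter([1.0], [1.0, -2.0, 1.0], y)
            w = int(fs * win_ms / 1000)                      # remove the growing trend
            trend = np.convolve(y, np.ones(w) / w, mode="same")
            return y - trend

        fs = 12_000
        x = 0.5 * np.random.randn(fs)                        # one second of noise
        x[::2000] += 5.0                                     # weak periodic fault impulses
        env = np.abs(hilbert(x))
        env = env / env.max()                                # step 1: normalised Hilbert envelope
        zfr = zero_frequency_resonator(env, fs)              # step 2: resonator output
        spectrum = np.abs(np.fft.rfft(zfr))                  # peak near the fault frequency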

  13. Elliptic complexes over C∗-algebras of compact operators

    NASA Astrophysics Data System (ADS)

    Krýsl, Svatopluk

    2016-03-01

    For a C∗-algebra A of compact operators and a compact manifold M, we prove that the Hodge theory holds for A-elliptic complexes of pseudodifferential operators acting on smooth sections of finitely generated projective A-Hilbert bundles over M. For these C∗-algebras and manifolds, we get a topological isomorphism between the cohomology groups of an A-elliptic complex and the space of harmonic elements of the complex. Consequently, the cohomology groups appear to be finitely generated projective C∗-Hilbert modules and especially, Banach spaces. We also prove that in the category of Hilbert A-modules and continuous adjointable Hilbert A-module homomorphisms, the property of a complex of being self-adjoint parametrix possessing characterizes the complexes of Hodge type.

  14. Experimental demonstration of an efficient hybrid equalizer for short-reach optical SSB systems

    NASA Astrophysics Data System (ADS)

    Zhu, Mingyue; Ying, Hao; Zhang, Jing; Yi, Xingwen; Qiu, Kun

    2018-02-01

    We propose an efficient enhanced hybrid equalizer combining feed forward equalization (FFE) with a modified Volterra filter to mitigate the linear and nonlinear interference in short-reach optical single side-band (SSB) systems. The optical SSB signal is generated by a relatively low-cost dual-drive Mach-Zehnder modulator (DDMZM). The two driving signals are a pair of Hilbert signals with Nyquist pulse-shaped four-level pulse amplitude modulation (NPAM-4). After fiber transmission, neighboring received symbols are strongly correlated due to the pulse spreading in the time domain caused by chromatic dispersion (CD). At the receiver equalization stage, the FFE followed by the higher order terms of the modified Volterra filter, which uses the forward and backward neighboring symbols to construct kernels with strong correlation, is used as an enhanced hybrid equalizer to mitigate the inter-symbol interference (ISI) and the nonlinear distortion due to the interaction of CD and square-law detection. We experimentally demonstrate transmission of a 40 Gb/s optical SSB NPAM-4 signal over 80 km of standard single mode fiber (SSMF) with a bit-error-rate (BER) of 7.59 × 10^-4.
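
    A hedged sketch of such a hybrid equalizer (linear FFE taps plus second-order Volterra kernels built from products of neighboring received symbols, trained by LMS; the tap counts and step size mu are illustrative, not the experimental settings):

        import numpy as np

        def hybrid_equalize(rx, tx, n_lin=11, n_nl=5, mu=1e-3):
            """LMS-trained FFE + 2nd-order Volterra equalizer (training mode)."""
            w_lin = np.zeros(n_lin)
            w_nl = np.zeros(n_nl * n_nl)
            half = n_lin // 2
            out = np.zeros_like(rx)
            for k in range(half, len(rx) - half):
                lin = rx[k - half:k + half + 1]                  # linear FFE window
                nl_win = rx[k - n_nl // 2:k - n_nl // 2 + n_nl]
                nl = np.outer(nl_win, nl_win).ravel()            # 2nd-order kernel terms
                y = w_lin @ lin + w_nl @ nl
                e = tx[k] - y                                    # training error
                w_lin += mu * e * lin                            # LMS updates
                w_nl += mu * e * nl
                out[k] = y
            return out, w_lin, w_nl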

  15. Quantum decimation in Hilbert space: Coarse graining without structure

    NASA Astrophysics Data System (ADS)

    Singh, Ashmeet; Carroll, Sean M.

    2018-03-01

    We present a technique to coarse grain quantum states in a finite-dimensional Hilbert space. Our method is distinguished from other approaches by not relying on structures such as a preferred factorization of Hilbert space or a preferred set of operators (local or otherwise) in an associated algebra. Rather, we use the data corresponding to a given set of states, either specified independently or constructed from a single state evolving in time. Our technique is based on principal component analysis (PCA), and the resulting coarse-grained quantum states live in a lower-dimensional Hilbert space whose basis is defined using the underlying (isometric embedding) transformation of the set of fine-grained states we wish to coarse grain. Physically, the transformation can be interpreted to be an "entanglement coarse-graining" scheme that retains most of the global, useful entanglement structure of each state, while needing fewer degrees of freedom for its reconstruction. This scheme could be useful for efficiently describing collections of states whose number is much smaller than the dimension of Hilbert space, or a single state evolving over time.
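
    To make the PCA step concrete, here is a small hedged sketch (the dimensions and the synthetic states are toy assumptions): stack the fine-grained state vectors, take an SVD, and keep the leading right-singular vectors as the basis of the coarse-grained Hilbert space.

        import numpy as np

        rng = np.random.default_rng(1)
        dim, n_states, k = 64, 10, 6
        psi = rng.normal(size=(n_states, dim)) + 1j * rng.normal(size=(n_states, dim))
        psi /= np.linalg.norm(psi, axis=1, keepdims=True)     # normalized fine-grained states

        U, s, Vh = np.linalg.svd(psi, full_matrices=False)
        basis = Vh[:k]                                        # isometric embedding basis
        coarse = psi @ basis.conj().T                         # states in the small space
        recon_err = np.linalg.norm(psi - coarse @ basis, axis=1)  # information lost per state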

  16. Hilbert's sixth problem: between the foundations of geometry and the axiomatization of physics.

    PubMed

    Corry, Leo

    2018-04-28

    The sixth of Hilbert's famous 1900 list of 23 problems was a programmatic call for the axiomatization of the physical sciences. It was naturally and organically rooted at the core of Hilbert's conception of what axiomatization is all about. In fact, the axiomatic method which he applied at the turn of the twentieth century in his famous work on the foundations of geometry originated in a preoccupation with foundational questions related with empirical science in general. Indeed, far from a purely formal conception, Hilbert counted geometry among the sciences with strong empirical content, closely related to other branches of physics and deserving a treatment similar to that reserved for the latter. In this treatment, the axiomatization project was meant to play, in his view, a crucial role. Curiously, and contrary to a once-prevalent view, from all the problems in the list, the sixth is the only one that continually engaged Hilbert's efforts over a very long period of time, at least between 1894 and 1932. This article is part of the theme issue 'Hilbert's sixth problem'. © 2018 The Author(s).

  17. Hilbert's sixth problem: between the foundations of geometry and the axiomatization of physics

    NASA Astrophysics Data System (ADS)

    Corry, Leo

    2018-04-01

    The sixth of Hilbert's famous 1900 list of 23 problems was a programmatic call for the axiomatization of the physical sciences. It was naturally and organically rooted at the core of Hilbert's conception of what axiomatization is all about. In fact, the axiomatic method which he applied at the turn of the twentieth century in his famous work on the foundations of geometry originated in a preoccupation with foundational questions related with empirical science in general. Indeed, far from a purely formal conception, Hilbert counted geometry among the sciences with strong empirical content, closely related to other branches of physics and deserving a treatment similar to that reserved for the latter. In this treatment, the axiomatization project was meant to play, in his view, a crucial role. Curiously, and contrary to a once-prevalent view, from all the problems in the list, the sixth is the only one that continually engaged Hilbert's efforts over a very long period of time, at least between 1894 and 1932. This article is part of the theme issue 'Hilbert's sixth problem'.

  18. Transition probabilities for non self-adjoint Hamiltonians in infinite dimensional Hilbert spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagarello, F., E-mail: fabio.bagarello@unipa.it

    In a recent paper we have introduced several possible inequivalent descriptions of the dynamics and of the transition probabilities of a quantum system when its Hamiltonian is not self-adjoint. Our analysis was carried out in finite dimensional Hilbert spaces. This is useful, but quite restrictive since many physically relevant quantum systems live in infinite dimensional Hilbert spaces. In this paper we consider this situation, and we discuss some applications to well known models, introduced in the literature in recent years: the extended harmonic oscillator, the Swanson model and a generalized version of the Landau levels Hamiltonian. Not surprisingly we will find new interesting features not previously found in finite dimensional Hilbert spaces, useful for a deeper comprehension of this kind of physical systems.

  19. Singular value decomposition for the truncated Hilbert transform

    NASA Astrophysics Data System (ADS)

    Katsevich, A.

    2010-11-01

    Starting from a breakthrough result by Gelfand and Graev, inversion of the Hilbert transform became a very important tool for image reconstruction in tomography. In particular, their result is useful when the tomographic data are truncated and one deals with an interior problem. As was established recently, the interior problem admits a stable and unique solution when some a priori information about the object being scanned is available. The most common approach to solving the interior problem is based on converting it to the Hilbert transform and performing analytic continuation. Depending on what type of tomographic data are available, one gets different Hilbert inversion problems. In this paper, we consider two such problems and establish singular value decomposition for the operators involved. We also propose algorithms for performing analytic continuation.
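
    As a hedged numerical illustration of such an analysis (the grids and intervals below are arbitrary choices, not the configurations studied in the paper), one can discretize a truncated Hilbert transform and inspect the decay of its singular values:

        import numpy as np

        n = 240
        h = 1.0 / n
        y = (np.arange(n) + 0.5) * h                 # object grid: midpoints of [0, 1]
        x = 0.25 + np.arange(n) * h                  # data grid, staggered to avoid the pole
        # kernel of (Hf)(x) = (1/pi) p.v. integral of f(y)/(y - x) dy
        K = h / (np.pi * (y[None, :] - x[:, None]))
        s = np.linalg.svd(K, compute_uv=False)
        print(s[:3], s[-3:])                         # rapid decay signals ill-posed inversion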

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vostokov, S V

    A new method for calculating an explicit form of the Hilbert pairing is proposed. It is used to calculate the Hilbert pairing in a classical local field and in a complete higher-dimensional field. Bibliography: 25 titles.

  1. Efficient approach to include molecular polarizations using charge and atom dipole response kernels to calculate free energy gradients in the QM/MM scheme.

    PubMed

    Asada, Toshio; Ando, Kanta; Sakurai, Koji; Koseki, Shiro; Nagaoka, Masataka

    2015-10-28

    An efficient approach to evaluate free energy gradients (FEGs) within the quantum mechanical/molecular mechanical (QM/MM) framework has been proposed to clarify reaction processes on the free energy surface (FES) in molecular assemblies. The method is based on response kernel approximations denoted as the charge and the atom dipole response kernel (CDRK) model that include explicitly induced atom dipoles. The CDRK model was able to reproduce polarization effects for both electrostatic interactions between QM and MM regions and internal energies in the QM region obtained by conventional QM/MM methods. In contrast to charge response kernel (CRK) models, CDRK models could be applied to various kinds of molecules, even linear or planar molecules, without using imaginary interaction sites. Use of the CDRK model enabled us to obtain FEGs on QM atoms in significantly reduced computational time. It was also clearly demonstrated that the time development of QM forces of the solvated propylene carbonate radical cation (PC˙(+)) provided reliable results for a 1 ns molecular dynamics (MD) simulation, which were quantitatively in good agreement with expensive QM/MM results. Using FEG and nudged elastic band (NEB) methods, we found two optimized reaction paths on the FES for decomposition reactions that generate CO2 molecules from PC˙(+), a reaction known as one of the degradation mechanisms in lithium-ion batteries. Both of these reactions proceed through an identical intermediate structure whose molecular dipole moment is larger than that of the reactant, so that it is stabilized in the solvent, which has a high relative dielectric constant. Thus, in order to prevent decomposition reactions, PC˙(+) should be modified to have a smaller dipole moment along the two reaction paths.

  2. On the physical Hilbert space of loop quantum cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noui, Karim; Perez, Alejandro; Vandersloot, Kevin

    2005-02-15

    In this paper we present a model of Riemannian loop quantum cosmology with a self-adjoint quantum scalar constraint. The physical Hilbert space is constructed using refined algebraic quantization. When matter is included in the form of a cosmological constant, the model is exactly solvable and we show explicitly that the physical Hilbert space is separable, consisting of a single physical state. We extend the model to the Lorentzian sector and discuss important implications for standard loop quantum cosmology.

  3. Experimental Issues in Coherent Quantum-State Manipulation of Trapped Atomic Ions

    DTIC Science & Technology

    1998-05-01

    in Hilbert space and almost always precludes the existence of "large" Schrödinger-cat-like states except on extremely short time scales. A ... Hamiltonian H_ideal operates on the Hilbert space formed by the |↓⟩_l and |↑⟩_l states of the L qubits. In practice, for the case of trapped ions, the ... auxiliary state (Sec. 3.3). If decoherence mechanisms cause other states to be populated, the Hilbert space must be expanded. Although more streamlined

  4. An Efficient Multiparty Quantum Secret Sharing Protocol Based on Bell States in the High Dimension Hilbert Space

    NASA Astrophysics Data System (ADS)

    Gao, Gan; Wang, Li-Ping

    2010-11-01

    We propose a quantum secret sharing protocol in which Bell states in a high dimension Hilbert space are employed. The biggest advantage of our protocol is its high source capacity. Compared with previous secret sharing protocols, ours has higher controlling efficiency. In addition, as decoy states in the high dimension Hilbert space are used, we need not destroy quantum entanglement to check the channel security.

  5. The role of Tre6P and SnRK1 in maize early kernel development and events leading to stress-induced kernel abortion.

    PubMed

    Bledsoe, Samuel W; Henry, Clémence; Griffiths, Cara A; Paul, Matthew J; Feil, Regina; Lunn, John E; Stitt, Mark; Lagrimini, L Mark

    2017-04-12

    Drought stress during flowering is a major contributor to yield loss in maize. Genetic and biotechnological improvement in yield sustainability requires an understanding of the mechanisms underpinning yield loss. Sucrose starvation has been proposed as the cause for kernel abortion; however, potential targets for genetic improvement have not been identified. Field and greenhouse drought studies with maize are expensive and it can be difficult to reproduce results; therefore, an in vitro kernel culture method is presented as a proxy for drought stress occurring at the time of flowering in maize (3 days after pollination). This method is used to focus on the effects of drought on kernel metabolism, and the role of trehalose 6-phosphate (Tre6P) and the sucrose non-fermenting-1-related kinase (SnRK1) as potential regulators of this response. A precipitous drop in Tre6P is observed during the first two hours after removing the kernels from the plant, and the resulting changes in transcript abundance are indicative of an activation of SnRK1, and an immediate shift from anabolism to catabolism. Once Tre6P levels are depleted to below 1 nmol·g⁻¹ FW in the kernel, SnRK1 remained active throughout the 96 h experiment, regardless of the presence or absence of sucrose in the medium. Recovery on sucrose enriched medium results in the restoration of sucrose synthesis and glycolysis. Biosynthetic processes including the citric acid cycle and protein and starch synthesis are inhibited by excision, and do not recover even after the re-addition of sucrose. It is also observed that excision induces the transcription of the sugar transporters SUT1 and SWEET1, the sucrose hydrolyzing enzymes CELL WALL INVERTASE 2 (INCW2) and SUCROSE SYNTHASE 1 (SUSY1), the class II TREHALOSE PHOSPHATE SYNTHASES (TPS), TREHALASE (TRE), and TREHALOSE PHOSPHATE PHOSPHATASE (ZmTPPA.3), previously shown to enhance drought tolerance (Nuccio et al., Nat Biotechnol (October 2014):1-13, 2015). The impact of kernel excision from the ear triggers a cascade of events starting with the precipitous drop in Tre6P levels. It is proposed that the removal of Tre6P suppression of SnRK1 activity results in transcription of putative SnRK1 target genes, and the metabolic transition from biosynthesis to catabolism. This highlights the importance of Tre6P in the metabolic response to starvation. We also present evidence that sugars can mediate the activation of SnRK1. The precipitous drop in Tre6P corresponds to a large increase in transcription of ZmTPPA.3, indicating that this specific enzyme may be responsible for the de-phosphorylation of Tre6P. The high levels of Tre6P in the immature embryo are likely important for preventing kernel abortion.

  6. BPS counting for knots and combinatorics on words

    NASA Astrophysics Data System (ADS)

    Kucharski, Piotr; Sułkowski, Piotr

    2016-11-01

    We discuss relations between quantum BPS invariants defined in terms of a product decomposition of certain series, and difference equations (quantum A-polynomials) that annihilate such series. We construct combinatorial models whose structure is encoded in the form of such difference equations, and whose generating functions (Hilbert-Poincaré series) are solutions to those equations and reproduce generating series that encode BPS invariants. Furthermore, BPS invariants in question are expressed in terms of Lyndon words in an appropriate language, thereby relating counting of BPS states to the branch of mathematics referred to as combinatorics on words. We illustrate these results in the framework of colored extremal knot polynomials: among others we determine dual quantum extremal A-polynomials for various knots, present associated combinatorial models, find corresponding BPS invariants (extremal Labastida-Mariño-Ooguri-Vafa invariants) and discuss their integrality.
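
    To make the combinatorial side concrete, here is a small self-contained sketch (the standard Duval algorithm, not code from the paper) that enumerates the Lyndon words underlying such BPS counts:

        def lyndon_words(k, n):
            """Generate all Lyndon words of length <= n over {0, ..., k-1} (Duval)."""
            w = [-1]
            while w:
                w[-1] += 1
                yield w[:]
                m = len(w)
                while len(w) < n:            # extend w periodically to length n
                    w.append(w[-m])
                while w and w[-1] == k - 1:  # discard trailing maximal letters
                    w.pop()

        print(["".join(map(str, u)) for u in lyndon_words(2, 4)])
        # ['0', '0001', '001', '0011', '01', '011', '0111', '1']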

  7. Measurements and mathematical formalism of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Slavnov, D. A.

    2007-03-01

    A scheme for constructing quantum mechanics is given that does not have Hilbert space and linear operators as its basic elements. Instead, a version of algebraic approach is considered. Elements of a noncommutative algebra (observables) and functionals on this algebra (elementary states) associated with results of single measurements are used as primary components of the scheme. On the one hand, it is possible to use within the scheme the formalism of the standard (Kolmogorov) probability theory, and, on the other hand, it is possible to reproduce the mathematical formalism of standard quantum mechanics, and to study the limits of its applicability. A short outline is given of the necessary material from the theory of algebras and probability theory. It is described how the mathematical scheme of the paper agrees with the theory of quantum measurements, and avoids quantum paradoxes.

  8. Projective flatness in the quantisation of bosons and fermions

    NASA Astrophysics Data System (ADS)

    Wu, Siye

    2015-07-01

    We compare the quantisation of linear systems of bosons and fermions. We recall the appearance of projectively flat connection and results on parallel transport in the quantisation of bosons. We then discuss pre-quantisation and quantisation of fermions using the calculus of fermionic variables. We define a natural connection on the bundle of Hilbert spaces and show that it is projectively flat. This identifies, up to a phase, equivalent spinor representations constructed by various polarisations. We introduce the concept of metaplectic correction for fermions and show that the bundle of corrected Hilbert spaces is naturally flat. We then show that the parallel transport in the bundle of Hilbert spaces along a geodesic is a rescaled projection provided that the geodesic lies within the complement of a cut locus. Finally, we study the bundle of Hilbert spaces when there is a symmetry.

  9. Improved specimen reconstruction by Hilbert phase contrast tomography.

    PubMed

    Barton, Bastian; Joos, Friederike; Schröder, Rasmus R

    2008-11-01

    The low signal-to-noise ratio (SNR) in images of unstained specimens recorded with conventional defocus phase contrast makes it difficult to interpret 3D volumes obtained by electron tomography (ET). The high defocus applied for conventional tilt series generates some phase contrast but leads to an incomplete transfer of object information. For tomography of biological weak-phase objects, optimal image contrast and subsequently an optimized SNR are essential for the reconstruction of details such as macromolecular assemblies at molecular resolution. The problem of low contrast can be partially solved by applying a Hilbert phase plate positioned in the back focal plane (BFP) of the objective lens while recording images in Gaussian focus. Images recorded with the Hilbert phase plate provide optimized positive phase contrast at low spatial frequencies, and the contrast transfer in principle extends to the information limit of the microscope. The antisymmetric Hilbert phase contrast (HPC) can be numerically converted into isotropic contrast, which is equivalent to the contrast obtained by a Zernike phase plate. Thus, in-focus HPC provides optimal structure factor information without limiting effects of the transfer function. In this article, we present the first electron tomograms of biological specimens reconstructed from Hilbert phase plate image series. We outline the technical implementation of the phase plate and demonstrate that the technique is routinely applicable for tomography. A comparison between conventional defocus tomograms and in-focus HPC volumes shows an enhanced SNR and an improved specimen visibility for in-focus Hilbert tomography.
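
    The numerical conversion from antisymmetric Hilbert contrast to isotropic, Zernike-like contrast mentioned above can be sketched as a one-line Fourier filter. The sign convention and the assumption that the phase plate edge lies perpendicular to the x axis are illustrative here, not taken from the article.

        import numpy as np

        def hilbert_to_isotropic(img):
            """Flip the assumed antisymmetric i*sign(kx) transfer of an in-focus HPC image."""
            F = np.fft.fft2(img)
            kx = np.fft.fftfreq(img.shape[1])
            F *= -1j * np.sign(kx)[None, :]      # undo the sign flip across kx = 0
            return np.real(np.fft.ifft2(F))

        demo = np.random.rand(64, 64)            # placeholder for a recorded HPC image
        iso = hilbert_to_isotropic(demo)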

  10. Characterizing resonant component in speech: A different view of tracking fundamental frequency

    NASA Astrophysics Data System (ADS)

    Dong, Bin

    2017-05-01

    Inspired by the nonlinearity, nonstationarity and modulations in speech, the Hilbert-Huang Transform and cyclostationarity analysis are employed in sequence to investigate speech resonance in vowels. The cyclostationarity analysis is not applied directly to the target vowel, but to its intrinsic mode functions one by one. Thanks to the equivalence between the fundamental frequency in speech and the cyclic frequency in cyclostationarity analysis, the modulation intensity distributions of the intrinsic mode functions provide much information for the estimation of the fundamental frequency. To highlight the relationship between frequency and time, the pseudo-Hilbert spectrum is proposed here to replace the Hilbert spectrum. After contrasting the pseudo-Hilbert spectra with the modulation intensity distributions of the intrinsic mode functions, it is found that there is usually one intrinsic mode function which acts as the fundamental component of the vowel. Furthermore, the fundamental frequency of the vowel can be determined by tracing the pseudo-Hilbert spectrum of its fundamental component along the time axis. The latter method is more robust for estimating the fundamental frequency when nonlinear components are present. Two vowels, [a] and [i], taken from the FAU Aibo Emotion Corpus speech database, are used to validate the above findings.

  11. Computer implemented empirical mode decomposition method, apparatus and article of manufacture

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    1999-01-01

    A computer implemented physical signal analysis method is invented. This method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.
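
    A brief numeric sketch of the second step described above: given one IMF, the Hilbert transform yields an analytic signal whose phase derivative is the instantaneous frequency. The chirp below stands in for a real IMF; all parameters are illustrative.

        import numpy as np
        from scipy.signal import hilbert

        fs = 1000
        t = np.arange(0, 1.0, 1.0 / fs)
        imf = np.cos(2 * np.pi * (5.0 * t + 20.0 * t ** 2))   # 5 Hz -> 45 Hz chirp

        z = hilbert(imf)                                      # analytic signal
        amp = np.abs(z)                                       # instantaneous amplitude
        freq = np.gradient(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
        # the (t, freq, amp) triples populate the energy-frequency-time Hilbert spectrum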

  12. Quantum Computation of Fluid Dynamics

    DTIC Science & Technology

    1998-02-16

    state of the quantum computer's "memory". With N qubits, the quantum state |Ψ⟩ resides in an exponentially large Hilbert space with 2^N dimensions. A new ... size of the Hilbert space in which the entanglement occurs. And to make matters worse, even if a quantum computer was constructed with a large number of ... number of qubits; 2^N is the size of the full Hilbert space; 2^B is the size of the on-site submanifold; B is the size of the

  13. Support vector machine based decision for mechanical fault condition monitoring in induction motor using an advanced Hilbert-Park transform.

    PubMed

    Ben Salem, Samira; Bacha, Khmais; Chaari, Abdelkader

    2012-09-01

    In this work we suggest an original fault signature based on an improved combination of Hilbert and Park transforms. Starting from this combination we can create two fault signatures: Hilbert modulus current space vector (HMCSV) and Hilbert phase current space vector (HPCSV). These two fault signatures are subsequently analysed using the classical fast Fourier transform (FFT). The effects of mechanical faults on the HMCSV and HPCSV spectrums are described, and the related frequencies are determined. The magnitudes of spectral components, relative to the studied faults (air-gap eccentricity and outer raceway ball bearing defect), are extracted in order to develop the input vector necessary for learning and testing the support vector machine with an aim of classifying automatically the various states of the induction motor. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
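
    A hedged illustration of the signature construction (one plausible reading; the authors' exact combination of the Hilbert and Park transforms may differ, and the sampling rate, supply frequency, and fault modulation below are synthetic assumptions):

        import numpy as np
        from scipy.signal import hilbert

        fs, f0 = 10_000, 50.0
        t = np.arange(0, 1.0, 1.0 / fs)
        mod = 1.0 + 0.02 * np.sin(2 * np.pi * 7.0 * t)        # toy fault modulation
        ia = mod * np.sin(2 * np.pi * f0 * t)
        ib = mod * np.sin(2 * np.pi * f0 * t - 2 * np.pi / 3)
        ic = mod * np.sin(2 * np.pi * f0 * t + 2 * np.pi / 3)

        i_d = np.sqrt(2 / 3) * (ia - 0.5 * ib - 0.5 * ic)     # Park current space vector
        i_q = (ib - ic) / np.sqrt(2)

        analytic = hilbert(np.hypot(i_d, i_q))
        hmcsv = np.abs(analytic)                              # Hilbert modulus signature
        hpcsv = np.unwrap(np.angle(analytic))                 # Hilbert phase signature
        spec = np.abs(np.fft.rfft(hmcsv - hmcsv.mean()))      # FFT reveals fault-related lines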

  14. The Riemann-Hilbert problem for nonsymmetric systems

    NASA Astrophysics Data System (ADS)

    Greenberg, W.; Zweifel, P. F.; Paveri-Fontana, S.

    1991-12-01

    A comparison of the Riemann-Hilbert problem and the Wiener-Hopf factorization problem arising in the solution of half-space singular integral equations is presented. Emphasis is on the factorization of functions lacking the reflection symmetry usual in transport theory.

  15. A Hilbert Space Representation of Generalized Observables and Measurement Processes in the ESR Model

    NASA Astrophysics Data System (ADS)

    Sozzo, Sandro; Garola, Claudio

    2010-12-01

    The extended semantic realism (ESR) model recently worked out by one of the authors embodies the mathematical formalism of standard (Hilbert space) quantum mechanics in a noncontextual framework, reinterpreting quantum probabilities as conditional instead of absolute. We provide here a Hilbert space representation of the generalized observables introduced by the ESR model that satisfy a simple physical condition, propose a generalization of the projection postulate, and suggest a possible mathematical description of the measurement process in terms of evolution of the compound system made up of the measured system and the measuring apparatus.

  16. Empirical mode decomposition for analyzing acoustical signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2005-01-01

    The present invention discloses a computer implemented signal analysis method through the Hilbert-Huang Transformation (HHT) for analyzing acoustical signals, which are assumed to be nonlinear and nonstationary. The Empirical Mode Decomposition (EMD) and the Hilbert Spectral Analysis (HSA) are used to obtain the HHT. Essentially, the acoustical signal will be decomposed into the Intrinsic Mode Function Components (IMFs). Once the invention decomposes the acoustic signal into its constituting components, all operations such as analyzing, identifying, and removing unwanted signals can be performed on these components. Upon transforming the IMFs into the Hilbert spectrum, the acoustical signal may be compared with other acoustical signals.

  17. Experimental validation of a structural damage detection method based on marginal Hilbert spectrum

    NASA Astrophysics Data System (ADS)

    Banerji, Srishti; Roy, Timir B.; Sabamehr, Ardalan; Bagchi, Ashutosh

    2017-04-01

    Structural Health Monitoring (SHM) using dynamic characteristics of structures is crucial for early damage detection. Damage detection can be performed by capturing and assessing structural responses. Instrumented structures are monitored by analyzing the responses recorded by deployed sensors in the form of signals. Signal processing is an important tool for the processing of the collected data to diagnose anomalies in structural behavior. The vibration signature of the structure varies with damage. In order to attain effective damage detection, preservation of non-linear and non-stationary features of real structural responses is important. Decomposition of the signals into Intrinsic Mode Functions (IMF) by Empirical Mode Decomposition (EMD) and application of Hilbert-Huang Transform (HHT) addresses the time-varying instantaneous properties of the structural response. The energy distribution among different vibration modes of the intact and damaged structure depicted by Marginal Hilbert Spectrum (MHS) detects location and severity of the damage. The present work investigates damage detection analytically and experimentally by employing MHS. The testing of this methodology for different damage scenarios of a frame structure resulted in its accurate damage identification. The sensitivity of Hilbert Spectral Analysis (HSA) is assessed with varying frequencies and damage locations by means of calculating Damage Indices (DI) from the Hilbert spectrum curves of the undamaged and damaged structures.

  18. Empirical mode decomposition apparatus, method and article of manufacture for analyzing biological signals and performing curve fitting

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2004-01-01

    A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.

  19. Empirical mode decomposition apparatus, method and article of manufacture for analyzing biological signals and performing curve fitting

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2002-01-01

    A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.

  20. Computer implemented empirical mode decomposition method apparatus, and article of manufacture utilizing curvature extrema

    NASA Technical Reports Server (NTRS)

    Shen, Zheng (Inventor); Huang, Norden Eh (Inventor)

    2003-01-01

    A computer implemented physical signal analysis method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals based on local extrema and curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.

  1. The canonical quantization of chaotic maps on the torus

    NASA Astrophysics Data System (ADS)

    Rubin, Ron Shai

    In this thesis, a quantization method for classical maps on the torus is presented. The quantum algebra of observables is defined as the quantization of measurable functions on the torus with generators exp(2πix) and exp(2πip). The Hilbert space we use remains the infinite-dimensional L²(ℝ, dx). The dynamics is given by a unitary quantum propagator such that as ħ → 0, the classical dynamics is returned. We construct such a quantization for the Kronecker map, the cat map, the baker's map, the kick map, and the Harper map. For the cat map, we find for the propagator on the plane the same integral kernel conjectured in (HB) using semiclassical methods. We also define a quantum 'integral over phase space' as a trace over the quantum algebra. Using this definition, we proceed to define quantum ergodicity and mixing for maps on the torus. We prove that the quantum cat map and Kronecker map are both ergodic, but only the cat map is mixing, true to its classical origins. For Planck's constant satisfying the integrality condition h = 1/N, with N ∈ ℤ₊, we construct an explicit isomorphism between L²(ℝ, dx) and the Hilbert space of sections of an N-dimensional vector bundle over a θ-torus T² of boundary conditions. The basis functions are distributions in L²(ℝ, dx), given by an infinite comb of Dirac δ-functions. In Bargmann space these distributions take on the form of Jacobi ϑ-functions. Transformations from position to momentum representation can be implemented via a finite N-dimensional discrete Fourier transform. With the θ-torus, we provide a connection between the finite-dimensional quantum maps given in the physics literature and the canonical quantization presented here, which is found in the language of pseudo-differential operators elsewhere in mathematics circles. Specifically, at a fixed point of the dynamics on the θ-torus, we recover a finite-dimensional matrix propagator. We present this connection explicitly for several examples.

  2. Predictive analysis of beer quality by correlating sensory evaluation with higher alcohol and ester production using multivariate statistics methods.

    PubMed

    Dong, Jian-Jun; Li, Qing-Liang; Yin, Hua; Zhong, Cheng; Hao, Jun-Guang; Yang, Pan-Fei; Tian, Yu-Hong; Jia, Shi-Ru

    2014-10-15

    Sensory evaluation is regarded as a necessary procedure to ensure a reproducible quality of beer. Meanwhile, high-throughput analytical methods provide a powerful tool to analyse various flavour compounds, such as higher alcohols and esters. In this study, the relationship between flavour compounds and sensory evaluation was established by non-linear models such as partial least squares (PLS), genetic algorithm back-propagation neural network (GA-BP), and support vector machine (SVM). It was shown that SVM with a Radial Basis Function (RBF) kernel had better prediction accuracy for both the calibration set (94.3%) and the validation set (96.2%) than the other models. Relatively lower prediction abilities were observed for GA-BP (52.1%) and PLS (31.7%). In addition, the kernel function of SVM played an essential role in model training: the prediction accuracy of SVM with a polynomial kernel function was only 32.9%. As a powerful multivariate statistics method, SVM holds great potential to assess beer quality. Copyright © 2014 Elsevier Ltd. All rights reserved.
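
    An illustrative sketch only (entirely synthetic data): regress a sensory score on flavour-compound concentrations with an RBF-kernel SVM, mirroring the comparison reported above. Feature counts and hyperparameters are assumptions.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 8))                         # higher alcohols / esters (a.u.)
        y = X @ rng.normal(size=8) + 0.3 * np.sin(X[:, 0])    # toy sensory score

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma="scale"))
        model.fit(X_tr, y_tr)
        print(model.score(X_te, y_te))                        # R^2 on held-out samples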

  3. Vertical integration from the large Hilbert space

    NASA Astrophysics Data System (ADS)

    Erler, Theodore; Konopka, Sebastian

    2017-12-01

    We develop an alternative description of the procedure of vertical integration based on the observation that amplitudes can be written in BRST exact form in the large Hilbert space. We relate this approach to the description of vertical integration given by Sen and Witten.

  4. Incorporation of memory effects in coarse-grained modeling via the Mori-Zwanzig formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zhen; Bian, Xin; Karniadakis, George Em, E-mail: george-karniadakis@brown.edu

    2015-12-28

    The Mori-Zwanzig formalism for coarse-graining a complex dynamical system typically introduces memory effects. The Markovian assumption of delta-correlated fluctuating forces is often employed to simplify the formulation of coarse-grained (CG) models and numerical implementations. However, when the time scales of a system are not clearly separated, the memory effects become strong and the Markovian assumption becomes inaccurate. To this end, we incorporate memory effects into CG modeling by preserving non-Markovian interactions between CG variables, and the memory kernel is evaluated directly from microscopic dynamics. For a specific example, molecular dynamics (MD) simulations of star polymer melts are performed while the corresponding CG system is defined by grouping many bonded atoms into single clusters. Then, the effective interactions between CG clusters as well as the memory kernel are obtained from the MD simulations. The constructed CG force field with a memory kernel leads to a non-Markovian dissipative particle dynamics (NM-DPD). Quantitative comparisons between the CG models with Markovian and non-Markovian approximations indicate that including the memory effects using NM-DPD yields similar results as the Markovian-based DPD if the system has clear time scale separation. However, for systems with small separation of time scales, NM-DPD can reproduce correct short-time properties that are related to how the system responds to high-frequency disturbances, which cannot be captured by the Markovian-based DPD model.
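
    A toy sketch of the non-Markovian ingredient: estimate a memory kernel from the autocorrelation of fluctuating forces sampled along a microscopic trajectory, in the spirit of the second fluctuation-dissipation theorem, K(t) ~ <dF(t) dF(0)> / (kB T). The synthetic trajectory and all numbers are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        kBT, n = 1.0, 20_000
        dF = rng.normal(size=n)                     # stand-in fluctuating force samples
        dF = np.convolve(dF, np.exp(-np.arange(50) / 10.0), mode="same")  # add correlation

        def autocorr(x, nlag):
            x = x - x.mean()
            m = len(x)
            return np.array([np.dot(x[:m - k], x[k:]) / (m - k) for k in range(nlag)])

        K = autocorr(dF, 200) / kBT                 # memory kernel estimate K(t) on 200 lags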

  5. Reproducibility and Prognosis of Quantitative Features Extracted from CT Images

    PubMed Central

    Balagurunathan, Yoganand; Gu, Yuhua; Wang, Hua; Kumar, Virendra; Grove, Olya; Hawkins, Sam; Kim, Jongphil; Goldgof, Dmitry B; Hall, Lawrence O; Gatenby, Robert A; Gillies, Robert J

    2014-01-01

    We study the reproducibility of quantitative imaging features that are used to describe tumor shape, size, and texture from computed tomography (CT) scans of non-small cell lung cancer (NSCLC). CT images are dependent on various scanning factors. We focus on characterizing image features that are reproducible in the presence of variations due to patient factors and segmentation methods. Thirty-two NSCLC nonenhanced lung CT scans were obtained from the Reference Image Database to Evaluate Response data set. The tumors were segmented using both manual (radiologist expert) and ensemble (software-automated) methods. A set of features (219 three-dimensional and 110 two-dimensional) was computed, and quantitative image features were statistically filtered to identify a subset of reproducible and nonredundant features. The variability in the repeated experiment was measured by the test-retest concordance correlation coefficient (CCC_TreT). The natural range in the features, normalized to variance, was measured by the dynamic range (DR). In this study, there were 29 features across segmentation methods found with CCC_TreT and DR ≥ 0.9 and R²_Bet ≥ 0.95. These reproducible features were tested for predicting radiologist prognostic score; some texture features (run-length and Laws kernels) had an area under the curve of 0.9. The representative features were tested for their prognostic capabilities using an independent NSCLC data set (59 lung adenocarcinomas), where one of the texture features, run-length gray-level nonuniformity, was statistically significant in separating the samples into survival groups (P ≤ .046). PMID:24772210
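
    A sketch of the test-retest filter described above: Lin's concordance correlation coefficient between repeated feature measurements; features with CCC and dynamic range above a cutoff (0.9 in the study) are retained. The data here are synthetic.

        import numpy as np

        def ccc(x, y):
            """Lin's concordance correlation coefficient."""
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(), y.var()
            sxy = ((x - mx) * (y - my)).mean()
            return 2 * sxy / (vx + vy + (mx - my) ** 2)

        rng = np.random.default_rng(0)
        test = rng.normal(size=100)                       # a feature on the first scan
        retest = test + rng.normal(scale=0.1, size=100)   # the same feature, repeat scan
        print(ccc(test, retest))                          # close to 1 => reproducible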

  6. High-speed spectral calibration by complex FIR filter in phase-sensitive optical coherence tomography.

    PubMed

    Kim, Sangmin; Raphael, Patrick D; Oghalai, John S; Applegate, Brian E

    2016-04-01

    Swept-laser sources offer a number of advantages for Phase-sensitive Optical Coherence Tomography (PhOCT). However, inter- and intra-sweep variability leads to calibration errors that adversely affect phase sensitivity. While there are several approaches to overcoming this problem, our preferred method is to simply calibrate every sweep of the laser. This approach offers high accuracy and phase stability at the expense of a substantial processing burden. In this approach, the Hilbert phase of the interferogram from a reference interferometer provides the instantaneous wavenumber of the laser, but is computationally expensive. Fortunately, the Hilbert transform may be approximated by a Finite Impulse-Response (FIR) filter. Here we explore the use of several FIR filter based Hilbert transforms for calibration, explicitly considering the impact of filter choice on phase sensitivity and OCT image quality. Our results indicate that the complex FIR filter approach is the most robust and accurate among those considered. It provides similar image quality and slightly better phase sensitivity than the traditional FFT-IFFT based Hilbert transform while consuming fewer resources in an FPGA implementation. We also explored utilizing the Hilbert magnitude of the reference interferogram to calculate an ideal window function for spectral amplitude calibration. The ideal window function is designed to carefully control sidelobes on the axial point spread function. We found that after a simple chromatic correction, calculating the window function using the complex FIR filter and the reference interferometer gave similar results to window functions calculated using a mirror sample and the FFT-IFFT Hilbert transform. Hence, the complex FIR filter can enable accurate and high-speed calibration of the magnitude and phase of spectral interferograms.
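
    A hedged sketch comparing an FIR approximation of the Hilbert transform with the FFT-based transform. Note this uses a real type-III equiripple design via the Remez exchange, a simpler cousin of the complex FIR filter studied in the paper; the tap count and band edges are illustrative.

        import numpy as np
        from scipy.signal import remez, lfilter, hilbert

        taps = remez(63, [0.05, 0.45], [1], fs=1.0, type='hilbert')  # type-III FIR Hilbert

        t = np.arange(2048)
        x = np.cos(2 * np.pi * 0.1 * t)
        x_fir = lfilter(taps, [1.0], x)          # FIR output, delayed by 31 samples
        x_fft = np.imag(hilbert(x))              # FFT-IFFT Hilbert transform
        # compare after compensating the 31-sample group delay, away from the edges
        err = np.max(np.abs(x_fir[200:-200] - x_fft[200 - 31:-200 - 31]))
        print(err)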

  7. High-speed spectral calibration by complex FIR filter in phase-sensitive optical coherence tomography

    PubMed Central

    Kim, Sangmin; Raphael, Patrick D.; Oghalai, John S.; Applegate, Brian E.

    2016-01-01

    Swept-laser sources offer a number of advantages for Phase-sensitive Optical Coherence Tomography (PhOCT). However, inter- and intra-sweep variability leads to calibration errors that adversely affect phase sensitivity. While there are several approaches to overcoming this problem, our preferred method is to simply calibrate every sweep of the laser. This approach offers high accuracy and phase stability at the expense of a substantial processing burden. In this approach, the Hilbert phase of the interferogram from a reference interferometer provides the instantaneous wavenumber of the laser, but is computationally expensive. Fortunately, the Hilbert transform may be approximated by a Finite Impulse-Response (FIR) filter. Here we explore the use of several FIR filter based Hilbert transforms for calibration, explicitly considering the impact of filter choice on phase sensitivity and OCT image quality. Our results indicate that the complex FIR filter approach is the most robust and accurate among those considered. It provides similar image quality and slightly better phase sensitivity than the traditional FFT-IFFT based Hilbert transform while consuming fewer resources in an FPGA implementation. We also explored utilizing the Hilbert magnitude of the reference interferogram to calculate an ideal window function for spectral amplitude calibration. The ideal window function is designed to carefully control sidelobes on the axial point spread function. We found that after a simple chromatic correction, calculating the window function using the complex FIR filter and the reference interferometer gave similar results to window functions calculated using a mirror sample and the FFT-IFFT Hilbert transform. Hence, the complex FIR filter can enable accurate and high-speed calibration of the magnitude and phase of spectral interferograms. PMID:27446666

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Znojil, Miloslav

    For many quantum models an apparent non-Hermiticity of observables just corresponds to their hidden Hermiticity in another, physical Hilbert space. For these models we show that the existence of observables which are manifestly time-dependent may require the use of a manifestly time-dependent representation of the physical Hilbert space of states.

  9. Computational algebraic geometry of epidemic models

    NASA Astrophysics Data System (ADS)

    Rodríguez Vega, Martín.

    2014-06-01

    Computational Algebraic Geometry is applied to the analysis of various epidemic models for Schistosomiasis and Dengue, both for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly, the analysis is performed using Groebner bases, Hilbert dimension and Hilbert polynomials; these computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed through the changes in the algebraic structure of R0, in the Groebner bases, in the Hilbert dimension, and in the Hilbert polynomials. It is hoped that the results obtained in this paper will prove important for designing control measures against the epidemic diseases described. For future research, the use of algebraic epidemiology is proposed to analyze models for airborne and waterborne diseases.
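
    A minimal sketch of the workflow (using SymPy in place of Maple; the SIR-type system and its symbolic parameters are illustrative assumptions, not the paper's Schistosomiasis or Dengue models): compute a Groebner basis of the steady-state ideal of the ODE right-hand sides.

        import sympy as sp

        S, I, R, beta, gamma, mu = sp.symbols('S I R beta gamma mu', positive=True)
        eqs = [mu - beta * S * I - mu * S,            # dS/dt = 0 at equilibrium
               beta * S * I - (gamma + mu) * I,       # dI/dt = 0
               gamma * I - mu * R]                    # dR/dt = 0
        G = sp.groebner(eqs, S, I, R, order='lex')
        print(G)   # the endemic branch encodes R0 = beta / (gamma + mu)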

  10. Applications of rigged Hilbert spaces in quantum mechanics and signal processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Celeghini, E., E-mail: celeghini@fi.infn.it; Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, Paseo Belén 7, 47011 Valladolid; Gadella, M., E-mail: manuelgadella1@gmail.com

    Simultaneous use of discrete and continuous bases in quantum systems is not possible in the context of Hilbert spaces, but only in the more general structure of rigged Hilbert spaces (RHS). In addition, the relevant operators in RHS (but not in Hilbert space) are a realization of elements of a Lie enveloping algebra and support representations of semigroups. We explicitly construct here basis dependent RHS of the line and half-line and relate them to the universal enveloping algebras of the Weyl-Heisenberg algebra and su(1, 1), respectively. The complete sub-structure of both RHS and of the operators acting on them is obtained from their algebraic structures or from the related fractional Fourier transforms. This allows us to describe both quantum and signal processing states and their dynamics. Two relevant improvements are introduced: (i) new kinds of filters related to restrictions to subspaces and/or the elimination of high frequency fluctuations and (ii) an operatorial structure that, starting from fixed objects, describes their time evolution.

  11. Single and two-shot quantitative phase imaging using Hilbert-Huang Transform based fringe pattern analysis

    NASA Astrophysics Data System (ADS)

    Trusiak, Maciej; Micó, Vicente; Patorski, Krzysztof; García-Monreal, Javier; Sluzewski, Lukasz; Ferreira, Carlos

    2016-08-01

    In this contribution we propose two Hilbert-Huang Transform based algorithms for fast and accurate single-shot and two-shot quantitative phase imaging applicable in both on-axis and off-axis configurations. In the first scheme a single fringe pattern containing information about biological phase-sample under study is adaptively pre-filtered using empirical mode decomposition based approach. Further it is phase demodulated by the Hilbert Spiral Transform aided by the Principal Component Analysis for the local fringe orientation estimation. Orientation calculation enables closed fringes efficient analysis and can be avoided using arbitrary phase-shifted two-shot Gram-Schmidt Orthonormalization scheme aided by Hilbert-Huang Transform pre-filtering. This two-shot approach is a trade-off between single-frame and temporal phase shifting demodulation. Robustness of the proposed techniques is corroborated using experimental digital holographic microscopy studies of polystyrene micro-beads and red blood cells. Both algorithms compare favorably with the temporal phase shifting scheme which is used as a reference method.
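
    A stripped-down one-dimensional sketch of the analytic-signal demodulation step (a crude mean subtraction stands in for the EMD-based pre-filtering, and all fringe parameters are invented):

      import numpy as np
      from scipy.signal import hilbert

      # One row of a simulated off-axis fringe pattern: linear carrier plus
      # a smooth sample phase (all parameters invented).
      x = np.linspace(-1, 1, 1024)
      sample_phase = 3.0 * np.exp(-x**2 / 0.1)
      fringe = 1.0 + 0.8 * np.cos(2 * np.pi * 60 * x + sample_phase)

      # Crude background removal, standing in for the EMD-based pre-filter.
      ac = fringe - fringe.mean()

      # Analytic-signal demodulation: unwrap the phase, remove the carrier.
      phase = np.unwrap(np.angle(hilbert(ac)))
      recovered = phase - 2 * np.pi * 60 * x
      i = slice(50, -50)                       # ignore edge transients
      recovered -= np.mean(recovered[i] - sample_phase[i])
      rms = np.sqrt(np.mean((recovered[i] - sample_phase[i]) ** 2))
      print("rms phase error [rad]:", rms)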

  12. Monopole operators and Hilbert series of Coulomb branches of 3d N = 4 gauge theories

    NASA Astrophysics Data System (ADS)

    Cremonesi, Stefano; Hanany, Amihay; Zaffaroni, Alberto

    2014-01-01

    This paper addresses a long standing problem - to identify the chiral ring and moduli space (i.e. as an algebraic variety) on the Coulomb branch of an N = 4 superconformal field theory in 2+1 dimensions. Previous techniques involved a computation of the metric on the moduli space and/or mirror symmetry. These methods are limited to sufficiently small moduli spaces, with enough symmetry, or to Higgs branches of sufficiently small gauge theories. We introduce a simple formula for the Hilbert series of the Coulomb branch, which applies to any good or ugly three-dimensional N = 4 gauge theory. The formula counts monopole operators which are dressed by classical operators, the Casimir invariants of the residual gauge group that is left unbroken by the magnetic flux. We apply our formula to several classes of gauge theories. Along the way we make various tests of mirror symmetry, successfully comparing the Hilbert series of the Coulomb branch with the Hilbert series of the Higgs branch of the mirror theory.

  13. The Application of Hilbert-Huang Transforms to Meteorological Datasets

    NASA Technical Reports Server (NTRS)

    Duffy, Dean G.

    2003-01-01

    Recently a new spectral technique has been developed for the analysis of aperiodic and nonlinear signals: the Hilbert-Huang transform. This paper shows how these transforms can be used to discover synoptic and climatic features. For sea level data, the transforms capture the oceanic tides as well as large, aperiodic river outflows. In the case of solar radiation, we observe variations in the diurnal and seasonal cycles. Finally, from barographic data, the Hilbert-Huang transform reveals the passage of extratropical cyclones, fronts, and troughs. Thus, this technique can flag significant weather events such as a flood or the passage of a squall line.

  14. Optical Hilbert transform using fiber Bragg gratings

    NASA Astrophysics Data System (ADS)

    Ge, Jing; Wang, Chinhua; Zhu, Xiaojun

    2010-11-01

    In this paper, we demonstrate that a simple and practical phase-shifted fiber Bragg grating (PSFBG) operated in reflection can provide the required spectral response for implementing an all-optical Hilbert transformer (HT), including both integer and fractional orders. The PSFBG consists of two concatenated identical uniform FBGs with a phase shift between them. It can be proved that the phase shift of the FBG and the apodizing profile of the refractive index modulation determine the order of the transform. The device shows a good accuracy in calculating the Hilbert transform of the complex field of arbitrary input optical waveforms when compared with the theoretical results.

  15. Hilbert's Hotel in polarization singularities.

    PubMed

    Wang, Yangyundou; Gbur, Greg

    2017-12-15

    We demonstrate theoretically how the creation of polarization singularities by the evolution of a fractional nonuniform polarization optical element involves the peculiar mathematics of countably infinite sets in the form of "Hilbert's Hotel." Two distinct topological processes can be observed, depending on the structure of the fractional optical element.

  16. Novel microwave photonic fractional Hilbert transformer using a ring resonator-based optical all-pass filter.

    PubMed

    Zhuang, Leimeng; Khan, Muhammad Rezaul; Beeker, Willem; Leinse, Arne; Heideman, René; Roeloffzen, Chris

    2012-11-19

    We propose and demonstrate a novel wideband microwave photonic fractional Hilbert transformer implemented using a ring resonator-based optical all-pass filter. The full programmability of the ring resonator allows variable and arbitrary fractional order of the Hilbert transformer. The performance analysis in both the frequency and time domains validates that the proposed implementation provides a good approximation to an ideal fractional Hilbert transformer. This is also experimentally verified by an electrical S21 response characterization performed on a waveguide realization of a ring resonator. The waveguide-based structure allows the proposed Hilbert transformer to be integrated together with other building blocks on a photonic integrated circuit to create various system-level functionalities for on-chip microwave photonic signal processors. As an example, a circuit consisting of a splitter and a ring resonator has been realized which can perform on-chip phase control of microwave signals generated by means of optical heterodyning, and simultaneous generation of in-phase and quadrature microwave signals for a wide frequency range. For these functionalities, this simple and on-chip solution is considered to be practical, particularly when operating together with a dual-frequency laser. To the best of our knowledge, this is the first on-chip demonstration where ring resonators are employed to perform phase control functionalities for optical generation of microwave signals by means of optical heterodyning.

  17. Master Lovas-Andai and equivalent formulas verifying the 8/33 two-qubit Hilbert-Schmidt separability probability and companion rational-valued conjectures

    NASA Astrophysics Data System (ADS)

    Slater, Paul B.

    2018-04-01

    We begin by investigating relationships between two forms of Hilbert-Schmidt two-rebit and two-qubit "separability functions"—those recently advanced by Lovas and Andai (J Phys A Math Theor 50(29):295303, 2017), and those earlier presented by Slater (J Phys A 40(47):14279, 2007). In the Lovas-Andai framework, the independent variable $\varepsilon \in [0,1]$ is the ratio $\sigma(V)$ of the singular values of the $2 \times 2$ matrix $V = D_2^{1/2} D_1^{-1/2}$ formed from the two $2 \times 2$ diagonal blocks $(D_1, D_2)$ of a $4 \times 4$ density matrix $D = \|\rho_{ij}\|$. In the Slater setting, the independent variable $\mu$ is the diagonal-entry ratio $\sqrt{\rho_{11}\rho_{44}/(\rho_{22}\rho_{33})}$—with, of central importance, $\mu = \varepsilon$ or $\mu = 1/\varepsilon$ when both $D_1$ and $D_2$ are themselves diagonal. Lovas and Andai established that their two-rebit "separability function" $\tilde{\chi}_1(\varepsilon)$ ($\approx \varepsilon$) yields the previously conjectured Hilbert-Schmidt separability probability of 29/64. We are able, in the Slater framework (using cylindrical algebraic decompositions [CAD] to enforce positivity constraints), to reproduce this result. Further, we newly find its two-qubit, two-quater[nionic]-bit and "two-octo[nionic]-bit" counterparts, $\tilde{\chi}_2(\varepsilon) = \frac{1}{3}\varepsilon^2(4-\varepsilon^2)$, $\tilde{\chi}_4(\varepsilon) = \frac{1}{35}\varepsilon^4(15\varepsilon^4 - 64\varepsilon^2 + 84)$ and $\tilde{\chi}_8(\varepsilon) = \frac{1}{1287}\varepsilon^8(1155\varepsilon^8 - 7680\varepsilon^6 + 20160\varepsilon^4 - 25088\varepsilon^2 + 12740)$. These immediately lead to predictions of Hilbert-Schmidt separability/PPT-probabilities of 8/33, 26/323 and 44482/4091349, in full agreement with those of the "concise formula" (Slater in J Phys A 46:445302, 2013), and, additionally, of a "specialized induced measure" formula. Then, we find a Lovas-Andai "master formula," $\tilde{\chi}_d(\varepsilon) = \varepsilon^d \, \Gamma(d+1)^3 \, {}_3\tilde{F}_2\!\left(-\tfrac{d}{2}, \tfrac{d}{2}, d; \tfrac{d}{2}+1, \tfrac{3d}{2}+1; \varepsilon^2\right) / \Gamma\!\left(\tfrac{d}{2}+1\right)^2$, encompassing both even and odd values of $d$. Remarkably, we are able to obtain the $\tilde{\chi}_d(\varepsilon)$ formulas, $d = 1, 2, 4$, applicable to full (9-, 15-, 27-) dimensional sets of density matrices, by analyzing (6-, 9-, 15-) dimensional sets, with not only diagonal $D_1$ and $D_2$, but also an additional pair of nullified entries. Nullification of a further pair still leads to X-matrices, for which a distinctly different, simple Dyson-index phenomenon is noted. C. Koutschan, then, using his HolonomicFunctions program, develops an order-4 recurrence satisfied by the predictions of the several formulas, establishing their equivalence. A two-qubit separability probability of $1 - 256/(27\pi^2)$ is obtained based on the operator monotone function $\sqrt{x}$, with the use of $\tilde{\chi}_2(\varepsilon)$.
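
    The master formula is straightforward to check numerically; a minimal sketch using mpmath's (unregularized) hyp3f2, dividing by the gamma functions of the lower parameters by hand, confirms that d = 2 reproduces the closed form for $\tilde{\chi}_2(\varepsilon)$ above:

      import mpmath as mp

      def chi_master(d, eps):
          """Lovas-Andai master formula; the regularized 3F2 is the
          ordinary 3F2 divided by Gamma(b1)*Gamma(b2)."""
          b1, b2 = d/2 + 1, 3*d/2 + 1
          reg = mp.hyp3f2(-d/2, d/2, d, b1, b2, eps**2) / (mp.gamma(b1) * mp.gamma(b2))
          return eps**d * mp.gamma(d + 1)**3 * reg / mp.gamma(d/2 + 1)**2

      def chi2_closed(eps):
          return eps**2 * (4 - eps**2) / 3

      for eps in (0.2, 0.5, 0.9):
          print(eps, chi_master(2, eps), chi2_closed(eps))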

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suchanecki, Z.; Antoniou, I.; Tasaki, S.

    We consider the problem of rigging for the Koopman operators of the Renyi and the baker maps. We show that the rigged Hilbert space for the Renyi maps has some of the properties of a strict inductive limit and give a detailed description of the rigged Hilbert space for the baker maps. © 1996 American Institute of Physics.

  19. Excitation energies of dissociating H2: A problematic case for the adiabatic approximation of time-dependent density functional theory

    NASA Astrophysics Data System (ADS)

    Gritsenko, O. V.; van Gisbergen, S. J. A.; Görling, A.; Baerends, E. J.

    2000-11-01

    Time-dependent density functional theory (TDDFT) is applied for calculation of the excitation energies of the dissociating H2 molecule. The standard TDDFT method of adiabatic local density approximation (ALDA) totally fails to reproduce the potential curve for the lowest excited singlet ${}^1\Sigma_u^+$ state of H2. Analysis of the eigenvalue problem for the excitation energies as well as direct derivation of the exchange-correlation (xc) kernel $f_{xc}(\mathbf{r},\mathbf{r}',\omega)$ shows that ALDA fails due to breakdown of its simple spatially local approximation for the kernel. The analysis indicates a complex structure of the function $f_{xc}(\mathbf{r},\mathbf{r}',\omega)$, which is revealed in a different behavior of the various matrix elements $K^{xc}_{1c,1c}$ (between the highest occupied Kohn-Sham molecular orbital $\psi_1$ and virtual MOs $\psi_c$) as a function of the bond distance R(H-H). The effect of nonlocality of $f_{xc}(\mathbf{r},\mathbf{r}')$ is modeled by using different expressions for the corresponding matrix elements of different orbitals. Asymptotically corrected ALDA (ALDA-AC) expressions for the matrix elements $K^{xc}_{12,12}(\sigma\tau)$ are proposed, while for other matrix elements the standard ALDA expressions are retained. This approach provides substantial improvement over the standard ALDA. In particular, the ALDA-AC curve for the lowest singlet excitation qualitatively reproduces the shape of the exact curve. It displays a minimum and approaches a relatively large positive energy at large R(H-H). ALDA-AC also produces a substantial improvement for the calculated lowest triplet excitation, which is known to suffer from the triplet instability problem of the restricted KS ground state. Failure of the ALDA for the excitation energies is related to the failure of the local density as well as generalized gradient approximations to reproduce correctly the polarizability of dissociating H2. The expression for the response function $\chi$ is derived to show the origin of the field-counteracting term in the xc potential, which is lacking in the local density and generalized gradient approximations and which is required to obtain a correct polarizability.

  20. Radiomics of CT Features May Be Nonreproducible and Redundant: Influence of CT Acquisition Parameters.

    PubMed

    Berenguer, Roberto; Pastor-Juan, María Del Rosario; Canales-Vázquez, Jesús; Castro-García, Miguel; Villas, María Victoria; Legorburo, Francisco Mansilla; Sabater, Sebastià

    2018-04-24

    Purpose To identify the reproducible and nonredundant radiomics features (RFs) for computed tomography (CT). Materials and Methods Two phantoms were used to test RF reproducibility by using test-retest analysis, by changing the CT acquisition parameters (hereafter, intra-CT analysis), and by comparing five different scanners with the same CT parameters (hereafter, inter-CT analysis). Reproducible RFs were selected by using the concordance correlation coefficient (as a measure of the agreement between variables) and the coefficient of variation (defined as the ratio of the standard deviation to the mean). Redundant features were grouped by using hierarchical cluster analysis. Results A total of 177 RFs including intensity, shape, and texture features were evaluated. The test-retest analysis showed that 91% (161 of 177) of the RFs were reproducible according to concordance correlation coefficient. Reproducibility of intra-CT RFs, based on coefficient of variation, ranged from 89.3% (151 of 177) to 43.1% (76 of 177) where the pitch factor and the reconstruction kernel were modified, respectively. Reproducibility of inter-CT RFs, based on coefficient of variation, also showed large material differences, from 85.3% (151 of 177; wood) to only 15.8% (28 of 177; polyurethane). Ten clusters were identified after the hierarchical cluster analysis and one RF per cluster was chosen as representative. Conclusion Many RFs were redundant and nonreproducible. If all the CT parameters are fixed except field of view, tube voltage, and milliamperage, then the information provided by the analyzed RFs can be summarized in only 10 RFs (each representing a cluster) because of redundancy. © RSNA, 2018 Online supplemental material is available for this article.
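
    The screening logic can be sketched as follows, with an invented feature matrix and illustrative thresholds: Lin's concordance correlation coefficient and the coefficient of variation gauge reproducibility, and hierarchical clustering on a correlation distance groups redundant features.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      def ccc(x, y):
          """Lin's concordance correlation coefficient."""
          mx, my = x.mean(), y.mean()
          cov = np.mean((x - mx) * (y - my))
          return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

      def cv(x):
          """Coefficient of variation: standard deviation over mean."""
          return x.std() / x.mean()

      # Invented feature matrix: 8 repeated acquisitions x 20 features.
      rng = np.random.default_rng(0)
      base = rng.uniform(1, 10, 20)
      feats = base * (1 + 0.05 * rng.standard_normal((8, 20)))
      print("test-retest CCC of feature 0:", ccc(feats[::2, 0], feats[1::2, 0]))

      # Reproducibility screen: keep features with CV below 10%.
      keep = [j for j in range(feats.shape[1]) if cv(feats[:, j]) < 0.10]

      # Redundancy screen: cluster on 1 - |correlation|, one representative
      # per cluster.
      dist = 1 - np.abs(np.corrcoef(feats[:, keep].T))
      condensed = np.clip(dist[np.triu_indices_from(dist, k=1)], 0, None)
      labels = fcluster(linkage(condensed, method='average'),
                        t=0.3, criterion='distance')
      print("feature clusters:", labels)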

  1. Generation of dark hollow beams by using a fractional radial Hilbert transform system

    NASA Astrophysics Data System (ADS)

    Xie, Qiansen; Zhao, Daomu

    2007-07-01

    The radial Hilbert transform has been extended to the fractional domain, yielding what may be called the fractional radial Hilbert transform (FRHT). Using the edge-enhancement characteristics of this transform, we convert a Gaussian light beam into a variety of dark hollow beams (DHBs). Based on the fact that a hard-edged aperture can be expanded approximately as a finite sum of complex Gaussian functions, the analytical expression for a Gaussian beam passing through an FRHT system has been derived. As a numerical example, the properties of the DHBs with different fractional orders are illustrated graphically. The calculation results obtained by the analytical method and by the integral method are also compared.
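
    In the idealized spiral-phase picture (ignoring the hard-edged aperture treated analytically in the paper), the FRHT of order P multiplies the beam's angular spectrum by exp(iP*theta); a sketch applying it to a Gaussian beam, where the on-axis intensity drops to zero at integer P and only partially at fractional P:

      import numpy as np

      # Fractional radial Hilbert transform of order P as a pure spiral
      # phase filter exp(i*P*theta) in the Fourier plane (aperture ignored).
      N = 512
      x = np.linspace(-4, 4, N)
      X, Y = np.meshgrid(x, x)
      gauss = np.exp(-(X**2 + Y**2))

      def frht(field, P):
          F = np.fft.fftshift(np.fft.fft2(field))
          f = np.fft.fftshift(np.fft.fftfreq(N, d=x[1] - x[0]))
          FX, FY = np.meshgrid(f, f)
          theta = np.arctan2(FY, FX)
          return np.fft.ifft2(np.fft.ifftshift(F * np.exp(1j * P * theta)))

      for P in (0.5, 1.0, 2.0):
          dhb = np.abs(frht(gauss, P)) ** 2
          print(f"P={P}: on-axis {dhb[N//2, N//2]:.2e}, peak {dhb.max():.2e}")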

  2. Convergence of Galerkin approximations for operator Riccati equations: A nonlinear evolution equation approach

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1988-01-01

    An approximation and convergence theory was developed for Galerkin approximations to infinite dimensional operator Riccati differential equations formulated in the space of Hilbert-Schmidt operators on a separable Hilbert space. The Riccati equation was treated as a nonlinear evolution equation with dynamics described by a nonlinear monotone perturbation of a strongly coercive linear operator. A generic approximation result was proven for quasi-autonomous nonlinear evolution systems involving accretive operators, which was then used to demonstrate the Hilbert-Schmidt norm convergence of Galerkin approximations to the solution of the Riccati equation. The application of the results is illustrated in the context of a linear quadratic optimal control problem for a one-dimensional heat equation.

  3. Directionality fields generated by a local Hilbert transform

    NASA Astrophysics Data System (ADS)

    Ahmed, W. W.; Herrero, R.; Botey, M.; Hayran, Z.; Kurt, H.; Staliunas, K.

    2018-03-01

    We propose an approach based on a local Hilbert transform to design non-Hermitian potentials generating arbitrary vector fields of directionality, $\vec{p}(\vec{r})$, with desired shapes and topologies. We derive a local Hilbert transform to systematically build such potentials by modifying background potentials (being either regular or random, extended or localized). We explore particular directionality fields, for instance in the form of a focus to create sinks for probe fields (which could help to increase absorption at the sink), or to generate vortices in the probe fields. Physically, the proposed directionality fields provide a flexible mechanism for dynamical shaping and precise control over probe fields leading to novel effects in wave dynamics.

  4. The Riemann-Hilbert approach to the Helmholtz equation in a quarter-plane: Neumann, Robin and Dirichlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Its, Alexander; Its, Elizabeth

    2018-04-01

    We revisit the Helmholtz equation in a quarter-plane in the framework of the Riemann-Hilbert approach to linear boundary value problems suggested in the late 1990s by A. Fokas. We show the role of the Sommerfeld radiation condition in Fokas' scheme.

  5. Group-theoretical approach to the construction of bases in 2^n-dimensional Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, A.; Romero, J. L.; Klimov, A. B., E-mail: klimov@cencar.udg.mx

    2011-06-15

    We propose a systematic procedure to construct all the possible bases with definite factorization structure in a 2^n-dimensional Hilbert space and discuss an algorithm for the determination of basis separability. The results are applied for classification of bases for an n-qubit system.
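
    As a toy illustration of factorization structure (not the paper's classification algorithm), fully separable bases of a 2^n-dimensional space can be assembled as Kronecker products of single-qubit bases:

      import numpy as np

      # Single-qubit bases (columns are the basis vectors).
      z_basis = np.eye(2)                                  # {|0>, |1>}
      x_basis = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # {|+>, |->}

      def product_basis(factors):
          """Kronecker product of single-qubit bases: a fully separable basis."""
          B = np.array([[1.0]])
          for f in factors:
              B = np.kron(B, f)
          return B

      # A factorized basis of the 2^3 = 8 dimensional space: Z (x) X (x) Z.
      B = product_basis([z_basis, x_basis, z_basis])
      print("orthonormal:", np.allclose(B.T @ B, np.eye(8)))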

  6. Observables and density matrices embedded in dual Hilbert spaces

    NASA Astrophysics Data System (ADS)

    Prosen, T.; Martignon, L.; Seligman, T. H.

    2015-06-01

    The introduction of operator states and of observables in various fields of quantum physics has raised questions about the mathematical structures of the corresponding spaces. In the framework of third quantization it had been conjectured that we deal with Hilbert spaces although the mathematical background was not entirely clear, particularly when dealing with bosonic operators. This in turn caused some doubts about the correct way to combine bosonic and fermionic operators or, in other words, regular and Grassmann variables. In this paper we present a formal answer to the problems on a simple and very general basis. We illustrate the resulting construction by revisiting the Bargmann transform and finding the known connection between $L^2(\mathbb{R})$ and the Bargmann-Hilbert space. We pursue this line of thinking one step further and discuss the representations of complex extensions of linear canonical transformations as isometries between dual Hilbert spaces. We then use the formalism to give an explicit formulation for Fock spaces involving both fermions and bosons thus solving the problem at the origin of our considerations.

  7. Averaging of random walks and shift-invariant measures on a Hilbert space

    NASA Astrophysics Data System (ADS)

    Sakbaev, V. Zh.

    2017-06-01

    We study random walks in a Hilbert space H and representations using them of solutions of the Cauchy problem for differential equations whose initial conditions are numerical functions on H. We construct a finitely additive analogue of the Lebesgue measure: a nonnegative finitely additive measure λ that is defined on a minimal subset ring of an infinite-dimensional Hilbert space H containing all infinite-dimensional rectangles with absolutely converging products of the side lengths and is invariant under shifts and rotations in H. We define the Hilbert space $\mathcal{H}$ of equivalence classes of complex-valued functions on H that are square integrable with respect to the shift-invariant measure λ. Using averaging of the shift operator in $\mathcal{H}$ over random vectors in H with a distribution given by a one-parameter semigroup (with respect to convolution) of Gaussian measures on H, we define a one-parameter semigroup of contracting self-adjoint transformations on $\mathcal{H}$, whose generator is called the diffusion operator. We obtain a representation of solutions of the Cauchy problem for the Schrödinger equation whose Hamiltonian is the diffusion operator.

  8. Basis-neutral Hilbert-space analyzers

    PubMed Central

    Martin, Lane; Mardani, Davood; Kondakci, H. Esat; Larson, Walker D.; Shabahang, Soroush; Jahromi, Ali K.; Malhotra, Tanya; Vamivakas, A. Nick; Atia, George K.; Abouraddy, Ayman F.

    2017-01-01

    Interferometry is one of the central organizing principles of optics. Key to interferometry is the concept of optical delay, which facilitates spectral analysis in terms of time-harmonics. In contrast, when analyzing a beam in a Hilbert space spanned by spatial modes – a critical task for spatial-mode multiplexing and quantum communication – basis-specific principles are invoked that are altogether distinct from that of ‘delay’. Here, we extend the traditional concept of temporal delay to the spatial domain, thereby enabling the analysis of a beam in an arbitrary spatial-mode basis – exemplified using Hermite-Gaussian and radial Laguerre-Gaussian modes. Such generalized delays correspond to optical implementations of fractional transforms; for example, the fractional Hankel transform is the generalized delay associated with the space of Laguerre-Gaussian modes, and an interferometer incorporating such a ‘delay’ obtains modal weights in the associated Hilbert space. By implementing an inherently stable, reconfigurable spatial-light-modulator-based polarization-interferometer, we have constructed a ‘Hilbert-space analyzer’ capable of projecting optical beams onto any modal basis. PMID:28344331

  9. Remarks on the "Non-canonicity Puzzle": Lagrangian Symmetries of the Einstein-Hilbert Action

    NASA Astrophysics Data System (ADS)

    Kiriushcheva, N.; Komorowski, P. G.; Kuzmin, S. V.

    2012-07-01

    Given the non-canonical relationship between variables used in the Hamiltonian formulations of the Einstein-Hilbert action (due to Pirani, Schild, Skinner (PSS) and Dirac) and the Arnowitt-Deser-Misner (ADM) action, and the consequent difference in the gauge transformations generated by the first-class constraints of these two formulations, the assumption that the Lagrangians from which they were derived are equivalent leads to an apparent contradiction that has been called "the non-canonicity puzzle". In this work we shall investigate the group properties of two symmetries derived for the Einstein-Hilbert action: diffeomorphism, which follows from the PSS and Dirac formulations, and the one that arises from the ADM formulation. We demonstrate that unlike the diffeomorphism transformations, the ADM transformations (as well as others, which can be constructed for the Einstein-Hilbert Lagrangian using Noether's identities) do not form a group. This makes diffeomorphism transformations unique (the term "canonical" symmetry might be suggested). If the two Lagrangians are to be called equivalent, canonical symmetry must be preserved. The interplay between general covariance and the canonicity of the variables used is discussed.

  10. On the BV formalism of open superstring field theory in the large Hilbert space

    NASA Astrophysics Data System (ADS)

    Matsunaga, Hiroaki; Nomura, Mitsuru

    2018-05-01

    We construct several BV master actions for open superstring field theory in the large Hilbert space. First, we show that a naive use of the conventional BV approach breaks down at the third order of the antifield number expansion, although it enables us to define a simple "string antibracket" taking the Darboux form as spacetime antibrackets. This fact implies that in the large Hilbert space, "string fields-antifields" should be reassembled to obtain master actions in a simple manner. We determine the assembly of the string anti-fields on the basis of Berkovits' constrained BV approach, and give solutions to the master equation defined by Dirac antibrackets on the constrained string field-antifield space. It is expected that partial gauge-fixing enables us to relate superstring field theories based on the large and small Hilbert spaces directly: reassembling string fields-antifields is rather natural from this point of view. Finally, inspired by these results, we revisit the conventional BV approach and construct a BV master action based on the minimal set of string fields-antifields.

  11. Interference in the classical probabilistic model and its representation in complex Hilbert space

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei Yu.

    2005-10-01

    The notion of a context (complex of physical conditions, that is to say: specification of the measurement setup) is basic in this paper. We show that the main structures of quantum theory (interference of probabilities, Born's rule, complex probabilistic amplitudes, Hilbert state space, representation of observables by operators) are present already in a latent form in the classical Kolmogorov probability model. However, this model should be considered as a calculus of contextual probabilities. In our approach it is forbidden to consider abstract context independent probabilities: “first context and only then probability”. We construct the representation of the general contextual probabilistic dynamics in the complex Hilbert space. Thus dynamics of the wave function (in particular, Schrödinger's dynamics) can be considered as Hilbert space projections of a realistic dynamics in a “prespace”. The basic condition for representing the prespace dynamics is the law of statistical conservation of energy: conservation of probabilities. In general the Hilbert space projection of the “prespace” dynamics can be nonlinear and even irreversible (but it is always unitary). Methods developed in this paper can be applied not only to quantum mechanics, but also to classical statistical mechanics. The main quantum-like structures (e.g., interference of probabilities) might be found in some models of classical statistical mechanics. Quantum-like probabilistic behavior can be demonstrated by biological systems. In particular, it was recently found in some psychological experiments.

  12. Projective loop quantum gravity. I. State space

    NASA Astrophysics Data System (ADS)

    Lanéry, Suzanne; Thiemann, Thomas

    2016-12-01

    Instead of formulating the state space of a quantum field theory over one big Hilbert space, it has been proposed by Kijowski to describe quantum states as projective families of density matrices over a collection of smaller, simpler Hilbert spaces. Besides the physical motivations for this approach, it could help designing a quantum state space holding the states we need. In a later work by Okolów, the description of a theory of Abelian connections within this framework was developed, an important insight being to use building blocks labeled by combinations of edges and surfaces. The present work generalizes this construction to an arbitrary gauge group G (in particular, G is neither assumed to be Abelian nor compact). This involves refining the definition of the label set, as well as deriving explicit formulas to relate the Hilbert spaces attached to different labels. If the gauge group happens to be compact, we also have at our disposal the well-established Ashtekar-Lewandowski Hilbert space, which is defined as an inductive limit using building blocks labeled by edges only. We then show that the quantum state space presented here can be thought of as a natural extension of the space of density matrices over this Hilbert space. In addition, it is manifest from the classical counterparts of both formalisms that the projective approach allows for a more balanced treatment of the holonomy and flux variables, so it might pave the way for the development of more satisfactory coherent states.

  13. Functional brain abnormalities in major depressive disorder using the Hilbert-Huang transform.

    PubMed

    Yu, Haibin; Li, Feng; Wu, Tong; Li, Rui; Yao, Li; Wang, Chuanyue; Wu, Xia

    2018-02-09

    Major depressive disorder is a common disease worldwide, which is characterized by significant and persistent depression. Non-invasive accessory diagnosis of depression can be performed by resting-state functional magnetic resonance imaging (rs-fMRI). However, the fMRI signal may not satisfy linearity and stationarity. The Hilbert-Huang transform (HHT) is an adaptive time-frequency localization analysis method suitable for nonlinear and non-stationary signals. The objective of this study was to apply the HHT to rs-fMRI to find the abnormal brain areas of patients with depression. A total of 35 patients with depression and 37 healthy controls were subjected to rs-fMRI. The HHT was performed to extract the Hilbert-weighted mean frequency of the rs-fMRI signals, and multivariate receiver operating characteristic analysis was applied to find the abnormal brain regions with high sensitivity and specificity. We observed differences in Hilbert-weighted mean frequency between the patients and healthy controls mainly in the right hippocampus, right parahippocampal gyrus, left amygdala, and left and right caudate nucleus. These regions were also included among the results obtained from the comparison methods, regional homogeneity and the fractional amplitude of low-frequency fluctuation. We found brain regions with differences in the Hilbert-weighted mean frequency, and examined their sensitivity and specificity, which suggests a potential neuroimaging biomarker to distinguish between patients with depression and healthy controls. We further clarified the pathophysiological abnormality of these regions for the population with major depressive disorder.
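
    One plausible reading of the Hilbert-weighted mean frequency (the paper's exact definition may differ) is an amplitude-squared-weighted average of the instantaneous frequency; a sketch on an invented slow oscillation:

      import numpy as np
      from scipy.signal import hilbert

      def hilbert_weighted_mean_frequency(sig, fs):
          """Amplitude-squared-weighted mean of the instantaneous frequency
          (one plausible definition; the paper's may differ)."""
          analytic = hilbert(sig - sig.mean())
          amp = np.abs(analytic)
          inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
          w = amp[:-1] ** 2
          return np.sum(w * inst_freq) / np.sum(w)

      # Invented "BOLD-like" series: a slow oscillation plus noise.
      fs = 0.5                                   # Hz, i.e. a TR of 2 s
      t = np.arange(0, 600, 1 / fs)
      rng = np.random.default_rng(1)
      sig = np.sin(2 * np.pi * 0.03 * t) + 0.3 * rng.standard_normal(t.size)
      print("HWMF ~ %.3f Hz (true tone at 0.030 Hz)"
            % hilbert_weighted_mean_frequency(sig, fs))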

  14. Identification and reproducibility of dietary patterns in a Danish cohort: the Inter99 study.

    PubMed

    Lau, Cathrine; Glümer, Charlotte; Toft, Ulla; Tetens, Inge; Carstensen, Bendix; Jørgensen, Torben; Borch-Johnsen, Knut

    2008-05-01

    We aimed to identify dietary patterns in a Danish adult population and assess the reproducibility of the dietary patterns identified. Baseline data of 3,372 women and 3,191 men (30-60 years old) from the population-based survey Inter99 were used. Food intake, assessed by a FFQ, was aggregated into thirty-four separate food groups. Dietary patterns were identified by principal component analysis. Confirmatory factor analysis and Bland-Altman plots were used to assess the reproducibility of the dietary patterns identified; the Bland-Altman plots were used as an alternative and new method. Two factors were retained for both women and men, which accounted for 15.1-17.4 % of the total variation. The 'Traditional' pattern was characterised by high loadings (≥ 0.40) on paté or high-fat meat for sandwiches, mayonnaise salads, red meat, potatoes, butter and lard, low-fat fish, low-fat meat for sandwiches, and sauces. The 'Modern' pattern was characterised by high loadings on vegetables, fruit, mixed vegetable dishes, vegetable oil and vinegar dressing, poultry, and pasta, rice and wheat kernels. Small differences were observed between patterns identified for women and men. The root mean square error of approximation from the confirmatory factor analysis was 0.08. The variation observed from the Bland-Altman plots of factors from explorative v. confirmative analyses and explorative analyses from two sub-samples was between 18.8 and 47.7 %. Pearson's correlation was >0.89 (P < 0.0001). The reproducibility was better for women than for men. We conclude that the 'Traditional' and 'Modern' dietary patterns identified were reproducible.
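
    The extraction step can be sketched as follows: PCA on standardized intakes, with loadings taken as component-variable correlations and flagged at the |loading| >= 0.40 cutoff used above; the intake matrix below is simulated with two planted patterns:

      import numpy as np
      from sklearn.decomposition import PCA

      # Simulated intake matrix: 500 participants x 10 food groups, with two
      # planted patterns (numbers invented; the study used 34 groups).
      rng = np.random.default_rng(42)
      n, p = 500, 10
      s1 = rng.gamma(2.0, 1.0, n)      # latent "Traditional" score
      s2 = rng.gamma(2.0, 1.0, n)      # latent "Modern" score
      X = (np.outer(s1, np.r_[np.ones(5), np.zeros(5)])
           + np.outer(s2, np.r_[np.zeros(5), np.ones(5)])
           + 0.5 * rng.standard_normal((n, p)))

      # PCA on standardized intakes; loadings = correlations between the
      # components and the food groups.
      Xs = (X - X.mean(0)) / X.std(0)
      pca = PCA(n_components=2).fit(Xs)
      loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
      for k in range(2):
          high = np.where(np.abs(loadings[:, k]) >= 0.40)[0]
          print(f"pattern {k+1}: {pca.explained_variance_ratio_[k]:.1%} of "
                f"variance, high-loading groups: {high.tolist()}")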

  15. Optical properties of alkali halide crystals from all-electron hybrid TD-DFT calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webster, R., E-mail: ross.webster07@imperial.ac.uk; Harrison, N. M.; Bernasconi, L.

    2015-06-07

    We present a study of the electronic and optical properties of a series of alkali halide crystals AX, with A = Li, Na, K, Rb and X = F, Cl, Br based on a recent implementation of hybrid-exchange time-dependent density functional theory (TD-DFT) (TD-B3LYP) in the all-electron Gaussian basis set code CRYSTAL. We examine, in particular, the impact of basis set size and quality on the prediction of the optical gap and exciton binding energy. The formation of bound excitons by photoexcitation is observed in all the studied systems and this is shown to be correlated to specific features of the Hartree-Fock exchange component of the TD-DFT response kernel. All computed optical gaps and exciton binding energies are however markedly below estimated experimental and, where available, 2-particle Green's function (GW-Bethe-Salpeter equation, GW-BSE) values. We attribute this reduced exciton binding to the incorrect asymptotics of the B3LYP exchange correlation ground state functional and of the TD-B3LYP response kernel, which lead to a large underestimation of the Coulomb interaction between the excited electron and hole wavefunctions. Considering LiF as an example, we correlate the asymptotic behaviour of the TD-B3LYP kernel to the fraction of Fock exchange admixed in the ground state functional $c_{HF}$ and show that there exists one value of $c_{HF}$ (∼0.32) that reproduces at least semi-quantitatively the optical gap of this material.

  16. Unveiling signatures of interdecadal climate changes by Hilbert analysis

    NASA Astrophysics Data System (ADS)

    Zappalà, Dario; Barreiro, Marcelo; Masoller, Cristina

    2017-04-01

    A recent study demonstrated that, in a class of networks of oscillators, the optimal network reconstruction from dynamics is obtained when the similarity analysis is performed not on the original dynamical time series, but on transformed series obtained by Hilbert transform [1]. That motivated us to use the Hilbert transform to study another kind of (in a broad sense) "oscillating" series: temperature series. Indeed, we found that Hilbert analysis of SAT (Surface Air Temperature) time series uncovers meaningful information about climate and is therefore a promising tool for the study of other climatological variables [2]. In this work we analysed a large dataset of SAT series, performing the Hilbert transform and further analysis with the goal of finding signs of climate change during the analysed period. We used the publicly available ERA-Interim reanalysis dataset [3]. In particular, we worked on daily SAT time series, from 1979 to 2015, at 16380 points arranged over a regular grid on the Earth's surface. From each SAT time series we calculate the anomaly series and also, by using the Hilbert transform, the instantaneous amplitude and instantaneous frequency series. Our first approach is to calculate the relative variation: the difference between the average value over the last 10 years and the average value over the first 10 years, divided by the average value over the whole analysed period. We did these calculations on our transformed series (frequency and amplitude, both mean values and standard deviations) and, for comparison with an established analysis method, on the anomaly series as well. We plotted the results as maps, where the colour of each site indicates the value of its relative variation. Finally, to gain insight into the interpretation of our results on real SAT data, we generated synthetic sinusoidal series with various levels of additive noise. By applying Hilbert analysis to the synthetic data, we uncovered a clear trend between mean amplitude and mean frequency: as the noise level grows, the amplitude increases while the frequency decreases. Research funded in part by AGAUR (Generalitat de Catalunya), EU LINC project (Grant No. 289447) and Spanish MINECO (FIS2015-66503-C3-2-P).
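
    A sketch of the relative-variation diagnostic on synthetic data (the drifting seasonal amplitude below is an invented stand-in for a real SAT series):

      import numpy as np
      from scipy.signal import hilbert

      def relative_variation(series, n):
          """(mean over last n samples - mean over first n) / overall mean."""
          return (series[-n:].mean() - series[:n].mean()) / series.mean()

      # Synthetic daily series, 37 years: a seasonal cycle whose amplitude
      # drifts slowly upward (invented stand-in for a SAT record).
      dpy, years = 365, 37
      t = np.arange(dpy * years)
      amp = 10 * (1 + 0.002 * t / dpy)
      rng = np.random.default_rng(3)
      sat = amp * np.sin(2 * np.pi * t / dpy) + rng.standard_normal(t.size)

      analytic = hilbert(sat - sat.mean())
      inst_amp = np.abs(analytic)
      inst_freq = np.diff(np.unwrap(np.angle(analytic))) / (2 * np.pi)

      n10 = 10 * dpy                            # ten-year windows
      print("amplitude relative variation:", relative_variation(inst_amp, n10))
      print("frequency relative variation:", relative_variation(inst_freq, n10))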

  17. An algorithm for the split-feasibility problems with application to the split-equality problem.

    PubMed

    Chuang, Chih-Sheng; Chen, Chi-Ming

    2017-01-01

    In this paper, we study the split-feasibility problem in Hilbert spaces by using the projected reflected gradient algorithm. As applications, we study the convex linear inverse problem and the split-equality problem in Hilbert spaces, and we give new algorithms for these problems. Finally, numerical results are given for our main results.
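
    A minimal sketch of the projected reflected gradient iteration for the split-feasibility problem (find x in C with Ax in Q), with C the nonnegative orthant and Q a box built so that a solution exists; the data and step size are illustrative:

      import numpy as np

      rng = np.random.default_rng(7)
      A = rng.standard_normal((6, 4))

      # Build a consistent instance: Q is a small box around A @ x_star.
      x_star = np.abs(rng.standard_normal(4))
      b = A @ x_star
      P_C = lambda x: np.maximum(x, 0.0)             # projection onto C
      P_Q = lambda y: np.clip(y, b - 0.1, b + 0.1)   # projection onto Q

      def F(x):
          """Gradient of 0.5*||Ax - P_Q(Ax)||^2, the operator of the method."""
          Ax = A @ x
          return A.T @ (Ax - P_Q(Ax))

      # Projected reflected gradient step (after Malitsky):
      #   x_{k+1} = P_C(x_k - lam * F(2*x_k - x_{k-1}))
      lam = 0.4 / np.linalg.norm(A, 2) ** 2          # conservative step size
      x_prev = x = np.abs(rng.standard_normal(4))
      for _ in range(1000):
          y = 2 * x - x_prev                         # reflected point
          x_prev, x = x, P_C(x - lam * F(y))

      print("residual ||Ax - P_Q(Ax)|| =", np.linalg.norm(A @ x - P_Q(A @ x)))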

  18. MURI: Optimal Quantum Dynamic Discrimination of Chemical and Biological Agents

    DTIC Science & Technology

    2008-06-12

    Adaptive reshaping of objects in (multiparameter) Hilbert space for enhanced detection and classification: an application of receiver operating curve statistics to laser-based mass... Doctoral Associate; Muhannad Zamari, Graduate Student; Ilya Greenberg, Computer Consultant; Getahun Menkir, Graduate Student; Lalinda Palliyaguru, Graduate

  19. Hidden simplicity of the gravity action

    DOE PAGES

    Cheung, Clifford; Remmen, Grant N.

    2017-09-01

    We derive new representations of the Einstein-Hilbert action in which graviton perturbation theory is immensely simplified. To accomplish this, we recast the Einstein-Hilbert action as a theory of purely cubic interactions among gravitons and a single auxiliary field. The corresponding equations of motion are the Einstein field equations rewritten as two coupled first-order differential equations. Since all Feynman diagrams are cubic, we are able to derive new off-shell recursion relations for tree-level graviton scattering amplitudes. With a judicious choice of gauge fixing, we then construct an especially compact form for the Einstein-Hilbert action in which all graviton interactions are simply proportional to the graviton kinetic term. Our results apply to graviton perturbations about an arbitrary curved background spacetime.

  1. Hilbert transform evaluation for electron-phonon self-energies

    NASA Astrophysics Data System (ADS)

    Bevilacqua, Giuseppe; Menichetti, Guido; Pastori Parravicini, Giuseppe

    2016-01-01

    The electron tunneling current through nanostructures is considered in the presence of the electron-phonon interactions. In the Keldysh nonequilibrium formalism, the lesser, greater, advanced and retarded self-energy components are expressed by means of appropriate Langreth rules. We discuss the key role played by the entailed Hilbert transforms, and provide an analytic way for their evaluation. Particular attention is given to the current-conserving lowest-order expansion for the treatment of the electron-phonon interaction; by means of an appropriate elaboration of the analytic properties and pole structure of the Green's functions and of the Fermi functions, we arrive at a surprisingly simple, elegant, fully analytic and easy-to-use expression of the Hilbert transforms and involved integrals in the energy domain.
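
    The Hilbert transforms in question are the same ones that appear in Kramers-Kronig-type relations; as a numerical sanity check, the sketch below recovers the classic Lorentzian pair H[g/(E^2+g^2)] = E/(E^2+g^2) from the analytic signal, up to finite-window edge effects:

      import numpy as np
      from scipy.signal import hilbert

      # Hilbert transform of a Lorentzian versus the known analytic result.
      g = 0.5
      E = np.linspace(-50, 50, 20001)
      lorentz = g / (E**2 + g**2)

      # The imaginary part of the analytic signal of f is H[f].
      ht_numeric = np.imag(hilbert(lorentz))
      ht_exact = E / (E**2 + g**2)

      mid = slice(8000, 12001)    # compare away from the window edges
      print("max error:", np.max(np.abs(ht_numeric[mid] - ht_exact[mid])))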

  2. Solution of a cauchy problem for a diffusion equation in a Hilbert space by a Feynman formula

    NASA Astrophysics Data System (ADS)

    Remizov, I. D.

    2012-07-01

    The Cauchy problem for a class of diffusion equations in a Hilbert space is studied. It is proved that the Cauchy problem is well posed in the class of uniform limits of infinitely smooth bounded cylindrical functions on the Hilbert space, and the solution is presented in the form of the so-called Feynman formula, i.e., a limit of multiple integrals against a Gaussian measure as the multiplicity tends to infinity. It is also proved that the solution of the Cauchy problem depends continuously on the diffusion coefficient. A process reducing an approximate solution of an infinite-dimensional diffusion equation to finding a multiple integral of a real function of finitely many real variables is indicated.

  3. Macroscopic and microscopic components of exchange-correlation interactions

    NASA Astrophysics Data System (ADS)

    Sottile, F.; Karlsson, K.; Reining, L.; Aryasetiawan, F.

    2003-11-01

    We consider two commonly used approaches for the ab initio calculation of optical-absorption spectra, namely, many-body perturbation theory based on Green's functions and time-dependent density-functional theory (TDDFT). The former leads to the two-particle Bethe-Salpeter equation that contains a screened electron-hole interaction. We approximate this interaction in various ways, and discuss in particular the results obtained for a local contact potential. This, in fact, allows us to straightforwardly make the link to the TDDFT approach, and to discuss the exchange-correlation kernel $f_{xc}$ that corresponds to the contact exciton. Our main results, illustrated in the examples of bulk silicon, GaAs, argon, and LiF, are the following. (i) The simple contact exciton model, used on top of an ab initio calculated band structure, yields reasonable absorption spectra. (ii) Qualitatively extremely different $f_{xc}$ can be derived approximately from the same Bethe-Salpeter equation. These kernels can however yield very similar spectra. (iii) A static $f_{xc}$, both with or without a long-range component, can create transitions in the quasiparticle gap. To the best of our knowledge, this is the first time that TDDFT has been shown to be able to reproduce bound excitons.

  4. HS-SPME-GC-MS/MS Method for the Rapid and Sensitive Quantitation of 2-Acetyl-1-pyrroline in Single Rice Kernels.

    PubMed

    Hopfer, Helene; Jodari, Farman; Negre-Zakharov, Florence; Wylie, Phillip L; Ebeler, Susan E

    2016-05-25

    Demand for aromatic rice varieties (e.g., Basmati) is increasing in the US. Aromatic varieties typically have elevated levels of the aroma compound 2-acetyl-1-pyrroline (2AP). Due to its very low aroma threshold, analysis of 2AP provides a useful screening tool for rice breeders. Methods for 2AP analysis in rice should quantitate 2AP at or below sensory threshold level, avoid artifactual 2AP generation, and be able to analyze single rice kernels in cases where only small sample quantities are available (e.g., breeding trials). We combined headspace solid phase microextraction with gas chromatography tandem mass spectrometry (HS-SPME-GC-MS/MS) for analysis of 2AP, using an extraction temperature of 40 °C and a stable isotopologue as internal standard. 2AP calibrations were linear between the concentrations of 53 and 5380 pg/g, with detection limits below the sensory threshold of 2AP. Forty-eight aromatic and nonaromatic, milled rice samples from three harvest years were screened with the method for their 2AP content, and overall reproducibility, observed for all samples, ranged from 5% for experimental aromatic lines to 33% for nonaromatic lines.

  5. Kernel methods and flexible inference for complex stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Capobianco, Enrico

    2008-07-01

    Approximation theory suggests that series expansions and projections represent standard tools for random process applications from both numerical and statistical standpoints. Such instruments emphasize the role of both sparsity and smoothness for compression purposes, the decorrelation power achieved in the expansion coefficients space compared to the signal space, and the reproducing kernel property when some special conditions are met. We consider these three aspects central to the discussion in this paper, and attempt to analyze the characteristics of some known approximation instruments employed in a complex application domain such as financial market time series. Volatility models are often built ad hoc, parametrically and through very sophisticated methodologies. But they can hardly deal with stochastic processes with regard to non-Gaussianity, covariance non-stationarity or complex dependence without paying a big price in terms of either model mis-specification or computational efficiency. It is thus a good idea to look at other more flexible inference tools; hence the strategy of combining greedy approximation and space dimensionality reduction techniques, which are less dependent on distributional assumptions and more targeted to achieve computationally efficient performances. Advantages and limitations of their use will be evaluated by looking at algorithmic and model building strategies, and by reporting statistical diagnostics.

  6. An assessment of envelope-based demodulation in case of proximity of carrier and modulation frequencies

    NASA Astrophysics Data System (ADS)

    Shahriar, Md Rifat; Borghesani, Pietro; Randall, R. B.; Tan, Andy C. C.

    2017-11-01

    Demodulation is a necessary step in the field of diagnostics to reveal faults whose signatures appear as an amplitude and/or frequency modulation. The Hilbert transform has conventionally been used for the calculation of the analytic signal required in the demodulation process. However, the carrier and modulation frequencies must meet the conditions set by the Bedrosian identity for the Hilbert transform to be applicable for demodulation. This condition, basically requiring the carrier frequency to be sufficiently higher than the frequency of the modulation harmonics, is usually satisfied in many traditional diagnostic applications (e.g. vibration analysis of gear and bearing faults) due to the order-of-magnitude ratio between the carrier and modulation frequency. However, the diversification of diagnostic approaches and applications shows cases (e.g. electrical signature analysis-based diagnostics) where the carrier frequency is in close proximity to the modulation frequency, thus challenging the applicability of the Bedrosian theorem. This work presents an analytic study to quantify the error introduced by the Hilbert transform-based demodulation when the Bedrosian identity is not satisfied and proposes a mitigation strategy to combat the error. An experimental study is also carried out to verify the analytical results. The outcome of the error analysis sets a confidence limit on the estimated modulation (both shape and magnitude) achieved through the Hilbert transform-based demodulation in case of a violated Bedrosian theorem. However, the proposed mitigation strategy is found effective in combating the demodulation error arising in this scenario, thus extending the applicability of the Hilbert transform-based demodulation.
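
    The effect is easy to reproduce numerically; in the sketch below (all parameters invented) the modulation carries harmonics at fm, 3fm and 5fm, so the envelope from the analytic signal is nearly exact when the carrier sits far above them and degrades when it does not:

      import numpy as np
      from scipy.signal import hilbert

      # AM signal x(t) = a(t) * cos(2*pi*fc*t); Bedrosian's identity needs
      # the modulation spectrum to lie entirely below the carrier frequency.
      fs, m = 1000, 0.5
      t = np.arange(0, 10, 1 / fs)

      def envelope_error(fc, fm):
          a = 1 + m * (np.cos(2 * np.pi * fm * t)
                       + 0.5 * np.cos(2 * np.pi * 3 * fm * t)
                       + 0.25 * np.cos(2 * np.pi * 5 * fm * t))
          env = np.abs(hilbert(a * np.cos(2 * np.pi * fc * t)))
          inner = slice(500, -500)            # discard end transients
          return np.max(np.abs(env[inner] - a[inner]))

      print("fc=100 Hz, fm=2 Hz ->", envelope_error(100, 2))  # condition met
      print("fc=8 Hz,   fm=2 Hz ->", envelope_error(8, 2))    # harmonics overlap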

  7. Dynamic characterization of a damaged beam using empirical mode decomposition and Hilbert spectrum method

    NASA Astrophysics Data System (ADS)

    Chang, Chih-Chen; Poon, Chun-Wing

    2004-07-01

    Recently, the empirical mode decomposition (EMD) in combination with the Hilbert spectrum method has been proposed to identify the dynamic characteristics of linear structures. In this study, this EMD and Hilbert spectrum method is used to analyze the dynamic characteristics of a damaged reinforced concrete (RC) beam in the laboratory. The RC beam is 4 m long with a cross section of 200 mm × 250 mm. The beam is sequentially subjected to a concentrated load of different magnitudes at the mid-span to produce different degrees of damage. An impact load is applied around the mid-span to excite the beam. Responses of the beam are recorded by four accelerometers. Results indicate that the EMD and Hilbert spectrum method can reveal the variation of the dynamic characteristics in the time domain. These results are also compared with those obtained using the Fourier analysis. In general, it is found that the two sets of results correlate quite well in terms of mode counts and frequency values. Some differences, however, can be seen in the damping values, which perhaps can be attributed to the linear assumption of the Fourier transform.

  8. Rational Solutions of the Painlevé-II Equation Revisited

    NASA Astrophysics Data System (ADS)

    Miller, Peter D.; Sheng, Yue

    2017-08-01

    The rational solutions of the Painlevé-II equation appear in several applications and are known to have many remarkable algebraic and analytic properties. They also have several different representations, useful in different ways for establishing these properties. In particular, Riemann-Hilbert representations have proven to be useful for extracting the asymptotic behavior of the rational solutions in the limit of large degree (equivalently the large-parameter limit). We review the elementary properties of the rational Painlevé-II functions, and then we describe three different Riemann-Hilbert representations of them that have appeared in the literature: a representation by means of the isomonodromy theory of the Flaschka-Newell Lax pair, a second representation by means of the isomonodromy theory of the Jimbo-Miwa Lax pair, and a third representation found by Bertola and Bothner related to pseudo-orthogonal polynomials. We prove that the Flaschka-Newell and Bertola-Bothner Riemann-Hilbert representations of the rational Painlevé-II functions are explicitly connected to each other. Finally, we review recent results describing the asymptotic behavior of the rational Painlevé-II functions obtained from these Riemann-Hilbert representations by means of the steepest descent method.
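
    Independently of the Riemann-Hilbert representations reviewed here, the first few rational solutions are easy to generate and verify symbolically with the standard Bäcklund recursion u_{a+1} = -u_a - (2a+1)/(2u_a^2 + 2u_a' + x), starting from the seed u = 0 at a = 0; a minimal SymPy sketch:

      import sympy as sp

      x = sp.symbols('x')

      def backlund(u, a):
          """u_{a+1} from u_a for PII: u'' = 2*u**3 + x*u + alpha."""
          return sp.cancel(-u - (2*a + 1) / (2*u**2 + 2*sp.diff(u, x) + x))

      u = sp.Integer(0)                 # seed solution at alpha = 0
      for a in range(3):
          u = backlund(u, a)
          # Check that u solves Painleve-II at alpha = a + 1.
          residual = sp.cancel(sp.diff(u, x, 2) - 2*u**3 - x*u - (a + 1))
          print(f"alpha = {a + 1}: u = {u}, residual = {residual}")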

  9. Faults Diagnostics of Railway Axle Bearings Based on IMF’s Confidence Index Algorithm for Ensemble EMD

    PubMed Central

    Yi, Cai; Lin, Jianhui; Zhang, Weihua; Ding, Jianming

    2015-01-01

    As train loads and travel speeds have increased over time, railway axle bearings have become critical elements which require more efficient non-destructive inspection and fault diagnostics methods. This paper presents a novel and adaptive procedure based on ensemble empirical mode decomposition (EEMD) and the Hilbert marginal spectrum for multi-fault diagnostics of axle bearings. EEMD overcomes the limitations that often hypothesize about data and computational efforts that restrict the application of signal processing techniques. The outputs of this adaptive approach are the intrinsic mode functions that are treated with the Hilbert transform in order to obtain the Hilbert instantaneous frequency spectrum and marginal spectrum. However, not all of the IMFs obtained by the decomposition should be included in the Hilbert marginal spectrum. The IMF confidence index algorithm proposed in this paper is fully autonomous, overcoming the major limitation of selection by an experienced user, and allows the development of on-line tools. The effectiveness of the improvement is proven by the successful diagnosis of an axle bearing with a single fault or multiple composite faults, e.g., outer ring fault, cage fault and pin roller fault. PMID:25970256

  10. Hilbert-Huang transform analysis of dynamic and earthquake motion recordings

    USGS Publications Warehouse

    Zhang, R.R.; Ma, S.; Safak, E.; Hartzell, S.

    2003-01-01

    This study examines the rationale of the Hilbert-Huang transform (HHT) for analyzing dynamic and earthquake motion recordings in studies of seismology and engineering. In particular, this paper first provides the fundamentals of the HHT method, which consist of the empirical mode decomposition (EMD) and the Hilbert spectral analysis. It then uses the HHT to analyze recordings of hypothetical and real wave motion, the results of which are compared with the results obtained by the Fourier data processing technique. The analysis of the two recordings indicates that the HHT method is able to extract some motion characteristics useful in studies of seismology and engineering, which might not be exposed effectively and efficiently by the Fourier data processing technique. Specifically, the study indicates that the decomposed components in the EMD of HHT, namely, the intrinsic mode function (IMF) components, contain observable, physical information inherent to the original data. It also shows that the grouped IMF components, namely, the EMD-based low- and high-frequency components, can faithfully capture low-frequency pulse-like as well as high-frequency wave signals. Finally, the study illustrates that the HHT-based Hilbert spectra are able to reveal the temporal-frequency energy distribution for motion recordings precisely and clearly.

  11. Employing the Hilbert-Huang Transform to analyze observed natural complex signals: Calm wind meandering cases

    NASA Astrophysics Data System (ADS)

    Martins, Luis Gustavo Nogueira; Stefanello, Michel Baptistella; Degrazia, Gervásio Annes; Acevedo, Otávio Costa; Puhales, Franciano Scremin; Demarco, Giuliano; Mortarini, Luca; Anfossi, Domenico; Roberti, Débora Regina; Costa, Felipe Denardin; Maldaner, Silvana

    2016-11-01

    In this study we analyze natural complex signals employing the Hilbert-Huang spectral analysis. Specifically, low wind meandering meteorological data are decomposed into turbulent and non-turbulent components. These non-turbulent movements, responsible for the absence of a preferential direction of the horizontal wind, provoke negative lobes in the meandering autocorrelation functions. The meandering characteristic time scales (meandering periods) are determined from the spectral peak provided by the Hilbert-Huang marginal spectrum. The magnitudes of the temperature and horizontal wind meandering periods obtained agree with the results found from the best fit of the heuristic meandering autocorrelation functions. Therefore, the method represents a new procedure to evaluate meandering periods that does not employ mathematical expressions to represent observed meandering autocorrelation functions.

  12. On Replacing "Quantum Thinking" with Counterfactual Reasoning

    NASA Astrophysics Data System (ADS)

    Narens, Louis

    The probability theory used in quantum mechanics is currently being employed by psychologists to model the impact of context on decision. Its event space consists of closed subspaces of a Hilbert space, and its probability function sometimes violates the law of finite additivity of probabilities. Results from the quantum mechanics literature indicate that such a "Hilbert space probability theory" cannot be extended in a useful way to standard, finitely additive, probability theory by the addition of new events with specific probabilities. This chapter presents a new kind of probability theory that shares many fundamental algebraic characteristics with Hilbert space probability theory but does extend to standard probability theory by adjoining new events with specific probabilities. The new probability theory arises from considerations about how psychological experiments are related through counterfactual reasoning.

  13. Frequency hopping signal detection based on wavelet decomposition and Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Zheng, Yang; Chen, Xihao; Zhu, Rui

    2017-07-01

    Frequency hopping (FH) signals are widely adopted by military communications as a kind of low probability of interception signal. It is therefore important to research FH signal detection algorithms. Existing detection algorithms for FH signals based on time-frequency analysis cannot satisfy the time and frequency resolution requirements at the same time due to the influence of the window function. In order to solve this problem, an algorithm based on wavelet decomposition and the Hilbert-Huang transform (HHT) was proposed. The proposed algorithm removes the noise of the received signals by wavelet decomposition and detects the FH signals by the Hilbert-Huang transform. Simulation results show that the proposed algorithm takes into account both the time resolution and the frequency resolution. Correspondingly, the accuracy of FH signal detection can be improved.
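
    As a concrete illustration of the two-stage idea (wavelet-decomposition denoising followed by Hilbert analysis of the cleaned signal), here is a minimal sketch. PyWavelets and SciPy are assumed; the db4 wavelet, the universal threshold and the toy two-hop signal are illustrative choices rather than the authors' settings, and the Hilbert transform is applied directly to the denoised signal (the EMD step of the full HHT is omitted).

        # Wavelet denoising, then Hilbert-based instantaneous frequency.
        import numpy as np
        import pywt
        from scipy.signal import hilbert

        def wavelet_denoise(x, wavelet="db4", level=4):
            coeffs = pywt.wavedec(x, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise level from finest scale
            thr = sigma * np.sqrt(2 * np.log(len(x)))        # universal threshold
            coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(x)]

        fs = 1000.0
        t = np.arange(0, 1, 1 / fs)
        hop_freq = np.where(t < 0.5, 50.0, 120.0)            # toy two-hop FH signal
        x = np.sin(2 * np.pi * hop_freq * t) + 0.5 * np.random.randn(t.size)

        clean = wavelet_denoise(x)
        phase = np.unwrap(np.angle(hilbert(clean)))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)        # hops appear as frequency jumps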

  14. Kernel abortion in maize: I. Carbohydrate concentration patterns and acid invertase activity of maize kernels induced to abort in vitro.

    PubMed

    Hanft, J M; Jones, R J

    1986-06-01

    Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.

  15. Quantum finance

    NASA Astrophysics Data System (ADS)

    Schaden, Martin

    2002-12-01

    Quantum theory is used to model secondary financial markets. Contrary to stochastic descriptions, the formalism emphasizes the importance of trading in determining the value of a security. All possible realizations of investors holding securities and cash are taken as the basis of the Hilbert space of market states. The temporal evolution of an isolated market is unitary in this space. Linear operators representing basic financial transactions such as cash transfer and the buying or selling of securities are constructed and simple model Hamiltonians that generate the temporal evolution due to cash flows and the trading of securities are proposed. The Hamiltonian describing financial transactions becomes local when the profit/loss from trading is small compared to the turnover. This approximation may describe a highly liquid and efficient stock market. The lognormal probability distribution for the price of a stock with a variance that is proportional to the elapsed time is reproduced for an equilibrium market. The asymptotic volatility of a stock in this case is related to the long-term probability that it is traded.

  16. Machine learning of accurate energy-conserving molecular force fields.

    PubMed

    Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel E; Poltavsky, Igor; Schütt, Kristof T; Müller, Klaus-Robert

    2017-05-01

    Using conservation of energy (a fundamental property of closed classical and quantum mechanical systems), we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal mol−1 for energies and 1 kcal mol−1 Å−1 for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods.
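
    A one-dimensional toy version of the gradient-domain idea may clarify it: forces are fitted with a Gaussian-kernel model whose predictions are exact derivatives of a single scalar (energy) model, so energy conservation holds by construction. The double-well potential, length scale and ridge term below are invented for illustration; the actual GDML model handles vector-valued forces of full molecular geometries.

        # 1D gradient-domain kernel regression: learn forces, recover a
        # conservative energy (up to an additive constant).
        import numpy as np

        l, lam = 0.5, 1e-8                   # kernel length scale, ridge term (assumed)

        def k(x, y):                         # Gaussian kernel
            return np.exp(-(x - y) ** 2 / (2 * l ** 2))

        def dk_dy(x, y):                     # dk/dy
            return k(x, y) * (x - y) / l ** 2

        def d2k_dxdy(x, y):                  # d2k/(dx dy), the "Hessian" kernel
            return k(x, y) * (1.0 / l ** 2 - (x - y) ** 2 / l ** 4)

        xt = np.linspace(-1.5, 1.5, 25)      # training geometries
        F = -(4 * xt ** 3 - 2 * xt)          # forces of the toy energy E(x) = x^4 - x^2

        K = d2k_dxdy(xt[:, None], xt[None, :])
        alpha = np.linalg.solve(K + lam * np.eye(len(xt)), F)

        xs = np.linspace(-1.4, 1.4, 7)
        E_pred = -dk_dy(xs[:, None], xt[None, :]) @ alpha    # energy up to a constant
        F_pred = d2k_dxdy(xs[:, None], xt[None, :]) @ alpha  # exactly -dE_pred/dx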

  17. Machine learning of accurate energy-conserving molecular force fields

    PubMed Central

    Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel E.; Poltavsky, Igor; Schütt, Kristof T.; Müller, Klaus-Robert

    2017-01-01

    Using conservation of energy—a fundamental property of closed classical and quantum mechanical systems—we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal mol−1 for energies and 1 kcal mol−1 Å−1 for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods. PMID:28508076

  18. Quantum theory in real Hilbert space: How the complex Hilbert space structure emerges from Poincaré symmetry

    NASA Astrophysics Data System (ADS)

    Moretti, Valter; Oppio, Marco

    As earlier conjectured by several authors and much later established by Solèr (relying on partial results by Piron, Maeda-Maeda and other authors), from the lattice theory point of view, Quantum Mechanics may be formulated in real, complex or quaternionic Hilbert spaces only. Stückelberg provided some physical, but not mathematically rigorous, reasons for ruling out the real Hilbert space formulation, assuming that any formulation should encompass a statement of the Heisenberg principle. Focusing on this issue from another, in our opinion deeper, viewpoint, we argue that there is a general fundamental reason why elementary quantum systems are not described in real Hilbert spaces: their basic symmetry group. In the first part of the paper, we consider an elementary relativistic system within Wigner's approach, defined as a locally-faithful irreducible strongly-continuous unitary representation of the Poincaré group in a real Hilbert space. We prove that, if the squared-mass operator is non-negative, the system admits a natural Poincaré-invariant complex structure, unique up to sign, which commutes with the whole algebra of observables generated by the representation itself. This complex structure leads to a physically equivalent reformulation of the theory in a complex Hilbert space. Within this complex formulation, differently from what happens in the real one, all selfadjoint operators represent observables in accordance with Solèr's thesis, and the standard quantum version of the Noether theorem may be formulated. In the second part of this work, we focus on the physical hypotheses adopted to define a quantum elementary relativistic system, relaxing them and thereby making our model physically more general. We use a physically more accurate notion of irreducibility regarding the algebra of observables only, we describe the symmetries in terms of automorphisms of the restricted lattice of elementary propositions of the quantum system, and we adopt a notion of continuity referred to the states viewed as probability measures on the elementary propositions. Also in this case, the final result proves that there exists a unique (up to sign) Poincaré-invariant complex structure making the theory complex and completely fitting into Solèr's picture. This complex structure reveals a nice interplay of Poincaré symmetry and the classification of the commutant of irreducible real von Neumann algebras.

  19. 7 CFR 810.602 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of flaxseed kernels that are badly ground-damaged, badly weather... instructions. Also, underdeveloped, shriveled, and small pieces of flaxseed kernels removed in properly... recleaning. (c) Heat-damaged kernels. Kernels and pieces of flaxseed kernels that are materially discolored...

  20. Kernel Abortion in Maize 1

    PubMed Central

    Hanft, Jonathan M.; Jones, Robert J.

    1986-01-01

    Kernels cultured in vitro were induced to abort by high temperature (35°C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35°C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth. PMID:16664846

  1. 7 CFR 810.1202 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... kernels. Kernels, pieces of rye kernels, and other grains that are badly ground-damaged, badly weather.... Also, underdeveloped, shriveled, and small pieces of rye kernels removed in properly separating the...-damaged kernels. Kernels, pieces of rye kernels, and other grains that are materially discolored and...

  2. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize.

    PubMed

    Chen, Jiafa; Zhang, Luyan; Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components is discussed.

  3. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize

    PubMed Central

    Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components is discussed. PMID:27070143

  4. Photonic Hilbert transformers based on laterally apodized integrated waveguide Bragg gratings on a SOI wafer.

    PubMed

    Bazargani, Hamed Pishvai; Burla, Maurizio; Chrostowski, Lukas; Azaña, José

    2016-11-01

    We experimentally demonstrate high-performance integer and fractional-order photonic Hilbert transformers based on laterally apodized Bragg gratings in a silicon-on-insulator technology platform. The sub-millimeter-long gratings have been fabricated using single-etch electron beam lithography, and the resulting HT devices offer operation bandwidths approaching the THz range, with time-bandwidth products between 10 and 20.

  5. Heterotic reduction of Courant algebroid connections and Einstein-Hilbert actions

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav; Vysoký, Jan

    2016-08-01

    We discuss Levi-Civita connections on Courant algebroids. We define an appropriate generalization of the curvature tensor and compute the corresponding scalar curvatures in the exact and heterotic case, leading to generalized (bosonic) Einstein-Hilbert type of actions known from supergravity. In particular, we carefully analyze the process of the reduction for the generalized metric, connection, curvature tensor and the scalar curvature.

  6. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agaltsov, A. D., E-mail: agalets@gmail.com; Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr; IEPT RAS, 117997 Moscow

    2014-10-15

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.

  7. 7 CFR 810.802 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of grain kernels for which standards have been established under the.... (d) Heat-damaged kernels. Kernels and pieces of grain kernels for which standards have been...

  8. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  9. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  10. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  11. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  12. MO-FG-CAMPUS-TeP1-05: Rapid and Efficient 3D Dosimetry for End-To-End Patient-Specific QA of Rotational SBRT Deliveries Using a High-Resolution EPID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Han, B; Xing, L

    2016-06-15

    Purpose: EPID-based patient-specific quality assurance provides verification of the planning setup and delivery process that phantomless QA and log-file based virtual dosimetry methods cannot achieve. We present a method for EPID-based QA utilizing spatially-variant EPID response kernels that allows for direct calculation of the entrance fluence and 3D phantom dose. Methods: An EPID dosimetry system was utilized for 3D dose reconstruction in a cylindrical phantom for the purposes of end-to-end QA. Monte Carlo (MC) methods were used to generate pixel-specific point-spread functions (PSFs) characterizing the spatially non-uniform EPID portal response in the presence of phantom scatter. The spatially-variant PSFs were decomposed into spatially-invariant basis PSFs with the symmetric central-axis kernel as the primary basis kernel and off-axis representing orthogonal perturbations in pixel-space. This compact and accurate characterization enables the use of a modified Richardson-Lucy deconvolution algorithm to directly reconstruct entrance fluence from EPID images without iterative scatter subtraction. High-resolution phantom dose kernels were cogenerated in MC with the PSFs enabling direct recalculation of the resulting phantom dose by rapid forward convolution once the entrance fluence was calculated. A Delta4 QA phantom was used to validate the dose reconstructed in this approach. Results: The spatially-invariant representation of the EPID response accurately reproduced the entrance fluence with >99.5% fidelity with a simultaneous reduction of >60% in computational overhead. 3D dose for 10⁶ voxels was reconstructed for the entire phantom geometry. A 3D global gamma analysis demonstrated a >95% pass rate at 3%/3 mm. Conclusion: Our approach demonstrates the capabilities of an EPID-based end-to-end QA methodology that is more efficient than traditional EPID dosimetry methods. Displacing the point of measurement external to the QA phantom reduces the necessary complexity of the phantom itself while offering a method that is highly scalable and inherently generalizable to rotational and trajectory based deliveries. This research was partially supported by Varian.
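
    The fluence-reconstruction step can be caricatured with a single spatially-invariant PSF (the basis-PSF decomposition of the abstract is omitted) and off-the-shelf Richardson-Lucy deconvolution. scikit-image and SciPy are assumed; the Gaussian PSF, the dose kernel and the toy field are stand-ins, and the num_iter keyword follows recent scikit-image versions.

        # Deconvolve a simulated portal image to an entrance fluence, then
        # forward-convolve with a dose kernel.
        import numpy as np
        from scipy.ndimage import gaussian_filter
        from skimage.restoration import richardson_lucy

        fluence = np.zeros((64, 64))
        fluence[24:40, 24:40] = 1.0                   # toy entrance fluence (open field)

        def gaussian_psf(shape, sigma):
            psf = np.zeros(shape)
            psf[shape[0] // 2, shape[1] // 2] = 1.0
            return gaussian_filter(psf, sigma)

        psf = gaussian_psf((15, 15), 2.0)             # stand-in central-axis EPID PSF
        epid = gaussian_filter(fluence, 2.0)          # simulated EPID image

        fluence_rec = richardson_lucy(epid, psf, num_iter=30, clip=False)
        dose = gaussian_filter(fluence_rec, 4.0)      # rapid forward convolution to dose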

  13. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    PubMed

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
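
    At the core of the model is a kernel extreme learning machine with a weighted composite kernel. Below is a minimal numpy sketch; the kernel weights and parameters are fixed by hand (in the paper they are optimized by QPSO), and the two-blob data set is a toy stand-in for e-nose features.

        # KELM with a composite (weighted Gaussian + polynomial) kernel.
        import numpy as np

        def gaussian(X, Y, gamma=0.5):
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def poly(X, Y, degree=2, c0=1.0):
            return (X @ Y.T + c0) ** degree

        def composite(X, Y, w=(0.7, 0.3)):
            return w[0] * gaussian(X, Y) + w[1] * poly(X, Y)  # weighted base kernels

        def kelm_fit(X, T, C=10.0):
            omega = composite(X, X)
            return np.linalg.solve(np.eye(len(X)) / C + omega, T)  # beta = (I/C + K)^-1 T

        def kelm_predict(Xtr, beta, Xte):
            return composite(Xte, Xtr) @ beta         # argmax over columns gives the class

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(3, 1, (20, 4))])
        T = np.vstack([np.tile([1, 0], (20, 1)), np.tile([0, 1], (20, 1))])  # one-hot targets
        beta = kelm_fit(X, T)
        pred = kelm_predict(X, beta, X).argmax(axis=1)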

  14. Large dynamic range optical vector analyzer based on optical single-sideband modulation and Hilbert transform

    NASA Astrophysics Data System (ADS)

    Xue, Min; Pan, Shilong; Zhao, Yongjiu

    2016-07-01

    A large dynamic range optical vector analyzer (OVA) based on optical single-sideband modulation is proposed and demonstrated. By dividing the optical signal after the optical device under test into two paths, reversing the phase of one swept sideband using a Hilbert transformer in one path, and detecting the two signals from the two paths with a balanced photodetector, the measurement errors induced by the residual -1st-order sideband and the high-order sidebands can be eliminated and the dynamic range of the measurement is increased. In a proof-of-concept experiment, the stimulated Brillouin scattering and a fiber Bragg grating are measured by OVAs with and without the Hilbert transform and balanced photodetection. Results show that an improvement of about 40 dB in the measurement dynamic range is realized by the proposed OVA.

  15. An "unreasonable effectiveness" of Hilbert transform for the transition phase behavior in an Aharonov-Bohm two-path interferometer

    NASA Astrophysics Data System (ADS)

    Englman, R.

    2016-08-01

    The recent phase shift data of Takada et al. (Phys. Rev. Lett. 113 (2014) 126601) for a two-level system are reconstructed from their current intensity curves by the method of Hilbert transform, for which the underlying physics is the principle of causality. An introductory algebraic model illustrates pedagogically the working of the method and leads to newly derived relationships involving phenomenological parameters, in particular for the sign of the phase slope between the resonance peaks. While the parametrization of the experimental current intensity data in terms of a few model parameters shows only a qualitative agreement for the phase shift, due to the strong impact of small, detailed variations in the experimental intensity curve on the phase behavior, the numerical Hilbert transform yields a satisfactory reproduction of the phase.
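
    The causality principle invoked here can be made concrete: for a minimum-phase response, the phase equals (up to sign conventions) minus the Hilbert transform of the log-magnitude, so the phase can be recovered from an intensity curve alone. A toy reconstruction with a Lorentzian response follows; SciPy is assumed, and this illustrates the principle rather than the authors' data pipeline.

        # Recover the phase of a causal (minimum-phase) response from intensity only.
        import numpy as np
        from scipy.signal import hilbert

        w = np.linspace(-10, 10, 2048)            # frequency grid (arbitrary units)
        t_true = 1.0 / (1.0 + 1j * w)             # toy causal Lorentzian response
        intensity = np.abs(t_true) ** 2           # what the experiment measures

        log_mag = 0.5 * np.log(intensity)         # ln|t|
        phase_rec = -np.imag(hilbert(log_mag))    # minus the Hilbert transform of ln|t|
        phase_true = np.angle(t_true)

        err = np.max(np.abs(phase_rec - phase_true)[100:-100])
        print(err)                                # small away from the grid edges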

  16. Connes distance function on fuzzy sphere and the connection between geometry and statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devi, Yendrembam Chaoba, E-mail: chaoba@bose.res.in; Chakraborty, Biswajit, E-mail: biswajit@bose.res.in; Prajapat, Shivraj, E-mail: shraprajapat@gmail.com

    An algorithm to compute Connes spectral distance, adaptable to the Hilbert-Schmidt operatorial formulation of non-commutative quantum mechanics, was developed earlier by introducing the appropriate spectral triple and used to compute infinitesimal distances in the Moyal plane, revealing a deep connection between geometry and statistics. In this paper, using the same algorithm, the Connes spectral distance has been calculated in the Hilbert-Schmidt operatorial formulation for the fuzzy sphere whose spatial coordinates satisfy the su(2) algebra. This has been computed for both the discrete and the Perelomov SU(2) coherent states. Here also, we get a connection between geometry and statistics, which is shown by computing the infinitesimal distance between mixed states on the quantum Hilbert space of a particular fuzzy sphere, indexed by n ∈ ℤ/2.

  17. Semiclassical propagation: Hilbert space vs. Wigner representation

    NASA Astrophysics Data System (ADS)

    Gottwald, Fabian; Ivanov, Sergei D.

    2018-03-01

    A unified viewpoint on the van Vleck and Herman-Kluk propagators in Hilbert space and their recently developed counterparts in Wigner representation is presented. Based on this viewpoint, the Wigner Herman-Kluk propagator is conceptually the most general one. Nonetheless, the respective semiclassical expressions for expectation values in terms of the density matrix and the Wigner function are mathematically proven here to coincide. The only remaining difference is a mere technical flexibility of the Wigner version in choosing the Gaussians' width for the underlying coherent states beyond minimal uncertainty. This flexibility is investigated numerically on prototypical potentials and it turns out to provide neither qualitative nor quantitative improvements. Given the aforementioned generality, utilizing the Wigner representation for semiclassical propagation thus leads to the same performance as employing the respective most-developed (Hilbert-space) methods for the density matrix.

  18. An Image Encryption Algorithm Utilizing Julia Sets and Hilbert Curves

    PubMed Central

    Sun, Yuanyuan; Chen, Lina; Xu, Rudan; Kong, Ruiqing

    2014-01-01

    Image encryption is an important and effective technique to protect image security. In this paper, a novel image encryption algorithm combining Julia sets and Hilbert curves is proposed. The algorithm utilizes Julia sets' parameters to generate a random sequence as the initial keys and gets the final encryption keys by scrambling the initial keys through the Hilbert curve. The final cipher image is obtained by modular arithmetic and a diffusion operation. In this method, only a few parameters are needed for key generation, which greatly reduces the storage space. Moreover, because of the Julia sets' properties, such as infiniteness and chaotic characteristics, the keys have high sensitivity even to a tiny perturbation. The experimental results indicate that the algorithm has a large key space, good statistical properties, high sensitivity for the keys, and effective resistance to the chosen-plaintext attack. PMID:24404181
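
    The Hilbert-curve scrambling step can be sketched directly: key bytes laid out on a 2^k x 2^k grid are re-ordered by walking the grid along the Hilbert curve. The d2xy routine below is the classic iterative distance-to-coordinates conversion; the Julia-set key generation of the paper is replaced by a fixed toy array.

        # Re-order a key grid along the Hilbert curve.
        import numpy as np

        def d2xy(n, d):
            """Map distance d along the Hilbert curve to (x, y) on an n x n grid."""
            x = y = 0
            s, t = 1, d
            while s < n:
                rx = 1 & (t // 2)
                ry = 1 & (t ^ rx)
                if ry == 0:                   # rotate the quadrant when needed
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                x += s * rx
                y += s * ry
                t //= 4
                s *= 2
            return x, y

        n = 8                                                          # grid side, power of 2
        initial_keys = np.arange(n * n, dtype=np.uint8).reshape(n, n)  # stand-in keys
        order = [d2xy(n, d) for d in range(n * n)]
        scrambled = np.array([initial_keys[y, x] for x, y in order])   # final key stream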

  19. Semiconductor laser self-mixing micro-vibration measuring technology based on Hilbert transform

    NASA Astrophysics Data System (ADS)

    Tao, Yufeng; Wang, Ming; Xia, Wei

    2016-06-01

    A signal-processing method combining the wavelet transform and the Hilbert transform is employed for the measurement of uniform or non-uniform vibrations in a self-mixing interferometer based on a semiconductor laser diode with a quantum well. Background noise and fringe inclination are removed by the decomposition, fringe counting is adopted to automatically determine the decomposition level, and a pair of exact quadrature signals is produced by the Hilbert transform to extract the vibration. The potential of the proposed method for real-time measurement of micro-vibrations with high accuracy and wide dynamic response bandwidth is demonstrated by both simulation and experiment. Advantages and error sources are presented as well. The main features of the proposed semiconductor laser self-mixing interferometer are constant current supply, high resolution, a very simple optical path and much higher tolerance to the feedback level than existing self-mixing interferometers, which makes it competitive for non-contact vibration measurement.

  20. Classification With Truncated Distance Kernel.

    PubMed

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
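
    Since the abstract notes that the kernel can be used "by replacing the kernel evaluation", a precomputed Gram matrix is enough to try it in a standard toolbox. A minimal sketch with scikit-learn; the truncation parameter rho and the XOR-like toy data are illustrative choices.

        # TL1 kernel K(u, v) = max(rho - ||u - v||_1, 0) with a precomputed-kernel SVM.
        import numpy as np
        from sklearn.svm import SVC

        def tl1_kernel(X, Y, rho=2.0):
            d1 = np.abs(X[:, None, :] - Y[None, :, :]).sum(-1)  # pairwise L1 distances
            return np.maximum(rho - d1, 0.0)                    # truncation keeps it local

        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 2))
        y = (X[:, 0] * X[:, 1] > 0).astype(int)                 # XOR-like labels

        clf = SVC(kernel="precomputed").fit(tl1_kernel(X, X), y)
        print(clf.score(tl1_kernel(X, X), y))                   # training accuracy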

  1. Application of Huang-Hilbert Transforms to Geophysical Datasets

    NASA Technical Reports Server (NTRS)

    Duffy, Dean G.

    2003-01-01

    The Huang-Hilbert transform is a promising new method for analyzing nonstationary and nonlinear datasets. In this talk I will apply this technique to several important geophysical datasets. To understand the strengths and weaknesses of this method, multi-year, hourly datasets of the sea level heights and solar radiation will be analyzed. Then we will apply this transform to the analysis of gravity waves observed in a mesoscale observational net.

  2. Efficient Asymptotic Preserving Deterministic methods for the Boltzmann Equation

    DTIC Science & Technology

    2011-04-01

    history tracing back to Hilbert, Chapman and Enskog (Cercignani, 1988) at the beginning of the last century. The mathematical difficulties related to the...accurate deterministic computations of the stationary solutions, which may be treated by schemes aimed to capture the stationary state (Greenberg and... Stokes model, can be considered using the Chapman-Enskog and the Hilbert expansions. We refer to Levermore (1996) for a mathematical setting of the

  3. The Einstein-Hilbert gravitation with minimum length

    NASA Astrophysics Data System (ADS)

    Louzada, H. L. C.

    2018-05-01

    We study the Einstein-Hilbert gravitation with the deformed Heisenberg algebra leading to the minimum length, with the intention of finding and estimating the corrections in this theory, clarifying whether or not it is possible to obtain, by means of the minimum length, a theory in D=4 which is causal, unitary and provides a massive graviton.

  4. Evaluation of accuracy of synthetic waveforms for subduction-zone earthquakes by using a land-ocean unified 3D structure model

    NASA Astrophysics Data System (ADS)

    Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi

    2018-06-01

    Seismic wave propagation from shallow subduction-zone earthquakes can be strongly affected by 3D heterogeneous structures, such as oceanic water and sedimentary layers with irregular thicknesses. Synthetic waveforms must incorporate these effects so that they reproduce the characteristics of the observed waveforms properly. In this paper, we evaluate the accuracy of synthetic waveforms for small earthquakes in the source area of the 2011 Tohoku-Oki earthquake (MJMA 9.0) at the Japan Trench. We compute the synthetic waveforms on the basis of a land-ocean unified 3D structure model using our heterogeneity, oceanic layer, and topography finite-difference method. In estimating the source parameters, we apply the first-motion augmented moment tensor (FAMT) method that we have recently proposed to minimize biases due to inappropriate source parameters. We find that, among several estimates, only the FAMT solutions are located very near the plate interface, which demonstrates the importance of using a 3D model for ensuring the self-consistency of the structure model, source position, and source mechanisms. Using several different filter passbands, we find that the full waveforms with periods longer than about 10 s can be reproduced well, while the degree of waveform fitting becomes worse for periods shorter than about 10 s. At periods around 4 s, the initial body waveforms can be modeled, but the later large-amplitude surface waves are difficult to reproduce correctly. The degree of waveform fitting depends on the source location, with better fittings for deep sources near land. We further examine the 3D sensitivity kernels: for the period of 12.8 s, the kernel shows a symmetric pattern with respect to the straight path between the source and the station, while for the period of 6.1 s, a curved pattern is obtained. Also, the range of the sensitive area becomes shallower for the latter case. Such a 3D spatial pattern cannot be predicted by 1D Earth models and indicates the strong effects of 3D heterogeneity on short-period (≲ 10 s) waveforms. Thus, it would be necessary to consider such 3D effects when improving the structure and source models.

  5. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach

    PubMed Central

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-01-01

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202

  6. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
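
    A compact sketch of the kernel PCA step with a fractional power polynomial model, retaining only eigenvectors with positive eigenvalues as the paper prescribes (such a kernel need not be positive semidefinite). The signed-power form, which keeps the kernel real-valued for negative inner products, and the random stand-ins for Gabor feature vectors are assumptions of this sketch.

        # Kernel PCA with a fractional power polynomial kernel.
        import numpy as np

        def frac_poly_kernel(X, Y, d=0.8, c0=1.0):
            s = X @ Y.T + c0
            return np.sign(s) * np.abs(s) ** d    # real-valued fractional power

        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 10))             # stand-ins for Gabor feature vectors

        K = frac_poly_kernel(X, X)
        n = len(X)
        J = np.eye(n) - np.ones((n, n)) / n
        Kc = J @ K @ J                            # double-centering in feature space

        vals, vecs = np.linalg.eigh(Kc)
        keep = vals > 1e-10                       # keep positive eigenvalues only
        alphas = vecs[:, keep] / np.sqrt(vals[keep])
        features = Kc @ alphas                    # kernel PCA projections of the data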

  7. Revision of laser-induced damage threshold evaluation from damage probability data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bataviciute, Gintare; Grigas, Povilas; Smalakys, Linas

    2013-04-15

    In this study, the applicability of the commonly used Damage Frequency Method (DFM) is addressed in the context of Laser-Induced Damage Threshold (LIDT) testing with pulsed lasers. A simplified computer model representing the statistical interaction between laser irradiation and randomly distributed damage precursors is applied for Monte Carlo experiments. The reproducibility of LIDT predicted from DFM is examined under both idealized and realistic laser irradiation conditions by performing numerical 1-on-1 tests. A widely accepted linear fitting resulted in systematic errors when estimating LIDT and its error bars. For the same purpose, a Bayesian approach was proposed. A novel concept of parametric regression based on varying kernel and maximum likelihood fitting technique is introduced and studied. Such an approach exhibited clear advantages over conventional linear fitting and led to more reproducible LIDT evaluation. Furthermore, LIDT error bars are obtained as a natural outcome of parametric fitting which exhibit realistic values. The proposed technique has been validated on two conventionally polished fused silica samples (355 nm, 5.7 ns).

  8. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function, but less to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  9. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function, but less to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  10. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...

  11. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest to use kernel k-means sampling, which is shown in our work to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.
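
    A minimal sketch of the Nyström approximation with k-means-selected landmarks follows. As a simplification, the kernel k-means centers are approximated here by ordinary k-means in input space; scikit-learn and a Gaussian kernel are assumed, and the data are synthetic.

        # Nystrom approximation K ~ K_nm K_mm^+ K_mn with k-means landmarks.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics.pairwise import rbf_kernel

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 5))
        m = 20                                       # number of landmark points

        centers = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_
        K_nm = rbf_kernel(X, centers)                # n x m cross-kernel
        K_mm = rbf_kernel(centers, centers)          # m x m landmark kernel

        K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T
        err = np.linalg.norm(rbf_kernel(X, X) - K_approx, "fro")
        print(f"Frobenius error: {err:.3f}")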

  12. Electronic polarization effect on low-frequency infrared and Raman spectra of aprotic solvent: Molecular dynamics simulation study with charge response kernel by second order Møller-Plesset perturbation method

    NASA Astrophysics Data System (ADS)

    Isegawa, Miho; Kato, Shigeki

    2007-12-01

    Low-frequency infrared (IR) and depolarized Raman scattering (DRS) spectra of acetonitrile, methylene chloride, and acetone liquids are simulated via molecular dynamics calculations with the charge response kernel (CRK) model obtained at the second order Møller-Plesset perturbation (MP2) level. For this purpose, the analytical second derivative technique for the MP2 energy is employed to evaluate the CRK matrices. The calculated IR spectra reasonably agree with the experiments. In particular, the agreement is excellent for acetone because the present CRK model well reproduces the experimental polarizability in the gas phase. The importance of interaction-induced dipole moments in characterizing the spectral shapes is stressed. The DRS spectrum of acetone is mainly discussed because the experimental spectrum is available only for this molecule. The calculated spectrum is close to the experiment. The comparison of the present results with those by the multiple random telegraph model is also made. By decomposing the polarizability anisotropy time correlation function into the contributions from the permanent, induced polarizability and their cross term, a discrepancy from the previous calculations is observed in the sign of the permanent-induced cross-term contribution. The origin of this discrepancy is discussed by analyzing the correlation functions for acetonitrile.

  13. Construction of non-Markovian coarse-grained models employing the Mori-Zwanzig formalism and iterative Boltzmann inversion

    NASA Astrophysics Data System (ADS)

    Yoshimoto, Yuta; Li, Zhen; Kinefuchi, Ikuya; Karniadakis, George Em

    2017-12-01

    We propose a new coarse-grained (CG) molecular simulation technique based on the Mori-Zwanzig (MZ) formalism along with the iterative Boltzmann inversion (IBI). Non-Markovian dissipative particle dynamics (NMDPD) taking into account memory effects is derived in a pairwise interaction form from the MZ-guided generalized Langevin equation. It is based on the introduction of auxiliary variables that allow for the replacement of a non-Markovian equation with a Markovian one in a higher dimensional space. We demonstrate that the NMDPD model exploiting MZ-guided memory kernels can successfully reproduce the dynamic properties such as the mean square displacement and velocity autocorrelation function of a Lennard-Jones system, as long as the memory kernels are appropriately evaluated based on the Volterra integral equation using the force-velocity and velocity-velocity correlations. Furthermore, we find that the IBI correction of a pair CG potential significantly improves the representation of static properties characterized by a radial distribution function and pressure, while it has little influence on the dynamic processes. Our findings suggest that combining the advantages of both the MZ formalism and IBI leads to an accurate representation of both the static and dynamic properties of microscopic systems that exhibit non-Markovian behavior.

  14. Nuclear magnetic resonance shielding constants and chemical shifts in linear 199Hg compounds: a comparison of three relativistic computational methods.

    PubMed

    Arcisauskaite, Vaida; Melo, Juan I; Hemmingsen, Lars; Sauer, Stephan P A

    2011-07-28

    We investigate the importance of relativistic effects on NMR shielding constants and chemical shifts of linear HgL(2) (L = Cl, Br, I, CH(3)) compounds using three different relativistic methods: the fully relativistic four-component approach and the two-component approximations, linear response elimination of small component (LR-ESC) and zeroth-order regular approximation (ZORA). LR-ESC reproduces successfully the four-component results for the C shielding constant in Hg(CH(3))(2) within 6 ppm, but fails to reproduce the Hg shielding constants and chemical shifts. The latter is mainly due to an underestimation of the change in spin-orbit contribution. Even though ZORA underestimates the absolute Hg NMR shielding constants by ∼2100 ppm, the differences between Hg chemical shift values obtained using ZORA and the four-component approach without spin-density contribution to the exchange-correlation (XC) kernel are less than 60 ppm for all compounds using three different functionals, BP86, B3LYP, and PBE0. However, larger deviations (up to 366 ppm) occur for Hg chemical shifts in HgBr(2) and HgI(2) when ZORA results are compared with four-component calculations with non-collinear spin-density contribution to the XC kernel. For the ZORA calculations it is necessary to use large basis sets (QZ4P) and the TZ2P basis set may give errors of ∼500 ppm for the Hg chemical shifts, despite deceivingly good agreement with experimental data. A Gaussian nucleus model for the Coulomb potential reduces the Hg shielding constants by ∼100-500 ppm and the Hg chemical shifts by 1-143 ppm compared to the point nucleus model depending on the atomic number Z of the coordinating atom and the level of theory. The effect on the shielding constants of the lighter nuclei (C, Cl, Br, I) is, however, negligible.

  15. Exploiting graph kernels for high performance biomedical relation extraction.

    PubMed

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods, when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM performed better than the APG kernel for the BioInfer dataset in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures. We demonstrate high performance Chemical-Induced Disease relation extraction without employing external knowledge sources or task-specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences. We also show that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, we showed the ASM kernel to be effective for biomedical relation extraction, with comparable performance to the APG kernel for datasets such as the CID sentence-level relation extraction and BioInfer in PPI. Overall, the APG kernel is shown to be significantly more accurate than the ASM kernel, achieving better performance on most datasets.

  16. Geometry and experience: Einstein's 1921 paper and Hilbert's axiomatic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Gandt, Francois

    2006-06-19

    In his 1921 paper Geometrie und Erfahrung, Einstein describes the new epistemological status of geometry, divorced from any intuitive or a priori content. He calls that 'axiomatics', following Hilbert's theoretical developments on axiomatic systems, which started with the stimulus given by a talk by Hermann Wiener in 1891 and progressed until the Foundations of Geometry in 1899. Difficult questions arise: how is a theoretical system related to an intuitive empirical content?

  17. An Investigation of the Overlap Among Disinhibited Eating Behaviors in Children and Adolescents

    DTIC Science & Technology

    2013-09-01

    findings suggest that emotional eating may be linked with aberrant eating patterns (excess overall energy intake and consumption of high-fat foods) that...overweight and have greater body fat mass than youth who reported no loss of control eating episodes (Ackard, Neumark-Sztainer, Story, & Perry, 2003; Field...reporting loss of control eating consumed more overall energy (Hilbert & Czaja, 2009; Hilbert et al., 2010), especially from fat and carbohydrate

  18. Inverse Problems and Imaging (Pitman Research Notes in Mathematics Series Number 245)

    DTIC Science & Technology

    1991-01-01

    Multiparameter spectral theory in Hilbert space functional differential equations B D Sleeman F Kappel and W Schappacher 24 Mathematical modelling...techniques 49 Sequence spaces R Aris W H Ruckle 25 Singular points of smooth mappings 50 Recent contributions to nonlinear C G Gibson partial...of convergence in the central limit T Husain theorem 86 Hamilton-Jacobi equations in Hilbert spaces Peter Hall V Barbu and G Da Prato 63 Solution of

  19. Homogenization via Sequential Projection to Nested Subspaces Spanned by Orthogonal Scaling and Wavelet Orthonormal Families of Functions

    DTIC Science & Technology

    2008-07-01

    operators in Hilbert spaces. The homogenization procedure through successive multi-resolution projections is presented, followed by a numerical example of...is intended to be essentially self-contained. The mathematical (Greenberg 1978; Gilbert 2006) and signal processing (Strang and Nguyen 1995...literature listed in the references. The ideas behind multi-resolution analysis unfold from the theory of linear operators in Hilbert spaces (Davis 1975

  20. Experimental Test of Nonclassicality for a Single Particle

    DTIC Science & Technology

    2008-08-01

    photon Greenberger-Horne-Zeilinger entanglement," Nature 403, 515-519 (2000). 15. G. Brida, M. Genovese, C. Novero, and E. Predazzi, "New experimental...33, 34]) and its ability to show that some quantum states in a two-dimensional Hilbert space cannot be classical. We note that because this is a...dimensional Hilbert space and a physical implementation of that test. Appendix: a necessary requirement for convincingly realizing the Alicki-Van Ryn

  1. Spherical harmonics and rigged Hilbert spaces

    NASA Astrophysics Data System (ADS)

    Celeghini, E.; Gadella, M.; del Olmo, M. A.

    2018-05-01

    This paper is devoted to the study of discrete and continuous bases for spaces supporting representations of SO(3) and SO(3, 2) in which the spherical harmonics are involved. We show how discrete and continuous bases coexist on appropriate choices of rigged Hilbert spaces. We prove the continuity of the relevant operators, and of the operators in the algebras they span, using appropriate topologies on our spaces. Finally, we discuss the properties of the functionals that form the continuous basis.

  2. Using the Hilbert uniqueness method in a reconstruction algorithm for electrical impedance tomography.

    PubMed

    Dai, W W; Marsili, P M; Martinez, E; Morucci, J P

    1994-05-01

    This paper presents a new version of the layer stripping algorithm, in the sense that it works by repeatedly stripping away the outermost layer of the medium after having determined the conductivity value in that layer. In order to stabilize the ill-posed boundary value problem related to each layer, we base our algorithm on the Hilbert uniqueness method (HUM) and implement it with the boundary element method (BEM).

  3. Quantum Hilbert Hotel.

    PubMed

    Potoček, Václav; Miatto, Filippo M; Mirhosseini, Mohammad; Magaña-Loaiza, Omar S; Liapis, Andreas C; Oi, Daniel K L; Boyd, Robert W; Jeffers, John

    2015-10-16

    In 1924 David Hilbert conceived a paradoxical tale involving a hotel with an infinite number of rooms to illustrate some aspects of the mathematical notion of "infinity." In continuous-variable quantum mechanics we routinely make use of infinite state spaces: here we show that such a theoretical apparatus can accommodate an analog of Hilbert's hotel paradox. We devise a protocol that, mimicking what happens to the guests of the hotel, maps the amplitudes of an infinite eigenbasis to twice their original quantum number in a coherent and deterministic manner, producing infinitely many unoccupied levels in the process. We demonstrate the feasibility of the protocol by experimentally realizing it on the orbital angular momentum of a paraxial field. This new non-Gaussian operation may be exploited, for example, for enhancing the sensitivity of NOON states, for increasing the capacity of a channel, or for multiplexing multiple channels into a single one.
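
    Purely as an illustration of the bookkeeping, not of the optical implementation reported above, the level-doubling map c_n → level 2n can be sketched on a truncated state vector; the dimension and random state below are arbitrary:

      import numpy as np

      dim = 16
      rng = np.random.default_rng(0)
      psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
      psi /= np.linalg.norm(psi)                 # amplitudes c_n on levels n

      # "Hilbert hotel" map: move the amplitude of level n to level 2n.
      # On a truncated space only finitely many amplitudes fit, which is
      # an artifact of truncation, not of the ideal protocol.
      phi = np.zeros(2 * dim, dtype=complex)
      phi[::2] = psi                             # c_n -> level 2n
      print(np.allclose(phi[1::2], 0))           # odd levels empty: True
      print(np.isclose(np.linalg.norm(phi), 1))  # coherent, norm-preserving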

  4. Cosmic transit and anisotropic models in f(R,T) gravity

    NASA Astrophysics Data System (ADS)

    Sahu, S. K.; Tripathy, S. K.; Sahoo, P. K.; Nath, A.

    2017-06-01

    Accelerating cosmological models are constructed in a modified gravity theory dubbed $f(R,T)$ gravity, against the backdrop of an anisotropic Bianchi type-III universe. $f(R,T)$ is a function of the Ricci scalar $R$ and the trace $T$ of the energy-momentum tensor, and it replaces the Ricci scalar in the Einstein-Hilbert action of General Relativity. The models are constructed for two different ways of modifying the Einstein-Hilbert action. Exact solutions of the field equations are obtained by a novel method of integration. We explore the behaviour of the cosmic transit from a decelerated phase of expansion to an accelerated phase to extract the dynamical features of the universe. Within the formalism of the present work, it is found that the modification of the Einstein-Hilbert action does not affect the scale factor. However, the dynamics of the effective dark energy equation of state is significantly affected.

  5. Computer implemented empirical mode decomposition method, apparatus, and article of manufacture for two-dimensional signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2001-01-01

    A computer implemented method of processing two-dimensional physical signals includes five basic components and the associated presentation techniques of the results. The first component decomposes the two-dimensional signal into one-dimensional profiles. The second component is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF's) from each profile based on local extrema and/or curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the profiles. In the third component, the IMF's of each profile are then subjected to a Hilbert Transform. The fourth component collates the Hilbert transformed IMF's of the profiles to form a two-dimensional Hilbert Spectrum. A fifth component manipulates the IMF's, for example filtering the two-dimensional signal by reconstructing it from selected IMF's.
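
    The third component, the Hilbert Transform step, is easy to sketch in isolation: the analytic signal of an IMF yields an instantaneous amplitude and frequency at every sample. A minimal sketch with scipy on a synthetic chirp standing in for an IMF (the EMD sifting itself is omitted):

      import numpy as np
      from scipy.signal import hilbert

      fs = 1000.0                       # sampling rate, Hz
      t = np.arange(0, 1.0, 1.0 / fs)
      # Synthetic IMF-like component: a chirp sweeping 20 -> 70 Hz.
      imf = np.cos(2 * np.pi * (20 * t + 25 * t ** 2))

      analytic = hilbert(imf)                        # x + i * H[x]
      amplitude = np.abs(analytic)                   # instantaneous amplitude
      phase = np.unwrap(np.angle(analytic))
      inst_freq = np.diff(phase) / (2 * np.pi) * fs  # instantaneous freq, Hz

      print(inst_freq[100], inst_freq[800])          # ~25 Hz early, ~60 Hz late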

  6. Spectral Automorphisms in Quantum Logics

    NASA Astrophysics Data System (ADS)

    Ivanov, Alexandru; Caragheorgheopol, Dan

    2010-12-01

    In quantum mechanics, the Hilbert space formalism might be physically justified in terms of some axioms based on the orthomodular lattice (OML) mathematical structure (Piron in Foundations of Quantum Physics, Benjamin, Reading, 1976). We intend to investigate the extent to which some fundamental physical facts can be described in the more general framework of OMLs, without the support of Hilbert space-specific tools. We consider the study of the properties of lattice automorphisms as a “substitute” for Hilbert space techniques in investigating the spectral properties of observables. This is why we introduce the notion of a spectral automorphism of an OML. Properties of spectral automorphisms and of their spectra are studied. We prove that the presence of nontrivial spectral automorphisms allows us to distinguish between classical and nonclassical theories. We also prove, for finite dimensional OMLs, that for every spectral automorphism there is a basis of invariant atoms. This is an analogue of the spectral theorem for unitary operators having purely point spectrum.

  7. Two elementary proofs of the Wigner theorem on symmetry in quantum mechanics

    NASA Astrophysics Data System (ADS)

    Simon, R.; Mukunda, N.; Chaturvedi, S.; Srinivasan, V.

    2008-11-01

    In quantum theory, symmetry has to be defined necessarily in terms of the family of unit rays, the state space. The theorem of Wigner asserts that a symmetry so defined at the level of rays can always be lifted into a linear unitary or an antilinear antiunitary operator acting on the underlying Hilbert space. We present two proofs of this theorem which are both elementary and economical. Central to our proofs is the recognition that a given Wigner symmetry can, by post-multiplication by a unitary symmetry, be taken into either the identity or complex conjugation. Our analysis often focuses on the behaviour of certain two-dimensional subspaces of the Hilbert space under the action of a given Wigner symmetry, but the relevance of this behaviour to the larger picture of the whole Hilbert space is made transparent at every stage.

  8. Liquid identification by Hilbert spectroscopy

    NASA Astrophysics Data System (ADS)

    Lyatti, M.; Divin, Y.; Poppe, U.; Urban, K.

    2009-11-01

    Fast and reliable identification of liquids is of great importance in, for example, security, biology and the beverage industry. An unambiguous identification of liquids can be made by electromagnetic measurements of their dielectric functions in the frequency range of their main dispersions, but this frequency range, from a few GHz to a few THz, is not covered by any conventional spectroscopy. We have developed a concept of liquid identification based on our new Hilbert spectroscopy and high-Tc Josephson junctions, which can operate in the intermediate range from microwaves to THz frequencies. A demonstration setup has been developed, consisting of a polychromatic radiation source and a compact Hilbert spectrometer integrated in a Stirling cryocooler. Reflection polychromatic spectra of various bottled liquids have been measured in the spectral range of 15-300 GHz with a total scanning time down to 0.2 s, and identification of liquids has been demonstrated.

  9. Independence and totalness of subspaces in phase space methods

    NASA Astrophysics Data System (ADS)

    Vourdas, A.

    2018-04-01

    The concepts of independence and totalness of subspaces are introduced in the context of quasi-probability distributions in phase space, for quantum systems with finite-dimensional Hilbert space. It is shown that due to the non-distributivity of the lattice of subspaces, there are various levels of independence, from pairwise independence up to (full) independence. Pairwise totalness, totalness and other intermediate concepts are also introduced, which roughly express that the subspaces overlap strongly among themselves, and they cover the full Hilbert space. A duality between independence and totalness, that involves orthocomplementation (logical NOT operation), is discussed. Another approach to independence is also studied, using Rota's formalism on independent partitions of the Hilbert space. This is used to define informational independence, which is proved to be equivalent to independence. As an application, the pentagram (used in discussions on contextuality) is analysed using these concepts.

  10. Diurnal characteristics of turbulent intermittency in the Taklimakan Desert

    NASA Astrophysics Data System (ADS)

    Wei, Wei; Wang, Minzhong; Zhang, Hongsheng; He, Qing; Ali, Mamtimin; Wang, Yinjun

    2017-12-01

    A case study is performed to investigate the behavior of turbulent intermittency in the Taklimakan Desert using an intuitive, direct, and adaptive method, arbitrary-order Hilbert spectral analysis (HSA). Decomposed modes from the vertical wind speed series confirm the dyadic filter-bank nature of the empirical mode decomposition process. Due to the larger eddies in the convective boundary layer (CBL), higher-energy modes occur during the day. The second-order Hilbert spectra L2(ω) delineate the spectral gap separating fine-scale turbulence from large-scale motions. Both the values of kurtosis and the Hilbert-based scaling exponent ξ(q) reveal that the turbulent intermittency at night is much stronger than that during the day, and the stronger intermittency is associated with more stable stratification under clear-sky conditions. This study fills a gap in the characterization of turbulent intermittency in the Taklimakan Desert area using a relatively new method.

  11. 7 CFR 810.2202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernels, foreign material, and shrunken and broken kernels. The sum of these three factors may not exceed... the removal of dockage and shrunken and broken kernels. (g) Heat-damaged kernels. Kernels, pieces of... sample after the removal of dockage and shrunken and broken kernels. (h) Other grains. Barley, corn...

  12. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  13. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or otherwise...

  14. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  15. Unconventional protein sources: apricot seed kernels.

    PubMed

    Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M

    1981-09-01

    Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter) and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2-50.91%), protein (23.74-25.70%) and fiber (15.08-18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were used for an extensive study including the qualitative determination of the amino acid constituents by acid hydrolysis, quantitative determination of some amino acids, and biological evaluation of the kernel proteins in order to use them as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels due to low food consumption because of their bitterness, although there was no loss in weight in that case. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be higher in biological value than the treated apricot seed kernels. The Net Protein Ratio data, which accounts for both weight maintenance and growth, showed the treated apricot seed kernels to be higher in biological value than both the Hamawy and Amar kernels, whose Net Protein Ratios were nearly equal.

  16. An introduction to kernel-based learning algorithms.

    PubMed

    Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B

    2001-01-01

    This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples of successful kernel-based learning methods. We first give a short background on Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel-based learning in supervised and unsupervised scenarios, including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
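
    All three methods share the kernel trick: the data are touched only through pairwise kernel evaluations collected in a Gram matrix. A minimal numpy sketch of one of them, kernel PCA with an RBF kernel (the data and the parameter gamma are placeholders):

      import numpy as np

      def rbf_kernel(X, gamma=1.0):
          sq = np.sum(X ** 2, axis=1)
          d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
          return np.exp(-gamma * d2)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 5))

      K = rbf_kernel(X, gamma=0.5)
      n = K.shape[0]
      one = np.ones((n, n)) / n
      Kc = K - one @ K - K @ one + one @ K @ one  # center in feature space

      vals, vecs = np.linalg.eigh(Kc)             # ascending eigenvalues
      vals, vecs = vals[::-1], vecs[:, ::-1]
      alpha = vecs[:, :2] / np.sqrt(vals[:2])     # normalize: lambda*<a,a> = 1
      components = Kc @ alpha                     # 2D kernel-PCA embedding
      print(components.shape)                     # (100, 2)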

  17. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  18. Design of CT reconstruction kernel specifically for clinical lung imaging

    NASA Astrophysics Data System (ADS)

    Cody, Dianna D.; Hsieh, Jiang; Gladish, Gregory W.

    2005-04-01

    In this study we developed a new reconstruction kernel specifically for chest CT imaging. An experimental flat-panel CT scanner was used on large dogs to produce "ground-truth" reference chest CT images. These dogs were also examined using a clinical 16-slice CT scanner. We concluded from the dog images acquired on the clinical scanner that the loss of subtle lung structures was due mostly to the presence of the background noise texture when using currently available reconstruction kernels. This qualitative evaluation of the dog CT images prompted the design of a new reconstruction kernel, formed by combining a low-pass and a high-pass kernel and called the "Hybrid" kernel. The performance of this Hybrid kernel fell between the two kernels on which it was based, as expected. This Hybrid kernel was also applied to a set of 50 patient data sets; the analysis of these clinical images is underway. We are hopeful that this Hybrid kernel will produce clinical images with an acceptable tradeoff of lung detail, reliable HU, and image noise.

  19. Quality changes in macadamia kernel between harvest and farm-gate.

    PubMed

    Walton, David A; Wallace, Helen M

    2011-02-01

    Macadamia integrifolia, Macadamia tetraphylla and their hybrids are cultivated for their edible kernels. After harvest, nuts-in-shell are partially dried on-farm and sorted to eliminate poor-quality kernels before consignment to a processor. During these operations, kernel quality may be lost. In this study, macadamia nuts-in-shell were sampled at five points of an on-farm postharvest handling chain from dehusking to the final storage silo to assess quality loss prior to consignment. Shoulder damage, weight of pieces and unsound kernel were assessed for raw kernels, and colour, mottled colour and surface damage for roasted kernels. Shoulder damage, weight of pieces and unsound kernel for raw kernels increased significantly between the dehusker and the final silo. Roasted kernels displayed a significant increase in dark colour, mottled colour and surface damage during on-farm handling. Significant loss of macadamia kernel quality occurred on a commercial farm during sorting and storage of nuts-in-shell before nuts were consigned to a processor. Nuts-in-shell should be dried as quickly as possible and on-farm handling minimised to maintain optimum kernel quality. 2010 Society of Chemical Industry.

  20. A new discriminative kernel from probabilistic models.

    PubMed

    Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert

    2002-10-01

    Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.

  1. Modeling and Control of Large Flexible Structures.

    DTIC Science & Technology

    1984-07-31

    4.5 Spectral factorization using the Hilbert transform; 4.6 Gain computations; 4.7 Software development and control system performance; Part...in the Hilbert space L2(S) with the natural inner product <·,·>. In many cases A0 has a discrete spectrum with associated eigenfunctions which...(Davis and Barry 1977), (Greenberg, MacCamy and Mizel 1968). The natural boundary conditions for (17) are in terms of s(z,t) at s = 0 and 1

  2. Multirate Integration Properties of Waveform Relaxation with Applications to Circuit Simulation and Parallel Computation

    DTIC Science & Technology

    1985-11-18

    Greenberg and K. Sakallah at Digital Equipment Corporation, and C-F. Chen, L. Nagel, and P. Subrahmanyam at AT&T Bell Laboratories, both for providing...Circuit Theory, McGraw-Hill, 1969. [37] R. Courant and D. Hilbert, Partial Differential Equations, Vol. 2 of Methods of Mathematical Physics, McGraw-Hill, N.Y., 1965. ... [44] R. Courant and D. Hilbert, Partial Differential Equations, Vol. 2 of Methods of Mathematical Physics

  3. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased the overall kernel error to <10^-11 and the invasion time error to <5%.
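
    A minimal 1D sketch of the two discretization methods compared here; with a separable Gaussian, the cell-integration variant reduces to a difference of CDFs at the cell edges (sigma and the grid are illustrative):

      import numpy as np
      from scipy.stats import norm

      sigma = 0.5                      # kernel width in units of grid cells
      cells = np.arange(-10, 11)       # cell indices along one axis

      # Cell-center method: sample the density at each cell midpoint.
      center = norm.pdf(cells, scale=sigma)
      center /= center.sum()

      # Cell-integration method: integrate the density over each cell.
      integrated = norm.cdf(cells + 0.5, scale=sigma) - \
                   norm.cdf(cells - 0.5, scale=sigma)
      integrated /= integrated.sum()

      # For small sigma the center method misrepresents the mass near
      # the origin, which is the error regime reported above.
      print(center[10], integrated[10])   # probability in the source cell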

  4. Einstein Meets Hilbert: At the Crossroads of Physics and Mathematics

    NASA Astrophysics Data System (ADS)

    Rowe, David E.

    One of the most famous episodes in the early history of general relativity involves the ``race'' in November 1915 between Albert Einstein and David Hilbert to uncover the ``correct'' form for the ten gravitational field equations. In light of recent archival findings, however, this story now has become a topic of renewed interest and controversy among historians of physics and mathematics. Drawing on recent studies and newly found sources, the present essay takes up this familiar tale from a new perspective, one that has seldom received due attention in the standard literature, namely, the mathematical issues at the heart of Einstein's theory. Told from this angle, the leading actors are Einstein's collaborator Marcel Grossmann, his critic Tullio Levi-Civita, his competitor David Hilbert, and several other mathematicians, many of them connected with Hilbert's Göttingen colleagues such as Hermann Weyl, Felix Klein, and Emmy Noether. As Einstein was the first to admit, Göttingen was far more important than Berlin as an active center for research in general relativity. Any account which, like this one, tries to understand both the actions and motives of the leading players must confront the problem of interpreting the rather sparse documentary evidence available. The interpretation offered herein, whatever its merits, aims first and foremost to show how mathematical issues deeply permeated the early history of general relativity.

  5. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) represent a group of secondary metabolic products in plants. AQS occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by an enzymatic browning reaction in Chinese chestnut kernels. To find out whether a non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained a high amount of AQS. Thus, we confirmed that AQS can be produced during both enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.

  6. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    PubMed

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear as to how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work intended to study the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels forming treatments of 0, 40, 150, 350 or 1000 g kg(-1) broken kernels ratio. Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became increasingly softer (P < 0.05) but the unbroken kernels became significantly harder. Moisture content and moisture uptake rate were positively correlated, and cooked rice hardness was negatively correlated to the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  7. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each one involving a combination of several elementary or intermediate kernels, and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gains over several shallow kernels on the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.

  8. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Study on Energy Productivity Ratio (EPR) at palm kernel oil processing factory: case study on PT-X at Sumatera Utara Plantation

    NASA Astrophysics Data System (ADS)

    Haryanto, B.; Bukit, R. Br; Situmeang, E. M.; Christina, E. P.; Pandiangan, F.

    2018-02-01

    The purpose of this study was to determine the performance, productivity and feasibility of operating a palm kernel processing plant based on the Energy Productivity Ratio (EPR). EPR is expressed as the ratio of output energy, including by-products, to input energy. A palm kernel plant processes palm kernels into palm kernel oil. The procedure started from collecting the data needed as energy input, such as palm kernel prices, energy demand and depreciation of the factory. The energy output and its by-products comprise the whole production value, such as the palm kernel oil price and the prices of the remaining products such as shells and pulp. The energy equivalence of palm kernel oil was calculated to analyze the value of the Energy Productivity Ratio (EPR) based on processing capacity per year. The investigation was carried out at the kernel oil processing plant PT-X at a Sumatera Utara plantation. The value of EPR was 1.54 (EPR > 1), which indicates that processing palm kernels into palm kernel oil is feasible in terms of energy productivity.

  10. Coarse graining of entanglement classes in 2 ×m ×n systems

    NASA Astrophysics Data System (ADS)

    Hebenstreit, M.; Gachechiladze, M.; Gühne, O.; Kraus, B.

    2018-03-01

    We consider three-partite pure states in the Hilbert space C2⊗Cm⊗Cn and investigate to which states a given state can be locally transformed with a nonvanishing probability. Whenever the initial and final states are elements of the same Hilbert space, the problem can be solved via the characterization of the entanglement classes which are determined via stochastic local operations and classical communication (SLOCC). In the particular case considered here, the matrix pencil theory can be utilized to address this point. In general, there are infinitely many SLOCC classes. However, when considering transformations from higher to lower dimensional Hilbert spaces, an additional hierarchy among the classes can be found. This hierarchy of SLOCC classes coarse grains SLOCC classes which can be reached from a common resource state of higher dimension. We first show that a generic set of states in C2⊗Cm⊗Cn for n = m is the union of infinitely many SLOCC classes, which can be parameterized by m − 3 parameters. However, for n ≠ m there exists a single SLOCC class which is generic. Using this result, we then show that there is a full-measure set of states in C2⊗Cm⊗Cn such that any state within this set can be transformed locally to a full-measure set of states in any lower dimensional Hilbert space. We also investigate resource states, which can be transformed to any state (not excluding any zero-measure set) in the smaller dimensional Hilbert space. We explicitly derive a state in C2⊗Cm⊗C(2m−2) which is the optimal common resource of all states in C2⊗Cm⊗Cm. We also show that for any n < 2m it is impossible to reach all states in C2⊗Cm⊗Cñ whenever ñ > m.

  11. Improving 3d Spatial Queries Search: Newfangled Technique of Space Filling Curves in 3d City Modeling

    NASA Astrophysics Data System (ADS)

    Uznir, U.; Anton, F.; Suhaibah, A.; Rahman, A. A.; Mioc, D.

    2013-09-01

    The advantages of three dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using web standards. However, these 3D city models consume much more storage compared to two dimensional (2D) spatial data, as they involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and its corresponding spatial data access method, retrieving portions of, and especially searching, these 3D city models will not be done optimally. Even though current developments are based on an open data model allotted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects. In this research, we propose a data organization technique for 3D city model data representation based on space-filling curves (3D Hilbert curves). Unlike previous methods, which try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. Implementing space-filling curves in 3D city modeling improves data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert curve, preserves the Lebesgue measure and is Lipschitz continuous. Depending on the application, several alternatives are possible for clustering spatial data together in the third dimension compared to its clustering in 2D.
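
    The indexing idea is to map each 3D cell coordinate to a one-dimensional key along a space-filling curve and then sort or index objects by that key. Since a correct 3D Hilbert encoder is somewhat long, the sketch below substitutes Morton (Z-order) bit interleaving, a simpler space-filling curve with weaker locality than Hilbert but the same clustering principle; the building coordinates are hypothetical:

      def morton3d(x, y, z, bits=10):
          """Interleave the bits of (x, y, z) into a single Z-order key."""
          key = 0
          for i in range(bits):
              key |= ((x >> i) & 1) << (3 * i)
              key |= ((y >> i) & 1) << (3 * i + 1)
              key |= ((z >> i) & 1) << (3 * i + 2)
          return key

      # Hypothetical building blocks given as grid coordinates.
      buildings = {"b1": (5, 9, 2), "b2": (5, 8, 2), "b3": (40, 3, 17)}

      # Sorting by curve key clusters spatially adjacent objects, so
      # range scans over the key touch few disk pages.
      for name in sorted(buildings, key=lambda n: morton3d(*buildings[n])):
          print(name, morton3d(*buildings[name]))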

  12. Differentiable representations of finite dimensional Lie groups in rigged Hilbert spaces

    NASA Astrophysics Data System (ADS)

    Wickramasekara, Sujeewa

    The inceptive motivation for introducing rigged Hilbert spaces (RHS) in quantum physics in the mid-1960s was to provide the already well established Dirac formalism with a proper mathematical context. It has since become clear, however, that this mathematical framework is lissome enough to accommodate a class of solutions to the dynamical equations of quantum physics that includes some which are not possible in the normative Hilbert space theory. Among the additional solutions, in particular, are those which describe aspects of scattering and decay phenomena that have eluded orthodox quantum physics. In this light, the RHS formulation seems to provide a mathematical rubric under which various phenomenological observations and calculational techniques, commonly known in the study of resonance scattering and decay as ``effective theories'' (e.g., the Wigner-Weisskopf method), receive a unified theoretical foundation. These observations lead to the inference that a theory founded upon the RHS mathematics may prove to be of better utility and value in understanding quantum physical phenomena. This dissertation primarily aims to contribute to the general formalism of the RHS theory of quantum mechanics by undertaking a study of differentiable representations of finite dimensional Lie groups. In particular, it is shown that a finite dimensional operator Lie algebra G in a rigged Hilbert space can always be integrated, provided one-parameter integrability holds true for the elements of any basis for G. This result differs from and extends the well known integration theorem of E. Nelson and the subsequent works of others on unitary representations in that it does not require any assumptions on the existence of analytic vectors. Also presented here is a construction of a particular rigged Hilbert space of Hardy class functions that appears useful in formulating a relativistic version of the RHS theory of resonances and decay. As a contexture for the construction, a synopsis of the new relativistic theory is presented.

  13. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    PubMed Central

    2013-01-01

    Background: Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results: We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions: It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
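
    For discrete inputs such as SNP genotypes, the diffusion kernel of Kondor and Lafferty has a closed form: per locus, the heat kernel on the complete graph over the possible genotype states, multiplied across loci. A minimal numpy sketch under that construction (beta, the number of states and the toy genotype matrix are illustrative):

      import numpy as np

      def diffusion_kernel(G1, G2, beta=0.5, states=3):
          """Diffusion (heat) kernel on the complete graph over genotype
          states, taken as a product over loci. G1, G2: (n, p) and (m, p)
          integer genotype matrices with entries in {0, ..., states-1}."""
          a = states
          same = (1.0 / a) + (1.0 - 1.0 / a) * np.exp(-a * beta)  # x_l == y_l
          diff = (1.0 / a) * (1.0 - np.exp(-a * beta))            # x_l != y_l
          # matches[i, j] = number of loci where the genotypes agree
          matches = (G1[:, None, :] == G2[None, :, :]).sum(axis=2)
          p = G1.shape[1]
          return same ** matches * diff ** (p - matches)

      rng = np.random.default_rng(1)
      G = rng.integers(0, 3, size=(6, 50))     # 6 individuals, 50 SNPs
      K = diffusion_kernel(G, G)
      print(K.shape, np.allclose(K, K.T))      # (6, 6) True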

  14. Separability and Entanglement in the Hilbert Space Reference Frames Related Through the Generic Unitary Transform for Four Level System

    NASA Astrophysics Data System (ADS)

    Man'ko, V. I.; Markovich, L. A.

    2018-02-01

    Quantum correlations in the state of a four-level atom are investigated by using generic unitary transforms of the classical (diagonal) density matrix. The partial cases of pure states, X-states and Werner states are studied in detail. The geometrical meaning of unitary Hilbert reference-frame rotations generating entanglement in the initially separable state is discussed. Characteristics of the entanglement, in terms of concurrence, entropy and negativity, are obtained as functions of the unitary matrix rotating the reference frame.

  15. The Ostrovsky-Vakhnenko equation by a Riemann-Hilbert approach

    NASA Astrophysics Data System (ADS)

    Boutet de Monvel, Anne; Shepelsky, Dmitry

    2015-01-01

    We present an inverse scattering transform (IST) approach for the (differentiated) Ostrovsky-Vakhnenko equation. This equation can also be viewed as the short-wave model for the Degasperis-Procesi (sDP) equation. Our IST approach is based on an associated Riemann-Hilbert problem, which allows us to give a representation for the classical (smooth) solution, to obtain the principal term of its long-time asymptotics, and also to describe loop soliton solutions. Dedicated to Johannes Sjöstrand with gratitude and admiration.

  16. Global Bifurcation of Periodic Solutions with Symmetry,

    DTIC Science & Technology

    1987-07-01

    family of sectorial operators on a real Hilbert space X (2.32.a), with dense domain D(A(λ)) which is independent of λ ∈ ...[Van], theorem 2.5.9]. If E and E' are both Hilbert spaces with orthogonal action of Γ, we may drop the assumption that Γ is compact. Just take...some meandering. Let us define a limit for any sequence Si of subsets of some metric space. Following Whyburn [Why], we define lim sup Si = {z: z

  17. Weak Solution Classes for Parabolic Integro-Differential Equations

    DTIC Science & Technology

    1982-09-01

    different existence argument for solutions of (I). It is partly based on a method that was used in [2] and [6] to treat a Hilbert-space version of (I) and...Differential Equations 35 (1980), 200-231. [2] V. Barbu, Integro-Differential Equations in Hilbert Spaces, Ann. St. Univ. "Al. I. Cuza" 19 (1973...Greenberg: On the Existence, Uniqueness, and Stability of Solutions of the Equation ρ0Xtt = E(Xx)Xxx + λXxxt, J. Math. Anal. Appl. 25 (1969), 575-591. [13]...

  18. Effect of Hilbert space truncation on Anderson localization

    NASA Astrophysics Data System (ADS)

    Krishna, Akshay; Bhatt, R. N.

    2018-05-01

    The 1D Anderson model possesses a completely localized spectrum of eigenstates for all values of the disorder. We consider the effect of projecting the Hamiltonian to a truncated Hilbert space, destroying time-reversal symmetry. We analyze the ensuing eigenstates using different measures such as inverse participation ratio and sample-averaged moments of the position operator. In addition, we examine amplitude fluctuations in detail to detect the possibility of multifractal behavior (characteristic of mobility edges) that may arise as a result of the truncation procedure.

  19. A Lower Bound for the Norm of the Solution of a Nonlinear Volterra Equation in One-Dimensional Viscoelasticity.

    DTIC Science & Technology

    1980-12-09

    Symp. on Non-well-posed Problems and Logarithmic Convexity (Lecture Notes in Math. #316), pp. 31-54, Springer, 1973. 3. Greenberg, J.M., MacCamy, R.C...."Continuous Data Dependence for an Abstract Volterra Integro-Differential Equation in Hilbert Space with Applications to Viscoelasticity", Annali Scuola..."Hilbert Space", to appear in the J. Applicable Analysis. 8. Slemrod, M., "Instability of Steady Shearing Flows in a Nonlinear Viscoelastic Fluid", Arch

  20. New gravitational solutions via a Riemann-Hilbert approach

    NASA Astrophysics Data System (ADS)

    Cardoso, G. L.; Serra, J. C.

    2018-03-01

    We consider the Riemann-Hilbert factorization approach to solving the field equations of dimensionally reduced gravity theories. First we prove that functions belonging to a certain class possess a canonical factorization due to properties of the underlying spectral curve. Then we use this result, together with appropriate matricial decompositions, to study the canonical factorization of non-meromorphic monodromy matrices that describe deformations of seed monodromy matrices associated with known solutions. This results in new solutions, with unusual features, to the field equations.

  1. Hilbert-Schmidt Measure of Pairwise Quantum Discord for Three-Qubit X States

    NASA Astrophysics Data System (ADS)

    Daoud, M.; Laamara, R. Ahl; Seddik, S.

    2015-10-01

    The Hilbert-Schmidt distance between a mixed three-qubit state and its closest state is used to quantify the amount of pairwise quantum correlations in a tripartite system. Analytical expressions of the geometric quantum discord are derived. Particular attention is devoted to two special classes of three-qubit X states, which include three-qubit states of W, GHZ and Bell type. We also discuss the monogamy property of the geometric quantum discord in some mixed three-qubit systems.
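
    The underlying quantity is the squared Hilbert-Schmidt distance D²(ρ,σ) = Tr[(ρ−σ)†(ρ−σ)]; the geometric discord then minimizes it over the closest classical states, a step omitted in this minimal sketch of the distance itself for two random two-qubit density matrices:

      import numpy as np

      def random_density_matrix(dim, rng):
          a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
          rho = a @ a.conj().T                 # positive semi-definite
          return rho / np.trace(rho)           # unit trace

      def hilbert_schmidt_distance_sq(rho, sigma):
          d = rho - sigma
          return np.real(np.trace(d.conj().T @ d))

      rng = np.random.default_rng(2)
      rho = random_density_matrix(4, rng)      # two qubits: dim = 4
      sigma = random_density_matrix(4, rng)
      print(hilbert_schmidt_distance_sq(rho, sigma))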

  2. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  3. An SVM model with hybrid kernels for hydrological time series

    NASA Astrophysics Data System (ADS)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecasting of climate/weather and its impact on other environmental variables such as the hydrologic response to climate/weather. When using an SVM, the choice of the kernel function plays a key role. Conventional SVM models mostly use a single type of kernel function, e.g., the radial basis kernel function. Given that several featured kernel functions are available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to the SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of a radial basis kernel and a polynomial kernel for the forecasting of monthly flowrate at two gaging stations using the SVM approach. The results indicate a significant improvement in the accuracy of the predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such a hybrid kernel approach for SVM applications.
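
    A convex sum of valid kernels is itself a valid kernel, which is what makes such hybrids safe to use inside an SVM. A minimal sketch with scikit-learn, whose SVR accepts a callable kernel (the weight, kernel parameters and toy data are placeholders, not the paper's settings):

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

      def hybrid_kernel(X, Y, w=0.7, gamma=0.1, degree=2):
          """Convex combination of an RBF and a polynomial kernel."""
          return w * rbf_kernel(X, Y, gamma=gamma) + \
                 (1 - w) * polynomial_kernel(X, Y, degree=degree)

      rng = np.random.default_rng(3)
      X = rng.normal(size=(120, 4))            # e.g. lagged monthly predictors
      y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=120)  # toy flowrate target

      model = SVR(kernel=hybrid_kernel, C=10.0).fit(X[:100], y[:100])
      print(model.predict(X[100:]).shape)      # forecasts for held-out months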

  4. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation model works for kernel competitive learning, and furthermore we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Multiple kernels learning-based biological entity relationship extraction method.

    PubMed

    Dongliang, Xu; Jingchang, Pan; Bailing, Wang

    2017-09-20

    Automatically extracting protein entity interaction information from the biomedical literature can help to build protein relation networks and design new drugs. More than 20 million literature abstracts are included in MEDLINE, the most authoritative textual database in the field of biomedicine, and their number grows exponentially over time. This frantic expansion of the biomedical literature can be difficult to absorb or analyze manually, so efficient and automated search engines are necessary to explore the biomedical literature using text mining techniques. The P, R, and F values of the tag graph method on the AIMed corpus are 50.82, 69.76, and 58.61%, respectively. The P, R, and F values of the tag graph kernel method on the other four evaluation corpora are 2-5% higher than those of the all-paths graph kernel. The P, R, and F values of the two methods fusing the feature kernel with the tag graph kernel are 53.43, 71.62, and 61.30% and 55.47, 70.29, and 60.37%, respectively, indicating that the performance of the two kernel fusion methods is better than that of a single kernel. In comparison with the all-paths graph kernel method, the tag graph kernel method is superior in terms of overall performance. Experiments show that the performance of the multi-kernels method is better than that of the three separate single-kernel methods and the dual-mutually fused kernel methods used herein on five corpus sets.

  6. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...

  7. 7 CFR 810.206 - Grades and grade requirements for barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... weight per bushel (pounds) Sound barley (percent) Maximum Limits of— Damaged kernels 1 (percent) Heat damaged kernels (percent) Foreign material (percent) Broken kernels (percent) Thin barley (percent) U.S... or otherwise of distinctly low quality. 1 Includes heat-damaged kernels. Injured-by-frost kernels and...

  8. Implementing the Deutsch-Jozsa algorithm with macroscopic ensembles

    NASA Astrophysics Data System (ADS)

    Semenenko, Henry; Byrnes, Tim

    2016-05-01

    Quantum computing implementations under consideration today typically deal with systems with microscopic degrees of freedom such as photons, ions, cold atoms, and superconducting circuits. The quantum information is stored typically in low-dimensional Hilbert spaces such as qubits, as quantum effects are strongest in such systems. It has, however, been demonstrated that quantum effects can be observed in mesoscopic and macroscopic systems, such as nanomechanical systems and gas ensembles. While few-qubit quantum information demonstrations have been performed with such macroscopic systems, a quantum algorithm showing exponential speedup over classical algorithms is yet to be shown. Here, we show that the Deutsch-Jozsa algorithm can be implemented with macroscopic ensembles. The encoding that we use avoids the detrimental effects of decoherence that normally plagues macroscopic implementations. We discuss two mapping procedures which can be chosen depending upon the constraints of the oracle and the experiment. Both methods have an exponential speedup over the classical case, and only require control of the ensembles at the level of the total spin of the ensembles. It is shown that both approaches reproduce the qubit Deutsch-Jozsa algorithm, and are robust under decoherence.
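
    For reference, the qubit-level algorithm that the ensemble encoding reproduces can be simulated classically in a few lines; a minimal sketch in the phase-oracle form (the ensemble mapping itself is not reproduced here):

      import numpy as np

      n = 3                                    # number of query qubits
      N = 2 ** n

      def deutsch_jozsa(f):
          """Return True if f is judged constant. In the phase-oracle
          form, the all-zeros amplitude after H^n . O_f . H^n has
          magnitude 1 iff f is constant, and 0 iff f is balanced."""
          psi = np.full(N, 1.0 / np.sqrt(N))   # H^n applied to |0...0>
          psi *= np.array([(-1.0) ** f(x) for x in range(N)])  # oracle
          amp0 = psi.sum() / np.sqrt(N)        # <0...0| H^n |psi>
          return np.isclose(abs(amp0), 1.0)

      constant = lambda x: 1
      balanced = lambda x: bin(x).count("1") % 2   # parity is balanced
      print(deutsch_jozsa(constant), deutsch_jozsa(balanced))  # True False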

  9. Machine Learning of Accurate Energy-Conserving Molecular Force Fields

    NASA Astrophysics Data System (ADS)

    Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel; Poltavsky, Igor; Schütt, Kristof; Müller, Klaus-Robert; GDML Collaboration

    Efficient and accurate access to the Born-Oppenheimer potential energy surface (PES) is essential for long time scale molecular dynamics (MD) simulations. Using conservation of energy - a fundamental property of closed classical and quantum mechanical systems - we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio MD (AIMD) trajectories. The GDML implementation is able to reproduce global potential-energy surfaces of intermediate-size molecules with an accuracy of 0.3 kcal/mol for energies and 1 kcal/mol/Å for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules including benzene, toluene, naphthalene, malonaldehyde, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative MD simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods.

  10. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  11. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  12. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will not...

  13. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  14. The Classification of Diabetes Mellitus Using Kernel k-means

    NASA Astrophysics Data System (ADS)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes Mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm: it uses kernel learning, which can handle data that are not linearly separable, and this is where it differs from common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means performs well, considerably better than SOM.
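
    A minimal numpy sketch of the kernel k-means update, using the standard expansion of squared distances in feature space, ||φ(x_i) − μ_c||² = K_ii − (2/|C|) Σ_{j∈C} K_ij + (1/|C|²) Σ_{j,l∈C} K_jl; the two-blob data and the RBF kernel are illustrative, not the diabetes dataset:

      import numpy as np

      def kernel_kmeans(K, k, iters=50, seed=0):
          """Cluster with a precomputed kernel matrix K (n x n)."""
          n = K.shape[0]
          rng = np.random.default_rng(seed)
          labels = rng.integers(0, k, size=n)
          for _ in range(iters):
              dist = np.zeros((n, k))
              for c in range(k):
                  mask = labels == c
                  if not mask.any():
                      dist[:, c] = np.inf    # never assign to empty cluster
                      continue
                  m = mask.sum()
                  dist[:, c] = (np.diag(K)
                                - 2.0 * K[:, mask].sum(axis=1) / m
                                + K[np.ix_(mask, mask)].sum() / m ** 2)
              new_labels = dist.argmin(axis=1)
              if np.array_equal(new_labels, labels):
                  break
              labels = new_labels
          return labels

      rng = np.random.default_rng(4)
      X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(5, 1, (30, 2))])
      d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
      K = np.exp(-0.5 * d2)                     # RBF kernel matrix
      print(np.bincount(kernel_kmeans(K, 2)))   # roughly [30, 30]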

  15. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    An understanding of UNICOS kernel internals is valuable. However, having the knowledge is only half the value; the second half comes with knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that utilize kernel information. In addition, algorithms, logic, and code are discussed for accessing kernel information. Code segments are provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information are also discussed.

  16. Detection of maize kernels breakage rate based on K-means clustering

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Wang, Zhuo; Gao, Lei; Bai, Xiaoping

    2017-04-01

    In order to optimize the recognition accuracy and improve the efficiency of maize kernel breakage detection, this paper applies computer vision technology to detect maize kernel breakage based on the K-means clustering algorithm. First, the collected RGB images are converted into Lab images, and the clarity of the original images is evaluated with an energy function based on the Sobel 8-direction gradient. Finally, maize kernel breakage is detected using different pixel acquisition equipment and different shooting angles. In this paper, broken maize kernels are identified by the color difference between intact kernels and broken kernels. The clarity evaluation and the varied shooting angles verify that image clarity and shooting angle have a direct influence on feature extraction. The results show that the K-means clustering algorithm can distinguish broken maize kernels effectively.
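
    A hypothetical sketch of the color-based pipeline's first steps (the library calls are real, but the image, cluster count, and everything else below are placeholders, not the paper's settings): convert RGB to Lab and cluster pixels by color so lighter broken regions separate from the darker intact pericarp.

        import numpy as np
        from skimage import color
        from sklearn.cluster import KMeans

        rgb = np.random.rand(64, 64, 3)            # stand-in for a captured kernel image
        lab = color.rgb2lab(rgb)                   # RGB -> Lab conversion, as in the paper
        pixels = lab.reshape(-1, 3)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
        mask = labels.reshape(64, 64)              # candidate broken-kernel regions by color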

  17. Modeling adaptive kernels from probabilistic phylogenetic trees.

    PubMed

    Nicotra, Luca; Micheli, Alessio

    2009-01-01

    Modeling phylogenetic interactions is an open issue in many computational biology problems. In the context of gene function prediction we introduce a class of kernels for structured data leveraging a hierarchical probabilistic modeling of phylogeny among species. We derive three kernels belonging to this setting: a sufficient statistics kernel, a Fisher kernel, and a probability product kernel. The new kernels are used in the context of support vector machine learning. The kernels' adaptivity is obtained through the estimation of the parameters of a tree-structured model of evolution, using as observed data phylogenetic profiles encoding the presence or absence of specific genes in a set of fully sequenced genomes. We report results obtained in the prediction of the functional class of the proteins of the budding yeast Saccharomyces cerevisiae, which compare favorably to a standard vector-based kernel and to a non-adaptive tree kernel function. A further comparative analysis is performed in order to assess the impact of the different components of the proposed approach. We show that the key features of the proposed kernels are the adaptivity to the input domain and the ability to deal with structured data interpreted through a graphical model representation.

  18. Aflatoxin and nutrient contents of peanut collected from local market and their processed foods

    NASA Astrophysics Data System (ADS)

    Ginting, E.; Rahmianna, A. A.; Yusnawan, E.

    2018-01-01

    Peanut is susceptible to aflatoxin contamination, and the source of the peanuts as well as the processing method considerably affect the aflatoxin content of the products. Therefore, a study on the aflatoxin and nutrient contents of peanuts collected from a local market and their processed foods was performed. Good kernels were prepared into fried peanut, pressed-fried peanut, peanut sauce, peanut press cake, fermented peanut press cake (tempe) and fried tempe, while blended kernels (good and poor kernels) were processed into peanut sauce and tempe, and poor kernels were only processed into tempe. The results showed that good and blended kernels, which had a high number of sound/intact kernels (82.46% and 62.09%), contained 9.8-9.9 ppb of aflatoxin B1, while a slightly higher level was seen in poor kernels (12.1 ppb). However, the moisture, ash, protein, and fat contents of the kernels were similar, as were those of the products. Peanut tempe and fried tempe showed the highest increase in protein content, while decreased fat contents were seen in all products. Aflatoxin B1 in peanut tempe increased most when prepared from poor kernels, followed by blended kernels and good kernels; however, it decreased by an average of 61.2% after deep-frying. Excluding peanut tempe and fried tempe, aflatoxin B1 levels in all products derived from good kernels were below the permitted level (15 ppb). This suggests that sorting peanut kernels before use as ingredients, followed by heat processing, would decrease the aflatoxin content of the products.

  19. Partial Deconvolution with Inaccurate Blur Kernel.

    PubMed

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernels. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel. Partial deconvolution is then applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernels, and can achieve favorable deblurring quality on synthetic and real blurry images.
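
    The flavor of the approach can be sketched with an ordinary Wiener deconvolution gated by a reliability mask over Fourier entries (an illustration only; the paper's partial map construction and E-M estimation are not reproduced here, and the threshold below is made up).

        import numpy as np

        def partial_wiener(blurred, kernel, reliable, nsr=1e-2):
            # reliable: boolean mask over Fourier entries; unreliable entries keep the
            # blurred spectrum instead of being amplified by the inverse filter.
            Kf = np.fft.fft2(kernel, s=blurred.shape)
            Bf = np.fft.fft2(blurred)
            Wf = np.conj(Kf) / (np.abs(Kf)**2 + nsr)          # standard Wiener filter
            return np.real(np.fft.ifft2(np.where(reliable, Wf * Bf, Bf)))

        img = np.random.rand(64, 64)                          # stand-in sharp image
        psf = np.outer(np.hanning(7), np.hanning(7)); psf /= psf.sum()
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
        mask = np.abs(np.fft.fft2(psf, s=img.shape)) > 0.05   # crude reliability map (assumed)
        restored = partial_wiener(blurred, psf, mask)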

  20. Full-field optical coherence tomography image restoration based on Hilbert transformation

    NASA Astrophysics Data System (ADS)

    Na, Jihoon; Choi, Woo June; Choi, Eun Seo; Ryu, Seon Young; Lee, Byeong Ha

    2007-02-01

    We propose an envelope detection method based on the Hilbert transform for image restoration in full-field optical coherence tomography (FF-OCT). The FF-OCT system, presenting a high axial resolution of 0.9 μm, was implemented with a Köhler illuminator based on a Linnik interferometer configuration. A 250 W customized quartz tungsten halogen lamp was used as a broadband light source and a CCD camera was used as a 2-dimensional detector array. The proposed image restoration method for FF-OCT requires only a single phase shift. By using both the original and the phase-shifted images, we could remove the offset and the background signals from the interference fringe images. The desired coherent envelope image was obtained by applying the Hilbert transform. With the proposed image restoration method, we demonstrate the en-face imaging performance of the implemented FF-OCT system by presenting a tilted mirror surface, an integrated circuit chip, and a piece of onion epithelium.
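
    The core restoration step, stripped to one dimension (a generic sketch of Hilbert-transform envelope detection, not the authors' code; the synthetic fringe is an assumption): the magnitude of the analytic signal recovers the coherence envelope from the interference fringes.

        import numpy as np
        from scipy.signal import hilbert

        x = np.linspace(0, 1, 2000)
        fringe = np.exp(-((x - 0.5) / 0.05)**2) * np.cos(2 * np.pi * 80 * x)  # synthetic interferogram
        bg_removed = fringe - fringe.mean()        # offset/background subtraction
        envelope = np.abs(hilbert(bg_removed))     # analytic signal gives the coherence envelope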

  1. Exact Fan-Beam Reconstruction With Arbitrary Object Translations and Truncated Projections

    NASA Astrophysics Data System (ADS)

    Hoskovec, Jan; Clackdoyle, Rolf; Desbat, Laurent; Rit, Simon

    2016-06-01

    This article proposes a new method for reconstructing two-dimensional (2D) computed tomography (CT) images from truncated and motion-contaminated sinograms. The type of motion considered here is a sequence of rigid translations which are assumed to be known. The algorithm first checks, at each 2D point of the CT image, whether the angular coverage is sufficient to calculate the Hilbert transform from the local “virtual” trajectory which accounts for the motion and the truncation. By taking advantage of data redundancy in the full circular scan, our method expands the reconstructible region beyond the one obtained with chord-based methods. The proposed direct reconstruction algorithm is based on Differentiated Back-Projection with Hilbert filtering (DBP-H). The motion is taken into account during backprojection, which is the first step of our direct reconstruction, before taking the derivatives and inverting the finite Hilbert transform. The algorithm has been tested in a proof-of-concept study on Shepp-Logan phantom simulations with several motion cases and detector sizes.

  2. Bulk entanglement gravity without a boundary: Towards finding Einstein's equation in Hilbert space

    NASA Astrophysics Data System (ADS)

    Cao, ChunJun; Carroll, Sean M.

    2018-04-01

    We consider the emergence from quantum entanglement of spacetime geometry in a bulk region. For certain classes of quantum states in an appropriately factorized Hilbert space, a spatial geometry can be defined by associating areas along codimension-one surfaces with the entanglement entropy between either side. We show how Radon transforms can be used to convert these data into a spatial metric. Under a particular set of assumptions, the time evolution of such a state traces out a four-dimensional spacetime geometry, and we argue using a modified version of Jacobson's "entanglement equilibrium" that the geometry should obey Einstein's equation in the weak-field limit. We also discuss how entanglement equilibrium is related to a generalization of the Ryu-Takayanagi formula in more general settings, and how quantum error correction can help specify the emergence map between the full quantum-gravity Hilbert space and the semiclassical limit of quantum fields propagating on a classical spacetime.

  3. Janus configurations with SL(2, ℤ)-duality twists, strings on mapping tori and a tridiagonal determinant formula

    NASA Astrophysics Data System (ADS)

    Ganor, Ori J.; Moore, Nathan P.; Sun, Hao-Yu; Torres-Chicon, Nesty R.

    2014-07-01

    We develop an equivalence between two Hilbert spaces: (i) the space of states of U(1)^n Chern-Simons theory with a certain class of tridiagonal matrices of coupling constants (with corners) on T^2; and (ii) the space of ground states of strings on an associated mapping torus with T^2 fiber. The equivalence is deduced by studying the space of ground states of SL(2, ℤ)-twisted circle compactifications of U(1) gauge theory, connected with a Janus configuration, and further compactified on T^2. The equality of dimensions of the two Hilbert spaces (i) and (ii) is equivalent to a known identity on determinants of tridiagonal matrices with corners. The equivalence of operator algebras acting on the two Hilbert spaces follows from a relation between the Smith normal form of the Chern-Simons coupling constant matrix and the isometry group of the mapping torus, as well as the torsion part of its first homology group.

  4. Renormalization group scale-setting from the action—a road to modified gravity theories

    NASA Astrophysics Data System (ADS)

    Domazet, Silvije; Štefančić, Hrvoje

    2012-12-01

    The renormalization group (RG) corrected gravitational action in the Einstein-Hilbert and other truncations is considered. The running scale of the RG is treated as a scalar field at the level of the action and determined in a scale-setting procedure recently introduced by Koch and Ramirez for the Einstein-Hilbert truncation. The scale-setting procedure is elaborated for other truncations of the gravitational action and applied to several phenomenologically interesting cases. It is shown how the logarithmic dependence of Newton's coupling on the RG scale leads to an exponentially suppressed effective cosmological constant, and how the scale-setting in particular RG-corrected gravitational theories yields effective f(R) modified gravity theories with negative powers of the Ricci scalar R. The scale-setting at the level of the action at the non-Gaussian fixed point in the Einstein-Hilbert and more general truncations is shown to lead to a universal effective action quadratic in the Ricci tensor.

  5. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  6. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  7. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  8. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  9. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  10. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  11. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  12. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  13. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  14. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  15. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels which are seen as different descriptions of the data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark datasets, including text, image, and sound data, in supervised, unsupervised, and semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
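
    The building block named in the title, trace-ratio maximization max_V tr(V'AV)/tr(V'BV) subject to V'V = I, can be sketched with the standard iterative eigen-solver (a generic illustration; the matrices below are random stand-ins, not the paper's kernel-derived scatter matrices, and B is assumed positive definite).

        import numpy as np

        def trace_ratio(A, B, d, n_iter=50, tol=1e-8):
            # Iterate: given lambda, take the top-d eigenvectors of A - lambda*B,
            # then update lambda to the achieved trace ratio, until it stabilizes.
            lam = 0.0
            for _ in range(n_iter):
                w, U = np.linalg.eigh(A - lam * B)
                V = U[:, -d:]                          # top-d eigenvectors
                new = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
                if abs(new - lam) < tol:
                    break
                lam = new
            return V, lam

        rng = np.random.default_rng(0)
        X = rng.standard_normal((10, 10))
        A = X @ X.T                                    # stand-in "relevance" matrix
        B = np.eye(10) + 0.1 * A                       # stand-in "irrelevance" matrix
        V, lam = trace_ratio(A, B, d=3)
        print(lam)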

  16. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature

    PubMed Central

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from biomedical literature is a widely examined task in biological information extraction. Currently, many kernel based approaches such as linear kernels, tree kernels, graph kernels, and combinations of multiple kernels have achieved promising results in the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree Kernel (DSTK). DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a feature-based kernel and the DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used for evaluating the performance of our system. Experimental results show that our system achieves a better F-score on all five corpora compared to other state-of-the-art systems. PMID:29099838

  17. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

    Breast cancer is one of the leading causes of death for women. It is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM, for its discriminative power in dealing with small sample pattern recognition problems, has attracted a lot of attention. But how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (Hadamard Kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard Kernel outperforms the classical kernels and the correlation kernel in terms of Area under the ROC Curve (AUC) values on a number of real-world data sets adopted to test the performance of the different methods. Hadamard Kernel SVM is effective for breast cancer predictions, either in terms of prognosis or diagnosis. It may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.
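
    Whatever the kernel's exact form (the Hadamard kernel's definition is not reproduced here), plugging a custom kernel into an SVM only requires a precomputed Gram matrix; the linear kernel and random data below are placeholders.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X, y = rng.standard_normal((60, 5)), rng.integers(0, 2, 60)
        K = X @ X.T                                   # placeholder Gram matrix (train x train)
        clf = SVC(kernel="precomputed").fit(K, y)
        K_test = X[:10] @ X.T                         # rows: test samples, cols: training samples
        pred = clf.predict(K_test)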

  18. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature.

    PubMed

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from biomedical literature is a widely examined task in biological information extraction. Currently, many kernel based approaches such as linear kernels, tree kernels, graph kernels, and combinations of multiple kernels have achieved promising results in the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree Kernel (DSTK). DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a feature-based kernel and the DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used for evaluating the performance of our system. Experimental results show that our system achieves a better F-score on all five corpora compared to other state-of-the-art systems.

  19. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    PubMed

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified by the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is symmetric, positive, always yields 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than the local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, the hidden Markov model based SAM and Fisher kernel, and the protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
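
    A sketch of the underlying idea (the code-word extraction below is standard LZW; the intersection-over-union similarity is a placeholder of mine, not the paper's exact kernel, though it shares the 1.0 self-similarity property):

        def lzw_codewords(s):
            # Set of code words an LZW compressor emits while scanning s.
            dictionary = {c for c in s}
            w, words = "", set()
            for c in s:
                if w + c in dictionary:
                    w += c
                else:
                    dictionary.add(w + c)
                    words.add(w)
                    w = c
            if w:
                words.add(w)
            return words

        def lzw_sim(a, b):
            # Placeholder similarity: shared code words, intersection over union.
            A, B = lzw_codewords(a), lzw_codewords(b)
            return len(A & B) / len(A | B)

        print(lzw_sim("MKVVLLAML", "MKVVLAAML"))   # close sequences score high
        print(lzw_sim("MKVVLLAML", "MKVVLLAML"))   # self-similarity is 1.0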

  20. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
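
    The paper's starting observation, that the kernel choice changes the embedding, is easy to reproduce with off-the-shelf kernel PCA (a generic illustration on synthetic data, not the paper's selection method):

        from sklearn.datasets import make_swiss_roll
        from sklearn.decomposition import KernelPCA

        X, _ = make_swiss_roll(n_samples=500, random_state=0)
        for kern in ("linear", "poly", "rbf", "cosine"):
            Y = KernelPCA(n_components=2, kernel=kern).fit_transform(X)
            print(kern, Y.std(axis=0))    # the embeddings differ markedly by kernel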

  1. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.

  2. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    PubMed Central

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868

  3. Combined multi-kernel head computed tomography images optimized for depicting both brain parenchyma and bone.

    PubMed

    Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki

    2014-01-01

    The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enabled diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels of >100 Hounsfield units in both the brain and bone images took the CT values of the bone images, while all other pixels took the CT values of the brain images. Three radiologists compared the improved multi-kernel images with the bone images. The improved multi-kernel images and the brain images were displayed identically on the brain window settings. All three radiologists agreed that the improved multi-kernel images on the bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and fewer images that need to be stored can be expected.
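
    The combination rule as stated in the abstract is a one-liner (the arrays below are random stand-ins for the two reconstructions, not clinical data):

        import numpy as np

        brain = np.random.normal(40, 20, (64, 64))     # low-pass kernel reconstruction (HU)
        bone = np.random.normal(40, 200, (64, 64))     # high-pass kernel reconstruction (HU)
        # Where both reconstructions exceed 100 HU, take the bone-kernel value;
        # everywhere else, keep the brain-kernel value.
        combined = np.where((brain > 100) & (bone > 100), bone, brain)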

  4. Free Energy Contribution Analysis Using Response Kernel Approximation: Insights into the Acylation Reaction of a Beta-Lactamase.

    PubMed

    Asada, Toshio; Ando, Kanta; Bandyopadhyay, Pradipta; Koseki, Shiro

    2016-09-08

    A widely applicable free energy contribution analysis (FECA) method based on the quantum mechanical/molecular mechanical (QM/MM) approximation using response kernel approaches has been proposed to investigate the influences of environmental residues and/or atoms in the QM region on the free energy profile. This method can evaluate atomic contributions to the free energy along the reaction path including polarization effects on the QM region within a dramatically reduced computational time. The rate-limiting step in the deactivation of the β-lactam antibiotic cefalotin (CLS) by β-lactamase was studied using this method. The experimentally observed activation barrier was successfully reproduced by free energy perturbation calculations along the optimized reaction path that involved activation by the carboxylate moiety in CLS. It was found that the free energy profile in the QM region was slightly higher than the isolated energy and that two residues, Lys67 and Lys315, as well as water molecules deeply influenced the QM atoms associated with the bond alternation reaction in the acyl-enzyme intermediate. These facts suggested that the surrounding residues are favorable for the reactant complex and prevent the intermediate from being too stabilized to proceed to the following deacylation reaction. We have demonstrated that the free energy contribution analysis should be a useful method to investigate enzyme catalysis and to facilitate intelligent molecular design.

  5. Adhesion and volume constraints via nonlocal interactions determine cell organisation and migration profiles.

    PubMed

    Carrillo, José Antonio; Colombi, Annachiara; Scianna, Marco

    2018-05-14

    The description of the cell spatial pattern and characteristic distances is fundamental in a wide range of physio-pathological biological phenomena, from morphogenesis to cancer growth. Discrete particle models are widely used in this field, since they are focused on the cell level of abstraction and are able to preserve the identity of single individuals while reproducing their behavior. In particular, a fundamental role in determining the usefulness and the realism of a particle mathematical approach is played by the choice of the intercellular pairwise interaction kernel and by the estimate of its parameters. The aim of the paper is to demonstrate how the concept of H-stability, deriving from statistical mechanics, can have important implications in this respect. For any given interaction kernel, it in fact allows one to predict a priori the regions of the free parameter space that result in stable configurations of the system characterized by a finite and strictly positive minimal interparticle distance, which is fundamental when dealing with biological phenomena. The proposed analytical arguments are indeed able to restrict the range of possible variations of selected model coefficients, whose exact estimate however requires further investigations (e.g., fitting with empirical data), as illustrated in this paper by a series of representative simulations dealing with cell colony reorganization, sorting phenomena, and zebrafish embryonic development. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Organ-specific SPECT activity calibration using 3D printed phantoms for molecular radiotherapy dosimetry.

    PubMed

    Robinson, Andrew P; Tipping, Jill; Cullen, David M; Hamilton, David; Brown, Richard; Flynn, Alex; Oldfield, Christopher; Page, Emma; Price, Emlyn; Smith, Andrew; Snee, Richard

    2016-12-01

    Patient-specific absorbed dose calculations for molecular radiotherapy require accurate activity quantification. This is commonly derived from Single-Photon Emission Computed Tomography (SPECT) imaging using a calibration factor relating detected counts to known activity in a phantom insert. A series of phantom inserts, based on the mathematical models underlying many clinical dosimetry calculations, have been produced using 3D printing techniques. SPECT/CT data for the phantom inserts has been used to calculate new organ-specific calibration factors for (99m)Tc and (177)Lu. The measured calibration factors are compared to predicted values from calculations using a Gaussian kernel. Measured SPECT calibration factors for 3D printed organs display a clear dependence on organ shape for (99m)Tc and (177)Lu. The observed variation in calibration factor is reproduced using a Gaussian kernel-based calculation over two orders of magnitude change in insert volume for (99m)Tc and (177)Lu. These new organ-specific calibration factors show a 24, 11 and 8% reduction in absorbed dose for the liver, spleen and kidneys, respectively. Non-spherical calibration factors from 3D printed phantom inserts can significantly improve the accuracy of whole organ activity quantification for molecular radiotherapy, providing a crucial step towards individualised activity quantification and patient-specific dosimetry. 3D printed inserts are found to provide a cost-effective and efficient way for clinical centres to access more realistic phantom data.

  7. Numerical modelling of a peripheral arterial stenosis using dimensionally reduced models and kernel methods.

    PubMed

    Köppl, Tobias; Santin, Gabriele; Haasdonk, Bernard; Helmig, Rainer

    2018-05-06

    In this work, we consider two kinds of model reduction techniques to simulate blood flow through the largest systemic arteries, where a stenosis is located in a peripheral artery, i.e. in an artery that is located far away from the heart. For our simulations we place the stenosis in one of the tibial arteries belonging to the right lower leg (right posterior tibial artery). The model reduction techniques that are used are, on the one hand, dimensionally reduced models (1-D and 0-D models, the so-called mixed-dimension model) and, on the other hand, surrogate models produced by kernel methods. Both methods are combined in such a way that the mixed-dimension models yield training data for the surrogate model, where the surrogate model is parametrised by the degree of narrowing of the peripheral stenosis. By means of a well-trained surrogate model, we show that simulation data can be reproduced with a satisfactory accuracy and that parameter optimisation or state estimation problems can be solved in a very efficient way. Furthermore it is demonstrated that a surrogate model enables us to present, after a very short simulation time, the impact of a varying degree of stenosis on blood flow, obtaining a speedup of several orders over the full model. This article is protected by copyright. All rights reserved.
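
    The surrogate step can be sketched with an off-the-shelf kernel interpolant (stand-in data only; the paper's mixed-dimension training outputs and quantities of interest differ): train on a few expensive model runs parameterized by the stenosis degree, then query the surrogate almost instantly.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        degree = np.linspace(0.0, 0.9, 10)[:, None]     # training parameters (stenosis degree)
        flow = 1.0 / (1.0 + 5.0 * degree.ravel()**2)    # pretend full-model output
        surrogate = RBFInterpolator(degree, flow)       # kernel-based surrogate
        print(surrogate(np.array([[0.45]])))            # cheap evaluation at a new degree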

  8. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  9. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  10. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  11. graphkernels: R and Python packages for graph comparison

    PubMed Central

    Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-01-01

    Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. PMID:29028902
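
    A usage sketch (the function name CalculateWLKernel and its par argument are taken from the package documentation as I recall it; treat them as assumptions and verify against the package docs):

        import igraph as ig
        import graphkernels.kernels as gk

        graphs = [ig.Graph.Ring(n) for n in (5, 6, 7)]   # toy igraph graphs
        K = gk.CalculateWLKernel(graphs, par=3)          # Weisfeiler-Lehman kernel, 3 iterations (assumed API)
        print(K.shape)                                   # 3 x 3 Gram matrix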

  12. Aflatoxin variability in pistachios.

    PubMed Central

    Mahoney, N E; Rodriguez, S B

    1996-01-01

    Pistachio fruit components, including hulls (mesocarps and epicarps), seed coats (testas), and kernels (seeds), all contribute to variable aflatoxin content in pistachios. Fresh pistachio kernels were individually inoculated with Aspergillus flavus and incubated 7 or 10 days. Hulled, shelled kernels were either left intact or wounded prior to inoculation. Wounded kernels, with or without the seed coat, were readily colonized by A. flavus and after 10 days of incubation contained 37 times more aflatoxin than similarly treated unwounded kernels. The aflatoxin levels in the individual wounded pistachios were highly variable. Neither fungal colonization nor aflatoxin was detected in intact kernels without seed coats. Intact kernels with seed coats had limited fungal colonization and low aflatoxin concentrations compared with their wounded counterparts. Despite substantial fungal colonization of wounded hulls, aflatoxin was not detected in hulls. Aflatoxin levels were significantly lower in wounded kernels with hulls than in kernels of hulled pistachios. Both the seed coat and a water-soluble extract of hulls suppressed aflatoxin production by A. flavus. PMID:8919781

  13. graphkernels: R and Python packages for graph comparison.

    PubMed

    Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-02-01

    Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. © The Author(s) 2017. Published by Oxford University Press.

  14. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    PubMed Central

    Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.

    2013-01-01

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
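
    In cartoon form, the C/S method computes dose as TERMA convolved with an energy deposition kernel. The sketch below uses an isotropic Gaussian as a stand-in for a Monte Carlo EDK, which is exactly the spatially invariant simplification whose refinements the paper investigates (all values here are toy assumptions):

        import numpy as np
        from scipy.signal import fftconvolve

        terma = np.zeros((64, 64)); terma[20:44, 28:36] = 1.0   # toy beam energy release
        x = np.arange(-8, 9)
        g = np.exp(-(x[:, None]**2 + x[None, :]**2) / 8.0)      # Gaussian stand-in EDK
        edk = g / g.sum()                                       # normalized kernel
        dose = fftconvolve(terma, edk, mode="same")             # single-kernel C/S dose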

  15. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
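
    On a simple domain the framework reduces to a weighted eigenfunction expansion; below is a 1D stand-in on a ring graph (my illustration with an assumed bandwidth t, not the authors' Laplace-Beltrami implementation):

        import numpy as np

        n, t = 100, 5.0
        # Graph Laplacian of a ring, a discrete 1D stand-in for Laplace-Beltrami.
        L = 2 * np.eye(n) - np.roll(np.eye(n), 1, 0) - np.roll(np.eye(n), -1, 0)
        lam, phi = np.linalg.eigh(L)                        # eigenvalues/eigenfunctions
        f = (np.sin(np.linspace(0, 4 * np.pi, n))
             + 0.3 * np.random.default_rng(0).standard_normal(n))
        # Heat kernel smoothing: expansion weighted by exp(-lambda * t).
        f_smooth = phi @ (np.exp(-lam * t) * (phi.T @ f))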

  16. Automated segmentation of foveal avascular zone in fundus fluorescein angiography.

    PubMed

    Zheng, Yalin; Gandhi, Jagdeep Singh; Stangos, Alexandros N; Campa, Claudio; Broadbent, Deborah M; Harding, Simon P

    2010-07-01

    PURPOSE. To describe and evaluate the performance of a computerized automated segmentation technique for use in quantification of the foveal avascular zone (FAZ). METHODS. A computerized technique for automated segmentation of the FAZ using images from fundus fluorescein angiography (FFA) was applied to 26 transit-phase images obtained from patients with various grades of diabetic retinopathy. The area containing the FAZ zone was first extracted from the original image and smoothed by a Gaussian kernel (sigma = 1.5). An initializing contour was manually placed inside the FAZ of the smoothed image and iteratively moved by the segmentation program toward the FAZ boundary. Five tests with different initializing curves were run on each of 26 images to assess reproducibility. The accuracy of the program was also validated by comparing results obtained by the program with the FAZ boundaries manually delineated by medical retina specialists. Interobserver performance was then evaluated by comparing delineations from two of the experts. RESULTS. One-way analysis of variance indicated that the disparities between different tests were not statistically significant, signifying excellent reproducibility for the computer program. There was a statistically significant linear correlation between the results obtained by automation and manual delineations by experts. CONCLUSIONS. This automated segmentation program can produce highly reproducible results that are comparable to those made by clinical experts. It has the potential to assist in the detection and management of foveal ischemia and to be integrated into automated grading systems.
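
    A sketch of the pipeline's shape using scikit-image (the Gaussian sigma = 1.5 is from the abstract; the specific active-contour model, morphological Chan-Vese here, and the seed placement are stand-ins, since the paper's exact model is not specified):

        import numpy as np
        from skimage.filters import gaussian
        from skimage.segmentation import morphological_chan_vese

        img = np.random.rand(128, 128)                 # stand-in for an FFA transit frame
        smooth = gaussian(img, sigma=1.5)              # the paper's smoothing step
        init = np.zeros_like(img, dtype=bool)
        init[54:74, 54:74] = True                      # contour initialized inside the FAZ
        seg = morphological_chan_vese(smooth, 50, init_level_set=init)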

  17. Grating-based phase contrast tomosynthesis imaging: Proof-of-concept experimental studies

    PubMed Central

    Li, Ke; Ge, Yongshuai; Garrett, John; Bevins, Nicholas; Zambelli, Joseph; Chen, Guang-Hong

    2014-01-01

    Purpose: This paper concerns the feasibility of x-ray differential phase contrast (DPC) tomosynthesis imaging using a grating-based DPC benchtop experimental system, which is equipped with a commercial digital flat-panel detector and a medical-grade rotating-anode x-ray tube. An extensive system characterization was performed to quantify its imaging performance. Methods: The major components of the benchtop system include a diagnostic x-ray tube with a 1.0 mm nominal focal spot size, a flat-panel detector with 96 μm pixel pitch, a sample stage that rotates within a limited angular span of ±30°, and a Talbot-Lau interferometer with three x-ray gratings. A total of 21 projection views acquired with 3° increments were used to reconstruct three sets of tomosynthetic image volumes, including the conventional absorption contrast tomosynthesis image volume (AC-tomo) reconstructed using the filtered-backprojection (FBP) algorithm with the ramp kernel, the phase contrast tomosynthesis image volume (PC-tomo) reconstructed using FBP with a Hilbert kernel, and the differential phase contrast tomosynthesis image volume (DPC-tomo) reconstructed using the shift-and-add algorithm. Three inhouse physical phantoms containing tissue-surrogate materials were used to characterize the signal linearity, the signal difference-to-noise ratio (SDNR), the three-dimensional noise power spectrum (3D NPS), and the through-plane artifact spread function (ASF). Results: While DPC-tomo highlights edges and interfaces in the image object, PC-tomo removes the differential nature of the DPC projection data and its pixel values are linearly related to the decrement of the real part of the x-ray refractive index. The SDNR values of polyoxymethylene in water and polystyrene in oil are 1.5 and 1.0, respectively, in AC-tomo, and the values were improved to 3.0 and 2.0, respectively, in PC-tomo. PC-tomo and AC-tomo demonstrate equivalent ASF, but their noise characteristics quantified by the 3D NPS were found to be different due to the difference in the tomosynthesis image reconstruction algorithms. Conclusions: It is feasible to simultaneously generate x-ray differential phase contrast, phase contrast, and absorption contrast tomosynthesis images using a grating-based data acquisition setup. The method shows promise in improving the visibility of several low-density materials and therefore merits further investigation. PMID:24387511
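
    The two FBP filters named in the abstract differ by one line in frequency space (generic discrete versions, not the authors' implementation): absorption projections take the ramp |f|, while differential phase projections take the Hilbert filter, since the ramp factors, up to constants, into a derivative followed by Hilbert filtering and the DPC data already carry the derivative.

        import numpy as np

        n = 256
        f = np.fft.fftfreq(n)
        ramp_filter = np.abs(f)                    # for absorption projections
        hilbert_filter = -1j * np.sign(f)          # for differential phase projections

        proj = np.random.rand(n)                   # stand-in projection row
        filtered = np.real(np.fft.ifft(np.fft.fft(proj) * hilbert_filter))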

  18. SPHYNX: an accurate density-based SPH method for astrophysical applications

    NASA Astrophysics Data System (ADS)

    Cabezón, R. M.; García-Senz, D.; Figueira, J.

    2017-10-01

    Aims: Hydrodynamical instabilities and shocks are ubiquitous in astrophysical scenarios. Therefore, an accurate numerical simulation of these phenomena is mandatory to correctly model and understand many astrophysical events, such as supernovas, stellar collisions, or planetary formation. In this work, we attempt to address many of the problems that a commonly used technique, smoothed particle hydrodynamics (SPH), has when dealing with subsonic hydrodynamical instabilities or shocks. To that aim we built a new SPH code named SPHYNX, which includes many of the recent advances in the SPH technique and some other new ones, which we present here. Methods: SPHYNX is of Newtonian type and grounded in the Euler-Lagrange formulation of the smoothed-particle hydrodynamics technique. Its distinctive features are: the use of an integral approach to estimating the gradients; the use of a flexible family of interpolators called sinc kernels, which suppress pairing instability; and the incorporation of a new type of volume element which provides a better partition of the unity. Unlike other modern formulations, which consider volume elements linked to pressure, our volume element choice relies on density. SPHYNX is, therefore, a density-based SPH code. Results: A novel computational hydrodynamic code oriented to astrophysical applications is described, discussed, and validated in the following pages. The ensuing code conserves mass, linear and angular momentum, energy, and entropy, and preserves kernel normalization even in strong shocks. In our proposal, the estimation of gradients is enhanced using an integral approach. Additionally, we introduce a new family of volume elements which reduce the so-called tensile instability. Both features help to suppress the damping which often prevents the growth of hydrodynamic instabilities in regular SPH codes. Conclusions: On the whole, SPHYNX has passed the verification tests described below. For identical particle settings and initial conditions the results were similar (or better in some particular cases) than those obtained with other SPH schemes such as GADGET-2, PSPH or with the recent density-independent formulation (DISPH) and conservative reproducing kernel (CRKSPH) techniques.
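
    The sinc kernel family mentioned in the Methods has a simple closed form, shown below up to its n- and dimension-dependent normalization constant, which is omitted here (treat this as the kernel shape only, with the exponent n controlling sharpness):

        import numpy as np

        def sinc_kernel(q, n):
            # W(q) proportional to sinc(q/2)^n on the compact support q in [0, 2);
            # np.sinc(x) = sin(pi x) / (pi x), so np.sinc(q/2) = sin(pi q/2) / (pi q/2).
            q = np.asarray(q, dtype=float)
            return np.where(q < 2.0, np.sinc(q / 2.0)**n, 0.0)

        print(sinc_kernel([0.0, 0.5, 1.0, 1.9], n=5))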

  19. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
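
    The continuization step contrasted in the abstract can be sketched as follows (simplified: the variance-preserving rescaling used in the full kernel equating method is omitted, and the score distribution and bandwidth below are made up): the discrete score distribution is replaced by a Gaussian mixture whose CDF is smooth.

        import numpy as np
        from scipy.stats import norm

        scores = np.arange(0, 11)                          # possible test scores
        probs = np.random.default_rng(0).dirichlet(np.ones(11))  # stand-in score probabilities
        h = 0.6                                            # bandwidth (assumed)
        F = lambda x: np.sum(probs * norm.cdf((x - scores) / h))  # continuized CDF
        print(F(5.0))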

  20. 7 CFR 810.204 - Grades and grade requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...— Damaged kernels 1 (percent) Foreign material (percent) Other grains (percent) Skinned and broken kernels....0 10.0 15.0 1 Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered against sound barley. Notes: Malting barley shall not be infested in accordance with...

  1. Uniform sparse bounds for discrete quadratic phase Hilbert transforms

    NASA Astrophysics Data System (ADS)

    Kesler, Robert; Arias, Darío Mena

    2017-09-01

    For each α ∈ 𝕋 consider the discrete quadratic phase Hilbert transform acting on finitely supported functions f : ℤ → ℂ according to H^α f(n) := Σ_{m≠0} e^{iαm²} f(n−m)/m. We prove that, uniformly in α ∈ 𝕋, there is a sparse bound for the bilinear form ⟨H^α f, g⟩ for every pair of finitely supported functions f, g : ℤ → ℂ. The sparse bound implies several mapping properties such as weighted inequalities in an intersection of Muckenhoupt and reverse Hölder classes.
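
    Since f is finitely supported, the transform can be evaluated exactly by direct summation; the snippet below is a straightforward transcription of the displayed formula (the example function and α are arbitrary choices).

        import numpy as np

        def quad_phase_hilbert(f, alpha, support):
            # f: dict mapping integer n to f(n), finitely supported.
            out = {}
            for n in support:
                s = 0.0 + 0.0j
                for k, v in f.items():
                    m = n - k                          # so that f(n - m) = f(k)
                    if m != 0:
                        s += np.exp(1j * alpha * m**2) * v / m
                out[n] = s
            return out

        f = {0: 1.0, 1: -2.0, 3: 0.5}
        print(quad_phase_hilbert(f, alpha=0.7, support=range(-3, 6)))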

  2. Isomonodromy for the Degenerate Fifth Painlevé Equation

    NASA Astrophysics Data System (ADS)

    Acosta-Humánez, Primitivo B.; van der Put, Marius; Top, Jaap

    2017-05-01

    This is a sequel to papers by the last two authors making the Riemann-Hilbert correspondence and isomonodromy explicit. For the degenerate fifth Painlevé equation, the moduli spaces for connections and for monodromy are explicitly computed. It is proven that the extended Riemann-Hilbert morphism is an isomorphism. As a consequence these equations have the Painlevé property and the Okamoto-Painlevé space is identified with a moduli space of connections. Using MAPLE computations, one obtains formulas for the degenerate fifth Painlevé equation, for the Bäcklund transformations.

  3. Near-complete teleportation of a superposed coherent state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheong, Yong Wook; Kim, Hyunjae; Lee, Hai-Woong

    2004-09-01

    The four Bell-type entangled coherent states, |α⟩|−α⟩ ± |−α⟩|α⟩ and |α⟩|α⟩ ± |−α⟩|−α⟩, can be discriminated with a high probability using only linear optical means, as long as α is not too small. Based on this observation, we propose a simple scheme to almost completely teleport a superposed coherent state. The nonunitary transformation that is required to complete the teleportation can be achieved by embedding the receiver's field state in a larger Hilbert space consisting of the field and a single atom and performing a unitary transformation on this Hilbert space.

  4. Many-Body Quantum Chaos and Entanglement in a Quantum Ratchet

    NASA Astrophysics Data System (ADS)

    Valdez, Marc Andrew; Shchedrin, Gavriil; Heimsoth, Martin; Creffield, Charles E.; Sols, Fernando; Carr, Lincoln D.

    2018-06-01

    We uncover signatures of quantum chaos in the many-body dynamics of a Bose-Einstein condensate-based quantum ratchet in a toroidal trap. We propose measures including entanglement, condensate depletion, and spreading over a fixed basis in many-body Hilbert space, which quantitatively identify the region in which quantum chaotic many-body dynamics occurs, where random matrix theory is limited or inaccessible. With these tools, we show that many-body quantum chaos is neither highly entangled nor delocalized in the Hilbert space, contrary to conventionally expected signatures of quantum chaos.

  5. Many-Body Quantum Chaos and Entanglement in a Quantum Ratchet.

    PubMed

    Valdez, Marc Andrew; Shchedrin, Gavriil; Heimsoth, Martin; Creffield, Charles E; Sols, Fernando; Carr, Lincoln D

    2018-06-08

    We uncover signatures of quantum chaos in the many-body dynamics of a Bose-Einstein condensate-based quantum ratchet in a toroidal trap. We propose measures including entanglement, condensate depletion, and spreading over a fixed basis in many-body Hilbert space, which quantitatively identify the region in which quantum chaotic many-body dynamics occurs, where random matrix theory is limited or inaccessible. With these tools, we show that many-body quantum chaos is neither highly entangled nor delocalized in the Hilbert space, contrary to conventionally expected signatures of quantum chaos.

  6. Proceedings of the MIT Student Workshop on VLSI and Parallel Systems Held in Dedham, Massachusetts on 21 July 1992

    DTIC Science & Technology

    1992-07-01

    In 1900, David Hilbert proposed twenty-three problems covering all areas of mathematics that guided the field for decades. These problems served as a driving... The aim of one session was to propose several grand challenge problems, similar in spirit to those of Hilbert: "[A] problem should be difficult in order to entice us..."

  7. Majorana fermions and orthogonal complex structures

    NASA Astrophysics Data System (ADS)

    Calderón-García, J. S.; Reyes-Lega, A. F.

    2018-05-01

    Ground states of quadratic Hamiltonians for fermionic systems can be characterized in terms of orthogonal complex structures. The standard way in which such Hamiltonians are diagonalized makes use of a certain “doubling” of the Hilbert space. In this work, we show that this redundancy in the Hilbert space can be completely lifted if the relevant orthogonal structure is taken into account. Such an approach allows for a treatment of Majorana fermions which is both physically and mathematically transparent. Furthermore, an explicit connection between orthogonal complex structures and the topological ℤ2-invariant is given.

  8. Unbounded Violations of Bipartite Bell Inequalities via Operator Space Theory

    NASA Astrophysics Data System (ADS)

    Junge, M.; Palazuelos, C.; Pérez-García, D.; Villanueva, I.; Wolf, M. M.

    2010-12-01

    In this work we show that bipartite quantum states with local Hilbert space dimension n can violate a Bell inequality by a factor of order Ω(√n / log²n) when observables with n possible outcomes are used. A central tool in the analysis is a close relation between this problem and operator space theory and, in particular, the very recent noncommutative L_p embedding theory. As a consequence of this result, we obtain better Hilbert space dimension witnesses and quantum violations of Bell inequalities with better resistance to noise.

  9. Dynamical structure of pure Lovelock gravity

    NASA Astrophysics Data System (ADS)

    Dadhich, Naresh; Durka, Remigiusz; Merino, Nelson; Miskovic, Olivera

    2016-03-01

    We study the dynamical structure of pure Lovelock gravity in spacetime dimensions higher than four using the Hamiltonian formalism. The action consists of a cosmological constant and a single higher-order polynomial in the Riemann tensor. Similarly to the Einstein-Hilbert action, it possesses a unique constant curvature vacuum and charged black hole solutions. We analyze physical degrees of freedom and local symmetries in this theory. In contrast to the Einstein-Hilbert case, the number of degrees of freedom depends on the background and can vary from zero to the maximal value carried by the Lovelock theory.

  10. A New View of Earthquake Ground Motion Data: The Hilbert Spectral Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Norden; Busalacchi, Antonio J. (Technical Monitor)

    2000-01-01

    A brief description of the newly developed Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) method will be given. The decomposition is adaptive and can be applied to both nonlinear and nonstationary data. An example of the method applied to a sample earthquake record will be given. The results indicate that low-frequency components, totally missed by the Fourier analysis, are clearly identified by the new method. Comparisons with wavelet and windowed Fourier analysis show that the new method offers much better temporal and frequency resolution.
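
    The Hilbert step of the method is straightforward to illustrate. Below is a minimal sketch, assuming the EMD stage has already produced an intrinsic mode function (here replaced by a synthetic chirp): the analytic signal yields an instantaneous amplitude and frequency at every sample, which is what lets the method resolve time-varying, low-frequency content that a global Fourier decomposition smears out.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # Stand-in for one intrinsic mode function (IMF) from a prior EMD step:
    # a chirp whose frequency drifts upward over time.
    fs = 200.0                                   # sampling rate in Hz (illustrative)
    t = np.arange(0.0, 5.0, 1.0 / fs)
    imf = np.cos(2.0 * np.pi * (1.0 + 0.5 * t) * t)

    analytic = hilbert(imf)                      # analytic signal x + i*H[x]
    amplitude = np.abs(analytic)                 # instantaneous amplitude a(t)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)  # instantaneous frequency in Hz

    print(inst_freq[:5], inst_freq[-5:])         # frequency sweeps upward over time
    ```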

  11. 7 CFR 51.1413 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...

  12. 7 CFR 51.1413 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...

  13. 7 CFR 810.205 - Grades and grade requirements for Two-rowed Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (percent) Maximum limits of— Wild oats (percent) Foreign material (percent) Skinned and broken kernels... Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered...

  14. Phonons around a soliton in a continuum model of t-(CH)x

    NASA Astrophysics Data System (ADS)

    Ono, Y.; Terai, A.; Wada, Y.

    1986-05-01

    The eigenvalue problem for phonons around a soliton in a continuum model of trans-polyacetylene t-(CH)x, the so-called TLM model (Takayama et al., 1980), is reinvestigated using a kernel which satisfies the correct boundary condition. The three localized modes are reproduced, two with even parity and one with odd parity. The phase-shift analysis of the extended modes confirms their existence if the one-dimensional version of Levinson's theorem is applicable to the present problem. It is found that the phase shifts of even and odd modes differ from each other in the long-wavelength limit. The conclusion of Ito et al. (1984), that the scattering of phonons by the soliton is reflectionless, has to be modified in this limit, where phonons suffer reflection from the soliton.

  15. Noncommutative coherent states and related aspects of Berezin-Toeplitz quantization

    NASA Astrophysics Data System (ADS)

    Hasibul Hassan Chowdhury, S.; Twareque Ali, S.; Engliš, Miroslav

    2017-05-01

    In this paper, we construct noncommutative coherent states using various families of unitary irreducible representations (UIRs) of Gnc , a connected, simply connected nilpotent Lie group, which was identified as the kinematical symmetry group of noncommutative quantum mechanics for a system of two degrees of freedom in an earlier paper. Similarly described are the degenerate noncommutative coherent states arising from the degenerate UIRs of Gnc . We then compute the reproducing kernels associated with both these families of coherent states and study the Berezin-Toeplitz quantization of the observables on the underlying 4-dimensional phase space, analyzing in particular the semi-classical asymptotics for both these cases. Dedicated by the first and the third authors to the memory of the second author, with gratitude for his friendship and for all they learnt from him.

  16. Efficient searching in meshfree methods

    NASA Astrophysics Data System (ADS)

    Olliff, James; Alford, Brad; Simkins, Daniel C.

    2018-04-01

    Meshfree methods such as the Reproducing Kernel Particle Method and the Element Free Galerkin method have proven to be excellent choices for problems involving complex geometry, evolving topology, and large deformation, owing to their ability to model the problem domain without the constraints imposed on Finite Element Method (FEM) meshes. However, meshfree methods carry an added computational cost over FEM that comes from at least two sources: the increased cost of shape function evaluation and the determination of adjacency or connectivity. The focus of this paper is to formally address the types of adjacency information that arise in various uses of meshfree methods, to discuss available techniques for computing the various adjacency graphs, to propose a new search algorithm and data structure, and finally to compare the memory and run-time performance of the methods.
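
    For the adjacency computation discussed above, a space-partitioning tree is the usual baseline. The sketch below is a generic kd-tree neighbor query, not the search structure proposed in the paper; the point cloud and support radius are hypothetical.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    # Hypothetical node cloud; in a meshfree code these are the particle positions.
    rng = np.random.default_rng(0)
    points = rng.random((10_000, 3))
    support_radius = 0.05                  # kernel support radius defines adjacency

    tree = cKDTree(points)
    # Adjacency graph: for each node, every node within one support radius.
    neighbors = tree.query_ball_point(points, r=support_radius)
    print(len(neighbors[0]), "neighbors found for node 0")
    ```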

  17. A biorthogonal decomposition for the identification and simulation of non-stationary and non-Gaussian random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zentner, I.; Ferré, G., E-mail: gregoire.ferre@ponts.org; Poirion, F.

    2016-06-01

    In this paper, a new method for the identification and simulation of non-Gaussian and non-stationary stochastic fields given a database is proposed. It is based on two successive biorthogonal decompositions aiming at representing spatio-temporal stochastic fields. The proposed double expansion allows the model to be built even in the case of large-size problems by separating the time, space, and random parts of the field. A Gaussian kernel estimator is used to simulate the high-dimensional set of random variables appearing in the decomposition. The capability of the method to reproduce the non-stationary and non-Gaussian features of random phenomena is illustrated by applications to earthquakes (seismic ground motion) and sea states (wave heights).
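
    The Gaussian kernel estimator step can be illustrated compactly. Below is a minimal sketch assuming the random coefficients of the decomposition have already been extracted (here replaced by a synthetic non-Gaussian sample): a kernel density estimate is fit and then resampled to generate new realizations of the random part of the field.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # Stand-in for the random coefficients extracted by the double expansion:
    # a 3-dimensional, deliberately non-Gaussian sample.
    rng = np.random.default_rng(1)
    coeffs = rng.standard_normal((3, 500)) ** 3   # shape (dims, samples)

    kde = gaussian_kde(coeffs)        # Gaussian kernel density estimate
    new_draws = kde.resample(1000)    # simulate fresh realizations of the random part
    print(new_draws.shape)            # (3, 1000)
    ```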

  18. Detection of ochratoxin A contamination in stored wheat using near-infrared hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Senthilkumar, T.; Jayas, D. S.; White, N. D. G.; Fields, P. G.; Gräfenhan, T.

    2017-03-01

    A near-infrared (NIR) hyperspectral imaging system was used to detect five concentration levels of ochratoxin A (OTA) in contaminated wheat kernels. Wheat kernels artificially inoculated with two different OTA-producing Penicillium verrucosum strains or two different non-toxigenic P. verrucosum strains, together with sterile control wheat kernels, were subjected to NIR hyperspectral imaging. The acquired three-dimensional data were reshaped into readable two-dimensional data. Principal Component Analysis (PCA) was applied to the two-dimensional data to identify the key wavelengths with the greatest significance for detecting OTA contamination in wheat. Statistical and histogram features extracted at the key wavelengths were used in linear, quadratic, and Mahalanobis statistical discriminant models to differentiate between sterile controls, five concentration levels of OTA contamination in wheat kernels, and five infection levels of non-OTA-producing P. verrucosum inoculated wheat kernels. The classification models differentiated sterile control samples from OTA-contaminated wheat kernels and non-OTA-producing P. verrucosum inoculated wheat kernels with 100% accuracy. The classification models also differentiated between the five concentration levels of OTA-contaminated wheat kernels and between the five infection levels of non-OTA-producing P. verrucosum inoculated wheat kernels with a correct classification rate of more than 98%. The non-OTA-producing P. verrucosum inoculated wheat kernels and the OTA-contaminated wheat kernels subjected to hyperspectral imaging exhibited different spectral patterns.

  19. Application of kernel method in fluorescence molecular tomography

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Baikejiang, Reheman; Li, Changqing

    2017-02-01

    Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Incorporating anatomical guidance can efficiently improve FMT reconstruction. We have developed a kernel method to introduce anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. The fluorophore concentration at each node is then represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process, and we convert the FMT reconstruction problem into a kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can then be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance is obtained directly from the anatomical image and is included in the forward modeling; one advantage is that we do not need to segment the anatomical image into targets and background.
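
    The construction described above can be sketched in a few lines. In the sketch below, the kernel matrix, feature choice (one intensity value per node), neighborhood size, and bandwidth are all hypothetical stand-ins, and a least-squares solve stands in for the iterative reconstruction; the essential point is that the system matrix is multiplied by the kernel matrix and the kernel coefficients become the unknowns.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def knn_gaussian_kernel(features, k=10, sigma=1.0):
        """Kernel matrix with Gaussian weights over k nearest neighbors in
        anatomical feature space (dense here for clarity)."""
        n = features.shape[0]
        tree = cKDTree(features)
        dist, idx = tree.query(features, k=k + 1)   # k neighbors plus self
        K = np.zeros((n, n))
        w = np.exp(-dist ** 2 / (2.0 * sigma ** 2))
        for i in range(n):
            K[i, idx[i]] = w[i]
        return K

    rng = np.random.default_rng(0)
    m, n = 50, 200                       # measurements, finite element nodes
    A = rng.random((m, n))               # stand-in sensitivity matrix
    feats = rng.random((n, 1))           # e.g., micro-CT intensity at each node

    K = knn_gaussian_kernel(feats)
    A_k = A @ K                          # kernelized forward model
    alpha, *_ = np.linalg.lstsq(A_k, rng.random(m), rcond=None)
    x = K @ alpha                        # fluorophore concentration per node
    print(x.shape)
    ```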

  20. Credit scoring analysis using kernel discriminant

    NASA Astrophysics Data System (ADS)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a nonparametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, namely the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracies were compared using data from a financial institution in Indonesia. The results show that the kernel discriminant can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model; sensitivity and specificity reach 0.5556 and 0.5488, respectively.
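
    A minimal sketch of the idea follows, assuming the simplest nonparametric formulation: class-conditional densities are estimated with an Epanechnikov product kernel and an applicant is assigned to the class with the higher estimated density (equal priors assumed; the data, bandwidth, and features are synthetic, not the paper's).

    ```python
    import numpy as np

    def epanechnikov(u):
        """Epanechnikov kernel: 0.75 * (1 - u**2) for |u| <= 1, else 0."""
        return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

    def kde_score(x, sample, h):
        """Product-kernel density estimate at x given one class's sample (n, d)."""
        u = (x - sample) / h
        return np.mean(np.prod(epanechnikov(u), axis=1)) / h ** sample.shape[1]

    # Synthetic two-class credit data: rows are applicants, columns are features.
    rng = np.random.default_rng(2)
    good = rng.normal(0.0, 1.0, (100, 2))
    bad = rng.normal(1.5, 1.0, (80, 2))
    h = 0.8                                       # bandwidth (would be tuned)

    x_new = np.array([0.2, 0.1])
    label = "good" if kde_score(x_new, good, h) >= kde_score(x_new, bad, h) else "bad"
    print(label)
    ```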

  1. Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images

    PubMed Central

    Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.

    2014-01-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
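
    The analytic representation of diffusion mentioned above reduces, once the Laplace-Beltrami eigenpairs (λ_i, ψ_i) are in hand, to damping each expansion coefficient by exp(-λ_i t). The sketch below demonstrates this on a graph Laplacian as a stand-in for a triangulated surface; the cycle graph, noisy signal, and diffusion time are illustrative.

    ```python
    import numpy as np

    # Graph Laplacian of a cycle as a stand-in for the Laplace-Beltrami
    # operator on a closed surface mesh.
    n = 100
    L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, -1] = L[-1, 0] = -1.0

    evals, evecs = np.linalg.eigh(L)     # eigenvalues lam_i, eigenfunctions psi_i

    def heat_kernel_smooth(f, t):
        """Analytic heat diffusion: sum_i exp(-lam_i * t) <f, psi_i> psi_i."""
        return evecs @ (np.exp(-evals * t) * (evecs.T @ f))

    rng = np.random.default_rng(3)
    signal = np.sin(np.linspace(0.0, 4.0 * np.pi, n)) + 0.3 * rng.standard_normal(n)
    smoothed = heat_kernel_smooth(signal, t=2.0)   # larger t means more smoothing
    print(float(np.std(signal)), float(np.std(smoothed)))
    ```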

  2. Correlation and classification of single kernel fluorescence hyperspectral data with aflatoxin concentration in corn kernels inoculated with Aspergillus flavus spores.

    PubMed

    Yao, H; Hruska, Z; Kincaid, R; Brown, R; Cleveland, T; Bhatnagar, D

    2010-05-01

    The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. Aflatoxin contamination in corn has been a long-standing problem plaguing the grain industry, with potentially devastating consequences for corn growers. In this study, aflatoxin-contaminated corn kernels were produced through artificial inoculation of corn ears in the field with toxigenic A. flavus spores. The kernel fluorescence emission data were taken with a fluorescence hyperspectral imaging system when corn kernels were excited with ultraviolet light. Raw fluorescence image data were preprocessed, and regions of interest in each image were created for all kernels. The regions of interest were used to extract spectral signatures and statistical information. The aflatoxin contamination level of single corn kernels was then chemically measured using affinity column chromatography. A fluorescence peak shift phenomenon was noted among different groups of kernels with different aflatoxin contamination levels. The fluorescence peak was found to shift toward longer wavelengths in the blue region for the highly contaminated kernels and toward shorter wavelengths for the clean kernels. Highly contaminated kernels also had a lower fluorescence peak magnitude compared with the less contaminated kernels. A general negative correlation was noted between measured aflatoxin and the fluorescence image bands in the blue and green regions. The coefficient of determination, r², was 0.72 for the multiple linear regression model. The multivariate analysis of variance found that the fluorescence means of the four aflatoxin groups, <1, 1-20, 20-100, and ≥100 ng g⁻¹ (parts per billion), were significantly different from each other at the α = 0.01 level. Classification accuracy under a two-class schema ranged from 0.84 to 0.91 when a threshold of either 20 or 100 ng g⁻¹ was used. Overall, the results indicate that fluorescence hyperspectral imaging may be applicable for estimating aflatoxin content in individual corn kernels.

  3. Classification of Phylogenetic Profiles for Protein Function Prediction: An SVM Approach

    NASA Astrophysics Data System (ADS)

    Kotaru, Appala Raju; Joshi, Ramesh C.

    Predicting the function of an uncharacterized protein is a major challenge in the post-genomic era due to the problem's complexity and scale. Knowledge of protein function is a crucial link in the development of new drugs, better crops, and even biochemicals such as biofuels. Recently, numerous high-throughput experimental procedures have been invented to investigate the mechanisms leading to the accomplishment of a protein's function, and the phylogenetic profile is one of them. A phylogenetic profile is a representation of a protein that encodes its evolutionary history. In this paper we propose a method for the classification of phylogenetic profiles using a supervised machine learning method, support vector machine (SVM) classification with a radial basis function (RBF) kernel, for identifying functionally linked proteins. We experimentally evaluated the performance of the classifier with the linear kernel and the polynomial kernel, and compared the results with the existing tree kernel. In our study we used proteins of the budding yeast Saccharomyces cerevisiae genome: we generated the phylogenetic profiles of 2465 yeast genes and used the functional annotations available in the MIPS database. Our experiments show that the performance of the radial basis kernel is similar to that of the polynomial kernel in some functional classes, both are better than the linear and tree kernels, and overall the radial basis kernel outperformed the polynomial, linear, and tree kernels. These results suggest that it is feasible to use an SVM classifier with an RBF kernel to predict gene function from phylogenetic profiles.
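
    A minimal sketch of the classification setup follows, with synthetic stand-ins for the profiles and labels (a real phylogenetic profile is a binary presence/absence vector of a protein's homologs across genomes, and the labels come from functional annotation).

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Synthetic phylogenetic profiles: one row per protein, one column per genome,
    # entry 1 if a homolog of the protein is present in that genome.
    rng = np.random.default_rng(4)
    X = rng.integers(0, 2, size=(300, 40)).astype(float)
    y = rng.integers(0, 2, size=300)        # stand-in functional-class labels

    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    scores = cross_val_score(clf, X, y, cv=5)
    print(scores.mean())                    # near 0.5 here since labels are random
    ```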

  4. Intraear Compensation of Field Corn, Zea mays, from Simulated and Naturally Occurring Injury by Ear-Feeding Larvae.

    PubMed

    Steckel, S; Stewart, S D

    2015-06-01

    Ear-feeding larvae, such as the corn earworm, Helicoverpa zea Boddie (Lepidoptera: Noctuidae), can be important insect pests of field corn, Zea mays L., by feeding on kernels. Recently introduced stacked Bacillus thuringiensis (Bt) traits provide improved protection from ear-feeding larvae. Thus, our objective was to evaluate how injury to kernels in the ear tip might affect yield when this injury was inflicted at the blister and milk stages. In 2010, simulated corn earworm injury reduced total kernel weight (i.e., yield) at both the blister and milk stages. In 2011, injury to ear tips at the milk stage affected total kernel weight. No differences in total kernel weight were found in 2013, regardless of when or how much injury was inflicted. Our data suggest that kernels within the same ear can compensate for injury to ear tips by increasing in size, but this increase was not always statistically significant or sufficient to overcome high levels of kernel injury. For naturally occurring injury observed on multiple corn hybrids during 2011 and 2012, our analyses showed either no or a minimal relationship between the number of kernels injured by ear-feeding larvae and the total number of kernels per ear, total kernel weight, or the size of individual kernels. The results indicate that intraear compensation for kernel injury to ear tips can occur under at least some conditions. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  5. Evidence-based Kernels: Fundamental Units of Behavioral Influence

    PubMed Central

    Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600

  6. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. It should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integration of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  7. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256

  8. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
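
    Of the two approximation schemes named in the abstract, random Fourier features is the easier to sketch: draw random projections from the Gaussian kernel's spectral density so that inner products of the mapped data approximate kernel values, after which any linear ranking solver applies. The sketch below only verifies the kernel approximation; the feature count and kernel width are illustrative.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def random_fourier_features(X, n_features, gamma, seed=0):
        """Map X so that z(x) . z(y) approximates exp(-gamma * ||x - y||^2)."""
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        # Spectral density of this RBF kernel is Gaussian with std sqrt(2*gamma).
        W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, n_features))
        b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    rng = np.random.default_rng(5)
    X = rng.standard_normal((5, 3))
    Z = random_fourier_features(X, n_features=2000, gamma=0.5)

    approx = Z @ Z.T                                  # linear kernel on features
    exact = np.exp(-0.5 * cdist(X, X, "sqeuclidean"))
    print(np.abs(approx - exact).max())               # shrinks as n_features grows
    ```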

  9. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  10. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  11. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  12. Wigner functions defined with Laplace transform kernels.

    PubMed

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels: the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase space. We show that the Laplace kernel Wigner function exhibits properties in the marginals similar to those of the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America

  13. Online learning control using adaptive critic designs with sparse kernel machines.

    PubMed

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
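
    The approximate linear dependence (ALD) sparsification mentioned above admits a compact sketch: a candidate state is added to the kernel dictionary only if its feature-space projection onto the current dictionary leaves a residual above a threshold ν. The RBF kernel, threshold, and synthetic states below are illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    def rbf(a, b, gamma=1.0):
        return np.exp(-gamma * np.sum((a - b) ** 2))

    def ald_dictionary(samples, nu=0.1, gamma=1.0):
        """Approximate linear dependence test: keep a sample only if the current
        dictionary cannot represent it in feature space within tolerance nu."""
        dictionary = [samples[0]]
        for x in samples[1:]:
            K = np.array([[rbf(a, b, gamma) for b in dictionary] for a in dictionary])
            k = np.array([rbf(a, x, gamma) for a in dictionary])
            coeff = np.linalg.solve(K + 1e-10 * np.eye(len(K)), k)
            delta = rbf(x, x, gamma) - k @ coeff      # feature-space residual
            if delta > nu:
                dictionary.append(x)
        return np.array(dictionary)

    rng = np.random.default_rng(6)
    states = rng.standard_normal((500, 2))            # e.g., visited RL states
    D = ald_dictionary(states, nu=0.3)
    print(len(D), "dictionary elements kept out of", len(states))
    ```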

  14. Influence of wheat kernel physical properties on the pulverizing process.

    PubMed

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a round-hole 1.0 mm screen. The specific grinding energy ranged from 120 kJ kg⁻¹ to 159 kJ kg⁻¹. On the basis of the data obtained, many significant correlations (p < 0.05) were found between wheat kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernels.

  15. Relationship between processing score and kernel-fraction particle size in whole-plant corn silage.

    PubMed

    Dias Junior, G S; Ferraretto, L F; Salvati, G G S; de Resende, L C; Hoffman, P C; Pereira, M N; Shaver, R D

    2016-04-01

    Kernel processing increases starch digestibility in whole-plant corn silage (WPCS). The corn silage processing score (CSPS), the percentage of starch passing through a 4.75-mm sieve, is widely used to assess the degree of kernel breakage in WPCS. However, the geometric mean particle size (GMPS) of the kernel fraction that passes through the 4.75-mm sieve has not been well described. Therefore, the objectives of this study were (1) to evaluate the particle size distribution and digestibility of kernels cut to varied particle sizes; (2) to propose a method to measure GMPS in WPCS kernels; and (3) to evaluate the relationship between CSPS and GMPS of the kernel fraction in WPCS. Composite samples of unfermented, dried kernels from 110 corn hybrids commonly used for silage production were kept whole (WH) or manually cut into 2, 4, 8, 16, 32, or 64 pieces (2P, 4P, 8P, 16P, 32P, and 64P, respectively). Dry sieving to determine GMPS, surface area, and particle size distribution used 9 sieves with nominal square apertures of 9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, and 0.59 mm and a pan; ruminal in situ dry matter (DM) digestibilities were also determined for each kernel particle number treatment. Incubation times were 0, 3, 6, 12, and 24 h. The ruminal in situ DM disappearance of unfermented kernels increased with the reduction in particle size of corn kernels. Kernels kept whole had the lowest ruminal DM disappearance at all time points, with a maximum DM disappearance of 6.9% at 24 h; the greatest disappearance was observed for 64P, followed by 32P and 16P. Samples of WPCS (n=80) from 3 studies representing varied theoretical length of cut settings and processor types and settings were also evaluated. Each WPCS sample was divided in two and then dried at 60°C for 48 h. The CSPS was determined in duplicate on one of the split samples, whereas on the other split sample the kernel and stover fractions were separated using a hydrodynamic separation procedure. After separation, the kernel fraction was redried at 60°C for 48 h in a forced-air oven and dry sieved to determine GMPS and surface area. Linear relationships between CSPS from WPCS (n=80) and kernel-fraction GMPS, surface area, and proportion passing through the 4.75-mm screen were poor. Strong quadratic relationships were observed between the proportion of the kernel fraction passing through the 4.75-mm screen and kernel-fraction GMPS and surface area. These findings suggest that hydrodynamic separation and dry sieving of the kernel fraction may provide a better assessment of kernel breakage in WPCS than CSPS. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
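
    The GMPS computation from dry sieving reduces to a mass-weighted geometric mean. The sketch below assumes the log-based convention common in sieving standards, with each size class bounded by consecutive apertures and the pan fraction ignored; the retained masses are hypothetical.

    ```python
    import numpy as np

    # Hypothetical dry-sieving result for the separated kernel fraction:
    # consecutive apertures (mm) bound each size class; mass (g) caught per class.
    apertures = np.array([9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, 0.59])
    mass = np.array([1.0, 4.0, 10.0, 18.0, 16.0, 9.0, 3.0])   # 7 size classes

    # Geometric mean diameter of each class, then a mass-weighted log average.
    d_class = np.sqrt(apertures[:-1] * apertures[1:])
    gmps = np.exp(np.sum(mass * np.log(d_class)) / np.sum(mass))
    print(f"GMPS = {gmps:.2f} mm")
    ```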

  16. Classification of corn kernels contaminated with aflatoxins using fluorescence and reflectance hyperspectral images analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Fengle; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert; Bhatnagar, Deepak; Cleveland, Thomas

    2015-05-01

    Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive. This study employed fluorescence and reflectance visible near-infrared (VNIR) hyperspectral images to classify aflatoxin-contaminated corn kernels rapidly and non-destructively. Corn ears were artificially inoculated in the field with toxigenic A. flavus spores at the early dough stage of kernel development. After harvest, a total of 300 kernels were collected from the inoculated ears. Fluorescence hyperspectral imagery with UV excitation and reflectance hyperspectral imagery with halogen illumination were acquired on both the endosperm and germ sides of the kernels. All kernels were then subjected to chemical analysis individually to determine aflatoxin concentrations. A region of interest (ROI) was created for each kernel to extract averaged spectra. Compared with healthy kernels, fluorescence spectral peaks for contaminated kernels shifted to longer wavelengths with lower intensity, and reflectance values for contaminated kernels were lower, with a different spectral shape in the 700-800 nm region. Principal component analysis was applied for data compression before classifying kernels as contaminated or healthy, based on a 20 ppb threshold, using the K-nearest neighbors algorithm. The best overall accuracy achieved was 92.67% for the germ side in the fluorescence data analysis. The germ side generally performed better than the endosperm side. Fluorescence and reflectance image data achieved similar accuracy.
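
    The analysis pipeline described above (spectral compression followed by nearest-neighbor classification against the 20 ppb threshold) can be sketched generically; the spectra and labels below are random stand-ins, so the score sits near chance, unlike the separable real data.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    # Random stand-ins: per-kernel mean spectra (rows) over spectral bands (columns),
    # labeled healthy (0) vs contaminated above the 20 ppb threshold (1).
    rng = np.random.default_rng(7)
    spectra = rng.random((300, 120))
    labels = rng.integers(0, 2, size=300)

    X = PCA(n_components=10).fit_transform(spectra)    # compress the bands first
    Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.3, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
    print(knn.score(Xte, yte))
    ```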

  17. Influence of Kernel Age on Fumonisin B1 Production in Maize by Fusarium moniliforme

    PubMed Central

    Warfield, Colleen Y.; Gilchrist, David G.

    1999-01-01

    Production of fumonisins by Fusarium moniliforme on naturally infected maize ears is an important food safety concern due to the toxic nature of this class of mycotoxins. Assessing the potential risk of fumonisin production in developing maize ears prior to harvest requires an understanding of the regulation of toxin biosynthesis during kernel maturation. We investigated the developmental-stage-dependent relationship between maize kernels and fumonisin B1 production by using kernels collected at the blister (R2), milk (R3), dough (R4), and dent (R5) stages following inoculation in culture at their respective field moisture contents with F. moniliforme. Highly significant differences (P ≤ 0.001) in fumonisin B1 production were found among kernels at the different developmental stages. The highest levels of fumonisin B1 were produced on the dent stage kernels, and the lowest levels were produced on the blister stage kernels. The differences in fumonisin B1 production among kernels at the different developmental stages remained significant (P ≤ 0.001) when the moisture contents of the kernels were adjusted to the same level prior to inoculation. We concluded that toxin production is affected by substrate composition as well as by moisture content. Our study also demonstrated that fumonisin B1 biosynthesis on maize kernels is influenced by factors which vary with the developmental age of the tissue. The risk of fumonisin contamination may begin early in maize ear development and increases as the kernels reach physiological maturity. PMID:10388675

  18. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel based techniques depends on the selection of kernel parameters; suitable parameter selection is therefore an important problem for many kernel based techniques. This article presents a novel technique to learn the kernel parameters of the kernel Fukunaga-Koontz Transform based (KFKT) classifier. The proposed approach determines appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we utilize the differential evolution algorithm (DEA). The new technique overcomes disadvantages of the traditional cross-validation method, such as its high time consumption, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
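
    The optimization loop is easy to sketch with a stock global optimizer. In the sketch below, the KFKT classifier is replaced by an RBF SVM as a stand-in objective (cross-validated accuracy), since the point is only how differential evolution searches the kernel-parameter space; the bounds and iteration counts are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=10, random_state=0)

    def objective(params):
        """Negative cross-validated accuracy as a function of the kernel width."""
        gamma = params[0]
        return -cross_val_score(SVC(kernel="rbf", gamma=gamma), X, y, cv=3).mean()

    result = differential_evolution(objective, bounds=[(1e-4, 10.0)],
                                    seed=1, maxiter=20, tol=1e-3)
    print("best gamma:", result.x[0], "cv accuracy:", -result.fun)
    ```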

  19. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.

    PubMed

    Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou

    2011-06-01

    As a kernel based method, the performance of the least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient for selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible enough to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Novel near-infrared sampling apparatus for single kernel analysis of oil content in maize.

    PubMed

    Janni, James; Weinstock, B André; Hagen, Lisa; Wright, Steve

    2008-04-01

    A method of rapid, nondestructive chemical and physical analysis of individual maize (Zea mays L.) kernels is needed for the development of high value food, feed, and fuel traits. Near-infrared (NIR) spectroscopy offers a robust nondestructive method of trait determination. However, traditional NIR bulk sampling techniques cannot be applied successfully to individual kernels. Obtaining optimized single kernel NIR spectra for applied chemometric predictive analysis requires a novel sampling technique that can account for the heterogeneous forms, morphologies, and opacities exhibited in individual maize kernels. In this study such a novel technique is described and compared to less effective means of single kernel NIR analysis. Results of the application of a partial least squares (PLS) derived model for predictive determination of percent oil content per individual kernel are shown.

  1. Quantifying phase synchronization using instances of Hilbert phase slips

    NASA Astrophysics Data System (ADS)

    Govindan, R. B.

    2018-07-01

    We propose to quantify phase synchronization between two signals, x(t) and y(t), by calculating the variance in the Hilbert phase of y(t) at instances of phase slips exhibited by x(t). The proposed approach is tested on numerically simulated coupled chaotic Roessler systems and second-order autoregressive processes. Furthermore, we compare the performance of the proposed and original approaches using uterine electromyogram signals and show that both approaches yield consistent results. A standard phase synchronization approach, which involves unwrapping the Hilbert phases ϕ₁(t) and ϕ₂(t) of the two signals and analyzing the variance in |n·ϕ₁(t) − m·ϕ₂(t)| mod 2π (n and m are integers), was used for comparison. The synchronization indexes obtained from the proposed approach and the standard approach agree reasonably well in all of the systems studied in this work. Our results indicate that the proposed approach, unlike the traditional approach, does not require the non-invertible transformations (unwrapping of the phases and reduction mod 2π) and can be used to reliably quantify phase synchrony between two signals.
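
    The standard comparison approach described above can be sketched directly from the Hilbert phases. The sketch below computes one common formulation of the 1:1 (n = m = 1) synchronization index, the mean resultant length of the wrapped phase difference; the coupled noisy sinusoids are synthetic, and the proposed phase-slip variant would additionally require detecting slip instants in x(t).

    ```python
    import numpy as np
    from scipy.signal import hilbert

    rng = np.random.default_rng(8)
    t = np.linspace(0.0, 10.0, 2000)
    x = np.sin(2.0 * np.pi * t) + 0.5 * rng.standard_normal(t.size)
    y = np.sin(2.0 * np.pi * t + 0.3) + 0.5 * rng.standard_normal(t.size)

    phi1 = np.unwrap(np.angle(hilbert(x)))     # Hilbert phase of x(t)
    phi2 = np.unwrap(np.angle(hilbert(y)))     # Hilbert phase of y(t)

    # 1:1 index: mean resultant length of the wrapped phase difference;
    # 1 indicates perfect locking, 0 indicates no fixed phase relation.
    index = np.abs(np.mean(np.exp(1j * np.mod(phi1 - phi2, 2.0 * np.pi))))
    print(index)
    ```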

  2. The Hilbert-Huang Transform-Based Denoising Method for the TEM Response of a PRBS Source Signal

    NASA Astrophysics Data System (ADS)

    Hai, Li; Guo-qiang, Xue; Pan, Zhao; Hua-sen, Zhong; Khan, Muhammad Younis

    2016-08-01

    The denoising process is critical in processing transient electromagnetic (TEM) sounding data. For the full-waveform pseudo-random binary sequence (PRBS) response, an inadequate noise estimation may result in an erroneous interpretation. We consider the Hilbert-Huang transform (HHT) and its application to suppress the noise in the PRBS response. The focus is on the thresholding scheme to suppress the noise and on the analysis of the signal based on its Hilbert time-frequency representation. The method first decomposes the signal into intrinsic mode functions and then, inspired by the thresholding scheme in wavelet analysis, applies an adaptive interval thresholding that sets to zero all components of an intrinsic mode function that are lower than a threshold related to the noise level. The algorithm is based on the characteristics of the PRBS response. The HHT-based denoising scheme is tested on synthetic and field data with different noise levels. The results show that the proposed method has good denoising capability while preserving detail.

  3. Arbitrary-order Hilbert Spectral Analysis and Intermittency in Solar Wind Density Fluctuations

    NASA Astrophysics Data System (ADS)

    Carbone, Francesco; Sorriso-Valvo, Luca; Alberti, Tommaso; Lepreti, Fabio; Chen, Christopher H. K.; Němeček, Zdenek; Šafránková, Jana

    2018-05-01

    The properties of inertial- and kinetic-range solar wind turbulence have been investigated with the arbitrary-order Hilbert spectral analysis method, applied to high-resolution density measurements. Due to the small sample size and to the presence of strong nonstationary behavior and large-scale structures, the classical analysis in terms of structure functions may prove to be unsuccessful in detecting the power-law behavior in the inertial range, and may underestimate the scaling exponents. However, the Hilbert spectral method provides an optimal estimation of the scaling exponents, which have been found to be close to those for velocity fluctuations in fully developed hydrodynamic turbulence. At smaller scales, below the proton gyroscale, the system loses its intermittent multiscaling properties and converges to a monofractal process. The resulting scaling exponents, obtained at small scales, are in good agreement with those of classical fractional Brownian motion, indicating a long-term memory in the process, and the absence of correlations around the spectral-break scale. These results provide important constraints on models of kinetic-range turbulence in the solar wind.

  4. Lagrangian single-particle turbulent statistics through the Hilbert-Huang transform.

    PubMed

    Huang, Yongxiang; Biferale, Luca; Calzavarini, Enrico; Sun, Chao; Toschi, Federico

    2013-04-01

    The Hilbert-Huang transform is applied to analyze single-particle Lagrangian velocity data from numerical simulations of hydrodynamic turbulence. The velocity trajectory is described in terms of a set of intrinsic mode functions C_i(t) and of their instantaneous frequencies ω_i(t). On the basis of this decomposition we define the ω-conditioned statistical moments of the C_i modes, named q-order Hilbert spectra (HS). We show that such quantities have enhanced scaling properties compared to traditional Fourier transform- or correlation-based (structure function) statistical indicators, thus providing better insight into the turbulent energy transfer process. We present clear empirical evidence that the energy-like quantity, i.e., the second-order HS, displays a linear scaling in time in the inertial range, as expected from dimensional analysis. We also measure high-order moment scaling exponents in a direct way, without resorting to the extended self-similarity procedure. This leads to an estimate of the Lagrangian structure function exponents which is consistent with the multifractal prediction in the Lagrangian frame proposed by Biferale et al. [Phys. Rev. Lett. 93, 064502 (2004)].

  5. ψ-Epistemic Models are Exponentially Bad at Explaining the Distinguishability of Quantum States

    NASA Astrophysics Data System (ADS)

    Leifer, M. S.

    2014-04-01

    The status of the quantum state is perhaps the most controversial issue in the foundations of quantum theory. Is it an epistemic state (state of knowledge) or an ontic state (state of reality)? In realist models of quantum theory, the epistemic view asserts that nonorthogonal quantum states correspond to overlapping probability measures over the true ontic states. This naturally accounts for a large number of otherwise puzzling quantum phenomena. For example, the indistinguishability of nonorthogonal states is explained by the fact that the ontic state sometimes lies in the overlap region, in which case there is nothing in reality that could distinguish the two states. For this to work, the amount of overlap of the probability measures should be comparable to the indistinguishability of the quantum states. In this Letter, I exhibit a family of states for which the ratio of these two quantities must be ≤ 2d·e^(−cd) in Hilbert spaces of dimension d that are divisible by 4. This implies that, for large Hilbert space dimension, the epistemic explanation of indistinguishability becomes implausible at an exponential rate as the Hilbert space dimension increases.

  6. A New Scheme for the Design of Hilbert Transform Pairs of Biorthogonal Wavelet Bases

    NASA Astrophysics Data System (ADS)

    Shi, Hongli; Luo, Shuqian

    2010-12-01

    In designing the Hilbert transform pairs of biorthogonal wavelet bases, it has been shown that the requirements of the equal-magnitude responses and the half-sample phase offset on the lowpass filters are the necessary and sufficient condition. In this paper, the relationship between the phase offset and the vanishing moment difference of biorthogonal scaling filters is derived, which implies a simple way to choose the vanishing moments so that the phase response requirement can be satisfied structurally. The magnitude response requirement is approximately achieved by a constrained optimization procedure, where the objective function and constraints are all expressed in terms of the auxiliary filters of scaling filters rather than the scaling filters directly. Generally, the calculation burden in the design implementation will be less than that of the current schemes. The integral of magnitude response difference between the primal and dual scaling filters has been chosen as the objective function, which expresses the magnitude response requirements in the whole frequency range. Two design examples illustrate that the biorthogonal wavelet bases designed by the proposed scheme are very close to Hilbert transform pairs.

  7. Towards a second law for Lovelock theories

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Sayantani; Haehl, Felix M.; Kundu, Nilay; Loganayagam, R.; Rangamani, Mukund

    2017-03-01

    In classical general relativity described by Einstein-Hilbert gravity, black holes behave as thermodynamic objects. In particular, the laws of black hole mechanics can be interpreted as laws of thermodynamics. The first law of black hole mechanics extends to higher derivative theories via the Noether charge construction of Wald. One also expects the statement of the second law, which in Einstein-Hilbert theory owes to Hawking's area theorem, to extend to higher derivative theories. To argue for this however one needs a notion of entropy for dynamical black holes, which the Noether charge construction does not provide. We propose such an entropy function for the family of Lovelock theories, treating the higher derivative terms as perturbations to the Einstein-Hilbert theory. Working around a dynamical black hole solution, and making no assumptions about the amplitude of departure from equilibrium, we construct a candidate entropy functional valid to all orders in the low energy effective field theory. This entropy functional satisfies a second law, modulo a certain subtle boundary term, which deserves further investigation in non-spherically symmetric situations.

  8. The Baker-Akhiezer Function and Factorization of the Chebotarev-Khrapkov Matrix

    NASA Astrophysics Data System (ADS)

    Antipov, Yuri A.

    2014-10-01

    A new technique is proposed for the solution of the Riemann-Hilbert problem with a Chebotarev-Khrapkov matrix coefficient G(t) = α₁(t)I + α₂(t)Q(t), with α₁(t), α₂(t) ∈ H(L), I = diag{1, 1}, and Q(t) a 2×2 zero-trace polynomial matrix. This problem has numerous applications in elasticity and diffraction theory. The main feature of the method is the removal of essential singularities of the solution to the associated homogeneous scalar Riemann-Hilbert problem on the hyperelliptic surface of an algebraic function by means of the Baker-Akhiezer function. The consequent application of this function for the derivation of the general solution to the vector Riemann-Hilbert problem requires finding the ρ zeros of the Baker-Akhiezer function (ρ is the genus of the surface). These zeros are recovered through the solution of the associated Jacobi problem of inversion of abelian integrals or, equivalently, the determination of the zeros of the associated degree-ρ polynomial and the solution of a certain linear algebraic system of ρ equations.

  9. Thermodynamic limit of random partitions and dispersionless Toda hierarchy

    NASA Astrophysics Data System (ADS)

    Takasaki, Kanehisa; Nakatsu, Toshio

    2012-01-01

    We study the thermodynamic limit of random partition models for the instanton sum of 4D and 5D supersymmetric U(1) gauge theories deformed by some physical observables. The physical observables correspond to external potentials in the statistical model. The partition function is reformulated in terms of the density function of Maya diagrams. The thermodynamic limit is governed by a limit shape of Young diagrams associated with dominant terms in the partition function. The limit shape is characterized by a variational problem, which is further converted to a scalar-valued Riemann-Hilbert problem. This Riemann-Hilbert problem is solved with the aid of a complex curve, which may be thought of as the Seiberg-Witten curve of the deformed U(1) gauge theory. This solution of the Riemann-Hilbert problem is identified with a special solution of the dispersionless Toda hierarchy that satisfies a pair of generalized string equations. The generalized string equations for the 5D gauge theory are shown to be related to hidden symmetries of the statistical model. The prepotential and the Seiberg-Witten differential are also considered.

  10. Hilbert-Schmidt quantum coherence in multi-qudit systems

    NASA Astrophysics Data System (ADS)

    Maziero, Jonas

    2017-11-01

    Using Bloch's parametrization for qudits (d-level quantum systems), we write the Hilbert-Schmidt distance (HSD) between two generic n-qudit states as a Euclidean distance between two vectors of observable mean values in ℝ^(∏_{s=1}^{n} d_s² − 1), where d_s is the dimension of qudit s. Then, applying the generalized Gell-Mann matrices to generate SU(d_s), we use that result to obtain the Hilbert-Schmidt quantum coherence (HSC) of n-qudit systems. As examples, we consider in detail one-qubit, one-qutrit, two-qubit, and two copies of one-qubit states. In this last case, the possibility of controlling local and non-local coherences by tuning local populations is studied, and the contrasting behaviors of HSC, l₁-norm coherence, and relative entropy of coherence in this regard are noted. We also investigate the decoherent dynamics of these coherence functions under the action of qutrit dephasing and dissipation channels. Finally, we analyze the non-monotonicity of HSD under tensor products and report the first instance of a consequence (for coherence quantification) of this kind of property of a quantum distance measure.
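
    For density matrices expressed in a fixed basis, both quantities are short computations. The sketch below assumes the common conventions that HSD(ρ, σ) = sqrt(Tr[(ρ − σ)²]) and that the HS coherence is the squared HS distance to the dephased (diagonal) state, which is the closest incoherent state in the HS norm; the one-qubit example is illustrative, not one of the paper's worked cases.

    ```python
    import numpy as np

    def hilbert_schmidt_distance(rho, sigma):
        """HSD(rho, sigma) = sqrt(Tr[(rho - sigma)^2]) for Hermitian matrices."""
        d = rho - sigma
        return float(np.sqrt(np.trace(d @ d).real))

    def hs_coherence(rho):
        """Squared HS distance to the dephased state diag(rho), the closest
        incoherent state in the HS norm: sum of squared off-diagonal moduli."""
        off = rho - np.diag(np.diag(rho))
        return float(np.sum(np.abs(off) ** 2))

    # One-qubit example: |+><+| is maximally coherent in the computational basis.
    plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
    mixed = np.eye(2, dtype=complex) / 2.0

    print(hilbert_schmidt_distance(plus, mixed))   # sqrt(1/2) ~ 0.707
    print(hs_coherence(plus))                      # 0.5
    ```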

  11. Computed tomography coronary stent imaging with iterative reconstruction: a trade-off study between medium kernel and sharp kernel.

    PubMed

    Zhou, Qijing; Jiang, Biao; Dong, Fei; Huang, Peiyu; Liu, Hongtao; Zhang, Minming

    2014-01-01

    To evaluate the improvement offered by the iterative reconstruction in image space (IRIS) technique in computed tomographic (CT) coronary stent imaging with a sharp kernel, and to make a trade-off analysis. Fifty-six patients with 105 stents were examined by 128-slice dual-source CT coronary angiography (CTCA). Images were reconstructed using standard filtered back projection (FBP) and IRIS, with both a medium kernel and a sharp kernel applied. Image noise and stent diameter were investigated. Image noise was measured in both the background vessel and the in-stent lumen as objective image evaluations. An image noise score and a stent score were used as subjective image evaluations. The CTCA images reconstructed with IRIS showed significant noise reduction compared to the CTCA images reconstructed using the FBP technique in both the background vessel and the in-stent lumen (the background noise decreased by approximately 25.4% ± 8.2% with the medium kernel (P

  12. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    PubMed

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and has shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with highly nonlinear distributions. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC and can remedy this drawback. KSRC requires a predetermined kernel function, and selecting the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC considers only the within-class reconstruction residual while ignoring the between-class relationship when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and then use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that, in the low-dimensional subspace, the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss when performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be found efficiently using the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm over state-of-the-art methods.

  13. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    PubMed

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Several computational methods have therefore been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well for complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. We then combine the Min kernel, or its normalized form, with one of the pairwise kernels by plugging the former into the latter. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to the prediction of heterodimers. We evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting versus non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles, and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state of the art.
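
    A minimal sketch of the kernel combination described above (our reading of "plugging": the Min kernel serves as the base kernel inside the pairwise kernel; the feature vectors, function names, and the TPPK choice are illustrative):

      import numpy as np

      def min_kernel(x, y):
          """Min (histogram-intersection) kernel: sum_i min(x_i, y_i)."""
          return float(np.minimum(x, y).sum())

      def normalized_min_kernel(x, y):
          """One common normalization: k(x,y) / sqrt(k(x,x) k(y,y))."""
          return min_kernel(x, y) / np.sqrt(min_kernel(x, x) * min_kernel(y, y))

      def tppk(k, a, b, c, d):
          """Tensor Product Pairwise Kernel between protein pairs (a,b) and (c,d),
          symmetric in the order of the proteins within each pair."""
          return k(a, c) * k(b, d) + k(a, d) * k(b, c)

      # toy per-protein feature vectors (e.g. domain or phylogenetic profiles)
      p1, p2, p3, p4 = (np.random.rand(10) for _ in range(4))
      print(tppk(normalized_min_kernel, p1, p2, p3, p4))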

  14. Mapping QTLs controlling kernel dimensions in a wheat inter-varietal RIL mapping population.

    PubMed

    Cheng, Ruiru; Kong, Zhongxin; Zhang, Liwei; Xie, Quan; Jia, Haiyan; Yu, Dong; Huang, Yulong; Ma, Zhengqiang

    2017-07-01

    Seven kernel dimension QTLs were identified in wheat, and kernel thickness was found to be the most important dimension for grain weight improvement. Kernel morphology and weight of wheat (Triticum aestivum L.) affect both yield and quality; however, the genetic basis of these traits and their interactions has not been fully understood. In this study, to investigate the genetic factors affecting kernel morphology and the association of kernel morphology traits with kernel weight, kernel length (KL), width (KW) and thickness (KT) were evaluated, together with hundred-grain weight (HGW), in a recombinant inbred line population derived from Nanda2419 × Wangshuibai, with data from five trials (two locations over 3 years). The results showed that HGW was more closely correlated with KT and KW than with KL. A whole-genome scan revealed four QTLs for KL, one for KW and two for KT, distributed on five different chromosomes. Of them, QKl.nau-2D for KL, and QKt.nau-4B and QKt.nau-5A for KT were newly identified major QTLs for the respective traits, explaining up to 32.6 and 41.5% of the phenotypic variation, respectively. Increases in KW and KT and reductions in the KL/KT and KW/KT ratios always resulted in significantly higher grain weight. Lines combining the Nanda2419 alleles of the 4B and 5A intervals had wider, thicker, rounder kernels and a 14% higher grain weight in the genotype-based analysis. A strong, negative linear relationship of the KW/KT ratio with grain weight was observed. It thus appears that kernel thickness is the most important kernel dimension factor in wheat improvement for higher yield. Mapping and marker identification of the kernel dimension-related QTLs should help realize these breeding goals.

  15. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
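
    A minimal sketch, under assumptions, of the first-level idea: solve the LS-SVM dual for fixed kernel parameters, and expose a training criterion that additionally penalizes the ARD kernel parameters, so that only the two regularization constants (gamma, mu) remain for model selection. The penalty and the criterion below are schematic stand-ins, not the authors' exact formulation:

      import numpy as np

      def rbf_ard(X, Z, log_theta):
          """ARD Gaussian kernel with one inverse length-scale per input dimension."""
          theta = np.exp(log_theta)                       # positive scales
          d2 = ((X[:, None, :] - Z[None, :, :]) ** 2 * theta).sum(-1)
          return np.exp(-d2)

      def lssvm_train(X, y, log_theta, gamma):
          """Solve the LS-SVM system [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
          n = len(y)
          A = np.zeros((n + 1, n + 1))
          A[0, 1:] = 1.0
          A[1:, 0] = 1.0
          A[1:, 1:] = rbf_ard(X, X, log_theta) + np.eye(n) / gamma
          sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
          return sol[0], sol[1:]                          # bias b, coefficients alpha

      def first_level_criterion(X, y, log_theta, gamma, mu):
          """Training loss plus an extra regularizer on the kernel parameters;
          minimized jointly over log_theta at the first level of inference."""
          b, a = lssvm_train(X, y, log_theta, gamma)
          f = rbf_ard(X, X, log_theta) @ a + b
          return np.sum((y - f) ** 2) + mu * np.sum(np.exp(log_theta))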

  16. Adaptive kernel function using line transect sampling

    NASA Astrophysics Data System (ADS)

    Albadareen, Baker; Ismail, Noriszura

    2018-04-01

    The estimation of f(0) is crucial in the line transect method, which is used for estimating population abundance in wildlife surveys. The classical kernel estimator of f(0) has a high negative bias. Our study proposes an adaptation of the kernel function which is shown to be more efficient than the usual kernel estimator. A simulation study is conducted to compare the performance of the proposed estimators with the classical kernel estimators.
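
    For context, the classical estimator being improved upon has a simple closed form (a sketch of the standard construction: perpendicular distances x_i >= 0 are reflected about zero so that the boundary is handled, and the density is estimated at 0):

      import numpy as np

      def f0_kernel(distances, h):
          """Classical kernel estimate of f(0): 2/(n h) * sum_i K(x_i / h),
          with K the standard normal density and h the bandwidth."""
          x = np.asarray(distances, float)
          k = np.exp(-0.5 * (x / h) ** 2) / np.sqrt(2 * np.pi)
          return 2.0 * k.sum() / (len(x) * h)

      # toy data: half-normal detection distances, for which f(0) = 2/sqrt(2*pi)
      rng = np.random.default_rng(0)
      x = np.abs(rng.normal(size=500))
      print(f0_kernel(x, h=0.4), 2 / np.sqrt(2 * np.pi))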

  17. Pollen source effects on growth of kernel structures and embryo chemical compounds in maize.

    PubMed

    Tanaka, W; Mantese, A I; Maddonni, G A

    2009-08-01

    Previous studies have reported effects of pollen source on the oil concentration of maize (Zea mays) kernels through modifications to both the embryo/kernel ratio and embryo oil concentration. The present study expands upon previous analyses by addressing pollen source effects on the growth of kernel structures (i.e. pericarp, endosperm and embryo), allocation of embryo chemical constituents (i.e. oil, protein, starch and soluble sugars), and the anatomy and histology of the embryos. Maize kernels with different oil concentrations were obtained from pollinations with two parental genotypes of contrasting oil concentration. The dynamics of the growth of kernel structures and allocation of embryo chemical constituents were analysed during the post-flowering period. Mature kernels were dissected to study the anatomy (embryonic axis and scutellum) and histology [cell number and cell size of the scutellums, presence of sub-cellular structures in scutellum tissue (starch granules, oil and protein bodies)] of the embryos. Plants of all crosses exhibited a similar kernel number and kernel weight. Pollen source modified neither the growth period of kernel structures, nor pericarp growth rate. By contrast, pollen source determined a trade-off between embryo and endosperm growth rates, which impacted on the embryo/kernel ratio of mature kernels. Modifications to the embryo size were mediated by scutellum cell number. Pollen source also affected (P < 0.01) allocation of embryo chemical compounds. Negative correlations among embryo oil concentration and those of starch (r = 0.98, P < 0.01) and soluble sugars (r = 0.95, P < 0.05) were found. Coincidentally, embryos with low oil concentration had an increased (P < 0.05-0.10) scutellum cell area occupied by starch granules and fewer oil bodies. The effects of pollen source on both embryo/kernel ratio and allocation of embryo chemicals seem to be related to the early established sink strength (i.e. sink size and sink activity) of the embryos.

  18. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall be...

  19. 7 CFR 51.2090 - Serious damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... defect which makes a kernel or piece of kernel unsuitable for human consumption, and includes decay...: Shriveling when the kernel is seriously withered, shrunken, leathery, tough or only partially developed: Provided, that partially developed kernels are not considered seriously damaged if more than one-fourth of...

  20. Anisotropic hydrodynamics with a scalar collisional kernel

    NASA Astrophysics Data System (ADS)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2↔2 scattering kernel in scalar λφ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking Bose enhancement into account further increases the dynamically generated momentum-space anisotropy.
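
    For orientation, the two kernels being compared can be written schematically (standard forms; conventions vary between references):

      C_{\mathrm{AW}}[f] = -\frac{p \cdot u}{\tau_{\mathrm{eq}}}\,\bigl(f - f_{\mathrm{eq}}\bigr),

      C_{2\leftrightarrow 2}[f_1] \propto \int \mathrm{d}\Pi_2\,\mathrm{d}\Pi_3\,\mathrm{d}\Pi_4\,
      |\mathcal{M}|^2 \bigl[ f_3 f_4 (1 + a f_1)(1 + a f_2) - f_1 f_2 (1 + a f_3)(1 + a f_4) \bigr],

    with |M|² constant at leading order in λφ⁴, and a = 0 for classical statistics or a = 1 for Bose enhancement.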

  1. Ideal regularization for learning kernels from labels.

    PubMed

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

    In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. First, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization into some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones that permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently. Copyright © 2014 Elsevier Ltd. All rights reserved.
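
    As a rough illustration (the specific kernel-target form below is our assumption, not a quotation of the paper's objective): label information is commonly encoded through an "ideal kernel" that is block-constant on classes, and a regularizer that is linear in K rewards alignment with it:

      import numpy as np

      def ideal_kernel(labels):
          """K*_ij = 1 if labels i and j agree, else 0 (block-constant on classes)."""
          y = np.asarray(labels)
          return (y[:, None] == y[None, :]).astype(float)

      def ideal_regularizer(K, K_star):
          """A regularizer linear in the kernel matrix: minus the alignment trace(K K*)."""
          return -np.trace(K @ K_star)

      def label_adapted_kernel(K, labels, mu=0.1):
          """One simple way to push a standard kernel toward the label structure."""
          return K + mu * ideal_kernel(labels)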

  2. Straight-chain halocarbon forming fluids for TRISO fuel kernel production - Tests with yttria-stabilized zirconia microspheres

    NASA Astrophysics Data System (ADS)

    Baker, M. P.; King, J. C.; Gorman, B. P.; Braley, J. C.

    2015-03-01

    Current methods of TRISO fuel kernel production in the United States use a sol-gel process with trichloroethylene (TCE) as the forming fluid. After contact with radioactive materials, the spent TCE becomes a mixed hazardous waste, and high costs are associated with its recycling or disposal. Reducing or eliminating this mixed waste stream would not only benefit the environment, but would also enhance the economics of kernel production. Previous research yielded three candidates for testing as alternatives to TCE: 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane. This study considers the production of yttria-stabilized zirconia (YSZ) kernels in silicone oil and the three chosen alternative forming fluids, with subsequent characterization of the produced kernels and the used forming fluids. Kernels formed in silicone oil and bromotetradecane were comparable to those produced by previous kernel production efforts, while those produced in chlorooctadecane and iodododecane experienced gelation issues, leading to poor kernel formation and geometry.

  3. The site, size, spatial stability, and energetics of an X-ray flare kernel

    NASA Technical Reports Server (NTRS)

    Petrasso, R.; Gerassimenko, M.; Nolte, J.

    1979-01-01

    The site, size evolution, and energetics of an X-ray kernel that dominated a solar flare during its rise and somewhat during its peak are investigated. The position of the kernel remained stationary to within about 3 arc sec over the 30-min interval of observations, despite pulsations in the kernel X-ray brightness in excess of a factor of 10. This suggests a tightly bound, deeply rooted magnetic structure, more plausibly associated with the near chromosphere or low corona rather than with the high corona. The H-alpha flare onset coincided with the appearance of the kernel, again suggesting a close spatial and temporal coupling between the chromospheric H-alpha event and the X-ray kernel. At the first kernel brightness peak its size was no larger than about 2 arc sec, when it accounted for about 40% of the total flare flux. In the second rise phase of the kernel, a source power input of order 2×10²⁴ erg/s is minimally required.

  4. Stochastic dynamic modeling of regular and slow earthquakes

    NASA Astrophysics Data System (ADS)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and can be simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only to capture real physical properties but also to evaluate the stability of the calculations or the sensitivity of the results to the conditions. However, even if we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at these smaller scales, we need to consider stochastic interactions between slip and stress in a dynamic model. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be treated as a stochastic external force. The healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at S-wave velocity is analogous to the kinetic theory of gases: thermal diffusion appears much slower than the particle velocity of each molecule. The concept of stochastic triggering originates in the Brownian walk model [Ide, 2008], and the present study introduces such stochasticity into dynamic simulations. The stochastic dynamic model has the potential to explain both regular and slow earthquakes more realistically.

  5. The pre-image problem in kernel methods.

    PubMed

    Kwok, James Tin-yau; Tsang, Ivor Wai-hung

    2004-11-01

    In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method, which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra, and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.
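
    A compressed numpy sketch of the distance-constraint idea for an RBF kernel (shapes, neighbor counts, and function names are illustrative, not the authors' code): feature-space distances convert in closed form to input-space distances, and the pre-image is then located by a localized MDS-style least-squares fit:

      import numpy as np

      def rbf(X, z, s2):
          """k(x_i, z) for all rows x_i of X; Gaussian kernel with variance s2."""
          return np.exp(-((X - z) ** 2).sum(1) / (2 * s2))

      def preimage(k_vals, X, s2, n_nb=10):
          """k_vals[i] = k(., x_i) of the feature-space point whose pre-image is
          sought (e.g. a denoised kernel-PCA projection expressed in kernel values)."""
          # RBF: ||phi(x) - phi(x_i)||^2 = 2 - 2 k(x, x_i), and
          #      ||x - x_i||^2 = -2 * s2 * log k(x, x_i)
          d2 = -2 * s2 * np.log(np.clip(k_vals, 1e-12, None))
          nb = np.argsort(d2)[:n_nb]                   # nearest training points
          Xn, d2n = X[nb], d2[nb]
          # From ||x - x_i||^2 = ||x||^2 - 2 x.x_i + ||x_i||^2, subtracting the
          # mean over neighbors cancels the unknown ||x||^2, leaving a linear system.
          H = 2 * (Xn - Xn.mean(0))
          s = (Xn ** 2).sum(1)
          b = (s - s.mean()) - (d2n - d2n.mean())
          x, *_ = np.linalg.lstsq(H, b, rcond=None)
          return x

      X = np.random.rand(200, 2)
      x0 = np.array([0.4, 0.6])
      print(preimage(rbf(X, x0, 0.1), X, 0.1))         # recovers a point near x0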

  6. Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.

    PubMed

    Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I

    2016-03-01

    The effects of amygdaline from apricot kernels, added to fodder, on the growth of transplanted LYO-1 lymphosarcoma and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation, produced a pronounced antitumor effect. Heat-processed apricot kernels given 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of a bacterial genome in the tumor.

  7. Development of a kernel function for clinical data.

    PubMed

    Daemen, Anneleen; De Moor, Bart

    2009-01-01

    For most diseases and examinations, clinical data such as age, gender, and medical history guide clinical management, despite the rise of high-throughput technologies. To fully exploit such clinical information, appropriate modeling of the relevant parameters is required. As the widely used linear kernel function has several disadvantages when applied to clinical data, we propose a new kernel function specifically developed for such data. This "clinical kernel function" more accurately represents similarities between patients. As a demonstration, three data sets were studied, and significantly better performance was obtained with a Least Squares Support Vector Machine based on the clinical kernel function compared to the linear kernel function.
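
    A hedged sketch of the per-variable similarity this abstract describes (following the commonly cited form of the clinical kernel; the exact definition is in the paper): continuous variables are compared relative to their observed range, nominal variables by simple matching, and the per-variable similarities are averaged:

      import numpy as np

      def clinical_kernel(x, y, ranges, nominal):
          """x, y: 1-D arrays of clinical variables for two patients.
          ranges[j]  : (max - min) of variable j over the training data.
          nominal[j] : True if variable j is categorical."""
          sims = []
          for j in range(len(x)):
              if nominal[j]:
                  sims.append(1.0 if x[j] == y[j] else 0.0)
              else:
                  sims.append((ranges[j] - abs(x[j] - y[j])) / ranges[j])
          return float(np.mean(sims))

      # e.g. age (continuous, observed range 60 years) and gender (nominal)
      print(clinical_kernel(np.array([70.0, 1.0]), np.array([55.0, 1.0]),
                            ranges=np.array([60.0, 1.0]),
                            nominal=[False, True]))    # (45/60 + 1)/2 = 0.875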

  8. Manycore Performance-Portability: Kokkos Multidimensional Array Library

    DOE PAGES

    Edwards, H. Carter; Sunderland, Daniel; Porter, Vicki; ...

    2012-01-01

    Large, complex scientific and engineering application codes have a significant investment in computational kernels that implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge, in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implementing computational kernels that are performance-portable to CPU-multicore and GPGPU accelerator devices. This programming model is based upon three fundamental concepts: (1) manycore compute devices, each with its own memory space, (2) data-parallel kernels, and (3) multidimensional arrays. Kernel execution performance is, especially for NVIDIA® devices, extremely dependent on data access patterns. The optimal data access pattern can differ between manycore devices, potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].

  9. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    PubMed

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the fact that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method can produce an optimal parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.

  10. Impact of deep learning on the normalization of reconstruction kernel effects in imaging biomarker quantification: a pilot study in CT emphysema

    NASA Astrophysics Data System (ADS)

    Jin, Hyeongmin; Heo, Changyong; Kim, Jong Hyo

    2018-02-01

    Differing reconstruction kernels are known to strongly affect the variability of imaging biomarkers and thus remain a barrier to translating computer-aided quantification techniques into clinical practice. This study presents a deep learning application to CT kernel conversion, which converts a CT image reconstructed with a sharp kernel into one reconstructed with a standard kernel, and evaluates its impact on reducing the variability of a pulmonary imaging biomarker, the emphysema index (EI). Forty low-dose chest CT exams obtained with 120 kVp, 40 mAs, 1 mm thickness, and two reconstruction kernels (B30f, B50f) were selected from the low-dose lung cancer screening database of our institution. A fully convolutional network was implemented with the Keras deep learning library. The model consisted of symmetric layers to capture the context and fine-structure characteristics of CT images from the standard and sharp reconstruction kernels. Pairs of the full-resolution CT data set were fed to the input and output nodes to train the convolutional network to learn the appropriate filter kernels for converting CT images from the sharp kernel to the standard kernel, with the criterion of minimizing the mean squared error between the converted and target images. EIs (RA950 and Perc15) were measured with a software package (ImagePrism Pulmo, Seoul, South Korea) and compared for the B50f, B30f, and converted B50f data sets. The effect of kernel conversion was evaluated with the mean and standard deviation of the pair-wise differences in EI. The population mean of RA950 was 27.65 ± 7.28% for the B50f data set, 10.82 ± 6.71% for the B30f data set, and 8.87 ± 6.20% for the converted B50f data set. The mean of the pair-wise absolute differences in RA950 between B30f and B50f was reduced from 16.83% to 1.95% using kernel conversion. Our study demonstrates the feasibility of applying the deep learning technique to CT kernel conversion and reducing the kernel-induced variability of EI quantification. The deep learning model has the potential to improve the reliability of imaging biomarkers, especially in evaluating longitudinal changes of EI even when the patient CT scans were performed with different kernels.
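
    A minimal Keras sketch of such a converter (layer counts and widths are illustrative assumptions, not the authors' architecture): a small fully convolutional model mapping sharp-kernel (B50f) slices to standard-kernel (B30f) slices under an MSE loss:

      from tensorflow import keras
      from tensorflow.keras import layers

      def build_kernel_converter():
          inp = keras.Input(shape=(None, None, 1))       # full-resolution CT slice
          x = layers.Conv2D(64, 3, padding="same", activation="relu")(inp)
          x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
          x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
          out = layers.Conv2D(1, 3, padding="same")(x)   # converted (B30f-like) slice
          model = keras.Model(inp, out)
          model.compile(optimizer="adam", loss="mse")    # criterion named in the abstract
          return model

      # model = build_kernel_converter()
      # model.fit(b50f_slices, b30f_slices, ...) on paired reconstructions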

  11. Metabolic network prediction through pairwise rational kernels.

    PubMed

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by sets of metabolic pathways. Metabolic pathways are series of biochemical reactions in which the product (output) of one reaction serves as the substrate (input) to another. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models depend on the annotation of the genes, which propagates errors when pathways are predicted from incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been used effectively in problems that handle large amounts of sequence information, such as protein essentiality, natural language processing, and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernels, PRKs) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and pairwise SVMs to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy values are improved, while maintaining lower construction and execution times. The power of using kernels is that almost any sort of data can be represented by a kernel; therefore, completely disparate types of data can be combined to add power to kernel-based machine learning methods. When we compared our proposal using PRKs with other similar kernels, the execution times were decreased with no compromise in accuracy. We also showed that by combining PRKs with other kernels that include evolutionary information, the accuracy can also be improved. As our proposal can use any type of sequence data, genes do not need to be properly annotated, avoiding error accumulation from incorrect previous annotations.

  12. Differential metabolome analysis of field-grown maize kernels in response to drought stress

    USDA-ARS?s Scientific Manuscript database

    Drought stress constrains maize kernel development and can exacerbate aflatoxin contamination. In order to identify drought responsive metabolites and explore pathways involved in kernel responses, a metabolomics analysis was conducted on kernels from a drought tolerant line, Lo964, and a sensitive ...

  13. Occurrence of 'super soft' wheat kernel texture in hexaploid and tetraploid wheats

    USDA-ARS?s Scientific Manuscript database

    Wheat kernel texture is a key trait that governs milling performance, flour starch damage, flour particle size, flour hydration properties, and baking quality. Kernel texture is commonly measured using the Perten Single Kernel Characterization System (SKCS). The SKCS returns texture values (Hardness...

  14. 7 CFR 868.203 - Basis of determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Rough Rice Principles Governing..., heat-damaged kernels, red rice and damaged kernels, chalky kernels, other types, color, and the special grade Parboiled rough rice shall be on the basis of the whole and large broken kernels of milled rice...

  15. 7 CFR 868.203 - Basis of determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Rough Rice Principles Governing..., heat-damaged kernels, red rice and damaged kernels, chalky kernels, other types, color, and the special grade Parboiled rough rice shall be on the basis of the whole and large broken kernels of milled rice...

  16. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the use...

  17. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the use...

  18. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

    The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays.
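
    For reference, the exponential transform applied to the distance-to-collision kernel can be written in a few symbols (standard form; conventions vary): the free-flight distance is sampled from a modified total cross section and the statistical weight compensates exactly:

      \tilde p(s) = \Sigma^{*} e^{-\Sigma^{*} s}, \qquad
      \Sigma^{*} = \Sigma\,(1 - p\,\mu), \qquad
      w = \frac{p(s)}{\tilde p(s)} = \frac{\Sigma}{\Sigma^{*}}\, e^{-(\Sigma - \Sigma^{*})\, s},

    where μ is the direction cosine toward the region of interest and p is the transform parameter; Dwivedi's scheme, and the extension described here, additionally bias the scattering kernel in μ.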

  19. Performance Characteristics of a Kernel-Space Packet Capture Module

    DTIC Science & Technology

    2010-03-01

    ... Defense, or the United States Government. AFIT/GCO/ENG/10-03 PERFORMANCE CHARACTERISTICS OF A KERNEL-SPACE PACKET CAPTURE MODULE. Thesis presented to the ... 3.1.2.3 Prototype. The proof of concept for this research is the design, development, and comparative performance analysis of a kernel-level N2d capture ... changes to kernel code 5. Can be used for both user-space and kernel-space capture applications in order to control comparative performance analysis ...

  20. High-throughput method for ear phenotyping and kernel weight estimation in maize using ear digital imaging.

    PubMed

    Makanza, R; Zaman-Allah, M; Cairns, J E; Eyre, J; Burgueño, J; Pacheco, Ángela; Diepenbrock, C; Magorokosho, C; Tarekegne, A; Olsen, M; Prasanna, B M

    2018-01-01

    Grain yield and ear and kernel attributes can help in understanding the performance of the maize plant under different environmental conditions and can be used in the variety development process to address farmers' preferences. These parameters are, however, still laborious and expensive to measure. A low-cost ear digital imaging method was developed that provides estimates of ear and kernel attributes, i.e., ear number and size, kernel number and size, as well as kernel weight, from photos of ears harvested from field trial plots. The image processing method uses a script that runs in batch mode in ImageJ, an open-source software package. Kernel weight was estimated using the total kernel number, derived from the number of kernels visible on the image, and the average kernel size. The data showed good agreement in terms of accuracy and precision between ground-truth measurements and data generated through image processing. Broad-sense heritability of the estimated parameters was in the range of, or higher than, that for measured grain weight. Limitations of the method for kernel weight estimation are discussed. The method developed in this work provides an opportunity to significantly reduce the cost of selection in the breeding process, especially for resource-constrained crop improvement programs, and can be used to learn more about the genetic bases of grain yield determinants.

  1. A Kernel-based Lagrangian method for imperfectly-mixed chemical reactions

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael J.; Pankavich, Stephen; Benson, David A.

    2017-05-01

    Current Lagrangian (particle-tracking) algorithms used to simulate diffusion-reaction equations must employ a certain number of particles to properly emulate the system dynamics, particularly for imperfectly-mixed systems. The number of particles is tied to the statistics of the initial concentration fields of the system at hand. Systems with shorter-range correlation and/or smaller concentration variance require more particles, potentially limiting the computational feasibility of the method. For the well-known problem of bimolecular reaction, we show that using kernel-based, rather than Dirac delta, particles can significantly reduce the required number of particles. We derive the fixed width of a Gaussian kernel, for a given reduced number of particles, that analytically eliminates the error between the kernel and Dirac solutions at any specified time. We also show how to solve for the fixed kernel size by minimizing the squared differences between solutions over any given time interval. Numerical results show that the width of the kernel should be kept below about 12% of the domain size, and that the analytic equations used to derive the kernel width suffer significantly from the neglect of higher-order moments. Simulations with a kernel width given by least-squares minimization perform better than those matched at one specific time. A heuristic time-varying kernel size, based on the previous results, performs on par with the least-squares fixed kernel size.
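
    The kernel identity underlying the approach fits in one line (the standard Gaussian convolution fact, stated in generic notation): for a Gaussian kernel K_h the co-location density of two kernel particles is

      K_h(x) = (2\pi h^2)^{-d/2}\, e^{-\|x\|^2 / 2h^2}, \qquad
      \int K_h(x - x_i)\, K_h(x - x_j)\, \mathrm{d}x = K_{\sqrt{2}\,h}(x_i - x_j),

    so the reaction probability of a particle pair depends on its separation through a single wider Gaussian rather than through a Dirac delta, which is what allows fewer particles to represent the same dynamics.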

  2. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation of the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
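
    A compact numpy sketch of the KECA selection rule mentioned above (standard formulation; OKECA's extra ICA-style rotation and gradient-ascent search are not shown): eigenpairs of the kernel matrix are ranked by their contribution λ_i (e_iᵀ1)² to the Renyi entropy estimate rather than by variance:

      import numpy as np

      def keca_features(K, n_components):
          lam, E = np.linalg.eigh(K)                 # eigendecomposition of kernel matrix
          contrib = lam * (E.sum(axis=0) ** 2)       # entropy contribution per eigenpair
          idx = np.argsort(contrib)[::-1][:n_components]
          # projections scaled as in kernel PCA: sqrt(lambda_i) * e_i
          return E[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))

      X = np.random.rand(100, 3)
      d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
      K = np.exp(-d2 / 0.5)                          # Gaussian kernel, length-scale assumed
      Phi = keca_features(K, 2)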

  3. SEMI-SUPERVISED OBJECT RECOGNITION USING STRUCTURE KERNEL

    PubMed Central

    Wang, Botao; Xiong, Hongkai; Jiang, Xiaoqian; Ling, Fan

    2013-01-01

    Object recognition is a fundamental problem in computer vision. Part-based models offer a sparse, flexible representation of objects, but suffer from difficulties in training and often use standard kernels. In this paper, we propose a positive definite kernel called the "structure kernel", which measures the similarity of two part-based represented objects. The structure kernel has three terms: 1) a global term that measures the global visual similarity of the two objects; 2) a part term that measures the visual similarity of corresponding parts; 3) a spatial term that measures the spatial similarity of the geometric configuration of the parts. The contribution of this paper is to generalize the discriminative capability of local kernels to complex part-based object models. Experimental results show that the proposed kernel exhibits higher accuracy than state-of-the-art approaches using standard kernels. PMID:23666108
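
    A schematic sketch of the three-term combination (the per-term kernels, the weights, and the assumption that part correspondences are already given are placeholders, not the paper's exact construction):

      def structure_kernel(obj1, obj2, k_global, k_part, k_spatial,
                           w=(1.0, 1.0, 1.0)):
          """obj = (global_descriptor, [part_descriptors], [part_positions]),
          with parts assumed to be listed in corresponding order."""
          g1, parts1, pos1 = obj1
          g2, parts2, pos2 = obj2
          global_term = k_global(g1, g2)
          part_term = sum(k_part(p, q) for p, q in zip(parts1, parts2))
          spatial_term = sum(k_spatial(a, b) for a, b in zip(pos1, pos2))
          return w[0] * global_term + w[1] * part_term + w[2] * spatial_term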

  4. Remote preparation of a qudit using maximally entangled states of qubits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Changshui; Song Heshan; Wang Yahong

    2006-02-15

    Known quantum pure states of a qudit can be remotely prepared onto a group of qubit particles, exactly or probabilistically, with the aid of two-level Einstein-Podolsky-Rosen states. We present a protocol for this kind of remote state preparation. We focus mainly on the remote preparation of ensembles of equatorial states and of states in real Hilbert space. In particular, a class of qudit states in real Hilbert space is shown to be remotely preparable faithfully without limitation on the dimension of the input space.

  5. Semiclassical limit of the focusing NLS: Whitham equations and the Riemann-Hilbert Problem approach

    NASA Astrophysics Data System (ADS)

    Tovbis, Alexander; El, Gennady A.

    2016-10-01

    The main goal of this paper is to put together: a) the Whitham theory applicable to slowly modulated N-phase nonlinear wave solutions to the focusing nonlinear Schrödinger (fNLS) equation, and b) the Riemann-Hilbert Problem approach to particular solutions of the fNLS in the semiclassical (small dispersion) limit that develop slowly modulated N-phase nonlinear waves in the process of evolution. Both approaches have their own merits and limitations. An understanding of the interrelations between them could prove beneficial for a broad range of problems involving the semiclassical fNLS.

  6. Hilbert and Blaschke phases in the temporal coherence function of stationary broadband light.

    PubMed

    Fernández-Pousa, Carlos R; Maestre, Haroldo; Torregrosa, Adrián J; Capmany, Juan

    2008-10-27

    We show that the minimal phase of the temporal coherence function γ(τ) of stationary light having a partially-coherent symmetric spectral peak can be computed as a relative logarithmic Hilbert transform of its amplitude with respect to its asymptotic behavior. The procedure is applied to experimental data from amplified spontaneous emission broadband sources in the 1.55 μm band with subpicosecond coherence times, providing examples of degrees of coherence with both minimal and non-minimal phase. In the latter case, the Blaschke phase is retrieved and the positions of the Blaschke zeros are determined.
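
    In generic notation, the decomposition used here is the standard minimal-phase/Blaschke one (a sketch, up to sign convention; the paper's version takes the logarithmic Hilbert transform relative to the asymptotic behavior of |γ| so that the integral converges):

      \gamma(\tau) = |\gamma(\tau)|\, e^{\,i[\phi_{\min}(\tau) + \phi_B(\tau)]}, \qquad
      \phi_{\min}(\tau) = -\frac{1}{\pi}\,\mathrm{P}\!\int \frac{\log|\gamma(\tau')|}{\tau' - \tau}\,\mathrm{d}\tau',

    where the Blaschke phase φ_B collects the contribution of zeros of the analytic continuation of γ in the upper half-plane.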

  7. Wavefront reconstruction from non-modulated pyramid wavefront sensor data using a singular value type expansion

    NASA Astrophysics Data System (ADS)

    Hutterer, Victoria; Ramlau, Ronny

    2018-03-01

    The new generation of extremely large telescopes includes adaptive optics systems to correct for atmospheric blurring. In this paper, we present a new method of wavefront reconstruction from non-modulated pyramid wavefront sensor data. The approach is based on a simplified sensor model represented as the finite Hilbert transform of the incoming phase. Due to the non-compactness of the finite Hilbert transform operator, the classical theory for singular systems is not applicable. Nevertheless, we can express the Moore-Penrose inverse as a singular value type expansion with weighted Chebyshev polynomials.
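
    For concreteness, the simplified sensor model referred to is the finite Hilbert transform (sign and normalization conventions vary):

      (T\phi)(x) = \frac{1}{\pi}\,\mathrm{p.v.}\!\int_{-1}^{1} \frac{\phi(t)}{t - x}\,\mathrm{d}t, \qquad x \in (-1, 1),

    a bounded but non-compact operator on L², which is why the classical singular-system theory does not apply and a singular value type expansion in weighted Chebyshev polynomials is constructed instead.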

  8. Transactions of the Conference of Army Mathematicians (28th) Held at Bethesda, Maryland on 28-30 June 1982.

    DTIC Science & Technology

    1983-02-01

    ... real part is the Hilbert transform of its imaginary part. Thus we have (5.1) ... Here r(ω) and θ(ω) denote, respectively, T(ω, 0−) and Θ(ω, 0−) ... For linear operators A in a Hilbert space H, eigenvalues are critical values of the Rayleigh quotient (5.1) R(y) = (Ay, y)/(y, y), y ≠ 0. An eigenvalue λ ...

  9. Properties of highly frustrated magnetic molecules studied by the finite-temperature Lanczos method

    NASA Astrophysics Data System (ADS)

    Schnack, J.; Wendland, O.

    2010-12-01

    The very interesting magnetic properties of frustrated magnetic molecules are often hardly accessible due to the prohibitive size of the related Hilbert spaces. The finite-temperature Lanczos method is able to treat spin systems for Hilbert space sizes up to 10⁹. Here we first demonstrate for exactly solvable systems that the method is indeed accurate. Then we discuss the thermal properties of one of the biggest magnetic molecules synthesized to date, the icosidodecahedron with antiferromagnetically coupled spins of s = 1/2. We show how genuine quantum features such as the magnetization plateau behave as a function of temperature.
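
    The estimator at the heart of the method can be sketched in its standard form (notation approximated): with R random vectors |r⟩ and M Lanczos eigenpairs (ε_j^{(r)}, |ψ_j^{(r)}⟩) generated from each,

      Z \approx \frac{N_{\mathrm{st}}}{R} \sum_{r=1}^{R} \sum_{j=1}^{M}
      e^{-\beta \varepsilon_j^{(r)}} \bigl|\langle r | \psi_j^{(r)} \rangle\bigr|^2, \qquad
      \langle A \rangle \approx \frac{N_{\mathrm{st}}}{R\,Z} \sum_{r,j}
      e^{-\beta \varepsilon_j^{(r)}} \langle r | \psi_j^{(r)} \rangle \langle \psi_j^{(r)} | A | r \rangle,

    where N_st is the dimension of the Hilbert space; replacing the full trace by a small random sample is what makes dimensions of order 10⁹ tractable.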

  10. Spinors in Hilbert Space

    NASA Astrophysics Data System (ADS)

    Plymen, Roger; Robinson, Paul

    1995-01-01

    Infinite-dimensional Clifford algebras and their Fock representations originated in the quantum mechanical study of electrons. In this book, the authors give a definitive account of the various Clifford algebras over a real Hilbert space and of their Fock representations. A careful consideration of the latter's transformation properties under Bogoliubov automorphisms leads to the restricted orthogonal group. From there, a study of inner Bogoliubov automorphisms enables the authors to construct infinite-dimensional spin groups. Apart from assuming a basic background in functional analysis and operator algebras, the presentation is self-contained with complete proofs, many of which offer a fresh perspective on the subject.

  11. Noisy bases in Hilbert space: A new class of thermal coherent states and their properties

    NASA Technical Reports Server (NTRS)

    Vourdas, A.; Bishop, R. F.

    1995-01-01

    Coherent mixed states (or thermal coherent states) associated with the displaced harmonic oscillator at finite temperature are introduced as a 'random' (or 'thermal' or 'noisy') basis in Hilbert space. A resolution of the identity for these states is proved and used to generalize the usual coherent state formalism to the finite-temperature case. The Bargmann representation of an operator is introduced and its relation to the P and Q representations is studied. Generalized P and Q representations for the finite-temperature case are also considered, and several interesting relations among them are derived.

  12. Geometry of quantum dynamics in infinite-dimensional Hilbert space

    NASA Astrophysics Data System (ADS)

    Grabowski, Janusz; Kuś, Marek; Marmo, Giuseppe; Shulman, Tatiana

    2018-04-01

    We develop a geometric approach to quantum mechanics based on the concept of the Tulczyjew triple. Our approach is genuinely infinite-dimensional, i.e. we do not restrict considerations to finite-dimensional Hilbert spaces, contrary to many other works on the geometry of quantum mechanics, and include a Lagrangian formalism in which self-adjoint (Schrödinger) operators are obtained as Lagrangian submanifolds associated with the Lagrangian. As a byproduct we also obtain results concerning coadjoint orbits of the unitary group in infinite dimensions, embedding of pure states in the unitary group, and self-adjoint extensions of symmetric relations.

  13. A degeneration of two-phase solutions of the focusing nonlinear Schrödinger equation via Riemann-Hilbert problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertola, Marco

    2015-06-15

    Two-phase solutions of the focusing NLS equation are classically constructed out of an appropriate Riemann surface of genus two and expressed in terms of the corresponding theta-function. We show here that in a certain limiting regime, such solutions reduce to some elementary ones called "Solitons on unstable condensate." This degeneration turns out to be conveniently studied by means of basic tools from the theory of Riemann-Hilbert problems. In particular, no acquaintance with Riemann surfaces and theta-functions is required for such analysis.

  14. Burrower bugs (Heteroptera: Cydnidae) in peanut: seasonal species abundance, tillage effects, grade reduction effects, insecticide efficacy, and management.

    PubMed

    Chapin, Jay W; Thomas, James S

    2003-08-01

    Pitfall traps placed in South Carolina peanut, Arachis hypogaea (L.), fields collected three species of burrower bugs (Cydnidae): Cyrtomenus ciliatus (Palisot de Beauvois), Sehirus cinctus cinctus (Palisot de Beauvois), and Pangaeus bilineatus (Say). Cyrtomenus ciliatus was rarely collected. Sehirus cinctus produced a nymphal cohort in peanut during May and June, probably because of abundant henbit seeds, Lamium amplexicaule L., in strip-till production systems. No S. cinctus were present during peanut pod formation. Pangaeus bilineatus was the most abundant species collected and the only species associated with peanut kernel feeding injury. Overwintering P. bilineatus adults were present in a conservation tillage peanut field before planting, and two to three subsequent generations were observed. Few nymphs were collected until the R6 (full seed) growth stage. Tillage and choice of cover crop affected P. bilineatus populations. Peanuts strip-tilled into corn or wheat residue had greater P. bilineatus populations and more kernel feeding than conventional tillage or strip-tillage into rye residue. Fall tillage before planting a wheat cover crop also reduced burrower bug feeding on peanut. At-pegging (early July) granular chlorpyrifos treatments were the most consistent in suppressing kernel feeding. Kernels fed on by P. bilineatus were on average 10% lighter than kernels that were not fed on. Pangaeus bilineatus feeding reduced peanut grade by reducing individual kernel weight and increasing the percentage of damaged kernels. Each 10% increase in kernels fed on by P. bilineatus was associated with a 1.7% decrease in total sound mature kernels, and kernel feeding levels above 30% increase the risk of damaged-kernel grade penalties.

  15. Imaging and automated detection of Sitophilus oryzae (Coleoptera: Curculionidae) pupae in hard red winter wheat.

    PubMed

    Toews, Michael D; Pearson, Tom C; Campbell, James F

    2006-04-01

    Computed tomography, an imaging technique commonly used for diagnosing internal human health ailments, uses multiple x-rays and sophisticated software to recreate a cross-sectional representation of a subject. The use of this technique to image hard red winter wheat, Triticum aestivum L., samples infested with pupae of Sitophilus oryzae (L.) was investigated. A software program was developed to rapidly recognize and quantify the infested kernels. Samples were imaged in a 7.6-cm (o.d.) plastic tube containing 0, 50, or 100 infested kernels per kg of wheat. Interkernel spaces were filled with corn oil so as to increase the contrast between voids inside kernels and voids among kernels. Automated image processing, using a custom C language software program, was conducted separately on each 100-g portion of the prepared samples. The average detection accuracy in the five infested kernels per 100-g samples was 94.4 ± 7.3% (mean ± SD, n = 10), whereas the average detection accuracy in the 10 infested kernels per 100-g samples was 87.3 ± 7.9% (n = 10). Detection accuracy in the 10 infested kernels per 100-g samples was slightly lower than in the five infested kernels per 100-g samples because some infested kernels overlapped with each other or with air bubbles in the oil. A mean of 1.2 ± 0.9 (n = 10) bubbles per tube was incorrectly classed as infested kernels in replicates containing no infested kernels. In light of these positive results, future studies should be conducted using additional grains, insect species, and life stages.

  16. Relationship of source and sink in determining kernel composition of maize

    PubMed Central

    Seebauer, Juliann R.; Singletary, George W.; Krumpelman, Paulette M.; Ruffo, Matías L.; Below, Frederick E.

    2010-01-01

    The relative role of the maternal source and the filial sink in controlling the composition of maize (Zea mays L.) kernels is unclear and may be influenced by the genotype and the N supply. The objective of this study was to determine the influence of assimilate supply from the vegetative source, and utilization of assimilates by the grain sink, on the final composition of maize kernels. Intermated B73×Mo17 recombinant inbred lines (IBM RILs) that displayed contrasting concentrations of endosperm starch were grown in the field with deficient or sufficient N, and the source supply was altered by ear truncation (45% reduction) at 15 d after pollination (DAP). The assimilate supply into the kernels was determined at 19 DAP using the agar trap technique, and the final kernel composition was measured. The influence of N supply and kernel ear position on final kernel composition was also determined for a commercial hybrid. Concentrations of kernel protein and starch could be altered by genotype or N supply, but remained fairly constant along the length of the ear. Ear truncation also produced a range of variation in endosperm starch and protein concentrations. The C/N ratio of the assimilate supply at 19 DAP was directly related to the final kernel composition, with an inverse relationship between the concentrations of starch and protein in the mature endosperm. The accumulation of kernel starch and protein in maize is uniform along the ear, yet adaptable within genotypic limits, suggesting that kernel composition is source limited in maize. PMID:19917600

  17. Genetic dissection of the maize kernel development process via conditional QTL mapping for three developing kernel-related traits in an immortalized F2 population.

    PubMed

    Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua

    2016-02-01

    Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of the maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe the dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at DAP22 and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified under multiple kernel developmental stages and environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection.

  18. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test, and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  19. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing, manufacturing, packing, processing, preparing, treating...

  20. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias (as defined by Lord's criterion of equity) and percent relative error. The local kernel item response…
